In quantum information theory, the Lieb conjecture is a theorem concerning the Wehrl entropy of quantum systems for which the classical phase space is a sphere. It states that no state of such a system has a lower Wehrl entropy than the SU(2) coherent states .
The analogous property for quantum systems for which the classical phase space is a plane was conjectured by Alfred Wehrl in 1978 and proven soon afterwards by Elliott H. Lieb , [ 1 ] who at the same time extended it to the SU(2) case.
The conjecture was proven in 2012 by Lieb and Jan Philip Solovej . [ 2 ] The uniqueness of the minimizers was only proved in 2022 by Rupert L. Frank [ 3 ] and by Aleksei Kulikov, Fabio Nicola, Joaquim Ortega-Cerdà and Paolo Tilli. [ 4 ]
| https://en.wikipedia.org/wiki/Lieb_conjecture |
The Liebermann reagent , named after Hungarian chemist Leo Liebermann (1852–1926), is used as a simple spot-test to presumptively identify alkaloids as well as other compounds. It is composed of a mixture of potassium nitrite and concentrated sulfuric acid . [ 1 ] [ 2 ] 1 g of potassium nitrite is used for every 10 mL of sulfuric acid. [ 3 ] Potassium nitrite may also be substituted by sodium nitrite . [ 4 ] [ 5 ] It is used to test for cocaine , morphine , PMA and PMMA .
The test is performed by scraping off a small amount of the substance and adding a drop of the reagent (which is initially clear and colorless). The results are analyzed by viewing the color of the resulting mixture, and by the time taken for the change in color to become apparent. | https://en.wikipedia.org/wiki/Liebermann_reagent |
The Liebeskind–Srogl coupling reaction is an organic reaction forming a new carbon–carbon bond from a thioester and a boronic acid using a metal catalyst . It is a cross-coupling reaction . [ 1 ] This reaction was invented by and named after Jiri Srogl from the Academy of Sciences, Czech Republic, and Lanny S. Liebeskind from Emory University, Atlanta, Georgia, USA. There are three generations of this reaction, with the first generation shown below. The original transformation used catalytic Pd(0), TFP = tris(2-furyl)phosphine as an additional ligand and stoichiometric CuTC = copper(I) thiophene-2-carboxylate as a co-metal catalyst. The overall reaction scheme is shown below.
The Liebeskind–Srogl reaction is most commonly seen with sulfide or thioester electrophiles and boronic acid or stannane nucleophiles, but many other coupling partners are viable. In addition to alkyl and aryl thioesters, (hetero)aryl sulfides, thioamides, sulfanyl alkynes, and thiocyanates are competent electrophiles. [ 2 ] Virtually any metal–R bond capable of transmetalation has been demonstrated. [ 2 ] Indium-derived nucleophiles require no copper or base. Note that this scope applies to the first-generation coupling, as the second and third generations are mechanistically distinct and have only been demonstrated with thioesters capable of forming the six-membered metallocycle , boronic acids , and stannanes .
The first-generation approach to cross coupling is run under anaerobic conditions using stoichiometric copper and catalytic palladium. [ 1 ]
The second-generation approach renders the reaction catalytic in copper by using an extra equivalent of boronic acid under aerobic, palladium-free conditions. [ 3 ] The additional equivalent liberates the copper from the sulfur auxiliary and allows it to turn over. This chemistry is limited to thioesters and sulfides and could also be limited by the cost and availability of the organoboron reagent.
The third generation renders the reaction catalytic in copper while using only one equivalent of boronic acid. [ 4 ]
The proposed reaction mechanism for the first generation is shown below. [ 5 ] [ 6 ] The thioester 1 complexes with copper complex 3 to form compound 4 . With the oxidative insertion of [Pd] into the carbon–sulfur bond , compound 5 is formed, and with transmetallation , organopalladium species 8 is formed. The transmetallation proceeds via the transfer of R 2 to the palladium metal center with concomitant transfer of the sulfur atom to the copper complex. Reductive elimination gives ketone 3 with the regeneration of the active catalyst 9 .
The mechanism for the second generation is shown below. [ 3 ] The mechanism does not follow a traditional oxidative addition–transmetalation–reductive elimination pathway like the first generation. In parallel to studies of Cu(I)–dioxygen reactions, a higher oxidation state , Cu-templated coupling is proposed. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] The copper(I)–thioester complex undergoes oxidation by air to give a copper(II/III) intermediate. The Cu(II/III) center acts as a Lewis acid template to both activate the thiol ester and deliver R 2 (from either boron directly or via an intermediate Cu-R 2 species), producing the ketone and a Cu-thiolate. A second equivalent of boronic acid is needed to break the copper–sulfur bond and liberate copper back into the catalytic cycle.
The third generation renders the reaction catalytic in copper and uses only one equivalent of boronic acid by mimicking the metallothionein (MT) system that sponges metals from biological systems. [ 4 ] The thio-auxiliary features an N–O motif that mimics the S–S motif in the MT biosystem, which is necessary to break the copper–sulfur bond and turn over the catalyst. This generation is palladium-free and is run under microwave conditions. The mechanism is expected to follow that of the second generation (shown as an active Cu(I)-R 2 species, but R 2 could be delivered directly from the coordinated boronic acid), except that the auxiliary, rather than additional boronic acid, releases copper back into the catalytic cycle.
The Liebeskind–Srogl coupling has been used as a key retrosynthetic disconnection in several natural product total syntheses .
For example, in the synthesis of Goniodomin A, the Sasaki lab utilized this chemistry to rapidly access the northern half of the natural product. [ 12 ]
The Guerrero lab used the Liebeskind–Srogl coupling to construct the entire carbon skeleton of viridin in high yield on multi-gram scale. [ 13 ]
The lab of Figadère used the Liebeskind–Srogl coupling early in their synthesis of amphidinolide F, [ 14 ] employing this reaction to construct the northeastern fragment of the macrocycle and the terpene chain.
The Yu lab has demonstrated that, in the presence of two sulfide bonds, one can be selectively functionalized with one equivalent of nucleophile if directed by a carbonyl oxygen. [ 15 ] This reaction proceeds through a five-membered palladacycle , with oxidative addition taking place at the cis -thioether. Additional equivalents of nucleophile will functionalize the trans position. | https://en.wikipedia.org/wiki/Liebeskind–Srogl_coupling |
The Liebig–Pasteur dispute was a disagreement between Justus von Liebig and Louis Pasteur over the processes and causes of fermentation .
Louis Pasteur , a French chemist, supported the idea that fermentation was a biological process. Justus von Liebig , a German chemist, supported the idea that fermentation was a mechanical process. The two chemists had different methods of experimentation, and they focused on different aspects of fermentation because they had different ideas about where fermentation began in an organism.
The Liebig–Pasteur feud started in 1857 when Pasteur stated that fermentation can occur in the absence of oxygen. The two were aware of each other's work, but each continued working with his own theory. They mentioned each other, as well as other scientists, in articles and other publications about the processes and causes of fermentation. [ 1 ]
Pasteur observed that fermentation does not require oxygen but does require yeast, which is alive; fermentation is a biological process, not a chemical oxidation–reduction process. He used two slender bottles, one of which had a curved neck, called a swan neck. Pasteur poured liquid broth into the two bottles and heated the broth in the bottom of each. After the liquid boiled, he let the bottles cool. Pasteur observed that the broth in the curved-neck bottle stayed clear, except when the bottle was shaken.
Pasteur explained that both bottles were open to the air, but the curved neck trapped most of the particles in the air, so the broth in that bottle kept its nature. The liquid in the other bottle, however, degenerated. He therefore concluded that fermentation does not require oxygen but does require the yeast. When yeast is allowed to grow over time, the substance will spoil or rot. [ 2 ]
Pasteur viewed fermentation as a type of vitalism . [ 1 ] He observed that living organisms were responsible for the process of fermentation.
Liebig formulated his own theory claiming that the production of alcohol was not a biological process but a chemical process, discrediting the idea that fermentation could occur due to microscopic organisms. He believed that vibrations emanating from the decomposition of organic matter would spread to the sugar resulting in the production of solely carbon dioxide and alcohol. [ 3 ]
The change was facilitated by a ferment, or yeast, which has the character of a compound of nitrogen in a state of putrefaction. Given the ferment's susceptibility to change, it undergoes decomposition through the action of air (which provides oxygen), water (which provides moisture), and a favorable temperature. Prior to contact with oxygen, the constituents are arranged together without acting on each other. Through the oxygen, the state of rest (or equilibrium) of the attractions that keep the elements together is disturbed. As a consequence of this disturbance, a separation or new arrangement of the elements is formed. Fermentation occurs through the transfer of molecular instability from the ferment (atoms in motion) to the sugar molecules, and it continues as long as the decomposition of the ferment continues. [ 4 ] [ 5 ]
Liebig's view of fermentation can be said to fall under a mechanistic point of view. From his work, he concluded that fermentation, like other catalytic processes, happened by a chemical and mechanical process.
Pasteur responded to Liebig's works, often through his own writings, using results from his own experiments to support his theories. For example, in 1858, Pasteur wrote a paper attempting to disprove Liebig's claim that, when fermentation takes place after yeast is added to pure sugar-water, it cannot be caused by the growth of the yeast. Pasteur thought that in pure sugar-water yeast was both growing and disintegrating, and he developed experiments to support this view. Liebig, however, was not convinced, and claimed that Pasteur was not answering the questions he had raised about the decomposition involved in fermentation. [ 6 ]
In 1869, Liebig responded to Pasteur's challenge, which Pasteur had made public ten years before. Liebig still held his ground, and remarked that some of Pasteur's experiments were difficult to replicate and use effectively. Pasteur was furious, and suggested that the Royal Academy appoint a third scientist to replicate his experiments and verify his results in order to support his theories. Neither Liebig nor the Academy responded.
Later, Pasteur demanded a meeting with Liebig, but Liebig did not receive him cordially and refused to discuss the topic of fermentation. [ 7 ]
The famous controversy between Pasteur and Liebig over the nature of alcoholic fermentation was eventually settled by the work of Eduard Buchner , a German chemist and zymologist. Influenced by his brother Hans, who became a famous bacteriologist, Buchner developed an interest in the fermentation process in which yeast breaks down sugar into alcohol and carbon dioxide. He published his first paper in 1885, which revealed that fermentation could occur in the presence of oxygen, a conclusion contrary to the view held by Louis Pasteur.
By 1893, Buchner was fully involved in seeking the active agent of fermentation. He obtained pure samples of the inner fluid of yeast cells by pulverizing yeast with a mixture of sand and diatomaceous earth , then squeezing the mixture through a canvas filter. This process avoided the solvents and high temperatures that had foiled previous investigations. He assumed that the collected fluid was incapable of producing fermentation because the yeast cells were dead. However, when he attempted to preserve the fluid with concentrated sugar, he was startled to observe carbon dioxide being released, a sign that fermentation was taking place. Buchner hypothesized that the fermentation was caused by an enzyme, which he named zymase. His finding that fermentation was the result of a chemical process occurring both inside and outside cells was published in 1897. [ 8 ]
Neither Liebig nor Pasteur was completely right. However, each of their arguments led to further discoveries that shaped many of today's fields of science and medicine.
Berzelius had defined the word "ferment" as an example of catalytic activity. Soon after, Schwann discovered that pepsin was the substance responsible for albuminous digestion in the stomach. He believed this was an instance of what Berzelius had defined as catalysts, the force behind chemical reactions of mineral, organic and living matter. Liebig opposed the idea, saying that the terms catalyst and pepsin should not be used, as they were only representatives of an idea.
Charles Cagniard-Latour , [ 9 ] Theodor Schwann [ 10 ] and Friedrich Traugott Kützing [ 11 ] independently identified yeast as a living organism that nourishes itself on the sugar it ferments, a process referred to as ethanol fermentation (alcoholic fermentation). Liebig, Berzelius, and Wöhler rejected the ideas of Schwann, Cagniard-Latour and Kützing. In 1839, Liebig and Wöhler published a paper on the role of yeast in alcoholic fermentation. In 1858, Liebig's student Moritz Traube [ 12 ] enunciated the theorem, applied to alcoholic fermentation, that all fermentations produced by living organisms are based on chemical reactions rather than on a vital force itself.
The dispute between Liebig and Pasteur in some ways slowed the advance of science and medicine in the areas of fermentation, alcoholic fermentation, and enzymes. On the other hand, the conflicting ideas sped up research on fermentation and enzymes by other scientists and chemists. Through Buchner and his experiments on fermentation, science and medicine went further, paving the way for enzyme and fermentation studies and marking one of the critical points in the history of modern chemistry. [ 13 ] [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Liebig–Pasteur_dispute |
In physics , the Lieb–Liniger model describes a gas of particles moving in one dimension and satisfying Bose–Einstein statistics . More specifically, it describes a one-dimensional Bose gas with Dirac delta interactions. It is named after Elliott H. Lieb and Werner Liniger , who introduced the model in 1963. [ 1 ] The model was developed to compare and test Nikolay Bogolyubov 's theory of a weakly interacting Bose gas. [ 2 ]
Given N {\displaystyle N} bosons moving in one dimension on the x {\displaystyle x} -axis, on the interval [ 0 , L ] {\displaystyle [0,L]} with periodic boundary conditions , a state of the N -body system must be described by a many-body wave function ψ ( x 1 , x 2 , … , x j , … , x N ) {\displaystyle \psi (x_{1},x_{2},\dots ,x_{j},\dots ,x_{N})} . The Hamiltonian of this model is introduced as
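(in the units ħ = 2m = 1 that are customary for this model)

H = -\sum_{j=1}^{N} \frac{\partial^{2}}{\partial x_{j}^{2}} + 2c \sum_{1\leq i<j\leq N} \delta(x_{i}-x_{j}),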
where δ {\displaystyle \delta } is the Dirac delta function . The constant c {\displaystyle c} denotes the strength of the interaction, c > 0 {\displaystyle c>0} represents a repulsive interaction and c < 0 {\displaystyle c<0} an attractive interaction. [ 3 ] The hard core limit c → ∞ {\displaystyle c\to \infty } is known as the Tonks–Girardeau gas . [ 3 ]
For a collection of bosons, the wave function is unchanged under permutation of any two particles (permutation symmetry), i.e., ψ ( … , x i , … , x j , … ) = ψ ( … , x j , … , x i , … ) {\displaystyle \psi (\dots ,x_{i},\dots ,x_{j},\dots )=\psi (\dots ,x_{j},\dots ,x_{i},\dots )} for all i ≠ j {\displaystyle i\neq j} and ψ {\displaystyle \psi } satisfies ψ ( … , x j = 0 , … ) = ψ ( … , x j = L , … ) {\displaystyle \psi (\dots ,x_{j}=0,\dots )=\psi (\dots ,x_{j}=L,\dots )} for all j {\displaystyle j} .
The delta function in the Hamiltonian gives rise to a boundary condition when two coordinates, say x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} are equal. The condition is that as x 2 {\displaystyle x_{2}} approaches x 1 {\displaystyle x_{1}} from above ( x 2 ↘ x 1 {\displaystyle x_{2}\searrow x_{1}} ), the derivative satisfies
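(in the same units as above)

\left(\frac{\partial}{\partial x_{2}} - \frac{\partial}{\partial x_{1}}\right)\psi\,\bigg|_{x_{2}=x_{1}^{+}} = c\,\psi\,\bigg|_{x_{2}=x_{1}}.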
The time-independent Schrödinger equation H ψ = E ψ {\displaystyle H\psi =E\psi } is solved by explicit construction of ψ {\displaystyle \psi } . Since ψ {\displaystyle \psi } is symmetric it is completely determined by its values in the simplex R {\displaystyle {\mathcal {R}}} , defined by the condition that 0 ≤ x 1 ≤ x 2 ≤ ⋯ ≤ x N ≤ L {\displaystyle 0\leq x_{1}\leq x_{2}\leq \dots \leq x_{N}\leq L} .
The solution can be written in the form of a Bethe ansatz as [ 2 ]
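(written here for the simplex \mathcal{R}, i.e. the region x_{1}\leq x_{2}\leq \dots \leq x_{N})

\psi(x_{1},\dots,x_{N}) = \sum_{P} a(P)\, \exp\!\Big( \mathrm{i} \sum_{j=1}^{N} k_{P_{j}} x_{j} \Big),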
with wave vectors 0 ≤ k 1 ≤ k 2 ≤ … , ≤ k N {\displaystyle 0\leq k_{1}\leq k_{2}\leq \dots ,\leq k_{N}} , where the sum is over all N ! {\displaystyle N!} permutations, P {\displaystyle P} , of the integers 1 , 2 , … , N {\displaystyle 1,2,\dots ,N} , and P {\displaystyle P} maps 1 , 2 , … , N {\displaystyle 1,2,\dots ,N} to P 1 , P 2 , … , P N {\displaystyle P_{1},P_{2},\dots ,P_{N}} . The coefficients a ( P ) {\displaystyle a(P)} , as well as the k {\displaystyle k} 's are determined by the condition H ψ = E ψ {\displaystyle H\psi =E\psi } , and this leads to a total energy
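(in the same units as the Hamiltonian above)

E = \sum_{j=1}^{N} k_{j}^{2},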
with the amplitudes given by
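(up to an overall normalization; one consistent choice for the ansatz written above is)

a(P) = \prod_{1\leq i<j\leq N} \left( 1 + \frac{\mathrm{i}c}{k_{P_{i}} - k_{P_{j}}} \right).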
These equations determine ψ {\displaystyle \psi } in terms of the k {\displaystyle k} 's. These lead to N {\displaystyle N} equations: [ 2 ]
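(the periodic boundary conditions, written in logarithmic form)

L\,k_{j} = 2\pi I_{j} - 2\sum_{l=1}^{N} \arctan\!\left(\frac{k_{j}-k_{l}}{c}\right), \qquad j = 1,\dots,N,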
where I 1 < I 2 < ⋯ < I N {\displaystyle I_{1}<I_{2}<\cdots <I_{N}} are integers when N {\displaystyle N} is odd and, when N {\displaystyle N} is even, they take values ± 1 2 , ± 3 2 , … {\displaystyle \pm {\frac {1}{2}},\pm {\frac {3}{2}},\dots } . For the ground state the I {\displaystyle I} 's satisfy | https://en.wikipedia.org/wiki/Lieb–Liniger_model |
In quantum chemistry and physics , the Lieb–Oxford inequality provides a lower bound for the indirect part of the Coulomb energy of a quantum mechanical system. It is named after Elliott H. Lieb and Stephen Oxford .
The inequality is of importance for density functional theory and plays a role in the proof of stability of matter .
In classical physics, one can calculate the Coulomb energy of a configuration of charged particles in the following way. First, calculate the charge density ρ , where ρ is a function of the coordinates x ∈ ℝ 3 . Second, calculate the Coulomb energy by integrating:
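(in Gaussian units, so that the Coulomb kernel is simply 1/|x − y|)

U = \frac{1}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}} \frac{\rho(x)\,\rho(y)}{|x-y|}\;\mathrm{d}x\,\mathrm{d}y.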
In other words, for each pair of points x and y , this expression calculates the energy related to the fact that the charge at x is attracted to or repelled from the charge at y . The factor of 1 ⁄ 2 corrects for double-counting the pairs of points.
In quantum mechanics, it is also possible to calculate a charge density ρ , which is a function of x ∈ ℝ 3 . More specifically, ρ is defined as the expectation value of charge density at each point. But in this case, the above formula for Coulomb energy is not correct, due to exchange and correlation effects. The above, classical formula for Coulomb energy is then called the "direct" part of Coulomb energy. To get the actual Coulomb energy, it is necessary to add a correction term, called the "indirect" part of Coulomb energy. The Lieb–Oxford inequality concerns this indirect part. It is relevant in density functional theory , where the expectation value ρ plays a central role.
For a quantum mechanical system of N particles, each with charge e , the N -particle density is denoted by
The function P is only assumed to be non-negative and normalized . Thus the following applies to particles with any "statistics". For example, if the system is described by a normalised square integrable N -particle wave function
then
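(P is then simply the modulus squared of the wave function)

P(x_{1},\dots,x_{N}) = |\psi(x_{1},\dots,x_{N})|^{2}.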
More generally, in the case of particles with spin having q spin states per particle and with corresponding wave function
the N -particle density is given by
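(summing the modulus squared over all spin indices, each σ_j running over 1, …, q)

P(x_{1},\dots,x_{N}) = \sum_{\sigma_{1}=1}^{q}\cdots\sum_{\sigma_{N}=1}^{q} |\psi(x_{1},\sigma_{1},\dots,x_{N},\sigma_{N})|^{2}.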
Alternatively, if the system is described by a density matrix γ , then P is the diagonal
The electrostatic energy of the system is defined as
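(the expectation of the pairwise Coulomb repulsion with respect to P)

I_{P} = \sum_{1\leq i<j\leq N} e^{2} \int_{\mathbb{R}^{3N}} \frac{P(x_{1},\dots,x_{N})}{|x_{i}-x_{j}|}\;\mathrm{d}x_{1}\cdots\mathrm{d}x_{N}.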
For x ∈ ℝ 3 , the single particle charge density is given by
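(obtained by integrating P over all but one coordinate and summing over which particle is held fixed)

\rho(x) = e \sum_{i=1}^{N} \int_{\mathbb{R}^{3(N-1)}} P(x_{1},\dots,x_{i-1},x,x_{i+1},\dots,x_{N}) \prod_{j\neq i}\mathrm{d}x_{j},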
and the direct part of the Coulomb energy of the system of N particles is defined as the electrostatic energy associated with the charge density ρ , i.e.
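(the same classical expression as above, now evaluated on the quantum mechanical ρ)

D(\rho) = \frac{1}{2}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}} \frac{\rho(x)\,\rho(y)}{|x-y|}\;\mathrm{d}x\,\mathrm{d}y.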
The Lieb–Oxford inequality states that the difference between the true energy I P and its semiclassical approximation D ( ρ ) is bounded from below as
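(stated here in the form with the charge e kept explicit)

E_{P} \;=\; I_{P} - D(\rho) \;\geq\; -\,C\, e^{2/3} \int_{\mathbb{R}^{3}} \rho(x)^{4/3}\;\mathrm{d}x, \qquad (1)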
where C ≤ 1.58 is a constant independent of the particle number N . E P is referred to as the indirect part of the Coulomb energy and in density functional theory more commonly as the exchange plus correlation energy . A similar bound exists if the particles have different charges e 1 , ... , e N . No upper bound is possible for E P .
While the original proof yielded the constant C = 8.52 , [ 1 ] Lieb and Oxford managed to refine this result to C = 1.68 . [ 2 ] Later, the same method of proof was used to further improve the constant to C = 1.64 . [ 3 ] It is only recently that the constant was decreased to C = 1.58 . [ 4 ] With these constants the inequality holds for any particle number N .
The constant can be further improved if the particle number N is restricted. In the case of a single particle N = 1 the Coulomb energy vanishes, I P = 0 , and the smallest possible constant can be computed explicitly as C 1 = 1.092 . [ 2 ] The corresponding variational equation for the optimal ρ is the Lane–Emden equation of order 3. For two particles ( N = 2 ) it is known that the smallest possible constant satisfies C 2 ≥ 1.234 . [ 2 ] In general it can be proved that the optimal constants C N increase with the number of particles, i.e. C N ≤ C N + 1 , [ 2 ] and converge in the limit of large N to the best constant C LO in the inequality ( 1 ). Any lower bound on the optimal constant for fixed particle number N is also a lower bound on the optimal constant C LO . The best numerical lower bound was obtained for N = 60 where C 60 ≥ 1.41 . [ 5 ] This bound has been obtained by considering an exponential density. For the same particle number a uniform density gives C 60 ≥ 1.34 .
The largest proved lower bound on the best constant is C LO ≥ 1.4442 , which was first proven by Cotar and Petrache. [ 6 ] The same lower bound was later obtained using a uniform electron gas, melted in the neighborhood of its surface, by Lewin, Lieb and Seiringer. [ 7 ] Hence, to summarise, the best known bounds for C are 1.44 ≤ C ≤ 1.58 .
Historically, the first approximation of the indirect part E P of the Coulomb energy in terms of the single particle charge density was given by Paul Dirac in 1930 for fermions . [ 8 ] The wave function under consideration is
With the aim of evoking perturbation theory, one considers the eigenfunctions of the Laplacian in a large cubic box of volume | Λ | and sets
where χ 1 , ..., χ q forms an orthonormal basis of ℂ q . The allowed values of k ∈ ℝ 3 are n /| Λ | 1 ⁄ 3 with n ∈ ℤ 3 + . For large N , | Λ | , and fixed ρ = N | e |/| Λ | , the indirect part of the Coulomb energy can be computed to be
with C = 0.93 .
This result can be compared to the lower bound ( 1 ). In contrast to Dirac's approximation the Lieb–Oxford inequality does not include the number q of spin states on the right-hand side. The dependence on q in Dirac's formula is a consequence of his specific choice of wave functions and not a general feature.
The constant C in ( 1 ) can be made smaller at the price of adding another term to the right-hand side. By including a term that involves the gradient of a power of the single particle charge density ρ , the constant C can be improved to 1.45 . [ 9 ] [ 10 ] Thus, for a uniform density system C ≤ 1.45 . | https://en.wikipedia.org/wiki/Lieb–Oxford_inequality |
The Lieb–Robinson bound is a theoretical upper limit on the speed at which information can propagate in non- relativistic quantum systems. It demonstrates that information cannot travel instantaneously in quantum theory, even when the relativity limits of the speed of light are ignored. The existence of such a finite speed was discovered mathematically by Elliott H. Lieb and Derek W. Robinson in 1972. [ 1 ] It turns the locality properties of physical systems into the existence of, and an upper bound for, this speed. The bound is now known as the Lieb–Robinson bound and the speed is known as the Lieb–Robinson velocity. This velocity is always finite but not universal, depending on the details of the system under consideration. For finite-range, e.g. nearest-neighbor, interactions, this velocity is a constant independent of the distance travelled. In long-range interacting systems, this velocity remains finite, but it can increase with the distance travelled. [ 2 ] [ 3 ]
In the study of quantum systems such as quantum optics , quantum information theory , atomic physics , and condensed matter physics , it is important to know that there is a finite speed with which information can propagate. The theory of relativity shows that no information, or anything else for that matter, can travel faster than the speed of light. When non-relativistic mechanics is considered, however ( Newton's equations of motion or Schrödinger's equation of quantum mechanics), it had been thought that there was no limitation on the speed of propagation of information. This is not so for certain kinds of quantum systems of atoms arranged in a lattice, often called quantum spin systems. This is important conceptually and practically, because it means that, for short periods of time, distant parts of a system act independently.
One of the practical applications of Lieb–Robinson bounds is quantum computing . Current proposals to construct quantum computers built out of atomic-like units mostly rely on the existence of this finite speed of propagation to protect against too rapid dispersal of information. [ 4 ] [ 3 ]
To define the bound, it is necessary to first describe basic facts about quantum mechanical systems composed of several units, each with a finite dimensional Hilbert space .
Lieb–Robinson bounds are considered on a ν {\displaystyle \nu } -dimensional lattice ( ν = 1 , 2 {\displaystyle \nu =1,2} or 3 {\displaystyle 3} ) Γ {\displaystyle \Gamma } , such as the square lattice Γ = Z 2 {\displaystyle \Gamma =\mathbb {Z} ^{2}} .
A Hilbert space of states H x {\displaystyle {\mathcal {H}}_{x}} is associated with each point x ∈ Γ {\displaystyle x\in \Gamma } . The dimension of this space is finite, but this was generalized in 2008 to include infinite dimensions (see below). Such a system is called a quantum spin system .
For every finite subset of the lattice, X ⊂ Γ {\displaystyle X\subset \Gamma } , the associated Hilbert space is given by the tensor product
An observable A {\displaystyle A} supported on (i.e., depends only on) a finite set X ⊂ Γ {\displaystyle X\subset \Gamma } is a linear operator on the Hilbert space H X {\displaystyle {\mathcal {H}}_{X}} .
When H x {\displaystyle {\mathcal {H}}_{x}} is finite dimensional, choose a finite basis of operators that span the set of linear operators on H x {\displaystyle {\mathcal {H}}_{x}} . Then any observable on H x {\displaystyle {\mathcal {H}}_{x}} can be written as a sum of basis operators on H x {\displaystyle {\mathcal {H}}_{x}} .
The Hamiltonian of the system is described by an interaction Φ ( ⋅ ) {\displaystyle \Phi (\cdot )} . The interaction is a function from the finite sets X ⊂ Γ {\displaystyle X\subset \Gamma } to self-adjoint observables Φ ( X ) {\displaystyle \Phi (X)} supported in X {\displaystyle X} . The interaction is assumed to be finite range (meaning that Φ ( X ) = 0 {\displaystyle \Phi (X)=0} if the size of X {\displaystyle X} exceeds a certain prescribed size) and translation invariant . These requirements were lifted later. [ 2 ] [ 5 ]
Although translation invariance is usually assumed, it is not necessary to do so. It is enough to assume that the interaction is bounded above and below on its domain. Thus,
the bound is quite robust in the sense that it is tolerant of changes of the Hamiltonian. A finite range is essential, however. An interaction is said to be of finite range if there is a finite number R {\displaystyle R} such that for any set X {\displaystyle X} with diameter greater than R {\displaystyle R} the interaction is zero, i.e., Φ ( X ) = 0 {\displaystyle \Phi (X)=0} . Again, this requirement was lifted later. [ 2 ] [ 5 ]
The Hamiltonian of the system with interaction Φ {\displaystyle \Phi } is defined formally by:
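(a formal sum over the finite subsets of the lattice; in practice one restricts to a finite volume Λ ⊂ Γ)

H_{\Phi} = \sum_{X\subset\Gamma} \Phi(X).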
The laws of quantum mechanics say that corresponding to every physically observable quantity there is a self-adjoint operator A {\displaystyle A} .
For every observable A {\displaystyle A} with finite support, the Hamiltonian defines a continuous one-parameter group τ t {\displaystyle \tau _{t}} of transformations of the observables, given by
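(the Heisenberg-picture evolution)

\tau_{t}(A) = e^{\mathrm{i}tH}\,A\,e^{-\mathrm{i}tH}.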
Here, t {\displaystyle t} has a physical meaning of time.
(Technically speaking, this time evolution is defined by a power-series expansion that is known to be a norm-convergent series A ( t ) = A + i t [ H , A ] + ( i t ) 2 2 ! [ H , [ H , A ] ] + ⋯ {\displaystyle A(t)=A+it[H,A]+{\frac {(it)^{2}}{2!}}[H,[H,A]]+\cdots } , see, [ 6 ] Theorem 7.6.2, which is an adaptation from. [ 7 ] More rigorous details can be found in. [ 1 ] )
The bound in question was proved in [ 1 ] and is the following: For any observables A {\displaystyle A} and B {\displaystyle B} with finite supports X ⊂ Γ {\displaystyle X\subset \Gamma } and Y ⊂ Γ {\displaystyle Y\subset \Gamma } , respectively, and for any time t ∈ R {\displaystyle t\in \mathbb {R} } the following holds for some positive constants a , c {\displaystyle a,c} and v {\displaystyle v} :
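(schematically; the constant c absorbs the dependence on the observables and their supports, as explained below)

\|[\tau_{t}(A),B]\| \;\leq\; c\,e^{-a\,(d(X,Y)-v|t|)}, \qquad (1)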
where d ( X , Y ) {\displaystyle d(X,Y)} denotes the distance between the sets X {\displaystyle X} and Y {\displaystyle Y} . The operator [ A , B ] = A B − B A {\displaystyle [A,B]=AB-BA} is called the commutator of the operators A {\displaystyle A} and B {\displaystyle B} , while the symbol ‖ O ‖ {\displaystyle \|O\|} denotes the norm , or size, of an operator O {\displaystyle O} . The bound has nothing to do with the state of the quantum system, but depends only on the Hamiltonian governing the dynamics. Once this operator bound is established it necessarily carries over to any state of the system.
A positive constant c {\displaystyle c} depends on the norms of the observables A {\displaystyle A} and B {\displaystyle B} , the sizes of the supports X {\displaystyle X} and Y {\displaystyle Y} , the interaction, the lattice structure and the dimension of the Hilbert space H x {\displaystyle {\mathcal {H}}_{x}} . A positive constant v {\displaystyle v} depends on the interaction and the lattice structure only. The number a > 0 {\displaystyle a>0} can be chosen at will provided d ( X , Y ) / v | t | {\displaystyle d(X,Y)/v|t|} is chosen sufficiently large. In other words, the further out one goes on the light cone, d ( X , Y ) − v | t | {\displaystyle d(X,Y)-v|t|} , the sharper the exponential decay rate is.
(In later works authors tended to regard a {\displaystyle a} as a fixed constant.) The constant v {\displaystyle v} is called the group velocity or Lieb–Robinson velocity .
The bound ( 1 ) is presented slightly differently from the equation in the original paper, which derived velocity-dependent decay rates along spacetime rays with velocity greater than v L R {\displaystyle v_{LR}} . [ 1 ] This more explicit form ( 1 ) can be seen from the proof of the bound. [ 1 ]
The Lieb–Robinson bound shows that for times | t | < d ( X , Y ) / v {\displaystyle |t|<d(X,Y)/v} the norm on the right-hand side is exponentially small. This is the exponentially small error mentioned above.
The reason for considering the commutator on the left-hand side of the Lieb–Robinson bounds is the following:
The commutator between observables A {\displaystyle A} and B {\displaystyle B} is zero if their supports are disjoint.
The converse is also true: if observable A {\displaystyle A} is such that its commutator with any observable B {\displaystyle B} supported outside some set X {\displaystyle X} is zero, then A {\displaystyle A} has a support inside set X {\displaystyle X} .
This statement is also approximately true in the following sense: [ 8 ] suppose that there exists some ϵ > 0 {\displaystyle \epsilon >0} such that ‖ [ A , B ] ‖ ≤ ϵ ‖ B ‖ {\displaystyle \|[A,B]\|\leq \epsilon \|B\|} for some observable A {\displaystyle A} and any observable B {\displaystyle B} that is supported outside the set X {\displaystyle X} . Then there exists an observable A ( ϵ ) {\displaystyle A(\epsilon )} with support inside set X {\displaystyle X} that approximates an observable A {\displaystyle A} , i.e. ‖ A − A ( ϵ ) ‖ ≤ ϵ {\displaystyle \|A-A(\epsilon )\|\leq \epsilon } .
Thus, Lieb–Robinson bounds say that the time evolution of an observable A {\displaystyle A} with support in a set X {\displaystyle X} is supported (up to exponentially small errors) in a δ {\displaystyle \delta } -neighborhood of set X {\displaystyle X} , where δ < v | t | {\displaystyle \delta <v|t|} with v {\displaystyle v} being the Lieb–Robinson velocity. Outside this set there is no influence of A {\displaystyle A} . In other words, these bounds assert that the speed of propagation of perturbations in quantum spin systems is bounded.
In [ 9 ] Robinson generalized the bound ( 1 ) by considering exponentially decaying interactions (that need not be translation invariant), i.e., for which the strength of the interaction decays exponentially with the diameter of the set.
This result is discussed in detail in, [ 10 ] Chapter 6. No great interest was shown in the Lieb–Robinson bounds until 2004 when Hastings [ 11 ] applied them to the Lieb–Schultz–Mattis theorem.
Subsequently, Nachtergaele and Sims [ 12 ] extended the results of [ 9 ] to include models on vertices with a metric and to derive exponential decay of correlations . From 2005 to 2006 interest in Lieb–Robinson bounds strengthened with additional applications to exponential decay of correlations (see [ 2 ] [ 5 ] [ 13 ] and the sections below). New proofs of the bounds were developed and, in particular, the constant in ( 1 ) was improved making it independent of the dimension of the Hilbert space.
Several further improvements of the constant c {\displaystyle c} in ( 1 ) were made. [ 14 ] In 2008 the Lieb–Robinson bound was extended to the case in which each H x {\displaystyle H_{x}} is infinite dimensional. [ 15 ] In [ 15 ] it was shown that on-site unbounded perturbations do not change the Lieb–Robinson bound. That is, Hamiltonians of the following form can be considered on a finite subset Λ ⊂ Γ {\displaystyle \Lambda \subset \Gamma } :
where H x {\displaystyle H_{x}} is a self-adjoint operator over H x {\displaystyle {\mathcal {H}}_{x}} , which need not be bounded.
The Lieb–Robinson bounds were extended to certain continuous quantum systems, that is to a general harmonic Hamiltonian, [ 15 ] which, in a finite volume Γ L = ( − L , L ) d ∩ Z d , {\displaystyle \Gamma _{L}=(-L,L)^{d}\cap \mathbb {Z} ^{d},} , where L , d {\displaystyle L,d} are positive integers, takes the form:
where the periodic boundary conditions are imposed and λ j ≥ 0 {\displaystyle \lambda _{j}\geq 0} , ω > 0 {\displaystyle \omega >0} . Here { e j } {\displaystyle \{e_{j}\}} are canonical basis vectors in Z d {\displaystyle \mathbb {Z} ^{d}} .
Anharmonic Hamiltonians with on-site and multiple-site perturbations were considered and the Lieb–Robinson bounds were derived for them. [ 15 ] [ 16 ] Further generalizations of the harmonic lattice were discussed. [ 17 ] [ 18 ]
Another generalization of the Lieb–Robinson bounds was made to the irreversible dynamics,
in which case the dynamics has a Hamiltonian part and also a dissipative part. The dissipative part is described by terms of Lindblad form, so that the dynamics τ t {\displaystyle \tau _{t}} satisfies the Lindblad-Kossakowski master equation.
Lieb–Robinson bounds for the irreversible dynamics were considered by [ 13 ] in the classical context and by [ 19 ] for a class of quantum lattice systems with finite-range interactions. Lieb–Robinson bounds for lattice models with a dynamics generated by both Hamiltonian and dissipative interactions with suitably fast decay in space, and that may depend on time, were proved in, [ 20 ] where the authors also proved the existence of the infinite dynamics as a strongly continuous cocycle of unit-preserving completely positive maps.
The Lieb–Robinson bounds were also generalized to interactions that decay as a power-law, i.e. the strength of the interaction is upper bounded by 1 / r α , {\displaystyle 1/r^{\alpha },} where r {\displaystyle r} is the diameter of the set and α {\displaystyle \alpha } is a positive constant. [ 2 ] [ 21 ] [ 22 ] [ 3 ] Understanding whether locality persists for power-law interactions holds serious implications for systems such as trapped ions, Rydberg atoms, ultracold atoms and molecules.
In contrast to the finite-range interacting systems, where information may only travel at a constant speed, power-law interactions allow information to travel at a speed that increases with the distance. [ 23 ] Thus, the Lieb–Robinson bounds for power-law interactions typically yield a sub-linear light cone that is asymptotically linear in the limit α → ∞ . {\displaystyle \alpha \rightarrow \infty .} A recent analysis using a quantum simulation algorithm implied a light cone t ≳ r ( α − 2 D ) / ( α − D ) {\displaystyle t\gtrsim r^{(\alpha -2D)/(\alpha -D)}} , where D {\displaystyle D} is the dimension of the system. [ 3 ] Tightening the light cone for power-law interactions is still an active research area.
Lieb–Robinson bounds are used in many areas of mathematical physics. Among the main applications of the bound are error bounds on quantum simulation algorithms, the existence of the thermodynamic limit, the exponential decay of correlations, and the Lieb–Schultz–Mattis theorem.
The aim of digital quantum simulation is to simulate the dynamics of a quantum system using the fewest elementary quantum gates. For a nearest-neighbor interacting system with n {\displaystyle n} particles, simulating its dynamics for time t {\displaystyle t} using the Lie product formula requires O ( n 2 t 2 ) {\displaystyle O(n^{2}t^{2})} quantum gates. In 2018, Haah et al. [ 4 ] proposed a near-optimal quantum algorithm that uses only O ( n t log ( n t ) ) {\displaystyle O(nt\log(nt))} quantum gates. The idea is to approximate the dynamics of the system by the dynamics of its subsystems, some of them spatially separated. The error of the approximation is bounded by the original Lieb–Robinson bound. The algorithm was later generalized to power-law interactions and subsequently used to derive a stronger Lieb–Robinson bound. [ 3 ]
One of the important properties of any model meant to describe properties of bulk matter is the existence of the thermodynamic limit. This says that intrinsic properties of the system should be essentially independent of the size of the system which, in any experimental setup, is finite.
The static thermodynamic limit from the equilibrium point of view was settled much before the Lieb–Robinson bound was proved, see [ 6 ] for example. In certain cases one can use a Lieb–Robinson bound to establish the existence of a thermodynamic limit of the dynamics , τ t Γ {\displaystyle \tau _{t}^{\Gamma }} , for an
infinite lattice Γ {\displaystyle \Gamma } as the limit of finite lattice dynamics. The limit is usually considered over an increasing sequence of finite subsets Λ n ⊂ Γ {\displaystyle \Lambda _{n}\subset \Gamma } , i.e. such that for n < m {\displaystyle n<m} , there is an inclusion Λ n ⊂ Λ m {\displaystyle \Lambda _{n}\subset \Lambda _{m}} . In order to prove the existence of the infinite dynamics τ t Γ {\displaystyle \tau _{t}^{\Gamma }} as a strongly continuous, one-parameter group of automorphisms, it was proved that { τ t Λ n } n {\displaystyle \{\tau _{t}^{\Lambda _{n}}\}_{n}} is a Cauchy sequence and consequently is convergent. By elementary considerations, the existence of the thermodynamic limit then follows. A more detailed discussion of the thermodynamic limit can be found in [ 24 ] section 6.2.
Robinson was the first to show the existence of the thermodynamic limit for exponentially decaying interactions. [ 9 ] Later, Nachtergaele et al. [ 5 ] [ 16 ] [ 20 ] showed the existence of the infinite volume dynamics for almost every type of interaction described in the section "Improvements of Lieb–Robinson bounds" above.
Let ⟨ A ⟩ Ω {\displaystyle \langle A\rangle _{\Omega }} denote the expectation value of the observable A {\displaystyle A} in a state Ω {\displaystyle \Omega } . The correlation function between two observables A {\displaystyle A} and B {\displaystyle B} is defined as ⟨ A B ⟩ Ω − ⟨ A ⟩ Ω ⟨ B ⟩ Ω . {\displaystyle \langle AB\rangle _{\Omega }-\langle A\rangle _{\Omega }\langle B\rangle _{\Omega }.}
Lieb–Robinson bounds are used to show that the correlations decay exponentially in distance for a system with an energy gap above a non-degenerate ground state Ω {\displaystyle \Omega } , see. [ 2 ] [ 12 ] In other words, the inequality
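(schematically)

\big|\langle AB\rangle_{\Omega} - \langle A\rangle_{\Omega}\langle B\rangle_{\Omega}\big| \;\leq\; K\,\|A\|\,\|B\|\,e^{-a\,d(X,Y)}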
holds for observables A {\displaystyle A} and B {\displaystyle B} with support in the sets X {\displaystyle X} and Y {\displaystyle Y} respectively. Here K {\displaystyle K} and a {\displaystyle a} are some constants.
Alternatively the state Ω {\displaystyle \Omega } can be taken as a product state, in which case correlations decay exponentially without assuming the energy gap above the ground state. [ 5 ]
Such a decay was long known for relativistic dynamics, but only guessed for Newtonian dynamics. The Lieb–Robinson bounds succeed in replacing the relativistic symmetry by local estimates on the Hamiltonian.
The Lieb–Schultz–Mattis theorem implies that the ground state of the Heisenberg antiferromagnet on a bipartite lattice with isomorphic sublattices is non-degenerate, i.e., unique, but the gap can be very small. [ 25 ]
For one-dimensional and quasi-one-dimensional systems of even length and with half-integral spin Affleck and Lieb, [ 26 ] generalizing the original result by Lieb, Schultz, and Mattis, [ 27 ] proved that the gap γ L {\displaystyle \gamma _{L}} in the spectrum above the ground state is bounded above by
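(an inverse-length bound)

\gamma_{L} \;\leq\; \frac{c}{L},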
where L {\displaystyle L} is the size of the lattice and c {\displaystyle c} is a constant. Many attempts have been made to extend this result to higher dimensions, d > 1 {\displaystyle d>1} .
The Lieb–Robinson bound was utilized by Hastings [ 11 ] and by Nachtergaele-Sims [ 28 ] in a proof of the Lieb–Schultz–Mattis Theorem for higher-dimensional cases.
The following bound on the gap was obtained:
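(weaker than the one-dimensional result by a logarithmic factor)

\gamma_{L} \;\leq\; c\,\frac{\log L}{L}.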
In 2015, it was shown that the Lieb–Robinson bound can also have applications outside of the context of local Hamiltonians as we now explain. The spin-boson model describes the dynamics of a spin coupled to a continuum of oscillators. It has been studied in great detail and explains quantum dissipative effects in a wide range of quantum systems. Let H {\displaystyle H} denote the Hamiltonian of the Spin-Boson model with a continuum bosonic bath, and H L {\displaystyle H_{L}} denote the Spin-Boson model whose bath has been discretised to include L ∈ N + {\displaystyle L\in \mathbb {N} ^{+}} harmonic oscillators with frequencies chosen according to Gauss quadrature rules . For all observables A {\displaystyle A} on the Spin Hamiltonian, the error on the expectation value of A {\displaystyle A} induced by discretising the Spin-Boson model according to the above discretisation scheme is bounded by [ 29 ]
where c , a {\displaystyle c,a} are positive constants and v {\displaystyle v} is the Lieb–Robinson velocity, which in this case is directly proportional to ω m a x {\displaystyle \omega _{max}} , the maximum frequency of the bath in the Spin-Boson model. Here, the number of discrete modes L {\displaystyle L} plays the role of the distance d ( X , Y ) {\displaystyle d(X,Y)} mentioned below Eq. ( 1 ). One can also bound the error induced by local Fock space truncation of the harmonic oscillators. [ 30 ]
The first experimental observation of the Lieb–Robinson velocity was done by Cheneau et al. [ 31 ] | https://en.wikipedia.org/wiki/Lieb–Robinson_bounds |
In mathematics and physics , Lieb–Thirring inequalities provide an upper bound on the sums of powers of the negative eigenvalues of a Schrödinger operator in terms of integrals of the potential. They are named after E. H. Lieb and W. E. Thirring .
The inequalities are useful in studies of quantum mechanics and differential equations and imply, as a corollary, a lower bound on the kinetic energy of N {\displaystyle N} quantum mechanical particles that plays an important role in the proof of stability of matter . [ 1 ]
For the Schrödinger operator − Δ + V ( x ) = − ∇ 2 + V ( x ) {\displaystyle -\Delta +V(x)=-\nabla ^{2}+V(x)} on R n {\displaystyle \mathbb {R} ^{n}} with real-valued potential V ( x ) : R n → R , {\displaystyle V(x):\mathbb {R} ^{n}\to \mathbb {R} ,} the numbers λ 1 ≤ λ 2 ≤ ⋯ ≤ 0 {\displaystyle \lambda _{1}\leq \lambda _{2}\leq \dots \leq 0} denote the (not necessarily finite) sequence of negative eigenvalues. Then, for γ {\displaystyle \gamma } and n {\displaystyle n} satisfying one of the conditions
there exists a constant L γ , n {\displaystyle L_{\gamma ,n}} , which only depends on γ {\displaystyle \gamma } and n {\displaystyle n} , such that
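(the sum running over the negative eigenvalues listed above)

\sum_{j} |\lambda_{j}|^{\gamma} \;\leq\; L_{\gamma,n} \int_{\mathbb{R}^{n}} V(x)_{-}^{\,\gamma+\frac{n}{2}}\;\mathrm{d}x \qquad (1)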
where V ( x ) − := max ( − V ( x ) , 0 ) {\displaystyle V(x)_{-}:=\max(-V(x),0)} is the negative part of the potential V {\displaystyle V} . The cases γ > 1 / 2 , n = 1 {\displaystyle \gamma >1/2,n=1} as well as γ > 0 , n ≥ 2 {\displaystyle \gamma >0,n\geq 2} were proven by E. H. Lieb and W. E. Thirring in 1976 [ 1 ] and used in their proof of stability of matter. In the case γ = 0 , n ≥ 3 {\displaystyle \gamma =0,n\geq 3} the left-hand side is simply the number of negative eigenvalues, and proofs were given independently by M. Cwikel, [ 2 ] E. H. Lieb [ 3 ] and G. V. Rozenbljum. [ 4 ] The resulting γ = 0 {\displaystyle \gamma =0} inequality is thus also called the Cwikel–Lieb–Rosenbljum bound. The remaining critical case γ = 1 / 2 , n = 1 {\displaystyle \gamma =1/2,n=1} was proven to hold by T. Weidl [ 5 ] The conditions on γ {\displaystyle \gamma } and n {\displaystyle n} are necessary and cannot be relaxed.
The Lieb–Thirring inequalities can be compared to the semi-classical limit.
The classical phase space consists of pairs ( p , x ) ∈ R 2 n . {\displaystyle (p,x)\in \mathbb {R} ^{2n}.} Identifying the momentum operator − i ∇ {\displaystyle -\mathrm {i} \nabla } with p {\displaystyle p} and assuming that every quantum state is contained in a volume ( 2 π ) n {\displaystyle (2\pi )^{n}} in the 2 n {\displaystyle 2n} -dimensional phase space, the semi-classical approximation
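(schematically)

\sum_{j}|\lambda_{j}|^{\gamma} \;\approx\; \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}} \big(p^{2}+V(x)\big)_{-}^{\gamma}\;\mathrm{d}p\,\mathrm{d}x \;=\; L_{\gamma,n}^{\mathrm{cl}} \int_{\mathbb{R}^{n}} V(x)_{-}^{\,\gamma+\frac{n}{2}}\;\mathrm{d}x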
is derived with the constant
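(obtained by carrying out the momentum integral explicitly)

L_{\gamma,n}^{\mathrm{cl}} \;=\; (4\pi)^{-n/2}\, \frac{\Gamma(\gamma+1)}{\Gamma\!\left(\gamma+\tfrac{n}{2}+1\right)}.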
While the semi-classical approximation does not need any assumptions on γ > 0 {\displaystyle \gamma >0} , the Lieb–Thirring inequalities only hold for suitable γ {\displaystyle \gamma } .
Numerous results have been published about the best possible constant L γ , n {\displaystyle L_{\gamma ,n}} in ( 1 ) but this problem is still partly open. The semiclassical approximation becomes exact in the limit of large coupling, that is for potentials β V {\displaystyle \beta V} the Weyl asymptotics
hold. This implies that L γ , n c l ≤ L γ , n {\displaystyle L_{\gamma ,n}^{\mathrm {cl} }\leq L_{\gamma ,n}} . Lieb and Thirring [ 1 ] were able to show that L γ , n = L γ , n c l {\displaystyle L_{\gamma ,n}=L_{\gamma ,n}^{\mathrm {cl} }} for γ ≥ 3 / 2 , n = 1 {\displaystyle \gamma \geq 3/2,n=1} . M. Aizenman and E. H. Lieb [ 6 ] proved that for fixed dimension n {\displaystyle n} the ratio L γ , n / L γ , n c l {\displaystyle L_{\gamma ,n}/L_{\gamma ,n}^{\mathrm {cl} }} is a monotonic , non-increasing function of γ {\displaystyle \gamma } . Subsequently L γ , n = L γ , n c l {\displaystyle L_{\gamma ,n}=L_{\gamma ,n}^{\mathrm {cl} }} was also shown to hold for all n {\displaystyle n} when γ ≥ 3 / 2 {\displaystyle \gamma \geq 3/2} by A. Laptev and T. Weidl. [ 7 ] For γ = 1 / 2 , n = 1 {\displaystyle \gamma =1/2,\,n=1} D. Hundertmark, E. H. Lieb and L. E. Thomas [ 8 ] proved that the best constant is given by L 1 / 2 , 1 = 2 L 1 / 2 , 1 c l = 1 / 2 {\displaystyle L_{1/2,1}=2L_{1/2,1}^{\mathrm {cl} }=1/2} .
On the other hand, it is known that L γ , n c l < L γ , n {\displaystyle L_{\gamma ,n}^{\mathrm {cl} }<L_{\gamma ,n}} for 1 / 2 ≤ γ < 3 / 2 , n = 1 {\displaystyle 1/2\leq \gamma <3/2,n=1} [ 1 ] and for γ < 1 , n ≥ 1 {\displaystyle \gamma <1,n\geq 1} . [ 9 ] In the former case Lieb and Thirring conjectured that the sharp constant is given by
The best known value for the physical relevant constant L 1 , 3 {\displaystyle L_{1,3}} is 1.456 L 1 , 3 c l {\displaystyle 1.456L_{1,3}^{\mathrm {cl} }} [ 10 ] and the smallest known constant in the Cwikel–Lieb–Rosenbljum inequality is 6.869 L 0 , 3 c l {\displaystyle 6.869L_{0,3}^{\mathrm {cl} }} . [ 3 ] A complete survey of the presently best known values for L γ , n {\displaystyle L_{\gamma ,n}} can be found in the literature. [ 11 ]
The Lieb–Thirring inequality for γ = 1 {\displaystyle \gamma =1} is equivalent to a lower bound on the kinetic energy of a given normalised N {\displaystyle N} -particle wave function ψ ∈ L 2 ( R N n ) {\displaystyle \psi \in L^{2}(\mathbb {R} ^{Nn})} in terms of the one-body density. For an anti-symmetric wave function such that
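(a sign change under exchange of any two coordinates)

\psi(x_{1},\dots,x_{i},\dots,x_{j},\dots,x_{N}) = -\,\psi(x_{1},\dots,x_{j},\dots,x_{i},\dots,x_{N})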
for all 1 ≤ i , j ≤ N {\displaystyle 1\leq i,j\leq N} , the one-body density is defined as
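(all coordinates are equivalent for an (anti)symmetric ψ, so one may integrate out the last N − 1 of them)

\rho_{\psi}(x) = N \int_{\mathbb{R}^{(N-1)n}} |\psi(x,x_{2},\dots,x_{N})|^{2}\;\mathrm{d}x_{2}\cdots\mathrm{d}x_{N}.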
The Lieb–Thirring inequality ( 1 ) for γ = 1 {\displaystyle \gamma =1} is equivalent to the statement that
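(with the kinetic energy written as the sum of the single-particle gradient terms)

\sum_{i=1}^{N} \int_{\mathbb{R}^{Nn}} |\nabla_{i}\psi|^{2}\;\mathrm{d}x_{1}\cdots\mathrm{d}x_{N} \;\geq\; K_{n} \int_{\mathbb{R}^{n}} \rho_{\psi}(x)^{1+\frac{2}{n}}\;\mathrm{d}x \qquad (2)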
where the sharp constant K n {\displaystyle K_{n}} is defined via
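(a Legendre-transform duality; with the conventions of (1) and (2) used here it reads)

L_{1,n} \;=\; \frac{2}{n+2} \left( \frac{n}{(n+2)\,K_{n}} \right)^{n/2}.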
The inequality can be extended to particles with spin states by replacing the one-body density by the spin-summed one-body density. The constant K n {\displaystyle K_{n}} then has to be replaced by K n / q 2 / n {\displaystyle K_{n}/q^{2/n}} where q {\displaystyle q} is the number of quantum spin states available to each particle ( q = 2 {\displaystyle q=2} for electrons). If the wave function is symmetric, instead of anti-symmetric, such that
for all 1 ≤ i , j ≤ N {\displaystyle 1\leq i,j\leq N} , the constant K n {\displaystyle K_{n}} has to be replaced by K n / N 2 / n {\displaystyle K_{n}/N^{2/n}} . Inequality ( 2 ) describes the minimum kinetic energy necessary to achieve a given density ρ ψ {\displaystyle \rho _{\psi }} with N {\displaystyle N} particles in n {\displaystyle n} dimensions. If L 1 , 3 = L 1 , 3 c l {\displaystyle L_{1,3}=L_{1,3}^{\mathrm {cl} }} was proven to hold, the right-hand side of ( 2 ) for n = 3 {\displaystyle n=3} would be precisely the kinetic energy term in Thomas–Fermi theory.
The inequality can be compared to the Sobolev inequality . M. Rumin [ 12 ] derived the kinetic energy inequality ( 2 ) (with a smaller constant) directly without the use of the Lieb–Thirring inequality.
(for more information, read the Stability of matter page)
The kinetic energy inequality plays an important role in the proof of stability of matter as presented by Lieb and Thirring. [ 1 ] The Hamiltonian under consideration describes a system of N {\displaystyle N} particles with q {\displaystyle q} spin states and M {\displaystyle M} fixed nuclei at locations R j ∈ R 3 {\displaystyle R_{j}\in \mathbb {R} ^{3}} with charges Z j > 0 {\displaystyle Z_{j}>0} . The particles and nuclei interact with each other through the electrostatic Coulomb force and an arbitrary magnetic field can be introduced. If the particles under consideration are fermions (i.e. the wave function ψ {\displaystyle \psi } is antisymmetric), then the kinetic energy inequality ( 2 ) holds with the constant K n / q 2 / n {\displaystyle K_{n}/q^{2/n}} (not K n / N 2 / n {\displaystyle K_{n}/N^{2/n}} ). This is a crucial ingredient in the proof of stability of matter for a system of fermions. It ensures that the ground state energy E N , M ( Z 1 , … , Z M ) {\displaystyle E_{N,M}(Z_{1},\dots ,Z_{M})} of the system can be bounded from below by a constant depending only on the maximum of the nuclei charges, Z max {\displaystyle Z_{\max }} , times the number of particles,
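(schematically, with a constant C(Z_max) that also depends on the number q of spin states)

E_{N,M}(Z_{1},\dots,Z_{M}) \;\geq\; -\,C(Z_{\max})\,(N+M).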
The system is then stable of the first kind since the ground-state energy is bounded from below and also stable of the second kind, i.e. the energy decreases at most linearly with the number of particles and nuclei. In comparison, if the particles are assumed to be bosons (i.e. the wave function ψ {\displaystyle \psi } is symmetric), then the kinetic energy inequality ( 2 ) holds only with the constant K n / N 2 / n {\displaystyle K_{n}/N^{2/n}} and for the ground state energy only a bound of the form − C N 5 / 3 {\displaystyle -CN^{5/3}} holds. Since the power 5 / 3 {\displaystyle 5/3} can be shown to be optimal, a system of bosons is stable of the first kind but unstable of the second kind.
If the Laplacian − Δ = − ∇ 2 {\displaystyle -\Delta =-\nabla ^{2}} is replaced by ( i ∇ + A ( x ) ) 2 {\displaystyle (\mathrm {i} \nabla +A(x))^{2}} , where A ( x ) {\displaystyle A(x)} is a magnetic field vector potential in R n , {\displaystyle \mathbb {R} ^{n},} the Lieb–Thirring inequality ( 1 ) remains true. The proof of this statement uses the diamagnetic inequality . Although all presently known constants L γ , n {\displaystyle L_{\gamma ,n}} remain unchanged, it is not known whether this is true in general for the best possible constant.
The Laplacian can also be replaced by other powers of − Δ {\displaystyle -\Delta } . In particular for the operator − Δ {\displaystyle {\sqrt {-\Delta }}} , a Lieb–Thirring inequality similar to ( 1 ) holds with a different constant L γ , n {\displaystyle L_{\gamma ,n}} and with the power on the right-hand side replaced by γ + n {\displaystyle \gamma +n} . Analogously a kinetic inequality similar to ( 2 ) holds, with 1 + 2 / n {\displaystyle 1+2/n} replaced by 1 + 1 / n {\displaystyle 1+1/n} , which can be used to prove stability of matter for the relativistic Schrödinger operator under additional assumptions on the charges Z k {\displaystyle Z_{k}} . [ 13 ]
In essence, the Lieb–Thirring inequality ( 1 ) gives an upper bound on the distances of the eigenvalues λ j {\displaystyle \lambda _{j}} to the essential spectrum [ 0 , ∞ ) {\displaystyle [0,\infty )} in terms of the perturbation V {\displaystyle V} . Similar inequalities can be proved for Jacobi operators . [ 14 ] | https://en.wikipedia.org/wiki/Lieb–Thirring_inequality |
Liesegang rings ( / ˈ l iː z ə ɡ ɑː ŋ / ) are a phenomenon seen in many, if not most, chemical systems undergoing a precipitation reaction under certain conditions of concentration and in the absence of convection . Rings are formed when weakly soluble salts are produced from reaction of two soluble substances, one of which is dissolved in a gel medium. [ 1 ] The phenomenon is most commonly seen as rings in a Petri dish or bands in a test tube ; however, more complex patterns have been observed, such as dislocations of the ring structure in a Petri dish, helices , and " Saturn rings " in a test tube. [ 1 ] [ 2 ] Despite continuous investigation since rediscovery of the rings in 1896, the mechanism for the formation of Liesegang rings is still unclear.
The phenomenon was first noticed in 1855 by the German chemist Friedlieb Ferdinand Runge . He observed them in the course of experiments on the precipitation of reagents in blotting paper . [ 3 ] [ 4 ] In 1896 the German chemist Raphael E. Liesegang noted the phenomenon when he dropped a solution of silver nitrate onto a thin layer of gel containing potassium dichromate . After a few hours, sharp concentric rings of insoluble silver dichromate formed. It has aroused the curiosity of chemists for many years. When formed in a test tube by diffusing one component from the top, layers or bands of precipitate form, rather than rings.
The reactions are most usually carried out in test tubes in which a gel is formed that contains a dilute solution of one of the reactants.
If a hot solution of agar gel also containing a dilute solution of potassium dichromate is poured in a test tube, and after the gel solidifies a more concentrated solution of silver nitrate is poured on top of the gel, the silver nitrate will begin to diffuse into the gel. It will then encounter the potassium dichromate and will form a continuous region of precipitate at the top of the tube.
After some hours, the continuous region of precipitation is followed by a clear region with no perceptible precipitate, which is in turn followed by a short region of precipitate further down the tube. This process continues down the tube, forming several, up to perhaps a couple of dozen, alternating regions of clear gel and precipitate rings.
Over the decades a huge number of precipitation reactions have been used to study the phenomenon, and it seems quite general. Chromates , metal hydroxides , carbonates , and sulfides , formed with lead, copper, silver, mercury and cobalt salts, are sometimes favored by investigators, perhaps because of the attractively colored precipitates formed. [ 5 ] [ 6 ]
The gels used are usually gelatin , agar or silicic acid gel.
The concentration ranges over which the rings form in a given gel can usually be found for any precipitating system by a few hours of systematic empirical experimentation. Often the component dissolved in the gel should be substantially less concentrated (perhaps by an order of magnitude or more) than the one placed on top of it.
The first feature usually noted is that the bands which form farther away from the liquid-gel interface are generally farther apart. Some investigators measure this spacing and, in some systems at least, report a systematic law for the positions at which the bands form. The most frequent observation is that the spacing between successive rings is proportional to their distance from the liquid-gel interface; a common way of stating this is given below. This is by no means universal, however, and sometimes the rings form at essentially random, irreproducible distances.
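A compact way to express this observation (a sketch of the spacing law often quoted in the Liesegang literature, commonly attributed to Jablczynski, and not taken from the references cited here) is that the positions $x_k$ of successive bands, measured from the liquid-gel interface, approach a constant ratio:

$$ \frac{x_{k+1}}{x_{k}} \;\longrightarrow\; 1+p \qquad (k\to\infty), $$

with $p>0$ the spacing coefficient, which is equivalent to saying that the gap $x_{k+1}-x_{k}\approx p\,x_{k}$ grows in proportion to the distance from the interface.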
Another feature often noted is that the bands themselves do not move with time, but rather form in place and stay there.
For very many systems the precipitate that forms is not the fine coagulum or floc seen on mixing the two solutions in the absence of the gel, but rather a coarse, crystalline dispersion. Sometimes the crystals are well separated from one another, and only a few form in each band.
The precipitate that forms a band is not always a binary insoluble compound; it may even be a pure metal. Water glass of density 1.06, made acidic with enough acetic acid to cause it to gel and containing 0.05 N copper sulfate, covered by a 1 percent solution of hydroxylamine hydrochloride, produces large tetrahedra of metallic copper in the bands.
It is not possible to make any general statement about the effect of the composition of the gel. A system that forms bands nicely for one set of components might fail altogether and require a different set of conditions if the gel is switched, say, from agar to gelatin. The essential requirement of the gel is that thermal convection in the tube be prevented altogether.
Most systems will form rings in the absence of a gel if the experiment is carried out in a capillary, where convection does not disturb their formation. In fact, the system does not even have to be liquid. A tube plugged with cotton, with a little ammonium hydroxide at one end and a solution of hydrochloric acid at the other, will show rings of deposited ammonium chloride where the two gases meet, if the conditions are chosen correctly. Ring formation has also been observed in solid glasses containing a reducible species. For example, bands of silver have been generated by immersing silicate glass in molten AgNO 3 for extended periods of time (Pask and Parmelee, 1943).
Several different theories have been proposed to explain the formation of Liesegang rings. The chemist Wilhelm Ostwald in 1897 proposed a theory based on the idea that a precipitate does not form immediately once the ion concentrations exceed the solubility product; instead, a region of supersaturation develops first. When the limit of stability of the supersaturation is reached, the precipitate forms, and a clear region develops ahead of the diffusion front because dissolved material below the solubility limit diffuses toward the precipitate already formed. This was argued to be a critically flawed theory when it was shown that seeding the gel with a colloidal dispersion of the precipitate (which would arguably prevent any significant region of supersaturation) did not prevent the formation of the rings. [ 7 ]
Another theory focuses on the adsorption of one or the other of the precipitating ions onto the colloidal particles of the precipitate which forms. If the particles are small, the adsorption is large, diffusion is "hindered", and this somehow results in the formation of the rings.
Still another proposal, the " coagulation theory" states that the precipitate first forms as a fine colloidal dispersion, which then undergoes coagulation by an excess of the diffusing electrolyte and this somehow results in the formation of the rings.
Some more recent theories invoke an auto-catalytic step in the reaction that results in the formation of the precipitate. This sits uneasily with the observation that auto-catalytic reactions are actually quite rare in nature.
It appears that the solution of the diffusion equation with proper boundary conditions, together with a good set of assumptions about supersaturation, adsorption, auto-catalysis, and coagulation, alone or in some combination, has not yet been carried out in a way that makes a quantitative comparison with experiment possible. However, a theoretical derivation of the Matalon-Packter law, which predicts the position of the precipitate bands when the experiments are performed in a test tube, has been provided; a commonly quoted form of the law is sketched below. [ 8 ]
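For reference, the Matalon-Packter law is often quoted in roughly the following form (a sketch of the commonly cited expression, not taken from reference [ 8 ]), relating the spacing coefficient $P$ (the limiting ratio of successive band positions) to the initial concentration $a_0$ of the outer electrolyte placed on top of the gel and the initial concentration $b_0$ of the inner electrolyte dissolved in the gel:

$$ P \;=\; F(b_{0}) \;+\; \frac{G(b_{0})}{a_{0}}, $$

where $F$ and $G$ are decreasing functions of $b_0$ determined empirically for each system.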
A general theory based on Ostwald's 1897 theory has recently been proposed. [ 9 ] It can account for several important features sometimes seen, such as revert and helical banding. | https://en.wikipedia.org/wiki/Liesegang_rings |
Lifastuzumab vedotin ( INN ; [ 1 ] development code DNIB0600A ) is an experimental monoclonal antibody-drug conjugate designed for the treatment of cancer. [ 2 ]
This drug was developed by Genentech / Roche .
This monoclonal antibody –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lifastuzumab_vedotin |
Life-cycle cost analysis (LCCA) is an economic analysis tool used to determine the most cost-effective option to purchase, run, sustain or dispose of an object or process. The method is popular in helping managers assess economic sustainability by accounting for costs across the whole life cycle of a product or process.
The term differs slightly from Total cost of ownership analysis (TCOA). LCCA determines the most cost-effective option to purchase, run, sustain or dispose of an object or process, whereas TCOA is used by managers or buyers to analyze and determine the direct and indirect costs of an item. [ 1 ] A minimal illustration of the kind of comparison LCCA supports appears below.
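The sketch below ranks two hypothetical options by the present value of their life-cycle costs. The cost figures, discount rate, option names, and the life_cycle_cost helper are invented for this example and are not drawn from the source; real analyses would include many more cost categories.

# Hypothetical life-cycle cost comparison: purchase + annual operating costs
# + end-of-life disposal, all discounted to present value.

def life_cycle_cost(purchase, annual_operating, disposal, years, rate):
    """Present value of all costs over the asset's life."""
    pv_operating = sum(annual_operating / (1 + rate) ** t for t in range(1, years + 1))
    pv_disposal = disposal / (1 + rate) ** years
    return purchase + pv_operating + pv_disposal

# Illustrative options and figures (not from the source).
options = {
    "standard HVAC unit": dict(purchase=40_000, annual_operating=9_000, disposal=2_000),
    "high-efficiency HVAC unit": dict(purchase=55_000, annual_operating=6_000, disposal=2_000),
}

for name, costs in options.items():
    lcc = life_cycle_cost(**costs, years=15, rate=0.05)
    print(f"{name}: life-cycle cost = ${lcc:,.0f}")

Under these invented figures, the option that is cheaper to run wins despite its higher purchase price, which is the kind of conclusion LCCA is designed to surface.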
The term is used in the study of Industrial ecology (IE). The purpose of IE is to help managers make informed decisions by tracking and analyzing products, resources and wastes. [ 2 ]
In Green design , managers add their operating and capital costs to help decide the effect of an investment. [ 3 ] The method also allows managers to determine if more investment may be needed for green buildings . [ 4 ]
This economics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Life-cycle_cost_analysis |
LifeAct is a 17 amino acid recombinant peptide that stains filamentous actin (F-actin) structures of eukaryotic living or fixed cells. [ 1 ] There are several types and combinations of LifeAct that can be utilized depending on the cell type, protocol, and purpose of the analysis.
The Lifeact 17-amino-acid sequence is MGVADLIKKFESISKEE. [ 1 ]
LifeAct-TagGFP2, the most widely used fluorescent variant among LifeAct constructs, is composed of the first 17 amino acids of Abp140, an actin-binding protein from Saccharomyces cerevisiae . Abp140 is highly conserved among Saccharomyces cerevisiae and other closely related organisms. [ 2 ] The 17-amino-acid fragment of Abp140 was genetically fused to GFP and fluoresces green when it binds the F-actin structures of living and fixed cells, allowing cell mechanics to be visualized under the microscope. Previous experiments analyzing cell mechanics had depended on fluorescently labeled phalloidin and on actin GFP fusion proteins derived from utrophin in Xenopus laevis and ABP120 in Dictyostelium discoideum . [ 3 ] [ 4 ] However, because of their large protein size, such markers are limited to cells that can be transfected and tend to compete with their orthologous protein. These localization markers affect cellular mechanical properties and F-actin structures, making them unreliable. [ 5 ] An alternative to these markers is LifeAct-TagGFP2, which is a much smaller protein and does not affect cell mechanics. Cells synthesize LifeAct-TagGFP2 in a short period of time, making it suitable as a cost-effective in vivo marker. [ 1 ]
LifeAct peptides have been used as a universal marker for F-actin visualization in biomedical research. An experiment conducted by Sawant et al. used LifeAct-GFP to visualize the migration of control border cells in the ovaries of Drosophila flies, in order to determine how cells move in small and large collectives during development and cancer. [ 6 ] Lifeact labeling of F-actin in border cells and adjacent follicle cells allowed detailed examination of border cell membranes and protrusions. Studies of the degradation of the actin cytoskeleton with aging have relied on LifeAct for the analysis of cytoskeletal organization as a function of age; transgenic lines expressing LifeAct in various tissues of C. elegans were primarily used for imaging. [ 7 ] | https://en.wikipedia.org/wiki/LifeAct_Dye |
LifeStraw is a brand of water filtration and purification devices. The original LifeStraw was designed as a portable water filter "straw". It filters a maximum of 4,000 litres of water, enough for one person for three years, and removes almost all waterborne bacteria, microplastics and other microbes. [ 1 ] A bottle was later developed which incorporated a LifeStraw cartridge into a 650-millilitre (22 US fl oz) BPA-free plastic sports water bottle. [ 2 ] In addition to these portable filters, the manufacturer produces high-volume, gravity-powered purifiers that also remove viruses . These are designed for family and community use. [ 3 ]
The water filters are designed by the Swiss -based company Vestergaard Frandsen . While originally developed for people living in developing nations and for distribution in humanitarian crises , the filters have gained popularity as consumer products. The device is now used as a tool by survivalists and outdoor enthusiasts, in addition to being used to help combat clean-water scarcity worldwide. The filters can provide clean water without the need for batteries or chemical treatment. They are made using hollow fiber membrane technology, and some also incorporate an activated carbon component.
Contrary to popular belief, the original device does not incorporate a reverse-osmosis membrane nor is it able to filter out salts or minerals. [ 4 ]
The devices were distributed in the 2010 Haiti earthquake , 2010 Pakistan floods , 2011 Thailand floods , and 2016 Ecuador earthquake , among other crises and initiatives. In the Mutomo District in Kenya which has suffered from long term drought, the Kenya Red Cross supplied filters to 3,750 school children and 6,750 households. [ 5 ] In 2015, they were deployed in Rwanda . [ 6 ] The company funds a retail give back program that as of 2018 has provided safe water to more than 1 million school children in rural Kenya. [ 7 ]
The original LifeStraw is a plastic tube 22 centimetres (8 5⁄8 in) long and 3 centimetres (1 1⁄8 in) in diameter. [ 8 ] Water drawn up through the straw first passes through hollow fibres that filter out particles down to 0.2 µm across, using only physical filtration and no chemicals. The entire process is powered by suction, similar to using a conventional drinking straw , and filters up to 4,000 litres (1,100 US gal) of water. [ 9 ]
While the initial model of the filter did not remove Giardia lamblia , [ 10 ] current models remove a minimum of 99.999% of waterborne protozoan parasites including Giardia and Cryptosporidium . [ 11 ] The original device does not filter viruses, chemicals, salt water, and heavy metals, [ 12 ] but newer versions of the product, (like LifeStraw Flex or LifeStraw Home) [ 13 ] are capable of removing chemicals and heavy metals including lead.
LifeStraw has been generally praised for its effective and quick method of bacteria and protozoa removal and consumer acceptability. [ 14 ]
Although the devices are available for retail sale in the developed world, the majority are distributed as part of public health campaigns or in response to complex emergencies by NGOs and organizations that give them away for free in the developing world. [ 15 ]
LifeStraw has been praised in the international media and won several awards including the 2008 Saatchi & Saatchi Award for World Changing Ideas, the 'INDEX: 2005' International Design Award and "Best Invention of 2005" by Time Magazine . [ 16 ] It was featured in the Museum of Modern Art in New York. [ 17 ] In 2019, the Lifestraw Home water filter pitcher was launched and won the IDEA design award [ 18 ] and the Red Dot design award. [ 19 ] | https://en.wikipedia.org/wiki/LifeStraw |
Life 3.0: Being Human in the Age of Artificial Intelligence [ 1 ] is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark . Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.
The book begins by positing a scenario in which AI has exceeded human intelligence and become pervasive in society. Tegmark refers to different stages of human life since its inception: Life 1.0 referring to biological origins, Life 2.0 referring to cultural developments in humanity, and Life 3.0 referring to the technological age of humans. He characterizes these stages by their ability to alter their own hardware and software . The book focuses on "Life 3.0", and on emerging technology such as artificial general intelligence that may someday, in addition to being able to learn, be able to redesign its own hardware and internal structure.
The first part of the book looks at the origin of intelligence billions of years ago and goes on to project the future development of intelligence. Tegmark considers short-term effects of the development of advanced technology, such as technological unemployment , AI weapons , and the quest for human-level AGI ( Artificial General Intelligence ). The book cites examples like Deepmind and OpenAI , self-driving cars , and AI players that can defeat humans in chess , [ 2 ] Jeopardy , [ 3 ] and Go . [ 4 ]
After reviewing existing issues in AI, Tegmark then considers a range of possible futures that involve intelligent machines or humans. The fifth chapter describes a number of potential outcomes, such as altered social structures, integration of humans and machines, and both positive and negative scenarios like Friendly AI or an AI apocalypse. [ 5 ] Tegmark argues that the risks of AI come not from malevolence or conscious behavior per se, but rather from the misalignment of the goals of AI with those of humans. Many of the goals of the book align with those of the Future of Life Institute , [ 6 ] of which Tegmark is a co-founder.
The remaining chapters explore concepts in physics, goals, consciousness and meaning, and investigate what society can do to help create a desirable future for humanity.
One criticism of the book by Kirkus Reviews is that some of the scenarios or solutions in the book are a stretch or somewhat prophetic: "Tegmark's solutions to inevitable mass unemployment are a stretch." [ 7 ] AI researcher Stuart J. Russell , writing in Nature , said: "I am unlikely to disagree strongly with the premise of Life 3.0 . Life, Tegmark argues, may or may not spread through the Universe and 'flourish for billions or trillions of years' because of decisions we make now — a possibility both seductive and overwhelming." [ 8 ] Writing in Science , Haym Hirsh called it "a highly readable book that complements The Second Machine Age 's economic perspective on the near-term implications of recent accomplishments in AI and the more detailed analysis of how we might get from where we are today to AGI and even the superhuman AI in Superintelligence ." [ 9 ] The Telegraph called it "One of the very best overviews of the arguments around artificial intelligence". [ 10 ] [ 11 ] The Christian Science Monitor said "Although it's probably not his intention, much of what Tegmark writes will quietly terrify his readers." [ 12 ] Publishers Weekly gave a positive review, but also stated that Tegmark's call for researching how to maintain control over superintelligent machines "sits awkwardly beside his acknowledgment that controlling such godlike entities will be almost impossible." [ 13 ] Library Journal called it a "must-read" for technologists, but stated the book was not for the casual reader. [ 14 ] The Wall Street Journal called it "lucid and engaging"; however, it cautioned readers that the controversial notion that superintelligence could run amok has more credence than it did a few years ago, but is still fiercely opposed by many computer scientists. [ 15 ]
Rather than endorse a specific future, the book invites readers to think about what future they would like to see, and to discuss their thoughts on the Future of Life Website. [ 16 ] The Wall Street Journal review called this attitude noble but naive, and criticized the referenced Web site for being "chockablock with promo material for the book". [ 15 ]
The hardcover edition was on the general New York Times Best Seller List for two weeks, [ 17 ] and appeared on the New York Times business bestseller list in September and October 2017. [ 18 ]
Former President Barack Obama included the book in his "best of 2018" list. [ 19 ] [ 20 ]
Business magnate Elon Musk (who had previously endorsed the thesis that, under some scenarios, advanced AI could jeopardize human survival ) recommended Life 3.0 as "worth reading". [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/Life_3.0 |
The publication Life Safety Code , known as NFPA 101, is a consensus standard widely adopted in the United States. [ clarification needed ] It is administered, trademarked, copyrighted, and published by the National Fire Protection Association and, like many NFPA documents, is systematically revised on a three-year cycle. [ not verified in body ]
Despite its title, the standard is not a legal code , is not published as an instrument of law, and has no statutory authority in its own right. However, it is deliberately crafted with language suitable for mandatory application to facilitate adoption into law by those empowered to do so.
The bulk of the standard addresses "those construction, protection, and occupancy features necessary to minimize danger to life from the effects of fire, including smoke, heat, and toxic gases created during a fire". [ 1 ] The standard does not address the "general fire prevention or building construction features that are normally a function of fire prevention codes and building codes ". [ 2 ]
The Life Safety Code was originated in 1913 by the Committee on Safety to Life (one of the NFPA's more than 200 committees). As noted in the 1991 Life Safety Code Handbook; "...the Committee devoted its attention to a study of notable fires involving loss of life and to analyzing the causes of that loss of life. This work led to the preparation of standards for the construction of stairways, fire escapes, and similar structures; for fire drills in various occupancies and for the construction and arrangement of exit facilities for factories, schools and other occupancies, which form the basis of the present Code." [ 3 ] This study became the basis for two early NFPA publications, "Outside Stairs for Fire Exits" (1916) and "Safeguarding Factory Workers from Fire" (1918).
In 1921 the Committee on Safety to Life expanded and the publication they generated in 1927 became known as the Building Exits Code. New editions were published in 1929, 1934, 1936, 1938, 1942 and 1946.
After a disastrous series of fires between 1942 and 1946, including the Cocoanut Grove Nightclub fire in Boston, which claimed the lives of 492 people, and the Winecoff Hotel fire in Atlanta, which claimed 119 lives, the Building Exits Code began to be used as a basis for legislation. The wording of the code, however, was intended for building contractors rather than legal statutes, so the NFPA decided to re-edit the Code, and some revisions appeared in the 1948, 1949, 1951 and 1952 publications. The editions published in 1957, 1958, 1959, 1960, 1961 and 1963 refined the wording and presentation even further.
In 1955 NFPA 101 was broken into three separate documents, with NFPA 101B covering nursing homes and NFPA 101C covering interior finishes. NFPA 101C was revised once, in 1956, before both publications were withdrawn and their pertinent passages re-incorporated into the main body.
The Committee on Safety to Life was restructured in 1963 and the first publication in 1966 was a complete revision. The title was changed from Building Exits Code to Code for Safety to Life from Fire in Buildings and Structures. The final revision to all "code language" (legalese) was made and it was decided that the Code would be revised and republished on a three-year schedule. [ citation needed ]
New editions were subsequently published in 1967, 1970, 1973 and 1976. [ needs update ] The Committee was reorganized again in 1977 and the 1981 edition of the Code featured major editorial and structural changes that reflect the organization of the modern Code. [ citation needed ]
Codes produced by NFPA are continually updated to incorporate new technologies as well as lessons learned from actual fire experiences. [ citation needed ]
The fire at The Station nightclub in 2003, which claimed the lives of 100 and injured more than 200, resulted in swift attention to several amendments specific to nightclubs and large crowds.
The Life Safety Code is unusual among safety codes in that it applies to existing structures as well as new structures. When a Code revision is adopted into local law, existing structures may have a grace period before they must comply, but all structures must comply with code. In some cases, the authority having jurisdiction can simply permit previously approved features to be used under specified conditions. In other cases, the local law amends the Code to omit undesired sections prior to its adoption.
When some or all of the Code is adopted as regulations in a jurisdiction , it can be enforced by inspectors from local zoning boards, fire departments , building inspectors , fire marshals or other bodies and authorities having jurisdiction.
In particular, the Life Safety Code deals with hazards to human life in buildings , public and private conveyances and other human occupancies, but only when permanently fixed to a foundation, attached to a building, or permanently moored for human habitation. [ 4 ] Regardless of official adoption as regulations, Life Safety Code provides a valuable source for determination of liability in accidents, and many codes and related standards are sponsored by insurance companies.
The Life Safety Code is coordinated with hundreds of other building codes and standards such as National Electrical Code NFPA 70, fuel-gas, mechanical, plumbing (for sprinklers and standpipes), energy and fire codes.
Normally, the Life Safety Code is used by architects and designers of vehicles and vessels used for human occupancy. Since the Life Safety Code is a valuable source for determining liability in accidents, it is also used by insurance companies to evaluate risks and set rates, not to mention assessment of compliance after an incident.
In the United States, the words Life Safety Code and NFPA 101 are registered trademarks of NFPA. All or part of the NFPA's Life Safety Code are adopted as local regulations throughout the country.
This listing of chapters from the 2009 edition [ 5 ] shows the scope of the Code.
Beyond the policies, core definitions and topical requirements of chapters 1–11, chapters 12–42 address the specific requirements for each listed class of occupancy, making reference to Chapters 1–11, as well as other codes.
The Code and corresponding Handbook also include several supplemental publications including: | https://en.wikipedia.org/wiki/Life_Safety_Code |
Life Sciences Foundation ( LSF ) was a San Francisco-based nonprofit organization that was established in 2011 to collect, preserve, interpret, and promote the history of biotechnology . [ 2 ] [ 3 ] LSF conducted historical research, maintained archives and published historically relevant materials and information. [ 2 ]
On December 1, 2015, the LSF and the Chemical Heritage Foundation finalized a merger, creating one organization that covers "the history of the life sciences and biotechnology together with the history of the chemical sciences and engineering." [ 4 ] [ 5 ]
As of February 1, 2018, the organization was renamed the Science History Institute , to reflect its wider range of historical interests, from chemical sciences and engineering to the life sciences and biotechnology. [ 6 ] The organization is headquartered in Philadelphia but retains offices in the San Francisco Bay area. [ 4 ]
The LSF mandate was to collect and promote the history of biotechnology. This includes telling the stories of "scientists, inventors, entrepreneurs, managers, executives, and financiers" in order to "humanize" biotechnology to a lay audience. [ 2 ] [ 3 ] The history of the biotechnology industry includes examining the complex relationships and socio-political dynamics that occur when science and entrepreneurship come together. [ 7 ]
The idea for a foundation that would collect and share the history of biotechnology came about at a meeting in early January 2009 in San Francisco attended by G. Steven Burrill of Burrill & Company, Dennis Gillings of Quintiles in Durham, NC , John Lechleiter of Eli Lilly and Company , Henri Termeer , then CEO of Genzyme and Arnold Thackray, founding President and CEO of the Chemical Heritage Foundation (CHF) [ 1 ] [ 8 ] [ 9 ]
Five years ago, G. Steven Burrill was part of a small group of biotech leaders who came together to discuss the importance of capturing the great stories and lessons of the biotech pioneers for future generations. From this meeting, the Life Sciences Foundation was formed in 2010.
Thackray had shaped the Chemical Heritage Foundation—"the premier institution preserving the history of chemistry, chemical engineering, and related sciences and technologies." Oral history was one component of the CHF mandate of preserving, interpreting, and promoting the history of science. [ 8 ] [ 10 ]
In 1982 the University of Pennsylvania and the American Chemical Society launched the Center for the History of Chemistry, which was renamed the Chemical Heritage Foundation (CHF) in 1992. [ 4 ] Thackray, a Fellow of the American Academy of Arts and Sciences , the Royal Historical Society and the Royal Society of Chemistry , [ 8 ] [ 11 ] received his M.A. and Ph.D. degrees in the history of science from Cambridge University . [ 11 ]
Thackray argued that before LSF was founded, the recorded history of biotechnology was "fragmented, uneven, and rather paltry." He observed that, "If you don't write your own history, somebody else will do it for you, and they may be hostile." [ 2 ]
There is a valuable heritage here. The life sciences will shape the course of the 21st century. We need to preserve their history. We need to teach young people about the world in which they live ... Records are being scattered, memories are fading, stories are disappearing. Once lost, they're gone forever.
By the end of 2011, LSF's steering committee of industry leaders— Joshua Boger , Robert Carpenter, Bob Coughlin, Henri Termeer and Peter Wirth— were promoting the foundation's work by encouraging scientists and industrialists who were members of the Massachusetts Biotechnology Council, to contribute potential stories and materials to the archival record of the history of biotechnology in Boston and the surrounding region. [ 12 ]
The Life Sciences Foundation conducted oral history interviews with scientists, entrepreneurs, executives, policy makers, and leaders of thought in the biotechnology industry. [ 3 ] LSF's website hosts timelines, transcripts and audio recordings and provides links to existing oral histories housed at institutions across the globe. [ 12 ]
Original documentary materials pertinent to the history of biotechnology and the life sciences are being collected. The materials include personal papers and correspondence, donated company records, laboratory notebooks, photographs, video and audio recordings. Collected materials will be guided to permanent repositories in appropriate institutional settings. Electronic reproductions will be made available to scholars, journalists, educators, and the general public in a digital archive. [ 8 ]
LSF historians work on a range of publications including a quarterly magazine, scholarly articles, white papers, and books. These works are intended for multiple audiences and focused on the emergence and evolution of biotechnologies in pharmaceutical discovery and development, agriculture, energy production, and environmental remediation . [ 8 ] In October 2011, the University of Chicago Press released Genentech: The Beginnings of Biotech by Life Sciences Foundation historian Sally Smith Hughes. [ 2 ] [ 13 ]
Founding partners of the Life Sciences Foundation include Burrill, Celgene , John Lechleiter, Genentech , Henri Termeer, Merck & Co. , Millennium, Pfizer , Quintiles , and Thermo Fisher . [ 2 ] MIT professor, Phillip Sharp , serves as LSF's academic advisor. [ 2 ] Its executive and advisory board members are leaders from biotech, venture capital, academic institutions and trade associations. [ 2 ]
When Thackray retired in 2012, Heather R. Erickson, 34, was appointed as LSF President and CEO and member of the Board of Directors. [ 14 ] Thackray remained as LSF advisor to its scholarly activities. The Board also includes Brook Byers of Kleiner Perkins Caufield & Byers in Menlo Park, California , Carl B. Feldbaum of Biotechnology Industry Organization (BIO) [ 15 ] [ 16 ] [ 17 ] in Washington, DC who replaced Burrill, Frederick Frank of EVOLUTION Life Science Partners in New York, NY, Gillings in Durham, NC, Lechleiter [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] in Indianapolis, IN, Scott Morrison from San Francisco, CA, Ivor Royston , MD, of Forward Ventures in San Diego, CA, Phillip Sharp from Massachusetts Institute of Technology in Cambridge, MA and Henri Termeer in Cambridge, MA. [ 25 ] The first board of directors also included G. Steven Burrill, CEO of Burrill & Company— who also published The Journal of Life Sciences and Joshua Boger, former chairman and CEO of Vertex Pharmaceuticals . [ 2 ] | https://en.wikipedia.org/wiki/Life_Sciences_Foundation |
Life Sciences Greenhouse of Central Pennsylvania ( LSGPA ) is a biotechnology initiative and non-profit organization based in Harrisburg, Pennsylvania . It was founded in 2001. It focuses on the advancement of life sciences through technology to improve the healthcare and economic opportunities of Pennsylvanians.
The initiative began in 2001, funded from the state's settlement with the tobacco industry . [ 1 ] Other life sciences greenhouses in Philadelphia and Pittsburgh also received seed money from the settlement. [ 2 ] LSGPA partners with a range of institutions, including local research universities , colleges , medical centers , economic development agencies and companies of various sizes to identify needs and opportunities. [ 1 ] It then works to help transfer technologies, develop new companies, provide support for existing companies (particularly those seeking to expand or relocate), and ensure that the infrastructure to support a thriving life sciences industry keeps pace with development. [ 1 ]
Central Pennsylvania has three large research universities which contribute to the initiative. Collectively, these three institutions attract more than $ 600 million in sponsored research funding annually. They are: [ 3 ] | https://en.wikipedia.org/wiki/Life_Sciences_Greenhouse_of_Central_Pennsylvania |
The Life Sciences Research Foundation ( LSRF ) is a postdoctoral fellowship program, with missions "to identify and fund exceptional young scientists at a critical juncture of their training in all areas of basic life sciences" and "to establish partnerships between those who support research in the life sciences and academic institutions for their mutual benefit". [ 1 ]
LSRF was established in 1983 by Donald D. Brown of the Carnegie Institution for Science Department of Embryology. As one of four highly competitive postdoctoral awards in the life sciences, [ 2 ] each year LSRF receives more than 1000 applications and awards 15-25 fellowships. The Board of Directors also includes Douglas Koshland and Solomon H. Snyder . The 56 sponsors include many top companies in the biotech and pharmaceutical industry. [ 3 ]
In 2012, Brown won the Albert Lasker Special Achievement Award in Medical Science, in part for his initiation and 30-year dedication to LSRF. [ 4 ]
Notable alumni include: | https://en.wikipedia.org/wiki/Life_Sciences_Research_Foundation |
Life Sciences Switzerland ( LS2 ) is the Swiss federation of scientific societies for life sciences . It was formerly known as the Union of the Swiss Societies for Experimental Biology (USGEB). [ 1 ] [ 2 ] It was founded in 1969, with the founding meeting taking place in Bern, Switzerland. [ 3 ] The four founding member societies were the Swiss Society for Physiology, the Swiss Society for Biochemistry, the Swiss Society for Pharmacology and the Swiss Society for Cell & Molecular Biology. [ 3 ]
Life Sciences Switzerland is a member of the Swiss Academy of Natural Sciences (SCNAT). [ 4 ]
Its members are: [ 5 ]
Source: [ 7 ] | https://en.wikipedia.org/wiki/Life_Sciences_Switzerland |
Life That Glows (also known as David Attenborough's Light on Earth ) is a 2016 British nature documentary programme made for BBC Television , first shown in the UK on BBC Two on 9 May 2016. The programme is presented and narrated by Sir David Attenborough .
Life That Glows depicts the biology and ecology of bioluminescent organisms, that is, organisms capable of creating light. The programme features fireflies , who use light as a means of sexual attraction, luminous fungi , luminous marine bacteria responsible for the Milky seas effect , the flashlight fish , the aposematism of the Sierra luminous millipede , earthworms, and the bioluminescent tides created by blooms of dinoflagellates in Tasmania , as well as dolphins swimming in the bloom in the Sea of Cortez , the defensive flashes of brittle stars and ostracods , sexual attraction in ostracods, prey attraction by luminous click beetles in Cerrado , Brazil and Arachnocampa gnats in New Zealand .
The programme then introduces many luminous deep sea animals, including the vampire squid , the polychaete worm Tomopteris that generates yellow light, the jellyfish Atolla , the comb jelly Beroe , the viper fish , pyrosomes , a dragonfish , and the polychaete worm Flota . Next, the programme discusses specialised adaptations in the eyes of particular animals to see bioluminescence, such as the barreleye fish and the cock-eyed squid . Lastly, the programme features the mass spawning event of the firefly squid in Japan . | https://en.wikipedia.org/wiki/Life_That_Glows |
Human life expectancy is a statistical measure of the estimate of the average remaining years of life at a given age. The most commonly used measure is life expectancy at birth (LEB, or in demographic notation e 0 , where e x denotes the average life remaining at age x ). This can be defined in two ways. Cohort LEB is the mean length of life of a birth cohort (in this case, all individuals born in a given year) and can be computed only for cohorts born so long ago that all their members have died. Period LEB is the mean length of life of a hypothetical cohort [ 3 ] [ 4 ] assumed to be exposed, from birth through death, to the mortality rates observed at a given year. [ 5 ] National LEB figures reported by national agencies and international organizations for human populations are estimates of period LEB.
Human remains from the early Bronze Age indicate an LEB of 24. [ 6 ] In 2019, world LEB was 73.3. [ 7 ] A combination of high infant mortality and deaths in young adulthood from accidents, epidemics , plagues, wars, and childbirth, before modern medicine was widely available, significantly lowers LEB. For example, a society with a LEB of 40 would have relatively few people dying at exactly 40: most will die before 30 or after 55. In populations with high infant mortality rates, LEB is highly sensitive to the rate of death in the first few years of life. Because of this sensitivity, LEB can be grossly misinterpreted, leading to the belief that a population with a low LEB would have a small proportion of older people. [ 8 ] A different measure, such as life expectancy at age 5 (e 5 ), can be used to exclude the effect of infant mortality to provide a simple measure of overall mortality rates other than in early childhood. For instance, in a society with a life expectancy of 30, it may nevertheless be common to have a 40-year remaining timespan at age 5 (but not a 60-year one [ dubious – discuss ] ).
Aggregate population measures—such as the proportion of the population in various age groups—are also used alongside individual-based measures—such as formal life expectancy—when analyzing population structure and dynamics. Pre-modern societies had universally higher mortality rates and lower life expectancies at every age for both males and females.
Life expectancy, longevity , and maximum lifespan are not synonymous. Longevity refers to the relatively long lifespan of some members of a population. Maximum lifespan is the age at death for the longest-lived individual of a species. Mathematically, life expectancy is denoted $e_x$ [ a ] and is the mean number of years of life remaining at a given age $x$ , with a particular mortality . [ 9 ] Because life expectancy is an average, a particular person may die many years before or after the expected survival.
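To make the definition concrete, the sketch below computes period life expectancies from a set of age-specific probabilities of dying, using a standard life-table construction. The mortality figures, age bands, and the life_expectancies helper are invented for illustration and are not taken from any agency's data; real life tables use national single-year mortality rates and further refinements.

# Minimal period life-table calculation: e_x is the mean years of life
# remaining at age x, given fixed age-specific probabilities of dying.

def life_expectancies(qx):
    """qx: list of probabilities of dying between age i and i+1.
    Returns e_x for every age, assuming deaths occur mid-interval.
    (The table is truncated at the last age; survival beyond it is ignored.)"""
    n = len(qx)
    lx = [1.0]                      # survivors to exact age x (radix 1.0)
    for q in qx:
        lx.append(lx[-1] * (1 - q))
    # Person-years lived in each one-year interval (deaths counted as half a year).
    Lx = [(lx[i] + lx[i + 1]) / 2 for i in range(n)]
    ex = []
    for x in range(n):
        remaining = sum(Lx[x:])
        ex.append(remaining / lx[x] if lx[x] > 0 else 0.0)
    return ex

# Toy mortality schedule: higher risk in infancy, rising again at old ages.
qx = [0.03] + [0.001] * 39 + [0.01] * 30 + [0.08] * 20   # ages 0..89
ex = life_expectancies(qx)
print(f"life expectancy at birth e_0 = {ex[0]:.1f} years")
print(f"life expectancy at age 5 e_5 = {ex[5]:.1f} years")

Raising the first-year probability of dying in this toy schedule lowers e_0 noticeably while leaving e_5 untouched, mirroring the sensitivity of life expectancy at birth to infant mortality described above.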
Life expectancy is also used in plant or animal ecology , [ 10 ] and in life tables (also known as actuarial tables). The concept of life expectancy may also be used in the context of manufactured objects, [ 11 ] though the related term [ dubious – discuss ] shelf life is commonly used for consumer products, and the terms "mean time to breakdown" and " mean time between failures " are used in engineering.
The earliest documented work on life expectancy was done in the 1660s by John Graunt , [ 12 ] Christiaan Huygens , and Lodewijck Huygens . [ 13 ]
The longest verified lifespan for any human is that of French woman Jeanne Calment , who is verified as having lived to age 122 years, 164 days, between 21 February 1875 and 4 August 1997. This is referred to as the " maximum life span ", which is the upper boundary of life, the maximum number of years any human is known to have lived. Although maximum life expectancy is around 125 years, genetic enhancements could allow humans to live for a maximum of 245 years, according to InsideTracker. [ 14 ] According to a study by biologists Bryan G. Hughes and Siegfried Hekimi, there is no evidence for a limit on human lifespan. [ 15 ] [ 16 ] However, this view has been questioned on the basis of error patterns. [ 17 ] A theoretical study shows that the maximum life expectancy at birth is limited by the human life characteristic value δ, which is around 104 years. [ 18 ]
The following information is derived from the 1961 Encyclopædia Britannica and other sources, some with questionable accuracy. Unless otherwise stated, it represents estimates of the life expectancies of the world population as a whole. In many instances, life expectancy varied considerably according to class and gender.
Life expectancy at birth takes account of infant mortality and child mortality but not prenatal mortality.
Life expectancy at age 1 reached 34–41 remaining years for the 67–75% [ 29 ] surviving the first year. For the 55–65% surviving to age 5, remaining life expectancy reached around 40–45, [ 31 ] while the ~50% reaching age 10 could expect another 40 years of life. [ 29 ] Average remaining years fell to 33–39 at age 15; ~20 at age 40; [ 29 ] 14–18 at age 50; ~10–12 at age 60; and ~6–7 at age 70. [ 31 ] [ 33 ]
Only half of the people born in the early 19th century made it past their 50th birthday. In contrast, 97% of the people born in 21st century England and Wales can expect to live longer than 50 years. [ 44 ]
English life expectancy at birth averaged about 36 years in the 17th and 18th centuries, one of the highest levels in the world although infant and child mortality remained higher than in later periods. Life expectancy was under 25 years in the early Colony of Virginia , [ 52 ] and in seventeenth-century New England, about 40% died before reaching adulthood. [ 53 ] During the Industrial Revolution , the life expectancy of children increased dramatically. [ 54 ] Recorded deaths among children under the age of 5 years fell in London from 74.5% of the recorded births in 1730–49 to 31.8% in 1810–29, [ 55 ] [ 56 ] though this overstates mortality and its fall because of net immigration (hence more dying in the metropolis than were born there) and incomplete registration (particularly of births, and especially in the earlier period). English life expectancy at birth reached 41 years in the 1840s, 43 in the 1870s and 46 in the 1890s, though infant mortality remained at around 150 per thousand throughout this period.
Public health measures are credited with much of the recent increase in life expectancy. During the 20th century, despite a brief drop due to the 1918 flu pandemic , [ 57 ] the average lifespan in the United States increased by more than 30 years, of which 25 years can be attributed to advances in public health. [ 58 ]
There are great variations in life expectancy between different parts of the world, mostly caused by differences in public health , medical care, and diet. [ 59 ]
Human beings are expected to live on average 60 years in Eswatini [ 60 ] and 82.6 years in Japan. [ b ] An analysis published in 2011 in The Lancet attributes Japanese life expectancy to equal opportunities , excellent public health , and a healthy diet. [ 62 ] [ 63 ]
The World Health Organization announced that the COVID-19 pandemic reversed the trend of steady gain in life expectancy at birth. The pandemic wiped out nearly a decade of progress in improving life expectancy. [ 64 ]
During the last 200 years, African countries have generally not had the same improvements in mortality rates that have been enjoyed by countries in Asia, Latin America, and Europe. [ 66 ] [ 67 ] This is most apparent by the impact of AIDS on many African countries. According to projections made by the United Nations in 2002, the life expectancy at birth for 2010–2015 (if HIV/AIDS did not exist) would have been: [ 68 ]
On average, eastern Europeans tend to live shorter lives than their western counterparts. For example, Spaniards from Madrid can expect to live to 85, but Bulgarians from the region of Severozapaden are predicted to live just past their 73rd birthday. This is in large part due to poor health habits, such as heavy smoking and high alcoholism in the region, and environmental factors, such as high air pollution. [ 69 ]
In 2022, life expectancy in the United States was 77.5 years, a decline from 2014 but an increase from 2021. In what has been described as a "life expectancy crisis", there were a total of 13 million "missing Americans" from 1980 to 2021, that is, deaths that would have been averted if the United States had had the mortality rates typical of other " wealthy nations ". [ citation needed ]
The annual number of "missing Americans" has been increasing, with 622,534 in 2019 alone. [ 70 ] Most excess deaths in the United States can largely be attributed to increasing obesity , alcoholism , drug overdoses , car accidents , suicides , and murders , with poor sleep , unhealthy diets , and loneliness being linked to most of them. [ 71 ]
Black Americans have generally shorter life expectancies than their White American counterparts. For example, white Americans in 2010 are expected to live until age 78.9, but black Americans only until age 75.1. This 3.8-year gap, however, is the lowest it has been since 1975 at the latest, the greatest difference being 7.1 years in 1993. [ 72 ] In contrast, Asian American women live the longest of all ethnic and gender groups in the United States, with a life expectancy of 85.8 years. [ 73 ] The life expectancy of Hispanic Americans is 81.2 years. [ 72 ]
In 2023, the life expectancy was 84.5 in Japan, 4.2 years above the OECD average, and one of the highest in the world. Japan's high life expectancy can largely be explained by their healthy diets, which are low on salt , fat , and red meat. For these reasons, Japan has a low obesity rate, and ultimately low mortality from heart disease and cancers . [ 74 ]
Cities also experience a wide range of life expectancies based on neighborhood breakdowns. This is largely due to economic clustering and poverty conditions that tend to be associated with geographic location. Multi-generational poverty found in struggling neighborhoods also contributes. In American cities such as Cincinnati , the life expectancy gap between low-income and high-income neighborhoods reaches 20 years. [ 75 ]
Economic circumstances also affect life expectancy. For example, in the United Kingdom, life expectancy in the wealthiest areas is several years higher than in the poorest areas. This may reflect factors such as diet and lifestyle, as well as access to medical care. It may also reflect a selective effect: people with chronic life-threatening illnesses are less likely to become wealthy or to reside in affluent areas. [ 77 ] In Glasgow , the disparity is amongst the highest in the world: life expectancy for males in the heavily deprived Calton area stands at 54, which is 28 years less than in the affluent area of Lenzie , which is only 8 km (5.0 mi) away. [ 78 ] [ 79 ]
A study published in the American Geriatrics Society found that the average life expectancy of the Chinese emperors (who possessed great wealth), from the first Qin Dynasty (221–207 BC) to the last Qing Dynasty, was 41.3 years. This is much lower than that of Buddhist monks (66.9 years), traditional Chinese doctors (75.1 years) and the emperors' servants, who survived on average to 71.3 years (range 55–94), during the same period. [ 80 ]
A 2013 study found a pronounced relationship between economic inequality and life expectancy. [ 81 ] In contrast, a study by José A. Tapia Granados and Ana Diez Roux at the University of Michigan found that life expectancy actually increased during the Great Depression , and during recessions and depressions in general. [ 82 ] The authors suggest that when people work at a more intense pace during prosperous economic times, they undergo more stress , exposure to pollution , and likelihood of injury, among other longevity-limiting factors.
Life expectancy is also likely to be affected by exposure to high levels of highway air pollution or industrial air pollution . This is one way that occupation can have a major effect on life expectancy. Coal miners (and in prior generations, asbestos cutters) often have lower life expectancies than average. Other factors affecting an individual's life expectancy are genetic disorders, drug use, tobacco smoking , excessive alcohol consumption, obesity, access to health care, diet, and exercise.
In the present, female human life expectancy is greater than that of males, despite females having higher morbidity rates (see health survival paradox ). There are many potential reasons for this. Traditional arguments tend to favor sociology-environmental factors: historically, men have generally consumed more tobacco , alcohol , and drugs than women in most societies, and are more likely to die from many associated diseases such as lung cancer , tuberculosis , and cirrhosis of the liver . [ 84 ] Men are also more likely to die from injuries, whether unintentional (such as occupational , war , or car wrecks ) or intentional ( suicide ). [ 84 ] Men are also more likely to die from most of the leading causes of death (some already stated above) than women. Some of these in the United States include cancer of the respiratory system, motor vehicle accidents, suicide, cirrhosis of the liver, emphysema, prostate cancer, and coronary heart disease. [ 14 ] These far outweigh the female mortality rate from breast cancer and cervical cancer. In the past, mortality rates for females in child-bearing age groups were higher than for males at the same age.
A paper from 2015 found that female foetuses have a higher mortality rate than male foetuses. [ 85 ] This finding contradicts papers dating from 2002 and earlier that attribute the male sex to higher in-utero mortality rates. [ 86 ] [ 87 ] [ 88 ] Among the smallest premature babies (those under 2 pounds (910 grams)), females have a higher survival rate. At the other extreme, about 90% of individuals aged 110 are female. The difference in life expectancy between men and women in the United States dropped from 7.8 years in 1979 to 5.3 years in 2005, with women expected to live to age 80.1 in 2005. [ 89 ] Data from the United Kingdom shows the gap in life expectancy between men and women decreasing in later life. This may be attributable to the effects of infant mortality and young adult death rates. [ 90 ]
Some argue that shorter male life expectancy is merely another manifestation of the general rule, seen in all mammal species, that larger-sized individuals within a species tend, on average, to have shorter lives. [ 91 ] [ 92 ] This biological difference [ clarification needed ] occurs because women have more resistance to infections and degenerative diseases. [ 14 ]
In her extensive review of the existing literature, Kalben concluded that the fact that women live longer than men was observed at least as far back as 1750 and that, with relatively equal treatment, today males in all parts of the world experience greater mortality than females. However, Kalben's study was restricted to data in Western Europe alone, where the demographic transition occurred relatively early. United Nations statistics from mid-twentieth century onward, show that in all parts of the world, females have a higher life expectancy at age 60 than males. [ 93 ] Of 72 selected causes of death, only 6 yielded greater female than male age-adjusted death rates in 1998 in the United States. Except for birds, for almost all of the animal species studied, males have higher mortality than females. Evidence suggests that the sex mortality differential in people is due to both biological/genetic and environmental/behavioral risk and protective factors. [ 86 ]
One recent suggestion is that mitochondrial mutations which shorten lifespan continue to be expressed in males (but less so in females) because mitochondria are inherited only through the mother. By contrast, natural selection weeds out mitochondria that reduce female survival; therefore, such mitochondria are less likely to be passed on to the next generation. This suggests that females tend to live longer than males. The authors claim that this is a partial explanation. [ 94 ] [ 95 ]
Another explanation is the unguarded X hypothesis . According to this hypothesis, one reason why the average lifespan of males is not as long as that of females (shorter by 18% on average, according to the study) is that males have a Y chromosome , which cannot protect an individual from harmful genes expressed on the X chromosome , while the duplicate X chromosome present in female organisms can ensure that harmful genes are not expressed . [ 96 ] [ 97 ]
In developed countries, starting around 1880, death rates decreased faster among women, leading to differences in mortality rates between males and females. Before 1880, death rates were the same. In people born after 1900, the death rate of 50- to 70-year-old men was double that of women of the same age. Men may be more vulnerable to cardiovascular disease than women, but this susceptibility was evident only after deaths from other causes, such as infections, started to decline. [ 98 ] Most of the difference in life expectancy between the sexes is accounted for by differences in the rate of death by cardiovascular diseases among persons aged 50–70. [ 99 ]
The heritability of lifespan is estimated to be less than 10%, meaning that the majority of variation in lifespan is attributable to differences in environment rather than genetic variation . [ 100 ] However, researchers have identified regions of the genome which can influence the length of life and the number of years lived in good health. For example, a genome-wide association study of 1 million lifespans found 12 genetic loci which influenced lifespan by modifying susceptibility to cardiovascular and smoking-related disease . [ 101 ] The locus with the largest effect is APOE . Carriers of the APOE ε4 allele live approximately one year less than average (per copy of the ε4 allele), mainly due to increased risk of Alzheimer's disease . [ 101 ]
In July 2020, scientists identified 10 genomic loci with consistent effects across multiple lifespan-related traits, including healthspan , lifespan, and longevity . [ 102 ] The genes affected by variation in these loci highlighted haem metabolism as a promising candidate for further research within the field. This study suggests that high levels of iron in the blood likely reduce, and genes involved in metabolising iron likely increase healthy years of life in humans. [ 103 ]
A follow-up study which investigated the genetics of frailty and self-rated health in addition to healthspan, lifespan, and longevity also highlighted haem metabolism as an important pathway, and found genetic variants which lower blood protein levels of LPA and VCAM1 were associated with increased healthy lifespan. [ 104 ]
In developed countries, the number of centenarians is increasing at approximately 5.5% per year, which means doubling the centenarian population every 13 years, pushing it from some 455,000 in 2009 to 4.1 million in 2050. [ 105 ] Japan is the country with the highest ratio of centenarians (347 for every 1 million inhabitants in September 2010). Shimane Prefecture had an estimated 743 centenarians per million inhabitants. [ 106 ]
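As a quick arithmetic check of the quoted figures, a constant 5.5% annual growth rate does indeed imply a doubling time of roughly 13 years:

$$ 1.055^{13} \approx 2.0, \qquad \text{equivalently} \qquad \frac{\ln 2}{\ln 1.055} \approx 12.9\ \text{years}. $$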
In the United States, the number of centenarians grew from 32,194 in 1980 to 71,944 in November 2010 (232 centenarians per million inhabitants). [ 107 ]
Mental illness is reported to occur in approximately 18% of the average American population. [ 108 ] [ 109 ]
The mentally ill have been shown to have a 10- to 25-year reduction in life expectancy. [ 111 ] Generally, the reduction of lifespan in the mentally ill population compared to the mentally stable population has been studied and documented. [ 112 ] [ 113 ] [ 114 ] [ 115 ] [ 116 ]
The greater mortality of people with mental disorders may be due to death from injury, from co-morbid conditions, or from medication side effects. [ 117 ] For instance, psychiatric medications can increase the risk of developing diabetes . [ 118 ] [ 119 ] [ 120 ] [ 121 ] The psychiatric medication olanzapine has been shown to increase the risk of developing agranulocytosis , among other comorbidities. [ 122 ] [ 123 ] Psychiatric medicines also affect the gastrointestinal tract ; the mentally ill have a fourfold risk of gastrointestinal disease. [ 124 ] [ 125 ] [ 126 ]
As of 2020 and the COVID-19 pandemic, researchers have found an increased risk of death in the mentally ill. [ 127 ] [ 128 ] [ 129 ]
The life expectancy of people with diabetes, who make up 9.3% of the U.S. population, is reduced by roughly 10–20 years. [ 130 ] [ 131 ] People over 60 years old with Alzheimer's disease have a median (50%) life expectancy of 3–10 years. [ 132 ] Other demographics that tend to have a lower life expectancy than average include transplant recipients [ 133 ] and the obese. [ 134 ]
Education on all levels has been shown to be strongly associated with increased life expectancy. [ 135 ] This association may be due partly to higher income, [ 136 ] which can lead to increased life expectancy. Despite the association, among identical twin pairs with different education levels, there is only weak evidence of a relationship between educational attainment and adult mortality. [ 135 ]
According to a paper from 2015, the mortality rate for the Caucasian population in the United States from 1993 to 2001 was four times higher for those who did not complete high school than for those with at least 16 years of education. [ 135 ] In fact, within the U.S. adult population, people with less than a high school education have the shortest life expectancies.
Preschool education also plays a large role in life expectancy. It was found that high-quality early-stage childhood education had positive effects on health. Researchers discovered this by analyzing the results of the Carolina Abecedarian Project , finding that the disadvantaged children who were randomly assigned to treatment had lower instances of risk factors for cardiovascular and metabolic diseases in their mid-30s. [ 137 ]
In June 2024, Italian researchers reported that COVID-19 mRNA vaccination raised all-cause mortality risks and caused a statistically significant loss of life expectancy. The retrospective cohort study found that the adjusted hazard ratio (HR) for all-cause death between those who received two doses of the mRNA vaccine and the unvaccinated group was 1.98, and the restricted mean survival time (RMST) difference was −2.71. The study reported that the restricted mean time lost (RMTL) ratio between the two-dose group and the unvaccinated was 1.37, suggesting that those who received two doses of the shots could lose 37 percent of their life expectancy. [ 138 ]
Various species of plants and animals, including humans, have different lifespans. Evolutionary theory states that organisms which—by virtue of their defenses or lifestyle—live for long periods and avoid accidents, disease, predation, etc. are likely to have genes that code for slow aging, which often translates to good cellular repair. One theory is that if predation or accidental deaths prevent most individuals from living to an old age, there will be less natural selection to increase the intrinsic life span. [ 139 ] That finding was supported in a classic study of opossums by Austad; [ 140 ] however, the opposite relationship was found in an equally prominent study of guppies by Reznick. [ 141 ] [ 142 ]
One prominent and very popular theory states that lifespan can be lengthened by a tight budget for food energy called caloric restriction . [ 143 ] Caloric restriction observed in many animals (most notably mice and rats) shows a near doubling of life span from a very limited caloric intake. Support for the theory has been bolstered by several new studies linking lower basal metabolic rate to increased life expectancy. [ 144 ] [ 145 ] [ 146 ] This has been proposed as a key reason why animals like giant tortoises can live so long. [ 147 ] Studies of humans with life spans of at least 100 years have shown a link to decreased thyroid activity, resulting in their lowered metabolic rate.
The ability of skin fibroblasts to perform DNA repair after UV irradiation was measured in shrew , mouse , rat , hamster , cow , elephant and human . [ 148 ] It was found that DNA repair capability increased systematically with species life span . Since this original study in 1974, at least 14 additional studies were performed on mammals to test this correlation. [ 149 ] In all but two of these studies, lifespan correlated with DNA repair levels, suggesting that DNA repair capability contributes to life expectancy. [ 149 ] See DNA damage theory of aging .
In a broad survey of zoo animals, no relationship was found between investment of the animal in reproduction and its life span. [ 150 ]
In actuarial notation , the probability of surviving from age $x$ to age $x+n$ is denoted ${}_{n}p_{x}$ and the probability of dying during age $x$ (i.e. between ages $x$ and $x+1$) is denoted $q_{x}$. For example, if 10% of a group of people alive at their 90th birthday die before their 91st birthday, the age-specific death probability at 90 would be 10%. This probability describes the likelihood of dying at that age, and is not the rate at which people of that age die. [ c ] It can be shown that
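In standard actuarial notation (a reconstruction of the omitted display, which may differ in the source), surviving $n$ years from age $x$ requires surviving each of the $n$ intervening years:

$$ {}_{n}p_{x} = \prod_{i=0}^{n-1}\left(1 - q_{x+i}\right). $$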
The curtate future lifetime , denoted $K(x)$, is a discrete random variable representing the remaining lifetime at age $x$, rounded down to whole years. Life expectancy, more technically called the curtate expected lifetime and denoted $e_{x}$, [ a ] is the mean of $K(x)$, that is to say, the expected number of whole years of life remaining, assuming survival to age $x$. [ 151 ] So,
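The expression that follows can likewise be reconstructed in standard actuarial notation (the source's exact display may differ): the curtate expected lifetime is

$$ e_{x} = \operatorname{E}[K(x)] = \sum_{k=0}^{\infty} k \,\Pr[K(x)=k], \qquad \Pr[K(x)=k] = {}_{k}p_{x}\,q_{x+k}. \qquad (1) $$

Here (1) states that living exactly $k$ further whole years means surviving $k$ years and then dying within the following year.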
Substituting ( 1 ) into the sum and simplifying gives the final result [ 152 ]
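Under the same reconstruction, the simplified result is the tail-sum formula

$$ e_{x} = \sum_{k=1}^{\infty} {}_{k}p_{x}, $$

i.e. the sum of the probabilities of surviving at least one year, at least two years, and so on.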
If the assumption is made that, on average, people live a half year in the year of their death, the complete life expectancy at age $x$ would be $e_{x}+1/2$, which is denoted by $\mathring{e}_{x}$, and is the intuitive definition of life expectancy.
By definition, life expectancy is an arithmetic mean . It can also be calculated by integrating the survival curve from 0 to positive infinity (or equivalently to the maximum lifespan, sometimes called 'omega'). For an extinct or completed cohort (all people born in the year 1850, for example), it can of course simply be calculated by averaging the ages at death. For cohorts with some survivors, it is estimated by using mortality experience in recent years. The estimates are called period cohort life expectancies.
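As a formula (a standard identity, stated here for illustration rather than quoted from the source), the complete life expectancy at birth is the area under the survival curve $S(t)$, the proportion of a cohort still alive at age $t$:

$$ \mathring{e}_{0} = \int_{0}^{\infty} S(t)\,dt. $$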
The starting point for calculating life expectancy is the age-specific death rates of the population members. If a large amount of data is available, a statistical population can be created that allows the age-specific death rates to be simply taken as the mortality rates actually experienced at each age (the number of deaths divided by the number of years "exposed to risk" in each data cell). However, it is customary to apply smoothing to remove (as much as possible) the random statistical fluctuations from one year of age to the next. In the past, a very simple model used for this purpose was the Gompertz function , but more sophisticated methods are now used. [ 153 ] The most common modern methods include:
The age-specific death rates are calculated separately for separate groups of data that are believed to have different mortality rates (such as males and females, or smokers and non-smokers) and are then used to calculate a life table from which one can calculate the probability of surviving to each age. While the data required are easily identified in the case of humans, the computation of life expectancy of industrial products and wild animals involves more indirect techniques. The life expectancy and demography of wild animals are often estimated by capturing, marking, and recapturing them. [ 156 ] The life of a product, more often termed shelf life , is also computed using similar methods. In the case of long-lived components, such as those used in critical applications (e.g. aircraft), methods like accelerated aging are used to model the life expectancy of a component. [ 11 ]
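As a rough illustration of the life-table arithmetic described above, the following Python sketch computes curtate life expectancy from age-specific death probabilities; the mortality schedule and its parameters are purely hypothetical toy values, not real data:

```python
# Minimal sketch: curtate life expectancy e_x from age-specific death
# probabilities q[x], using the identity e_x = sum over k >= 1 of {}_k p_x.
# The q values below are illustrative assumptions, not real mortality data.

def curtate_life_expectancy(q, start_age=0):
    """Expected whole years of life remaining at start_age, given q[age]."""
    expectancy, survival = 0.0, 1.0
    for qx in q[start_age:]:
        survival *= (1.0 - qx)   # probability of surviving one further year
        expectancy += survival   # accumulates the sum of {}_k p_x for k >= 1
    return expectancy

# Toy Gompertz-like schedule: death probability grows about 9% per year of age.
q = [min(1.0, 0.0005 * 1.09 ** age) for age in range(120)]

print(round(curtate_life_expectancy(q), 1))                # at birth
print(round(curtate_life_expectancy(q, start_age=65), 1))  # remaining years at 65
```

In a real life table the `q` values would come from observed (and smoothed) age-specific mortality rates, typically computed separately for groups such as males and females.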
The life expectancy statistic is usually based on past mortality experience and assumes that the same age-specific mortality rates will continue. Thus, such life expectancy figures need to be adjusted for temporal trends before calculating how long a currently living individual of a particular age is expected to live. Period life expectancy remains a commonly used statistic to summarize the current health status of a population. However, for some purposes, such as pensions calculations, it is usual to adjust the life table used by assuming that age-specific death rates will continue to decrease over the years, as they have usually done in the past. That is often done by simply extrapolating past trends, but some models exist to account for the evolution of mortality, like the Lee–Carter model . [ 157 ]
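For reference, the Lee–Carter model mentioned above is usually written in roughly the following form (a standard statement of the model rather than a formula taken from this article), where $m_{x,t}$ is the central death rate at age $x$ in year $t$:

$$ \ln m_{x,t} = a_{x} + b_{x}\,k_{t} + \varepsilon_{x,t}, $$

with $a_{x}$ the average age profile of log-mortality, $k_{t}$ a time index of the overall level of mortality (commonly forecast as a random walk with drift), $b_{x}$ the age-specific sensitivity to $k_{t}$, and $\varepsilon_{x,t}$ an error term.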
As discussed above, on an individual basis, some factors correlate with longer life. Factors that are associated with variations in life expectancy include family history, marital status, economic status, physique, exercise, diet, drug use (including smoking and alcohol consumption), disposition, education, environment, sleep, climate, and health care. [ 14 ]
To assess the quality of these additional years of life, 'healthy life expectancy' has been calculated for the last 30 years. Since 2001, the World Health Organization has published statistics called Healthy life expectancy ( HALE ), defined as the average number of years that a person can expect to live in "full health" excluding the years lived in less than full health due to disease and/or injury. [ 158 ] [ 159 ] Since 2004, Eurostat publishes annual statistics called Healthy Life Years (HLY) based on reported activity limitations. The United States uses similar indicators in the framework of the national health promotion and disease prevention plan " Healthy People 2010 ". More and more countries are using health expectancy indicators to monitor the health of their population.
The long-standing quest for longer life led in the 2010s to a more promising focus on increasing HALE, also known as a person's "healthspan". Besides the benefits of keeping people healthier longer, a goal is to reduce health-care expenses on the many diseases associated with cellular senescence . Approaches being explored include fasting , exercise , and senolytic drugs. [ 160 ]
Forecasting life expectancy and mortality forms an important subdivision of demography . Future trends in life expectancy have huge implications for old-age support programs (like U.S. Social Security and pension ) since the cash flow in these systems depends on the number of recipients who are still living (along with the rate of return on the investments or the tax rate in pay-as-you-go systems). With longer life expectancies, the systems see increased cash outflow; if the systems underestimate increases in life expectancy, they will be unprepared for the large payments that will occur as humans live longer and longer.
Life expectancy forecasting is usually based on one of two different approaches:
Life expectancy is one of the factors in measuring the Human Development Index (HDI) of each nation along with adult literacy, education, and standard of living. [ 162 ]
Life expectancy is used in describing the physical quality of life of an area. It is also used for an individual when the value of a life settlement is determined, whereby a life insurance policy is sold for a cash asset.
Disparities in life expectancy are often cited as demonstrating the need for better medical care or increased social support. A strongly associated indirect measure is income inequality . For the top 21 industrialized countries, if each person is counted equally, life expectancy is lower in more unequal countries (r = −0.907). [ 163 ] There is a similar relationship among states in the U.S. (r = −0.620). [ 164 ]
Life expectancy may be confused with the average age an adult could expect to live, creating the misunderstanding that an adult's lifespan would be unlikely to exceed their life expectancy at birth. This is not the case, as life expectancy is an average of the lifespans of all individuals, including those who die before adulthood. One may instead compare life expectancy for the period after childhood to estimate the life expectancy of an adult. [ 166 ]
As a measure of the years of life remaining, life expectancy decreases with age after initially rising in early childhood, but the average age to which a person is likely to live increases as they survive to successive higher ages. [ 167 ] In the table above, the estimated modern hunter-gatherer average expectation of life at birth of 33 years (often considered an upper-bound for Paleolithic populations) equates to a life expectancy at 15 of 39 years, so that those surviving to age 15 will on average die at 54.
In England in the 13th–19th centuries, with life expectancy at birth rising from perhaps 25 years to over 40, expectation of life at age 30 has been estimated at 20–30 years, [ 168 ] giving an average age at death of about 50–60 for those (a minority at the start of the period but two-thirds at its end) surviving beyond their twenties.
The table above gives the life expectancy at birth among 13th-century English nobles as 30–33, but having survived to the age of 21, a male member of the English aristocracy could expect to live:
A further concept is that of modal age at death, the single age when deaths among a population are more numerous than at any other age. In all pre-modern societies the most common age at death is the first year of life: it is only as infant mortality falls below around 33–34 per thousand (roughly a tenth of estimated ancient and medieval levels) that deaths in a later year of life (usually around age 80) become more numerous. While the most common age of death in adulthood among modern hunter-gatherers (often taken as a guide to the likely most favourable Paleolithic demographic experience) is estimated to average 72 years, [ 169 ] the number dying at that age is dwarfed by those (over a fifth of all infants) dying in the first year of life, and only around a quarter usually survive to the higher age.
Maximum life span is an individual-specific concept, and therefore is an upper bound rather than an average. [ 166 ] Science author Christopher Wanjek writes, "[H]as the human race increased its life span? Not at all. This is one of the biggest misconceptions about old age: we are not living any longer." The maximum life span, or oldest age a human can live, may be constant. [ 166 ] Further, there are many examples of people living significantly longer than the average life expectancy of their time period, such as Socrates (71), Saint Anthony the Great (105), Michelangelo (88), and John Adams (90). [ 166 ]
However, anthropologist John D. Hawks criticizes the popular conflation of life span (life expectancy) and maximum life span when popular science writers falsely imply that the average adult human does not live longer than their ancestors. He writes, "[a]ge-specific mortality rates have declined across the adult lifespan. A smaller fraction of adults die at 20, at 30, at 40, at 50, and so on across the lifespan. As a result, we live longer on average... In every way we can measure, human lifespans are longer today than in the immediate past, and longer today than they were 2000 years ago... age-specific mortality rates in adults really have reduced substantially." [ 170 ] | https://en.wikipedia.org/wiki/Life_expectancy |
Life extension is the concept of extending the human lifespan , either modestly through improvements in medicine or dramatically by increasing the maximum lifespan beyond its generally settled biological limit of around 125 years . [ 1 ] Several researchers in the area, along with "life extensionists", " immortalists ", or " longevists " (those who wish to achieve longer lives themselves), postulate that future breakthroughs in tissue rejuvenation , stem cells , regenerative medicine , molecular repair, gene therapy , pharmaceuticals, and organ replacement (such as with artificial organs or xenotransplantations ) will eventually enable humans to have indefinite lifespans through complete rejuvenation to a healthy youthful condition (agerasia [ 2 ] ). The ethical ramifications, if life extension becomes a possibility, are debated by bioethicists .
The sale of purported anti-aging products such as supplements and hormone replacement is a lucrative global industry. For example, the industry that promotes the use of hormones as a treatment for consumers to slow or reverse the aging process in the US market generated about $50 billion of revenue a year in 2009. [ 3 ] The use of such hormone products has not been proven to be effective or safe. [ 3 ] [ 4 ] [ 5 ] [ 6 ]
During the process of aging , an organism accumulates damage to its macromolecules , cells , tissues , and organs . Specifically, aging is characterized as and thought to be caused by "genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis , deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence , stem cell exhaustion, and altered intercellular communication." [ 7 ] Oxidation damage to cellular contents caused by free radicals is believed to contribute to aging as well. [ 8 ] [ 9 ]
The longest documented human lifespan is 122 years 164 days, the case of Jeanne Calment , who according to records was born in 1875 and died in 1997, whereas the maximum lifespan of a wildtype mouse, commonly used as a model in research on aging, is about three years. [ 10 ] Genetic differences between humans and mice that may account for these different aging rates include differences in efficiency of DNA repair , antioxidant defenses, energy metabolism , proteostasis maintenance, and recycling mechanisms such as autophagy . [ 11 ]
The average life expectancy in a population is lowered by infant and child mortality , which are frequently linked to infectious diseases or nutrition problems. Later in life, vulnerability to accidents and age-related chronic disease such as cancer or cardiovascular disease play an increasing role in mortality. Extension of life expectancy and lifespan can often be achieved by access to improved medical care, vaccinations , good diet , exercise , and avoidance of hazards such as smoking .
Maximum lifespan is determined by the rate of aging for a species inherent in its genes and by environmental factors. Widely recognized methods of extending maximum lifespan in model organisms such as nematodes , fruit flies, and mice include caloric restriction , gene manipulation , and administration of pharmaceuticals. [ 12 ] Another technique uses evolutionary pressures such as breeding from only older members or altering levels of extrinsic mortality. [ 13 ] [ 14 ] Some animals such as hydra , planarian flatworms , and certain sponges , corals , and jellyfish do not die of old age and exhibit potential immortality. [ 15 ] [ 16 ] [ 17 ] [ 18 ]
The extension of life has been a desire of humanity and a mainstay motif in the history of scientific pursuits and ideas, from the Sumerian Epic of Gilgamesh and the Egyptian Smith medical papyrus , all the way through the Taoists , Ayurveda practitioners, alchemists , hygienists such as Luigi Cornaro , Johann Cohausen and Christoph Wilhelm Hufeland , and philosophers such as Francis Bacon , René Descartes , Benjamin Franklin and Nicolas Condorcet . However, the beginning of the modern period in this endeavor can be traced to the end of the 19th and beginning of the 20th century, the so-called " fin-de-siècle " (end of the century) period, denoted as an "end of an epoch" and characterized by the rise of scientific optimism and therapeutic activism, entailing the pursuit of life extension (or life-extensionism). Among the foremost researchers of life extension in this period were the Nobel Prize winning biologist Elie Metchnikoff (1845–1916), author of the cell theory of immunity and vice director of the Institut Pasteur in Paris, and Charles-Édouard Brown-Séquard (1817–1894), president of the French Biological Society and one of the founders of modern endocrinology. [ 19 ]
Sociologist James Hughes claims that science has been tied to a cultural narrative of conquering death since the Age of Enlightenment . He cites Francis Bacon (1561–1626) as an advocate of using science and reason to extend human life, noting Bacon's novel New Atlantis , wherein scientists worked toward delaying aging and prolonging life. Robert Boyle (1627–1691), founding member of the Royal Society , also hoped that science would make substantial progress with life extension, according to Hughes, and proposed such experiments as "to replace the blood of the old with the blood of the young". Biologist Alexis Carrel (1873–1944) was inspired by a belief in indefinite human lifespan that he developed after experimenting with cells , says Hughes. [ 20 ]
Regulatory and legal struggles between the Food and Drug Administration (FDA) and the Life Extension organization included seizure of merchandise and court action. [ 21 ] In 1991, Saul Kent and Bill Faloon , the principals of the organization, were jailed for four hours and were released on $850,000 bond each. [ 22 ] After 11 years of legal battles, Kent and Faloon convinced the US Attorney's Office to dismiss all criminal indictments brought against them by the FDA. [ 23 ]
In 2003, Doubleday published "The Immortal Cell: One Scientist's Quest to Solve the Mystery of Human Aging," by Michael D. West . West emphasised the potential role of embryonic stem cells in life extension. [ 24 ]
Other modern life extensionists include writer Gennady Stolyarov , who insists that death is "the enemy of us all, to be fought with medicine, science, and technology"; [ 25 ] transhumanist philosopher Zoltan Istvan , who proposes that the "transhumanist must safeguard one's own existence above all else"; [ 26 ] futurist George Dvorsky , who considers aging to be a problem that desperately needs to be solved; [ 27 ] and recording artist Steve Aoki , who has been called "one of the most prolific campaigners for life extension". [ 28 ]
In 1991, the American Academy of Anti-Aging Medicine (A4M) was formed. The American Board of Medical Specialties recognizes neither anti-aging medicine nor the A4M's professional standing. [ 29 ]
In 2003, Aubrey de Grey and David Gobel formed the Methuselah Foundation , which gives financial grants to anti-aging research projects. In 2009, de Grey and several others founded the SENS Research Foundation , a California-based scientific research organization which conducts research into aging and funds other anti-aging research projects at various universities. [ 30 ] In 2013, Google announced Calico , a new company based in San Francisco that will harness new technologies to increase scientific understanding of the biology of aging. [ 31 ] It is led by Arthur D. Levinson , [ 32 ] and its research team includes scientists such as Hal V. Barron , David Botstein , and Cynthia Kenyon . In 2014, biologist Craig Venter founded Human Longevity Inc., a company dedicated to scientific research to end aging through genomics and cell therapy. They received funding with the goal of compiling a comprehensive human genotype, microbiome, and phenotype database. [ 33 ]
Aside from private initiatives, aging research is being conducted in university laboratories at institutions such as Harvard and UCLA . University researchers have made a number of breakthroughs in extending the lives of mice and insects by reversing certain aspects of aging. [ 34 ] [ 35 ] [ 36 ] [ 37 ]
Theoretically, extension of maximum lifespan in humans could be achieved by reducing the rate of aging damage by periodic replacement of damaged tissues , molecular repair or rejuvenation of deteriorated cells and tissues, reversal of harmful epigenetic changes, or the enhancement of enzyme telomerase activity. [ 38 ] [ 39 ]
Research geared towards life extension strategies in various organisms is currently under way at a number of academic and private institutions. Since 2009, investigators have found ways to increase the lifespan of nematode worms and yeast by 10-fold; the record in nematodes was achieved through genetic engineering and the extension in yeast by a combination of genetic engineering and caloric restriction . [ 40 ] A 2009 review of longevity research noted: "Extrapolation from worms to mammals is risky at best, and it cannot be assumed that interventions will result in comparable life extension factors. Longevity gains from dietary restriction, or from mutations studied previously, yield smaller benefits to Drosophila than to nematodes, and smaller still to mammals. This is not unexpected, since mammals have evolved to live many times the worm's lifespan, and humans live nearly twice as long as the next longest-lived primate. From an evolutionary perspective, mammals and their ancestors have already undergone several hundred million years of natural selection favoring traits that could directly or indirectly favor increased longevity, and may thus have already settled on gene sequences that promote lifespan. Moreover, the very notion of a "life-extension factor" that could apply across taxa presumes a linear response rarely seen in biology." [ 40 ]
There are numerous chemicals intended to slow the aging process under study in animal models . [ 41 ] One type of research is related to the observed effects of a calorie restriction (CR) diet, which has been shown to extend lifespan in some animals. [ 42 ] Based on that research, there have been attempts to develop drugs that will have the same effect on the aging process as a CR diet, which are known as caloric restriction mimetic drugs, such as rapamycin [ 43 ] and metformin . [ 44 ]
Sirtuin activating polyphenols , such as resveratrol and pterostilbene , [ 45 ] [ 46 ] [ 47 ] and flavonoids , such as quercetin and fisetin , [ 48 ] as well as oleic acid [ 49 ] are dietary supplements that have also been studied in this context. Other common supplements with less clear biological pathways to target aging include lipoic acid , [ 50 ] senolytics , [ 48 ] and coenzyme Q10 . [ 51 ]
While agents such as these have some limited laboratory evidence of efficacy in animals, there are no studies to date in humans for drugs that may promote life extension, mainly because research investment remains at a low level, and regulatory standards are high. [ 52 ] Aging is not recognized as a preventable condition by governments, indicating there is no clear pathway to approval of anti-aging medications. [ 52 ] Further, anti-aging drug candidates are under constant review by regulatory authorities like the US Food and Drug Administration , which stated in 2023 that "no medication has been proven to slow or reverse the aging process." [ 53 ]
Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler , one of the founders of nanotechnology , postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular computers , in his 1986 book Engines of Creation . Raymond Kurzweil , a futurist and transhumanist , stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030. [ 54 ] According to Richard Feynman , it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical nanomachines (see biological machine ). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) " swallow the doctor ". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom . [ 55 ]
Replacement of biological (susceptible to diseases) organs with mechanical ones could extend life. This is the goal of the 2045 Initiative . [ 56 ]
Cryonics is the low-temperature freezing (usually at −196 °C or −320.8 °F or 77.1 K) of a human corpse, with the hope that resuscitation may be possible in the future . [ 57 ] [ 58 ] It is regarded with skepticism within the mainstream scientific community and has been characterized as quackery . [ 59 ]
Another proposed life extension technology aims to combine existing and predicted future biochemical and genetic techniques. SENS proposes that rejuvenation may be obtained by removing aging damage via the use of stem cells and tissue engineering , telomere -lengthening machinery, allotopic expression of mitochondrial proteins, targeted ablation of cells, immunotherapeutic clearance, and novel lysosomal hydrolases . [ 60 ]
While some biogerontologists find these ideas "worthy of discussion", [ 61 ] [ 62 ] others contend that the alleged benefits are too speculative given the current state of technology, referring to it as "fantasy rather than science". [ 4 ] [ 6 ]
Genome editing , in which nucleic acid polymers are delivered as a drug and are either expressed as proteins, interfere with the expression of proteins, or correct genetic mutations, has been proposed as a future strategy to prevent aging. [ 63 ] [ 64 ]
CRISPR/Cas9 edits genes by precisely cutting DNA and then harnessing natural DNA repair processes to modify the gene in the desired manner. The system has two components: the Cas9 enzyme and a guide RNA. [ 65 ] A large array of genetic modifications have been found to increase lifespan in model organisms such as yeast, nematode worms, fruit flies, and mice. As of 2013, the longest extension of life caused by a single gene manipulation was roughly 50% in mice and 10-fold in nematode worms. [ 66 ]
In July 2020, scientists, using public biological data on 1.75 million people with known lifespans, identified 10 genomic loci which appear to intrinsically influence healthspan , lifespan, and longevity – of which half had not been reported previously at genome-wide significance and most of which are associated with cardiovascular disease – and identified haem metabolism as a promising candidate for further research within the field. Their study suggests that high levels of iron in the blood likely reduce, and genes involved in metabolising iron likely increase, healthy years of life in humans. [ 68 ] [ 67 ] The same month, other scientists reported that yeast cells of the same genetic material and within the same environment age in two distinct ways, described a biomolecular mechanism that can determine which process dominates during aging, and genetically engineered a novel aging route with substantially extended lifespan. [ 69 ] [ 70 ]
In The Selfish Gene , Richard Dawkins describes an approach to life-extension that involves "fooling genes" into thinking the body is young. [ 71 ] Dawkins attributes inspiration for this idea to Peter Medawar . The basic idea is that our bodies are composed of genes that activate throughout our lifetimes, some when we are young and others when we are older. Presumably, these genes are activated by environmental factors, and the changes caused by these genes activating can be lethal. It is a statistical certainty that we possess more lethal genes that activate in later life than in early life. Therefore, to extend life, we should be able to prevent these genes from switching on, and we should be able to do so by "identifying changes in the internal chemical environment of a body that take place during aging... and by simulating the superficial chemical properties of a young body". [ 72 ]
Some life extensionists suggest that therapeutic cloning and stem cell research could one day provide a way to generate cells, body parts, or even entire bodies (generally referred to as reproductive cloning ) that would be genetically identical to a prospective patient. In 2008, the US Department of Defense announced a program to research the possibility of growing human body parts on mice. [ 73 ] Complex biological structures, such as mammalian joints and limbs, have not yet been replicated. Dog and primate brain transplantation experiments were conducted in the mid-20th century but failed due to rejection and the inability to restore nerve connections. As of 2006, the implantation of bio-engineered bladders grown from patients' own cells has proven to be a viable treatment for bladder disease. [ 74 ] Proponents of body part replacement and cloning contend that the required biotechnologies are likely to appear earlier than other life-extension technologies.
The use of human stem cells , particularly embryonic stem cells , is controversial. Opponents' objections generally are based on interpretations of religious teachings or ethical considerations. [ 75 ] Proponents of stem cell research point out that cells are routinely formed and destroyed in a variety of contexts. Use of stem cells taken from the umbilical cord or parts of the adult body may not provoke controversy. [ 76 ]
The controversies over cloning are similar, except general public opinion in most countries stands in opposition to reproductive cloning . Some proponents of therapeutic cloning predict the production of whole bodies, lacking consciousness, for eventual brain transplantation.
Some critics dispute the portrayal of aging as a disease. For example, Leonard Hayflick , who determined that fibroblasts are limited to around 50 cell divisions, reasons that aging is an unavoidable consequence of entropy . Hayflick and fellow biogerontologists Jay Olshansky and Bruce Carnes have strongly criticized the anti-aging industry in response to what they see as unscrupulous profiteering from the sale of unproven anti-aging supplements . [ 5 ]
Research by Sobh and Martin (2011) suggests that people buy anti-aging products to obtain a hoped-for self (e.g., keeping a youthful skin) or to avoid a feared self (e.g., looking old). The research shows that when consumers pursue a hoped-for self, it is expectations of success that most strongly drive their motivation to use the product. When consumers instead seek to avoid a feared self, perceived failure of the product is more motivating than success. [ 77 ]
Though many scientists state [ 78 ] that life extension and radical life extension are possible, there are still no international or national programs focused on radical life extension. There are political forces working both for and against life extension. By 2012, longevity political parties had been started in Russia, the United States, Israel, and the Netherlands. They aimed to provide political support to radical life extension research and technologies, to ensure the fastest possible yet smooth transition of society to the next step – life without aging and with radical life extension – and to provide access to such technologies to most currently living people. [ 79 ]
Some tech innovators and Silicon Valley entrepreneurs have invested heavily in anti-aging research. These include Jeff Bezos (founder of Amazon ), Larry Ellison (founder of Oracle ), Peter Thiel (former PayPal CEO), [ 80 ] Larry Page (co-founder of Google ), Peter Diamandis , [ 81 ] Sam Altman (CEO of OpenAI , invested in Retro Biosciences ), Brian Armstrong (founder of Coinbase and NewLimit), [ 82 ] and Bryan Johnson (founder of Kernel ). [ 83 ]
Leon Kass (chairman of the US President's Council on Bioethics from 2001 to 2005) has questioned whether potential exacerbation of overpopulation problems would make life extension unethical. [ 84 ] He states his opposition to life extension with the words:
"simply to covet a prolonged life span for ourselves is both a sign and a cause of our failure to open ourselves to procreation and to any higher purpose ... [The] desire to prolong youthfulness is not only a childish desire to eat one's life and keep it; it is also an expression of a childish and narcissistic wish incompatible with devotion to posterity." [ 85 ]
John Harris, former editor-in-chief of the Journal of Medical Ethics, argues that as long as life is worth living, according to the person himself, we have a powerful moral imperative to save the life and thus to develop and offer life extension therapies to those who want them. [ 86 ]
Transhumanist philosopher Nick Bostrom has argued that any technological advances in life extension must be equitably distributed and not restricted to a privileged few. [ 87 ] In an extended metaphor entitled " The Fable of the Dragon-Tyrant ", Bostrom envisions death as a monstrous dragon who demands human sacrifices. In the fable, after a lengthy debate between those who believe the dragon is a fact of life and those who believe the dragon can and should be destroyed, the dragon is finally killed. Bostrom argues that political inaction allowed many preventable human deaths to occur. [ 88 ]
Controversy about life extension is due to fear of overpopulation and possible effects on society. [ 89 ] Biogerontologist Aubrey De Grey counters the overpopulation critique by pointing out that the therapy could postpone or eliminate menopause , allowing women to space out their pregnancies over more years and thus decreasing the yearly population growth rate. [ 90 ] Moreover, the philosopher and futurist Max More argues that, given that the worldwide population growth rate is slowing down and is projected to eventually stabilize and begin falling, superlongevity would be unlikely to contribute to overpopulation. [ 89 ]
A Spring 2013 Pew Research poll in the United States found that 38% of Americans would want life extension treatments, while 56% would reject them. However, it also found that 68% believed most people would want such treatments and that only 4% consider an "ideal lifespan" to be more than 120 years. The median "ideal lifespan" was 91 years of age, and the majority of the public (63%) viewed medical advances aimed at prolonging life as generally good. 41% of Americans believed that radical life extension (RLE) would be good for society, while 51% believed it would be bad for society. [ 91 ] One possible reason why 56% of Americans claim they would reject life extension treatments is the cultural perception that living longer would result in a longer period of decrepitude, and that the elderly in our current society are unhealthy. [ 92 ]
Religious people are no more likely to oppose life extension than the unaffiliated, [ 91 ] though some variation exists between religious denominations.
Most mainstream medical organizations and practitioners do not consider aging to be a disease. Biologist David Sinclair says: "I don't see aging as a disease, but as a collection of quite predictable diseases caused by the deterioration of the body." [ 93 ] The two main arguments used are that aging is both inevitable and universal while diseases are not. [ 94 ] However, not everyone agrees. Harry R. Moody, director of academic affairs for AARP , notes that what is normal and what is disease strongly depend on historical context. [ 95 ] David Gems , assistant director of the Institute of Healthy Ageing, argues that aging should be viewed as a disease. [ 96 ] In response to the argument from the universality of aging, Gems notes that it is as misleading as arguing that Basenji are not dogs because they do not bark. [ 97 ] Because of the universality of aging, he calls it a "special sort of disease". Robert M. Perlman coined the terms "aging syndrome" and "disease complex" in 1954 to describe aging. [ 98 ]
The question of whether aging should be viewed as a disease has important implications. One view is that classifying aging as a disease would stimulate pharmaceutical companies to develop life extension therapies, and that in the United States it would also increase the regulation of the anti-aging market by the Food and Drug Administration (FDA). Anti-aging currently falls under the regulations for cosmetic medicine, which are less stringent than those for drugs. [ 97 ] [ 99 ]
A senolytic (from the words senescence and -lytic , "destroying") is among a class of small molecules under basic research to determine if they can selectively induce death of senescent cells and improve health in humans. [ 100 ] A goal of this research is to discover or develop agents to delay, prevent, alleviate, or reverse age-related diseases. [ 101 ] [ 102 ] Removal of senescent cells with senolytics has been proposed as a method of enhancing immunity during aging. [ 103 ]
Senolytics eliminate senescent cells, whereas senomorphics – with candidates such as apigenin , everolimus and rapamycin – modulate properties of senescent cells without eliminating them, suppressing phenotypes of senescence, including the SASP . [ 105 ] [ 106 ] Senomorphic effects may be one major mechanism of action of a range of prolongevity drug candidates; however, such candidates are typically studied for multiple mechanisms rather than just one. There are biological databases of prolongevity drug candidates under research as well as of potential gene/protein targets. These are enhanced by longitudinal cohort studies , electronic health records , computational (drug) screening methods, computational biomarker-discovery methods, and computational biodata-interpretation/ personalized medicine methods. [ 107 ] [ 108 ] [ 109 ]
Besides rapamycin and senolytics, the drug-repurposing candidates studied most extensively include metformin , acarbose , spermidine and NAD+ enhancers. [ 110 ]
Many prolongevity drugs are synthetic alternatives or potential complements to existing nutraceuticals, such as various sirtuin-activating compounds under investigation like SRT2104 . [ 111 ] In some cases pharmaceutical administration is combined with that of nutraceuticals – such as in the case of glycine combined with NAC . [ 112 ] Studies are often organized around specific prolongevity targets, listing both nutraceuticals and pharmaceuticals (together or separately), such as FOXO3 activators. [ 113 ]
Researchers are also exploring ways to mitigate side-effects from such substances (perhaps most notably rapamycin and its derivatives ), for example via protocols of intermittent administration, [ 114 ] [ 106 ] [ 105 ] [ 115 ] [ 116 ] and have called for research that helps determine optimal treatment schedules (including timing) in general. [ 117 ]
The free-radical theory of aging suggests that antioxidant supplements might extend human life. Reviews, however, have found that use of vitamin A (as β-carotene) and vitamin E supplements may increase mortality. [ 118 ] [ 119 ] Other reviews have found no relationship between vitamin E or other vitamins and mortality. [ 120 ] Vitamin D supplementation at various dosages is being investigated in trials, [ 121 ] and there is also research into GlyNAC (see above ) . [ 112 ]
Complications of antioxidant supplementation (especially continuous high dosages far above the RDA ) include that reactive oxygen species (ROS), which are mitigated by antioxidants, "have been found to be physiologically vital for signal transduction, gene regulation, and redox regulation, among others, implying that their complete elimination would be harmful". In particular, one of multiple ways they can be detrimental is by inhibiting adaptation to exercise such as muscle hypertrophy (e.g. during dedicated periods of caloric surplus). [ 122 ] [ 123 ] [ 124 ] There is also research into stimulating, activating, or fueling endogenous antioxidant generation, for example with the nutraceutical glycine and the pharmaceutical NAC. [ 125 ] Antioxidants can change the oxidation status of different tissues, targets, or sites, each with potentially different implications, especially at different concentrations. [ 126 ] [ 127 ] [ 128 ] [ 129 ] A review suggests mitochondria have a hormetic response to ROS, whereby low oxidative damage can be beneficial. [ 130 ]
As of 2021, there is no clinical evidence that any dietary restriction practice contributes to human longevity. [ 131 ]
Research suggests that increasing adherence to Mediterranean diet patterns is associated with a reduction in total and cause-specific mortality, extending health- and lifespan. [ 132 ] [ 133 ] [ 134 ] [ 135 ] Research is identifying the key beneficial components of the Mediterranean diet. [ 136 ] [ 137 ] Studies suggest dietary changes are a factor in national rises in lifespan. [ 138 ]
Approaches to develop optimal diets for health- and lifespan (or "longevity diets") [ 139 ] include:
Further advanced biosciences-based approaches include:
There is a need for, and ongoing research into, the development of aging biomarkers such as the epigenetic clock "to assess the ageing process and the efficacy of interventions to bypass the need for large-scale longitudinal studies". [ 159 ] [ 108 ] Such biomarkers may also include in vivo brain imaging . [ 165 ]
Reviews sometimes include structured tables that provide systematic overviews of intervention/drug candidates with a review calling for integrating "current knowledge with multi-omics, health records, and drug safety data to predict drugs that can improve health in late life" and listing major outstanding questions . [ 107 ] Biological databases of prolongevity drug candidates under research as well as of potential gene/protein targets include GenAge, DrugAge and Geroprotectors. [ 107 ] [ 166 ]
A review has pointed out that the approach of "'epidemiological' comparison of how a low versus a high consumption of an isolated macronutrient and its association with health and mortality may not only fail to identify protective or detrimental nutrition patterns but may lead to misleading interpretations". It proposes a multi-pillar approach and summarizes findings towards constructing refined longevity diets that consider multiple body systems and are dynamic and personalized, at minimum by age. According to the study, epidemiological-type observational studies included in meta-analyses should at least be complemented by "(1) basic research focused on lifespan and healthspan, (2) carefully controlled clinical trials, and (3) studies of individuals and populations with record longevity". [ 139 ]
The anti-aging industry offers several hormone therapies . Some of these have been criticized for possible dangers and a lack of proven effect. For example, the American Medical Association has been critical of some anti-aging hormone therapies. [ 3 ]
While growth hormone (GH) decreases with age, the evidence for use of growth hormone as an anti-aging therapy is mixed and based mostly on animal studies. Reports are mixed on whether GH or IGF-1 modulates the aging process in humans, and on whether the direction of its effect is positive or negative. [ 167 ]
Klotho [ 151 ] [ 168 ] and exerkines [ 156 ] (see above ) like irisin [ 169 ] are being investigated for potential pro-longevity therapies.
Loneliness /isolation, social life and support, [ 135 ] [ 170 ] exercise/physical activity (partly via neurobiological effects and increased NAD+ levels), [ 135 ] [ 171 ] [ 159 ] [ 160 ] [ 172 ] [ 173 ] psychological characteristics/personality (possibly highly indirectly), [ 174 ] [ 175 ] sleep duration, [ 135 ] circadian rhythms (patterns of sleep, drug-administration and feeding), [ 176 ] [ 177 ] [ 178 ] type of leisure activities, [ 135 ] not smoking, [ 135 ] altruistic emotions and behaviors, [ 179 ] [ 180 ] subjective well-being , [ 181 ] mood [ 135 ] and stress (including via heat shock protein ) [ 135 ] [ 182 ] are investigated as potential (modulatable) factors of life extension.
Healthy lifestyle practices and healthy diet have been suggested as "first-line function-preserving strategies, with pharmacological agents, including existing and new pharmaceuticals and novel 'nutraceutical' compounds, serving as potential complementary approaches". [ 183 ]
Collectively, addressing common causes of death could extend lifespans of populations and humanity overall. For instance, a 2020 study indicates that the global mean loss of life expectancy (LLE) from air pollution in 2015 was 2.9 years, substantially more than, for example, 0.3 years from all forms of direct violence, albeit a significant fraction of the LLE (a measure similar to years of potential life lost ) is considered to be unavoidable. [ 185 ]
Regular screening and doctor visits have been suggested as a lifestyle-societal intervention. [ 135 ] (See also: medical test and biomarker )
Health policy and changes to standard healthcare could support the adoption of the field's conclusions – a review suggests that the longevity diet would be a "valuable complement to standard healthcare and that, taken as a preventative measure, it could aid in avoiding morbidity, sustaining health into advanced age" as a form of preventive healthcare . [ 139 ]
It has been suggested that in terms of healthy diets, Mediterranean-style diets could be promoted by countries for ensuring healthy-by-default choices ("to ensure the healthiest choice is the easiest choice") and with highly effective measures including dietary education , food checklists and recipes that are "simple, palatable, and affordable". [ 186 ]
A review suggests that "targeting the aging process per se may be a far more effective approach to prevent or delay aging-associated pathologies than treatments specifically targeted to particular clinical conditions". [ 187 ]
Low ambient temperature, a physical factor affecting free radical levels, has been identified as a treatment producing exceptional lifespan increases in Drosophila melanogaster and other organisms. [ 188 ]
Some clinics currently offer injections of blood products from young donors. The alleged benefits of the treatment, none of which have been demonstrated in a proper study, include a longer life, darker hair, better memory, better sleep, and the curing of heart disease, diabetes and Alzheimer's disease. [ 189 ] [ 190 ] [ 191 ] [ 192 ] [ 193 ] The approach is based on parabiosis studies such as those Irina Conboy has done on mice, but Conboy says young blood does not reverse aging (even in mice) and that those who offer those treatments have misunderstood her research. [ 190 ] [ 191 ] Neuroscientist Tony Wyss-Coray, who also studied blood exchanges in mice as recently as 2014, said people offering those treatments are "basically abusing people's trust" [ 194 ] [ 191 ] and that young blood treatments are "the scientific equivalent of fake news". [ 195 ] The treatment appeared in HBO's fictional series Silicon Valley . [ 194 ]
Two clinics in California, run by Jesse Karmazin and David C. Wright, [ 189 ] offer $8,000 injections of plasma extracted from the blood of young people. Karmazin has not published in any peer-reviewed journal and his current study does not use a control group. [ 195 ] [ 194 ] [ 189 ] [ 191 ]
Fecal microbiota transplantation [ 196 ] [ 197 ] and probiotics are being investigated as means for life and healthspan extension. [ 198 ] [ 199 ] [ 200 ]
One hypothetical future strategy that, as some suggest, "eliminates" the complications related to a physical body involves the copying or transferring (e.g. by progressively replacing neurons with transistors) of a conscious mind from a biological brain to a non-biological computer system or computational device. The basic idea is to scan the structure of a particular brain in detail, and then construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain. [ 201 ] Whether or not an exact copy of one's mind constitutes actual life extension is a matter of debate.
However, critics argue that the uploaded mind would simply be a clone and not a true continuation of a person's consciousness. [ 202 ]
Some scientists believe that the dead may one day be "resurrected" through simulation technology. [ 203 ] | https://en.wikipedia.org/wiki/Life_extension |
Life on Earth: A Natural History by David Attenborough is a British television natural history series made by the BBC in association with Warner Bros. Television and Reiner Moritz Productions . It was transmitted in the UK from 16 January 1979.
During the course of the series, presenter David Attenborough , following the format established by Kenneth Clark 's Civilisation and Jacob Bronowski 's The Ascent of Man (both series commissioned by Attenborough during his earlier career as a BBC executive), travels the globe in order to trace the story of the evolution of life on the planet. Like the earlier series, it was divided into 13 programmes (each of around 55 minutes' duration). The executive producer was Christopher Parsons and the music was composed by Edward Williams .
At a cost exceeding £1 million ($1.2 million), it was an immense project that involved filming at over 100 locations around the world and took three years to make, with a team of 30 people and the help of more than 500 scientists. [ 1 ] [ 2 ] Highly acclaimed as a milestone in the history of British wildlife television, it established Attenborough not only as the foremost television naturalist, but also as an iconic figure in British cultural life. [ 2 ] It is the first in Attenborough's Life series of programmes and was followed by The Living Planet (1984).
Several special filming techniques were devised to obtain some of the footage of rare and elusive animals. One cameraman spent hundreds of hours waiting for the fleeting moment when a Darwin's frog , which incubates its young in its mouth, finally spat them out. Another built a replica of a mole rat burrow in a horizontally mounted wheel, so that as the mole rat ran along the tunnel, the wheel could be spun to keep the animal adjacent to the camera. To illustrate the motion of bats ' wings in flight, a slow-motion sequence was filmed in a wind tunnel . The series was also the first to include footage of a live (although dying) coelacanth .
The cameramen took advantage of improved film stock to produce some of the sharpest and most colourful wildlife footage that had been seen to date.
The programmes also pioneered a style of presentation whereby David Attenborough would begin describing a certain species' behaviour in one location, before cutting to another to complete his illustration. Continuity was maintained, despite such sequences being filmed several months and thousands of miles apart.
The best remembered sequence occurs in the twelfth episode, when Attenborough encounters a group of mountain gorillas in Dian Fossey 's sanctuary in Rwanda . The primates had become used to humans through years of being studied by researchers. Attenborough originally intended merely to get close enough to narrate a piece about the apes' use of the opposable thumb , but as he advanced on all fours toward the area where they were feeding, he suddenly found himself face to face with an adult female. Discarding his scripted speech, he turned to camera and delivered a whispered ad lib :
There is more meaning and mutual understanding in exchanging a glance with a gorilla than with any other animal I know. Their sight, their hearing, their sense of smell are so similar to ours that they see the world in much the same way as we do. We live in the same sort of social groups with largely permanent family relationships. They walk around on the ground as we do, though they are immensely more powerful than we are. So if there were ever a possibility of escaping the human condition and living imaginatively in another creature's world, it must be with the gorilla . The male is an enormously powerful creature but he only uses his strength when he is protecting his family and it is very rare that there is violence within the group. So it seems really very unfair that man should have chosen the gorilla to symbolise everything that is aggressive and violent, when that is the one thing that the gorilla is not—and that we are.
When Attenborough returned to the site the next day, the female and two young gorillas began to groom and play with him. In his memoirs, Attenborough describes this as "one of the most exciting encounters of my life". He subsequently discovered, to his chagrin, that only a few seconds had been recorded: the cameraman was running low on film and wanted to save it for the planned description of the opposable thumb. [ 3 ]
In 1999 viewers of Channel 4 voting for the 100 Greatest TV Moments placed the gorilla sequence at number 12—ranking it ahead of Queen Elizabeth II 's coronation and the wedding of Charles and Diana .
The series attracted a weighted average of 15 million viewers in the UK, an exceptionally high figure for a BBC documentary back in the late 1970s. [ 4 ] It was also a major international success, being sold to over 100 territories and watched by an estimated audience of 500 million people worldwide. [ 4 ] [ 5 ] [ 6 ] However, Life on Earth did not generate the same revenue for the BBC as later Attenborough series because the corporation signed away the American and European rights to their co-production partners, Warner Bros. and Reiner Moritz . [ 7 ]
It was nominated for four BAFTA TV awards and won the Broadcasting Press Guild Award for Best Documentary Series. [ 8 ] In a list of the 100 Greatest British Television Programmes drawn up by the British Film Institute in 2000, voted for by industry professionals, Life on Earth was placed 32nd.
A shortened series, using the footage and commentary from the original, was aired in 1997, edited down to three episodes: early life forms, plants, insects, and amphibians in the first; fish, birds and reptiles in the second; and mammals in the third.
The series is available in the UK for Regions 2 and 4 as a four-disc DVD set (BBCDVD1233, released 1 September 2003) and as part of The Life Collection .
In 2012 the series was released as a four-disc Blu-ray set (released 12 November 2012).
A hardback book, Life on Earth by David Attenborough, was published in 1979 and became a worldwide bestseller . Its cover image of a Panamanian red-eyed tree frog , taken by Attenborough himself, [ 9 ] became an instantly recognisable emblem of the series. The book is currently out of print.
A revised and updated edition of the book was published in 2018 to favourable reviews. Most if not all of the images in the 2018 edition are new, but the text remains substantially the same as the original.
Edward Williams ' avant-garde score matched the innovative production techniques of the television series. Williams used a traditional chamber music ensemble of harp , flute , clarinet , strings and percussion combined with electronic sounds. The pieces were crafted scene-by-scene to synchronise with and complement the imagery on screen: in one sequence examining the flight of birds, the instrumentation mirrors each new creature's appearance. The sounds were processed through an early British synthesiser , the EMS VCS 3 , to create the score's evocative sound.
"I started using the filters and voltage control of the VCS 3 on conventionally created classical sounds by the orchestra. It made possible all sorts of marvellous explorations of new sounds which could then be made into music."
The score was never intended to be released commercially, but Williams had 100 copies pressed as gifts for the musicians involved. One of these LPs found its way into the hands of Jonny Trunk, owner of independent label Trunk Records , who negotiated the licence from the BBC. The soundtrack was finally released on 2 November 2009. [ 9 ]
| https://en.wikipedia.org/wiki/Life_on_Earth_(TV_series) |
The possibility of life on Mars is a subject of interest in astrobiology due to the planet 's proximity and similarities to Earth . To date, no conclusive evidence of past or present life has been found on Mars. Cumulative evidence suggests that during the ancient Noachian time period, the surface environment of Mars had liquid water and may have been habitable for microorganisms, but habitable conditions do not necessarily indicate life. [ 1 ] [ 2 ]
Scientific searches for evidence of life began in the 19th century and continue today via telescopic investigations and deployed probes, searching for water, chemical biosignatures in the soil and rocks at the planet's surface, and biomarker gases in the atmosphere. [ 3 ]
Mars is of particular interest for the study of the origins of life because of its similarity to the early Earth. This is especially true since Mars has a cold climate and lacks plate tectonics or continental drift , so it has remained almost unchanged since the end of the Hesperian period. At least two-thirds of Mars' surface is more than 3.5 billion years old, and it could have been habitable 4.48 billion years ago, 500 million years before the earliest known Earth lifeforms; [ 4 ] Mars may thus hold the best record of the prebiotic conditions leading to life, even if life does not or has never existed there. [ 5 ] [ 6 ]
Following the confirmation of the past existence of surface liquid water, the Curiosity , Perseverance and Opportunity rovers started searching for evidence of past life, including a past biosphere based on autotrophic , chemotrophic , or chemolithoautotrophic microorganisms , as well as ancient water, including fluvio-lacustrine environments ( plains related to ancient rivers or lakes) that may have been habitable. [ 7 ] [ 8 ] [ 9 ] [ 10 ] The search for evidence of habitability, fossils , and organic compounds on Mars is now a primary objective for space agencies .
The discovery of organic compounds inside sedimentary rocks and of boron on Mars are of interest as they are precursors for prebiotic chemistry . Such findings, along with previous discoveries that liquid water was clearly present on ancient Mars, further supports the possible early habitability of Gale Crater on Mars. [ 11 ] [ 12 ] Currently, the surface of Mars is bathed with ionizing radiation , and Martian soil is rich in perchlorates toxic to microorganisms . [ 13 ] [ 14 ] Therefore, the consensus is that if life exists—or existed—on Mars, it could be found or is best preserved in the subsurface, away from present-day harsh surface processes.
In June 2018, NASA announced the detection of seasonal variation of methane levels on Mars. Methane could be produced by microorganisms or by geological means. [ 15 ] The European ExoMars Trace Gas Orbiter started mapping the atmospheric methane in April 2018, and the 2022 ExoMars rover Rosalind Franklin was planned to drill and analyze subsurface samples before the programme's indefinite suspension, while the NASA Mars 2020 rover Perseverance , having landed successfully, will cache dozens of drill samples for their potential transport to Earth laboratories in the late 2020s or 2030s. As of February 8, 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine ) and Mars (via methane ) was reported. [ 16 ] In October 2024, NASA announced that it may be possible for photosynthesis to occur within dusty water ice exposed [ 17 ] in the mid-latitude regions of Mars. [ 18 ]
Mars's polar ice caps were discovered in the mid-17th century. [ citation needed ] In the late 18th century, William Herschel proved they grow and shrink alternately, in the summer and winter of each hemisphere. By the mid-19th century, astronomers knew that Mars had certain other similarities to Earth , for example that the length of a day on Mars was almost the same as a day on Earth. They also knew that its axial tilt was similar to Earth's, which meant it experienced seasons just as Earth does—but of nearly double the length owing to its much longer year . These observations led to increasing speculation that the darker albedo features were water and the brighter ones were land, whence followed speculation on whether Mars may be inhabited by some form of life. [ 19 ]
In 1854, William Whewell , a fellow of Trinity College , Cambridge, theorized that Mars had seas, land and possibly life forms. [ 20 ] Speculation about life on Mars exploded in the late 19th century, following telescopic observation by some observers of apparent Martian canals —which were later found to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, [ 21 ] proposing that the canals were the work of a long-gone civilization. [ 22 ] This idea led British writer H. G. Wells to write The War of the Worlds in 1897, telling of an invasion by aliens from Mars who were fleeing the planet's desiccation . [ 23 ]
The 1907 book Is Mars Habitable? by British naturalist Alfred Russel Wallace was a reply to, and refutation of, Lowell's Mars and Its Canals . Wallace's book concluded that Mars "is not only uninhabited by intelligent beings such as Mr. Lowell postulates, but is absolutely uninhabitable." [ 24 ] Historian Charles H. Smith refers to Wallace's book as one of the first works in the field of astrobiology . [ 25 ]
Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen were present in the Martian atmosphere . [ 26 ] The influential observer Eugène Antoniadi used the 83-cm (32.6 inch) aperture telescope at Meudon Observatory at the 1909 opposition of Mars and saw no canals. Outstanding photos of Mars taken at the new Baillaud dome of the Pic du Midi observatory in 1909 also brought formal discredit to the Martian canals theory, [ 27 ] and the notion of canals began to fall out of favor. [ 26 ]
Chemical, physical, geological, and geographic attributes shape the environments on Mars. Isolated measurements of these factors may be insufficient to deem an environment habitable, but the sum of measurements can help predict locations with greater or lesser habitability potential. [ 28 ] The two current ecological approaches for predicting the potential habitability of the Martian surface use 19 or 20 environmental factors, with an emphasis on water availability, temperature, the presence of nutrients, an energy source, and protection from solar ultraviolet and galactic cosmic radiation . [ 29 ] [ 30 ]
Scientists do not know the minimum number of parameters needed to determine habitability potential, but they are certain it is greater than one or two of the factors in the table below. [ 28 ] Similarly, the habitability threshold for each group of parameters has yet to be determined. [ 28 ] Laboratory simulations show that whenever multiple lethal factors are combined, the survival rates plummet quickly. [ 31 ] There are no full-Mars simulations published yet that include all of the biocidal factors combined. [ 31 ] Furthermore, the possibility that Martian life might have a far different biochemistry and habitability requirements than the terrestrial biosphere remains an open question. A common hypothesis is methanogenic Martian life; although such organisms exist on Earth, they are largely confined to anoxic niches (numerous in absolute terms, but occupying few of the environments familiar to humans) and cannot survive in the majority of terrestrial environments that contain oxygen. [ 32 ]
Recent models have shown that, even with a dense CO 2 atmosphere, early Mars was colder than Earth has ever been. [ 33 ] [ 34 ] [ 35 ] [ 36 ] Transiently warm conditions related to impacts or volcanism could have produced conditions favoring the formation of the late Noachian valley networks, even though the mid-late Noachian global conditions were probably icy. Local warming of the environment by volcanism and impacts would have been sporadic, but there should have been many events of water flowing at the surface of Mars. [ 36 ] Both the mineralogical and the morphological evidence indicates a degradation of habitability from the mid Hesperian onward. The exact causes are not well understood but may be related to a combination of processes including loss of early atmosphere, or impact erosion, or both. [ 36 ] Billions of years ago, before this degradation, the surface of Mars was apparently fairly habitable, with liquid water and clement weather, though it is unknown whether life existed on Mars. [ 37 ]
The loss of the Martian magnetic field strongly affected surface environments through atmospheric loss and increased radiation; this change significantly degraded surface habitability. [ 39 ] When there was a magnetic field, the atmosphere would have been protected from erosion by the solar wind , which would ensure the maintenance of a dense atmosphere, necessary for liquid water to exist on the surface of Mars. [ 40 ] The loss of the atmosphere was accompanied by decreasing temperatures. Part of the liquid water inventory sublimed and was transported to the poles, while the rest became trapped in permafrost , a subsurface ice layer. [ 36 ]
Observations on Earth and numerical modeling have shown that a crater-forming impact can result in the creation of a long-lasting hydrothermal system when ice is present in the crust. For example, a crater 130 km across could sustain an active hydrothermal system for up to 2 million years, that is, long enough for microscopic life to emerge, [ 36 ] though such life would have been unlikely to progress any further down the evolutionary path. [ 41 ]
Soil and rock samples studied in 2013 by NASA's Curiosity rover's onboard instruments brought about additional information on several habitability factors. [ 42 ] The rover team identified some of the key chemical ingredients for life in this soil, including sulfur , nitrogen , hydrogen , oxygen, phosphorus and possibly carbon , as well as clay minerals, suggesting a long-ago aqueous environment—perhaps a lake or an ancient streambed—that had neutral acidity and low salinity. [ 42 ] On December 9, 2013, NASA reported that, based on evidence from Curiosity studying Aeolis Palus , Gale Crater contained an ancient freshwater lake which could have been a hospitable environment for microbial life . [ 43 ] [ 44 ] The confirmation that liquid water once flowed on Mars, the existence of nutrients, and the previous discovery of a past magnetic field that protected the planet from cosmic and solar radiation, [ 45 ] [ 46 ] together strongly suggest that Mars could have had the environmental factors to support life. [ 47 ] [ 48 ] The assessment of past habitability is not in itself evidence that Martian life has ever actually existed. If it did, it was probably microbial , existing communally in fluids or on sediments, either free-living or as biofilms , respectively. [ 39 ] The exploration of terrestrial analogues provides clues as to how and where best to look for signs of life on Mars. [ 49 ]
Impactite , shown to preserve signs of life on Earth, was discovered on Mars and could contain signs of ancient life, if life ever existed on the planet. [ 50 ]
On June 7, 2018, NASA announced that the Curiosity rover had discovered organic molecules in sedimentary rocks dating to three billion years old. [ 51 ] [ 52 ] The detection of organic molecules in rocks indicates that some of the building blocks for life were present. [ 53 ] [ 54 ]
Research into how the conditions for habitability ended is ongoing. On October 7, 2024, NASA announced that the results of the previous three years of sampling onboard Curiosity suggested that, based on high carbon-13 and oxygen-18 levels in the regolith, the early Martian atmosphere was less likely than previously thought to be stable enough to support surface water hospitable to life, with rapid wetting-drying cycles and very high-salinity cryogenic brines providing potential explanations. [ 55 ] [ 56 ]
Conceivably, if life exists (or existed) on Mars, evidence of life could be found, or is best preserved, in the subsurface, away from present-day harsh surface conditions. [ 57 ] Present-day life on Mars, or its biosignatures, could occur kilometers below the surface, or in subsurface geothermal hot spots, or it could occur a few meters below the surface. The permafrost layer on Mars is only a couple of centimeters below the surface, and salty brines can be liquid a few centimeters below that but not far down. Water is close to its boiling point even at the deepest points in the Hellas basin, and so cannot remain liquid for long on the surface of Mars in its present state, except after a sudden release of underground water. [ 58 ] [ 59 ] [ 60 ]
So far, NASA has pursued a "follow the water" strategy on Mars and has not searched for biosignatures for life there directly since the Viking missions. The consensus by astrobiologists is that it may be necessary to access the Martian subsurface to find currently habitable environments. [ 57 ]
In 1965, the Mariner 4 probe discovered that Mars had no global magnetic field that would protect the planet from potentially life-threatening cosmic radiation and solar radiation ; observations made in the late 1990s by the Mars Global Surveyor confirmed this discovery. [ 61 ] Scientists speculate that the lack of magnetic shielding helped the solar wind blow away much of Mars's atmosphere over the course of several billion years. [ 62 ] As a result, the planet has been vulnerable to radiation from space for about 4 billion years. [ 63 ]
Recent in-situ data from Curiosity rover indicates that ionizing radiation from galactic cosmic rays (GCR) and solar particle events (SPE) may not be a limiting factor in habitability assessments for present-day surface life on Mars. The level of 76 mGy per year measured by Curiosity is similar to levels inside the ISS. [ 64 ]
The Curiosity rover measured ionizing radiation levels of 76 mGy per year. [ 65 ] This level of ionizing radiation is sterilizing for dormant life on the surface of Mars. Mars's habitability varies considerably with its orbital eccentricity and the tilt of its axis. If surface life was reanimated as recently as 450,000 years ago, then, according to one estimate, rovers on Mars could find dormant but still viable life at a depth of one meter below the surface. [ 66 ] Even the hardiest cells known could not possibly survive the cosmic radiation near the surface of Mars since Mars lost its protective magnetosphere and atmosphere. [ 67 ] [ 68 ] After mapping cosmic radiation levels at various depths on Mars, researchers have concluded that over time, any life within the first several meters of the planet's surface would be killed by lethal doses of cosmic radiation. [ 67 ] [ 69 ] [ 70 ] The team calculated that the cumulative damage to DNA and RNA by cosmic radiation would limit retrieving viable dormant cells on Mars to depths greater than 7.5 meters below the planet's surface. [ 69 ] Even the most radiation-tolerant terrestrial bacteria would survive in dormant spore state only 18,000 years at the surface; at 2 meters, the greatest depth the ExoMars rover is capable of reaching, survival time would be 90,000 to half a million years, depending on the type of rock. [ 71 ]
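To put these figures together (an illustrative back-of-the-envelope calculation based only on the numbers quoted above, not a published result), the accumulated dose at the quoted surface rate is simply the rate multiplied by the exposure time: $D \approx 0.076\ \mathrm{Gy/yr} \times 1.8\times10^{4}\ \mathrm{yr} \approx 1.4\ \mathrm{kGy}$ over 18,000 years, and $D \approx 0.076\ \mathrm{Gy/yr} \times 5\times10^{5}\ \mathrm{yr} \approx 38\ \mathrm{kGy}$ over half a million years. These are kilogray-scale doses that dormant, non-repairing cells would accumulate near the surface; the published survival estimates cited above rest on more detailed dose-versus-depth modeling.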
Data collected by the Radiation assessment detector (RAD) instrument on board the Curiosity rover revealed that the absorbed dose measured is 76 mGy /year at the surface, [ 72 ] and that " ionizing radiation strongly influences chemical compositions and structures, especially for water, salts, and redox-sensitive components such as organic molecules." [ 72 ] Regardless of the source of Martian organic compounds (meteoric, geological, or biological), their carbon bonds are susceptible to breaking and reconfiguring with surrounding elements by ionizing charged particle radiation. [ 72 ] These improved subsurface radiation estimates give insight into the potential for the preservation of possible organic biosignatures as a function of depth as well as survival times of possible microbial or bacterial life forms left dormant beneath the surface. [ 72 ] The report concludes that the in situ "surface measurements—and subsurface estimates—constrain the preservation window for Martian organic matter following exhumation and exposure to ionizing radiation in the top few meters of the Martian surface." [ 72 ]
In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled and were associated with an aurora 25 times brighter than any observed earlier, due to a major, and unexpected, solar storm in the middle of the month. [ 73 ]
On UV radiation, a 2014 report concludes [ 74 ] that "[T]he Martian UV radiation environment is rapidly lethal to unshielded microbes but can be attenuated by global dust storms and shielded completely by < 1 mm of regolith or by other organisms." In addition, laboratory research published in July 2017 demonstrated that UV irradiated perchlorates cause a 10.8-fold increase in cell death when compared to cells exposed to UV radiation after 60 seconds of exposure. [ 75 ] [ 76 ] The penetration depth of UV radiation into soils is in the sub-millimeter to millimeter range and depends on the properties of the soil. [ 76 ] A recent study found that photosynthesis could occur within dusty ice exposed in the Martian mid-latitudes because the overlying dusty ice blocks the harmful ultraviolet radiation at Mars’ surface. [ 77 ]
The Martian regolith is known to contain up to 0.5% (w/v) perchlorate (ClO 4 − ), which is toxic for most living organisms. [ 78 ] However, because perchlorates drastically lower the freezing point of water, and because a few extremophiles can use them as an energy source (see Perchlorates - Biology ) and grow at concentrations of up to 30% (w/v) sodium perchlorate [ 79 ] by physiologically adapting to increasing perchlorate concentrations, [ 80 ] their presence has prompted speculation about what their influence would be on habitability. [ 75 ] [ 79 ] [ 81 ] [ 82 ] [ 83 ]
Research published in July 2017 shows that when irradiated with a simulated Martian UV flux, perchlorates become even more lethal to bacteria ( bactericide ). Even dormant spores lost viability within minutes. [ 75 ] In addition, two other compounds of the Martian surface, iron oxides and hydrogen peroxide , act in synergy with irradiated perchlorates to cause a 10.8-fold increase in cell death when compared to cells exposed to UV radiation after 60 seconds of exposure. [ 75 ] [ 76 ] It was also found that abraded silicates (quartz and basalt) lead to the formation of toxic reactive oxygen species . [ 84 ] The researchers concluded that "the surface of Mars is lethal to vegetative cells and renders much of the surface and near-surface regions uninhabitable." [ 85 ] This research demonstrates that the present-day surface is more uninhabitable than previously thought, [ 75 ] [ 86 ] and reinforces the case for inspecting at least a few meters into the ground, where radiation levels would be relatively low. [ 86 ] [ 87 ]
However, researcher Kennda Lynch discovered the first-known instance of a habitat containing perchlorates and perchlorate-reducing bacteria in an analog environment: a paleolake in Pilot Valley, Great Salt Lake Desert , Utah, United States. [ 88 ] She has been studying the biosignatures of these microbes, and is hoping that the Mars Perseverance rover will find matching biosignatures at its Jezero Crater site. [ 89 ] [ 90 ]
Recurrent slope lineae (RSL) features form on Sun-facing slopes at times of the year when the local temperatures reach above the melting point for ice. The streaks grow in spring, widen in late summer and then fade away in autumn. This is hard to model in any other way except as involving liquid water in some form, though the streaks themselves are thought to be a secondary effect and not a direct indication of the dampness of the regolith. Although these features are now confirmed to involve liquid water in some form, the water could be either too cold or too salty for life. At present they are treated as potentially habitable, as "Uncertain Regions, to be treated as Special Regions". [ 91 ] [ 92 ] They were originally suspected to involve flowing brines. [ 93 ] [ 94 ] [ 95 ] [ 96 ]
The thermodynamic availability of water ( water activity ) strictly limits microbial propagation on Earth, particularly in hypersaline environments, and there are indications that the brine ionic strength is a barrier to the habitability of Mars. Experiments show that high ionic strength , driven to extremes on Mars by the ubiquitous occurrence of divalent ions, "renders these environments uninhabitable despite the presence of biologically available water." [ 97 ]
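As background on the term (a standard definition assumed here, not given in the source), water activity is the ratio of the equilibrium vapor pressure of water over a solution to that over pure water at the same temperature, $a_w = p / p_0$, so pure water has $a_w = 1$ and concentrated brines have values well below 1. Most terrestrial microbes cannot propagate below roughly $a_w \approx 0.6$, which is why highly saline Martian brines are considered marginal habitats.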
After carbon, nitrogen is arguably the most important element needed for life. Thus, measurements of nitrate over the range of 0.1% to 5% are required to address the question of its occurrence and distribution. There is nitrogen (as N 2 ) in the atmosphere at low levels, but this is not adequate to support nitrogen fixation for biological incorporation. [ 98 ] Nitrogen in the form of nitrate could be a resource for human exploration both as a nutrient for plant growth and for use in chemical processes. On Earth, nitrates correlate with perchlorates in desert environments, and this may also be true on Mars. Nitrate is expected to be stable on Mars and to have formed by thermal shock from impact or volcanic plume lightning on ancient Mars. [ 99 ]
On March 24, 2015, NASA reported that the SAM instrument on the Curiosity rover detected nitrates by heating surface sediments. The nitrogen in nitrate is in a "fixed" state, meaning that it is in an oxidized form that can be used by living organisms . The discovery supports the notion that ancient Mars may have been hospitable for life. [ 99 ] [ 100 ] [ 101 ] It is suspected that all nitrate on Mars is a relic, with no modern contribution. [ 102 ] Nitrate abundance ranges from non-detection to 681 ± 304 mg/kg in the samples examined until late 2017. [ 102 ] Modeling indicates that the transient condensed water films on the surface should be transported to lower depths (≈10 m) potentially transporting nitrates, where subsurface microorganisms could thrive. [ 103 ]
In contrast, phosphate, one of the chemical nutrients thought to be essential for life, is readily available on Mars. [ 104 ]
Further complicating estimates of the habitability of the Martian surface is the fact that very little is known about the growth of microorganisms at pressures close to those on the surface of Mars. Some teams determined that some bacteria may be capable of cellular replication down to 25 mbar, but that is still above the atmospheric pressures found on Mars (range 1–14 mbar). [ 105 ] In another study, twenty-six strains of bacteria were chosen based on their recovery from spacecraft assembly facilities, and only Serratia liquefaciens strain ATCC 27592 exhibited growth at 7 mbar, 0 °C, and CO 2 -enriched anoxic atmospheres. [ 105 ]
Liquid water is a necessary but not sufficient condition for life as humans know it, as habitability is a function of a multitude of environmental parameters. [ 106 ] Liquid water cannot exist on the surface of Mars except at the lowest elevations for minutes or hours. [ 107 ] [ 108 ] Liquid water does not appear at the surface itself, [ 109 ] but it could form in minuscule amounts around dust particles in snow heated by the Sun. [ 110 ] [ 111 ] [ unreliable source? ] Also, the ancient equatorial ice sheets beneath the ground may slowly sublimate or melt, accessible from the surface via caves. [ 112 ] [ 113 ] [ 114 ] [ 115 ]
Water on Mars exists almost exclusively as water ice, located in the Martian polar ice caps and under the shallow Martian surface even at more temperate latitudes. [ 119 ] [ 120 ] A small amount of water vapor is present in the atmosphere . [ 121 ] There are no bodies of liquid water on the Martian surface because the water vapor pressure is less than 1 Pa, [ 122 ] the atmospheric pressure at the surface averages 600 pascals (0.087 psi)—about 0.6% of Earth's mean sea level pressure—and the temperature is far too low (210 K (−63 °C)), leading to immediate freezing. Despite this, about 3.8 billion years ago, [ 123 ] there was a denser atmosphere , higher temperature, and vast amounts of liquid water flowed on the surface, [ 124 ] [ 125 ] [ 126 ] [ 127 ] including large oceans. [ 128 ] [ 129 ] [ 130 ] [ 131 ] [ 132 ]
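As an illustrative comparison (triple-point values taken from standard references, not from the source), liquid water is stable only above the triple point of water, $p_\mathrm{tp} \approx 611.7\ \mathrm{Pa}$ and $T_\mathrm{tp} = 273.16\ \mathrm{K}$. The mean Martian surface pressure (about 600 Pa) sits at or below $p_\mathrm{tp}$ and the mean temperature (about 210 K) is far below $T_\mathrm{tp}$, so exposed water either freezes or sublimes and boils away rather than persisting as a liquid.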
It has been estimated that the primordial oceans on Mars would have covered between 36% [ 133 ] and 75% of the planet. [ 134 ] On November 22, 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region of Mars. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior . [ 116 ] [ 117 ] [ 118 ] Analysis of Martian sandstones, using data obtained from orbital spectrometry, suggests that the waters that previously existed on the surface of Mars would have had too high a salinity to support most Earth-like life. Tosca et al. found that the Martian water in the locations they studied all had water activity , a w ≤ 0.78 to 0.86—a level fatal to most Terrestrial life. [ 135 ] Haloarchaea , however, are able to live in hypersaline solutions, up to the saturation point. [ 136 ]
In June 2000, possible evidence for current liquid water flowing at the surface of Mars was discovered in the form of flood-like gullies. [ 137 ] [ 138 ] Additional similar images were published in 2006, taken by the Mars Global Surveyor , that suggested that water occasionally flows on the surface of Mars. The images showed changes in steep crater walls and sediment deposits, providing the strongest evidence yet that water coursed through them as recently as several years ago.
There is disagreement in the scientific community as to whether or not the recent gully streaks were formed by liquid water. Some suggest the flows were merely dry sand flows. [ 139 ] [ 140 ] [ 141 ] Others suggest it may be liquid brine near the surface, [ 142 ] [ 143 ] [ 144 ] but the exact source of the water and the mechanism behind its motion are not understood. [ 145 ]
In July 2018, scientists reported the discovery of a subglacial lake on Mars, 1.5 km (0.93 mi) below the southern polar ice cap , and extending sideways about 20 km (12 mi), the first known stable body of water on the planet. [ 146 ] [ 147 ] [ 148 ] [ 149 ] The lake was discovered using the MARSIS radar on board the Mars Express orbiter, and the profiles were collected between May 2012 and December 2015. [ 150 ] The lake is centered at 193°E, 81°S, a flat area that does not exhibit any peculiar topographic characteristics but is surrounded by higher ground, except on its eastern side, where there is a depression. [ 146 ] However, subsequent studies disagree on whether any liquid can be present at this depth without anomalous heating from the interior of the planet. [ 151 ] [ 152 ] Instead, some studies propose that other factors may have led to radar signals resembling those containing liquid water, such as clays, or interference between layers of ice and dust. [ 153 ] [ 154 ] [ 155 ]
In May 2007, the Spirit rover disturbed a patch of ground with its inoperative wheel, uncovering an area 90% rich in silica . [ 156 ] The feature is reminiscent of the effect of hot spring water or steam coming into contact with volcanic rocks. Scientists consider this as evidence of a past environment that may have been favorable for microbial life and theorize that one possible origin for the silica is the interaction of soil with acid vapors produced by volcanic activity in the presence of water. [ 157 ]
Based on Earth analogs, hydrothermal systems on Mars would be highly attractive for their potential for preserving organic and inorganic biosignatures . [ 158 ] [ 159 ] [ 160 ] For this reason, hydrothermal deposits are regarded as important targets in the exploration for fossil evidence of ancient Martian life. [ 161 ] [ 162 ] [ 163 ]
In May 2017, evidence of the earliest known life on land on Earth may have been found in 3.48-billion-year-old geyserite and other related mineral deposits (often found around hot springs and geysers ) uncovered in the Pilbara Craton of Western Australia. [ 164 ] [ 165 ] These findings may be helpful in deciding where best to search for early signs of life on the planet Mars. [ 164 ] [ 165 ]
Methane (CH 4 ) is chemically unstable in the current oxidizing atmosphere of Mars. It would quickly break down due to ultraviolet radiation from the Sun and chemical reactions with other gases. Therefore, a persistent presence of methane in the atmosphere may imply the existence of a source to continually replenish the gas.
Trace amounts of methane, at the level of several parts per billion (ppb), were first reported in Mars's atmosphere by a team at the NASA Goddard Space Flight Center in 2003. [ 166 ] [ 167 ] Large differences in the abundances were measured between observations taken in 2003 and 2006, which suggested that the methane was locally concentrated and probably seasonal. [ 168 ] On June 7, 2018, NASA announced it has detected a seasonal variation of methane levels on Mars. [ 15 ] [ 169 ] [ 53 ] [ 54 ] [ 170 ] [ 171 ] [ 172 ] [ 52 ]
The ExoMars Trace Gas Orbiter (TGO), launched in March 2016, began on April 21, 2018, to map the concentration and sources of methane in the atmosphere, [ 173 ] [ 174 ] as well as its decomposition products such as formaldehyde and methanol . As of May 2019, the Trace Gas Orbiter showed that the concentration of methane was below its detection limit (< 0.05 ppbv). [ 175 ] [ 176 ]
The principal candidates for the origin of Mars's methane include non-biological processes such as water -rock reactions, radiolysis of water, and pyrite formation, all of which produce H 2 that could then generate methane and other hydrocarbons via Fischer–Tropsch synthesis with CO and CO 2 . [ 177 ] It has also been shown that methane could be produced by a process involving water, carbon dioxide, and the mineral olivine , which is known to be common on Mars. [ 178 ] Although geologic sources of methane such as serpentinization are possible, the lack of current volcanism , hydrothermal activity or hotspots [ 179 ] is not favorable for geologic methane.
Living microorganisms , such as methanogens , are another possible source, but no evidence for the presence of such organisms has been found on Mars, [ 180 ] [ 181 ] [ 182 ] although methane was again detected by the Curiosity rover in June 2019. [ 183 ] Methanogens do not require oxygen or organic nutrients, are non-photosynthetic, use hydrogen as their energy source and carbon dioxide (CO 2 ) as their carbon source, so they could exist in subsurface environments on Mars. [ 184 ] If microscopic Martian life is producing the methane, it probably resides far below the surface, where it is still warm enough for liquid water to exist. [ 185 ]
Since the 2003 discovery of methane in the atmosphere, some scientists have been designing models and in vitro experiments testing the growth of methanogenic bacteria on simulated Martian soil, in which all four methanogen strains tested produced substantial levels of methane, even in the presence of 1.0 wt% perchlorate salt. [ 186 ]
A team led by Levin suggested that both phenomena—methane production and degradation—could be accounted for by an ecology of methane-producing and methane-consuming microorganisms. [ 187 ] [ 188 ]
Research at the University of Arkansas presented in June 2015 suggested that some methanogens could survive in Mars's low pressure. Rebecca Mickol found that in her laboratory, four species of methanogens survived low-pressure conditions that were similar to a subsurface liquid aquifer on Mars. The four species that she tested were Methanothermobacter wolfeii , Methanosarcina barkeri , Methanobacterium formicicum , and Methanococcus maripaludis . [ 184 ] In June 2012, scientists reported that measuring the ratio of hydrogen and methane levels on Mars may help determine the likelihood of life on Mars. [ 180 ] [ 181 ] According to the scientists, "low H 2 /CH 4 ratios (less than approximately 40)" would "indicate that life is likely present and active". [ 180 ] The observed ratios in the lower Martian atmosphere were "approximately 10 times" higher "suggesting that biological processes may not be responsible for the observed CH 4 ". [ 180 ] The scientists suggested measuring the H 2 and CH 4 flux at the Martian surface for a more accurate assessment. Other scientists have recently reported methods of detecting hydrogen and methane in extraterrestrial atmospheres . [ 189 ] [ 190 ]
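Reading those two figures together (a simple illustrative calculation, not stated in the source), an observed ratio roughly ten times the quoted threshold implies $\mathrm{H_2}/\mathrm{CH_4} \approx 10 \times 40 = 400$, well above the level of about 40 below which the authors would consider active biology likely.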
Even if rover missions determine that microscopic Martian life is the seasonal source of the methane, the life forms probably reside far below the surface, outside of the rover's reach. [ 191 ]
In February 2005, it was announced that the Planetary Fourier Spectrometer (PFS) on the European Space Agency 's Mars Express Orbiter had detected traces of formaldehyde in the atmosphere of Mars . Vittorio Formisano, the director of the PFS, has speculated that the formaldehyde could be the byproduct of the oxidation of methane and, according to him, would provide evidence that Mars is either extremely geologically active or harboring colonies of microbial life. [ 192 ] [ 193 ] NASA scientists consider the preliminary findings well worth a follow-up but have also rejected the claims of life. [ 194 ] [ 195 ]
The 1970s Viking program placed two identical landers on the surface of Mars tasked to look for biosignatures of microbial life on the surface. The 'Labeled Release' (LR) experiment gave a positive result for metabolism , while the gas chromatograph–mass spectrometer did not detect organic compounds . The LR was a specific experiment designed to test only a narrowly defined critical aspect of the theory concerning the possibility of life on Mars; therefore, the overall results were declared inconclusive. [ 26 ] No Mars lander mission has found meaningful traces of biomolecules or biosignatures . The claim of extant microbial life on Mars is based on old data collected by the Viking landers, currently reinterpreted as sufficient evidence of life, mainly by Gilbert Levin , [ 196 ] [ 197 ] Joseph D. Miller, [ 198 ] Navarro, [ 199 ] Giorgio Bianciardi and Patricia Ann Straat .
Assessments published in December 2010 by Rafael Navarro-Gonzáles [ 200 ] [ 201 ] [ 202 ] [ 203 ] indicate that organic compounds "could have been present" in the soil analyzed by both Viking 1 and 2. The study determined that perchlorate —discovered in 2008 by Phoenix lander [ 204 ] [ 205 ] —can destroy organic compounds when heated, and produce chloromethane and dichloromethane as a byproduct, the identical chlorine compounds discovered by both Viking landers when they performed the same tests on Mars. Because perchlorate would have broken down any Martian organics, the question of whether or not Viking found organic compounds is still wide open. [ 206 ] [ 207 ]
The Labeled Release evidence was not generally accepted initially, and to this day it lacks the consensus of the scientific community. [ 208 ]
As of 2018, there are 224 known Martian meteorites (some of which were found in several fragments). [ 209 ] These are valuable because they are the only physical samples of Mars available to Earth-bound laboratories. Some researchers have argued that microscopic morphological features found in ALH84001 are biomorphs , however this interpretation has been highly controversial and is not supported by the majority of researchers in the field. [ 210 ]
Seven criteria have been established for the recognition of past life within terrestrial geologic samples. [ 210 ]
For general acceptance of past life in a geologic sample, essentially most or all of these criteria must be met. All seven criteria have not yet been met for any of the Martian samples. [ 210 ]
In 1996, the Martian meteorite ALH84001 , a specimen that is much older than the majority of Martian meteorites that have been recovered so far, received considerable attention when a group of NASA scientists led by David S. McKay reported microscopic features and geochemical anomalies that they considered to be best explained by the rock having hosted Martian bacteria in the distant past. Some of these features resembled terrestrial bacteria, aside from their being much smaller than any known form of life. Much controversy arose over this claim, and ultimately all of the evidence McKay's team cited as evidence of life was found to be explainable by non-biological processes. Although the scientific community has largely rejected the claim ALH 84001 contains evidence of ancient Martian life, the controversy associated with it is now seen as a historically significant moment in the development of exobiology. [ 211 ] [ 212 ]
The Nakhla meteorite fell to Earth on June 28, 1911, in the locality of Nakhla, Alexandria , Egypt. [ 213 ] [ 214 ]
In 1998, a team from NASA's Johnson Space Center obtained a small sample for analysis. Researchers found preterrestrial aqueous alteration phases and objects [ 215 ] of the size and shape consistent with Earthly fossilized nanobacteria .
In 2000, analysis by gas chromatography and mass spectrometry (GC-MS) examined its high molecular weight polycyclic aromatic hydrocarbons , and NASA scientists concluded that as much as 75% of the organic compounds in Nakhla "may not be recent terrestrial contamination". [ 210 ] [ 216 ]
This caused additional interest in this meteorite, so in 2006, NASA managed to obtain an additional and larger sample from the London Natural History Museum. On this second sample, a large dendritic carbon content was observed. When the results and evidence were published in 2006, some independent researchers claimed that the carbon deposits are of biologic origin. It was remarked that since carbon is the fourth most abundant element in the Universe , finding it in curious patterns is not indicative or suggestive of biological origin. [ 217 ] [ 218 ]
The Shergotty meteorite , a Martian meteorite weighing 4 kilograms (8.8 lb), fell to Earth at Shergotty , India on August 25, 1865, and was retrieved by witnesses almost immediately. [ 219 ] It is composed mostly of pyroxene and thought to have undergone preterrestrial aqueous alteration for several centuries. Certain features in its interior suggest remnants of a biofilm and its associated microbial communities. [ 210 ]
Yamato 000593 is the second largest meteorite from Mars found on Earth. Studies suggest the Martian meteorite was formed about 1.3 billion years ago from a lava flow on Mars . An impact occurred on Mars about 12 million years ago and ejected the meteorite from the Martian surface into space . The meteorite landed on Earth in Antarctica about 50,000 years ago. The mass of the meteorite is 13.7 kg (30 lb) and it has been found to contain evidence of past water movement. [ 220 ] [ 221 ] [ 222 ] At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity according to NASA scientists. [ 220 ] [ 221 ] [ 222 ]
Organism–substrate interactions and their products are important biosignatures on Earth as they represent direct evidence of biological behaviour. [ 223 ] It was the recovery of fossilized products of life-substrate interactions (ichnofossils) that has revealed biological activities in the early history of life on the Earth, e.g., Proterozoic burrows, Archean microborings and stromatolites. [ 224 ] [ 225 ] [ 226 ] [ 227 ] [ 228 ] [ 229 ] Two major ichnofossil-like structures have been reported from Mars, i.e. the stick-like structures from Vera Rubin Ridge and the microtunnels from Martian Meteorites.
Observations at Vera Rubin Ridge by the Mars Space Laboratory rover Curiosity show millimetric, elongate structures preserved in sedimentary rocks deposited in fluvio-lacustrine environments within Gale Crater. Morphometric and topologic data are unique to the stick-like structures among Martian geological features and show that ichnofossils are among the closest morphological analogues of these unique features. [ 230 ] Nevertheless, available data cannot fully disprove two major abiotic hypotheses, that are sedimentary cracking and evaporitic crystal growth as genetic processes for the structures.
Microtunnels have been described from Martian meteorites. They consist of straight to curved microtunnels that may contain areas of enhanced carbon abundance. The morphology of the curved microtunnels is consistent with biogenic traces on Earth, including microbioerosion traces observed in basaltic glasses. [ 231 ] [ 232 ] [ 229 ] Further studies are needed to confirm biogenicity.
The seasonal frosting and defrosting of the southern ice cap results in the formation of spider-like radial channels carved on 1-meter thick ice by sunlight. Then, sublimed CO 2 – and probably water – increase pressure in their interior producing geyser-like eruptions of cold fluids often mixed with dark basaltic sand or mud. [ 233 ] [ 234 ] [ 235 ] [ 236 ] This process is rapid, observed happening in the space of a few days, weeks or months, a growth rate rather unusual in geology – especially for Mars. [ 237 ]
A team of Hungarian scientists proposes that the geysers' most visible features, dark dune spots and spider channels, may be colonies of photosynthetic Martian microorganisms which over-winter beneath the ice cap. As the sunlight returns to the pole during early spring, light penetrates the ice, the microorganisms photosynthesize, and they heat their immediate surroundings. A pocket of liquid water, which would normally evaporate instantly in the thin Martian atmosphere, is trapped around them by the overlying ice. As this ice layer thins, the microorganisms show through as grey. When the layer has completely melted, the microorganisms rapidly desiccate and turn black, surrounded by a grey aureole. [ 238 ] [ 239 ] [ 240 ] The Hungarian scientists believe that even a complex sublimation process is insufficient to explain the formation and evolution of the dark dune spots in space and time. [ 241 ] [ 242 ] Since their discovery, fiction writer Arthur C. Clarke promoted these formations as deserving of study from an astrobiological perspective. [ 243 ]
A multinational European team suggests that if liquid water is present in the spiders' channels during their annual defrost cycle, they might provide a niche where certain microscopic life forms could have retreated and adapted while sheltered from solar radiation. [ 244 ] A British team also considers the possibility that organic matter , microbes , or even simple plants might co-exist with these inorganic formations, especially if the mechanism includes liquid water and a geothermal energy source. [ 237 ] They also remark that the majority of geological structures may be accounted for without invoking any organic "life on Mars" hypothesis. [ 237 ] It has been proposed to develop the Mars Geyser Hopper lander to study the geysers up close. [ 245 ]
Planetary protection of Mars aims to prevent biological contamination of the planet. [ 246 ] A major goal is to preserve the planetary record of natural processes by preventing human-caused microbial introductions, also called forward contamination . There is abundant evidence as to what can happen when organisms from regions on Earth that have been isolated from one another for significant periods of time are introduced into each other's environment. Species that are constrained in one environment can thrive – often out of control – in another environment much to the detriment of the original species that were present. In some ways, this problem could be compounded if life forms from one planet were introduced into the totally alien ecology of another world. [ 247 ]
The prime concern of hardware contaminating Mars derives from incomplete spacecraft sterilization of some hardy terrestrial bacteria ( extremophiles ) despite best efforts. [ 30 ] [ 248 ] Hardware includes landers, crashed probes, end-of-mission disposal of hardware, and the hard landing of entry, descent, and landing systems. This has prompted research on survival rates of radiation-resistant microorganisms including the species Deinococcus radiodurans and genera Brevundimonas , Rhodococcus , and Pseudomonas under simulated Martian conditions. [ 249 ] Results from one of these irradiation experiments, combined with previous radiation modeling, indicate that Brevundimonas sp. MV.7 emplaced only 30 cm deep in Martian dust could survive the cosmic radiation for up to 100,000 years before suffering a one-million-fold (10 6 ) population reduction. [ 249 ] The diurnal Mars-like cycles in temperature and relative humidity affected the viability of Deinococcus radiodurans cells quite severely. [ 250 ] In other simulations, Deinococcus radiodurans also failed to grow under low atmospheric pressure, under 0 °C, or in the absence of oxygen. [ 251 ]
Since the 1950s, researchers have used containers that simulate environmental conditions on Mars to determine the viability of a variety of lifeforms on Mars. Such devices, called " Mars jars " or "Mars simulation chambers", were first described and used in U.S. Air Force research in the 1950s by Hubertus Strughold , and popularized in civilian research by Joshua Lederberg and Carl Sagan . [ 252 ]
On April 26, 2012, scientists reported that an extremophile lichen survived the full 34-day simulation under Martian conditions and showed remarkable adaptation of its photosynthetic activity in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR). [ 253 ] [ 254 ] [ 255 ] [ 256 ] [ 257 ] [ 258 ] The ability to survive in an environment is not the same as the ability to thrive, reproduce, and evolve in that same environment, necessitating further study. [ 31 ] [ 30 ]
Although numerous studies point to resistance to some of Mars conditions, they do so separately, and none has considered the full range of Martian surface conditions, including temperature, pressure, atmospheric composition, radiation, humidity, oxidizing regolith including perchlorates, [ 259 ] and others, all at the same time and in combination. [ 260 ] Laboratory simulations show that whenever multiple lethal factors are combined, the survival rates plummet quickly. [ 31 ]
Astrobiologists funded by NASA are researching the limits of microbial life in solutions with high salt concentrations at low temperature. [ 261 ] Any body of liquid water under the polar ice caps or underground is likely to exist under high hydrostatic pressure and have a significant salt concentration. The landing site of the Phoenix lander was found to be regolith cemented with water ice and salts, and the soil samples likely contained magnesium sulfate, magnesium perchlorate, sodium perchlorate, potassium perchlorate, sodium chloride and calcium carbonate. [ 261 ] [ 262 ] [ 263 ] Earth bacteria capable of growth and reproduction in the presence of highly salted solutions, called halophiles or "salt-lovers", were tested for survival using salts commonly found on Mars and at decreasing temperatures. [ 261 ] The species tested include Halomonas , Marinococcus , Nesterenkonia , and Virgibacillus . [ 261 ] Laboratory simulations show that whenever multiple Martian environmental factors are combined, the survival rates plummet quickly; [ 31 ] however, halophile bacteria have been grown in the laboratory in water solutions containing more than 25% of salts common on Mars, and starting in 2019 [ needs update ] , the experiments will incorporate exposure to low temperature, salts, and high pressure. [ 261 ]
On 21 February 2023, scientists reported the findings of a " dark microbiome " of unfamiliar microorganisms in the Atacama Desert in Chile , a Mars-like region of Earth. [ 264 ] [ 265 ]
Mars-1 was the first spacecraft launched to Mars in 1962, [ 266 ] but communication was lost while en route to Mars. With Mars-2 and Mars-3 in 1971–1972, information was obtained on the nature of the surface rocks and altitude profiles of the surface density of the soil, its thermal conductivity, and thermal anomalies detected on the surface of Mars. The program found that its northern polar cap has a temperature below −110 °C (−166 °F) and that the water vapor content in the atmosphere of Mars is five thousand times less than on Earth. No signs of life were found. [ 267 ]
The Mars programme's automatic interplanetary stations found no signs of life from orbit. The Mars-2 descent vehicle crashed on landing; the Mars-3 descent vehicle landed in the Ptolemaeus crater and began operating 1.5 minutes after landing, but worked for only 14.5 seconds. [ 268 ]
Mariner 4 probe performed the first successful flyby of the planet Mars, returning the first pictures of the Martian surface in 1965. The photographs showed an arid Mars without rivers, oceans, or any signs of life. Further, it revealed that the surface (at least the parts that it photographed) was covered in craters, indicating a lack of plate tectonics and weathering of any kind for the last 4 billion years. The probe also found that Mars has no global magnetic field that would protect the planet from potentially life-threatening cosmic rays . The probe was able to calculate the atmospheric pressure on the planet to be about 0.6 kPa (compared to Earth's 101.3 kPa), meaning that liquid water could not exist on the planet's surface. [ 26 ] After Mariner 4, the search for life on Mars changed to a search for bacteria-like living organisms rather than for multicellular organisms, as the environment was clearly too harsh for these. [ 26 ] [ 269 ] [ 270 ]
Liquid water is necessary for known life and metabolism , so whether water was present on Mars bears directly on the chances that the planet ever supported life. The Viking orbiters found evidence of possible river valleys in many areas, erosion and, in the southern hemisphere, branched streams. [ 271 ] [ 272 ] [ 273 ]
The primary mission of the Viking probes of the mid-1970s was to carry out experiments designed to detect microorganisms in Martian soil because the favorable conditions for the evolution of multicellular organisms ceased some four billion years ago on Mars. [ 274 ] The tests were formulated to look for microbial life similar to that found on Earth. Of the four experiments, only the Labeled Release (LR) experiment returned a positive result, [ dubious – discuss ] showing increased 14 CO 2 production on first exposure of soil to water and nutrients. All scientists agree on two points from the Viking missions: that radiolabeled 14 CO 2 was evolved in the Labeled Release experiment, and that the GCMS detected no organic molecules. There are vastly different interpretations of what those results imply: A 2011 astrobiology textbook notes that the GCMS was the decisive factor due to which "For most of the Viking scientists, the final conclusion was that the Viking missions failed to detect life in the Martian soil." [ 275 ]
Norman Horowitz was the head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. [ 276 ] However, he also considered that the conditions found on Mars were incompatible with carbon based life.
One of the designers of the Labeled Release experiment, Gilbert Levin , believes his results are a definitive diagnostic for life on Mars. [ 26 ] Levin's interpretation is disputed by many scientists. [ 277 ] A 2006 astrobiology textbook noted that "With unsterilized Terrestrial samples, though, the addition of more nutrients after the initial incubation would then produce still more radioactive gas as the dormant bacteria sprang into action to consume the new dose of food. This was not true of the Martian soil; on Mars, the second and third nutrient injections did not produce any further release of labeled gas." [ 278 ] Other scientists argue that superoxides in the soil could have produced this effect without life being present. [ 279 ] An almost general consensus discarded the Labeled Release data as evidence of life, because the gas chromatograph and mass spectrometer, designed to identify natural organic matter , did not detect organic molecules. [ 196 ] More recently, high levels of organic chemicals , particularly chlorobenzene , were detected in powder drilled from one of the rocks, named " Cumberland ", analyzed by the Curiosity rover . [ 280 ] [ 281 ] The results of the Viking mission concerning life are considered by the general expert community as inconclusive. [ 26 ] [ 279 ] [ 282 ]
In 2007, during a Seminar of the Geophysical Laboratory of the Carnegie Institution (Washington, D.C., US), Gilbert Levin 's investigation was assessed once more. [ 196 ] Levin still maintains that his original data were correct, as the positive and negative control experiments were in order. [ 283 ] Moreover, Levin's team, on April 12, 2012, reported a statistical speculation, based on old data—reinterpreted mathematically through cluster analysis —of the Labeled Release experiments , that may suggest evidence of "extant microbial life on Mars". [ 283 ] [ 284 ] Critics counter that the method has not yet been proven effective for differentiating between biological and non-biological processes on Earth so it is premature to draw any conclusions. [ 285 ]
A research team from the National Autonomous University of Mexico headed by Rafael Navarro-González concluded that the GCMS equipment (TV-GC-MS) used by the Viking program to search for organic molecules, may not be sensitive enough to detect low levels of organics. [ 203 ] Klaus Biemann , the principal investigator of the GCMS experiment on Viking wrote a rebuttal. [ 286 ] Because of the simplicity of sample handling, TV–GC–MS is still considered the standard method for organic detection on future Mars missions, so Navarro-González suggests that the design of future organic instruments for Mars should include other methods of detection. [ 203 ]
After the discovery of perchlorates on Mars by the Phoenix lander , practically the same team of Navarro-González published a paper arguing that the Viking GCMS results were compromised by the presence of perchlorates. [ 287 ] A 2011 astrobiology textbook notes that "while perchlorate is too poor an oxidizer to reproduce the LR results (under the conditions of that experiment perchlorate does not oxidize organics), it does oxidize, and thus destroy, organics at the higher temperatures used in the Viking GCMS experiment." [ 288 ] Biemann has written a commentary critical of this Navarro-González paper as well, [ 289 ] to which the latter have replied; [ 290 ] the exchange was published in December 2011.
The Phoenix mission landed a robotic spacecraft in the polar region of Mars on May 25, 2008, and it operated until November 10, 2008. One of the mission's two primary objectives was to search for a "habitable zone" in the Martian regolith where microbial life could exist, the other main goal being to study the geological history of water on Mars. The lander had a 2.5-meter robotic arm capable of digging shallow trenches in the regolith, and carried an electrochemistry experiment which analysed the ions in the regolith and the amount and type of antioxidants on Mars. The Viking program data indicate that oxidants on Mars may vary with latitude, noting that Viking 2 saw fewer oxidants than Viking 1 in its more northerly position; Phoenix landed further north still. [ 291 ] Phoenix 's preliminary data revealed that Mars soil contains perchlorate , and thus may not be as life-friendly as thought earlier. [ 292 ] [ 293 ] [ 205 ] The pH and salinity level were viewed as benign from the standpoint of biology. The analysers also indicated the presence of bound water and CO₂. [ 294 ] A more recent analysis of the Martian meteorite EETA79001 found 0.6 ppm ClO₄⁻, 1.4 ppm ClO₃⁻, and 16 ppm NO₃⁻, most likely of Martian origin. The ClO₃⁻ suggests the presence of other highly oxidizing oxychlorines such as ClO₂⁻ or ClO, produced both by UV oxidation of Cl and X-ray radiolysis of ClO₄⁻. Thus only highly refractory and/or well-protected (sub-surface) organics are likely to survive. [ 295 ] In addition, recent analysis of the Phoenix WCL showed that the Ca(ClO₄)₂ in the Phoenix soil has not interacted with liquid water of any form, perhaps for as long as 600 Myr. If it had, the highly soluble Ca(ClO₄)₂ in contact with liquid water would have formed only CaSO₄. This suggests a severely arid environment, with minimal or no liquid water interaction. [ 296 ]
The Mars Science Laboratory mission is a NASA project that launched the Curiosity rover , a nuclear-powered robotic vehicle bearing instruments designed to assess past and present habitability conditions on Mars, on November 26, 2011. [ 297 ] [ 298 ] The Curiosity rover landed on Mars on Aeolis Palus in Gale Crater , near Aeolis Mons (a.k.a. Mount Sharp), [ 299 ] [ 300 ] [ 301 ] [ 302 ] on August 6, 2012. [ 303 ] [ 304 ] [ 305 ]
On December 16, 2014, NASA reported that the Curiosity rover had detected a "tenfold spike", likely localized, in the amount of methane in the Martian atmosphere . Sample measurements taken "a dozen times over 20 months" showed increases in late 2013 and early 2014, averaging "7 parts of methane per billion in the atmosphere". Before and after that, readings averaged around one-tenth that level. [ 280 ] [ 281 ] In addition, low levels of chlorobenzene (C₆H₅Cl) were detected in powder drilled from one of the rocks, named " Cumberland ", analyzed by the Curiosity rover. [ 280 ] [ 281 ]
The NASA Mars 2020 mission includes the Perseverance rover. Launched on July 30, 2020, it is intended to investigate an astrobiologically relevant ancient environment on Mars, including its surface geological processes and history, with an assessment of its past habitability and the potential for preservation of biosignatures within accessible geological materials. [ 307 ]
The Cheyava Falls rock, discovered on Mars in June 2024, has been designated by NASA as a "potential biosignature " and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although the find is highly intriguing, no definitive determination of a biological or abiotic origin of this rock can be made with the data currently available.
Some of the main reasons for colonizing Mars include economic interests, long-term scientific research best carried out by humans as opposed to robotic probes, and sheer curiosity. Surface conditions and the presence of water on Mars make it arguably the most hospitable of the planets in the Solar System , other than Earth. Human colonization of Mars would require in situ resource utilization (ISRU); a NASA report states that "applicable frontier technologies include robotics, machine intelligence, nanotechnology, synthetic biology, 3-D printing/additive manufacturing, and autonomy. These technologies combined with the vast natural resources should enable, pre- and post-human arrival ISRU to greatly increase reliability and safety and reduce cost for human colonization of Mars." [ 311 ] [ 312 ] [ 313 ] | https://en.wikipedia.org/wiki/Life_on_Mars
Whether there is life on Titan , the largest moon of Saturn , is currently an open question and a topic of scientific assessment and research. Titan is far colder than Earth , but of all the places in the Solar System , Titan is the only place besides Earth known to have liquids in the form of rivers, lakes, and seas on its surface. Its thick atmosphere is chemically active and rich in carbon compounds. On the surface there are small and large bodies of both liquid methane and ethane , and it is likely that there is a layer of liquid water under its ice shell. Some scientists speculate that these liquid mixes may provide prebiotic chemistry for living cells different from those on Earth .
In June 2010, scientists analyzing data from the Cassini–Huygens mission reported anomalies in the atmosphere near the surface which could be consistent with the presence of methane-producing organisms, but may alternatively be due to non-living chemical or meteorological processes. [ 1 ] The Cassini–Huygens mission was not equipped to look directly for micro-organisms or to provide a thorough inventory of complex organic compounds .
Titan's consideration as an environment for the study of prebiotic chemistry or potentially exotic life stems in large part from the diversity of the organic chemistry that occurs in its atmosphere, driven by photochemical reactions in its outer layers. A wide range of chemicals has been detected in Titan's upper atmosphere by Cassini 's mass spectrometer .
As mass spectrometry identifies the atomic mass of a compound but not its structure, additional research is required to identify the exact compound that has been detected. Where the compounds have been identified in the literature, their chemical formulae have been replaced by their names above. The figures in Magee (2009) involve corrections for high-pressure background. Other compounds believed to be indicated by the data and associated models include ammonia , polyynes , amines , ethylenimine , deuterium hydride , allene , 1,3-butadiene and any number of more complex chemicals in lower concentrations, as well as carbon dioxide and limited quantities of water vapour. [ 2 ] [ 3 ] [ 4 ]
Due to its distance from the Sun, Titan is much colder than Earth. Its surface temperature is about 94 K (−179 °C, or −290 °F). At these temperatures, water ice—if present—does not melt, evaporate or sublimate, but remains solid. Because of the extreme cold and also because of lack of carbon dioxide (CO 2 ) in the atmosphere, scientists such as Jonathan Lunine have viewed Titan less as a likely habitat for extraterrestrial life , than as an experiment for examining hypotheses on the conditions that prevailed prior to the appearance of life on Earth. [ 5 ] Even though the usual surface temperature on Titan is not compatible with liquid water, calculations by Lunine and others suggest that meteor strikes could create occasional "impact oases"—craters in which liquid water might persist for hundreds of years or longer, which would enable water-based organic chemistry. [ 6 ] [ 7 ] [ 8 ]
However, Lunine does not rule out life in an environment of liquid methane and ethane, and has written about what discovery of such a life form (even if very primitive) would imply about the prevalence of life in the universe. [ 9 ]
In the 1970s, astronomers found unexpectedly high levels of infrared emissions from Titan. [ 10 ] One possible explanation for this was that the surface was warmer than expected, due to a greenhouse effect . Some estimates of the surface temperature even approached temperatures in the cooler regions of Earth. There was, however, another possible explanation for the infrared emissions: Titan's surface was very cold, but the upper atmosphere was heated due to absorption of ultraviolet light by molecules such as ethane, ethylene and acetylene. [ 10 ]
In September 1979, Pioneer 11 , the first space probe to conduct fly-by observations of Saturn and its moons, sent data showing Titan's surface to be extremely cold by Earth standards, and much below the temperatures generally associated with planetary habitability . [ 11 ]
Titan may become warmer in the future. [ 12 ] Five to six billion years from now, as the Sun becomes a red giant , surface temperatures could rise to ~200 K (−70 °C), high enough for stable oceans of a water–ammonia mixture to exist on its surface. As the Sun's ultraviolet output decreases, the haze in Titan's upper atmosphere will be depleted, lessening the anti-greenhouse effect on its surface and enabling the greenhouse effect created by atmospheric methane to play a far greater role. These conditions together could create an environment agreeable to exotic forms of life, and will persist for several hundred million years. [ 12 ] This was sufficient time for simple life to evolve on Earth, although the presence of ammonia on Titan could cause the same chemical reactions to proceed more slowly. [ 12 ]
The lack of liquid water on Titan's surface was cited by NASA astrobiologist Andrew Pohorille in 2009 as an argument against life there. Pohorille considers that water is important not only as the solvent used by "the only life we know" but also because its chemical properties are "uniquely suited to promote self-organization of organic matter". He has questioned whether prospects for finding life on Titan's surface are sufficient to justify the expense of a mission that would look for it. [ 13 ]
Laboratory simulations have led to the suggestion that enough organic material exists on Titan to start a chemical evolution analogous to what is thought to have started life on Earth. While the analogy assumes the presence of liquid water for longer periods than is currently observable, several hypotheses suggest that liquid water from an impact could be preserved under a frozen isolation layer. [ 14 ] It has also been proposed that ammonia oceans could exist deep below the surface; [ 15 ] [ 16 ] one model suggests an ammonia–water solution as much as 200 km deep beneath a water ice crust, conditions that, "while extreme by terrestrial standards, are such that life could indeed survive". [ 17 ] Heat transfer between the interior and upper layers would be critical in sustaining any sub-surface oceanic life. [ 15 ] Detection of microbial life on Titan would depend on its biogenic effects. For example, the atmospheric methane and nitrogen could be examined for biogenic origin. [ 17 ]
Data published in 2012, obtained from NASA's Cassini spacecraft, have strengthened evidence that Titan likely harbors a layer of liquid water under its ice shell. [ 18 ]
Titan is the only known natural satellite (moon) in the Solar System that has a fully developed atmosphere that consists of more than trace gases. Titan's atmosphere is thick, chemically active, and is known to be rich in organic compounds ; this has led to speculation about whether chemical precursors of life may have been generated there. [ 19 ] [ 20 ] [ 21 ] The atmosphere also contains hydrogen gas, which is cycling through the atmosphere and the surface environment, and which living things comparable to Earth methanogens could combine with some of the organic compounds (such as acetylene ) to obtain energy. [ 19 ] [ 20 ] [ 21 ]
The Miller–Urey experiment and several following experiments have shown that with an atmosphere similar to that of Titan and the addition of UV radiation , complex molecules and polymer substances like tholins can be generated. The reaction starts with dissociation of nitrogen and methane, forming hydrogen cyanide and acetylene . Further reactions have been studied extensively. [ 22 ]
In October 2010, Sarah Hörst of the University of Arizona reported finding the five nucleotide bases —building blocks of DNA and RNA —among the many compounds produced when energy was applied to a combination of gases like those in Titan's atmosphere. Hörst also found amino acids , the building blocks of protein . She said it was the first time nucleotide bases and amino acids had been found in such an experiment without liquid water being present. [ 23 ]
In April 2013, NASA reported that complex organic chemicals could arise on Titan based on studies simulating the atmosphere of Titan. [ 24 ] In June 2013, polycyclic aromatic hydrocarbons (PAHs) were detected in the upper atmosphere of Titan. [ 25 ]
A team of researchers led by Martin Rahm suggested in 2016 that polyimine could readily function as a building block in Titan's conditions. [ 26 ] Titan's atmosphere produces significant quantities of hydrogen cyanide, which readily polymerizes into forms that can capture light energy in Titan's surface conditions. What happens to Titan's cyanide is not yet known; while it is abundant in the upper atmosphere where it is created, it is depleted at the surface, suggesting that some sort of reaction is consuming it. [ 27 ]
In July 2017, Cassini scientists positively identified the presence of carbon chain anions in Titan's upper atmosphere which appeared to be involved in the production of large complex organics. [ 28 ] These highly reactive molecules were previously known to contribute to building complex organics in the Interstellar Medium, therefore highlighting a possibly universal stepping stone to producing complex organic material. [ 29 ]
In July 2017, scientists reported that acrylonitrile (C 2 H 3 CN), a chemical possibly essential for life by being related to cell membrane and vesicle structure formation, had been found on Titan. [ 30 ]
In October 2018, researchers reported low-temperature chemical pathways from simple organic compounds to complex polycyclic aromatic hydrocarbon (PAH) chemicals. Such chemical pathways may help explain the presence of PAHs in the low-temperature atmosphere of Titan, and may be significant pathways, in terms of the PAH world hypothesis , in producing precursors to biochemicals related to life as we know it. [ 31 ] [ 32 ]
Although all living things on Earth (including methanogens) use liquid water as a solvent, it is conceivable that life on Titan might instead use a liquid hydrocarbon, such as methane or ethane. [ 33 ] Water is a stronger solvent than hydrocarbons; [ 34 ] however, water is more chemically reactive, and can break down large organic molecules through hydrolysis . [ 33 ] A life-form whose solvent was a hydrocarbon would not face the risk of its biomolecules being destroyed in this way. [ 33 ]
Titan appears to have lakes of liquid ethane or liquid methane on its surface, as well as rivers and seas, which some scientific models suggest could support hypothetical non-water-based life . [ 19 ] [ 20 ] [ 21 ] It has been speculated that life could exist in the liquid methane and ethane that form rivers and lakes on Titan's surface, just as organisms on Earth live in water. [ 35 ] Such hypothetical creatures would take in H 2 in place of O 2 , react it with acetylene instead of glucose , and produce methane instead of carbon dioxide. [ 35 ] By comparison, some methanogens on Earth obtain energy by reacting hydrogen with carbon dioxide, producing methane and water.
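For illustration, the energy-yielding chemistry described above can be written as balanced net reactions; the stoichiometry below is simply a balancing of the species named in the text (an assumed illustration, not a measurement from Titan):

\[ \mathrm{C_2H_2 + 3\,H_2 \longrightarrow 2\,CH_4} \qquad \text{(hypothetical acetylene-reducing metabolism on Titan)} \]

\[ \mathrm{CO_2 + 4\,H_2 \longrightarrow CH_4 + 2\,H_2O} \qquad \text{(terrestrial hydrogenotrophic methanogenesis, for comparison)} \]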
In 2005, astrobiologists Christopher McKay and Heather Smith predicted that if methanogenic life were consuming atmospheric hydrogen in sufficient volume, it would have a measurable effect on the mixing ratio in the troposphere of Titan. The effects predicted included a level of acetylene much lower than otherwise expected, as well as a reduction in the concentration of hydrogen itself. [ 35 ]
Evidence consistent with these predictions was reported in June 2010 by Darrell Strobel of Johns Hopkins University , who analysed measurements of hydrogen concentration in the upper and lower atmosphere. Strobel found that the hydrogen concentration in the upper atmosphere is so much larger than near the surface that the physics of diffusion leads to hydrogen flowing downwards at a rate of roughly 10²⁵ molecules per second. Near the surface the downward-flowing hydrogen apparently disappears. [ 34 ] [ 35 ] [ 36 ] Another paper released the same month showed very low levels of acetylene on Titan's surface. [ 34 ]
Chris McKay agreed with Strobel that presence of life, as suggested in McKay's 2005 article, is a possible explanation for the findings about hydrogen and acetylene, but also cautioned that other explanations are currently more likely: namely the possibility that the results are due to human error , to a meteorological process, or to the presence of some mineral catalyst enabling hydrogen and acetylene to react chemically. [ 1 ] [ 37 ] He noted that such a catalyst, one effective at −178 °C (95 K), is presently unknown and would in itself be a startling discovery, though less startling than discovery of an extraterrestrial life form. [ 1 ]
The June 2010 findings gave rise to considerable media interest, including a report in the British newspaper, the Telegraph , which spoke of clues to the existence of "primitive aliens". [ 38 ]
A hypothetical cell membrane capable of functioning in liquid methane was modeled in February 2015. [ 39 ] The proposed chemical base for these membranes is acrylonitrile , which has been detected on Titan. [ 40 ] Called an " azotosome " ('nitrogen body', from the Greek "azoto" for nitrogen and "soma" for body), it lacks the phosphorus and oxygen found in phospholipids on Earth but contains nitrogen. Despite the very different chemical structure and external environment, its modeled properties are surprisingly similar to those of terrestrial membranes, including self-assembly into sheets, flexibility, and stability. However, according to computer simulations, azotosomes could not form under the conditions found on Titan. [ 41 ]
An analysis of Cassini data, completed in 2017, confirmed substantial amounts of acrylonitrile in Titan's atmosphere. [ 42 ] [ 30 ]
In order to assess the likelihood of finding any sort of life on various planets and moons, Dirk Schulze-Makuch and other scientists have developed a planetary habitability index which takes into account factors including characteristics of the surface and atmosphere, availability of energy, solvents and organic compounds. [ 43 ] Using this index, based on data available in late 2011, the model suggests that Titan has the highest current habitability rating of any known world, other than Earth. [ 43 ]
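As a rough sketch of how such an index is constructed (the exact factor definitions are those of Schulze-Makuch et al.; the form shown here is an assumed illustration rather than a quotation of their formula), the habitability score can be expressed as a geometric mean of normalized factor ratings:

\[ \mathrm{PHI} = \left( S \cdot E \cdot C \cdot L \right)^{1/4}, \qquad S, E, C, L \in [0, 1], \]

where S rates the stable substrate, E the available energy, C the chemistry, and L the presence of a liquid solvent. Using a geometric mean means that a world scoring zero on any single factor receives an overall score of zero.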
While the Cassini–Huygens mission was not equipped to provide evidence for biosignatures or complex organics, it showed an environment on Titan that is similar, in some ways, to ones theorized for the primordial Earth. [ 44 ] Scientists think that the atmosphere of early Earth was similar in composition to the current atmosphere on Titan, with the important exception of a lack of water vapor on Titan. [ 45 ] Many hypotheses have developed that attempt to bridge the step from chemical to biological evolution.
Titan is presented as a test case for the relation between chemical reactivity and life, in a 2007 report on life's limiting conditions prepared by a committee of scientists under the United States National Research Council . The committee, chaired by John Baross , considered that "if life is an intrinsic property of chemical reactivity, life should exist on Titan. Indeed, for life not to exist on Titan, we would have to argue that life is not an intrinsic property of the reactivity of carbon-containing molecules under conditions where they are stable..." [ 46 ]
David Grinspoon , one of the scientists who in 2005 proposed that hypothetical organisms on Titan might use hydrogen and acetylene as an energy source, [ 47 ] has mentioned the Gaia hypothesis in the context of discussion about Titan life. He suggests that, just as Earth's environment and its organisms have evolved together, the same thing is likely to have happened on other worlds with life on them. In Grinspoon's view, worlds that are "geologically and meteorologically alive are much more likely to be biologically alive as well". [ 48 ]
An alternate explanation for life's hypothetical existence on Titan has been proposed: if life were to be found on Titan, it could have originated from Earth in a process called panspermia . It is theorized that large asteroid and cometary impacts on Earth's surface have caused hundreds of millions of fragments of microbe-laden rock to escape Earth's gravity. Calculations indicate that a number of these would encounter many of the bodies in the Solar System, including Titan. [ 49 ] [ 50 ] On the other hand, Jonathan Lunine has argued that any living things in Titan's cryogenic hydrocarbon lakes would need to be so different chemically from Earth life that it would not be possible for one to be the ancestor of the other. [ 9 ] In Lunine's view, presence of organisms in Titan's lakes would mean a second, independent origin of life within the Solar System, implying that life has a high probability of emerging on habitable worlds throughout the cosmos. [ 9 ]
The proposed Titan Mare Explorer mission, a Discovery-class lander that would splash down in a lake, "would have the possibility of detecting life", according to astronomer Chris Impey of the University of Arizona . [ 51 ]
The planned Dragonfly rotorcraft mission is intended to land on solid ground and relocate many times. [ 52 ] Dragonfly will be the fourth mission in the New Frontiers program . Its instruments will study how far prebiotic chemistry may have progressed. [ 53 ] Dragonfly will carry equipment to study the chemical composition of Titan's surface, and to sample the lower atmosphere for possible biosignatures , including hydrogen concentrations. [ 53 ] | https://en.wikipedia.org/wiki/Life_on_Titan
The possibility of life on Venus is a subject of interest in astrobiology due to Venus 's proximity and similarities to Earth . To date, no definitive evidence has been found of past or present life there. In the early 1960s, studies conducted via spacecraft demonstrated that the current Venusian environment is extreme compared to Earth's. Studies continue to question whether life could have existed on the planet's surface before a runaway greenhouse effect took hold, and whether a relict biosphere could persist high in the modern Venusian atmosphere .
With extreme surface temperatures reaching nearly 735 K (462 °C; 863 °F) and an atmospheric pressure 92 times that of Earth, the conditions on Venus make water-based life as we know it unlikely on the surface of the planet. However, a few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the temperate, acidic upper layers of the Venusian atmosphere. [ 1 ] [ 2 ] [ 3 ] In September 2020, research was published that reported the presence of phosphine in the planet's atmosphere, a potential biosignature . [ 4 ] [ 5 ] [ 6 ] However, doubts have been cast on these observations. [ 7 ] [ 8 ]
As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane ) had been reported, though whether these gases are present remains unclear. [ 9 ] On 2 June 2021, NASA announced two new related missions to Venus: DAVINCI and VERITAS . [ 10 ]
Because Venus is completely covered in clouds, human knowledge of surface conditions was largely speculative until the space probe era. Until the mid-20th century, the surface environment of Venus was believed to be similar to Earth, hence it was widely believed that Venus could harbor life. In 1870, the British astronomer Richard A. Proctor said the existence of life on Venus was impossible near its equator, [ 11 ] but possible near its poles.
Microwave observations published by C. Mayer et al. [ 12 ] in 1958 indicated a high-temperature source (600 K). Strangely, millimetre-band observations made by A. D. Kuzmin indicated much lower temperatures. [ 13 ] Two competing theories explained the unusual radio spectrum, one suggesting the high temperatures originated in the ionosphere, and another suggesting a hot planetary surface.
In 1962, Mariner 2 , the first successful mission to Venus , measured the planet's temperature for the first time, and found it to be "about 500 degrees Celsius (900 degrees Fahrenheit)." [ 14 ] Since then, increasingly clear evidence from various space probes has shown that Venus has an extreme climate, with a greenhouse effect generating a constant temperature of about 500 °C (932 °F) on the surface. The atmosphere contains sulfuric acid clouds. In 1968, NASA reported that air pressure on the Venusian surface was 75 to 100 times that of Earth. [ 15 ] This was later revised to 92 bars , [ 16 ] almost 100 times that of Earth and similar to the pressure found more than 1,000 m (3,300 ft) deep in Earth's oceans. In such an environment, and given the hostile characteristics of the Venusian weather, life as we know it is highly unlikely to occur.
Scientists have speculated that if liquid water existed on its surface before the runaway greenhouse effect heated the planet, microbial life may have formed on Venus, but it may no longer exist. [ 18 ] Assuming the process that delivered water to Earth was common to all the planets near the habitable zone, it has been estimated that liquid water could have existed on its surface for up to 600 million years during and shortly after the Late Heavy Bombardment , which could be enough time for simple life to form, but this figure can vary from as little as a few million years to as much as a few billion. [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] A study published in September 2019 concluded that Venus may have had surface water and habitable conditions for around 3 billion years, and may have remained in this condition until 700 to 750 million years ago. If correct, this would have been ample time for life to form, [ 24 ] and for microbial life to evolve to become aerial. [ 25 ] Since then, there have been more studies and climate models, with different conclusions.
There has been very little analysis of Venusian surface material, so it is possible that evidence of past life, if it ever existed, could be found with a probe capable of enduring Venus's current extreme surface conditions. [ 26 ] [ 27 ] However, the resurfacing of the planet in the past 500 million years [ 28 ] means that it is unlikely that ancient surface rocks remain, especially those containing the mineral tremolite which, theoretically, could have encased some biosignatures . [ 27 ]
Studies reported on 26 October 2023 suggest, for the first time, that Venus may have had plate tectonics in ancient times and, as a result, may have had a more habitable environment , possibly one capable of supporting life forms . [ 29 ] [ 30 ]
It has been speculated that life on Venus may have come to Earth through lithopanspermia , via the ejection of icy bolides that facilitated the preservation of multicellular life on long interplanetary voyages. "Current models indicate that Venus may have been habitable. Complex life may have evolved on the highly irradiated Venus, and transferred to Earth on asteroids. This model fits the pattern of pulses of highly developed life appearing, diversifying and going extinct with astonishing rapidity through the Cambrian and Ordovician periods, and also explains the extraordinary genetic variety which appeared over this period." [ 31 ] This theory, however, is a fringe one, and is seen as being unlikely. [ 32 ]
Between 700 and 750 million years ago, a near-global resurfacing event triggered the release of carbon dioxide from rock on the planet, which transformed its climate. [ 33 ] In addition, according to a study from researchers at the University of California, Riverside , Venus would be able to support life if Jupiter had not altered its orbit around the Sun. [ 34 ]
Although there is little possibility of existing life near the surface of Venus, altitudes of about 50 km (31 mi) above the surface have mild temperatures, and hence some scientists still consider the possibility of life in the atmosphere of Venus . [ 35 ] [ 36 ] The idea was first brought forward by German physicist Heinz Haber in 1950. [ 37 ] In September 1967, Carl Sagan and Harold Morowitz published an analysis of the issue of life on Venus in the journal Nature . [ 26 ]
In the analysis of mission data from the Venera , Pioneer Venus and Magellan missions, it was discovered that carbonyl sulfide , hydrogen sulfide and sulfur dioxide were present together in the upper atmosphere. Venera also detected large amounts of toxic chlorine just below the Venusian cloud cover. [ 38 ] Carbonyl sulfide is difficult to produce inorganically, [ 36 ] but it can be produced by volcanism . [ 39 ] Sulfuric acid is produced in the upper atmosphere by the Sun's photochemical action on carbon dioxide , sulfur dioxide , and water vapor. [ 40 ] A 2020 re-analysis of Pioneer Venus data found that part of the chlorine and all of the hydrogen sulfide spectral features are instead phosphine -related, implying a lower-than-thought concentration of chlorine and a non-detection of hydrogen sulfide . [ 41 ]
Solar radiation constrains the atmospheric habitable zone to between 51 km (65 °C) and 62 km (−20 °C) altitude, within the acidic clouds. [ 3 ] It has been speculated that clouds in the atmosphere of Venus could contain chemicals that can initiate forms of biological activity and have zones where photophysical and chemical conditions allow for Earth-like phototrophy . [ 42 ] [ 43 ] [ 44 ]
It has been speculated that any hypothetical microorganisms inhabiting the atmosphere, if present, could employ ultraviolet light (UV) emitted by the Sun as an energy source, which could be an explanation for the dark lines (called "unknown UV absorber") observed in the UV photographs of Venus. [ 45 ] [ 46 ] The existence of this "unknown UV absorber" prompted Carl Sagan to publish an article in 1963 proposing the hypothesis of microorganisms in the upper atmosphere as the agent absorbing the UV light. [ 47 ]
In August 2019, astronomers reported a newly discovered long-term pattern of UV light absorbance and albedo changes in the atmosphere of Venus and its weather, that is caused by "unknown absorbers" that may include unknown chemicals or even large colonies of microorganisms high up in the atmosphere. [ 48 ] [ 49 ]
In January 2020, astronomers reported evidence that suggests Venus is currently (within 2.5 million years from present) volcanically active, and the residue from such activity may be a potential source of nutrients for possible microorganisms in the Venusian atmosphere . [ 50 ] [ 51 ] [ 52 ]
In 2021, it was suggested that the color of the "unknown UV absorber" matches that of "red oil", a known substance comprising a mix of organic carbon compounds dissolved in concentrated sulfuric acid. [ 53 ]
Research published in September 2020 indicated the detection of phosphine (PH₃) in Venus's atmosphere by the Atacama Large Millimeter Array (ALMA) telescope that was not linked to any known abiotic method of production present or possible under Venusian conditions. [ 4 ] [ 5 ] [ 6 ] However, the claimed detection of phosphine was disputed by several subsequent studies. [ 54 ] [ 55 ] [ 56 ] [ 57 ] A molecule like phosphine is not expected to persist in the Venusian atmosphere since, under ultraviolet radiation, it will eventually react with water and carbon dioxide. PH₃ is associated with anaerobic ecosystems on Earth, and may indicate life on anoxic planets. [ 58 ] Related studies suggested that the initially claimed concentration of phosphine (20 ppb) in the clouds of Venus indicated a "plausible amount of life," and further, that the typical predicted biomass densities were "several orders of magnitude lower than the average biomass density of Earth’s aerial biosphere.” [ 59 ] [ 60 ] As of 2019, no known abiotic process generates phosphine gas on terrestrial planets (as opposed to gas giants [ 61 ] ) in appreciable quantities. Phosphine can be generated by the geological weathering of olivine lavas containing inorganic phosphides, but this process requires ongoing and massive volcanic activity. [ 62 ] Therefore, detectable amounts of phosphine could indicate life. [ 63 ] [ 64 ] In July 2021, a volcanic origin was proposed for phosphine, by extrusion from the mantle . [ 65 ]
In a statement published on October 5, 2020, on the website of the International Astronomical Union 's commission F3 on astrobiology, the authors of the September 2020 paper about phosphine were accused of unethical behaviour and criticized for being unscientific and misleading the public. [ 66 ] Members of that commission have since distanced themselves from the IAU statement, claiming that it had been published without their knowledge or approval. [ 67 ] [ 68 ] The statement was removed from the IAU website shortly thereafter. The IAU's media contact Lars Lindberg Christensen stated that IAU did not agree with the content of the letter, and that it had been published by a group within the F3 commission, not IAU itself. [ 69 ]
By late October 2020, a review of the processing of the data collected both by ALMA (used in the original September 2020 publication) and later by the James Clerk Maxwell Telescope (JCMT) had revealed background calibration errors resulting in multiple spurious lines, including the spectral feature of phosphine. Re-analysis of the data with a proper subtraction of background either does not result in the detection of phosphine [ 54 ] [ 55 ] [ 56 ] or detects it at a concentration of 1 ppb, 20 times below the original estimate. [ 70 ]
On 16 November 2020, ALMA staff released a corrected version of the data used by the scientists of the original study published on 14 September.
On the same day, the authors of this study published a re-analysis as a preprint using the new data, concluding that the planet-averaged PH₃ abundance is about 7 times lower than what they had detected with the previous ALMA processing, that it likely varies by location, and that it is reconcilable with the JCMT detection of roughly 20 times this abundance if the abundance varies substantially in time. They also responded to points raised in a critical study by Villanueva et al. that challenged their conclusions, and found that so far the presence of no other compound can explain the data. [ 71 ] [ 72 ] [ 73 ] [ 70 ] The authors reported that more advanced processing of the JCMT data was ongoing. [ 70 ]
Re-analysis of the in situ data gathered by Pioneer Venus Multiprobe in 1978 has also revealed the presence of phosphine and its dissociation products in the atmosphere of Venus. [ 41 ] In 2021, a further analysis detected trace amounts of ethane , hydrogen sulfide, nitrite , nitrate , hydrogen cyanide , and possibly ammonia . [ 74 ]
The phosphine signal was also detected in data collected using the JCMT , though much weaker than that found using ALMA . [ 7 ]
In October 2020, a reanalysis of an archived infrared spectrum measurement from 2015 did not reveal any phosphine in the Venusian atmosphere, placing an upper limit on the phosphine volume concentration of 5 parts per billion (a quarter of the value measured in the radio band in 2020). [ 75 ] However, the wavelength used in these observations (10 microns) would only have detected phosphine at the very top of the clouds of the atmosphere of Venus. [ 7 ]
BepiColombo , launched in 2018 to study Mercury , flew by Venus on October 15, 2020, and on August 10, 2021. Johannes Benkhoff, project scientist, believed BepiColombo 's MERTIS (Mercury Radiometer and Thermal Infrared Spectrometer) could possibly detect phosphine, but "we do not know if our instrument is sensitive enough". [ 76 ]
In 2022, observations of Venus using the SOFIA airborne infrared telescope failed to detect phosphine, with an upper limit on the concentration of 0.8 ppb announced for Venusian altitudes of 75–110 km. [ 57 ] A subsequent reanalysis of the SOFIA data using nonstandard calibration techniques resulted in a phosphine detection at a concentration level of ~1 ppb, [ 77 ] but this work has yet to be peer-reviewed and therefore remains questionable. If present, phosphine appears to be more abundant in the pre-morning parts of the Venusian atmosphere. [ 77 ]
In 2024, researchers reported further observations that they said confirmed the existence of phosphine. [ 78 ]
ALMA restarted 17 March 2021 after a year-long shutdown in response to the COVID-19 pandemic and may enable further observations that could provide insights for the ongoing investigation. [ 73 ] [ 79 ]
Despite the controversies, NASA is in the beginning stages of sending two future missions to Venus. The Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy mission ( VERITAS ) would carry radar to view through the clouds and obtain new images of the surface, of much higher quality than those last photographed thirty-one years ago. The other, the Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging Plus (DAVINCI+), would descend through the atmosphere, sampling the air as it goes, in the hope of detecting phosphine. [ 80 ] [ 81 ] In June 2021, NASA announced that DAVINCI+ and VERITAS had been selected from four mission concepts picked in February 2020 as part of NASA's Discovery 2019 competition, for launch in the 2028–2030 time frame. [ 82 ]
There is also an ongoing long-term monitoring campaign with JCMT to study phosphine and other molecules in Venus's atmosphere. [ 83 ]
According to new research announced in January 2021, the spectral line at 266.94 GHz attributed to phosphine in the clouds of Venus was more likely to have been produced by sulfur dioxide in the mesosphere . [ 84 ] That claim was refuted in April 2021 for being inconsistent with the available data, and the detection of PH₃ in the Venusian atmosphere with ALMA was recovered at ~7 ppb. [ 70 ] By August 2021, it was found that the suspected contamination by sulfur dioxide contributed only 10% of the tentative signal in the phosphine spectral line band in the ALMA spectra taken in 2019, and about 50% in the ALMA spectra taken in 2017. [ 85 ]
Conventional water-based biochemistry has been argued to be impossible under Venusian conditions. In June 2021, calculations of water activity levels in Venusian clouds, based on data from space probes, showed these to be two orders of magnitude too low at the locations examined for any known extremophile bacteria to survive. [ 86 ] [ 87 ] Alternative calculations, based on an estimation of the energy costs of obtaining hydrogen in Venusian conditions compared to Earth conditions, indicate only a minor (6.5%) additional energy expenditure during Venusian photosynthesis of glucose. [ 88 ]
In August 2021, it was suggested that even saturated hydrocarbons are unstable in the ultra-acidic conditions of Venusian clouds, making cellular membranes problematic for concepts of Venusian life. Instead, it was proposed that Venusian "life" may be based on self-replicating molecular components of "red oil" – a known class of substances consisting of a mixture of polycyclic carbon compounds dissolved in concentrated sulfuric acid. [ 53 ] Conversely, in September 2024 it was reported that while short-chain fatty acids are unstable in concentrated sulfuric acid, it is possible to construct acid-stable analogs capable of bilayer membrane formation by replacing their carboxylic groups with sulfate , amine or phosphate groups. [ 89 ] Also, 19 of the 20 protein-making amino acids (with the exception of tryptophan ) and all nucleic acids are stable under Venusian cloud conditions. [ 90 ] [ 91 ]
In December 2021, it was suggested Venusian life – as the chemically most plausible cause – may photochemically produce ammonia from available chemicals, resulting in life-bearing droplets becoming a slurry of ammonium sulfite with a less acidic pH of 1. These droplets would deplete sulfur dioxide in upper cloud layers as they settle down, explaining the observed distribution of sulfur dioxide in the atmosphere of Venus, and may make the clouds no more acidic than some extreme terrestrial environments that harbor life. [ 92 ]
A hypothesis paper published in 2020 suggested that microbial life on Venus may have a two-stage life cycle. The metabolically active part of such a cycle would have to happen within cloud droplets to avoid a fatal loss of liquid. After such droplets grow large enough to sink under the force of gravity, organisms would fall with them into hotter lower layers and desiccate, becoming small and light enough to be raised again to the habitable layer by gravity waves on a timescale of approximately a year. [ 93 ]
A hypothesis paper published in 2021 criticized this concept, pointing out that the stagnancy of the lower haze layers on Venus makes a return from the haze layer to the relatively habitable clouds problematic, even for small particles. Instead, an in-cloud evolution model was proposed in which organisms evolve to become maximally absorptive (dark) for a given amount of biomass, with the darker, solar-heated areas of cloud kept afloat by thermal updrafts initiated by the organisms themselves. [ 53 ] Alternatively, microorganisms could be kept aloft by a negative photophoresis effect. [ 88 ] | https://en.wikipedia.org/wiki/Life_on_Venus
Verily Life Sciences LLC , [ 2 ] also known as Verily (formerly Google Life Sciences ), is Alphabet Inc. 's research organization devoted to the study of life sciences . [ 3 ] [ 4 ] The organization was formerly a division of Google X , until August 10, 2015, when Sergey Brin announced that the organization would become an independent subsidiary of Alphabet Inc . [ 5 ] with restructuring completed on October 2, 2015. On December 7, 2015, Google Life Sciences was renamed Verily. [ 6 ] [ 7 ] As of 2025, Verily is the first “bet” to successfully divest from Google and is now operating as a standalone company under Alphabet.
On 9 September 2014, the division acquired Lift Labs, the makers of Liftware . [ 8 ]
In January 2019, Verily Life Sciences raised $1 billion in funding.
At the end of 2019, Verily sold its stake in robot-assisted surgery joint venture Verb Surgical to development partner Johnson & Johnson . [ 9 ]
In August 2020, Verily announced that it is entering into the insurance market with the launch of Coefficient Insurance Company. The new subsidiary will be backed by Swiss Re Group's commercial insurance unit. [ 10 ]
In September 2022, Verily announced that longtime CEO Andy Conrad would step down in January 2023, to be replaced by Stephen Gillett, [ 11 ] who became CEO on January 3, 2023. [ 12 ]
In January 2023, fifteen percent of Verily's workforce was laid off as part of a broader restructuring by parent company, Alphabet. [ 13 ] The Information reported in August that Gillett had told employees they would stop relying on Alphabet on "a wide range of corporate services", signaling a potential spin-out as an independent company. [ 14 ]
In June 2024, Verily decided to close its operations in Israel three years after opening a research and development center in the country. Verily staff in Israel are expected to leave by the third quarter of 2024. The company cited an effort to refocus its strategy on core products and projects as the reason for the closure. [ 15 ]
In August 2024, Verily moved its headquarters from South San Francisco to Dallas citing significant investment and involvement in the Texas healthcare and technology sectors. [ 16 ] | https://en.wikipedia.org/wiki/Life_sciences_division_of_Google_X |
The Lifecycle Modeling Language (LML) is an open-standard modeling language designed for systems engineering . It supports the full lifecycle (conceptual, utilization, support and retirement stages), along with the integration of all lifecycle disciplines, including program management , systems and design engineering , verification and validation , and deployment and maintenance, into one framework. [ 1 ] LML was originally designed by the LML steering committee. The specification was published October 17, 2013.
This is a modeling language like UML and SysML that supports additional project management uses such as risk analysis and scheduling. LML uses common language to define its modeling elements such as entity, attribute, schedule, cost, and relationship. [ 2 ]
LML communicates cost, schedule and performance to all stakeholders in the system lifecycle.
LML combines logical constructs with an ontology to capture information. SysML is mainly constructs and has a limited ontology, while the DoDAF MetaModel 2.0 (DM2) only has an ontology. LML simplifies both the constructs and the ontology to make them more complete yet easier to use. There are only 12 primary entity classes. Almost all of the classes relate to each other and themselves with consistent words, e.g., Asset performs Action; Action performed by Asset. [ 3 ] SysML uses object-oriented design, because it was designed to relate systems thinking to software development; no other discipline in the lifecycle uses object-oriented design and analysis extensively. LML captures the entire lifecycle from cradle to grave. [ 1 ]
Systems Engineers have identified complexity as a major issue. [ 3 ] LML is a new approach to analyzing, planning, specifying, designing, building and maintaining modern systems.
LML focuses on these 6 goals:
1. To be easy to understand
2. To be easy to extend
3. To support both functional and object oriented approaches within the same design
4. To be a language that can be understood by most system stakeholders, not just Systems Engineers
5. To support systems from cradle to grave
6. To support both evolutionary and revolutionary changes to system plans and designs over the lifetime of a system [ 1 ]
The LML Steering Committee was formed in February 2013 to review a proposed draft ontology and set of diagrams that forms the LML specification. Contributors from many academic and commercial organizations provided direct input into the specification, resulting in its publication in October 2013. Presentations and tutorials were given at the National Defense Industrial Association (NDIA) Systems Engineering Conference (October 2013) and the Systems Engineering in DC (SEDC) in April 2014.
A predecessor to LML was developed by Dr. Steven H. Dam, SPEC Innovations, as part of a methodology called Knowledge-Based Analysis and Design (KBAD). The ontology portion was prototyped in a systems engineering database tool. Ideas on how to better implement it and the development of key LML diagrams (Action and Asset) were part of their Innoslate product development from 2009 to present. [ 4 ]
Ontologies provide a set of defined terms and relationships between the terms to capture the information that describes the physical, functional, performance, and programmatic aspects of the system.
Common ways for describing such ontologies are "Entity", "Relationship", and "Attribute" (ERA). ERA is often used to define database schemas. LML extends the ERA schema with "Attributes on Relationships", a feature that can reduce the number of required "Relationships", in the same way that "Attributes" reduce the number of required "Entities" in ERA.
In alignment with the first goal of LML, "Entity", "Relationship", "Attribute", and "Attribute on Relationship" have equivalent English language elements: noun , verb , adjective and adverb . [ 1 ]
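As a concrete illustration of how the four constructs fit together, the following sketch (with hypothetical class and field names that are not part of the LML specification itself) models an Entity with Attributes, a Relationship defined in both directions, and an Attribute on that Relationship, using the "Asset performs Action" example:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:                      # noun: uniquely identifiable, exists by itself
    name: str                      # the LML "name" attribute, used here as the identifier
    attributes: dict = field(default_factory=dict)   # adjectives describing the entity

@dataclass
class Relationship:                # verb: connects two entities, defined in both directions
    source: Entity
    target: Entity
    verb: str                      # e.g. "performs"
    inverse: str                   # e.g. "performed by"
    attributes: dict = field(default_factory=dict)   # adverbs: attributes on the relationship

# "Asset performs Action" / "Action performed by Asset"
asset = Entity("Ground Station", {"description": "Receives spacecraft telemetry"})
action = Entity("Downlink Data", {"duration": "10 min"})
performs = Relationship(asset, action, "performs", "performed by",
                        {"start": "T+00:05"})        # adverb qualifying the relationship

print(f"{performs.source.name} {performs.verb} {performs.target.name}")
print(f"{performs.target.name} {performs.inverse} {performs.source.name}")
```

Run as-is, the sketch prints the relationship in both directions, mirroring LML's convention that every relationship name has a defined inverse.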
Entity (noun) An entity is defined as something that is uniquely identifiable and can exist by itself. There are only 12 parent entities in LML: Action, Artifact, Asset, Characteristic, Connection, Cost, Decision, Input/Output, Location, Risk, Statement and Time.
Several child entities have been defined to capture information that stakeholders need. The child entities have the attributes and relationships of the parents plus additional attributes and relationships that make them unique. Child entities include: Conduit (child of Connection), Logical (child of Connection), Measure (child of Characteristic), Orbital (child of Location), Physical (child of Location), Requirement (child of Statement), Resource (child of Asset), and Virtual (child of Location).
Every entity has a name or number or description attribute or combination of the three to identify it uniquely. The name is a word or small collection of words providing an overview of information about the entity.
The number provides a numerical way to identify the entity. The description provides more detail about that entity. [ 1 ]
Attribute (adjective) Attributes work in the same way as adjectives. Entities (the nouns) can have names, numbers, and description attributes. The inherent characteristic or quality of an entity is an attribute. Every attribute has a name that identifies it uniquely within an entity. Attribute names are unique within an entity, but may be used in other entities. The name provides an overview of information about the attribute. The attribute data type specifies the data associated with the attribute. [ 1 ]
Relationship (verb) A relationship works the same way a verb does, connecting nouns, or in this case entities. Relationships enable a simple method to see how entities connect. For example, when connecting an action to a statement, LML uses "traced from" as the relationship: an Action is traced from a Statement. The inverse relation of "traced from" is "traced to." Relationships are defined in both directions and have unique names with the same verb. The standard parent–child relationship is "decomposed by" and its inverse is "decomposes".
Relationship names are unique across the whole schema. [ 1 ]
Attributes on Relationships (adverb) Classic ERA modeling does not include "attributes on relationships", but they are included in LML. In terms of the English language, an "attribute on a relationship" is like an adverb, helping to describe the relationship. Analogous to the way in which attributes relate to entities, the "attribute on a relationship" has a name that is unique to its relationship, but need not be unique across other relationships. [ 1 ] | https://en.wikipedia.org/wiki/Lifecycle_Modeling_Language
Lifileucel , sold under the brand name Amtagvi , is an adoptive T cell therapy used for the treatment of melanoma . [ 1 ] [ 2 ] [ 3 ]
Specifically, lifileucel is a tumor-derived T cell immunotherapy composed of a recipient's own T cells. A portion of the recipient's tumor tissue is removed during a surgical procedure prior to treatment. [ 3 ] The recipient's T cells (the tumor-infiltrating lymphocytes ) are separated from the tumor tissue, multiplied and then infused into the patient in a single dose. [ 3 ] T cells are a type of cell that helps the immune system fight cancer and infections. [ 3 ]
Lifileucel is the first tumor-derived T cell immunotherapy approved by the US Food and Drug Administration (FDA). [ 3 ] It was approved for medical use in the United States in February 2024. [ 2 ] [ 4 ]
Lifileucel is indicated for the treatment of adults with unresectable (unable to be removed with surgery) or metastatic (spread to other parts of the body) melanoma previously treated with other therapies (a PD-1 blocking antibody , and if BRAF V600 mutation positive, a BRAF inhibitor with or without a MEK inhibitor). [ 3 ]
There are many side effects and toxicological considerations to weigh before administering this therapy. The manufacturer notes a correlation between treatment and death (7.5% of recipients). [ 2 ] These deaths are attributed to adverse effects such as severe infection (26.9%), internal organ bleeding, acute renal failure, bone marrow failure and failures of other organ systems, occurring within a period of 30 to 150 days. Likewise, reactions can occur from the first day of infusion; the most common is a hypersensitivity reaction. [ 2 ] The less severe adverse effects with a high incidence (20%) include symptoms such as fever, rigors, hypotension, rash, chills, tachycardia, cough and wheezing. [ 2 ] There is no evidence of an overdose causing a toxicological effect.
Before the TILs are infused, the patient must undergo chemotherapy to suppress the immune system. After this, AMTAGVI is administered intravenously as a single dose of 7.5 × 10⁹ to 72 × 10⁹ viable cells. The therapy reaches its target through the blood, as any lymphocyte would, to trigger the immune system. [ 2 ] A critical step for the administration and correct functioning of AMTAGVI is the application of interleukin-2 (IL-2) after the AMTAGVI infusion, because IL-2 helps to promote and enhance the activity of dendritic cells, which are responsible for antigen presentation, as well as CD8+ T cell activity. [ 5 ]
The manufacturer and other databases have yet to publish bioavailability data, and the same is true for half-life. Nevertheless, as AMTAGVI is administered intravenously, bioavailability can be considered 100%, with a disposition following normal immunological activity, because AMTAGVI therapy is a mix of IL-2 and cytotoxic T cells. Based on the literature, the half-lives of CD4+ and CD8+ T cells are 87 and 77 days, respectively. [ 6 ] Regarding the IL-2 half-life, this cytokine has a very short one, with an average of 85 minutes for elimination and 13 minutes for distribution, [ 7 ] but enough to trigger a proper immune response.
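To put the quoted half-lives on a common footing, the standard first-order decay relation can be used; the numbers below are purely illustrative calculations from the literature values cited above, not pharmacokinetic data reported for AMTAGVI itself:

\[ N(t) = N_0 \left( \tfrac{1}{2} \right)^{t / t_{1/2}} \]

With \( t_{1/2} = 77 \) days for CD8+ T cells, roughly \( (1/2)^{30/77} \approx 76\% \) of infused cells would remain after 30 days, whereas with \( t_{1/2} \approx 85 \) minutes for IL-2 elimination, less than 0.3% of a dose remains after 12 hours.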
Because of the nature of this therapy, metabolism and elimination are described from an immunological perspective. After a T cell encounters its specific antigen, there are many pathways that a lymphocyte can follow in an immune response. The most common and principal mechanisms are apoptosis (via intrinsic, extrinsic, or caspase pathways), activated cell-autonomous death (ACAD), and activation-induced cell death (AICD). [ 8 ] These pathways (metabolism) have been studied for many years in many mammals.
Melanoma is a malignant neoplasm (abnormal tissue mass formed due to excessive division of a single cell) of melanocytes, the cells responsible for producing skin pigment. [ 9 ] As such, melanoma is classified as a type of cancer.
Extensive research has identified ultraviolet radiation (UVr) as the primary environmental trigger for melanoma. [ 9 ] UVr is believed to initiate multiple pathways leading to the disease, primarily through genetic mutations. A key mutation linked to melanoma is V600E in the BRAF gene, which promotes cancer development and, in most cases, leads to metastasis. [ 10 ] Other genetic mutations associated with melanoma include NRAS , c-KIT , and GNAQ / GNA11 .
Genomic analysis of melanoma tumors has revealed that approximately 50% of cases involve BRAF mutations, highlighting the critical role of this mechanism. UVr exposure leads to the formation of pyrimidine dimers in melanocyte DNA, producing C > T substitutions. [ 11 ] The BRAF gene, classified as an oncogene, encodes a protein kinase that drives cell proliferation. [ 9 ] Due to its well-studied role and statistical significance, the BRAF pathway is a major target for melanoma therapies.
Despite therapeutic advances, melanoma has shown resistance to many conventional treatments, including targeted drugs and immunotherapies such as immune checkpoint inhibitors (ICI). Patients with advanced or metastatic melanoma often have limited treatment options. [ 12 ] Studies indicate that about 60% of patients receiving ICI therapy develop resistance, and 35%–40% discontinue treatment due to severe adverse effects.
Given the high resistance rates, new treatment approaches are being developed. One promising therapy is lifileucel (AMTAGVI), a tumor-infiltrating lymphocyte (TIL) therapy designed to enhance the immune system’s ability to target melanoma cells. [ 13 ] Unlike traditional small-molecule drugs or protein-based therapies, lifileucel is a form of cell therapy. It involves extracting TILs from the patient’s tumor, modifying them ex vivo, and reinfusing them into the patient to boost the immune response against melanoma. [ 13 ]
The main advantage of this cell therapy is that it offers a possible treatment, and a better quality of life, after other melanoma-related treatments have failed. Even innovative therapies such as immune checkpoint inhibitors (whose development was recognized with a Nobel Prize) can fail, and newer approaches such as AMTAGVI/lifileucel can address that situation: when a small molecule or a biologic does not work, health and safety may still be achieved through other modalities, such as cell therapy and gene therapy. The treatment nevertheless has significant disadvantages. Because candidates for this therapy have already endured many health problems, taking on the additional risks of AMTAGVI can be especially burdensome.
AMTAGVI is an autologous tumour-derived T-cell immunotherapy (a biologic), and at present there are few patients eligible for this therapy because it is a last resort for treating melanoma. For this reason, no industrial-scale process exists. However, Iovance Biotherapeutics, Inc. (the manufacturer) describes the manufacturing process, from "tumour to treatment", as follows: AMTAGVI begins with surgical collection of tumour tissue; the collected tissue is then shipped to a high-level laboratory (not specified), where T cells are extracted. These cells are amplified and grown to billion-cell scale before being shipped back to the treatment centre where the patient is located. [ 14 ] Although the manufacturer does not describe the procedure used to expand the lymphocytes, the TIL expansion methodology has been well defined in other sources, and a method proposed by the Karolinska Institutet has been used as a basis for this article. [ 15 ] The resulting AMTAGVI formulation is composed of CD4+ cells, CD8+ cells, monocytes, B cells and NK cells (but mostly the first two).
The formulation contains 48% PlasmaLyte A, 50% CryoStor CS10, 2% of 25% human serum albumin, and 300 IU/mL IL-2. Finally, AMTAGVI is delivered in bags containing 125 mL of viable cells. [ 2 ]
The safety and effectiveness of lifileucel was evaluated in a global, multicenter, multicohort, clinical study including adult participants with unresectable or metastatic melanoma who had previously been treated with at least one systemic therapy, including a PD-1 blocking antibody, and if positive for the BRAF V600 mutation, a BRAF inhibitor or BRAF inhibitor with an MEK inhibitor. [ 3 ] Effectiveness was measured via the objective response rate to treatment and duration of response (measured from the date of confirmed initial objective response to the date of progression, death from any cause, starting a new anti-cancer treatment or discontinuation from follow-up, whichever came first). [ 3 ]
The US Food and Drug Administration (FDA) approved Lifileucel through the accelerated approval pathway and granted the application orphan drug , regenerative medicine advanced therapy , fast track , and priority review designations under the brand name Amtagvi to Iovance Biotherapeutics . [ 3 ]
The clinical trials for lifileucel/AMTAGVI in melanoma include two phases. Phase II (178 patients) demonstrated the therapy’s efficacy and durable response in patients with unresectable or metastatic melanoma who had failed PD-1 blockers and BRAF inhibitors; however, significant adverse effects were reported. [ 12 ] Phase III (670 patients) aims to compare lifileucel combined with pembrolizumab for advanced melanoma stages (IIIC, IIID, or IV). Results are expected by 2028 and full completion by 2030. [ 8 ] The supplementary information includes a table summarising the clinical trials (Phase II and III).
Lifileucel was approved for medical use in the United States in February 2024. [ 2 ] [ 4 ] [ 16 ] [ 17 ]
Lifileucel is the international nonproprietary name . [ 18 ]
Regarding patents, many TIL therapies exist under different brands and for many cancer types. According to DrugBank, however, no patents are registered for AMTAGVI, although some FDA protocols are protected; such protection is likely focused on elements such as the growth medium used to expand lymphocytes after tumour tissue extraction and certain other formulations. As for a generic version of AMTAGVI, the possibility is almost nil because the therapy is highly personalised. Similar treatments may emerge for other types of cancer, but they would not be generic versions of AMTAGVI.
According to the Pharmaceutical Technology portal, AMTAGVI is projected to generate annual revenue of US$584 million in the USA. Currently, AMTAGVI is only available in the USA, but it is expected to expand to other regions in the future to increase its market share. [ 19 ] At the time this report was written, AMTAGVI/lifileucel was the first and only cellular therapy for melanoma treatment; as such, no competitors or other comparable drugs are available in the market.
This article incorporates public domain material from US Food and Drug Administration . United States Department of Health and Human Services . | https://en.wikipedia.org/wiki/Lifileucel |
In condensed matter physics and physical chemistry , the Lifshitz theory of van der Waals forces , sometimes called the macroscopic theory of van der Waals forces , is a method proposed by Evgeny Mikhailovich Lifshitz in 1954 for treating van der Waals forces between bodies which does not assume pairwise additivity of the individual intermolecular forces; that is to say, the theory takes into account the influence of neighboring molecules on the interaction between every pair of molecules located in the two bodies, rather than treating each pair independently. [ 1 ] [ 2 ]
The van der Waals force between two molecules, in this context, is the sum of the attractive or repulsive forces between them; these forces are primarily electrostatic in nature, and in their simplest form might consist of a force between two charges, two dipoles , or between a charge and a dipole. Thus, the strength of the force may often depend on the net charge, electric dipole moment, or the electric polarizability ( α {\displaystyle \alpha } ) (see for example London force ) of the molecules, with highly polarizable molecules contributing to stronger forces, and so on.
The total force between two bodies, each consisting of many molecules, is in the van der Waals theory simply the sum of the intermolecular van der Waals forces, where pairwise additivity is assumed. That is to say, the forces are summed as though each pair of molecules interacts completely independently of their surroundings (See Van der Waals forces between Macroscopic Objects for an example of such a treatment). This assumption is usually correct for gases, but presents a problem for many condensed materials, as it is known that the molecular interactions may depend strongly on their environment and neighbors. For example, in a conductor, a point-like charge might be screened by the electrons in the conduction band , [ 3 ] and the polarizability of a condensed material may be vastly different from that of an individual molecule. [ 4 ] In order to correctly predict the van der Waals forces of condensed materials, a theory that takes into account their total electrostatic response is needed.
The problem of pairwise additivity is completely avoided in the Lifshitz theory, where the molecular structure is ignored and the bodies are treated as continuous media. The forces between the bodies are now derived in terms of their bulk properties, such as dielectric constant and refractive index , which already contain all the necessary information from the original molecular structure.
The original Lifshitz 1955 paper proposed this method relying on quantum field theory principles, and is, in essence, a generalization of the Casimir effect , from two parallel, flat, ideally conducting surfaces, to two surfaces of any material. Later papers by Langbein , [ 5 ] [ 6 ] Ninham, [ 7 ] Parsegian [ 8 ] and Van Kampen [ 9 ] showed that the essential equations could be derived using much simpler theoretical techniques, an example of which is presented here.
The Lifshitz theory can be expressed as an effective Hamaker constant in the van der Waals theory.
Consider, for example, the interaction between an ion of charge Q {\textstyle Q} , and a nonpolar molecule with polarizability α 2 {\textstyle \alpha _{2}} at distance r {\textstyle r} . In a medium with dielectric constant ϵ 3 {\displaystyle \epsilon _{3}} , the interaction energy between a charge and an electric dipole p {\displaystyle p} is given by [ 10 ]
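For a dipole aligned along the field of the ion, this energy has the standard form

{\displaystyle w(r)=-{\frac {Qp}{4\pi \epsilon _{0}\epsilon _{3}r^{2}}},}

with ϵ 0 the vacuum permittivity.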
with the dipole moment of the polarizable molecule given by p = α 2 E {\textstyle p=\alpha _{2}E} , where E {\textstyle E} is the strength of the electric field at distance r {\textstyle r} from the ion. According to Coulomb's law:
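In SI units, the magnitude of the ion's field at distance r in the medium is

{\displaystyle E={\frac {Q}{4\pi \epsilon _{0}\epsilon _{3}r^{2}}}.}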
so we may write the interaction energy as
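Combining the two relations, and including the factor of 1/2 that standard treatments (such as Israelachvili's) attach to the energy of an induced dipole, gives

{\displaystyle w(r)=-{\tfrac {1}{2}}\alpha _{2}E^{2}=-{\frac {Q^{2}\alpha _{2}}{2(4\pi \epsilon _{0}\epsilon _{3})^{2}r^{4}}}.}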
Consider now how the interaction energy will change if the right-hand molecule is replaced with a medium of density ρ 2 {\textstyle \rho _{2}} of such molecules. According to the "classical" van der Waals theory, the total force will simply be the summation over individual molecules. Integrating over the volume of the medium (see the third figure), we might expect the total interaction energy with the charge to be
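Carrying out this half-space integration for the −1/r⁴ pair energy above (a standard calculation; for a pair potential −C/r⁴ it yields −πCρ₂/D at separation D) would give

{\displaystyle W(D)=-{\frac {\pi \rho _{2}Q^{2}\alpha _{2}}{2(4\pi \epsilon _{0}\epsilon _{3})^{2}D}}.}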
But this result cannot be correct, since it is well known that a charge Q {\textstyle Q} in a medium of dielectric constant ϵ 3 {\displaystyle \epsilon _{3}} at a distance D {\textstyle D} from the plane surface of a second medium of dielectric constant ϵ 2 {\displaystyle \epsilon _{2}} experiences a force as if there were an 'image' charge of strength Q ′ = − Q ( ϵ 2 − ϵ 3 ) / ( ϵ 2 + ϵ 3 ) {\textstyle Q'=-Q(\epsilon _{2}-\epsilon _{3})/(\epsilon _{2}+\epsilon _{3})} at distance D on the other side of the boundary. [ 11 ] The force between the real and image charges must then be
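With the image charge a distance 2D from the real charge, Coulomb's law gives

{\displaystyle F(D)={\frac {QQ'}{4\pi \epsilon _{0}\epsilon _{3}(2D)^{2}}}=-{\frac {Q^{2}(\epsilon _{2}-\epsilon _{3})}{16\pi \epsilon _{0}\epsilon _{3}(\epsilon _{2}+\epsilon _{3})D^{2}}},}

which is attractive when ϵ 2 > ϵ 3 .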
and the energy, therefore
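Integrating the force from infinite separation down to D gives

{\displaystyle W(D)=-{\frac {Q^{2}(\epsilon _{2}-\epsilon _{3})}{16\pi \epsilon _{0}\epsilon _{3}(\epsilon _{2}+\epsilon _{3})D}}.}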
Equating the two expressions for the energy, we define a new effective polarizability that must obey
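Under the conventions used above, this condition takes the form

{\displaystyle \alpha _{2}\rho _{2}=2\epsilon _{0}\epsilon _{3}\,{\frac {\epsilon _{2}-\epsilon _{3}}{\epsilon _{2}+\epsilon _{3}}}.}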
Similarly, replacing the real charge Q {\textstyle Q} with a medium of density ρ 1 {\textstyle \rho _{1}} and polarizability α 1 {\displaystyle \alpha _{1}} gives an expression for α 1 ρ 1 {\displaystyle \alpha _{1}\rho _{1}} . Using these two relations, we may restate our theory in terms of an effective Hamaker constant. Specifically, using McLachlan's generalized theory of VDW forces the Hamaker constant for an interaction potential of the form U ( r ) = − C / r n {\textstyle U(r)=-C/r^{n}} between two bodies at temperature T {\textstyle T} is [ 12 ]
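For the usual London (n = 6) case, with the Hamaker constant related to the pair coefficient by A = π²Cρ₁ρ₂, McLachlan's result can be written (the n = 0 term of the sum being taken with half weight) as

{\displaystyle A={\frac {6\pi ^{2}k_{B}T\rho _{1}\rho _{2}}{(4\pi \epsilon _{0})^{2}}}\sum _{n=0,1,2\ldots }{\frac {\alpha _{1}(i\nu _{n})\,\alpha _{2}(i\nu _{n})}{\epsilon _{3}^{2}(i\nu _{n})}},}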
with ν n = 2 π n k B T / h {\textstyle \nu _{n}=2\pi nk_{B}T/h} , where k B {\textstyle k_{B}} and h {\textstyle h} are Boltzmann's and Planck's constants correspondingly. Inserting our relations for ρ α {\displaystyle \rho \alpha } and approximating the sum as an integral k B T ∑ n = 0 , 1... ∞ → h 2 π ∫ ν 1 ∞ d ν {\textstyle k_{B}T\sum _{n=0,1...}^{\infty }\rightarrow {\frac {h}{2\pi }}\int \limits _{\nu _{1}}^{\infty }d\nu } , the effective Hamaker constant in the Lifshitz theory may be approximated as
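The commonly quoted result of this approximation, for two media 1 and 2 interacting across medium 3, is

{\displaystyle A\approx {\frac {3}{4}}k_{B}T\left({\frac {\epsilon _{1}-\epsilon _{3}}{\epsilon _{1}+\epsilon _{3}}}\right)\left({\frac {\epsilon _{2}-\epsilon _{3}}{\epsilon _{2}+\epsilon _{3}}}\right)+{\frac {3h}{4\pi }}\int _{\nu _{1}}^{\infty }\left({\frac {\epsilon _{1}(i\nu )-\epsilon _{3}(i\nu )}{\epsilon _{1}(i\nu )+\epsilon _{3}(i\nu )}}\right)\left({\frac {\epsilon _{2}(i\nu )-\epsilon _{3}(i\nu )}{\epsilon _{2}(i\nu )+\epsilon _{3}(i\nu )}}\right)d\nu ,}

where the dielectric constants in the first (zero-frequency) term are the static values.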
We note that ϵ ( i ν ) {\displaystyle \epsilon (i\nu )} are real functions, and are related to measurable properties of the medium; [ 13 ] thus, the Hamaker constant in the Lifshitz theory can be expressed in terms of observable properties of the physical system.
The macroscopic theory of van der Waals forces has been validated in many experiments. Among the most notable are Derjaguin (1960); [ 14 ] Derjaguin, Abrikosova and Lifshitz (1956), [ 15 ] and Israelachvili and Tabor (1973), [ 16 ] who measured the balance of forces between macroscopic bodies of glass, or glass and mica; Haydon and Taylor (1968), [ 17 ] who measured the forces across bilayers by measuring their contact angle; and lastly Shih and Parsegian (1975), [ 18 ] who investigated van der Waals potentials between heavy alkali-metal atoms and gold surfaces using atomic-beam deflection.
In polymer science , the Lifson–Roig model [ 1 ] is a helix-coil transition model applied to the alpha helix - random coil transition of polypeptides ; [ 2 ] it is a refinement of the Zimm–Bragg model that recognizes that a polypeptide alpha helix is stabilized by a hydrogen bond only once three consecutive residues have adopted the helical conformation. To consider three consecutive residues each with two states (helix and coil), the Lifson–Roig model uses a 4x4 transfer matrix instead of the 2x2 transfer matrix of the Zimm–Bragg model, which considers only two consecutive residues. However, the simple nature of the coil state allows this to be reduced to a 3x3 matrix for most applications.
The Zimm–Bragg and Lifson–Roig models are but the first two in a series of analogous transfer-matrix methods in polymer science that have also been applied to nucleic acids and branched polymers. The transfer-matrix approach is especially elegant for homopolymers, since the statistical mechanics may be solved exactly using a simple eigenanalysis .
The Lifson–Roig model is characterized by three parameters: the statistical weight for nucleating a helix, the weight for propagating a helix and the weight for forming a hydrogen bond, which is granted only if three consecutive residues are in a helical state. Weights are assigned at each position in a polymer as a function of the conformation of the residue in that position and as a function of its two neighbors. A statistical weight of 1 is assigned to the "reference state" of a coil unit whose neighbors are both coils, and a "nucleation" unit is defined (somewhat arbitrarily) as two consecutive helical units neighbored by a coil. A major modification of the original Lifson–Roig model introduces "capping" parameters for the helical termini, in which the N- and C-terminal capping weights may vary independently. [ 3 ] The correlation matrix for this modification can be represented as a matrix M, reflecting the statistical weights of the helix state h and coil state c .
The Lifson–Roig model may be solved by the transfer-matrix method using a transfer matrix M, where w is the statistical weight for helix propagation, v for initiation, n for N-terminal capping, and c for C-terminal capping. (In the traditional model n and c are equal to 1.) The partition function for the helix-coil transition equilibrium is obtained by multiplying the transfer matrices for all residues in the chain and contracting the product with end vectors, where V is the end vector V = [ 0001 ] {\displaystyle V=[0001]} , arranged to ensure the coil state of the first and last residues in the polymer.
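A minimal numerical sketch of this calculation for the basic model (no capping, i.e. n = c = 1) is given below in Python. The 4×4 transfer matrix is indexed by the conformations of consecutive residue pairs in the order (hh, hc, ch, cc), and the boundary vectors correspond to treating the residues just outside the chain as coil; this state ordering and boundary handling are one common convention, and the parameter values are purely illustrative.

import numpy as np

def lifson_roig_partition(N, w, v):
    """Partition function of an N-residue homopolymer in the basic
    Lifson-Roig model (no capping, n = c = 1).

    Pair states are ordered (hh, hc, ch, cc). Residue i contributes
    weight w if it and both neighbours are helical, v if it is helical
    with at least one coil neighbour, and 1 if it is coil."""
    M = np.array([
        [w,   v,   0.0, 0.0],   # preceding pair (h,h): residue i helical, previous residue helical
        [0.0, 0.0, 1.0, 1.0],   # preceding pair (h,c): residue i is coil
        [v,   v,   0.0, 0.0],   # preceding pair (c,h): residue i helical, previous residue coil
        [0.0, 0.0, 1.0, 1.0],   # preceding pair (c,c): residue i is coil
    ])
    # Fictitious coil residues are assumed at positions 0 and N+1:
    start = np.array([0.0, 0.0, 1.0, 1.0])  # chain begins in a (c, s1) pair
    end = np.array([0.0, 1.0, 0.0, 1.0])    # chain ends in an (sN, c) pair
    return start @ np.linalg.matrix_power(M, N) @ end

def mean_helicity(N, w, v, dw=1e-6):
    """Average fraction of residues carrying weight w (helical residues
    with helical neighbours), from the derivative of ln Z with respect to ln w."""
    z_plus = lifson_roig_partition(N, w + dw, v)
    z_minus = lifson_roig_partition(N, w - dw, v)
    return w * (np.log(z_plus) - np.log(z_minus)) / (2.0 * dw) / N

print(lifson_roig_partition(3, w=1.6, v=0.05))   # small-chain partition function
print(mean_helicity(30, w=1.6, v=0.05))          # helical content of a 30-mer

For long homopolymers the same quantities follow from the largest eigenvalue of the transfer matrix, which is the eigenanalysis route mentioned above.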
This strategy for parameterizing helix-coil transitions was originally developed for alpha helices , whose hydrogen bonds occur between residues i and i+4 ; however, it is straightforward to extend the model to 3 10 helices and pi helices , with i+3 and i+5 hydrogen bonding patterns respectively. The complete alpha/3 10 /pi transfer matrix includes weights for transitions between helix types as well as between helix and coil states. However, because 3 10 helices are much more common in the tertiary structures of proteins than pi helices, extension of the Lifson–Roig model to accommodate 3 10 helices - resulting in a 9x9 transfer matrix when capping is included - has found a greater range of application. [ 4 ] Analogous extensions of the Zimm–Bragg model have been put forth but have not accommodated mixed helical conformations. [ 5 ] | https://en.wikipedia.org/wiki/Lifson–Roig_model |
The lift-off process in microstructuring technology is a method of creating structures (patterning) of a target material on the surface of a substrate (e.g. wafer ) using a sacrificial material (e.g. photoresist ).
It is an additive technique, as opposed to more traditional subtractive techniques such as etching .
The scale of the structures can vary from the nanoscale up to the centimeter scale or beyond, but they are typically of micrometric dimensions .
An inverse pattern is first created in the sacrificial stencil layer (ex. photoresist ), deposited on the surface of the substrate. This is done by etching openings through the layer so that the target material can reach the surface of the substrate in those regions, where the final pattern is to be created. The target material is deposited over the whole area of the wafer, reaching the surface of the substrate in the etched regions and staying on the top of the sacrificial layer in the regions, where it was not previously etched. When the sacrificial layer is washed away (photoresist in a solvent ), the material on the top is lifted-off and washed together with the sacrificial layer below. After the lift-off, the target material remains only in the regions where it had a direct contact with the substrate.
Lift-off is applied in cases where a direct etching of structural material would have undesirable effects on the layer below. Lift-off is a cheap alternative to etching in a research context, which permits a slower turn-around time. Finally, lifting off a material is an option if there is no access to an etching tool with the appropriate gases.
There are 3 major problems with lift-off:
If the "ears" (thin flaps of deposited material left standing at the pattern edges after lift-off) remain on the surface, there is a risk that they will penetrate layers deposited later on top of the wafer and cause unwanted connections.
The lift-off process is used mostly to create metallic interconnections.
There are several types of lift-off processes, and what can be achieved depends strongly on the actual process being used. Very fine structures have been achieved using EBL , for instance. The lift-off process can also involve multiple layers of different types of resist. This can, for instance, be used to create shapes that prevent the sidewalls of the resist from being covered during the metal deposition stage.
In aerodynamics , the lift-to-drag ratio (or L/D ratio ) is the lift generated by an aerodynamic body such as an aerofoil or aircraft, divided by the aerodynamic drag caused by moving through air. It describes the aerodynamic efficiency under given flight conditions. The L/D ratio for any given body will vary according to these flight conditions.
For an aerofoil wing or powered aircraft, the L/D is specified when in straight and level flight. For a glider it determines the glide ratio , of distance travelled against loss of height.
The term is calculated for any particular airspeed by measuring the lift generated, then dividing by the drag at that speed. These vary with speed, so the results are typically plotted on a 2-dimensional graph. In almost all cases the graph forms a U-shape, due to the two main components of drag. The L/D may be calculated using computational fluid dynamics or computer simulation . It is measured empirically by testing in a wind tunnel or in free flight test . [ 1 ] [ 2 ] [ 3 ]
The L/D ratio is affected by both the form drag of the body and by the induced drag associated with creating a lifting force. It depends principally on the lift and drag coefficients, angle of attack to the airflow and the wing aspect ratio .
The L/D ratio is inversely proportional to the energy required for a given flightpath, so that doubling the L/D ratio will require only half of the energy for the same distance travelled. This results directly in better fuel economy .
The L/D ratio can also be used for water craft and land vehicles. The L/D ratios for hydrofoil boats and displacement craft are determined similarly to aircraft.
Lift can be created when an aerofoil-shaped body travels through a viscous fluid such as air. The aerofoil is often cambered and/or set at an angle of attack to the airflow. The lift then increases as the square of the airspeed.
Whenever an aerodynamic body generates lift, this also creates lift-induced drag or induced drag. At low speeds an aircraft has to generate lift with a higher angle of attack , which results in a greater induced drag. This term dominates the low-speed side of the graph of lift versus velocity.
Form drag is caused by movement of the body through air. This type of drag, known also as air resistance or profile drag varies with the square of speed (see drag equation ). For this reason profile drag is more pronounced at greater speeds, forming the right side of the lift/velocity graph's U shape. Profile drag is lowered primarily by streamlining and reducing cross section.
The total drag on any aerodynamic body thus has two components, induced drag and form drag.
The rates of change of lift and drag with angle of attack (AoA) are called respectively the lift and drag coefficients C L and C D . The varying ratio of lift to drag with AoA is often plotted in terms of these coefficients.
For any given value of lift, the AoA varies with speed. Graphs of C L and C D vs. speed are referred to as drag curves . Speed is shown increasing from left to right. The lift/drag ratio is given by the slope from the origin to some point on the curve and so the maximum L/D ratio does not occur at the point of least drag coefficient, the leftmost point. Instead, it occurs at a slightly greater speed. Designers will typically select a wing design which produces an L/D peak at the chosen cruising speed for a powered fixed-wing aircraft, thereby maximizing economy. Like all things in aeronautical engineering , the lift-to-drag ratio is not the only consideration for wing design. Performance at a high angle of attack and a gentle stall are also important.
As the aircraft fuselage and control surfaces will also add drag and possibly some lift, it is fair to consider the L/D of the aircraft as a whole. The glide ratio , which is the ratio of an (unpowered) aircraft's forward motion to its descent, is (when flown at constant speed) numerically equal to the aircraft's L/D. This is especially of interest in the design and operation of high performance sailplanes , which can have glide ratios almost 60 to 1 (60 units of distance forward for each unit of descent) in the best cases, but with 30:1 being considered good performance for general recreational use. Achieving a glider's best L/D in practice requires precise control of airspeed and smooth and restrained operation of the controls to reduce drag from deflected control surfaces. In zero wind conditions, L/D will equal distance traveled divided by altitude lost. Achieving the maximum distance for altitude lost in wind conditions requires further modification of the best airspeed, as does alternating cruising and thermaling. To achieve high speed across country, glider pilots anticipating strong thermals often load their gliders (sailplanes) with water ballast : the increased wing loading means optimum glide ratio at greater airspeed, but at the cost of climbing more slowly in thermals. As noted below, the maximum L/D is not dependent on weight or wing loading, but with greater wing loading the maximum L/D occurs at a faster airspeed. Also, the faster airspeed means the aircraft will fly at greater Reynolds number and this will usually bring about a lower zero-lift drag coefficient .
Mathematically, the maximum lift-to-drag ratio can be estimated as [ 6 ]
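A form consistent with the wetted-area expression derived below is

{\displaystyle (L/D)_{\text{max}}={\frac {1}{2}}{\sqrt {\frac {\pi \varepsilon \,AR}{C_{D,0}}}},}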
where AR is the aspect ratio , ε {\displaystyle \varepsilon } the span efficiency factor , a number less than but close to unity for long, straight-edged wings, and C D , 0 {\displaystyle C_{D,0}} the zero-lift drag coefficient .
Most importantly, the maximum lift-to-drag ratio is independent of the weight of the aircraft, the area of the wing, or the wing loading.
It can be shown that two main drivers of maximum lift-to-drag ratio for a fixed wing aircraft are wingspan and total wetted area . One method for estimating the zero-lift drag coefficient of an aircraft is the equivalent skin-friction method. For a well designed aircraft, zero-lift drag (or parasite drag) is mostly made up of skin friction drag plus a small percentage of pressure drag caused by flow separation. The method uses the equation [ 7 ]
where C fe {\displaystyle C_{\text{fe}}} is the equivalent skin friction coefficient, S wet {\displaystyle S_{\text{wet}}} is the wetted area and S ref {\displaystyle S_{\text{ref}}} is the wing reference area. The equivalent skin friction coefficient accounts for both separation drag and skin friction drag and is a fairly consistent value for aircraft types of the same class. Substituting this into the equation for maximum lift-to-drag ratio, along with the equation for aspect ratio ( b 2 / S ref {\displaystyle b^{2}/S_{\text{ref}}} ), yields the equation ( L / D ) max = 1 2 π ε C fe b 2 S wet , {\displaystyle (L/D)_{\text{max}}={\frac {1}{2}}{\sqrt {{\frac {\pi \varepsilon }{C_{\text{fe}}}}{\frac {b^{2}}{S_{\text{wet}}}}}},} where b is wingspan. The term b 2 / S wet {\displaystyle b^{2}/S_{\text{wet}}} is known as the wetted aspect ratio. The equation demonstrates the importance of wetted aspect ratio in achieving an aerodynamically efficient design.
At supersonic speeds L/D values are lower. Concorde had a lift/drag ratio of about 7 at Mach 2, whereas a 747 has about 17 at about Mach 0.85.
Dietrich Küchemann developed an empirical relationship for predicting L/D ratio for high Mach numbers: [ 8 ]
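The relation is commonly quoted in the form

{\displaystyle (L/D)_{\text{max}}={\frac {4(M+3)}{M}},}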
where M is the Mach number. Wind tunnel tests have shown this to be approximately accurate.
When a fluid flows around an object, the fluid exerts a force on the object. Lift is the component of this force that is perpendicular to the oncoming flow direction. [ 1 ] It contrasts with the drag force, which is the component of the force parallel to the flow direction. Lift conventionally acts in an upward direction in order to counter the force of gravity , but it is defined to act perpendicular to the flow and therefore can act in any direction.
If the surrounding fluid is air, the force is called an aerodynamic force . In water or any other liquid, it is called a hydrodynamic force .
Dynamic lift is distinguished from other kinds of lift in fluids. Aerostatic lift or buoyancy , in which an internal fluid is lighter than the surrounding fluid, does not require movement and is used by balloons, blimps, dirigibles, boats, and submarines. Planing lift , in which only the lower portion of the body is immersed in a liquid flow, is used by motorboats, surfboards, windsurfers, sailboats, and water-skis.
A fluid flowing around the surface of a solid object applies a force on it. It does not matter whether the object is moving through a stationary fluid (e.g. an aircraft flying through the air) or whether the object is stationary and the fluid is moving (e.g. a wing in a wind tunnel) or whether both are moving (e.g. a sailboat using the wind to move forward). Lift is the component of this force that is perpendicular to the oncoming flow direction. [ 1 ] Lift is always accompanied by a drag force, which is the component of the surface force parallel to the flow direction.
Lift is mostly associated with the wings of fixed-wing aircraft , although it is more widely generated by many other streamlined bodies such as propellers , kites , helicopter rotors , racing car wings , maritime sails , wind turbines , and by sailboat keels , ship's rudders , and hydrofoils in water. Lift is also used by flying and gliding animals , especially by birds , bats , and insects , and even in the plant world by the seeds of certain trees. [ 2 ] While the common meaning of the word " lift " assumes that lift opposes weight, lift can be in any direction with respect to gravity, since it is defined with respect to the direction of flow rather than to the direction of gravity. When an aircraft is cruising in straight and level flight, the lift opposes gravity. However, when an aircraft is climbing , descending , or banking in a turn the lift is tilted with respect to the vertical. [ 3 ] Lift may also act as downforce on the wing of a fixed-wing aircraft at the top of an aerobatic loop , and on the horizontal stabiliser of an aircraft. Lift may also be largely horizontal, for instance on a sailing ship.
The lift discussed in this article is mainly in relation to airfoils; marine hydrofoils and propellers share the same physical principles and work in the same way, despite differences between air and water such as density, compressibility, and viscosity.
The flow around a lifting airfoil is a fluid mechanics phenomenon that can be understood on essentially two levels: There are mathematical theories , which are based on established laws of physics and represent the flow accurately, but which require solving equations. And there are physical explanations without math, which are less rigorous. [ 4 ] Correctly explaining lift in these qualitative terms is difficult because the cause-and-effect relationships involved are subtle. [ 5 ] A comprehensive explanation that captures all of the essential aspects is necessarily complex. There are also many simplified explanations , but all leave significant parts of the phenomenon unexplained, while some also have elements that are simply incorrect. [ 4 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ]
An airfoil is a streamlined shape that is capable of generating significantly more lift than drag. [ 11 ] A flat plate can generate lift, but not as much as a streamlined airfoil, and with somewhat higher drag.
Most simplified explanations follow one of two basic approaches, based either on Newton's laws of motion or on Bernoulli's principle . [ 4 ] [ 12 ] [ 13 ] [ 14 ]
An airfoil generates lift by exerting a downward force on the air as it flows past. According to Newton's third law , the air must exert an equal and opposite (upward) force on the airfoil, which is lift. [ 15 ] [ 16 ] [ 17 ] [ 18 ]
As the airflow approaches the airfoil it is curving upward, but as it passes the airfoil it changes direction and follows a path that is curved downward. According to Newton's second law, this change in flow direction requires a downward force applied to the air by the airfoil. Then Newton's third law requires the air to exert an upward force on the airfoil; thus a reaction force, lift, is generated opposite to the directional change. In the case of an airplane wing, the wing exerts a downward force on the air and the air exerts an upward force on the wing. [ 19 ] [ 20 ] The downward turning of the flow is not produced solely by the lower surface of the airfoil, and the air flow above the airfoil accounts for much of the downward-turning action. [ 21 ] [ 22 ] [ 23 ] [ 24 ]
This explanation is correct but it is incomplete. It does not explain how the airfoil can impart downward turning to a much deeper swath of the flow than it actually touches. Furthermore, it does not mention that the lift force is exerted by pressure differences , and does not explain how those pressure differences are sustained. [ 4 ]
Some versions of the flow-deflection explanation of lift cite the Coandă effect as the reason the flow is able to follow the convex upper surface of the airfoil. The conventional definition in the aerodynamics field is that the Coandă effect refers to the tendency of a fluid jet to stay attached to an adjacent surface that curves away from the flow, and the resultant entrainment of ambient air into the flow. [ 25 ] [ 26 ] [ 27 ]
More broadly, some consider the effect to include the tendency of any fluid boundary layer to adhere to a curved surface, not just the boundary layer accompanying a fluid jet. It is in this broader sense that the Coandă effect is used by some popular references to explain why airflow remains attached to the top side of an airfoil. [ 28 ] [ 29 ] This is a controversial use of the term "Coandă effect"; the flow following the upper surface simply reflects an absence of boundary-layer separation, thus it is not an example of the Coandă effect. [ 30 ] [ 31 ] [ 32 ] [ 33 ] Regardless of whether this broader definition of the "Coandă effect" is applicable, calling it the "Coandă effect" does not provide an explanation, it just gives the phenomenon a name. [ 34 ]
The ability of a fluid flow to follow a curved path is not dependent on shear forces, viscosity of the fluid, or the presence of a boundary layer. Air flowing around an airfoil, adhering to both upper and lower surfaces, and generating lift, is accepted as a phenomenon in inviscid flow. [ 35 ]
There are two common versions of this explanation, one based on "equal transit time", and one based on "obstruction" of the airflow.
The "equal transit time" explanation starts by arguing that the flow over the upper surface is faster than the flow over the lower surface because the path length over the upper surface is longer and must be traversed in equal transit time. [ 36 ] [ 37 ] [ 38 ] Bernoulli's principle states that under certain conditions increased flow speed is associated with reduced pressure. It is concluded that the reduced pressure over the upper surface results in upward lift. [ 39 ]
While it is true that the flow speeds up, a serious flaw in this explanation is that it does not correctly explain what causes the flow to speed up. [ 4 ] The longer-path-length explanation is incorrect. No difference in path length is needed, and even when there is a difference, it is typically much too small to explain the observed speed difference. [ 40 ] This is because the assumption of equal transit time is wrong when applied to a body generating lift. There is no physical principle that requires equal transit time in all situations and experimental results confirm that for a body generating lift the transit times are not equal. [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] In fact, the air moving past the top of an airfoil generating lift moves much faster than equal transit time predicts. [ 47 ] The much higher flow speed over the upper surface can be clearly seen in this animated flow visualization .
Like the equal transit time explanation, the "obstruction" or "streamtube pinching" explanation argues that the flow over the upper surface is faster than the flow over the lower surface, but gives a different reason for the difference in speed. It argues that the curved upper surface acts as more of an obstacle to the flow, forcing the streamlines to pinch closer together, making the streamtubes narrower. When streamtubes become narrower, conservation of mass requires that flow speed must increase. [ 48 ] Reduced upper-surface pressure and upward lift follow from the higher speed by Bernoulli's principle , just as in the equal transit time explanation. Sometimes an analogy is made to a venturi nozzle , claiming the upper surface of the wing acts like a venturi nozzle to constrict the flow. [ 49 ]
One serious flaw in the obstruction explanation is that it does not explain how streamtube pinching comes about, or why it is greater over the upper surface than the lower surface. For conventional wings that are flat on the bottom and curved on top this makes some intuitive sense, but it does not explain how flat plates, symmetric airfoils, sailboat sails, or conventional airfoils flying upside down can generate lift, and attempts to calculate lift based on the amount of constriction or obstruction do not predict experimental results. [ 50 ] [ 51 ] [ 52 ] [ 53 ] Another flaw is that conservation of mass is not a satisfying physical reason why the flow would speed up. Effectively explaining the acceleration of an object requires identifying the force that accelerates it. [ 54 ]
A serious flaw common to all the Bernoulli-based explanations is that they imply that a speed difference can arise from causes other than a pressure difference, and that the speed difference then leads to a pressure difference, by Bernoulli's principle. This implied one-way causation is a misconception. The real relationship between pressure and flow speed is a mutual interaction . [ 4 ] As explained below under a more comprehensive physical explanation , producing a lift force requires maintaining pressure differences in both the vertical and horizontal directions. The Bernoulli-only explanations do not explain how the pressure differences in the vertical direction are sustained. That is, they leave out the flow-deflection part of the interaction. [ 4 ]
Although the two simple Bernoulli-based explanations above are incorrect, there is nothing incorrect about Bernoulli's principle or the fact that the air goes faster on the top of the wing, and Bernoulli's principle can be used correctly as part of a more complicated explanation of lift. [ 55 ]
Lift is a result of pressure differences and depends on angle of attack, airfoil shape, air density, and airspeed.
Pressure is the normal force per unit area exerted by the air on itself and on surfaces that it touches. The lift force is transmitted through the pressure, which acts perpendicular to the surface of the airfoil. Thus, the net force manifests itself as pressure differences. The direction of the net force implies that the average pressure on the upper surface of the airfoil is lower than the average pressure on the underside. [ 56 ]
These pressure differences arise in conjunction with the curved airflow. When a fluid follows a curved path, there is a pressure gradient perpendicular to the flow direction with higher pressure on the outside of the curve and lower pressure on the inside. [ 57 ] This direct relationship between curved streamlines and pressure differences, sometimes called the streamline curvature theorem , was derived from Newton's second law by Leonhard Euler in 1754:
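In differential form it can be written

{\displaystyle {\frac {dp}{dR}}={\frac {\rho v^{2}}{R}}.}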
The left side of this equation represents the pressure difference perpendicular to the fluid flow. On the right side of the equation, ρ is the density, v is the velocity, and R is the radius of curvature. This formula shows that higher velocities and tighter curvatures create larger pressure differentials and that for straight flow (R → ∞), the pressure difference is zero. [ 58 ]
The angle of attack is the angle between the chord line of an airfoil and the oncoming airflow. A symmetrical airfoil generates zero lift at zero angle of attack. But as the angle of attack increases, the air is deflected through a larger angle and the vertical component of the airstream velocity increases, resulting in more lift. For small angles, a symmetrical airfoil generates a lift force roughly proportional to the angle of attack. [ 59 ] [ 60 ]
As the angle of attack increases, the lift reaches a maximum at some angle; increasing the angle of attack beyond this critical angle of attack causes the upper-surface flow to separate from the wing; there is less deflection downward so the airfoil generates less lift. The airfoil is said to be stalled . [ 61 ]
The maximum lift force that can be generated by an airfoil at a given airspeed depends on the shape of the airfoil, especially the amount of camber (curvature such that the upper surface is more convex than the lower surface, as illustrated at right). Increasing the camber generally increases the maximum lift at a given airspeed. [ 62 ] [ 63 ]
Cambered airfoils generate lift at zero angle of attack. When the chord line is horizontal, the trailing edge has a downward direction and since the air follows the trailing edge it is deflected downward. [ 64 ] When a cambered airfoil is upside down, the angle of attack can be adjusted so that the lift force is upward. This explains how a plane can fly upside down. [ 65 ] [ 66 ]
The ambient flow conditions which affect lift include the fluid density, viscosity and speed of flow. Density is affected by temperature, and by the medium's acoustic velocity – i.e. by compressibility effects.
Lift is proportional to the density of the air and approximately proportional to the square of the flow speed. Lift also depends on the size of the wing, being generally proportional to the wing's area projected in the lift direction. In calculations it is convenient to quantify lift in terms of a lift coefficient based on these factors.
No matter how smooth the surface of an airfoil seems, any surface is rough on the scale of air molecules. Air molecules flying into the surface bounce off the rough surface in random directions relative to their original velocities. The result is that when the air is viewed as a continuous material, it is seen to be unable to slide along the surface, and the air's velocity relative to the airfoil decreases to nearly zero at the surface (i.e., the air molecules "stick" to the surface instead of sliding along it), something known as the no-slip condition . [ 67 ] Because the air at the surface has near-zero velocity but the air away from the surface is moving, there is a thin boundary layer in which air close to the surface is subjected to a shearing motion. [ 68 ] [ 69 ] The air's viscosity resists the shearing, giving rise to a shear stress at the airfoil's surface called skin friction drag . Over most of the surface of most airfoils, the boundary layer is naturally turbulent, which increases skin friction drag. [ 69 ] [ 70 ]
Under usual flight conditions, the boundary layer remains attached to both the upper and lower surfaces all the way to the trailing edge, and its effect on the rest of the flow is modest. Compared to the predictions of inviscid flow theory, in which there is no boundary layer, the attached boundary layer reduces the lift by a modest amount and modifies the pressure distribution somewhat, which results in a viscosity-related pressure drag over and above the skin friction drag. The total of the skin friction drag and the viscosity-related pressure drag is usually called the profile drag . [ 70 ] [ 71 ]
An airfoil's maximum lift at a given airspeed is limited by boundary-layer separation . As the angle of attack is increased, a point is reached where the boundary layer can no longer remain attached to the upper surface. When the boundary layer separates, it leaves a region of recirculating flow above the upper surface, as illustrated in the flow-visualization photo at right. This is known as the stall , or stalling . At angles of attack above the stall, lift is significantly reduced, though it does not drop to zero. The maximum lift that can be achieved before stall, in terms of the lift coefficient, is generally less than 1.5 for single-element airfoils and can be more than 3.0 for airfoils with high-lift slotted flaps and leading-edge devices deployed. [ 72 ]
The flow around bluff bodies – i.e. without a streamlined shape, or stalling airfoils – may also generate lift, in addition to a strong drag force. This lift may be steady, or it may oscillate due to vortex shedding . Interaction of the object's flexibility with the vortex shedding may enhance the effects of fluctuating lift and cause vortex-induced vibrations . [ 73 ] For instance, the flow around a circular cylinder generates a Kármán vortex street : vortices being shed in an alternating fashion from the cylinder's sides. The oscillatory nature of the flow produces a fluctuating lift force on the cylinder, even though the net (mean) force is negligible. The lift force frequency is characterised by the dimensionless Strouhal number , which depends on the Reynolds number of the flow. [ 74 ] [ 75 ]
For a flexible structure, this oscillatory lift force may induce vortex-induced vibrations. Under certain conditions – for instance resonance or strong spanwise correlation of the lift force – the resulting motion of the structure due to the lift fluctuations may be strongly enhanced. Such vibrations may pose problems and threaten collapse in tall man-made structures like industrial chimneys . [ 73 ]
In the Magnus effect , a lift force is generated by a spinning cylinder in a freestream. Here the mechanical rotation acts on the boundary layer, causing it to separate at different locations on the two sides of the cylinder. The asymmetric separation changes the effective shape of the cylinder as far as the flow is concerned such that the cylinder acts like a lifting airfoil with circulation in the outer flow. [ 76 ]
As described above under " Simplified physical explanations of lift on an airfoil ", there are two main popular explanations: one based on downward deflection of the flow (Newton's laws), and one based on pressure differences accompanied by changes in flow speed (Bernoulli's principle). Either of these, by itself, correctly identifies some aspects of the lifting flow but leaves other important aspects of the phenomenon unexplained. A more comprehensive explanation involves both downward deflection and pressure differences (including changes in flow speed associated with the pressure differences), and requires looking at the flow in more detail. [ 77 ]
The airfoil shape and angle of attack work together so that the airfoil exerts a downward force on the air as it flows past. According to Newton's third law, the air must then exert an equal and opposite (upward) force on the airfoil, which is the lift. [ 17 ]
The net force exerted by the air occurs as a pressure difference over the airfoil's surfaces. [ 78 ] Pressure in a fluid is always positive in an absolute sense, [ 79 ] so that pressure must always be thought of as pushing, and never as pulling. The pressure thus pushes inward on the airfoil everywhere on both the upper and lower surfaces. The flowing air reacts to the presence of the wing by reducing the pressure on the wing's upper surface and increasing the pressure on the lower surface. The pressure on the lower surface pushes up harder than the reduced pressure on the upper surface pushes down, and the net result is upward lift. [ 78 ]
The pressure difference which results in lift acts directly on the airfoil surfaces; however, understanding how the pressure difference is produced requires understanding what the flow does over a wider area.
An airfoil affects the speed and direction of the flow over a wide area, producing a pattern called a velocity field . When an airfoil produces lift, the flow ahead of the airfoil is deflected upward, the flow above and below the airfoil is deflected downward leaving the air far behind the airfoil in the same state as the oncoming flow far ahead. The flow above the upper surface is sped up, while the flow below the airfoil is slowed down. Together with the upward deflection of air in front and the downward deflection of the air immediately behind, this establishes a net circulatory component of the flow. The downward deflection and the changes in flow speed are pronounced and extend over a wide area, as can be seen in the flow animation on the right. These differences in the direction and speed of the flow are greatest close to the airfoil and decrease gradually far above and below. All of these features of the velocity field also appear in theoretical models for lifting flows. [ 80 ] [ 81 ]
The pressure is also affected over a wide area, in a pattern of non-uniform pressure called a pressure field . When an airfoil produces lift, there is a diffuse region of low pressure above the airfoil, and usually a diffuse region of high pressure below, as illustrated by the isobars (curves of constant pressure) in the drawing. The pressure difference that acts on the surface is just part of this pressure field. [ 82 ]
The non-uniform pressure exerts forces on the air in the direction from higher pressure to lower pressure. The direction of the force is different at different locations around the airfoil, as indicated by the block arrows in the pressure field around an airfoil figure. Air above the airfoil is pushed toward the center of the low-pressure region, and air below the airfoil is pushed outward from the center of the high-pressure region.
According to Newton's second law , a force causes air to accelerate in the direction of the force. Thus the vertical arrows in the accompanying pressure field diagram indicate that air above and below the airfoil is accelerated, or turned downward, and that the non-uniform pressure is thus the cause of the downward deflection of the flow visible in the flow animation. To produce this downward turning, the airfoil must have a positive angle of attack or have sufficient positive camber. Note that the downward turning of the flow over the upper surface is the result of the air being pushed downward by higher pressure above it than below it. Some explanations that refer to the "Coandă effect" suggest that viscosity plays a key role in the downward turning, but this is false. (see above under " Controversy regarding the Coandă effect ").
The arrows ahead of the airfoil indicate that the flow ahead of the airfoil is deflected upward, and the arrows behind the airfoil indicate that the flow behind is deflected upward again, after being deflected downward over the airfoil. These deflections are also visible in the flow animation.
The arrows ahead of the airfoil and behind also indicate that air passing through the low-pressure region above the airfoil is sped up as it enters, and slowed back down as it leaves. Air passing through the high-pressure region below the airfoil is slowed down as it enters and then sped back up as it leaves. Thus the non-uniform pressure is also the cause of the changes in flow speed visible in the flow animation. The changes in flow speed are consistent with Bernoulli's principle , which states that in a steady flow without viscosity, lower pressure means higher speed, and higher pressure means lower speed.
Thus changes in flow direction and speed are directly caused by the non-uniform pressure. But this cause-and-effect relationship is not just one-way; it works in both directions simultaneously. The air's motion is affected by the pressure differences, but the existence of the pressure differences depends on the air's motion. The relationship is thus a mutual, or reciprocal, interaction: Air flow changes speed or direction in response to pressure differences, and the pressure differences are sustained by the air's resistance to changing speed or direction. [ 83 ] A pressure difference can exist only if something is there for it to push against. In aerodynamic flow, the pressure difference pushes against the air's inertia, as the air is accelerated by the pressure difference. [ 84 ] This is why the air's mass is part of the calculation, and why lift depends on air density.
Sustaining the pressure difference that exerts the lift force on the airfoil surfaces requires sustaining a pattern of non-uniform pressure in a wide area around the airfoil. This requires maintaining pressure differences in both the vertical and horizontal directions, and thus requires both downward turning of the flow and changes in flow speed according to Bernoulli's principle. The pressure differences and the changes in flow direction and speed sustain each other in a mutual interaction. The pressure differences follow naturally from Newton's second law and from the fact that flow along the surface follows the predominantly downward-sloping contours of the airfoil. And the fact that the air has mass is crucial to the interaction. [ 85 ]
Producing a lift force requires both downward turning of the flow and changes in flow speed consistent with Bernoulli's principle. Each of the simplified explanations given above in Simplified physical explanations of lift on an airfoil falls short by trying to explain lift in terms of only one or the other, thus explaining only part of the phenomenon and leaving other parts unexplained. [ 86 ]
When the pressure distribution on the airfoil surface is known, determining the total lift requires adding up the contributions to the pressure force from local elements of the surface, each with its own local value of pressure. The total lift is thus the integral of the pressure, in the direction perpendicular to the farfield flow, over the airfoil surface. [ 87 ]
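Taking the surface normal to point into the airfoil (the direction in which the pressure pushes), one common way to write this integral is

{\displaystyle L=\oint _{S}p\,\mathbf {n} \cdot \mathbf {k} \;dA,}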
where: S is the airfoil surface, p is the local value of the pressure, dA is an element of surface area, n is the unit vector normal to the surface (taken here to point into the airfoil), and k is the unit vector perpendicular to the freestream direction.
The above lift equation neglects the skin friction forces, which are small compared to the pressure forces.
By using the streamwise vector i parallel to the freestream in place of k in the integral, we obtain an expression for the pressure drag D p (which includes the pressure portion of the profile drag and, if the wing is three-dimensional, the induced drag). If we use the spanwise vector j , we obtain the side force Y .
The validity of this integration generally requires the airfoil shape to be a closed curve that is piecewise smooth .
Lift depends on the size of the wing, being approximately proportional to the wing area. It is often convenient to quantify the lift of a given airfoil by its lift coefficient C L {\displaystyle C_{L}} , which defines its overall lift in terms of a unit area of the wing.
If the value of C L {\displaystyle C_{L}} for a wing at a specified angle of attack is given, then the lift produced for specific flow conditions can be determined: [ 88 ]
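In its standard form this relation is

{\displaystyle L={\tfrac {1}{2}}\rho v^{2}SC_{L},}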
where L is the lift force, ρ is the air density, v is the true airspeed, S is the planform (projected) area of the wing, and C L is the lift coefficient at the given angle of attack, Mach number, and Reynolds number.
Mathematical theories of lift are based on continuum fluid mechanics, assuming that air flows as a continuous fluid. [ 90 ] [ 91 ] [ 92 ] Lift is generated in accordance with the fundamental principles of physics, the most relevant being the following three principles: [ 93 ]
Because an airfoil affects the flow in a wide area around it, the conservation laws of mechanics are embodied in the form of partial differential equations combined with a set of boundary condition requirements which the flow has to satisfy at the airfoil surface and far away from the airfoil. [ 94 ]
To predict lift requires solving the equations for a particular airfoil shape and flow condition, which generally requires calculations that are so voluminous that they are practical only on a computer, through the methods of computational fluid dynamics (CFD). Determining the net aerodynamic force from a CFD solution requires "adding up" ( integrating ) the forces due to pressure and shear determined by the CFD over every surface element of the airfoil as described under " pressure integration ".
The Navier–Stokes equations (NS) provide the potentially most accurate theory of lift, but in practice, capturing the effects of turbulence in the boundary layer on the airfoil surface requires sacrificing some accuracy, and requires use of the Reynolds-averaged Navier–Stokes equations (RANS). Simpler but less accurate theories have also been developed.
These equations represent conservation of mass, Newton's second law (conservation of momentum), conservation of energy, the Newtonian law for the action of viscosity , the Fourier heat conduction law , an equation of state relating density, temperature, and pressure, and formulas for the viscosity and thermal conductivity of the fluid. [ 95 ] [ 96 ]
In principle, the NS equations, combined with boundary conditions of no through-flow and no slip at the airfoil surface, could be used to predict lift with high accuracy in any situation in ordinary atmospheric flight. However, airflows in practical situations always involve turbulence in the boundary layer next to the airfoil surface, at least over the aft portion of the airfoil. Predicting lift by solving the NS equations in their raw form would require the calculations to resolve the details of the turbulence, down to the smallest eddy. This is not yet possible, even on the most powerful computer. [ 97 ] So in principle the NS equations provide a complete and very accurate theory of lift, but practical prediction of lift requires that the effects of turbulence be modeled in the RANS equations rather than computed directly.
These are the NS equations with the turbulence motions averaged over time, and the effects of the turbulence on the time-averaged flow represented by turbulence modeling (an additional set of equations based on a combination of dimensional analysis and empirical information on how turbulence affects a boundary layer in a time-averaged sense). [ 98 ] [ 99 ] A RANS solution consists of the time-averaged velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil.
The amount of computation required is a minuscule fraction (billionths) [ 97 ] of what would be required to resolve all of the turbulence motions in a raw NS calculation, and with large computers available it is now practical to carry out RANS calculations for complete airplanes in three dimensions. Because turbulence models are not perfect, the accuracy of RANS calculations is imperfect, but it is adequate for practical aircraft design. Lift predicted by RANS is usually within a few percent of the actual lift.
The Euler equations are the NS equations without the viscosity, heat conduction, and turbulence effects. [ 100 ] As with a RANS solution, an Euler solution consists of the velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil. While the Euler equations are simpler than the NS equations, they do not lend themselves to exact analytic solutions.
Further simplification is available through potential flow theory, which reduces the number of unknowns to be determined, and makes analytic solutions possible in some cases, as described below.
Either Euler or potential-flow calculations predict the pressure distribution on the airfoil surfaces roughly correctly for angles of attack below stall, where they might miss the total lift by as much as 10–20%. At angles of attack above stall, inviscid calculations do not predict that stall has happened, and as a result they grossly overestimate the lift.
In potential-flow theory, the flow is assumed to be irrotational , i.e. that small fluid parcels have no net rate of rotation. Mathematically, this is expressed by the statement that the curl of the velocity vector field is everywhere equal to zero. Irrotational flows have the convenient property that the velocity can be expressed as the gradient of a scalar function called a potential . A flow represented in this way is called potential flow. [ 101 ] [ 102 ] [ 103 ] [ 104 ]
In potential-flow theory, the flow is assumed to be incompressible. Incompressible potential-flow theory has the advantage that the equation ( Laplace's equation ) to be solved for the potential is linear , which allows solutions to be constructed by superposition of other known solutions. The incompressible-potential-flow equation can also be solved by conformal mapping , a method based on the theory of functions of a complex variable. In the early 20th century, before computers were available, conformal mapping was used to generate solutions to the incompressible potential-flow equation for a class of idealized airfoil shapes, providing some of the first practical theoretical predictions of the pressure distribution on a lifting airfoil.
A solution of the potential equation directly determines only the velocity field. The pressure field is deduced from the velocity field through Bernoulli's equation.
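As a minimal, hedged illustration of the superposition and Bernoulli steps described above, the sketch below builds the classical lifting flow past a circular cylinder (uniform stream plus doublet plus vortex), computes the surface pressure from Bernoulli's equation, and integrates the vertical pressure force. The freestream speed, radius, and circulation are arbitrary illustrative values; the integrated lift should reproduce ρV∞Γ, consistent with the Kutta–Joukowski theorem discussed below.

```python
import numpy as np

rho, V, R, Gamma = 1.225, 10.0, 1.0, 5.0     # illustrative: air density, freestream speed, radius, circulation

theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
u_t = -2.0 * V * np.sin(theta) - Gamma / (2.0 * np.pi * R)   # surface tangential velocity of the superposed flow
p_gauge = 0.5 * rho * (V**2 - u_t**2)                        # surface pressure (gauge) from Bernoulli's equation

# Lift per unit span = minus the integral of p * n_y around the surface (outward normal n_y = sin(theta)).
ds = R * (theta[1] - theta[0])
lift_from_pressure = -np.sum(p_gauge * np.sin(theta)) * ds

print(lift_from_pressure, rho * V * Gamma)   # the two values agree to numerical precision
```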
Applying potential-flow theory to a lifting flow requires special treatment and an additional assumption. The problem arises because lift on an airfoil in inviscid flow requires circulation in the flow around the airfoil (See " Circulation and the Kutta–Joukowski theorem " below), but a single potential function that is continuous throughout the domain around the airfoil cannot represent a flow with nonzero circulation. The solution to this problem is to introduce a branch cut , a curve or line from some point on the airfoil surface out to infinite distance, and to allow a jump in the value of the potential across the cut. The jump in the potential imposes circulation in the flow equal to the potential jump and thus allows nonzero circulation to be represented. However, the potential jump is a free parameter that is not determined by the potential equation or the other boundary conditions, and the solution is thus indeterminate. A potential-flow solution exists for any value of the circulation and any value of the lift. One way to resolve this indeterminacy is to impose the Kutta condition , [ 105 ] [ 106 ] which is that, of all the possible solutions, the physically reasonable solution is the one in which the flow leaves the trailing edge smoothly. The streamline sketches illustrate one flow pattern with zero lift, in which the flow goes around the trailing edge and leaves the upper surface ahead of the trailing edge, and another flow pattern with positive lift, in which the flow leaves smoothly at the trailing edge in accordance with the Kutta condition.
This is potential-flow theory with the further assumptions that the airfoil is very thin and the angle of attack is small. [ 107 ] The linearized theory predicts the general character of the airfoil pressure distribution and how it is influenced by airfoil shape and angle of attack, but is not accurate enough for design work. For a 2D airfoil, such calculations can be done in a fraction of a second in a spreadsheet on a PC.
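A minimal sketch of what the linearized (thin-airfoil) result looks like in practice, assuming the simplest case of a flat plate with no camber, for which the theory gives a sectional lift coefficient of 2π times the angle of attack in radians. The air density, speed, and chord below are illustrative values, not from the source.

```python
import math

def flat_plate_cl(alpha_deg):
    """Thin-airfoil theory for a flat plate: cl = 2*pi*alpha, with alpha in radians."""
    return 2.0 * math.pi * math.radians(alpha_deg)

def lift_per_unit_span(alpha_deg, rho=1.225, v=50.0, chord=1.0):
    """Sectional lift L' = 0.5 * rho * V^2 * c * cl."""
    return 0.5 * rho * v**2 * chord * flat_plate_cl(alpha_deg)

for alpha in (0, 2, 4, 6):
    print(alpha, round(flat_plate_cl(alpha), 3), round(lift_per_unit_span(alpha), 1))
```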
When an airfoil generates lift, several components of the overall velocity field contribute to a net circulation of air around it: the upward flow ahead of the airfoil, the accelerated flow above, the decelerated flow below, and the downward flow behind.
The circulation can be understood as the total amount of "spinning" (or vorticity ) of an inviscid fluid around the airfoil.
The Kutta–Joukowski theorem relates the lift per unit width of span of a two-dimensional airfoil to this circulation component of the flow. [ 80 ] [ 108 ] [ 109 ] It is a key element in an explanation of lift that follows the development of the flow around an airfoil as the airfoil starts its motion from rest and a starting vortex is formed and left behind, leading to the formation of circulation around the airfoil. [ 110 ] [ 111 ] [ 112 ] Lift is then inferred from the Kutta-Joukowski theorem. This explanation is largely mathematical, and its general progression is based on logical inference, not physical cause-and-effect. [ 113 ]
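In symbols, for a two-dimensional airfoil in a steady freestream of density ρ and speed V∞, with circulation Γ taken around a contour enclosing the airfoil, the theorem states

$$ L' = \rho \, V_{\infty} \, \Gamma $$

so that, purely as an illustrative calculation, a circulation of 30 m²/s in sea-level air moving at 50 m/s corresponds to roughly 1.8 kN of lift per metre of span.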
The Kutta–Joukowski model does not predict how much circulation or lift a two-dimensional airfoil produces. Calculating the lift per unit span using Kutta–Joukowski requires a known value for the circulation. In particular, if the Kutta condition is met, in which the rear stagnation point moves to the airfoil trailing edge and attaches there for the duration of flight, the lift can be calculated theoretically through the conformal mapping method.
The lift generated by a conventional airfoil is dictated by both its design and the flight conditions, such as forward velocity, angle of attack and air density. Lift can be increased by artificially increasing the circulation, for example by boundary-layer blowing or the use of blown flaps . In the Flettner rotor the entire airfoil is circular and spins about a spanwise axis to create the circulation.
The flow around a three-dimensional wing involves significant additional issues, especially relating to the wing tips. For a wing of low aspect ratio , such as a typical delta wing , two-dimensional theories may provide a poor model and three-dimensional flow effects can dominate. [ 114 ] Even for wings of high aspect ratio, the three-dimensional effects associated with finite span can affect the whole span, not just close to the tips.
The vertical pressure gradient at the wing tips causes air to flow sideways, out from under the wing then up and back over the upper surface. This reduces the pressure gradient at the wing tip, therefore also reducing lift. The lift tends to decrease in the spanwise direction from root to tip, and the pressure distributions around the airfoil sections change accordingly in the spanwise direction. Pressure distributions in planes perpendicular to the flight direction tend to look like the illustration at right. [ 115 ] This spanwise-varying pressure distribution is sustained by a mutual interaction with the velocity field. Flow below the wing is accelerated outboard, flow outboard of the tips is accelerated upward, and flow above the wing is accelerated inboard, which results in the flow pattern illustrated at right. [ 116 ]
There is more downward turning of the flow than there would be in a two-dimensional flow with the same airfoil shape and sectional lift, and a higher sectional angle of attack is required to achieve the same lift compared to a two-dimensional flow. [ 117 ] The wing is effectively flying in a downdraft of its own making, as if the freestream flow were tilted downward, with the result that the total aerodynamic force vector is tilted backward slightly compared to what it would be in two dimensions. The additional backward component of the force vector is called lift-induced drag .
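The penalty described above is commonly quantified, for wings with nearly elliptical spanwise lift distributions, by the classical induced-drag estimate C_Di = C_L² / (π e AR). The sketch below is a generic illustration; the lift coefficient, aspect ratio, and span-efficiency factor e are assumed values, not from the source.

```python
import math

def induced_drag_coefficient(cl, aspect_ratio, oswald_e=0.9):
    """Classical finite-wing estimate: C_Di = C_L**2 / (pi * e * AR)."""
    return cl**2 / (math.pi * oswald_e * aspect_ratio)

print(round(induced_drag_coefficient(cl=0.5, aspect_ratio=8.0), 4))   # about 0.011
```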
The difference in the spanwise component of velocity above and below the wing (between being in the inboard direction above and in the outboard direction below) persists at the trailing edge and into the wake downstream. After the flow leaves the trailing edge, this difference in velocity takes place across a relatively thin shear layer called a vortex sheet.
The wingtip flow leaving the wing creates a tip vortex. As the main vortex sheet passes downstream from the trailing edge, it rolls up at its outer edges, merging with the tip vortices. The combination of the wingtip vortices and the vortex sheets feeding them is called the vortex wake.
In addition to the vorticity in the trailing vortex wake there is vorticity in the wing's boundary layer, called 'bound vorticity', which connects the trailing sheets from the two sides of the wing into a vortex system in the general form of a horseshoe. The horseshoe form of the vortex system was recognized by the British aeronautical pioneer Lanchester in 1907. [ 118 ]
Given the distribution of bound vorticity and the vorticity in the wake, the Biot–Savart law (a vector-calculus relation) can be used to calculate the velocity perturbation anywhere in the field, caused by the lift on the wing. Approximate theories for the lift distribution and lift-induced drag of three-dimensional wings are based on such analysis applied to the wing's horseshoe vortex system. [ 119 ] [ 120 ] In these theories, the bound vorticity is usually idealized and assumed to reside at the camber surface inside the wing.
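A minimal sketch of the Biot–Savart building block used in such horseshoe-vortex (lifting-line and vortex-lattice) analyses: the velocity induced at a point by a straight vortex filament of finite length. The function name and the example geometry are illustrative assumptions.

```python
import numpy as np

def segment_induced_velocity(p, a, b, gamma):
    """Velocity induced at point p by a straight vortex filament running from a to b
    (Biot-Savart law for a finite segment, the standard building block of vortex-lattice methods)."""
    r0, r1, r2 = b - a, p - a, p - b
    cross = np.cross(r1, r2)
    cross_sq = float(np.dot(cross, cross))
    if cross_sq < 1e-12:                      # point lies on the filament axis: velocity is singular/undefined
        return np.zeros(3)
    k = gamma / (4.0 * np.pi * cross_sq) * float(np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2)))
    return k * cross

# Illustrative use: velocity one span-length below the midpoint of a unit-strength bound vortex.
a = np.array([0.0, -0.5, 0.0])
b = np.array([0.0,  0.5, 0.0])
p = np.array([0.0,  0.0, -1.0])
print(segment_induced_velocity(p, a, b, 1.0))
```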
Because the velocity is deduced from the vorticity in such theories, some authors describe the situation to imply that the vorticity is the cause of the velocity perturbations, using terms such as "the velocity induced by the vortex", for example. [ 121 ] But attributing mechanical cause-and-effect between the vorticity and the velocity in this way is not consistent with the physics. [ 122 ] [ 123 ] [ 124 ] The velocity perturbations in the flow around a wing are in fact produced by the pressure field. [ 125 ]
The flow around a lifting airfoil must satisfy Newton's second law regarding conservation of momentum, both locally at every point in the flow field, and in an integrated sense over any extended region of the flow. For an extended region, Newton's second law takes the form of the momentum theorem for a control volume , where a control volume can be any region of the flow chosen for analysis. The momentum theorem states that the integrated force exerted at the boundaries of the control volume (a surface integral ), is equal to the integrated time rate of change ( material derivative ) of the momentum of fluid parcels passing through the interior of the control volume. For a steady flow, this can be expressed in the form of the net surface integral of the flux of momentum through the boundary. [ 126 ]
The lifting flow around a 2D airfoil is usually analyzed in a control volume that completely surrounds the airfoil, so that the inner boundary of the control volume is the airfoil surface, where the downward force per unit span − L ′ {\displaystyle -L'} is exerted on the fluid by the airfoil. The outer boundary is usually either a large circle or a large rectangle. At this outer boundary distant from the airfoil, the velocity and pressure are well represented by the velocity and pressure associated with a uniform flow plus a vortex, and viscous stress is negligible, so that the only force that must be integrated over the outer boundary is the pressure. [ 127 ] [ 128 ] [ 129 ] The free-stream velocity is usually assumed to be horizontal, with lift vertically upward, so that the vertical momentum is the component of interest.
For the free-air case (no ground plane), the force − L ′ {\displaystyle -L'} exerted by the airfoil on the fluid is manifested partly as momentum fluxes and partly as pressure differences at the outer boundary, in proportions that depend on the shape of the outer boundary, as shown in the diagram at right. For a flat horizontal rectangle that is much longer than it is tall, the fluxes of vertical momentum through the front and back are negligible, and the lift is accounted for entirely by the integrated pressure differences on the top and bottom. [ 127 ] For a square or circle, the momentum fluxes and pressure differences account for half the lift each. [ 127 ] [ 128 ] [ 129 ] For a vertical rectangle that is much taller than it is wide, the unbalanced pressure forces on the top and bottom are negligible, and lift is accounted for entirely by momentum fluxes, with a flux of upward momentum that enters the control volume through the front accounting for half the lift, and a flux of downward momentum that exits the control volume through the back accounting for the other half. [ 127 ]
The results of all of the control-volume analyses described above are consistent with the Kutta–Joukowski theorem described above. Both the tall rectangle and circle control volumes have been used in derivations of the theorem. [ 128 ] [ 129 ]
An airfoil produces a pressure field in the surrounding air, as explained under " The wider flow around the airfoil " above. The pressure differences associated with this field die off gradually, becoming very small at large distances, but never disappearing altogether. Below the airplane, the pressure field persists as a positive pressure disturbance that reaches the ground, forming a pattern of slightly-higher-than-ambient pressure on the ground, as shown on the right. [ 130 ] Although the pressure differences are very small far below the airplane, they are spread over a wide area and add up to a substantial force. For steady, level flight, the integrated force due to the pressure differences is equal to the total aerodynamic lift of the airplane and to the airplane's weight. According to Newton's third law, this pressure force exerted on the ground by the air is matched by an equal-and-opposite upward force exerted on the air by the ground, which offsets all of the downward force exerted on the air by the airplane. The net force due to the lift, acting on the atmosphere as a whole, is therefore zero, and thus there is no integrated accumulation of vertical momentum in the atmosphere, as was noted by Lanchester early in the development of modern aerodynamics. [ 131 ] | https://en.wikipedia.org/wiki/Lift_(force) |
Lift Powder , or Lift Charge , is a slang term for gunpowder . The term "Lift Powder" is mostly used in the fireworks industry. [ 1 ] [ 2 ]
This pyrotechnics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lift_powder |
Lift slab construction (also called the Youtz-Slick Method) is a method of constructing concrete buildings by casting the floor or roof slab on top of the previous slab and then raising (jacking) the slab up with hydraulic jacks . This method of construction allows for a large portion of the work to be completed at ground level, negating the need to form floor work in place. The ability to create monolithic concrete slabs makes the lift slab construction technique useful in quickly creating structures with repetitive form work, like parking ramps.
Development of this method of construction began simultaneously in 1948 with Philip N. Youtz of New York and Thomas B. Slick of Texas. Although the first patent for lift slab construction was granted to Slick in 1955, the method of construction is commonly referred to as the "Youtz-Slick Method". [ 1 ] Slick's patent called for a method that would allow fabrication to be completed at ground level, eliminate a large portion of the formwork , create uniform floors of concrete, and reduce the labor to be completed at an elevated level.
The method was first used at Trinity University in San Antonio , Texas during the construction of Northrup Hall in 1952. [ 2 ] Northrup Hall was the first full-scale building erected using lift slab construction. As the first of its kind, the process drew a crowd of spectators waiting to see whether the structural integrity of the building would hold. [ 3 ]
Johnstone Hall , a Clemson University dormitory in Clemson , South Carolina , was erected using this method in 1954, as was Woodrow Wilson High School in the same year. [ 4 ] Several of the blocks from Johnstone Hall have now been demolished.
The building located at 2150 Shattuck Avenue in Berkeley, CA (or First Savings Building) is one example of lift slab construction utilized in the Bay Area in the mid-twentieth century. Built in 1969, the First Savings Building utilizes lift slab construction to support the fourteen story height of the building. [ 5 ] The building's structural system consists of a system of trusses from which the various concrete slab floors are hung. In turn, these trusses extend out from two reinforced concrete cores which provide the main structural support for the entirety of the building. [ 6 ]
Lift slab construction was also involved in the L'Ambiance Plaza collapse in Bridgeport , Connecticut, in 1987. The collapse resulted in a nationwide federal investigation into this construction technique in the United States, and Connecticut imposed a temporary moratorium on lift slab construction. [ 7 ] The failure of the structure has been primarily attributed to instability of the steel columns that were meant to support the floors. Although other factors were involved in the collapse while under construction, it is the insufficient lateral bracing that ultimately caused the structural failure. [ 8 ]
Northminster Car Park in Peterborough, England , built using the lift slab technique, was found to be unsafe and beyond economic repair in 2019. [ 9 ] [ 10 ]
To begin, a concrete slab is first poured on the ground level. Lifting collars are set around each of the columns and cast into place as the slab is poured around them. The lifting collars will later be used to support the slab as it is raised and secured in place. Subsequent floors and the roof are then poured and formed on top of the initial ground slab. Bond breakers are used between each floor plate to allow the slabs to separate as they are raised. [ 11 ] Along with reducing the formwork required to create the slabs, slabs can be easily protected from inclement weather since all of the slabs remain together during the curing process. [ 12 ]
Once the slabs have been raised to their desired height the lifting collars are welded to the columns, along with shear blocks to support the slab from beneath. To assure the security of a structure during the raising of the slabs, the hydraulic jacks , attached to the top of the columns, use synchronized consoles to lift the slabs at an even rate. Conventional methods of mounting the jacks to the columns require that the jacks are removed before continuing to raise the slabs. More recent approaches utilize welded plates, separated from the columns, to support the jack. [ 12 ]
In Latin America, contractors have started to use a form of lift slab construction where load-bearing concrete walls are raised at the same time as the floor slabs. Both the wall panels and the floor slabs are cast on the ground. The walls are attached to the slabs through hinges formed by plastic ropes. As the floors are raised, the walls unfold into place and form the vertical support for the system. [ 12 ] | https://en.wikipedia.org/wiki/Lift_slab_construction |
The Lifting Operations and Lifting Equipment Regulations 1998 (LOLER) are a set of regulations created under the Health and Safety at Work etc. Act 1974 which came into force in Great Britain on 5 December 1998 [ 1 ] and replaced a number of other pieces of legislation which previously covered the use of lifting equipment . [ note 1 ] The purpose of the regulations was to reduce the risk of injury from lifting equipment used at work. [ 2 ] Areas covered in the regulations include the requirement for lifting equipment to be strong and stable enough for safe use and to be marked to indicate safe working loads; ensuring that any equipment is positioned and installed so as to minimise risks; that the equipment is used safely, with work planned, organised and performed by a competent person; and that equipment is subject to ongoing thorough examination and, where appropriate, inspection by competent people. [ 2 ]
The regulations define lifting equipment as "work equipment for lifting or lowering loads and includes its attachments used for anchoring, fixing or supporting it". [ 3 ] The regulations involve anything which involves the lifting of goods or people at work. Equipment covered would include lifts, cranes, ropes, slings, hooks, shackles, eyebolts, rope and pulley systems and forklift trucks. [ 4 ] The regulations apply to all workplaces and all the provisions of the ' Provision and Use of Work Equipment Regulations 1998 ' also apply to lifting equipment. [ 4 ]
A safe working load (SWL) should, according to the regulations, be marked onto lifting equipment, with the relevant SWL being dependent on the configuration of the equipment; accessories for lifting such as eye bolts, lifting magnets and lifting beams should also be marked. [ 5 ] The load itself would be based on the maximum load that the equipment can lift safely. Lifting equipment that is designed for lifting people must also be appropriately and clearly marked. [ 5 ]
The regulations state that all lifts provided for use with work activities should be thoroughly examined by a 'competent person' at regular intervals. [ 6 ] Regulation 9 of the Lifting Operations and Lifting Equipment Regulations requires all employers to have their equipment thoroughly examined prior to it being put into service and after there has been any major alteration that could affect its operation. [ 7 ] Owners or people responsible for the safe operation of a lift at work are known as 'dutyholders' and have a responsibility to ensure that the lift has been thoroughly examined and is safe to use. Lifts in use should be thoroughly examined every six months if, at any time, the lift has been used to carry people. Lifts used only to carry loads should be examined every 12 months. [ 6 ] If any substantial or significant changes have been made to the equipment, this would also require an examination, as would any change in operating conditions which is likely to affect the integrity of the equipment. [ 6 ]
These are a legal requirement and should be carried out by a competent person. Though a "competent person" is not defined within the legislation, guidance is given in the HSE LOLER Approved Code of Practice and guidance [ 8 ] which gives further details that the person should have the "appropriate practical and theoretical knowledge and experience of the lifting equipment" which would allow them to identify safety issues.
In practice, an insurance company may provide a competent person or request a third party independent inspector.
These inspections should be carried out at 6 monthly intervals for all lifting items and at least every 12 months for those that could be covered by PUWER , although a competent person may determine different time scales.
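A minimal sketch of the interval rule described above, assuming a simple calendar calculation; the function name and the rough 30-day month are illustrative only and do not form part of the regulations, which allow a competent person or a written scheme to set different intervals.

```python
from datetime import date, timedelta

def next_thorough_examination(last_exam: date, used_to_carry_people: bool) -> date:
    """6-month interval if the lift has been used to carry people, otherwise 12 months."""
    months = 6 if used_to_carry_people else 12
    return last_exam + timedelta(days=30 * months)   # crude month arithmetic, for illustration only

print(next_thorough_examination(date(2024, 1, 15), used_to_carry_people=True))
```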
Standards state minimum LOLER examination frequencies (in months). [ 9 ]
LOLER 1998 put in place four key protocols that all employers and workers must abide by.
All equipment must be safe and suitable for purpose . The manufacturer must identify any hazards associated with the equipment in question; they must then assess these hazards to bring them down to acceptable levels. All lifting equipment is normally put through an independent type-testing process to establish that it will safely perform the tasks required, to one of the below standards.
The above standards are published specifications that establish a common language and contain technical specifications or other precise criteria. They are designed to be used consistently as a rule, guideline or definition.
All personnel must be suitably trained . All manufacturers of lifting equipment are obliged to send out instructions for use of all products. The employer is then obliged to make sure employees are aware of these instructions and use the lifting equipment correctly. To achieve this the employees must be competent. Competence is achieved through experience, technical knowledge and training.
All equipment must be maintained in a safe condition. It is good practice for all personnel using lifting equipment to conduct a pre-use inspection on all items. Regulation 9 of LOLER also outlines specific requirements for the formal inspection of lifting equipment at mandatory intervals. These inspections are to be performed by a competent person and the findings of the inspections recorded. Maximum fixed periods for thorough examination and inspection of lifting equipment, as stated in regulation 9 of LOLER, are six months for lifting equipment used for lifting persons and for lifting accessories, and twelve months for all other lifting equipment, or in accordance with a written scheme of examination. Any inspection record must be made in line with the requirements of schedule 1 of LOLER.
The only exception to this is if the lifting equipment has not been used before and, in the case of lifting equipment issued with an EC declaration of conformity, the employer has possession of such declaration and it was made not more than 12 months before the lifting equipment was put into service.
Operators of lifting equipment are legally required to ensure that reports of thorough examinations are kept available for consideration by health and safety inspectors for at least two years or until the next report, whichever is longer.
Records must be kept for all equipment. All equipment manufactured should be given a “birth certificate”. This should prove that, when first made, it complied with the relevant requirements. In Europe today, this document would normally be an EC Declaration of Conformity plus a manufacturer's certificate if called for by the standard worked to.
They may be kept electronically as long as you can provide a written report if requested. [ 10 ]
To gain a full understanding of health and safety requirements in the motor vehicle repair industry, read document HSG261. [ 11 ]
On 17 January 2011, a Liverpool nursing home was fined £18,000 after Frances Shannon, an 81-year-old woman, fell to the ground whilst being lifted out of bed.
The Christopher Grange nursing home, run by the Catholic Blind Institute, was prosecuted by the Health and Safety Executive (HSE) for failing to carry out regular checks of the sling equipment which was used to lift Mrs Shannon, who suffered a broken shoulder and injuries to her back and elbow.
Taken to the Royal Liverpool University Hospital, Mrs Shannon died the day following the incident. Speaking of the prosecution, Sarah Wadham, the HSE's inspecting officer, said that the incident could have been prevented, telling the press "There should have been regular checks of the sling and it should have been thoroughly examined at least once every six months. Sadly this did not happen." [ 12 ]
The Catholic Blind Institute was charged under section 9 of the regulations and ordered to also pay £13,876 costs. | https://en.wikipedia.org/wiki/Lifting_Operations_and_Lifting_Equipment_Regulations_1998 |
Lifting bosses or handling bosses are protrusions intentionally left on stones by masons to facilitate maneuvering the blocks with ropes and levers. [ 1 ] [ 2 ]
They are an important feature of ancient and classical construction, and were often not cut away, despite having fulfilled their purpose. Sometimes this was the result of a cost-saving measure or a construction halt. Other times bosses were left as a stylistic element, and even if dressed back, a remnant of them was kept to make their existence obvious. [ 3 ] | https://en.wikipedia.org/wiki/Lifting_boss |
Lifting equipment , also known as lifting gear , is a general term for any equipment that can be used to lift and lower loads. [ 1 ] Types of lifting equipment include heavy machinery such as the patient lift , overhead cranes , forklifts , jacks , building cradles, and passenger lifts, and can also include smaller accessories such as chains , hooks , and rope . [ 1 ] Generally, this equipment is used to move material that cannot be moved with manual labor; such tools are used in most work environments, such as warehouses, and are a requirement for most construction projects, such as bridges and buildings. This equipment can also be used to move larger numbers of packages and goods, requiring fewer people to move material. Lifting equipment includes any form of equipment that is used for vertical lifting; equipment used to move material horizontally is not considered lifting equipment, nor is equipment designed only to support loads. [ 2 ] As lifting equipment can be dangerous to use, it is a common subject of safety regulations in most countries, and heavy machinery usually requires certified workers to limit workplace injury. [ 3 ] [ 4 ]
Failure or misuse of heavy machinery can lead to severe or fatal injury, making its regulation one of the most debated areas of labor law across the world. Each country sets its own regulations and enforces different aspects of workplace safety when lifting equipment is used.
The Occupational Safety and Health Administration sets regulations for all equipment. [ 3 ] Contractors must uphold strict rules to ensure the safety of workers: all machinery must be developed by a certified engineer, contractors must follow manufacturer procedures, all users must be professionally trained before operating equipment, and equipment must be inspected regularly.
The Health and Safety Executive sets regulations on equipment in the United Kingdom, under the Lifting Operations and Lifting Equipment Regulations . [ 1 ] These regulations require equipment be registered on a Statutory Inspection Report Form, is adequate for the task, be subject to routine inspection, and the use of the equipment be properly planned out.
Lifting equipment can be assigned a Working Load Limit (WLL) in the interests of avoiding failure; Working Load Limit is calculated by dividing the Minimum Breaking Load of the equipment by a safety factor . [ 5 ] WLL as a concept is not restricted to lifting, being also relevant for mooring ropes. [ 6 ] Minimum Breaking Load is also known under the terms of Minimum Breaking Strength or Minimum Breaking Force. [ 6 ] WLL of ropes are usually much smaller than their Minimum Breaking Load. [ 6 ] WLL is sometimes known as Safe Working Load, but this alternative term is sometimes avoided due to giving the connotation of safety, which may not be guaranteed. [ 7 ]
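A minimal sketch of the Working Load Limit arithmetic described above; the breaking load and safety factor used in the example are illustrative values, not taken from any standard.

```python
def working_load_limit(minimum_breaking_load, safety_factor):
    """WLL = Minimum Breaking Load / safety factor."""
    if safety_factor <= 0:
        raise ValueError("safety factor must be positive")
    return minimum_breaking_load / safety_factor

# Illustrative: a sling with a 10,000 kg minimum breaking load and a safety factor of 5.
print(working_load_limit(10_000, 5))   # 2000.0 kg
```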
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lifting_equipment |
Ligado Networks , formerly known as LightSquared, is an American satellite communications company.
After restructuring, emerging from bankruptcy and modifying its network plan, the new company, Ligado Networks, launched in 2016. It operates the SkyTerra 1 satellite. [ 3 ]
Ligado Networks is based in Reston, Virginia . [ 4 ] The company is governed by a seven-member board of directors [ 5 ] with Ivan Seidenberg as Chairman and Doug Smith as president and CEO . [ 6 ] [ better source needed ] Fortress Investment Group, LLC , Centerbridge Partners LP and JPMorgan Chase & Co. own controlling stakes in Ligado Networks; Harbinger Capital Partners maintains a minority stake. [ 7 ] [ 8 ]
Ligado Networks has 40 MHz of spectrum licenses in the nationwide block of 1500 MHz to 1700 MHz spectrum in the L-Band . [ 9 ] [ 10 ] With it, the company is developing a satellite-terrestrial network to support the emerging 5G market and Internet of Things applications. [ 6 ] [ 10 ]
The company (as LightSquared) reached a cooperation agreement in 2007 with Inmarsat , a British satellite telecommunications company, that rearranged the L-Band spectrum so the company could use a larger, contiguous stretch of spectrum. [ 11 ] Potential interference issues at the time prevented LightSquared from deploying the network. [ 12 ]
In 2010, the company acquired licenses to mid-band spectrum when it bought SkyTerra Communications. [ 13 ] LightSquared's plans, which did not come to fruition, were to use the spectrum to create a 4G wireless mobile network covering North America. [ 14 ] [ 15 ]
Ligado received the FCC's unanimous approval for use of spectrum near the L-bands used by GPS signals for their 5G networks in April 2020. The decision came after letters from the Department of Defense and members of Congress suggested that the company's use of the spectrum would interfere with military capabilities. Secretary of Defense Mark Esper warned of the risks, and a spokesman for the Pentagon argued that the request should be denied. The request was also opposed by Iridium Communications and the Federal Aviation Administration . [ 16 ] [ 17 ]
After the FCC approval, Bradford Parkinson , lead architect of the Global Positioning System and member of the National Executive Committee for Space-Based Positioning, Navigation and Timing , said that the FCC had made a "grave error" in their approval. An advisory committee agreed that the approval was a risk. [ 18 ] Major aviation associations including the Air Line Pilots Association, International , Aerospace Industries Association , Aircraft Owners and Pilots Association (AOPA), and others all filed statements in opposition to the order. Other major GPS users, including Lockheed , Garmin , Trimble , and others also filed statements in opposition. Additionally, after the ruling, the Department of Defense and Department of Transportation issued a joint statement of opposition; the latter noted safety losses from impacts to E-911 service. The HASC committee chairman, Rep. Adam Smith called it a security risk. [ 17 ] [ 18 ] [ 19 ]
In early May, the SASC held a hearing on the effects of the decision. Referencing the COVID-19 pandemic , Chairman Sen. James Inhofe charged the FCC, stating "a few powerful people made a hasty decision over the weekend, in the middle of a national crisis, against the judgment of every other agency involved". The DOD said they had filed multiple objections and believed the license would be denied. The DOD objected to a draft of the approval in October 2019 and communicated this back to the FCC, who shared their rejection with Ligado. The FCC was not invited to participate; it is overseen by another committee. Ligado was also not invited to participate, which their CEO and Chairman both complained about in a joint statement. The following day, the HASC wrote a letter on behalf of the entire committee denouncing the decision and asking oversight questions. HASC and FCC participated in a conference call on May 21. On the following day, the National Telecommunications and Information Administration formally petitioned the FCC to request a reversal of the decision. Ligado stated that AG William Barr , Secretary of State Mike Pompeo and others supported their license. On May 26, FCC Chairman Ajit Pai responded to the HASC, commenting on the interagency conflict and defending their decision. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ]
In June 2020, Sen. Inhofe proposed legislation requiring Ligado to be liable for costs associated with their impact to GPS reception for any user of the service. [ 25 ] Rep. Michael Turner added language to the annual defense funding bill that would effectively ban Ligado from receiving contracts with DoD, and Sen. Inhofe did the same in their version of the bill. [ 26 ] [ 27 ] Members of the HASC asked for an investigation into Dennis Roberson, who is both the head of FCC's Technical Advisory Council and the head of Roberson and Associates , which provided a report to the FCC on Ligado's behalf. [ 28 ] The Keep GPS Working Coalition was created in late June, representing a broad range of industries including the Boat Owners Association, AOPA, AFBF, and others. [ 29 ] [ 30 ]
On October 13, 2023, Ligado announced that it would be preparing to file for Chapter 11 bankruptcy for the second time in eleven years after government talks over a multibillion-dollar claim asserted by the company collapsed. [ 31 ] On October 16, 2023, Ligado sued the U.S. government, stating that the Department of Defense is using its spectrum illegally and alleging that the U.S. government misappropriated Ligado's exclusively licensed spectrum to support secret DoD systems that have been using the spectrum without consent. [ 32 ] Ligado Networks filed for Chapter 11 bankruptcy on January 5, 2025. [ 33 ]
Ligado Networks originated in 1988 with the company American Mobile Satellite Corporation (which became Motient Corporation), and later as Mobile Satellite Ventures [ 34 ] after a merger between Motient Corporation and TMI Communications. [ 35 ] The company originally operated two geostationary satellites covering the North American market: MSAT -2, [ 36 ] licensed in the United States, launched in 1995; the next year, the company launched MSAT-1, which is licensed in Canada. [ 37 ]
Mobile Satellite Ventures changed its name to SkyTerra Communications in 2008. [ 38 ] LightSquared emerged from SkyTerra after Philip Falcone 's Harbinger Capital Partners acquired SkyTerra in March 2010. [ 13 ] The company received about $2.9 billion in assets from Harbinger and affiliates, as well as more than $2.3 billion in debt and equity financing. [ 39 ] [ 40 ] LightSquared sought to develop a 4G LTE wireless broadband network [ 14 ] using spectrum in the L-Band . [ 10 ]
The company launched its SkyTerra 1 satellite from Baikonur Cosmodrome in Kazakhstan on November 14, 2010. [ 41 ] At its launch, the satellite contained the largest commercial reflector antenna put into service. [ 41 ] SkyTerra 1 replaced MSAT-1 and MSAT-2 as most of the data from the company's MSAT satellites relocated to SkyTerra 1. [ 42 ]
The spectrum the company controls was originally set aside for satellite communications only. [ 13 ] That changed in 2004 when the FCC granted approval for the company to augment its satellite network with cellphone towers on land (serving as an "ancillary terrestrial component," or ATC). [ 13 ] In January 2011, the FCC approved a conditional waiver to allow the company to use its spectrum for land-based-only LTE communications if the company resolved GPS interference. [ 43 ] The GPS industry, aviators and military claimed the company's use of its spectrum would interfere with their communications. [ 44 ] In February 2012, the FCC proposed to suspend indefinitely the ATC authorization due to the interference issues with satellite services. [ 12 ] [ 45 ] Three months later, LightSquared filed for chapter 11 bankruptcy . [ 12 ]
On December 7, 2015, the company emerged from bankruptcy as a new company [ 1 ] [ 2 ] under the control of Centerbridge Partners , Fortress Investment Group and JPMorgan Chase & Co. ; Harbinger retained minority ownership. [ 8 ] Also in December 2015, the company reached settlements with GPS companies Garmin Ltd. , Deere & Co. and Trimble Navigation Ltd. to establish how the company and GPS companies can coexist. [ 46 ] [ 47 ]
The company announced its new name, Ligado Networks, on February 10, 2016. [ 7 ] [ 10 ]
On March 1, 2001, Ligado Networks' predecessor, Mobile Satellite Ventures applied to the FCC to use a "combination of spot-beam satellites and terrestrial base stations." [ 48 ]
In 2011, LightSquared's plan for standalone-terrestrial broadband services met resistance over potential interference issues with GPS systems.
In a January 12, 2011, letter to the FCC, National Telecommunications and Information Administration (NTIA) chief Lawrence Strickling said that LightSquared's hybrid mobile broadband services raise "significant interference concerns" and that several federal agencies wanted the FCC to defer action on LightSquared until the concerns were addressed. [ 49 ]
On January 20, 2011, GPS industry representatives sent a letter to the FCC, sharing a study by Garmin International that said "widespread, severe GPS jamming will occur" if LightSquared's plans were approved. [ 50 ] The study used two GPS models and simulated LightSquared transmitters. [ 50 ]
Testing showed that LightSquared's proposed ground-based transmissions could "overpower" the fainter GPS signals from space-based satellites. With the band close to those GPS signals, "GPS devices could pick up the stronger LightSquared signals and become overloaded or saturated". [ 51 ]
On January 26, 2011, The Federal Communications Commission granted a conditional waiver that allowed LightSquared and its wholesale customers to offer terrestrial-only devices rather than having to incorporate both satellite and terrestrial services. [ 52 ] [ 53 ] The waiver was conditioned on resolving concerns about interference to GPS. [ 54 ] [ 55 ] Companies that provide global positioning systems, in addition to the United States Air Force, the operator of the GPS system, opposed the FCC waiver, saying that more time was needed to resolve concerns that LightSquared's service might interfere with their satellite-based offerings. LightSquared promised to work with GPS providers and give the FCC monthly updates on a resolution to interference concerns. [ 56 ]
On April 5, 2011, with respect to concerns raised by the U.S. GPS Industry Council and NTIA about LightSquared's proposed operations, the FCC stated that LightSquared could not commence offering a commercial terrestrial service until the agency concluded that the harmful interference concerns had been resolved. [ 57 ]
On February 14, 2012, the FCC initiated proceedings to vacate LightSquared's Conditional Waiver Order based on the NTIA's conclusion that there was currently no practical way to mitigate potential GPS interference. [ 54 ]
On September 15, 2011, Representative Michael Turner (R-Ohio) called for the United States House Oversight and Government Committee to investigate LightSquared under the premise that the Federal Communications Commission waived a rule for LightSquared because of campaign contributions to Democrats. [ 58 ] [ 59 ] LightSquared officials, who had contributed to both Republicans and Democrats, denied the allegations. [ 58 ] [ 59 ]
Among the issues raised was whether political contributors and investors received favorable treatment by President Barack Obama 's administration. Before he became president, Senator Obama had invested between $50,000 and $90,000 in SkyTerra, which later became LightSquared. [ 60 ] [ 61 ] An Air Force General claimed in a closed congressional hearing that he had received political pressure to soften his testimony regarding the negative effects of LightSquared technology. [ 62 ] [ 63 ] However, the General's spokesperson denied there was any improper influence and said that the general's testimony was reviewed appropriately by the Office of the Secretary of Defense and other executive agencies via the established Office of Management and Budget process. [ 64 ]
Sen. Chuck Grassley (R-Iowa), a ranking minority member on the United States Senate Committee on the Judiciary , had also asked Falcone and LightSquared's CEO to disclose their contacts with the FCC, the White House and other government agencies. [ 65 ] | https://en.wikipedia.org/wiki/Ligado_Networks |
In coordination chemistry , a ligand [ a ] is an ion or molecule with a functional group that binds to a central metal atom to form a coordination complex . The bonding with the metal generally involves formal donation of one or more of the ligand's electron pairs , often through Lewis bases . [ 1 ] The nature of metal–ligand bonding can range from covalent to ionic . Furthermore, the metal–ligand bond order can range from one to three. Ligands are viewed as Lewis bases, although rare cases are known to involve Lewis acidic "ligands". [ 2 ] [ 3 ]
Metals and metalloids are bound to ligands in almost all circumstances, although gaseous "naked" metal ions can be generated in a high vacuum. Ligands in a complex dictate the reactivity of the central atom, including ligand substitution rates, the reactivity of the ligands themselves, and redox . Ligand selection requires critical consideration in many practical areas, including bioinorganic and medicinal chemistry , homogeneous catalysis , and environmental chemistry .
Ligands are classified in many ways, including: charge, size (bulk), the identity of the coordinating atom(s), and the number of electrons donated to the metal ( denticity or hapticity ). The size of a ligand is indicated by its cone angle .
The composition of coordination complexes has been known since the early 1800s, such as Prussian blue and copper vitriol . The key breakthrough occurred when Alfred Werner reconciled formulas and isomers . He showed, among other things, that the formulas of many cobalt(III) and chromium(III) compounds can be understood if the metal has six ligands in an octahedral geometry . The first to use the term "ligand" were Alfred Werner and Carl Somiesky, in relation to silicon chemistry. The theory allows one to understand the difference between coordinated and ionic chloride in the cobalt ammine chlorides and to explain many of the previously inexplicable isomers. He resolved the first coordination complex called hexol into optical isomers, overthrowing the theory that chirality was necessarily associated with carbon compounds. [ 4 ] [ 5 ]
In general, ligands are viewed as electron donors and the metals as electron acceptors, i.e., respectively, Lewis bases and Lewis acids . This description has been semi-quantified in many ways, e.g. ECW model . Bonding is often described using the formalisms of molecular orbital theory. [ 6 ] [ 7 ]
Ligands and metal ions can be ordered in many ways; one ranking system focuses on ligand 'hardness' (see also hard/soft acid/base theory ). Metal ions preferentially bind certain ligands. In general, 'hard' metal ions prefer weak field ligands, whereas 'soft' metal ions prefer strong field ligands. According to molecular orbital theory, the HOMO (Highest Occupied Molecular Orbital) of the ligand should have an energy that overlaps preferentially with the LUMO (Lowest Unoccupied Molecular Orbital) of the metal. Metal ions bound to strong-field ligands follow the Aufbau principle , whereas complexes bound to weak-field ligands follow Hund's rule .
Binding of the metal with the ligands results in a set of molecular orbitals, where the metal can be identified with a new HOMO and LUMO (the orbitals defining the properties and reactivity of the resulting complex) and a certain ordering of the 5 d-orbitals (which may be filled, or partially filled with electrons). In an octahedral environment, the 5 otherwise degenerate d-orbitals split in sets of 3 and 2 orbitals (for a more in-depth explanation, see crystal field theory ):
The energy difference between these 2 sets of d-orbitals is called the splitting parameter, Δ o . The magnitude of Δ o is determined by the field-strength of the ligand: strong field ligands, by definition, increase Δ o more than weak field ligands. Ligands can now be sorted according to the magnitude of Δ o (see the table below ). This ordering of ligands is almost invariable for all metal ions and is called spectrochemical series .
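Because the spectrochemical series is simply an ordered list, it can be sketched as one; the condensed selection of ligands below is a commonly quoted ordering given only for illustration and is not the full series tabulated in the source.

```python
# Condensed spectrochemical series, from weakest-field (smallest splitting) to strongest-field.
spectrochemical_series = ["I-", "Br-", "Cl-", "F-", "OH-", "H2O", "NH3", "en", "NO2-", "CN-", "CO"]

def stronger_field(ligand_a, ligand_b):
    """Return whichever of the two ligands produces the larger octahedral splitting."""
    return max(ligand_a, ligand_b, key=spectrochemical_series.index)

print(stronger_field("H2O", "CN-"))   # CN-
```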
For complexes with a tetrahedral surrounding, the d-orbitals again split into two sets, but this time in reverse order:
The energy difference between these 2 sets of d-orbitals is now called Δ t . The magnitude of Δ t is smaller than for Δ o , because in a tetrahedral complex only 4 ligands influence the d-orbitals, whereas in an octahedral complex the d-orbitals are influenced by 6 ligands. When the coordination number is neither octahedral nor tetrahedral, the splitting becomes correspondingly more complex. For the purposes of ranking ligands, however, the properties of the octahedral complexes and the resulting Δ o has been of primary interest.
The arrangement of the d-orbitals on the central atom (as determined by the 'strength' of the ligand) has a strong effect on virtually all the properties of the resulting complexes. For example, the energy differences between the d-orbitals have a strong effect on the optical absorption spectra of metal complexes. It turns out that valence electrons occupying orbitals with significant 3 d-orbital character absorb in the 400–800 nm region of the spectrum (UV–visible range). The absorption of light (what we perceive as the color ) by these electrons (that is, excitation of electrons from one orbital to another orbital under influence of light) can be correlated to the ground state of the metal complex, which reflects the bonding properties of the ligands. The change in relative energy of the d-orbitals as a function of the field-strength of the ligands is described in Tanabe–Sugano diagrams .
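Crystal-field splittings are usually quoted in wavenumbers, and converting a splitting Δo to the corresponding absorption wavelength is simple arithmetic (λ in nm = 10⁷ divided by the wavenumber in cm⁻¹). The value of roughly 20,000 cm⁻¹ used below is only of the order observed for hexaaqua complexes of first-row transition metals and is an illustrative assumption.

```python
def splitting_to_wavelength_nm(delta_o_cm1):
    """Convert a crystal-field splitting in wavenumbers (cm^-1) to the photon wavelength in nm."""
    return 1e7 / delta_o_cm1

print(round(splitting_to_wavelength_nm(20_300)))   # about 493 nm, i.e. within the visible range
```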
In cases where the ligand has a low-energy LUMO, such orbitals also participate in the bonding. The metal–ligand bond can be further stabilised by a formal donation of electron density back to the ligand in a process known as back-bonding . In this case a filled, central-atom-based orbital donates density into the LUMO of the (coordinated) ligand. Carbon monoxide is the preeminent example of a ligand that engages metals via back-donation. Complementarily, ligands with low-energy filled orbitals of pi-symmetry can serve as pi-donors.
Ligands are classified according to the number of electrons that they "donate" to the metal. L ligands are Lewis bases . L ligands are represented by amines , phosphines , CO , N 2 , and alkenes . Examples of L ligands extend to include dihydrogen and hydrocarbons that interact by agostic interactions . X ligands are halides and pseudohalides . X ligands typically are derived from anionic precursors such as chloride but includes ligands where salts of anion do not really exist such as hydride and alkyl. [ 8 ] [ 9 ]
Especially in the area of organometallic chemistry , ligands are classified according to the "CBC Method" for Covalent Bond Classification, as popularized by M. L. H. Green and "is based on the notion that there are three basic types [of ligands]... represented by the symbols L, X, and Z, which correspond respectively to 2-electron, 1-electron and 0-electron neutral ligands." [ 10 ] [ 11 ]
Many ligands are capable of binding metal ions through multiple sites, usually because the ligands have lone pairs on more than one atom. Such ligands are polydentate. [ 12 ] Ligands that bind via more than one atom are often termed chelating . A ligand that binds through two sites is classified as bidentate , and three sites as tridentate . The " bite angle " refers to the angle between the two bonds of a bidentate chelate. Chelating ligands are commonly formed by linking donor groups via organic linkers. A classic bidentate ligand is ethylenediamine , which is derived by the linking of two ammonia groups with an ethylene (−CH 2 CH 2 −) linker. A classic example of a polydentate ligand is the hexadentate chelating agent EDTA , which is able to bond through six sites, completely surrounding some metals. The number of times a polydentate ligand binds to a metal centre is symbolized by " κ n ", where n indicates the number of sites by which a ligand attaches to a metal. EDTA 4− , when it is hexadentate, binds as a κ 6 -ligand; the amines and the carboxylate oxygen atoms are not contiguous. In practice, the n value of a ligand is not indicated explicitly but rather assumed. The binding affinity of a chelating system depends on the chelating angle or bite angle .
Denticity (represented by κ ) is nomenclature that describes the number of noncontiguous atoms of a ligand bonded to a metal. This descriptor is often omitted because the denticity of a ligand is often obvious. The complex tris(ethylenediamine)cobalt(III) could be described as [Co(κ 2 -en) 3 ] 3+ .
Complexes of polydentate ligands are called chelate complexes. They tend to be more stable than complexes derived from monodentate ligands. This enhanced stability, called the chelate effect , is usually attributed to effects of entropy , which favors the displacement of many ligands by one polydentate ligand.
Related to the chelate effect is the macrocyclic effect . A macrocyclic ligand is any large ligand that at least partially surrounds the central atom and bonds to it, leaving the central atom at the centre of a large ring. The more rigid and the higher its denticity, the more inert will be the macrocyclic complex. Heme is an example, in which the iron atom is at the centre of a porphyrin macrocycle, bound to four nitrogen atoms of the tetrapyrrole macrocycle. The very stable dimethylglyoximate complex of nickel is a synthetic macrocycle derived from dimethylglyoxime .
Hapticity (represented by Greek letter η ) refers to the number of contiguous atoms that comprise a donor site and attach to a metal center. The η-notation applies when multiple atoms are coordinated. For example, η 2 is a ligand that coordinates through two contiguous atoms. Butadiene forms both η 2 and η 4 complexes depending on the number of carbon atoms that are bonded to the metal. [ 13 ] [ 14 ] [ 15 ]
Trans-spanning ligands are bidentate ligands that can span coordination positions on opposite sides of a coordination complex. [ 16 ]
In contrast to polydentate ligands, ambidentate ligands can attach to the central atom in either one of two (or more) places, but not both. An example is thiocyanate , SCN − , which can attach at either the sulfur atom or the nitrogen atom. Such compounds give rise to linkage isomerism .
Polydentate and ambidentate are therefore two different types of polyfunctional ligands (ligands with more than one functional group ) which can bond to a metal center through different ligand atoms to form various isomers. Polydentate ligands can bond through one atom AND another (or several others) at the same time, whereas ambidentate ligands bond through one atom OR another. Proteins are complex examples of polyfunctional ligands, usually polydentate.
A bridging ligand links two or more metal centers. Virtually all inorganic solids with simple formulas are coordination polymers , consisting of metal ion centres linked by bridging ligands. This group of materials includes all anhydrous binary metal ion halides and pseudohalides. Bridging ligands also persist in solution. Polyatomic ligands such as carbonate are ambidentate and thus are found to often bind to two or three metals simultaneously. Atoms that bridge metals are sometimes indicated with the prefix " μ ". Most inorganic solids are polymers by virtue of the presence of multiple bridging ligands. Bridging ligands, capable of coordinating multiple metal ions, have been attracting considerable interest because of their potential use as building blocks for the fabrication of functional multimetallic assemblies. [ 17 ]
Binucleating ligands bind two metal ions. [ 18 ] Usually binucleating ligands feature bridging ligands, such as phenoxide, pyrazolate, or pyrazine, as well as other donor groups that bind to only one of the two metal ions.
Some ligands can bond to a metal center through the same atom but with a different number of lone pairs . The bond order of the metal ligand bond can be in part distinguished through the metal ligand bond angle (M−X−R). This bond angle is often referred to as being linear or bent with further discussion concerning the degree to which the angle is bent. For example, an imido ligand in the ionic form has three lone pairs. One lone pair is used as a sigma X donor, the other two lone pairs are available as L-type pi donors. If both lone pairs are used in pi bonds then the M−N−R geometry is linear. However, if one or both these lone pairs is nonbonding then the M−N−R bond is bent and the extent of the bend speaks to how much pi bonding there may be. η 1 -Nitric oxide can coordinate to a metal center in linear or bent manner.
A spectator ligand is a tightly coordinating polydentate ligand that does not participate in chemical reactions but removes active sites on a metal. Spectator ligands influence the reactivity of the metal center to which they are bound.
Bulky ligands are used to control the steric properties of a metal center. They are used for many reasons, both practical and academic. On the practical side, they influence the selectivity of metal catalysts, e.g., in hydroformylation . Of academic interest, bulky ligands stabilize unusual coordination sites, e.g., reactive coligands or low coordination numbers. Often bulky ligands are employed to simulate the steric protection afforded by proteins to metal-containing active sites. Of course excessive steric bulk can prevent the coordination of certain ligands.
Chiral ligands are useful for inducing asymmetry within the coordination sphere. Often the ligand is employed as an optically pure group. In some cases, such as secondary amines, the asymmetry arises upon coordination. Chiral ligands are used in homogeneous catalysis , such as asymmetric hydrogenation .
Hemilabile ligands contain at least two electronically different coordinating groups and form complexes where one of these is easily displaced from the metal center while the other remains firmly bound, a behaviour which has been found to increase the reactivity of catalysts when compared to the use of more traditional ligands.
Non-innocent ligands bond with metals in such a manner that the distribution of electron density between the metal center and ligand is unclear. Describing the bonding of non-innocent ligands often involves writing multiple resonance forms that have partial contributions to the overall state.
Virtually every molecule and every ion can serve as a ligand for (or "coordinate to") metals. Monodentate ligands include virtually all anions and all simple Lewis bases. Thus, the halides and pseudohalides are important anionic ligands whereas ammonia , carbon monoxide , and water are particularly common charge-neutral ligands. Simple organic species are also very common, be they anionic ( RO − and RCO 2 − ) or neutral ( R 2 O , R 2 S , R 3−x NH x , and R 3 P ). The steric properties of some ligands are evaluated in terms of their cone angles .
Beyond the classical Lewis bases and anions, all unsaturated molecules are also ligands, utilizing their pi electrons in forming the coordinate bond. Also, metals can bind to the σ bonds in for example silanes , hydrocarbons , and dihydrogen (see also: Agostic interaction ).
In complexes of non-innocent ligands , the ligand is bonded to metals via conventional bonds, but the ligand is also redox-active.
In the following table the ligands are sorted by field strength [ citation needed ] (weak field ligands first):
The entries in the table are sorted by field strength, binding through the stated atom (i.e. as a terminal ligand). The 'strength' of the ligand changes when the ligand binds in an alternative binding mode (e.g., when it bridges between metals) or when the conformation of the ligand gets distorted (e.g., a linear ligand that is forced through steric interactions to bind in a nonlinear fashion).
In this table other common ligands are listed in alphabetical order.
A ligand exchange (also called ligand substitution ) is a chemical reaction in which a ligand in a compound is replaced by another. Two general mechanisms are recognized: associative substitution and dissociative substitution .
Associative substitution closely resembles the S N 2 mechanism in organic chemistry. A typically smaller ligand can attach to an unsaturated complex followed by loss of another ligand. Typically, the rate of the substitution is first order in entering ligand L and the unsaturated complex. [ 19 ]
Dissociative substitution is common for octahedral complexes. This pathway closely resembles the S N 1 mechanism in organic chemistry. The identity of the entering ligand does not affect the rate. [ 19 ]
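As a rough summary of the two limiting mechanisms, their idealized rate laws can be written as follows (a simplification, since many real substitutions show intermediate or interchange behavior):

$$\text{associative:}\quad \text{rate}=k\,[\mathrm{ML}_n][\mathrm{L}'] \qquad\qquad \text{dissociative:}\quad \text{rate}=k\,[\mathrm{ML}_n]$$

where ML n is the starting complex and L′ the entering ligand; only the associative pathway is first order in the entering ligand.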
BioLiP [ 20 ] is a comprehensive ligand–protein interaction database, with the 3D structure of the ligand–protein interactions taken from the Protein Data Bank . MANORAA is a webserver for analyzing conserved and differential molecular interaction of the ligand in complex with protein structure homologs from the Protein Data Bank. It provides the linkage to protein targets such as its location in the biochemical pathways, SNPs and protein/RNA baseline expression in target organ. [ 21 ] | https://en.wikipedia.org/wiki/Ligand |
Ligand-gated ion channels ( LICs , LGIC ), also commonly referred to as ionotropic receptors , are a group of transmembrane ion-channel proteins which open to allow ions such as Na + , K + , Ca 2+ , and/or Cl − to pass through the membrane in response to the binding of a chemical messenger (i.e. a ligand ), such as a neurotransmitter . [ 1 ] [ 2 ] [ 3 ]
When a presynaptic neuron is excited, it releases a neurotransmitter from vesicles into the synaptic cleft . The neurotransmitter then binds to receptors located on the postsynaptic neuron . If these receptors are ligand-gated ion channels, a resulting conformational change opens the ion channels, which leads to a flow of ions across the cell membrane. This, in turn, results in either a depolarization , for an excitatory receptor response, or a hyperpolarization , for an inhibitory response.
These receptor proteins are typically composed of at least two different domains: a transmembrane domain which includes the ion pore, and an extracellular domain which includes the ligand binding location (an allosteric binding site). This modularity has enabled a 'divide and conquer' approach to finding the structure of the proteins (crystallising each domain separately). The function of such receptors located at synapses is to convert the chemical signal of presynaptically released neurotransmitter directly and very quickly into a postsynaptic electrical signal. Many LICs are additionally modulated by allosteric ligands , by channel blockers , ions , or the membrane potential . LICs are classified into three superfamilies which lack evolutionary relationship: cys-loop receptors , ionotropic glutamate receptors and ATP-gated channels .
The cys-loop receptors are named after a characteristic loop formed by a disulfide bond between two cysteine residues in the N terminal extracellular domain.
They are part of a larger family of pentameric ligand-gated ion channels that usually lack this disulfide bond, hence the tentative name "Pro-loop receptors". [ 4 ] [ 5 ] A binding site in the extracellular N-terminal ligand-binding domain gives them receptor specificity for (1) acetylcholine (AcCh), (2) serotonin, (3) glycine, (4) glutamate and (5) γ-aminobutyric acid (GABA) in vertebrates. The receptors are subdivided with respect to the type of ion that they conduct (anionic or cationic) and further into families defined by the endogenous ligand. They are usually pentameric, with each subunit containing 4 transmembrane helices constituting the transmembrane domain, and a beta-sheet sandwich type, extracellular, N-terminal, ligand-binding domain. [ 6 ] Some also contain an intracellular domain.
The prototypic ligand-gated ion channel is the nicotinic acetylcholine receptor . It consists of a pentamer of protein subunits (typically ααβγδ), with two binding sites for acetylcholine (one at the interface of each alpha subunit). When the acetylcholine binds it alters the receptor's configuration (twists the T2 helices which moves the leucine residues, which block the pore, out of the channel pathway) and causes the constriction in the pore of approximately 3 angstroms to widen to approximately 8 angstroms so that ions can pass through. This pore allows Na + ions to flow down their electrochemical gradient into the cell. With a sufficient number of channels opening at once, the inward flow of positive charges carried by Na + ions depolarizes the postsynaptic membrane sufficiently to initiate an action potential .
A bacterial homologue to an LIC has been identified, hypothesized to act nonetheless as a chemoreceptor. [ 4 ] This prokaryotic nAChR variant is known as the GLIC receptor, after the species in which it was identified: Gloeobacter Ligand-gated Ion Channel.
Cys-loop receptors have structural elements that are well conserved, with a large extracellular domain (ECD) harboring an alpha-helix and 10 beta-strands. Following the ECD, four transmembrane segments (TMSs) are connected by intracellular and extracellular loop structures. [ 7 ] Except for the TMS 3-4 loop, their lengths are only 7-14 residues. The TMS 3-4 loop forms the largest part of the intracellular domain (ICD) and exhibits the most variable region among all of these homologous receptors. The ICD is defined by the TMS 3-4 loop together with the TMS 1-2 loop preceding the ion channel pore. [ 7 ] Crystallization has revealed structures for some members of the family, but to allow crystallization, the intracellular loop was usually replaced by a short linker present in prokaryotic cys-loop receptors, so the structures of these intracellular loops are not known. Nevertheless, this intracellular loop appears to function in desensitization, modulation of channel physiology by pharmacological substances, and posttranslational modifications . Motifs important for trafficking are located therein, and the ICD interacts with scaffold proteins enabling inhibitory synapse formation. [ 7 ]
The ionotropic glutamate receptors bind the neurotransmitter glutamate . They form tetramers, with each subunit consisting of an extracellular amino terminal domain (ATD, which is involved in tetramer assembly), an extracellular ligand binding domain (LBD, which binds glutamate), and a transmembrane domain (TMD, which forms the ion channel). The transmembrane domain of each subunit contains three transmembrane helices as well as a half membrane helix with a reentrant loop. The structure of the protein starts with the ATD at the N terminus, followed by the first half of the LBD, which is interrupted by helices 1, 2 and 3 of the TMD before continuing with the final half of the LBD and then finishing with helix 4 of the TMD at the C terminus. This means there are three links between the TMD and the extracellular domains. Each subunit of the tetramer has a binding site for glutamate formed by the two LBD sections forming a clamshell-like shape. Only two of these sites in the tetramer need to be occupied to open the ion channel. The pore is mainly formed by the half helix 2 in a way which resembles an inverted potassium channel .
The α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (also known as AMPA receptor , or quisqualate receptor) is a non- NMDA -type ionotropic transmembrane receptor for glutamate that mediates fast synaptic transmission in the central nervous system (CNS).
Its name is derived from its ability to be activated by the artificial glutamate analog AMPA . The receptor was first named the "quisqualate receptor" by Watkins and colleagues after a naturally occurring agonist quisqualate and was only later given the label "AMPA receptor" after the selective agonist developed by Tage Honore and colleagues at the Royal Danish School of Pharmacy in Copenhagen. [ 10 ] AMPARs are found in many parts of the brain and are the most commonly found receptor in the nervous system . The AMPA receptor GluA2 (GluR2) tetramer was the first glutamate receptor ion channel to be crystallized . Ligands include:
The N-methyl-D-aspartate receptor ( NMDA receptor ) – a type of ionotropic glutamate receptor – is a ligand-gated ion channel that is gated by the simultaneous binding of glutamate and a co-agonist (i.e., either D-serine or glycine ). [ 11 ] Studies show that the NMDA receptor is involved in regulating synaptic plasticity and memory. [ 12 ] [ 13 ]
The name "NMDA receptor" is derived from the ligand N-methyl-D-aspartate (NMDA), which acts as a selective agonist at these receptors. When the NMDA receptor is activated by the binding of two co-agonists, the cation channel opens, allowing Na + and Ca 2+ to flow into the cell, in turn raising the cell's electric potential . Thus, the NMDA receptor is an excitatory receptor. At resting potentials , the binding of Mg 2+ or Zn 2+ at their extracellular binding sites on the receptor blocks ion flux through the NMDA receptor channel. "However, when neurons are depolarized, for example, by intense activation of colocalized postsynaptic AMPA receptors , the voltage-dependent block by Mg 2+ is partially relieved, allowing ion influx through activated NMDA receptors. The resulting Ca 2+ influx can trigger a variety of intracellular signaling cascades, which can ultimately change neuronal function through activation of various kinases and phosphatases". [ 14 ] Ligands include:
ATP-gated channels open in response to binding the nucleotide ATP . They form trimers with two transmembrane helices per subunit and both the C and N termini on the intracellular side.
Ligand-gated ion channels are likely to be the major site at which anaesthetic agents and ethanol have their effects, although unequivocal evidence of this is yet to be established. [ 16 ] [ 17 ] In particular, the GABA and NMDA receptors are affected by anaesthetic agents at concentrations similar to those used in clinical anaesthesia. [ 18 ]
By understanding these mechanisms and exploring the chemical, biological, and physical agents that can act on these receptors, more and more clinical applications are being supported by preliminary experiments or approved by the FDA . Memantine is approved by the U.S. FDA and the European Medicines Agency for the treatment of moderate-to-severe Alzheimer's disease , [ 19 ] and has now received a limited recommendation by the UK's National Institute for Health and Care Excellence for patients who fail other treatment options. [ 20 ] Agomelatine is a drug that acts on a dual melatonergic - serotonergic pathway and has shown efficacy in the treatment of anxious depression in clinical trials; [ 21 ] [ 22 ] studies also suggest efficacy in the treatment of atypical and melancholic depression . [ 23 ]
As of this edit , this article uses content from "1.A.9 The Neurotransmitter Receptor, Cys loop, Ligand-gated Ion Channel (LIC) Family" , which is licensed in a way that permits reuse under the Creative Commons Attribution-ShareAlike 3.0 Unported License , but not under the GFDL . All relevant terms must be followed. | https://en.wikipedia.org/wiki/Ligand-gated_ion_channel |
A ligand-targeted liposome (LTL) is a nanocarrier with specific ligands attached to its surface to enhance localization for targeted drug delivery . The targeting ability of LTLs enhances cellular localization and uptake of these liposomes for therapeutic or diagnostic purposes. LTLs have the potential to enhance drug delivery by decreasing peripheral systemic toxicity, increasing in vivo drug stability, enhancing cellular uptake, and increasing efficiency for chemotherapeutics and other applications. [ 1 ]
Liposomes are beneficial in therapeutic manufacturing because of low batch-to-batch variability, easy synthesis, favorable scalability, and strong biocompatibility. Ligand-targeting technology enhances liposomes by adding targeting properties for directed drug delivery. [ 1 ]
Ligands are molecules responsible for binding to receptors in the cellular targeting process. Surface-coupled ligands offer a greater degree of freedom to move on the liposome membrane for optimal interactions. [ 2 ]
Ligands are typically monoclonal antibodies (mAbs) or antibody fragments , but can also include other molecules such as ARPG, [ 3 ] proteins , peptides , vitamins , carbohydrates , and glycoproteins . [ 1 ] The choice of ligand can significantly influence the behavioral and functional properties of a ligand-targeted liposome. Antibody fragments have lower immunogenicity and improved pharmacokinetics . [ citation needed ] mAbs are unique and can be used for inhibition of DNA repair , terminating the cell cycle , and triggering apoptosis , all of which factor into applications for anticancer drugs. [ 4 ] Peptides are relatively easy and affordable to prepare with low antigenicity and lower opsonization , which are thus more resistant to enzymatic degradation. Proteins can target the transferrin receptor membrane glycoprotein. Sugars and vitamins are recognized by cellular transport receptors. [ 1 ]
Ligand choice is based on receptor expression, ligand internalization, binding affinity, and type of ligand. [ citation needed ] Ligands alone are not able to carry an efficient payload for therapeutic levels but can carry more of the agent when combined with liposomes. [ 2 ]
Ligands can be attached to liposomes through ligation to create ligand-targeted liposomes in a variety of ways. Liposomes have a lipid outer layer that can be used to bind ligands. Conjugation of the ligand to the surface of a liposome can be achieved through multiple routes. Covalent binding [ 5 ] [ 6 ] is a prominent way due to the anchoring between the long-chain fatty acids and the ligand. Combinations of covalent binding through disulfide linkages , [ 7 ] heating, [ 8 ] and hydrophobic interactions [ 9 ] can be used depending on the properties of the liposome and ligand. Adsorption and membrane fusion are non-covalent methods for the attachment of monoclonal antibodies. [ 10 ] [ 2 ] Chemical linkages such as covalent bonds are more effective at increasing the amount of attached ligand to the carrier as opposed to non-covalent methods. [ 11 ]
During chemical coupling for manufacturing, it is crucial that ligands maintain their integrity when attached to the liposome surface. If ligands, such as antibodies, do not maintain binding specificity, proper orientation, and coupling efficiency, the liposome will not be effective. [ 2 ]
Since the ligand is responsible for cellular interaction, it is chosen for the application depending on the target site. The target site contains binding sites that the ligand targets to deliver the LTL to the desired area. Favorable target site characteristics are determined by what is commonly expressed by tissues of the pathology of interest. Determinants can include histones , basement membrane fibrinogen , selectins , adhesion molecules , and other ligand targets. For example, in some human cancer tumors such as ovarian carcinomas , folate is over-expressed. LTLs for targeting cancer often use a ligand that targets this over-expression of folate to localize drug delivery to the desired area. [ 12 ] [ 2 ] The tumor microenvironment of solid tumor cancers is also a unique targeting site. Tumor endothelial cells are important for angiogenesis , which is key to tumor growth; therefore, using LTLs to target these cells can limit the growth and vascularization of a tumor. [ 3 ]
Ligand-targeted liposomes utilize active targeting to interact with the desired cells. [ 11 ] Once administered intravenously into blood circulation, ligand-targeted liposomes must travel to reach the target area to deliver their contents. LTLs retain the contained agent until the process of cellular uptake.
Receptor-mediated endocytosis is the most common way LTLs deliver material to the cell. The targeting ligand connected to the liposome attaches to the binding site found on the targeted cell. The LTL's contents are transported to lysosomes to be processed. This process allows the molecules to cross the blood-brain barrier , which allows the drug to be delivered to tissue that is relatively difficult to reach without a specific mechanism. Less commonly, pinocytosis or phagocytosis may be used for cellular uptake of the liposome. [ 12 ] Certain recognition sites, such as ecto-NAD+ glycohydrolase, mediate uptake to aid in the internalization and effectiveness of the LTLs. [ 13 ]
The remainder of LTLs in circulation after binding to the target site are mainly cleared through the reticuloendothelial system (RES). The RES includes different organs including the kidneys, lungs, spleen, liver, bone marrow, and lymph nodes. The liver is the primary organ for the clearance of LTLs. The RES is most likely able to clear LTLs due to fenestrations in their microvasculature that allow for extravasation. Phagocytic cells within the RES break down LTLs. [ 14 ]
Ligand-targeted liposomes are used for a variety of applications depending on the liposome, ligand, and liposome contents.
Ligand-targeted liposomes can be used for diagnostics through imaging. The liposomes can contain imaging agents to aid in visualization such as fluorescent dyes , labeling probes, and contrast agents . [ 15 ] Commonly, a radioactive gamma-emitter, fluorescent marker, or magnetic resonance imaging (MRI) agent is encapsulated in the liposome for this application. [ 2 ] The active targeting mechanism of LTLs allows the target tissue to retain the imaging agent while the remaining agent is cleared from circulation. The ligand-targeted liposomes increase the specificity and sensitivity of the images taken through positron emission tomography (PET), single-photon emission computed tomography (SPECT), and MRI techniques through the ligand localization to receptors of interest. [ 15 ] Biotinylated liposomes containing [ 67 Ga] coupled with a later injection of avidin have been shown to reduce background signal and produce the needed contrast for imaging while reducing the circulation time of radioactive imaging agent. [ 16 ] Molecular imaging of processes over time in vivo is also made possible using ligand-targeted nanoparticles. [ 17 ] As of 2015, many ligand-targeted imaging agents such as MIP-1404, MIP-1405, MIP-1072, MIP-109, and 18 F-DCFBC were undergoing clinical trials. The ability of a liposome to encapsulate these imaging agents and deliver them to specific regions through ligand targeting is helpful for precision detection. [ 18 ]
Ligand-targeted liposomes are a promising method of drug delivery. These systems are efficient in delivering the drug to localized areas with low peripheral distribution, which minimizes off-target effects. The favorable biodistribution to target tissue is an encouraging property of this drug delivery system. In addition to highly targeting tissue, LTLs have a short circulating half-life , so they can be quickly cleared from the bloodstream. [ 2 ] [ 19 ] LTLs can be used to deliver gold nanorods (AuNRs) for localized photo-thermal therapy in cancer treatment. [ 20 ] Photodynamic therapy (PDT) is a non-invasive cancer therapy that relies on a photosensitizing (PS) pro-drug to interact with light and oxygen as a cancer therapeutic agent. PSs can be encapsulated in LTLs—allowing them to move through systemic circulation to the tumor site for ligand binding—to specify the area of their effect. Using PDT causes damage to cancer cells and tumor microvasculature. [ 3 ] There are many liposome-based products currently approved or undergoing clinical trials.
Aside from cancer therapies, ligand-targeted liposomes can also be used to target inflammation in the body that may be present due to rheumatoid arthritis , psoriasis , vascular inflammation, and organ transplantation . E-selectin is a cell-specific receptor expressed by inflamed endothelium that ligands can target. [ 21 ]
LTLs also have the potential for localized treatment in fungal infections. [ 21 ] AmBisome (L-AMB) is an LTL that contains Amphotericin B (AMPH-B), an anti-fungal treatment that is effective for a broad variety of fungal infections. AMPH-B can be toxic after prolonged exposure, making it a good candidate for the targeting and rapid clearing of systemic circulation of LTLs. AmBisome is also effective due to the inflammation in the area of fungal activity, which increases vascular permeation. [ 22 ]
Consistently producing ligand-targeted liposomes through traditional methods is difficult. The process can be tedious and challenging to control, and can result in a poorly defined system. Using the 'post-insertion' technique—in which micelles formed from PEG-linked ligands are incubated with pre-formed, drug-loaded, non-targeted liposomes to combine and form LTLs—can limit the associated manufacturing challenges. [ citation needed ]
When using certain ligands, such as antibodies, there is a risk of an immunological reaction. Liposome design, including size, charge, morphology, composition, surface characteristics, and dose size, can all influence the immune response to administered LTLs. [ 14 ] The ligands used can elicit an immune response when introduced into the body. For example, when peptide ligands such as CDX are used for brain-targeted delivery systems, they are immunogenic and trigger an immune response. [ 24 ] Complement Activation-Related Pseudo-allergies (CARPA) is a hypersensitivity syndrome that can be triggered when LTLs activate the innate immune system and the complement system. CARPA can cause many side effects including anaphylaxis , cardiopulmonary distress, and facial swelling. These side effects have the potential to be severe, which generates concern when administering LTLs to patients with health problems, especially cardiovascular issues. This reaction can be reduced by slowing infusion rates or incorporating the use of allergy medicines like antihistamines into the treatment regimen. [ 14 ]
Due to the immune response, LTLs can experience the accelerated blood clearance (ABC) phenomenon. This phenomenon is more common with repeated dosing of LTLs, such as multi-dose PEGylated formulas, because of immunological memory. The pharmacokinetic exposure and circulation time of the second dose have been shown to be significantly reduced, while accumulation in the spleen and liver increases. This poses challenges for clinical applications of LTLs that require multiple doses to be effective. [ 14 ]
Ligand-targeted liposomes need specific conditions to remain intact for use. Controlling environmental factors such as temperature and pH is necessary to maintain the integrity of the molecules. This can be helpful for temperature-sensitive or pH-dependent drug release conditions but is harmful if the pH changes at an inopportune time. [ 25 ] This technology can also be used in combination with enzymes such as in Gal-Dox, which releases active doxorubicin in combination with β-Galactosidase. [ 26 ] Making sure the compound does not encounter the enzyme too early is also important for effective usage.
There is a possibility that LTLs lead to immunosuppression. LTLs are cleared through the RES which is part of the innate immune system. Macrophage saturation to remove the liposomes could impact the ability of the phagocytic cells to function properly to conduct immune functions. Significant immune suppression has not been observed in clinical cases for therapeutic doses of LTLs containing non-cytotoxic drugs. [ 14 ] | https://en.wikipedia.org/wiki/Ligand-targeted_liposome |
LigandScout is computer software that allows creating three-dimensional (3D) pharmacophore models from structural data of macromolecule – ligand complexes, or from training and test sets of organic molecules. It incorporates a complete definition of 3D chemical features (such as hydrogen bond donors, acceptors, lipophilic areas, positively and negatively ionizable chemical groups) that describe the interaction of a bound small organic molecule ( ligand ) and the surrounding binding site of the macromolecule . [ 1 ] These pharmacophores can be overlaid and superimposed using a pattern-matching based alignment algorithm [ 2 ] that is solely based on pharmacophoric feature points instead of chemical structure. From such an overlay, shared features can be interpolated to create a so-called shared-feature pharmacophore that shares all common interactions of several binding sites/ligands or extended to create a so-called merged-feature pharmacophore. The software has been successfully used to predict new lead structures in drug design , e.g., predicting biological activity of novel human immunodeficiency virus ( HIV ) reverse transcriptase inhibitors . [ 3 ]
Other software tools which help to model pharmacophores include: | https://en.wikipedia.org/wiki/LigandScout |
In biochemistry and pharmacology , a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. The etymology stems from Latin ligare , which means 'to bind'. In protein-ligand binding, the ligand is usually a molecule which produces a signal by binding to a site on a target protein . The binding typically results in a change of conformational isomerism (conformation) of the target protein. In DNA-ligand binding studies, the ligand can be a small molecule, ion , [ 1 ] or protein [ 2 ] which binds to the DNA double helix . The relationship between ligand and binding partner is a function of charge, hydrophobicity , and molecular structure.
Binding occurs by intermolecular forces , such as ionic bonds , hydrogen bonds and Van der Waals forces . The association or docking is actually reversible through dissociation . Measurably irreversible covalent bonding between a ligand and target molecule is atypical in biological systems. In contrast to the definition of ligand in metalorganic and inorganic chemistry , in biochemistry it is ambiguous whether the ligand generally binds at a metal site, as is the case in hemoglobin . In general, the interpretation of ligand is contextual with regards to what sort of binding has been observed.
Ligand binding to a receptor protein alters the conformation by affecting the three-dimensional shape orientation. The conformation of a receptor protein constitutes its functional state. Ligands include substrates , inhibitors , activators , signaling lipids , and neurotransmitters . The tendency or strength of binding is called affinity . Binding affinity is actualized not only by host–guest interactions, but also by solvent effects that can play a dominant, steric role which drives non-covalent binding in solution. [ 3 ] The solvent provides a chemical environment for the ligand and receptor to adapt, and thus accept or reject each other as partners.
Radioligands are radioisotope labeled compounds used in vivo as tracers in PET studies and for in vitro binding studies.
The interaction of ligands with their binding sites can be characterized in terms of a binding affinity. In general, high-affinity ligand binding results from greater attractive forces between the ligand and its receptor while low-affinity ligand binding involves less attractive force. In general, high-affinity binding results in a higher occupancy of the receptor by its ligand than is the case for low-affinity binding; the residence time (lifetime of the receptor-ligand complex) does not correlate. High-affinity binding of ligands to receptors is often physiologically important when some of the binding energy can be used to cause a conformational change in the receptor, resulting in altered behavior for example of an associated ion channel or enzyme .
A ligand that can bind to and alter the function of the receptor that triggers a physiological response is called a receptor agonist . Ligands that bind to a receptor but fail to activate the physiological response are receptor antagonists .
Agonist binding to a receptor can be characterized both in terms of how much physiological response can be triggered (that is, the efficacy ) and in terms of the concentration of the agonist that is required to produce the physiological response (often measured as EC 50 , the concentration required to produce the half-maximal response). High-affinity ligand binding implies that a relatively low concentration of a ligand is adequate to maximally occupy a ligand-binding site and trigger a physiological response. Receptor affinity is measured by an inhibition constant or K i value, the concentration required to occupy 50% of the receptor. Ligand affinities are most often measured indirectly as an IC 50 value from a competition binding experiment where the concentration of a ligand required to displace 50% of a fixed concentration of reference ligand is determined. The K i value can be estimated from IC 50 through the Cheng Prusoff equation . Ligand affinities can also be measured directly as a dissociation constant (K d ) using methods such as fluorescence quenching , isothermal titration calorimetry or surface plasmon resonance . [ 4 ]
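For the common case of competitive binding, the Cheng Prusoff relation mentioned above has a simple closed form, K i = IC 50 / (1 + [L]/K d ), where [L] is the fixed concentration of the labeled reference ligand and K d its dissociation constant. A minimal Python sketch (the function name and example numbers are illustrative only):

```python
def cheng_prusoff_ki(ic50_nM, radioligand_nM, kd_nM):
    """Estimate Ki from an IC50 measured in a competition binding experiment.

    Cheng-Prusoff relation for competitive binding:
        Ki = IC50 / (1 + [L]/Kd)
    where [L] is the fixed concentration of the labeled reference ligand and
    Kd is its dissociation constant; all concentrations share one unit (nM here).
    """
    return ic50_nM / (1.0 + radioligand_nM / kd_nM)

# Example: IC50 of 50 nM measured with 2 nM radioligand whose Kd is 1 nM
print(cheng_prusoff_ki(50.0, 2.0, 1.0))  # ~16.7 nM
```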
Low-affinity binding (high K i level) implies that a relatively high concentration of a ligand is required before the binding site is maximally occupied and the maximum physiological response to the ligand is achieved. In the example shown to the right, two different ligands bind to the same receptor binding site. Only one of the agonists shown can maximally stimulate the receptor and, thus, can be defined as a full agonist . An agonist that can only partially activate the physiological response is called a partial agonist . In this example, the concentration at which the full agonist (red curve) can half-maximally activate the receptor is about 5 x 10 −9 Molar (nM = nanomolar ).
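The full-versus-partial agonist behavior described here is commonly modeled with a Hill-type concentration-response equation. The sketch below is illustrative only; the E max values are assumed, and the EC 50 is set to 5 nM to match the example above:

```python
import numpy as np

def response(conc_nM, emax, ec50_nM, hill=1.0):
    """Fractional response of a receptor to an agonist (Hill equation)."""
    return emax * conc_nM**hill / (ec50_nM**hill + conc_nM**hill)

conc = np.logspace(-1, 3, 5)                    # 0.1 nM to 1000 nM
print(response(conc, emax=1.0, ec50_nM=5.0))    # full agonist
print(response(conc, emax=0.4, ec50_nM=5.0))    # partial agonist: same EC50, lower maximum
```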
Binding affinity is most commonly determined using a radiolabeled ligand, known as a tagged ligand. Homologous competitive binding experiments involve binding competition between a tagged ligand and an untagged ligand. [ 5 ] Real-time, often label-free methods such as surface plasmon resonance , dual-polarization interferometry and multi-parametric surface plasmon resonance (MP-SPR) can quantify affinity not only from concentration-based assays but also from the kinetics of association and dissociation and, in the latter cases, from the conformational change induced upon binding. MP-SPR also enables measurements in high-saline dissociation buffers thanks to a unique optical setup. Microscale thermophoresis (MST), an immobilization-free method, [ 6 ] has also been developed; it allows the determination of binding affinity without any limitation on the ligand's molecular weight. [ 7 ]
For the use of statistical mechanics in a quantitative study of the ligand-receptor binding affinity, see the comprehensive article [ 8 ] on the configurational partition function .
Binding affinity data alone does not determine the overall potency of a drug or a naturally produced (biosynthesized) hormone. [ 9 ]
Potency is a result of the complex interplay of both the binding affinity and the ligand efficacy. [ 9 ]
Ligand efficacy refers to the ability of the ligand to produce a biological response upon binding to the target receptor and the quantitative magnitude of this response. This response may be as an agonist , antagonist , or inverse agonist , depending on the physiological response produced. [ 10 ]
Selective ligands have a tendency to bind to very limited kinds of receptor, whereas non-selective ligands bind to several types of receptors. This plays an important role in pharmacology , where drugs that are non-selective tend to have more adverse effects , because they bind to several other receptors in addition to the one generating the desired effect.
For hydrophobic ligands (e.g. PIP2) in complex with a hydrophobic protein (e.g. lipid-gated ion channels ) determining the affinity is complicated by non-specific hydrophobic interactions. Non-specific hydrophobic interactions can be overcome when the affinity of the ligand is high. [ 11 ] For example, PIP2 binds with high affinity to PIP2 gated ion channels.
Bivalent ligands consist of two drug-like molecules (pharmacophores or ligands) connected by an inert linker. There are various kinds of bivalent ligands, and they are often classified based on what the pharmacophores target. Homobivalent ligands target two of the same receptor types. Heterobivalent ligands target two different receptor types. [ 12 ] Bitopic ligands target an orthosteric binding site and an allosteric binding site on the same receptor. [ 13 ] In scientific research, bivalent ligands have been used to study receptor dimers and to investigate their properties. This class of ligands was pioneered by Philip S. Portoghese and coworkers while studying the opioid receptor system. [ 14 ] [ 15 ] [ 16 ] Bivalent ligands were also reported early on by Michael Conn and coworkers for the gonadotropin-releasing hormone receptor . [ 17 ] [ 18 ] Since these early reports, there have been many bivalent ligands reported for various G protein-coupled receptor (GPCR) systems including cannabinoid, [ 19 ] serotonin, [ 20 ] [ 21 ] oxytocin, [ 22 ] and melanocortin receptor systems, [ 23 ] [ 24 ] [ 25 ] and for GPCR - LIC systems ( D2 and nACh receptors ). [ 12 ]
Bivalent ligands usually tend to be larger than their monovalent counterparts, and therefore, not 'drug-like' as in Lipinski's rule of five . Many believe this limits their applicability in clinical settings. [ 26 ] [ 27 ] In spite of these beliefs, there have been many ligands that have reported successful pre-clinical animal studies. [ 24 ] [ 25 ] [ 22 ] [ 28 ] [ 29 ] [ 30 ] Given that some bivalent ligands can have many advantages compared to their monovalent counterparts (such as tissue selectivity, increased binding affinity, and increased potency or efficacy), bivalents may offer some clinical advantages as well.
Ligands of proteins can also be characterized by the number of protein chains they bind. "Monodesmic" ligands (μόνος: single, δεσμός: binding) are ligands that bind a single protein chain, while "polydesmic" ligands (πολοί: many) [ 31 ] are frequent in protein complexes, and are ligands that bind more than one protein chain, typically in or near protein interfaces. Recent research shows that the type of ligands and binding site structure has profound consequences for the evolution, function, allostery and folding of protein complexes. [ 32 ] [ 33 ]
A privileged scaffold [ 34 ] is a molecular framework or chemical moiety that is statistically recurrent among known drugs or among a specific array of biologically active compounds. These privileged elements [ 35 ] can be used as a basis for designing new active biological compounds or compound libraries.
Main methods to study protein–ligand interactions are principal hydrodynamic and calorimetric techniques, and principal spectroscopic and structural methods such as
Other techniques include:
fluorescence intensity,
bimolecular fluorescence complementation,
FRET (fluorescence resonance energy transfer) / FRET quenching,
surface plasmon resonance, bio-layer interferometry ,
co-immunoprecipitation,
indirect ELISA,
equilibrium dialysis,
gel electrophoresis,
far western blot,
fluorescence polarization anisotropy,
electron paramagnetic resonance, microscale thermophoresis , switchSENSE .
The dramatically increased computing power of supercomputers and personal computers has made it possible to study protein–ligand interactions also by means of computational chemistry . For example, a worldwide grid of well over a million ordinary PCs was harnessed for cancer research in the project grid.org , which ended in April 2007. Grid.org has been succeeded by similar projects such as World Community Grid , Human Proteome Folding Project , Compute Against Cancer and Folding@Home . | https://en.wikipedia.org/wiki/Ligand_(biochemistry) |
A ligand binding assay ( LBA ) is an assay , or an analytic procedure, which relies on the binding of ligand molecules to receptors , antibodies or other macromolecules . [ 1 ] A detection method is used to determine the presence and amount of the ligand-receptor complexes formed, and this is usually determined electrochemically or through a fluorescence detection method. [ 2 ] This type of analytic test can be used to test for the presence of target molecules in a sample that are known to bind to the receptor. [ 3 ]
There are numerous types of ligand binding assays, both radioactive and non-radioactive. [ 4 ] [ 5 ] [ 6 ] Some newer types are called "mix-and-measure" assays because they require fewer steps to complete, for example foregoing the removal of unbound reagents. [ 5 ]
Ligand binding assays are used primarily in pharmacology for various purposes. Specifically, pharmacologists use these assays to create drugs that selectively bind to, or mimic, the body's endogenous receptors , hormones , and other neurotransmitters . Such techniques can also be used to create receptor antagonists in order to prevent further cascades. [ 7 ] Such advances provide researchers with the ability not only to quantify hormones and hormone receptors, but also to contribute important pharmacological information in drug development and treatment plans. [ 8 ]
Historically, ligand binding assay techniques were used extensively to quantify hormone or hormone receptor concentrations in plasma or in tissue. The ligand -binding assay methodology quantified the concentration of the hormone in the test material by comparing the effects of the test sample to the results of varying amounts of known protein ( ligand ).
The foundations on which ligand binding assays have been built are a result of Karl Landsteiner and his work, in 1945, on the immunization of animals through the production of antibodies for certain proteins. [ 9 ] Landsteiner's work demonstrated that immunoassay technology allowed researchers to analyze at the molecular level. The first successful ligand binding assay was reported in 1960 by Rosalyn Sussman Yalow and Solomon Berson . [ 9 ] They investigated the binding interaction for insulin and an insulin-specific antibody, in addition to developing the first radioimmunoassay (RIA) for insulin. These discoveries provided precious information regarding both the sensitivity and specificity of protein hormones found within blood-based fluids. [ 9 ] Yalow later received the Nobel Prize in Physiology or Medicine as a result of these advancements. Through the development of RIA technology, researchers have been able to move beyond the use of radioactivity, and instead, use liquid- and solid-phase, competitive, and immunoradiometric assays. [ 9 ] As a direct result of these monumental findings, researchers have continued the advancement of ligand binding assays in many facets in the fields of biology, chemistry, and the like. For instance, the Lois lab at Caltech is using engineered artificial ligands and receptors on neurons to trace information flow in the brain. [ 10 ] They are specifically using ligand-induced intramembrane proteolysis to unravel the wiring of the brain in Drosophila and other models. [ 11 ] When the artificial ligand on one neuron binds to the receptor on another, GFP expression is activated in the acceptor neuron, demonstrating the usefulness of ligand binding assays in neuroscience and biology. [ 12 ]
Ligand binding assays provide a measure of the interactions that occur between two molecules, such as protein-bindings, as well as the degree of affinity (weak, strong, or no connection) for which the reactants bind together. [ 13 ] Essential aspects of binding assays include, but are not limited to, the concentration level of reactants or products ( see radioactive section ), maintaining the equilibrium constant of reactants throughout the assay, and the reliability and validity of linked reactions. [ 13 ] Although binding assays are simple, they fail to provide information on whether or not the compound being tested affects the target's function. [ 14 ]
Radioligands are used to measure the ligand binding to receptors and should ideally have high affinity, low non-specific binding, high specific activity to detect low receptor densities, and receptor specificity. [ 7 ]
Levels of radioactivity for a radioligand (per mole) are referred to as the specific activity (SA), which is measured in Ci/mmol. [ 15 ] The actual concentration of a radioligand is determined by the specific stock mix for which the radioligand originated (from the manufactures.) [ 15 ] The following equation determines the actual concentration:
$$\mathrm{pM}=\frac{\mathrm{CPM}/\mathrm{SA}\ (\mathrm{CPM/fmol})}{\mathrm{Volume}\ (\mathrm{ml})}\times\frac{0.001\ (\mathrm{pmol/fmol})}{0.001\ (\mathrm{liter/ml})}=\frac{\mathrm{CPM}/\mathrm{SA}}{\mathrm{Vol}}$$ [ 15 ]
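A direct translation of this relation into code can be useful for checking assay calculations. The sketch below assumes the specific activity has already been converted to CPM/fmol (e.g., from Ci/mmol using the counter's efficiency); the function name and numbers are illustrative:

```python
def radioligand_conc_pM(total_cpm, specific_activity_cpm_per_fmol, volume_ml):
    """Radioligand concentration in pM from counted radioactivity.

    CPM / SA gives fmol of ligand, and fmol per ml is numerically
    equal to pmol per liter (pM), as in the relation above.
    """
    fmol = total_cpm / specific_activity_cpm_per_fmol
    return fmol / volume_ml

# Example: 20,000 CPM counted, SA = 80 CPM/fmol, assay volume 0.25 ml -> 1000 pM (1 nM)
print(radioligand_conc_pM(20000, 80, 0.25))
```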
Saturation analysis is used in various types of tissues, such as fractions of partially purified plasma from tissue homogenates , cells transfected with cloned receptors , and cells that are either in culture or isolated prior to analysis. [ 7 ] Saturation binding analysis can determine receptor affinity and density. The concentrations used must be determined empirically for each new ligand.
There are two common strategies that are adopted for this type of experiment: [ 7 ] Increasing the amount of radioligand added while maintaining both the constant specific activity and constant concentration of radioligand, or decreasing the specific activity of the radioligand due to the addition of an unlabeled ligand. [ 7 ]
A Scatchard plot (Rosenthal plot) can be used to show radioligand affinity. In this type of plot, the ratio of Bound/Free radioligand is plotted against the Bound radioligand. The slope of the line is equal to the negative reciprocal of the affinity constant (K). The intercept of the line with the X axis is an estimate of Bmax. [ 7 ] The Scatchard plot can be standardized against an appropriate reference so that there can be a direct comparison of receptor density in different studies and tissues. [ 7 ] This sample plot indicates that the radioligand binds with a single affinity. If the ligand were to have bound to multiple sites that have differing radioligand affinities, then the Scatchard plot would have shown a concave line instead. [ 7 ]
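A minimal sketch of how such saturation data might be analyzed is shown below, first fitting a one-site binding model and then applying the Scatchard transformation described above. The data points, SciPy usage and parameter names are purely illustrative, and in practice nonlinear fitting (as performed by the dedicated programs discussed next) is preferred over fitting the transformed data:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(free, bmax, kd):
    """One-site specific binding: B = Bmax * F / (Kd + F)."""
    return bmax * free / (kd + free)

# Hypothetical saturation data: free radioligand (nM) and specific binding (fmol/mg)
free = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
bound = np.array([9.0, 23.0, 50.0, 74.0, 90.0, 96.0])

(bmax, kd), _ = curve_fit(one_site, free, bound, p0=[100.0, 1.0])
print(f"Bmax ~ {bmax:.1f} fmol/mg, Kd ~ {kd:.2f} nM")

# Scatchard (Rosenthal) transformation: Bound/Free vs Bound gives a line with
# slope -1/Kd and x-intercept Bmax for a single class of sites.
ratio = bound / free
slope, intercept = np.polyfit(bound, ratio, 1)
print(f"Scatchard: Kd ~ {-1/slope:.2f} nM, Bmax ~ {-intercept/slope:.1f} fmol/mg")
```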
Nonlinear curve-fitting programs, such as Equilibrium Binding Data Analysis (EBDA) and LIGAND, are used to calculate estimates of binding parameters from saturation and competition-binding experiments. [ 16 ] EBDA performs the initial analysis, which converts measured radioactivity into molar concentrations and creates Hill slopes and Scatchard transformations from the data. The analysis made by EBDA can then be used by LIGAND to estimate a specified model for the binding. [ 16 ]
Competition binding is used to determine the presence of selectivity for a particular ligand for receptor sub-types, which allows the determination of the density and proportion of each sub-type in the tissue. [ 7 ] Competition curves are obtained by plotting specific binding, which is the percentage of the total binding, against the log concentration of the competing ligand. [ 7 ] A steep competition curve is usually indicative of binding to a single population of receptors, whereas a shallow curve, or a curve with clear inflection points, is indicative of multiple populations of binding sites. [ 16 ]
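Such competition curves are often parameterized with a variable Hill slope. The following sketch (with an assumed IC 50 of 100 nM and an arbitrary concentration range) generates a one-site curve; slopes well below unity in a fit would hint at multiple binding-site populations, as noted above:

```python
import numpy as np

def competition_curve(log_conc, log_ic50, hill_slope=1.0, top=100.0, bottom=0.0):
    """Percent specific binding of the radioligand vs. log competitor concentration.

    hill_slope is entered as a magnitude: ~1 for a single site,
    <1 for shallow curves suggesting multiple site populations.
    """
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_conc - log_ic50) * hill_slope))

log_conc = np.linspace(-10, -4, 7)               # competitor from 0.1 nM to 100 uM
print(competition_curve(log_conc, log_ic50=-7))  # assumed IC50 of 100 nM
```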
Despite the different techniques used for non-radioactive assays, they require that ligands exhibit binding characteristics similar to those of their radioactive equivalents. Thus, results in both non-radioactive and radioactive assays remain consistent. [ 5 ] One of the largest differences between radioactive and non-radioactive ligand assays concerns dangers to human health. Radioactive assays are harmful in that they produce radioactive waste, whereas non-radioactive ligand assays use different methods that avoid producing toxic waste. These methods include, but are not limited to, fluorescence polarization (FP), fluorescence resonance energy transfer (FRET), and surface plasmon resonance (SPR). In order to measure the process of ligand-receptor binding, most non-radioactive methods require that labeling avoid interfering with the molecular interactions. [ 5 ]
Fluorescence polarization (FP) is synonymous with fluorescence anisotropy . This method measures the change in the rotational speed of a fluorescent-labeled ligand once it is bound to the receptor. [ 5 ] Polarized light is used in order to excite the ligand, and the amount of light emitted is measured. [ 5 ] Depolarization of the emitted light depends on whether the ligand is bound (e.g., to the receptor). If the ligand is unbound, the emission is strongly depolarized (the free ligand spins rapidly, rotating the plane of the light). If the ligand is bound, the larger size of the complex results in slower rotation and therefore reduced depolarization. [ 5 ] An advantage of this method is that it requires only one labeling step. However, this method is less precise at low nanomolar concentrations. [ 5 ]
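The polarization value itself is computed from the emission intensities measured parallel and perpendicular to the excitation polarization, P = (I∥ − I⊥) / (I∥ + I⊥), usually reported in millipolarization (mP) units. A minimal sketch with made-up intensities:

```python
def polarization_mP(i_parallel, i_perpendicular):
    """Fluorescence polarization in millipolarization (mP) units.

    P = (I_par - I_perp) / (I_par + I_perp); a slowly rotating (bound)
    ligand retains more of the excitation polarization, giving a larger P.
    """
    return 1000.0 * (i_parallel - i_perpendicular) / (i_parallel + i_perpendicular)

print(polarization_mP(1200, 1000))  # free ligand: low polarization (~91 mP)
print(polarization_mP(1800, 1000))  # receptor-bound ligand: higher polarization (~286 mP)
```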
Kinetic exclusion assay (KinExA) measures free (unbound) ligand or free receptor present in a mixture of ligand, receptor, and ligand-receptor complex. The measurements allow quantitation of the active ligand concentration and the binding constants (equilibrium, on and off rates) of the interaction. [ 17 ]
Fluorescence Resonance Energy Transfer (FRET) utilizes energy transferred between the donor and the acceptor molecules that are in close proximity. [ 5 ] FRET uses a fluorescently labeled ligand, as with FP. [ 5 ] Energy transfer within FRET begins by exciting the donor. [ 5 ] The dipole–dipole interaction between the donor and the acceptor molecule transfers the energy from the donor to the acceptor molecule. [ 5 ] If the ligand is bound to the receptor-antibody complex, then the acceptor will emit light. [ 5 ] When using FRET, it is critical that there is a distance smaller than 10 nm between the acceptor and donor, in addition to an overlapping absorption spectrum between acceptor and donor, and that the antibody does not interfere or block the ligand binding site. [ 5 ]
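The requirement that donor and acceptor sit within a few nanometers of each other follows from the steep distance dependence of the transfer efficiency, E = 1 / (1 + (r/R 0 ) 6 ). The sketch below assumes a Förster radius R 0 of 5 nm purely for illustration:

```python
def fret_efficiency(distance_nm, forster_radius_nm=5.0):
    """FRET efficiency for a donor-acceptor pair separated by distance_nm.

    E = 1 / (1 + (r/R0)^6); efficiency falls off steeply beyond the Forster
    radius R0, so appreciable transfer only occurs when binding holds the
    donor and acceptor within a few nanometers of each other.
    """
    return 1.0 / (1.0 + (distance_nm / forster_radius_nm) ** 6)

for r in (2.0, 5.0, 8.0, 10.0):
    print(f"{r} nm -> E = {fret_efficiency(r):.3f}")
```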
Surface Plasmon Resonance (SPR) does not require labeling of the ligand. [ 5 ] Instead, it works by measuring the change in the angle at which the polarized light is reflected from a surface ( refractive index ). [ 5 ] The angle is related to the change in mass or layer of thickness, such as immobilization of a ligand changing the resonance angle, which increases the reflected light. [ 5 ] The device for which SPR is derived includes a sensor chip, a flow cell, a light source, a prism , and a fixed angle position detector. [ 5 ]
The liquid-phase ligand binding assay of immunoprecipitation (IP) is a method that is used to purify or enrich a specific protein, or a group of proteins, using an antibody from a complex mixture. The extract of disrupted tissue or cells is mixed with an antibody against the antigen of interest, which produces the antigen-antibody complex. [ 18 ] When the antigen concentration is low, precipitation of the antigen-antibody complex can take hours or even days, and it becomes hard to isolate the small amount of precipitate formed. [ 18 ]
The enzyme-linked immunosorbent assay ( ELISA ) or Western blotting are two different ways that the purified antigen (or multiple antigens) can be obtained and analyzed. This method involves purifying an antigen through the aid of an attached antibody on a solid (beaded) support, such as agarose resin. [ 19 ] The immobilized protein complex can be accomplished either in a single step or successively. [ 19 ]
IP can also be used in conjunction with biosynthetic radioisotope labeling. Using this technique combination, one can determine if a specific antigen is synthesized by a tissue or by a cell. [ 18 ]
Multiwell plates are multiple petri dishes incorporated into one container, with the number of individual wells ranging from 6 to over 1536. Multiwell Plate Assays are convenient for handling necessary dosages and replicates. [ 20 ] There are a wide range of plate types that have a standardized footprint, supporting equipment, and measurement systems. [ 20 ] Electrodes can be integrated into the bottom of the plates to capture information as a result of the binding assays. [ 9 ] The binding reagents become immobilized on the electrode surface and then can be analyzed. [ 9 ]
The multiwell plates are manufactured to allow researchers to create and manipulate different types of assays (e.g., bioassays , immunoassays ) within each multiwell plate. [ 20 ] Due to the variability in multiwell plate formatting, it is not uncommon for artifacts to arise. Artifacts are due to the different environments found within the different wells on the plate, especially in wells near the edges and at the center of the plate. Such effects are known as well effects, edge effects, and plate effects. This emphasizes the necessity of positioning assay designs correctly both within and between plates. [ 20 ]
The use of multiwell plates is common when measuring in vitro biological assay activity or measuring immunoreactivity through immunoassays. [ 20 ] Artifacts can be avoided by maintaining plate uniformity, applying the same dose of the specific medium in each well, and maintaining atmospheric pressure and temperature in order to reduce humidity. [ 20 ]
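One standard way of limiting well, edge, and plate effects, beyond keeping dosing and environmental conditions uniform, is to scatter replicates of each sample across the plate rather than grouping them in adjacent wells. The sketch below is only an illustration of that idea, with arbitrary sample names and replicate counts:

```python
import random

def randomized_plate_layout(samples, replicates=3, rows="ABCDEFGH", cols=range(1, 13), seed=0):
    """Assign each sample's replicates to random wells of a 96-well plate.

    Scattering replicates across the plate averages out position-dependent
    (well/edge/plate) effects instead of confounding them with one sample.
    """
    wells = [f"{r}{c}" for r in rows for c in cols]
    random.Random(seed).shuffle(wells)
    layout, i = {}, 0
    for sample in samples:
        layout[sample] = wells[i:i + replicates]
        i += replicates
    return layout

print(randomized_plate_layout(["standard", "drug_A", "drug_B"]))
```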
On-Bead Ligand Binding assays are isolation methods for basic proteins, DNA/RNA or other biomolecules located in undefined suspensions and can be used in multiple biochromatographic applications. Bioaffine ligands are covalently bound to silica beads with terminal negatively charged silanol groups or polystyrene beads and are used for isolation and purification of basic proteins or adsorption of biomolecules. After binding, separation is performed by centrifugation (density separation) or by magnetic attraction (for magnetic particles only). The beads can be washed to provide purity of the isolated molecule before dissolving it by ion exchange methods. Direct analysis methods based on enzymatic/fluorescent detection (e.g., HRP or a fluorescent dye) can be used for on-bead determination or quantification of bound biomolecules. [ 21 ] [ 22 ] [ 23 ]
Filter assays are a solid-phase ligand binding assay that use filters to measure the affinity between two molecules. In a filter binding assay , the filters are used to trap cell membranes by sucking the medium through them. [ 8 ] This rapid method occurs at a fast speed in which filtration and a recovery can be achieved for the found fraction. [ 24 ] Washing filters with a buffer removes residual unbound ligands and any other ligands present that are capable of being washed away from the binding sites. [ 8 ] The receptor-ligand complexes present while the filter is being washed will not dissociate significantly because they will be completely trapped by the filters. [ 8 ] Characteristics of the filter are important for each job being done. A thicker filter is useful to get a more complete recovery of small membrane pieces, but may require a longer wash time. [ 8 ] It is recommended to pretreat the filters to help trap negatively charged membrane pieces. [ 8 ] Soaking the filter in a solution that would give the filter a positive surface charge would attract the negatively charged membrane fragments. [ 8 ]
In this type of assay the binding of a ligand to cells is followed over time. The obtained signal is proportional to the number of ligands bound to a target structure, often a receptor, on the cell surface. Information about the ligand-target interaction is obtained from the signal change over time and kinetic parameters such as the association rate constant k a , the dissociation rate constant k d and the affinity K D can be calculated. [ 25 ] By measuring the interaction directly on cells, no isolation of the target protein is needed, which can otherwise be challenging, especially for some membrane proteins. [ 26 ] To ensure that the interaction with the intended target structure is measured appropriate biological controls, such as cells not expressing the target structure, are recommended.
Real-time measurements using label-free or label-based approaches have been used to analyze biomolecular interactions on fixated or on living cells. [ 27 ] [ 28 ]
The advantage of measuring ligand-receptor interactions in real time is that binding equilibrium does not need to be reached for accurate determination of the affinity. [ 29 ]
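This is because the affinity can be computed from the rate constants themselves, K D = k d / k a , and the association phase follows an approximately pseudo-first-order time course with observed rate k obs = k a ·[L] + k d . A small sketch with assumed rate constants and ligand concentration:

```python
import numpy as np

ka = 1.0e5    # association rate constant, 1/(M*s)   (assumed value)
kd = 1.0e-3   # dissociation rate constant, 1/s      (assumed value)
L = 10e-9     # ligand concentration, 10 nM

KD = kd / ka                 # equilibrium dissociation constant (10 nM here)
k_obs = ka * L + kd          # observed pseudo-first-order association rate
t = np.linspace(0, 3600, 7)  # seconds
B_eq = L / (L + KD)          # fractional occupancy at equilibrium (0.5 here)
binding = B_eq * (1 - np.exp(-k_obs * t))  # signal during the association phase

print(f"KD = {KD * 1e9:.1f} nM, k_obs = {k_obs:.2e} 1/s")
print(np.round(binding, 3))
```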
The effects of a drug are a result of their binding selectivity with macromolecule properties of an organism, or the affinity with which different ligands bind to a substrate. [ 30 ] More specifically, the specificity and selectivity of a ligand to its respective receptor provides researchers the opportunity to isolate and produce specific drug effects through the manipulation of ligand concentrations and receptor densities. [ 30 ] Hormones and neurotransmitters are essential endogenous regulatory ligands that affect physiological receptors within an organism. [ 30 ] Drugs that act upon these receptors are incredibly selective in order to produce required responses from signaling molecules. [ 30 ]
Specific binding refers to the binding of a ligand to a receptor, and it is possible that there is more than one specific binding site for one ligand. [ 31 ] Nonspecific binding refers to the binding of a ligand to something other than its designated receptor, such as various other receptors or different types of transporters in the cell membrane. [ 31 ] For example, various antagonists can bind to multiple types of receptors; muscarinic antagonists, for instance, can also bind to histamine receptors. [ 31 ] Such binding patterns are technically considered specific, as the destination of the ligand is specific to multiple receptors. However, researchers may not be focused on such behaviors compared to other binding factors. [ 31 ] Nevertheless, nonspecific binding behavior is very important information to acquire. These estimates are measured by examining how a ligand binds to a receptor while simultaneously reacting to a substitute agent (antagonist) that will prevent specific binding from occurring. [ 31 ]
Specific binding types to ligand and receptor interactions: [ 30 ]
Technologies for ligand binding assays continue to advance, increasing speed and keeping procedures cost-effective while maintaining or improving accuracy and sensitivity. [ 9 ] Some technological advances include new binding reagents as alternatives to antibodies, [ 9 ] alternative dye solutions and microplate systems, and the development of a method to skip the filtration step, which is required in many ligand binding assay processes. [ 16 ]
A prominent signaling molecule in cells is calcium (Ca 2+ ), which can be detected with a Fluo-4 acetoxymethyl (Fluo-4 AM) dye. The dye binds to free Ca 2+ ions, which slightly increases the fluorescence of Fluo-4 AM. [ 16 ] The drawback of the Fluo-4 dye formulation is that a washing step is required to remove extracellular dye, which may otherwise produce unwanted background signal. Washing puts additional stress on the cells and consumes time, which prevents a timely analysis. [ 16 ] An alternative dye solution and microplate system called FLIPR® (fluorometric imaging plate reader) has been developed, which uses a Calcium 3 assay reagent that does not require a washing step. As a result, the change in dye fluorescence can be viewed in real time with no delay using an excitatory laser and a charge-coupled device . [ 16 ]
Many ligand binding assays require a filtration step to separate bound and unbound ligands before screening. A method called scintillation proximity assay (SPA) eliminates this otherwise crucial step. It uses crystal lattice beads coated with ligand-coupling molecules and filled with cerium ions, which give off bursts of light when stimulated by an isotope; these bursts can easily be measured. Ligands are radiolabeled with either 3 H or 125 I and released into the assay. Since only the radioligands that bind directly to the beads initiate a signal, free ligands do not interfere during the screening process. [ 16 ]
By nature, assays must be carried out in a controlled environment in vitro, so this method does not provide information about receptor binding in vivo. The results obtained can only verify that a specific ligand fits a receptor, but assays provide no way of knowing the distribution of ligand-binding receptors in an organism.
In vivo ligand binding and receptor distribution can be studied using Positron Emission Tomography (PET), which works by incorporating a radionuclide into a ligand, which is then released into the body of a studied organism. The radiolabeled ligands are spatially located by a PET scanner to reveal areas in the organism with high concentrations of receptors. [ 16 ] | https://en.wikipedia.org/wiki/Ligand_binding_assay
Ligand bond number (LBN) represents the effective total number of ligands or ligand attachment points surrounding a metal center, labeled M. [ 1 ] [ 2 ] More simply, it represents the number of coordination sites occupied on the metal. Based on the covalent bond classification method (from which LBN is derived), the ligand bond number is given by LBN = L + X + Z,
where L represents the number of neutral ligands donating two electrons to the metal center (typically via lone electron pairs , pi-bonds and sigma bonds); most commonly encountered ligands fall under this category. X represents covalent-bonding ligands such as halide anions. Z represents electron-accepting or dative-bond-forming ligands, which are rarely encountered. The ligand bond number convention is most commonly encountered within inorganic chemistry and its related fields, organometallic chemistry and bioinorganic chemistry .
In comparison to classical coordination numbers, some major differences can be seen. For example, ( η 5 – cyclopentadienyl ) 2 Cr (ML 4 X 2 ) and (η 6 – benzene ) 2 Cr (ML 6 ) both have an LBN of 6, compared to classical coordination numbers of 10 and 12. [ 3 ] Well-known complexes such as ferrocene and uranocene also serve as examples where LBN and coordination number differ. Ferrocene has two η 5 cyclopentadienyl ligands while uranocene has two η 8 cyclooctatetraene ligands; however, by covalent bond classification the complexes are found to be ML 4 X 2 and ML 6 X 4 . [ 4 ] This corresponds to LBN values of 6 and 10 respectively, even though the total coordination numbers would be 10 and 16. The usefulness of LBN to describe bonding extends beyond just sandwich compounds . Co(CO) 3 (NO) is a stable 18-electron complex in part due to the bonding of the NO ligand in its linear form. The donation of the lone pair on the nitrogen makes this complex ML 4 X, containing 18 electrons. The traditional coordination number here would be 4, while the CBC more accurately describes the bonding with an LBN of 5. In simple cases, the LBN is often equal to the classical coordination number (e.g. Fe(CO) 5 ). [ 5 ]
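The arithmetic is simple enough to show as a minimal sketch; the L/X/Z assignments below are taken from the examples in this article, while the function name and the dictionary representation are illustrative choices only.

```python
def ligand_bond_number(L, X, Z=0):
    """Ligand bond number from the covalent bond classification counts:
    LBN = L + X + Z (L: neutral 2-electron donors, X: covalent ligands,
    Z: electron-accepting ligands)."""
    return L + X + Z

# (L, X, Z, classical coordination number) for the examples discussed above
complexes = {
    "ferrocene (ML4X2)":        (4, 2, 0, 10),
    "(benzene)2Cr (ML6)":       (6, 0, 0, 12),
    "uranocene (ML6X4)":        (6, 4, 0, 16),
    "Co(CO)3(NO) (ML4X)":       (4, 1, 0, 4),
}
for name, (L, X, Z, cn) in complexes.items():
    print(f"{name}: LBN = {ligand_bond_number(L, X, Z)}, coordination number = {cn}")
```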
Figure caption (left panel): (η 5 –cyclopentadienyl) 2 Cr, LBN of 6, coordination number of 10.
The LBN for transition metals trends downward from left to right across the periodic table. This trend is highlighted in the LBN plots of Groups 3 through 10. Groups exhibit trends, but the LBN for individual complexes can vary. | https://en.wikipedia.org/wiki/Ligand_bond_number |
In coordination chemistry , the ligand cone angle (θ) is a measure of the steric bulk of a ligand in a transition metal coordination complex . It is defined as the solid angle formed with the metal at the vertex of a cone and the outermost edge of the van der Waals spheres of the ligand atoms at the perimeter of the base of the cone. Tertiary phosphine ligands are commonly classified using this parameter, but the method can be applied to any ligand. The term cone angle was first introduced by Chadwick A. Tolman , a research chemist at DuPont . Tolman originally developed the method for phosphine ligands in nickel complexes, determining them from measurements of accurate physical models. [ 1 ] [ 2 ] [ 3 ]
The concept of cone angle is most easily visualized with symmetrical ligands, e.g. PR 3 . But the approach has been refined to include less symmetrical ligands of the type PRR′R″ as well as diphosphines. In such asymmetric cases, the substituent angles' half angles, θ i / 2 , are averaged and then doubled to find the total cone angle, θ . In the case of diphosphines, the θ i / 2 of the backbone is approximated as half the chelate bite angle , assuming a bite angle of 74°, 85°, and 90° for diphosphines with methylene, ethylene, and propylene backbones, respectively. The Manz cone angle is often easier to compute than the Tolman cone angle: [ 4 ] [ clarification needed ]
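A short sketch of the averaging rule just described, i.e. the cone angle is twice the mean of the substituent half-angles θ i /2; the half-angle values below are placeholders, not data from the article.

```python
def cone_angle_from_half_angles(half_angles_deg):
    """Tolman-style cone angle for an asymmetric ligand PRR'R'':
    average the substituent half-angles (theta_i / 2), then double."""
    return 2.0 * sum(half_angles_deg) / len(half_angles_deg)

# Hypothetical half-angles (degrees) for three different substituents on P
print(cone_angle_from_half_angles([65.0, 70.5, 79.0]))  # -> 143.0 degrees
```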
The Tolman cone angle method assumes empirical bond data and defines the perimeter as the maximum possible circumscription of an idealized free-spinning substituent. The metal-ligand bond length in the Tolman model was determined empirically from crystal structures of tetrahedral nickel complexes. In contrast, the solid-angle concept derives both bond length and the perimeter from empirical solid state crystal structures. [ 5 ] [ 6 ] There are advantages to each system.
If the geometry of a ligand is known, either through crystallography or computations, an exact cone angle ( θ ) can be calculated. [ 7 ] [ 8 ] [ 9 ] No assumptions about the geometry are made, unlike the Tolman method.
The concept of cone angle is of practical importance in homogeneous catalysis because the size of the ligand affects the reactivity of the attached metal center. In an example, [ 10 ] the selectivity of hydroformylation catalysts is strongly influenced by the size of the coligands. Despite being monovalent , some phosphines are large enough to occupy more than half of the coordination sphere of a metal center. Recent research has found that other descriptors—such as percent buried volume—are more accurate than cone angle at capturing the relevant steric effects of the phosphine ligand(s) when bound to the metal center. [ 11 ] | https://en.wikipedia.org/wiki/Ligand_cone_angle |
There are two types of pathway for substitution of ligands in a complex. In the ligand dependent pathway , the chemical properties of the ligand affect the rate of substitution. Alternatively, in the ligand independent pathway , the ligand does not affect the rate.
This distinction is important in inorganic chemistry and in the study of complex ions .
This chemistry -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Ligand_dependent_pathway |
Ligand efficiency is a measurement of the binding energy per atom of a ligand to its binding partner, such as a receptor or enzyme. [ 1 ]
Ligand efficiency is used in drug discovery research programs to assist in narrowing focus to lead compounds with optimal combinations of physicochemical properties and pharmacological properties. [ 2 ]
Mathematically, ligand efficiency (LE) can be defined as the ratio of Gibbs free energy (ΔG) to the number of non-hydrogen atoms of the compound, LE = ΔG / N, where Δ G = − RT ln K i and N is the number of non-hydrogen atoms. [ 3 ] It can be transformed to the equation: [ 4 ]
Some suggest [ 2 ] that better metrics for ligand efficiency are
percentage/potency efficiency index (PEI), binding efficiency index (BEI) and surface-binding efficiency index (SEI), because they are easier to calculate and take into account the differences between elements in different rows of the periodic table. It is important to note that PEI is a relative measure for comparing compounds tested under the same conditions (e.g. a single-point assay) and is not comparable at different inhibitor concentrations. Also for BEI and SEI, similar measurements must be used (e.g. always using pK i ).
where pK i , pK d and pIC 50 are defined as −log(K i ), −log(K d ) and −log(IC 50 ), respectively; K i and IC 50 are expressed in mol/L .
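As a numerical illustration of the definitions above (ΔG = −RT ln K i and LE = ΔG/N), the sketch below computes LE for a hypothetical inhibitor; the temperature, units and example values are assumptions, not data from the article.

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.15     # assumed temperature in K

def ligand_efficiency(Ki_molar, n_heavy_atoms):
    """LE = dG / N with dG = -RT ln(Ki), in kcal/mol per heavy atom."""
    dG = -R * T * math.log(Ki_molar)
    return dG / n_heavy_atoms

# Hypothetical compound: Ki = 10 nM, 25 non-hydrogen atoms
print(f"LE = {ligand_efficiency(10e-9, 25):.2f} kcal/mol per heavy atom")
```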
The authors suggest plotting compounds SEI and BEI on a plane and optimizing compounds towards the diagonal and so optimizing both SEI and BEI which incorporate potency, molecular weight and PSA. [ 2 ]
There are other metrics which can be useful during hit to lead optimization : group efficiency (GE), lipophilic efficiency/lipophilic ligand efficiency (LipE/LLE), ligand lipophilicity index (LLE AT ), ligand efficiency dependent lipophilicity (LELP), fit quality scaled ligand efficiency (LE scale ), and size independent ligand efficiency (SILE). [ 4 ]
Group efficiency (GE) is a metric used to estimate the binding efficiency of groups added to a ligand. [ 5 ] Unlike ligand efficiency, which evaluates the efficiency of the entire molecule, group efficiency measures the relative change of the Gibbs free energy (ΔΔG) caused by addition or modification of groups, normalized by the change in the number of heavy atoms in those groups (ΔN), using the equation GE = ΔΔG / ΔN. | https://en.wikipedia.org/wiki/Ligand_efficiency
Ligand field theory ( LFT ) describes the bonding, orbital arrangement, and other characteristics of coordination complexes . [ 1 ] [ 2 ] [ 3 ] [ 4 ] It represents an application of molecular orbital theory to transition metal complexes. A transition metal ion has nine valence atomic orbitals - consisting of five n d, one ( n +1)s, and three ( n +1)p orbitals. These orbitals have the appropriate energy to form bonding interactions with ligands . The LFT analysis is highly dependent on the geometry of the complex, but most explanations begin by describing octahedral complexes, where six ligands coordinate with the metal. Other complexes can be described with reference to crystal field theory . [ 5 ] Inverted ligand field theory (ILFT) elaborates on LFT by breaking assumptions made about relative metal and ligand orbital energies.
Ligand field theory resulted from combining the principles laid out in molecular orbital theory and crystal field theory , which describe the loss of degeneracy of metal d orbitals in transition metal complexes. John Stanley Griffith and Leslie Orgel [ 6 ] championed ligand field theory as a more accurate description of such complexes, although the theory originated in the 1930s with the work on magnetism by John Hasbrouck Van Vleck . Griffith and Orgel used the electrostatic principles established in crystal field theory to describe transition metal ions in solution and used molecular orbital theory to explain the differences in metal-ligand interactions, thereby explaining such observations as crystal field stabilization and visible spectra of transition metal complexes. In their paper, they proposed that the chief cause of color differences in transition metal complexes in solution is the incomplete d orbital subshells. [ 6 ] That is, the unoccupied d orbitals of transition metals participate in bonding, which influences the colors they absorb in solution. In ligand field theory, the various d orbitals are affected differently when surrounded by a field of neighboring ligands and are raised or lowered in energy based on the strength of their interaction with the ligands. [ 6 ]
In an octahedral complex, the molecular orbitals created by coordination can be seen as resulting from the donation of two electrons by each of six σ-donor ligands to the d -orbitals on the metal . In octahedral complexes, ligands approach along the x -, y - and z -axes, so their σ-symmetry orbitals form bonding and anti-bonding combinations with the d z 2 and d x 2 − y 2 orbitals. The d xy , d xz and d yz orbitals remain non-bonding orbitals. Some weak bonding (and anti-bonding) interactions with the s and p orbitals of the metal also occur, to make a total of 6 bonding (and 6 anti-bonding) molecular orbitals [ 7 ]
In molecular symmetry terms, the six lone-pair orbitals from the ligands (one from each ligand) form six symmetry-adapted linear combinations (SALCs) of orbitals, also sometimes called ligand group orbitals (LGOs). The irreducible representations that these span are a 1g , t 1u and e g . The metal also has six valence orbitals that span these irreducible representations - the s orbital is labeled a 1g , a set of three p-orbitals is labeled t 1u , and the d z 2 and d x 2 − y 2 orbitals are labeled e g . The six σ-bonding molecular orbitals result from the combinations of ligand SALCs with metal orbitals of the same symmetry. [ 8 ]
π bonding in octahedral complexes occurs in two ways: via any ligand p -orbitals that are not being used in σ bonding, and via any π or π * molecular orbitals present on the ligand.
In the usual analysis, the p -orbitals of the metal are used for σ bonding (and have the wrong symmetry to overlap with the ligand p or π or π * orbitals anyway), so the π interactions take place with the appropriate metal d -orbitals, i.e. d xy , d xz and d yz . These are the orbitals that are non-bonding when only σ bonding takes place.
One important π bonding in coordination complexes is metal-to-ligand π bonding, also called π backbonding . It occurs when the LUMOs (lowest unoccupied molecular orbitals) of the ligand are anti-bonding π * orbitals. These orbitals are close in energy to the d xy , d xz and d yz orbitals, with which they combine to form bonding orbitals (i.e. orbitals of lower energy than the aforementioned set of d -orbitals). The corresponding anti-bonding orbitals are higher in energy than the anti-bonding orbitals from σ bonding so, after the new π bonding orbitals are filled with electrons from the metal d -orbitals, Δ O has increased and the bond between the ligand and the metal strengthens. The ligands end up with electrons in their π * molecular orbital, so the corresponding π bond within the ligand weakens.
The other form of coordination π bonding is ligand-to-metal bonding. This situation arises when the π-symmetry p or π orbitals on the ligands are filled. They combine with the d xy , d xz and d yz orbitals on the metal and donate electrons to the resulting π-symmetry bonding orbital between them and the metal. The metal-ligand bond is somewhat strengthened by this interaction, but the complementary anti-bonding molecular orbital from ligand-to-metal bonding is not higher in energy than the anti-bonding molecular orbital from the σ bonding. It is filled with electrons from the metal d -orbitals, however, becoming the HOMO (highest occupied molecular orbital) of the complex. For that reason, Δ O decreases when ligand-to-metal bonding occurs.
The greater stabilization that results from metal-to-ligand bonding is caused by the donation of negative charge away from the metal ion, towards the ligands. This allows the metal to accept the σ bonds more easily. The combination of ligand-to-metal σ-bonding and metal-to-ligand π-bonding is a synergic effect, as each enhances the other.
As each of the six ligands has two orbitals of π-symmetry, there are twelve in total. The symmetry adapted linear combinations of these fall into four triply degenerate irreducible representations, one of which is of t 2g symmetry. The d xy , d xz and d yz orbitals on the metal also have this symmetry, and so the π-bonds formed between a central metal and six ligands also have it (as these π-bonds are just formed by the overlap of two sets of orbitals with t 2g symmetry.)
The six bonding molecular orbitals that are formed are "filled" with the electrons from the ligands, and electrons from the d -orbitals of the metal ion occupy the non-bonding and, in some cases, anti-bonding MOs. The energy difference between the latter two types of MOs is called Δ O (O stands for octahedral) and is determined by the nature of the π-interaction between the ligand orbitals with the d -orbitals on the central atom. As described above, π-donor ligands lead to a small Δ O and are called weak- or low-field ligands, whereas π-acceptor ligands lead to a large value of Δ O and are called strong- or high-field ligands. Ligands that are neither π-donor nor π-acceptor give a value of Δ O somewhere in-between.
The size of Δ O determines the electronic structure of the d 4 - d 7 ions. In complexes of metals with these d -electron configurations, the non-bonding and anti-bonding molecular orbitals can be filled in two ways: one in which as many electrons as possible are put in the non-bonding orbitals before filling the anti-bonding orbitals, and one in which as many unpaired electrons as possible are put in. The former case is called low-spin, while the latter is called high-spin. A small Δ O can be overcome by the energetic gain from not pairing the electrons, leading to high-spin. When Δ O is large, however, the spin-pairing energy becomes negligible by comparison and a low-spin state arises.
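As a toy illustration of the Δ O versus pairing-energy argument above, the sketch below fills the t 2g /e g levels of an octahedral d 4 –d 7 ion both ways and picks the configuration with the lower electronic energy; the numerical values of Δ O and the pairing energy P are placeholders, and the model ignores every other contribution.

```python
def spin_state(n_d_electrons, delta_o, pairing_energy):
    """Return 'high-spin' or 'low-spin' for an octahedral d4-d7 ion by
    comparing the energies of the two limiting configurations.
    Toy energy model: each electron in e_g costs delta_o relative to t_2g,
    and each doubly occupied orbital costs pairing_energy."""
    def energy(t2g, eg):
        pairs = max(0, t2g - 3) + max(0, eg - 2)
        return eg * delta_o + pairs * pairing_energy

    # high-spin: spread electrons (t2g up to 3, then eg up to 2, then pair in t2g)
    hs_t2g = min(3, n_d_electrons) + max(0, n_d_electrons - 5)
    hs_eg = n_d_electrons - hs_t2g
    # low-spin: fill t2g completely (up to 6 electrons) before eg
    ls_t2g = min(6, n_d_electrons)
    ls_eg = n_d_electrons - ls_t2g

    return "high-spin" if energy(hs_t2g, hs_eg) < energy(ls_t2g, ls_eg) else "low-spin"

# Placeholder energies (arbitrary units): weak field vs strong field for a d6 ion
print(spin_state(6, delta_o=1.0, pairing_energy=2.0))  # -> high-spin
print(spin_state(6, delta_o=3.0, pairing_energy=2.0))  # -> low-spin
```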
The spectrochemical series is an empirically-derived list of ligands ordered by the size of the splitting Δ that they produce. It can be seen that the low-field ligands are all π-donors (such as I − ), the high field ligands are π-acceptors (such as CN − and CO), and ligands such as H 2 O and NH 3 , which are neither, are in the middle.
I − < Br − < S 2− < SCN − < Cl − < NO 3 − < N 3 − < F − < OH − < C 2 O 4 2− < H 2 O < NCS − < CH 3 CN < py ( pyridine ) < NH 3 < en ( ethylenediamine ) < bipy ( 2,2'-bipyridine ) < phen (1,10- phenanthroline ) < NO 2 − < PPh 3 < CN − < CO | https://en.wikipedia.org/wiki/Ligand_field_theory |
In coordination chemistry , ligand isomerism is a type of structural isomerism in coordination complexes which arises from the presence of ligands that can adopt different isomeric forms. For example, complexes containing 1,2-diaminopropane and 1,3-diaminopropane, which are themselves isomers, are ligand isomers of one another. [ 1 ] [ 2 ]
This physical chemistry -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Ligand_isomerism |
The ligase chain reaction ( LCR ) is a method of DNA amplification that differs from the polymerase chain reaction (PCR) in that it involves a thermostable ligase to join two probes or other molecules together, which can then be amplified by standard PCR cycling. [ 1 ] Each cycle results in a doubling of the target nucleic acid molecule. A key advantage of LCR is greater specificity as compared to PCR. [ 2 ] Thus, LCR requires two completely different enzymes to operate properly: ligase , to join probe molecules together, and a thermostable polymerase (e.g., Taq polymerase ) to amplify those molecules involved in successful ligation. The probes involved in the ligation are designed such that the 5′ end of one probe is directly adjacent to the 3′ end of the other probe, thereby providing the requisite 3′-OH and 5′-PO 4 group substrates for the ligase.
LCR was originally developed to detect point mutations ; a single base mismatch at the junction of the two probe molecules is all that is needed to prevent ligation. By performing the ligation right at the T m of the oligonucleotide probe, only perfectly matched primer:template duplexes will be tolerated. LCR can also be used to amplify template molecules that have been successfully ligated for the purpose of assessing ligation efficiency and producing a large amount of product with even greater specificity than PCR. Thus, LCR is not necessarily an alternative, but rather a complement, to PCR.
It has been widely used for the detection of single base mutations, as in certain genetic diseases . [ 1 ]
LCR and PCR may be used to detect gonorrhea and chlamydia , and may be performed on first-catch urine samples, providing easy collection and a large yield of organisms. Endogenous inhibitors limit the sensitivity , but if this effect could be eliminated, LCR and PCR would have clinical advantages over any other methods of diagnosing gonorrhea and chlamydia. [ 3 ] Among these methods, LCR is emerging as the most sensitive, with high specificity for known single-nucleotide polymorphism (SNP) detection. LCR was first developed in 1989 by multiple groups, [ 4 ] who used thermostable DNA ligase to discriminate between normal and mutant DNA and to amplify the allele-specific product. A mismatch at the 3′ end of the discriminating primer prevents the DNA ligase from joining the two fragments together. By using both strands of genomic DNA as targets for oligonucleotide hybridization, the products generated from two sets of adjacent oligonucleotide primers, complementary to each target strand in one round of ligation, can become the targets for the next round. The amount of product can thus be increased exponentially by repeated thermal cycling. [ 1 ] | https://en.wikipedia.org/wiki/Ligase_chain_reaction
Ligation-independent cloning (LIC) is a form of molecular cloning that can be performed without the use of restriction endonucleases or DNA ligase . The technique was developed in the early 1990s as an alternative to restriction enzyme/ligase cloning. [ 1 ] This allows genes to be cloned without requiring a restriction site for cloning, which may be absent from the gene insert. [ 2 ] [ 3 ] [ 4 ] [ clarification needed ] LIC uses long complementary overhangs on the vector and the DNA insert to create a stable association between them. [ 5 ] | https://en.wikipedia.org/wiki/Ligation-independent_cloning
Ligation is the joining of two nucleotides, or two nucleic acid fragments, into a single polymeric chain through the action of an enzyme known as a ligase . The reaction involves the formation of a phosphodiester bond between the 3'-hydroxyl terminus of one nucleotide and the 5'-phosphoryl terminus of another nucleotide, which results in the two nucleotides being linked consecutively on a single strand. Ligation works in fundamentally the same way for both DNA and RNA. A cofactor is generally involved in the reaction, usually ATP or NAD + . Eukaryotic ligases belong to the ATP type, while the NAD+ type are found in bacteria (e.g. E. coli ). [ 1 ]
Ligation occurs naturally as part of numerous cellular processes, including DNA replication, transcription, splicing, and recombination, and is also an essential laboratory procedure in molecular cloning , whereby DNA fragments are joined to create recombinant DNA molecules (such as when a foreign DNA fragment is inserted into a plasmid ). The discovery of DNA ligase dates back to 1967 and was an important event in the field of molecular biology . [ 1 ] Ligation in the laboratory is normally performed using T4 DNA ligase . It is broadly used in vitro due to its capability of joining sticky-ended fragments as well as blunt-ended fragments. [ 2 ] However, procedures for ligation without the use of standard DNA ligase are also popular. Human DNA ligase abnormalities have been linked to pathological disorders characterized by immunodeficiency, radiation sensitivity, and developmental problems. [ 3 ]
The mechanism of the ligation reaction was first elucidated in the laboratory of I. Robert Lehman. [ 4 ] [ 5 ] Two fragments of DNA may be joined by DNA ligase which catalyzes the formation of a phosphodiester bond between the 3'-hydroxyl group (-OH) at one end of a strand of DNA and the 5'-phosphate group (-PO4) of another. In animals and bacteriophages , ATP is used as the energy source for the ligation, while in bacteria, NAD + is used. [ 6 ]
The DNA ligase first reacts with ATP or NAD + , forming a ligase-AMP intermediate with the AMP linked to the ε-amino group of lysine in the active site of the ligase via a phosphoramide bond. This adenylyl group is then transferred to the phosphate group at the 5' end of a DNA chain, forming a DNA-adenylate complex. Finally, a phosphodiester bond between the two DNA ends is formed via the nucleophilic attack of the 3'-hydroxyl at the end of a DNA strand on the activated 5′-phosphoryl group of another. [ 4 ]
A nick in the DNA (i.e. a break in one strand of a double-stranded DNA) can be repaired very efficiently by the ligase. A complication arises, however, when ligating two separate DNA ends, as the two ends need to come together before the ligation reaction can proceed. In laboratory ligations of DNA with sticky or cohesive ends , the protruding strands of DNA may already be annealed together, so the process is relatively efficient, being equivalent to repairing two nicks in the DNA. In the ligation of blunt ends , which lack protruding strands that could anneal together, the process depends on random collisions for the ends to align, and it is consequently much less efficient. [ 7 ] The DNA ligase from E. coli cannot ligate blunt-ended DNA except under conditions of molecular crowding, and it is therefore not normally used for ligation in the laboratory. Instead the DNA ligase from phage T4 is used, as it can ligate blunt-ended DNA as well as single-stranded DNA. [ 8 ] [ 6 ]
In the laboratory, the factors that affect any enzyme-mediated chemical reaction also affect a ligation reaction; these include the concentration of enzyme and reactants, the reaction temperature, and the length of incubation. Ligation is complicated by the fact that the reaction can involve both inter- and intra-molecular events: the desired products of many ligation reactions (e.g. ligating a DNA fragment into a vector) must first form inter-molecularly, i.e. between two different DNA molecules, followed by an intra-molecular reaction to seal and circularize the molecule. For efficient ligation, an additional annealing step is also necessary.
The three steps to form a new phosphodiester bond during ligation are: enzyme adenylylation, adenylyl transfer to DNA, and nick sealing. Mg 2+ is a cofactor for catalysis, so ligation efficiency is high at high Mg 2+ concentrations. If the concentration of Mg 2+ is limiting, nick sealing becomes the rate-limiting step of the process, and the adenylylated DNA intermediate remains in solution. Premature adenylylation of the enzyme prevents it from rebinding the adenylylated DNA intermediate (described as an Achilles' heel of LIG1), and such unsealed intermediates represent a risk if they are not repaired. [ 9 ]
The concentration of DNA can affect the rate of ligation, and whether the ligation is an inter-molecular or intra-molecular reaction. Ligation involves joining the ends of a DNA molecule to other ends; however, each DNA fragment has two ends, and if the ends are compatible, a DNA molecule can circularize by joining its own ends. At high DNA concentration, there is a greater chance of one end of a DNA molecule meeting the end of another DNA molecule, favoring intermolecular ligation. At a lower DNA concentration, the chance that one end of a DNA molecule will meet the other end of the same molecule increases, so the intramolecular reaction that circularizes the DNA is more likely. The transformation efficiency of linear DNA is also much lower than that of circular DNA, and for the DNA to circularize, the DNA concentration should not be too high. As a general rule, the total DNA concentration should be less than 10 μg/ml. [ 10 ]
The relative concentration of the DNA fragments, their length, as well as buffer conditions are also factors that can affect whether intermolecular or intramolecular reactions are favored.
The concentration of DNA can be artificially increased by adding condensing agents such as cobalt hexamine and biogenic polyamines such as spermidine , or by using crowding agents such as polyethylene glycol (PEG), which also increase the effective concentration of enzymes. [ 11 ] [ 12 ] Note however that additives such as cobalt hexamine can produce exclusively intermolecular reactions, [ 11 ] resulting in linear concatemers rather than the circular DNA more suitable for transformation of plasmid DNA, and such additives are therefore undesirable for plasmid ligation. If it is necessary to use additives in plasmid ligation, the use of PEG is preferable as it can promote intramolecular as well as intermolecular ligation. [ 13 ]
As is usual for an enzyme, the higher the ligase concentration, the faster is the rate of ligation. Blunt-end ligation is much less efficient than sticky end ligation, so a higher concentration of ligase is used in blunt-end ligations. High DNA ligase concentration may be used in conjunction with PEG for a faster ligation, and they are the components often found in commercial kits designed for rapid ligation. [ 14 ] [ 15 ]
Two issues are involved when considering the temperature of a ligation reaction: first, the optimum temperature for DNA ligase activity, which is 37 °C, and second, the melting temperature (T m ) of the DNA ends to be ligated. The melting temperature depends on the length and base composition of the DNA overhang: the greater the number of G and C bases, the higher the T m , since three hydrogen bonds form between a G-C base pair compared to two for an A-T base pair, with some contribution from the stacking of the bases between fragments. For the ligation reaction to proceed efficiently, the ends should be stably annealed, and in ligation experiments the T m of the DNA ends is generally much lower than 37 °C. The optimal temperature for ligating cohesive ends is therefore a compromise between the best temperature for DNA ligase activity and the T m at which the ends can associate. [ 16 ] However, different restriction enzymes generate different ends, and the base composition of the ends produced by these enzymes may also differ; the melting temperature, and therefore the optimal temperature, can vary widely depending on the restriction enzymes used, and the optimum temperature for ligation may be between 4 and 15 °C depending on the ends. [ 17 ] [ 18 ] Ligations also often involve ends generated by different restriction enzymes in the same reaction mixture, so it may not be practical to select an optimal temperature for a particular ligation reaction, and most protocols simply choose 12-16 °C, room temperature, or 4 °C. When conducting a ligation at 4 °C, the ligation time should be increased, for example by leaving the ligation mixture overnight or longer in the fridge.
The ionic strength of the buffer used can affect the ligation. The kinds of cations present can also influence the ligation reaction; for example, an excess of Na + can cause the DNA to become more rigid and increase the likelihood of intermolecular ligation. At high concentrations of monovalent cation (>200 mM) ligation can also be almost completely inhibited. [ 19 ] The standard buffer used for ligation is designed to minimize ionic effects. [ 20 ]
Restriction enzymes can generate a wide variety of ends in the DNA they digest, but in cloning experiments most commonly used restriction enzymes generate a 4-base single-stranded overhang called the sticky or cohesive end (exceptions include Nde I, which generates a 2-base overhang, and those that generate blunt ends). These sticky ends can anneal to other compatible ends and become ligated in a sticky-end (or cohesive-end) ligation. Eco RI for example generates an AATT end, and since A and T have a lower melting temperature than C and G, its melting temperature T m is low, at around 6 °C. [ 21 ] For most restriction enzymes, the overhangs generated have a T m that is around 15 °C. [ 20 ] For practical purposes, sticky end ligations are performed at 12-16 °C, or at room temperature, or alternatively at 4 °C for a longer period.
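To make the base-composition argument concrete, the toy sketch below counts the hydrogen bonds a cohesive overhang can form (three per G or C, two per A or T, as stated above) and uses that count to compare the relative stability of different overhangs; the GC-rich comparison sequence is illustrative, and the bond count is only a crude proxy for T m .

```python
def overhang_hydrogen_bonds(overhang):
    """Count Watson-Crick hydrogen bonds formed when a single-stranded
    overhang anneals to its complement: 3 per G/C, 2 per A/T."""
    bonds = {"G": 3, "C": 3, "A": 2, "T": 2}
    return sum(bonds[base] for base in overhang.upper())

# EcoRI leaves an AATT overhang; a hypothetical GC-rich overhang for comparison
for seq in ("AATT", "GGCC"):
    print(seq, overhang_hydrogen_bonds(seq), "hydrogen bonds")
# AATT -> 8, GGCC -> 12: GC-rich overhangs anneal more stably (higher Tm)
```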
For the insertion of a DNA fragment into a plasmid vector, it is preferable to use two different restriction enzymes to digest the DNA so that different ends are generated. The two different ends can prevent the religation of the vector without any insert, and it also allows the fragment to be inserted in a directional manner.
When it is not possible to use two different sites, the vector DNA may need to be dephosphorylated to avoid a high background of recircularized vector DNA with no insert. Without a phosphate group at its ends the vector cannot ligate to itself, but it can be ligated to an insert with a phosphate group. Dephosphorylation is commonly done using calf-intestinal alkaline phosphatase (CIAP), which removes the phosphate group from the 5′ end of digested DNA; note, however, that CIAP is not easy to inactivate and can interfere with ligation unless an additional step is included to remove it, which can result in failure of the ligation. CIAP should not be used in excessive amounts and should only be used when necessary. Shrimp alkaline phosphatase (SAP) and Antarctic phosphatase (AP) are suitable alternatives as they can be easily inactivated.
Blunt end ligation does not involve base-pairing of the protruding ends, so any blunt end may be ligated to another blunt end. Blunt ends may be generated by restriction enzymes such as Sma I and Eco RV . A major advantage of blunt-end cloning is that the desired insert does not require any restriction sites in its sequence as blunt-ends are usually generated in a PCR , and the PCR generated blunt-ended DNA fragment may then be ligated into a blunt-ended vector generated from restriction digest.
Blunt-end ligation, however, is much less efficient than sticky end ligation; typically the reaction is about 100 times slower. Since blunt ends have no protruding strands, the ligation reaction depends on random collisions between the blunt ends and is consequently much less efficient. To compensate for the lower efficiency, the concentration of ligase used is higher than for sticky end ligation (10x or more). The concentration of DNA used in blunt-end ligation is also higher, to increase the likelihood of collisions between ends, and longer incubation times may also be used for blunt-end ligations.
If both ends needed to be ligated into a vector are blunt-ended, then the vector needs to be dephosphorylated to minimize self-ligation. This may be done using CIAP, but caution in its use is necessary as noted previously. Since the vector has been dephosphorylated, and ligation requires the presence of a 5'-phosphate, the insert must be phosphorylated. Blunt-ended PCR product normally lacks a 5'-phosphate, therefore it needs to be phosphorylated by treatment with T4 polynucleotide kinase . [ 22 ]
Blunt-end ligation is also reversibly inhibited by high concentration of ATP. [ 23 ]
PCR usually generates blunt-ended PCR products, but note that PCR using Taq polymerase can add an extra adenine (A) to the 3′ end of the PCR product. This property may be exploited in TA cloning , where the A ends of the PCR product can anneal to the T end of a vector. TA ligation is therefore a form of sticky end ligation. Blunt-ended vectors may be converted into vectors for TA ligation with dideoxythymidine triphosphate (ddTTP) using terminal transferase.
For the cloning of an insert into a circular plasmid:
Sometimes ligations fail to produce the desired ligated products; some of the possible reasons may be:
A number of commercially available DNA cloning kits use other methods of ligation that do not require the use of the usual DNA ligases. These methods allow cloning to be done much more rapidly, as well as allowing for simpler transfer of cloned DNA insert to different vectors . These methods however require the use of specially designed vectors and components, and may lack flexibility.
Topoisomerase can be used instead of ligase for ligation, and the cloning may be done more rapidly without the need for restriction digest of the vector or insert. In this TOPO cloning method a linearized vector is activated by attaching topoisomerase I to its ends, and this "TOPO-activated" vector may then accept a PCR product by ligating to both of the 5' ends of the PCR product, the topoisomerase is released and a circular vector is formed in the process. [ 28 ]
Another method of cloning without the use of ligase is by DNA recombination , for example as used in the Gateway cloning system . [ 29 ] [ 30 ] The gene, once cloned into the cloning vector (called entry clone in this method), may be conveniently introduced into a variety of expression vectors by recombination. [ 31 ]
Different types of DNA ligases are found in the organisms studied. For instance, a nicotinamide adenine dinucleotide (NAD + )-dependent ligase was isolated from the bacterium E. coli in the second third of the 20th century. Since then, this model has been widely used to study that DNA ligase family; moreover, NAD + -dependent ligases are found in all bacteria. Examples of genes present in E. coli are LigA, which has essential functions affecting bacterial growth, and LigB. [ 32 ]
In mammals, including humans, three genes, namely Lig1, Lig3 and Lig4, have been identified. All eukaryotes contain multiple types of DNA ligases encoded by Lig genes. [ 33 ] The smallest known eukaryotic ligase is the Chlorella virus DNA ligase (ChVLig), which contains only 298 amino acids. When ChVLig is the only source of ligase in the cell, it can still support mitotic growth and nonhomologous end joining in budding yeast. [ 34 ] DNA ligase I (Lig1), which consists of 919 amino acids, is responsible for the ligation of Okazaki fragments . In the complex process of DNA replication, DNA ligase I is recruited to the replication machinery by protein interactions. Lig1 plays a role in cell division in plants and yeasts; knockout of the Lig1 gene is lethal in yeasts and in some plant sprouts. Nevertheless, studies of mouse embryogenesis have shown that the embryo can develop without DNA ligase I until the middle of gestation. [ 35 ]
Enzymatic ligation has been used in various studies related to DNA nanostructures and has led to increased efficiency and stability. One approach is the sealing of covalent DNA bonds, namely phosphodiester bonds, and of nicks; the reconstruction of such structures is performed with the assistance of ligation. For instance, T4 DNA ligase serves as a catalyst for sealing a nick between the 3′ and 5′ ends of DNA to form a strong phosphodiester bond, and ligated structures have higher thermal stability. [ 36 ] T4 DNA ligase has many valuable properties beyond its catalytic activity: it is also responsible for sealing gaps between DNA strands, nick-closing activity, repair of DNA damage, and so on. [ 2 ]
In nanostructure architecture and molecular biology research, ssDNA is an important application model. T4 DNA ligase is used to cyclize short ssDNA fragments, but the process is complicated by the formation of secondary structures. Taq DNA ligase, on the other hand, is a thermostable enzyme which can be applied at higher temperatures (45, 55 and 65 °C). Since secondary structures are less stable in this temperature range, the cyclization efficiency of oligonucleotides is enhanced. The kinetic, biological and other parameters of nanostructures are influenced by the presence of secondary structures in DNA rings. However, Taq DNA ligation occurs only when the two complementary DNA strands are perfectly paired and have no gaps between them. [ 37 ]
Analyses of ligase activities, mutations and deficiencies are widely used in drug design and biological research to investigate diseases, the development of pathologies, and related rare acquired or inherited syndromes (e.g. DNA ligase IV syndrome). [ 38 ] [ 39 ] [ 40 ] [ 41 ]
The ligation procedure is prevalent in molecular biology cloning techniques, and it has been applied to define and characterize specific nucleotide sequences in the genome using Ligase Chain Reaction (LCR) or Polymerase Chain Reaction (PCR)-based amplification of ligated probes. [ 42 ]
Ligation may also serve as a DNA analysis method. [ 43 ] Some techniques employ rolling circle amplification . [ 43 ] The most notable of these is described by Smolina et al. , 2007 & Smolina et al. , 2008 using fluorescence in situ hybridization and peptide nucleic acids . [ 43 ] They developed and employed this technique for analyses of bacterial chromosomes. [ 43 ] | https://en.wikipedia.org/wiki/Ligation_(molecular_biology) |
In mathematics , Light's associativity test is a procedure invented by F. W. Light for testing whether a binary operation defined in a finite set by a Cayley multiplication table is associative . The naive procedure for verification of the associativity of a binary operation specified by a Cayley table, which compares the two products that can be formed from each triple of elements, is cumbersome. Light's associativity test simplifies the task in some instances (although it does not improve the worst-case runtime of the naive algorithm, namely $\mathcal{O}(n^{3})$ for sets of size $n$).
Let a binary operation ' · ' be defined in a finite set A by a Cayley table. Choosing some element a in A , two new binary operations are defined in A as follows: x ⋆ y = x · ( a · y ) and x ∘ y = ( x · a ) · y .
The Cayley tables of these operations are constructed and compared. If the tables coincide then x · ( a · y ) = ( x · a ) · y for all x and y . This is repeated for every element of the set A .
The example below illustrates a further simplification in the procedure for the construction and comparison of the Cayley tables of the operations ' ⋆ ' and ' ∘ '.
It is not even necessary to construct the Cayley tables of ' ⋆ ' and ' ∘ ' for all elements of A . It is enough to compare the Cayley tables of ' ⋆ ' and ' ∘ ' corresponding to the elements in a proper generating subset of A .
When the operation ' · ' is commutative , then x ⋆ y = y ∘ x. As a result, only part of each Cayley table must be computed, because x ⋆ x = x ∘ x always holds, and x ⋆ y = x ∘ y implies y ⋆ x = y ∘ x.
When there is an identity element e, it does not need to be included in the Cayley tables, because x ⋆ y = x ∘ y always holds if at least one of x and y is equal to e.
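The procedure lends itself to a direct implementation. The minimal Python sketch below compares x · ( a · y ) with ( x · a ) · y for every a (or, optionally, only for the elements of a supplied generating subset, as in the simplification above); the nested-dict table representation, function name, and mod-3 example are illustrative choices, not part of the source.

```python
def lights_associativity_test(elements, op, generators=None):
    """Light's test: for each a (optionally restricted to a generating set),
    compare the tables of x * y = x.(a.y) and x o y = (x.a).y.
    The operation '.', given as the nested table `op`, is associative
    iff the two tables coincide for every such a."""
    test_elements = elements if generators is None else generators
    for a in test_elements:
        for x in elements:
            for y in elements:
                if op[x][op[a][y]] != op[op[x][a]][y]:
                    return False, (x, a, y)   # witness triple of non-associativity
    return True, None

# Cayley table of addition modulo 3 on {0, 1, 2}; the element 1 generates the set
Z3 = {x: {y: (x + y) % 3 for y in range(3)} for x in range(3)}
print(lights_associativity_test(range(3), Z3, generators=[1]))   # (True, None)
```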
Consider the binary operation ' · ' in the set A = { a , b , c , d , e } defined by the following Cayley table (Table 1):
The set { c , e } is a generating set for the set A under the binary operation defined by the above table, for a = e · e , b = c · c , d = c · e . Thus it is enough to verify that the binary operations ' ⋆ ' and ' ∘ ' corresponding to c coincide, and also that the binary operations ' ⋆ ' and ' ∘ ' corresponding to e coincide.
To verify that the binary operations ' ⋆ ' and ' ∘ ' corresponding to c coincide, choose the row in Table 1 corresponding to the element c :
This row is copied as the header row of a new table (Table 3):
Under the header a copy the corresponding column in Table 1, under the header b copy the corresponding column in Table 1, etc., and construct Table 4.
The column headers of Table 4 are now deleted to get Table 5:
The Cayley table of the binary operation ' ⋆ ' corresponding to the element c is given by Table 6.
Next choose the c column of Table 1:
Copy this column to the index column to get Table 8:
Against the index entry a in Table 8 copy the corresponding row in Table 1, against the index entry b copy the corresponding row in Table 1, etc., and construct Table 9.
The index entries in the first column of Table 9 are now deleted to get Table 10:
The Cayley table of the binary operation ' ∘ ' corresponding to the element c is given by Table 11.
One can verify that the entries in the various cells in Table 6 agree with the entries in the corresponding cells of Table 11. This shows that x · ( c · y ) = ( x · c ) · y for all x and y in A . If there were some discrepancy, then it would not be true that x · ( c · y ) = ( x · c ) · y for all x and y in A .
That x · ( e · y ) = ( x · e ) · y for all x and y in A can be verified in a similar way by constructing the following tables (Table 12 and Table 13):
It is not necessary to construct the Cayley tables (Table 6 and Table 11) of the binary operations ' ⋆ ' and ' ∘ '. It is enough to copy the column corresponding to the header c in Table 1 to the index column in Table 5, form the following table (Table 14), and verify that the a -row of Table 14 is identical with the a -row of Table 1, the b -row of Table 14 is identical with the b -row of Table 1, etc. This is to be repeated mutatis mutandis for all the elements of the generating set of A .
Computer software can be written to carry out Light's associativity test. Kehayopulu and Argyris have developed such a program for Mathematica . [ 1 ]
Light's associativity test can be extended to test associativity in a more general context. [ 2 ] [ 3 ]
Let T = { t 1 , t 2 , …, t m } be a magma in which the operation is denoted by juxtaposition . Let X = { x 1 , x 2 , …, x n } be a set. Let there be a mapping from the Cartesian product T × X to X denoted by ( t , x ) ↦ tx and let it be required to test whether this map has the property
A generalization of Light's associativity test can be applied to verify whether the above property holds or not. In mathematical notations, the generalization runs as follows: For each t in T , let L ( t ) be the m × n matrix of elements of X whose i - th row is
and let R ( t ) be the m × n matrix of elements of X , the elements of whose j - th column are
According to the generalised test (due to Bednarek), the property to be verified holds if and only if L ( t ) = R ( t ) for all t in T . When X = T , Bednarek's test reduces to Light's test.
There is a randomized algorithm by Rajagopalan and Schulman to test associativity in time proportional to the input size. (The method also works for testing certain other identities.) Specifically, the runtime is $O(n^{2}\log(1/\delta))$ for an $n\times n$ table and error probability $\delta$.
The algorithm can be modified to produce a triple $\langle a,b,c\rangle$ for which $(ab)c\neq a(bc)$, if there is one, in time $O(n^{2}\log n\cdot \log(1/\delta))$. [ 4 ] | https://en.wikipedia.org/wiki/Light's_associativity_test
Light-Water: a Mosaic of Meditations is a " hypermedia work" [ 1 ] that utilizes and layers images and poetry to "create a striking experience of poetic meditation." [ 2 ] Created by Christy Sheffield Sanford in 1999, [ 3 ] [ 4 ] the work consists of ten poems that produce a "visual-literal meditation on light and water." [ 1 ] Through the implementation of timelines within the poems and overall work, Light-Water illustrates how "space-time possibilities for literature can now be more adequately realized through the use of spatio-temporal dhtml editors." [ 5 ]
Sanford used Adobe Dreamweaver to complete this body of work, and this specific software allowed her to create a complicated, operational, and dynamic html scripting. Sanford said "special emphasis was placed on creating Timelines...Timelines were used to explore the kinetic properties/spirit of light and water." [ 5 ]
Light-Water was originally published in 1999 in the online journal The New River. Created with the software Dreamweaver , the work exists in a web browser and is coded as Dynamic HTML ( DHTML ). As of 2022, the piece is accessible through the Electronic Literature Organization's The NEXT : Museum, Library, and Preservation Space, hosted at Washington State University Vancouver in Washington, US. On The NEXT's webpage, they note that "Amanda Hodes transferred the files for this copy to Dene Grigar in June 2022." While most of Sanford's works are no longer available or discoverable online, Light-Water remains accessible in its original version and formatting. [ 2 ] [ 3 ] [ 4 ]
Light-Water: a Mosaic of Meditations connects and emphasizes the natural and synthetic natures of light and water. Sanford uses the images to convey the message and help the reader visualize her poetic descriptions. With ten different poems, Christy Sheffield Sanford creates a calming peace by looking at life through the lens of water and light. She details how light and water affect not only human life but also plant life and the entire view of the world. She uses various light sources, such as street lights, colorful stop lights, fire, and the moon, to illustrate how everything we see, in life and in her work, has a purpose and is beautifully made or destroyed. [ 6 ] [ 7 ] [ 8 ]
The reader can enter the story by clicking on the image titles of one of the ten poems shown at the starting, mosaic title page. In each node and poem page, the reader can transition to the other poems through the linked titles of other poems at the bottom of the page. The three links at the bottom always include a link to the "Mosaic" title page in the middle and two different poems on either side of it. There is only one functional hypertext linked phrase within the text of the ten poems, and this link is located in the poem titled "Sweat". The navigation is simple and consistent, but it allows the reader to create their own path or follow a constructed sequence. [ 6 ] [ 7 ] Memmott notes how "For the most part, images are on equal ground with words -- there is little difference in the formal treatments of text and image. Images are intended to be metabolized as text, are meant as text -- are text. Vice versa." [ 8 ]
This piece contains 10 poems. [ 6 ] [ 7 ] Light-Water is Electronic literature since it was created "using advanced web-techniques" [ 2 ] and exists solely online through the web. [ 3 ] [ 4 ] [ 9 ]
Describing the works in The New River, Timothy Luke and Jeremy Hunsinger said that the "combination of the visual and the literal is central to the direction of hypermedia " and that it is a creative synthesis between the reader and the work due to the blend of moving images and textual literature. [ 1 ] In a critique titled "Mise en Place: Hypersensual Textility and Poly-vocal Narration," Talan Memmott said Sanford uses images and text interchangeably, creating metaphorical visuals by overlapping the text and images. He said that "the reader must not only read in a literary sense, but is asked to interpret the design, the contrasts, and interact with images and emergent metaphor. It is impressive just how much 'textility' images maintain in these works." [ 8 ] The Electronic Literature Organization , the principal organization that promotes and supports e-lit , said that, "Sanford's visual lushness opened up radical new possibilities for the look of the screen and the combination of merged image, movement, and text." [ 2 ] | https://en.wikipedia.org/wiki/Light-Water:_a_Mosaic_of_Meditations
In physics , particularly special relativity , light-cone coordinates , introduced by Paul Dirac [ 1 ] and also known as Dirac coordinates, are a special coordinate system where two coordinate axes combine both space and time, while all the others are spatial.
A spacetime plane may be associated with the plane of split-complex numbers which is acted upon by elements of the unit hyperbola to effect Lorentz boosts. This number plane has axes corresponding to time and space. An alternative basis is the diagonal basis which corresponds to light-cone coordinates.
In a light-cone coordinate system, two of the coordinates are null vectors and all the other coordinates are spatial. The former can be denoted $x^{+}$ and $x^{-}$ and the latter $x_{\perp}$.
Assume we are working with a (d,1) Lorentzian signature.
Instead of the standard coordinate system (using Einstein notation )

$ds^{2} = -dt^{2} + \delta_{ij}\,dx^{i}\,dx^{j}$,

with $i, j = 1, \dots, d$, we have

$ds^{2} = -2\,dx^{+}\,dx^{-} + \delta_{ij}\,dx^{i}\,dx^{j}$,

with $i, j = 1, \dots, d-1$, $x^{+} = \tfrac{t+x}{\sqrt{2}}$ and $x^{-} = \tfrac{t-x}{\sqrt{2}}$.
Both $x^{+}$ and $x^{-}$ can act as "time" coordinates. [ 2 ] : 21
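As a quick consistency check of the coordinate change above, the sketch below uses sympy to verify that $-dt^{2} + dx^{2} = -2\,dx^{+}\,dx^{-}$ under $x^{\pm} = (t \pm x)/\sqrt{2}$; the symbol names are arbitrary and the snippet is only an illustration, not part of the article.

```python
import sympy as sp

dt, dx = sp.symbols("dt dx")

# differentials of the light-cone coordinates x+ = (t + x)/sqrt(2), x- = (t - x)/sqrt(2)
dxp = (dt + dx) / sp.sqrt(2)
dxm = (dt - dx) / sp.sqrt(2)

# the (t, x) part of the metric in both coordinate systems
standard = -dt**2 + dx**2
light_cone = -2 * dxp * dxm

print(sp.simplify(standard - light_cone))  # 0, i.e. the two expressions agree
```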
One nice thing about light cone coordinates is that the causal structure is partially included into the coordinate system itself.
A boost in the $(t,x)$ plane shows up as the squeeze mapping $x^{+}\to e^{+\beta}x^{+}$, $x^{-}\to e^{-\beta}x^{-}$, $x^{i}\to x^{i}$. A rotation in the $(i,j)$-plane only affects $x_{\perp}$.
The parabolic transformations show up as $x^{+}\to x^{+}$, $x^{-}\to x^{-}+\delta_{ij}\alpha^{i}x^{j}+\tfrac{\alpha^{2}}{2}x^{+}$, $x^{i}\to x^{i}+\alpha^{i}x^{+}$. Another set of parabolic transformations shows up as $x^{+}\to x^{+}+\delta_{ij}\alpha^{i}x^{j}+\tfrac{\alpha^{2}}{2}x^{-}$, $x^{-}\to x^{-}$ and $x^{i}\to x^{i}+\alpha^{i}x^{-}$.
Light cone coordinates can also be generalized to curved spacetime in general relativity. Sometimes calculations simplify using light cone coordinates. See Newman–Penrose formalism .
Light cone coordinates are sometimes used to describe relativistic collisions, especially if the relative velocity is very close to the speed of light. They are also used in the light cone gauge of string theory.
A closed string is a generalization of a particle. The spatial coordinate of a point on the string is conveniently described by a parameter $\sigma$ which runs from $0$ to $2\pi$. Time is appropriately described by a parameter $\sigma_{0}$. Associating each point on the string in a D-dimensional spacetime with coordinates $x_{0}, x$ and transverse coordinates $x_{i}$, $i = 2, \dots, D$, these coordinates play the role of fields in a $1+1$ dimensional field theory. Clearly, for such a theory more is required. It is convenient to employ, instead of $x_{0} = \sigma_{0}$ and $x$, light-cone coordinates $x_{\pm}$ given by
so that the metric $ds^{2}$ is given by
(summation over $i$ understood).
There is some gauge freedom. First, we can set $x_{+} = \sigma_{0}$ and treat this degree of freedom as the time variable. A reparameterization invariance under $\sigma \rightarrow \sigma + \delta\sigma$ can be imposed with a constraint $\mathcal{L}_{0} = 0$ which we obtain from the metric, i.e.
Thus $x_{-}$ is not an independent degree of freedom anymore. Now $\mathcal{L}_{0}$ can be identified as the corresponding Noether charge . Consider $\mathcal{L}_{0}(x_{-}, x_{i})$. Then with the use of the Euler-Lagrange equations for $x_{i}$ and $x_{-}$ one obtains
Equating this to
where Q {\displaystyle Q} is the Noether charge, we obtain:
This result agrees with a result cited in the literature. [ 3 ]
For a free particle of mass m {\displaystyle m} the action is
In light-cone coordinates L {\displaystyle {\mathcal {L}}} becomes with σ = x + {\displaystyle \sigma =x_{+}} as time variable:
The canonical momenta are
The Hamiltonian is ( ℏ = c = 1 {\displaystyle \hbar =c=1} ):
and the nonrelativistic Hamilton equations imply:
One can now extend this to a free string. | https://en.wikipedia.org/wiki/Light-cone_coordinates |
Light-dependent reactions are certain photochemical reactions involved in photosynthesis , the main process by which plants acquire energy. There are two light-dependent reactions: the first occurs at photosystem II (PSII) and the second at photosystem I (PSI) .
PSII absorbs a photon to produce a so-called high-energy electron which transfers via an electron transport chain to cytochrome b 6 f and then to PSI. The then-reduced PSI absorbs another photon, producing a more highly reducing electron, which converts NADP + to NADPH. In oxygenic photosynthesis , the first electron donor is water , creating oxygen (O 2 ) as a by-product. In anoxygenic photosynthesis , various electron donors are used.
Cytochrome b 6 f and ATP synthase work together to produce ATP ( photophosphorylation ) in two distinct ways. In non-cyclic photophosphorylation, cytochrome b 6 f uses electrons from PSII and energy from PSI [ citation needed ] to pump protons from the stroma to the lumen . The resulting proton gradient across the thylakoid membrane creates a proton-motive force, used by ATP synthase to form ATP. In cyclic photophosphorylation, cytochrome b 6 f uses electrons and energy from PSI to create more ATP and to stop the production of NADPH. Cyclic phosphorylation is important to create ATP and maintain NADPH in the right proportion for the light-independent reactions .
The net-reaction of all light-dependent reactions in oxygenic photosynthesis is:
PSI and PSII are light-harvesting complexes . If a special pigment molecule in a photosynthetic reaction center absorbs a photon, an electron in this pigment attains the excited state and then is transferred to another molecule in the reaction center. This reaction, called photoinduced charge separation , is the start of the electron flow and transforms light energy into chemical forms.
In chemistry , many reactions depend on the absorption of photons to provide the energy needed to overcome the activation energy barrier and hence can be labelled light-dependent. Such reactions range from the silver halide reactions used in photographic film to the creation and destruction of ozone in the upper atmosphere . This article discusses a specific subset of these, the series of light-dependent reactions related to photosynthesis in living organisms.
The reaction center is in the thylakoid membrane. It transfers absorbed light energy to a dimer of chlorophyll pigment molecules near the periplasmic (or thylakoid lumen) side of the membrane. This dimer is called a special pair because of its fundamental role in photosynthesis. This special pair is slightly different in PSI and PSII reaction centers. In PSII, it absorbs photons with a wavelength of 680 nm, and is therefore called P680 . In PSI, it absorbs photons at 700 nm and is called P700 . In bacteria, the special pair is called P760, P840, P870, or P960. "P" here means pigment, and the number following it is the wavelength of light absorbed.
Electrons in pigment molecules can exist at specific energy levels. Under normal circumstances, they are at the lowest possible energy level, the ground state. However, absorption of light of the right photon energy can lift them to a higher energy level. Any light that has too little or too much energy cannot be absorbed and is reflected. The electron in the higher energy level is unstable and will quickly return to its normal lower energy level. To do this, it must release the absorbed energy. This can happen in various ways. The extra energy can be converted into molecular motion and lost as heat, or re-emitted by the electron as light ( fluorescence ). The energy, but not the electron itself, may be passed onto another molecule; this is called resonance energy transfer . If an electron of the special pair in the reaction center becomes excited, it cannot transfer this energy to another pigment using resonance energy transfer. Under normal circumstances, the electron would return to the ground state, but because the reaction center is arranged so that a suitable electron acceptor is nearby, the excited electron is taken up by the acceptor. The loss of the electron gives the special pair a positive charge and, as an ionization process, further boosts its energy. [ citation needed ] The formation of a positive charge on the special pair and a negative charge on the acceptor is referred to as photoinduced charge separation . The electron can be transferred to another molecule. As the ionized pigment returns to the ground state, it takes up an electron and gives off energy to the oxygen evolving complex so it can split water into electrons, protons, and molecular oxygen (after receiving energy from the pigment four times). Plant pigments usually utilize the last two of these reactions to convert the sun's energy into their own.
This initial charge separation occurs in less than 10 picoseconds (10^−11 seconds). In their high-energy states, the special pigment and the acceptor could undergo charge recombination; that is, the electron on the acceptor could move back to neutralize the positive charge on the special pair. Its return to the special pair would waste a valuable high-energy electron and simply convert the absorbed light energy into heat. In the case of PSII, this backflow of electrons can produce reactive oxygen species leading to photoinhibition . [ 1 ] [ 2 ] Three factors in the structure of the reaction center work together to suppress charge recombination nearly completely:
Thus, electron transfer proceeds efficiently from the first electron acceptor to the next, creating an electron transport chain that ends when it has reached NADPH .
The photosynthesis process in chloroplasts begins when an electron of P680 of PSII attains a higher-energy level. This energy is used to reduce a chain of electron acceptors that have subsequently higher redox potentials. This chain of electron acceptors is known as an electron transport chain . When this chain reaches PSI , an electron is again excited, creating a high redox-potential. The electron transport chain of photosynthesis is often put in a diagram called the Z-scheme , because the redox diagram from P680 to P700 resembles the letter Z. [ 3 ]
The final product of PSII is plastoquinol , a mobile electron carrier in the membrane. Plastoquinol transfers the electron from PSII to the proton pump, cytochrome b6f . The ultimate electron donor of PSII is water. Cytochrome b 6 f transfers the electron chain to PSI through plastocyanin molecules. PSI can continue the electron transfer in two different ways. It can transfer the electrons either to plastoquinol again, creating a cyclic electron flow, or to an enzyme called FNR ( Ferredoxin—NADP(+) reductase ), creating a non-cyclic electron flow. PSI releases FNR into the stroma , where it reduces NADP + to NADPH .
Activities of the electron transport chain, especially from cytochrome b 6 f , lead to pumping of protons from the stroma to the lumen. The resulting transmembrane proton gradient is used to make ATP via ATP synthase .
The overall process of the photosynthetic electron transport chain in chloroplasts is:
PSII is extremely complex, a highly organized transmembrane structure that contains a water splitting complex , chlorophylls and carotenoid pigments, a reaction center (P680), pheophytin (a pigment similar to chlorophyll), and two quinones. It uses the energy of sunlight to transfer electrons from water to a mobile electron carrier in the membrane called plastoquinone :
Plastoquinol, in turn, transfers electrons to cyt b 6 f , which feeds them into PSI.
The step H 2 O → P680 is performed by an imperfectly understood structure embedded within PSII called the water-splitting complex or oxygen-evolving complex ( OEC ). It catalyzes a reaction that splits water into electrons, protons and oxygen,
using energy from P680 + . The actual steps of the above reaction possibly occur in the following way (Kok's diagram of S-states):
(I) 2 H 2 O (monoxide) (II) OH . H 2 O (hydroxide) (III) H 2 O 2 (peroxide) (IV) HO 2 (superoxide) (V) O 2 (di-oxygen). [ citation needed ] (Dolai's mechanism)
The electrons are transferred to special chlorophyll molecules (embedded in PSII) that are promoted to a higher-energy state by the energy of photons .
The excitation P680 → P680 * of the reaction center pigment P680 occurs here. These special chlorophyll molecules embedded in PSII absorb the energy of photons, with maximal absorption at 680 nm. Electrons within these molecules are promoted to a higher-energy state. This is one of two core processes in photosynthesis, and it occurs with astonishing efficiency (greater than 90%) because, in addition to direct excitation by light at 680 nm, the energy of light first harvested by antenna proteins at other wavelengths in the light-harvesting system is also transferred to these special chlorophyll molecules.
This is followed by the electron transfer P680 * → pheophytin , and then on to plastoquinol , which occurs within the reaction center of PSII. The electrons, together with two protons, are transferred to plastoquinone, generating plastoquinol, which is released into the membrane as a mobile electron carrier. This is the second core process in photosynthesis. The initial stages occur within picoseconds , with an efficiency of 100%. The seemingly impossible efficiency is due to the precise positioning of molecules within the reaction center. This is a solid-state process, not a typical chemical reaction. It occurs within an essentially crystalline environment created by the macromolecular structure of PSII. The usual rules of chemistry (which involve random collisions and random energy distributions) do not apply in solid-state environments.
When the excited chlorophyll P 680 * passes the electron to pheophytin, it converts to high-energy P 680 + , which can oxidize the tyrosine Z (or Y Z ) molecule by ripping off one of its hydrogen atoms. The high-energy oxidized tyrosine gives off its energy and returns to the ground state by taking up a proton and removing an electron from the oxygen-evolving complex and ultimately from water. [ 4 ] Kok's S-state diagram shows the reactions of water splitting in the oxygen-evolving complex.
PSII is a transmembrane structure found in all chloroplasts. It splits water into electrons, protons and molecular oxygen. The electrons are transferred to plastoquinol, which carries them to a proton pump. The oxygen is released into the atmosphere.
The emergence of such an incredibly complex structure, a macromolecule that converts the energy of sunlight into chemical energy and thus potentially useful work with efficiencies that are impossible in ordinary experience, seems almost magical at first glance. Thus, it is of considerable interest that, in essence, the same structure is found in purple bacteria .
PSII and PSI are connected by a transmembrane proton pump, cytochrome b 6 f complex (plastoquinol—plastocyanin reductase; EC 1.10.99.1 ). Electrons from PSII are carried by plastoquinol to cyt b 6 f , where they are removed in a stepwise fashion (re-forming plastoquinone) and transferred to a water-soluble electron carrier called plastocyanin . This redox process is coupled to the pumping of four protons across the membrane. The resulting proton gradient (together with the proton gradient produced by the water-splitting complex in PSII) is used to make ATP via ATP synthase.
The structure and function of cytochrome b 6 f (in chloroplasts) is very similar to cytochrome bc 1 ( Complex III in mitochondria). Both are transmembrane structures that remove electrons from a mobile, lipid-soluble electron carrier (plastoquinone in chloroplasts; ubiquinone in mitochondria) and transfer them to a mobile, water-soluble electron carrier (plastocyanin in chloroplasts; cytochrome c in mitochondria). Both are proton pumps that produce a transmembrane proton gradient. In fact, cytochrome b 6 and subunit IV are homologous to mitochondrial cytochrome b [ 5 ] and the Rieske iron-sulfur proteins of the two complexes are homologous. [ 6 ] However, cytochrome f and cytochrome c 1 are not homologous. [ 7 ]
PSI accepts electrons from plastocyanin and transfers them either to NADPH ( noncyclic electron transport ) or back to cytochrome b 6 f ( cyclic electron transport ):
PSI, like PSII, is a complex, highly organized transmembrane structure that contains antenna chlorophylls, a reaction center (P700), phylloquinone, and a number of iron-sulfur proteins that serve as intermediate redox carriers.
The light-harvesting system of PSI uses multiple copies of the same transmembrane proteins used by PSII. The energy of absorbed light (in the form of delocalized, high-energy electrons) is funneled into the reaction center, where it excites special chlorophyll molecules (P700, with maximum light absorption at 700 nm) to a higher energy level. The process occurs with astonishingly high efficiency.
Electrons are removed from excited chlorophyll molecules and transferred through a series of intermediate carriers to ferredoxin , a water-soluble electron carrier. As in PSII, this is a solid-state process that operates with 100% efficiency.
There are two different pathways of electron transport in PSI. In noncyclic electron transport , ferredoxin carries the electron to the enzyme ferredoxin NADP + reductase (FNR) that reduces NADP + to NADPH. In cyclic electron transport , electrons from ferredoxin are transferred (via plastoquinol) to a proton pump, cytochrome b 6 f . They are then returned (via plastocyanin) to P700. NADPH and ATP are used to synthesize organic molecules from CO 2 . The ratio of NADPH to ATP production can be adjusted by adjusting the balance between cyclic and noncyclic electron transport.
It is noteworthy that PSI closely resembles photosynthetic structures found in green sulfur bacteria , just as PSII resembles structures found in purple bacteria.
PSII, PSI, and cytochrome b 6 f are found in chloroplasts. All plants and all photosynthetic algae contain chloroplasts, which produce NADPH and ATP by the mechanisms described above. In essence, the same transmembrane structures are also found in cyanobacteria .
Unlike plants and algae, cyanobacteria are prokaryotes. They do not contain chloroplasts; rather, they bear a striking resemblance to chloroplasts themselves. This suggests that organisms resembling cyanobacteria were the evolutionary precursors of chloroplasts. One imagines primitive eukaryotic cells taking up cyanobacteria as intracellular symbionts in a process known as endosymbiosis .
Cyanobacteria contain both PSI and PSII. Their light-harvesting system is different from that found in plants (they use phycobilins , rather than chlorophylls, as antenna pigments), but their electron transport chain
is, in essence, the same as the electron transport chain in chloroplasts. The mobile water-soluble electron carrier is cytochrome c 6 in cyanobacteria; in plants this role is filled by plastocyanin. [ 8 ]
Cyanobacteria can also synthesize ATP by oxidative phosphorylation, in the manner of other bacteria. The electron transport chain is
where the mobile electron carriers are plastoquinol and cytochrome c 6 , while the proton pumps are NADH dehydrogenase, cyt b 6 f and cytochrome aa 3 (member of the COX3 family).
Cyanobacteria are the only bacteria that produce oxygen during photosynthesis. Earth's primordial atmosphere was anoxic. Organisms like cyanobacteria produced our present-day oxygen-containing atmosphere.
The other two major groups of photosynthetic bacteria, purple bacteria and green sulfur bacteria, contain only a single photosystem and do not produce oxygen.
Purple bacteria contain a single photosystem that is structurally related to PSII in cyanobacteria and chloroplasts:
This is a cyclic process in which electrons are removed from an excited chlorophyll molecule ( bacteriochlorophyll ; P870), passed through an electron transport chain to a proton pump (cytochrome bc 1 complex; similar to the chloroplastic one ), and then returned to the chlorophyll molecule. The result is a proton gradient that is used to make ATP via ATP synthase. As in cyanobacteria and chloroplasts, this is a solid-state process that depends on the precise orientation of various functional groups within a complex transmembrane macromolecular structure.
To make NADPH, purple bacteria use an external electron donor (hydrogen, hydrogen sulfide , sulfur, sulfite, or organic molecules such as succinate and lactate) to feed electrons into a reverse electron transport chain.
Green sulfur bacteria contain a photosystem that is analogous to PSI in chloroplasts:
There are two pathways of electron transfer. In cyclic electron transfer , electrons are removed from an excited chlorophyll molecule, passed through an electron transport chain to a proton pump, and then returned to the chlorophyll. The mobile electron carriers are, as usual, a lipid-soluble quinone and a water-soluble cytochrome. The resulting proton gradient is used to make ATP.
In noncyclic electron transfer , electrons are removed from an excited chlorophyll molecule and used to reduce NAD + to NADH. The electrons removed from P840 must be replaced. This is accomplished by removing electrons from H 2 S , which is oxidized to sulfur (hence the name "green sulfur bacteria").
Purple bacteria and green sulfur bacteria occupy relatively minor ecological niches in the present-day biosphere. They are of interest because of their importance in Precambrian ecologies, and because their methods of photosynthesis were the likely evolutionary precursors of those in modern plants.
The first ideas about light being used in photosynthesis were proposed by Jan Ingenhousz in 1779, [ 9 ] who recognized that it was sunlight falling on plants that was required, although Joseph Priestley had noted the production of oxygen without the association with light in 1772. [ 10 ] Cornelis Van Niel proposed in 1931 that photosynthesis is a case of a general mechanism in which a photon of light is used to photodecompose a hydrogen donor, the hydrogen then being used to reduce CO 2 . [ 11 ] Then in 1939, Robin Hill demonstrated that isolated chloroplasts would make oxygen, but not fix CO 2 , showing that the light and dark reactions occurred in different places. Although they are referred to as light and dark reactions, both of them take place only in the presence of light. [ 12 ] This later led to the discovery of photosystems I and II.
A light-emitting electrochemical cell ( LEC or LEEC ) is a solid-state device that generates light from an electric current ( electroluminescence ). LECs are usually composed of two metal electrodes connected by (e.g. sandwiching) an organic semiconductor containing mobile ions. Aside from the mobile ions, their structure is very similar to that of an organic light-emitting diode (OLED).
LECs have most of the advantages of OLEDs, as well as additional ones:
There are two distinct types of LECs, those based on inorganic transition metal complexes (iTMC) or light emitting polymers. iTMC devices are often more efficient than their LEP based counterparts due to the emission mechanism being phosphorescent rather than fluorescent. [ 7 ]
While electroluminescence had been seen previously in similar devices, the invention of the polymer LEC is attributed to Pei et al. [ 8 ] Since then, numerous research groups and a few companies have worked on improving and commercializing the devices.
In 2012 the first inherently stretchable LEC using an elastomeric emissive material (at room temperature) was reported. Dispersing an ionic transition metal complex into an elastomeric matrix enables the fabrication of intrinsically stretchable light-emitting devices that possess large emission areas (~175 mm²) and tolerate linear strains up to 27% and repetitive cycles of 15% strain. This work demonstrates the suitability of this approach to new applications in conformable lighting that require uniform, diffuse light emission over large areas. [ 9 ]
In 2012 fabrication of organic light-emitting electrochemical cells (LECs) using a roll-to-roll compatible process under ambient conditions was reported. [ 10 ]
In 2017, a new design approach developed by a team of Swedish researchers promised to deliver substantially higher efficiency: 99.2 cd/A at a bright luminance of 1910 cd/m² . [ 11 ]
This electronics-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Light-emitting_electrochemical_cell |
In biology , a light-harvesting complex or LHC is an aggregate consisting of proteins bound with chromophores ( chlorophylls and carotenoids ) that play a key role in photosynthesis . LHCs are arrayed around photosynthetic reaction centers in both plants and photosynthetic bacteria and collect more of the incoming light than would be captured by the reaction centers alone. The light captured by the chromophores excites molecules from their ground states to (short-lived) higher-energy states, known as the excited states. [ 1 ] This energy is then focused toward the reaction centers by Förster resonance energy transfer .
Light-harvesting complexes are found in a wide variety among the different photosynthetic species, with no homology among the major groups. [ 2 ]
Photosynthesis is a process where light is absorbed or harvested by pigment protein complexes which are able to turn sunlight into chemical energy. [ 1 ] In this process, a molecule of the pigment protein absorbs a photon of sunlight, leading to electronic excitation delivered to the reaction centre where the process of charge separation can take place [ 1 ] if the energy of the absorbed photon matches that of an electronic transition. The result of such excitation can be a return to the ground state or to another electronic state of the same molecule. When the excited molecule has a nearby neighbour molecule, the excitation energy may also be transferred, through electromagnetic interactions, from one molecule to another. This process is called resonance energy transfer , and the rate depends strongly on the distance between the energy donor and energy acceptor molecules. The excitation energy must be captured and passed on before the excited molecule relaxes back to its ground state. The excitation is transferred among chromophores until it is delivered to the reaction centre. [ 1 ] Light-harvesting complexes have their pigments specifically positioned to optimize these rates.
Purple bacteria are a type of photosynthetic organism with a light harvesting complex consisting of two pigment protein complexes, referred to as LH1 and LH2. [ 3 ] Within the photosynthetic membrane, these two complexes differ in their arrangement. [ 3 ] The LH1 complexes surround the reaction centre, while the LH2 complexes are arranged peripherally around the LH1 complexes and the reaction centre. [ 3 ] Purple bacteria use bacteriochlorophyll and carotenoids to gather light energy. These proteins are arranged in a ring-like fashion, creating a cylinder that spans the membrane. [ 4 ] [ 5 ]
The main light harvesting complex in Green bacteria is known as the chlorosome. The chlorosome is equipped with rod-like BChl c aggregates with protein embedded lipids surrounding it. Chlorosomes are found outside of the membrane which covers the reaction centre. Green sulphur bacteria and some Chloroflexia use ellipsoidal complexes known as the chlorosome to capture light. Their form of bacteriochlorophyll is green. [ 6 ]
Chlorophylls and carotenoids are important in light-harvesting complexes present in plants. Chlorophyll b is almost identical to chlorophyll a , except it has a formyl group in place of a methyl group . This small difference makes chlorophyll b absorb light with wavelengths between 400 and 500 nm more efficiently. Carotenoids are long linear organic molecules that have alternating single and double bonds along their length. Such molecules are called polyenes . Two examples of carotenoids are lycopene and β-carotene . These molecules also absorb light most efficiently in the 400 – 500 nm range.
Due to their absorption region, carotenoids appear red and yellow and provide most of the red and yellow colours present in fruits and flowers .
The carotenoid molecules also serve a safeguarding function. Carotenoid molecules suppress damaging photochemical reactions, in particular those including oxygen , which exposure to sunlight can cause. Plants that lack carotenoid molecules quickly die upon exposure to oxygen and light.
The antenna-shaped light harvesting complex of cyanobacteria , glaucocystophyta , and red algae is known as the phycobilisome; it is composed of linear tetrapyrrole pigments. Pigment-protein complexes, referred to as R-phycoerythrin, are rod-like in shape and make up the rods and core of the phycobilisome. [ 6 ] Little light reaches algae that reside at a depth of one meter or more in seawater, as light is absorbed by seawater. The pigments, such as phycocyanobilin and phycoerythrobilin , are the chromophores that bind through a covalent thioether bond to their apoproteins at cysteine residues. Depending on the pigment–protein combination, the resulting holoprotein is called phycocyanin, phycoerythrin, or allophycocyanin. They often occur as hexamers of α and β subunits (α 3 β 3 ) 2 . They enhance the amount and spectral window of light absorption and fill the "green gap", which occurs in higher plants. [ 7 ]
The geometrical arrangement of a phycobilisome is very elegant and results in 95% efficiency of energy transfer. There is a central core of allophycocyanin , which sits above a photosynthetic reaction center. There are phycocyanin and phycoerythrin subunits that radiate out from this center like thin tubes. This increases the surface area of the absorbing section and helps focus and concentrate light energy down onto the chlorophyll of the reaction center. Energy from excitations absorbed by pigments in the phycoerythrin subunits at the periphery of these antennas appears at the reaction center in less than 100 ps. [ 8 ]
The light-harvesting complex (or antenna complex ; LH or LHC ) is an array of protein and chlorophyll molecules embedded in the thylakoid membrane of plants and cyanobacteria, which transfer light energy to one chlorophyll a molecule at the reaction center of a photosystem .
The antenna pigments are predominantly chlorophyll b , xanthophylls , and carotenes . Chlorophyll a is known as the core pigment. Their absorption spectra are non-overlapping and broaden the range of light that can be absorbed in photosynthesis. The carotenoids have another role as an antioxidant to prevent photo-oxidative damage of chlorophyll molecules. Each antenna complex has between 250 and 400 pigment molecules and the energy they absorb is shuttled by resonance energy transfer to a specialized chlorophyll-protein complex known as the reaction center of each photosystem . [ 1 ] The reaction center initiates a complex series of chemical reactions that capture energy in the form of chemical bonds.
For photosystem II, when either of the two chlorophyll a molecules at the reaction center absorb energy, an electron is excited and transferred to an electron acceptor molecule, pheophytin , leaving the chlorophyll a in an oxidized state. The oxidised chlorophyll a replaces the electrons by photolysis that involves the oxidation of water molecules to oxygen , protons and electrons .
The N-terminus of the chlorophyll a - b binding protein extends into the stroma where it is involved with adhesion of granal membranes and photo-regulated by reversible phosphorylation of its threonine residues. [ 2 ] Both these processes are believed to mediate the distribution of excitation energy between photosystems I and II.
This family also includes the photosystem II protein PsbS, which plays a role in energy-dependent quenching that increases thermal dissipation of excess absorbed light energy in the photosystem. [ 3 ]
Light-harvesting complex I is permanently bound to photosystem I via the plant-specific subunit PsaG. It is made up of four proteins: Lhca1, Lhca2, Lhca3, and Lhca4, all of which belong to the LHC or chlorophyll a/b-binding family. The LHC wraps around the PS1 reaction core. [ 4 ]
LHC II is usually bound to photosystem II , but it can undock and bind PS I instead depending on light conditions. [ 4 ] This behavior is controlled by reversible phosphorylation. This reaction represents a system for balancing the excitation energy between the two photosystems. [ 5 ]
A light-induced fluorescence transient ( LIFT ) is a device to remotely measure chlorophyll fluorescence in plants in a fast and non-destructive way. By using a series of excitation light pulses, LIFT combines chlorophyll fluorescence data with spectral and RGB information to provide insights into various photosynthetic traits and vegetation indices . LIFT combines the pump-probe method with the principle of laser-induced fluorescence . [ citation needed ]
A LIFT measures photosynthesis by exposing the plant to short flashes of blue light and analyzing the changes in fluorescence over time by the help of the FRR technique.
The LIFT fast repetition rate (FRR) fluorescence technique is a method for measuring plant fluorescence. It uses a series of short bursts of blue light pulses from an LED to excite photosystem II in the plant. When the quinone acceptor A (Q A ) reaches its capacity for binding electrons, the system becomes saturated and consequently red fluorescence is emitted. This is regulated by a precise excitation protocol, which consists of a saturation sequence (SQA) and a relaxation sequence (RQA) with a set of short excitation flashes (1 μs).
The fluorescence can then be measured with FRR fluorometry . For that purpose, the LIFT instrument has a built-in optical interference filter to separate the red chlorophyll fluorescence from reflected light, with a wavelength of 685 ± 10 nm.
The fluorescence transient resulting from this excitation protocol shows the kinetics of the reduction of Q A and its subsequent re-oxidation, and can be used to calculate various photosynthetic indicators. [ 1 ] These indicators provide information on the level of photosynthetic activity, such as the efficiency of light utilization, the quantum yield of photochemical conversion, and the rate of electron transport.
The LIFT system measures chlorophyll fluorescence by stimulating the plant with excitation light, leading to an increase in fluorescence to its maximum (Fm'). The naturally occurring fluorescence (F') can also be measured without the excitation light. The variable fluorescence (Fq') can be calculated as the difference between Fm' and F'. The Photosystem II operating efficiency can be calculated using the equation:
F q ′ / F m ′ = ( F m ′ − F ′ ) / F m ′ {\displaystyle F_{\mathrm {q} }'/F_{\mathrm {m} }'=(F_{\mathrm {m} }'-F')/F_{\mathrm {m} }'}
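For illustration, a minimal Python sketch of this calculation, using made-up fluorescence readings rather than values from any actual measurement:

def psii_operating_efficiency(f_prime, fm_prime):
    # Fq'/Fm' = (Fm' - F') / Fm' for a light-adapted sample
    return (fm_prime - f_prime) / fm_prime

f_prime = 350.0    # hypothetical steady-state fluorescence F' (arbitrary units)
fm_prime = 900.0   # hypothetical maximum fluorescence Fm' under the saturating flash sequence
print(psii_operating_efficiency(f_prime, fm_prime))   # 0.61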
For the relaxation sequence (RQA), different relaxation parameters [ 1 ] can be calculated according to different time sections:
These parameters can be used to calculate the reoxidation efficiency of Q A , i.e. the kinetics of electron transfer from Q A through the PQ pool to photosystem I in light-adapted plants.
The concept for an airborne LIFT instrument was developed by Zbigniew Kolber at Rutgers University in 1998. The first field test was conducted at Biosphere 2 in Arizona in 2002 using a stationary large LIFT setup equipped with a laser operating at a distance of up to 50 meters. [ 2 ] The prototype instrument was later refined and improved at the Carnegie Institute, Stanford, and the Agricultural Research Center in Arizona, where the first attempt to operate it on a tractor frame was made. [ 3 ] In 2010 several instruments were transferred from Carnegie to the Forschungszentrum Jülich where they are used for laboratory and field research in robotic positioning systems for non-invasive, high-throughput data acquisition.
In recent research, LIFT has been used in laboratory settings to explore the kinetics of photosynthesis and was also implemented in high-throughput phenotyping platforms for early drought detection. [ 4 ] In a more recent publication, the device has been used to study the effects of future elevated atmospheric CO 2 concentrations on the seasonal photosynthesis dynamics of different wheat cultivars. The authors showed that the elevated CO 2 concentration increased the photosynthetic efficiency, mainly during vegetative growth. [ 5 ] | https://en.wikipedia.org/wiki/Light-induced_fluorescence_transient |
A Light-oxygen-voltage-sensing domain ( LOV domain ) is a protein sensor used by a large variety of higher plants, microalgae, fungi and bacteria to sense environmental conditions. In higher plants, they are used to control phototropism , chloroplast relocation, and stomatal opening, whereas in fungal organisms, they are used for adjusting the circadian temporal organization of the cells to the daily and seasonal periods. They are a subset of PAS domains . [ 1 ]
Common to all LOV proteins is the blue-light sensitive flavin chromophore, which in the signaling state is covalently linked to the protein core via an adjacent cysteine residue. [ 2 ] [ 3 ] LOV domains are e.g. encountered in phototropins , which are blue-light-sensitive protein complexes regulating a great diversity of biological processes in higher plants as well as in micro-algae. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] Phototropins are composed of two LOV domains, each containing a non-covalently bound flavin mononucleotide (FMN) chromophore in its dark-state form, and a C-terminal Ser-Thr kinase .
Upon blue-light absorption, a covalent bond between the FMN chromophore and an adjacent reactive cysteine residue of the apo-protein is formed in the LOV2 domain. This subsequently mediates the activation of the kinase , which induces a signal in the organism through phototropin autophosphorylation . [ 9 ]
While the photochemical reactivity of the LOV2 domain has been found to be essential for the activation of the kinase , the in vivo functionality of the LOV1 domain within the protein complex still remains unclear. [ 10 ]
In case of the fungus Neurospora crassa , the circadian clock is controlled by two light-sensitive domains, known as the white-collar-complex (WCC) and the LOV domain vivid (VVD-LOV). [ 11 ] [ 12 ] [ 13 ] WCC is primarily responsible for the light-induced transcription on the control-gene frequency (FRQ) under day-light conditions, which drives the expression of VVD-LOV and governs the negative feedback loop onto the circadian clock . [ 13 ] [ 14 ] By contrast, the role of VVD-LOV is mainly modulatory and does not directly affect FRQ. [ 12 ] [ 15 ]
LOV domains have been found to control gene expression through DNA binding and to be involved in redox-dependent regulation, for example in the bacterium Rhodobacter sphaeroides . [ 16 ] [ 17 ] Notably, LOV-based optogenetic tools [ 18 ] have been gaining wide popularity in recent years to control a myriad of cellular events, including cell motility, [ 19 ] subcellular organelle distribution, [ 20 ] formation of membrane contact sites, [ 21 ] microtubule dynamics, [ 22 ] transcription, [ 23 ] and protein degradation. [ 24 ]
The light-second is a unit of length useful in astronomy , telecommunications and relativistic physics . It is defined as the distance that light travels in free space in one second , and is equal to exactly 299 792 458 m (approximately 983 571 055 ft or 186 282 miles ).
Just as the second forms the basis for other units of time , the light-second can form the basis for other units of length , ranging from the light-nanosecond ( 299.8 mm or just under one international foot) to the light-minute, light-hour and light-day, which are sometimes used in popular science publications. The more commonly used light-year is also currently defined to be equal to precisely 31 557 600 light-seconds , since the definition of a year is based on a Julian year (not the Gregorian year ) of exactly 365.25 d , each of exactly 86 400 SI seconds . [ 1 ]
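As a worked illustration (a short sketch added here, not from the source), each of these multiples follows from multiplying the corresponding duration by the defined speed of light:

C = 299_792_458            # speed of light in m/s (exact, by definition)

light_nanosecond = C * 1e-9        # ~0.2998 m, just under one international foot
light_minute = C * 60              # ~1.799e10 m
light_day = C * 86_400             # ~2.59e13 m
light_year = C * 31_557_600        # exactly 9_460_730_472_580_800 m

print(light_nanosecond, light_minute, light_day, light_year)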
Communications signals on Earth rarely travel at precisely the speed of light in free space. [ citation needed ] Distances in fractions of a light-second are useful for planning telecommunications networks.
The light-second is a convenient unit for measuring distances in the inner Solar System , since it corresponds very closely to the radiometric data used to determine them. (The match is not exact for an Earth-based observer because of a very small correction for the effects of relativity .) The value of the astronomical unit (roughly the distance between Earth and the Sun) in light-seconds is a fundamental measurement for the calculation of modern ephemerides (tables of planetary positions). It is usually quoted as "light-time for unit distance" in tables of astronomical constants , and its currently accepted value is 499.004 786 385(20) s. [ 3 ] [ 4 ]
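As a further illustrative check (again a sketch, not from the source), multiplying the light-time for unit distance by the speed of light recovers the astronomical unit:

C = 299_792_458                               # m/s, exact
light_time_for_unit_distance = 499.004786385  # seconds
print(C * light_time_for_unit_distance)       # ~1.4960e11 m, i.e. roughly 149.6 million km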
Multiples of the light-second can be defined, although apart from the light-year, they are more used in popular science publications than in research works. For example: | https://en.wikipedia.org/wiki/Light-second |
A Light Aid Detachment is an attached independent minor unit of the Royal Electrical and Mechanical Engineers , Royal Canadian Electrical and Mechanical Engineers , Royal Australian Electrical and Mechanical Engineers , or Royal New Zealand Army Logistic Regiment operating as a sub-unit of the supported unit. These units provide dedicated logistic support to every field unit of the Australian Army , British Army , Canadian Army or New Zealand Army .
RAEME, REME, RCEME and the NZEME were created in October 1942 out of elements of the Royal Australian Army Ordnance Corps , Royal Army Ordnance Corps , Royal Engineers, Royal Corps of Signals, Royal Army Service Corps , Royal Canadian Ordnance Corps and the New Zealand Ordnance Corps , which previously handled functions such as the repair of weapons, optics and vehicles. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
In the RCEME, LADs were divisions of larger units known as Workshops. [ 3 ] In the British Army the title Workshop (Wksp) is used both for major REME units (Field for Brigades or Armoured for Divisions) and for those minor units which provide some 2nd Line support to the parent regiment. The term LAD is therefore restricted to only those minor REME units which solely provide 1st Line support, typically to Armour and Infantry units. REME minor units supporting RA, R Signals, RE, RLC etc. are normally titled as Wksp as they also provide some degree of 2nd Line support to the parent unit. [ citation needed ]
Typically composed of around 60–80 personnel, they are attached to a host battalion. A typical field deployment would split the LAD/Wksp into a regimental "B Echelon" contingent of about 30 men and 4 "fitter sections" of about 7–12 men, each of which is attached to a company/squadron. The fitter sections are part of the A Echelon HQ of the company/squadron. This average configuration does, of course, vary widely depending on the parent unit and its equipment.
Light scattering spectroscopy (LSS) is a spectroscopic technique typically used to evaluate morphological changes in epithelial cells in order to study mucosal tissue and detect early cancer and precancer . [ 1 ] [ 2 ] [ 3 ]
Light scattering spectroscopy relies upon elastic scattering of photons reflected from the epithelium . Most of the signal is generated by light scattering from small intracellular structures, but larger intracellular structures, such as nuclei, also scatter light, with their relative contribution increasing in the backscatter direction. As changes in the morphology of epithelial cells are hallmarks of pre-cancer and early cancer, LSS can be used for early cancer diagnosis.
In addition to photons backscattering from epithelial cells, a major portion of photons penetrates the epithelium, reaching optically turbid connective tissue, where they are scattered multiple times and partially absorbed by hemoglobin . As a result, it is not possible to measure single backscattering events directly in human tissue; [ 4 ] polarization gating [ 5 ] and spatial gating [ 6 ] are well suited for endoscopy applications. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 6 ] [ 11 ]
Lev T. Perelman , principal scientist at MIT , and Vadim Backman , a graduate student in the Harvard– MIT Health Sciences and Technology program, introduced LSS in 1998. [ 1 ]
Light scattering spectroscopy has been applied for detection of precancer in many organs including esophagus, [ 1 ] [ 2 ] [ 3 ] [ 9 ] [ 10 ] [ 12 ] colon, [ 2 ] [ 13 ] [ 14 ] urinary bladder, [ 2 ] oral cavity, [ 2 ] cervix, [ 15 ] [ 16 ] pancreatic cyst, [ 11 ] [ 17 ] stomach, [ 18 ] skin, [ 19 ] and bile duct. [ 11 ] | https://en.wikipedia.org/wiki/Light_Scattering_Spectroscopy |
An aluminium alloy ( UK / IUPAC ) or aluminum alloy ( NA ; see spelling differences ) is an alloy in which aluminium (Al) is the predominant metal. The typical alloying elements are copper , magnesium , manganese , silicon , tin , nickel and zinc . There are two principal classifications, namely casting alloys and wrought alloys, both of which are further subdivided into the categories heat-treatable and non-heat-treatable. About 85% of aluminium is used for wrought products, for example rolled plate, foils and extrusions . Cast aluminium alloys yield cost-effective products due to the low melting point, although they generally have lower tensile strengths than wrought alloys. The most important cast aluminium alloy system is Al–Si , where the high levels of silicon (4–13%) contribute to give good casting characteristics. Aluminium alloys are widely used in engineering structures and components where light weight or corrosion resistance is required. [ 1 ]
Alloys composed mostly of aluminium have been very important in aerospace manufacturing since the introduction of metal-skinned aircraft. Aluminium–magnesium alloys are both lighter than other aluminium alloys and much less flammable than other alloys that contain a very high percentage of magnesium. [ 2 ]
Aluminium alloy surfaces will develop a white, protective layer of aluminium oxide if left unprotected by anodizing and/or correct painting procedures. In a wet environment, galvanic corrosion can occur when an aluminium alloy is placed in electrical contact with other metals with more positive corrosion potentials than aluminium, and an electrolyte is present that allows ion exchange. Also referred to as dissimilar-metal corrosion, this process can occur as exfoliation or as intergranular corrosion. Aluminium alloys can be improperly heat treated, causing internal element separation which corrodes the metal from the inside out. [ citation needed ]
Aluminium alloy compositions are registered with The Aluminum Association . Many organizations publish more specific standards for the manufacture of aluminium alloy, including the SAE International standards organization, specifically its aerospace standards subgroups, [ 3 ] and ASTM International .
Aluminium alloys with a wide range of properties are used in engineering structures. Alloy systems are classified by a number system ( ANSI ) or by names indicating their main alloying constituents ( DIN and ISO ). Selecting the right alloy for a given application entails considerations of its tensile strength , density , ductility , formability, workability, weldability , and corrosion resistance, to name a few. A brief historical overview of alloys and manufacturing technologies is given in Ref. [ 4 ] Aluminium alloys are used extensively in aircraft due to their high strength-to-weight ratio . Pure aluminium is much too soft for such uses, and it does not have the high tensile strength that is needed for building airplanes and helicopters .
Aluminium alloys typically have an elastic modulus of about 70 GPa , which is about one-third of the elastic modulus of steel alloys . Therefore, for a given load, a component or unit made of an aluminium alloy will experience a greater deformation in the elastic regime than a steel part of identical size and shape. With completely new metal products, the design choices are often governed by the choice of manufacturing technology. Extrusions are particularly important in this regard, owing to the ease with which aluminium alloys, particularly the Al-Mg-Si series, can be extruded to form complex profiles.
In general, stiffer and lighter designs can be achieved with aluminium alloy than is feasible with steels. For instance, consider the bending of a thin-walled tube: the second moment of area is inversely related to the stress in the tube wall, i.e. stresses are lower for larger values. The second moment of area is proportional to the cube of the radius times the wall thickness, thus increasing the radius (and weight) by 26% will lead to a halving of the wall stress. For this reason, bicycle frames made of aluminium alloys make use of larger tube diameters than steel or titanium in order to yield the desired stiffness and strength. In automotive engineering, cars made of aluminium alloys employ space frames made of extruded profiles to ensure rigidity. This represents a radical change from the common approach for current steel car design, which depends on the body shell for stiffness, known as unibody design.
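A small numerical sketch of the scaling argument above (illustrative only; it uses the simple proportionality stated in the text, stress ∝ 1/(r³·t), rather than a full beam analysis):

def relative_wall_stress(radius_scale, thickness_scale=1.0):
    # per the text, bending stress scales inversely with radius^3 * wall thickness
    return 1.0 / (radius_scale ** 3 * thickness_scale)

print(relative_wall_stress(1.00))   # 1.00  (baseline tube)
print(relative_wall_stress(1.26))   # ~0.50 (a 26% larger radius roughly halves the wall stress)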
Aluminium alloys are widely used in automotive engines, particularly in engine blocks and crankcases due to the weight savings that are possible. Since aluminium alloys are susceptible to warping at elevated temperatures, the cooling system of such engines is critical. Manufacturing techniques and metallurgical advancements have also been instrumental for the successful application in automotive engines. In the 1960s, the aluminium cylinder heads of the Chevrolet Corvair earned a reputation for failure and stripping of threads , which is not seen in current aluminium cylinder heads.
An important structural limitation of aluminium alloys is their lower fatigue strength compared to steel. In controlled laboratory conditions, steels display a fatigue limit , which is the stress amplitude below which no failures occur – the metal does not continue to weaken with extended stress cycles. Aluminium alloys do not have this lower fatigue limit and will continue to weaken with continued stress cycles. Aluminium alloys are therefore sparsely used in parts that require high fatigue strength in the high cycle regime (more than 10^7 stress cycles).
Often, the metal's sensitivity to heat must also be considered. Even a relatively routine workshop procedure involving heating is complicated by the fact that aluminium, unlike steel, will melt without first glowing red. Forming operations where a blow torch is used can reverse or remove the effects of heat treatment, and no visual signs reveal how the material has been internally damaged. Much as when welding heat-treated, high-strength link chain, all strength is lost to the heat of the torch; the chain is dangerous and must be discarded. [ citation needed ]
Aluminium is subject to internal stresses and strains. Sometimes years later, improperly welded aluminium bicycle frames may gradually twist out of alignment from the stresses of the welding process. Thus, the aerospace industry avoids heat altogether by joining parts with rivets of like metal composition, other fasteners, or adhesives.
Stresses in overheated aluminium can be relieved by heat-treating the parts in an oven and gradually cooling it—in effect annealing the stresses. Yet these parts may still become distorted, so that heat-treating of welded bicycle frames, for instance, can result in a significant fraction becoming misaligned. If the misalignment is not too severe, the cooled parts may be bent into alignment. If the frame is properly designed for rigidity (see above), that bending will require enormous force. [ citation needed ]
Aluminium's intolerance to high temperatures has not precluded its use in rocketry; even for use in constructing combustion chambers where gases can reach 3500 K. The RM-81 Agena upper stage engine used a regeneratively cooled aluminium design for some parts of the nozzle, including the thermally critical throat region; in fact the extremely high thermal conductivity of aluminium prevented the throat from reaching the melting point even under massive heat flux, resulting in a reliable, lightweight component.
Because of its high conductivity and relatively low price compared with copper in the 1960s, aluminium was introduced at that time for household electrical wiring in North America, even though many fixtures had not been designed to accept aluminium wire. But the new use brought some problems:
All of this resulted in overheated and loose connections, and this in turn resulted in some fires. Builders then became wary of using the wire, and many jurisdictions outlawed its use in very small sizes, in new construction. Yet newer fixtures eventually were introduced with connections designed to avoid loosening and overheating. At first they were marked "Al/Cu", but they now bear a "CO/ALR" coding.
Another way to forestall the heating problem is to crimp the short " pigtail " of copper wire. A properly done high-pressure crimp by the proper tool is tight enough to reduce any thermal expansion of the aluminium. Today, new alloys, designs, and methods are used for aluminium wiring in combination with aluminium terminations.
Wrought and cast aluminium alloys use different identification systems. Wrought aluminium is identified with a four digit number which identifies the alloying elements.
Cast aluminium alloys use a four to five digit number with a decimal point. The digit in the hundreds place indicates the alloying elements, while the digit after the decimal point indicates the form (cast shape or ingot).
The temper designation follows the cast or wrought designation number with a dash, a letter, and potentially a one to three digit number, e.g. 6061-T6. The definitions for the tempers are: [ 5 ] [ 6 ]
-F : As fabricated
-H : Strain hardened (cold worked) with or without thermal treatment
-O : Full soft (annealed)
-T : Heat treated to produce stable tempers
-W : Solution heat treated only
Note: -W is a relatively soft intermediary designation that applies after heat treatment and before aging is completed. The -W condition can be extended at extremely low temperatures, but not indefinitely; depending on the material, it will typically last no longer than 15 minutes at ambient temperatures.
The International Alloy Designation System is the most widely accepted naming scheme for wrought alloys. Each alloy is given a four-digit number, where the first digit indicates the major alloying elements, the second — if different from 0 — indicates a variation of the alloy, and the third and fourth digits identify the specific alloy in the series. For example, in alloy 3105, the number 3 indicates the alloy is in the manganese series, 1 indicates the first modification of alloy 3005, and finally 05 identifies it in the 3000 series. [ 7 ]
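As an illustrative sketch (not part of the standard itself) of how a wrought designation such as "6061-T6" decomposes under this scheme, the series-to-element mapping below simply restates the series descriptions that follow:

SERIES_MAIN_ELEMENT = {      # first digit -> principal alloying element (1xxx is essentially pure Al)
    "1": "none (>= 99% Al)", "2": "copper", "3": "manganese", "4": "silicon",
    "5": "magnesium", "6": "magnesium and silicon", "7": "zinc", "8": "other elements",
}

def parse_wrought_designation(designation):
    alloy, _, temper = designation.partition("-")
    return {
        "series": alloy[0] + "000",
        "main_alloying_element": SERIES_MAIN_ELEMENT[alloy[0]],
        "modification": alloy[1],       # 0 indicates the original alloy
        "alloy_id": alloy[2:4],
        "temper": temper or None,       # e.g. "T6"
    }

print(parse_wrought_designation("3105"))     # series 3000, first modification, alloy 05
print(parse_wrought_designation("6061-T6"))  # series 6000, temper T6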
1000 series are essentially pure aluminium with a minimum 99% aluminium content by weight and can be work hardened .
2000 series are alloyed with copper, can be precipitation hardened to strengths comparable to steel. Formerly referred to as duralumin , they were once the most common aerospace alloys, but were susceptible to stress corrosion cracking and are increasingly replaced by 7000 series in new designs.
3000 series are alloyed with manganese , and can be work hardened .
4000 series are alloyed with silicon. Variations of aluminium–silicon alloys intended for casting (and therefore not included in 4000 series) are also known as silumin .
5000 series are alloyed with magnesium, and offer superb corrosion resistance, making them suitable for marine applications. 5083 alloy has the highest strength of non-heat-treated alloys. Most 5000 series alloys include manganese as well.
6000 series are alloyed with magnesium and silicon. They are easy to machine, are weldable , and can be precipitation hardened, but not to the high strengths that 2000 and 7000 can reach. 6061 alloy is one of the most commonly used general-purpose aluminium alloys.
7000 series are alloyed with zinc, and can be precipitation hardened to the highest strengths of any aluminium alloy. Most 7000 series alloys include magnesium and copper as well.
8000 series are alloyed with other elements which are not covered by other series. Aluminium–lithium alloys are an example. [ 45 ]
The Aluminum Association (AA) has adopted a nomenclature similar to that of wrought alloys. British Standard and DIN have different designations. In the AA system, the second two digits reveal the minimum percentage of aluminium, e.g. 150.x corresponds to a minimum of 99.50% aluminium. The digit after the decimal point takes a value of 0 or 1, denoting casting and ingot respectively. [ 1 ] The main alloying elements in the AA system are as follows: [ 51 ]
Titanium alloys , which are stronger but heavier than Al-Sc alloys, are still much more widely used. [ 57 ]
The main application of metallic scandium by weight is in aluminium–scandium alloys for minor aerospace industry components. These alloys contain between 0.1% and 0.5% (by weight) of scandium. They were used in the Russian military aircraft MiG-21 and MiG-29 . [ 56 ]
Some items of sports equipment, which rely on high performance materials, have been made with scandium–aluminium alloys, including baseball bats , [ 58 ] lacrosse sticks, as well as bicycle [ 59 ] frames and components, and tent poles.
U.S. gunmaker Smith & Wesson produces revolvers with frames composed of scandium alloy and cylinders of titanium. [ 60 ]
Due to their light weight and high strength, aluminium alloys are attractive materials for spacecraft, satellites and other components deployed in space. However, this application is limited by the energetic particle irradiation emitted by the Sun . The impact and deposition of solar energetic particles within the microstructure of conventional aluminium alloys can induce the dissolution of most common hardening phases, leading to softening. The recently introduced crossover aluminium alloys [ 61 ] [ 62 ] are being tested as a surrogate for 6xxx and 7xxx series alloys in environments where energetic particle irradiation is a major concern. Such crossover aluminium alloys can be hardened via precipitation of a chemical complex phase known as T-phase, whose radiation resistance has been shown to be superior to that of other hardening phases of conventional aluminium alloys. [ 63 ] [ 64 ]
The following aluminium alloys are commonly used in aircraft and other aerospace structures: [ 65 ] [ 66 ]
Note that the term aircraft aluminium or aerospace aluminium usually refers to 7075. [ 67 ] [ 68 ]
4047 aluminium is a unique alloy used in both aerospace and automotive applications as a cladding alloy or filler material. As a filler, aluminium alloy 4047 strips can be used in intricate applications to bond two metals. [ 69 ]
6951 is a heat treatable alloy providing additional strength to the fins while increasing sag resistance; this allows the manufacturer to reduce the gauge of the sheet and therefore reducing the weight of the formed fin. These distinctive features make aluminium alloy 6951 one of the preferred alloys for heat transfer and heat exchangers manufactured for aerospace applications. [ 70 ]
6063 aluminium alloys are heat treatable with moderately high strength, excellent corrosion resistance and good extrudability.
They are regularly used as architectural and structural members. [ 71 ]
The following aluminium alloys are currently produced, [ citation needed ] but are less widely [ citation needed ] used:
These alloys are used for boat building and shipbuilding, and other marine and salt-water sensitive shore applications. [ 72 ]
4043, 5183, 6005A, and 6082 are also used in marine construction and offshore applications.
6111 aluminium and 2008 aluminium alloy are extensively used for external automotive body panels , with 5083 and 5754 used for inner body panels. Bonnets have been manufactured from 2036 , 6016 , and 6111 alloys. Truck and trailer body panels have used 5456 aluminium .
Automobile frames often use 5182 aluminium or 5754 aluminium formed sheets, 6061 or 6063 extrusions.
Wheels have been cast from A356.0 aluminium or formed 5xxx sheet. [ 73 ]
Engine blocks and crankcases are often cast from aluminium alloys. The most popular aluminium alloys used for cylinder blocks are A356, 319 and, to a minor extent, 242.
Aluminium alloys containing cerium are being developed and implemented in high-temperature automotive applications, such as cylinder heads and turbochargers , and in other energy generation applications. [ 74 ] These alloys were initially developed as a way to increase the usage of cerium, which is over-produced in rare-earth mining operations for more coveted elements such as neodymium and dysprosium , [ 75 ] but gained attention for their strength at high temperatures over long periods of time. [ 76 ] The alloys gain their strength from the presence of an Al 11 Ce 3 intermetallic phase, which is stable up to temperatures of 540 °C; they retain their strength up to 300 °C, making them quite viable at elevated temperatures. Aluminium–cerium alloys are typically cast, due to their excellent casting properties, although work has also been done to show that laser-based additive manufacturing techniques can be used as well to create parts with more complex geometries and improved mechanical properties. [ 77 ] Recent work has largely focused on adding higher-order alloying elements, such as iron , nickel , magnesium , or copper , to the binary Al-Ce system to improve its mechanical performance at room and elevated temperatures, and work is being done to understand the alloying element interactions further. [ 78 ]
6061 aluminium and 6351 aluminium are alloys widely used in breathing gas cylinders for scuba diving and SCBA . [ 79 ]
A light booth is an apparatus which simulates lighting conditions. This apparatus is used to test products under a variety of lighting conditions, meaning that the user can accurately show how that product will appear under a variety of conditions independent of environmental influences. [ 1 ] Light booths are primarily used in the painting industry to test the finish of products under controlled conditions. [ 2 ]
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Light_booth |
In special and general relativity , a light cone (or "null cone") is the path that a flash of light, emanating from a single event (localized to a single point in space and a single moment in time) and traveling in all directions, would take through spacetime .
If one imagines the light confined to a two-dimensional plane, the light from the flash spreads out in a circle after the event E occurs, and if we graph the growing circle with the vertical axis of the graph representing time, the result is a cone , known as the future light cone. The past light cone behaves like the future light cone in reverse, a circle which contracts in radius at the speed of light until it converges to a point at the exact position and time of the event E. In reality, there are three space dimensions , so the light would actually form an expanding or contracting sphere in three-dimensional (3D) space rather than a circle in 2D, and the light cone would actually be a four-dimensional version of a cone whose cross-sections form 3D spheres (analogous to a normal three-dimensional cone whose cross-sections form 2D circles), but the concept is easier to visualize with the number of spatial dimensions reduced from three to two.
This view of special relativity was first proposed by Albert Einstein 's former professor Hermann Minkowski and is known as Minkowski space . The purpose was to create an invariant spacetime for all observers. To uphold causality , Minkowski restricted spacetime to non-Euclidean hyperbolic geometry . [ 1 ] [ page needed ]
Because signals and other causal influences cannot travel faster than light (see special relativity ), the light cone plays an essential role in defining the concept of causality : for a given event E, the set of events that lie on or inside the past light cone of E would also be the set of all events that could send a signal that would have time to reach E and influence it in some way. For example, at a time ten years before E, if we consider the set of all events in the past light cone of E which occur at that time, the result would be a sphere (2D: disk) with a radius of ten light-years centered on the position where E will occur. So, any point on or inside the sphere could send a signal moving at the speed of light or slower that would have time to influence the event E, while points outside the sphere at that moment would not be able to have any causal influence on E. Likewise, the set of events that lie on or inside the future light cone of E would also be the set of events that could receive a signal sent out from the position and time of E, so the future light cone contains all the events that could potentially be causally influenced by E. Events which lie neither in the past nor the future light cone of E cannot influence or be influenced by E in relativity. [ 2 ]
In special relativity , a light cone (or null cone ) is the surface describing the temporal evolution of a flash of light in Minkowski spacetime . This can be visualized in 3-space if the two horizontal axes are chosen to be spatial dimensions, while the vertical axis is time. [ 3 ]
The light cone is constructed as follows. Taking as event p a flash of light (light pulse) at time t 0 , all events that can be reached by this pulse from p form the future light cone of p , while those events that can send a light pulse to p form the past light cone of p .
Given an event E , the light cone classifies all events in spacetime into 5 distinct categories: events on the future light cone of E; events inside the future light cone of E, which could be affected by a signal or matter emitted at E; events on the past light cone of E; events inside the past light cone of E, which could have sent a signal or matter that reaches E; and all other events, which lie in the (absolute) elsewhere of E and can neither affect nor be affected by E.
The above classifications hold true in any frame of reference; that is, an event judged to be in the light cone by one observer, will also be judged to be in the same light cone by all other observers, no matter their frame of reference.
The above refers to an event occurring at a specific location and at a specific time. To say that one event cannot affect another means that light cannot get from the location of one to the other in a given amount of time . Light from each event will ultimately make it to the former location of the other, but after those events have occurred.
As time progresses, the future light cone of a given event will eventually grow to encompass more and more locations (in other words, the 3D sphere that represents the cross-section of the 4D light cone at a particular moment in time becomes larger at later times). However, if we imagine running time backwards from a given event, the event's past light cone would likewise encompass more and more locations at earlier and earlier times. The farther locations will be at earlier times: for example, if we are considering the past light cone of an event which takes place on Earth today, a star 10,000 light years away would only be inside the past light cone at times 10,000 years or more in the past. The past light cone of an event on present-day Earth, at its very edges, includes very distant objects (every object in the observable universe ), but only as they looked long ago, when the known universe was young.
Two events at different locations, at the same time (according to a specific frame of reference), are always outside each other's past and future light cones; light cannot travel instantaneously. Other observers might see the events happening at different times and at different locations, but one way or another, the two events will likewise be seen to be outside each other's cones.
If using a system of units where the speed of light in vacuum is defined as exactly 1, for example if space is measured in light-seconds and time is measured in seconds, then, provided the time axis is drawn orthogonally to the spatial axes, as the cone bisects the time and space axes, it will show a slope of 45°, because light travels a distance of one light-second in vacuum during one second. Since special relativity requires the speed of light to be equal in every inertial frame , all observers must arrive at the same angle of 45° for their light cones. Commonly a Minkowski diagram is used to illustrate this property of Lorentz transformations .
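The causal classification described above can be made concrete with a short calculation. The following sketch, in Python and in units where the speed of light is 1 (e.g. seconds and light-seconds, as above), classifies an event relative to a reference event at the origin by the sign of the invariant interval; the function name and the sample coordinates are illustrative only.

```python
def classify_event(t, x, y, z):
    """Classify an event (t, x, y, z) relative to a reference event at the
    origin, using units in which the speed of light c = 1 (e.g. seconds and
    light-seconds).  The invariant interval is s^2 = t^2 - x^2 - y^2 - z^2."""
    s2 = t**2 - (x**2 + y**2 + z**2)
    if abs(s2) < 1e-12:                      # on the light cone (lightlike)
        return "on the future light cone" if t > 0 else "on the past light cone"
    if s2 > 0:                               # inside the cone (timelike)
        return "inside the future light cone" if t > 0 else "inside the past light cone"
    return "elsewhere (spacelike separated)"  # outside the cone

# A flash 10,000 light-years away, seen today, left its star 10,000 years ago:
print(classify_event(-10_000, 10_000, 0, 0))   # on the past light cone
print(classify_event(-5_000, 10_000, 0, 0))    # elsewhere: too recent to have reached us
```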
The region of spacetime outside the light cone at a given event (a point in spacetime) is known as elsewhere . Events that are elsewhere relative to each other are mutually unobservable, and cannot be causally connected.
(The 45° figure really only has meaning in space-space, as we try to understand space-time by making space-space drawings. Space-space tilt is measured by angles , and calculated with trig functions . Space-time tilt is measured by rapidity , and calculated with hyperbolic functions .)
In flat spacetime, the future light cone of an event is the boundary of its causal future and its past light cone is the boundary of its causal past .
In a curved spacetime, assuming spacetime is globally hyperbolic , it is still true that the future light cone of an event includes the boundary of its causal future (and similarly for the past). However gravitational lensing can cause part of the light cone to fold in on itself, in such a way that part of the cone is strictly inside the causal future (or past), and not on the boundary.
Light cones also cannot all be tilted so that they are 'parallel'; this reflects the fact that the spacetime is curved and is essentially different from Minkowski space. In vacuum regions (those points of spacetime free of matter), this inability to tilt all the light cones so that they are all parallel is reflected in the non-vanishing of the Weyl tensor . | https://en.wikipedia.org/wiki/Light_cone |
In theoretical physics , light cone gauge is an approach to remove the ambiguities arising from a gauge symmetry . While the term refers to several situations, a null component of a field A is set to zero (or a simple function of other variables) in all cases. [ 1 ] [ 2 ]
The advantage of light-cone gauge is that fields, e.g. gluons in the QCD case, are transverse. Consequently, all ghosts and other unphysical degrees of freedom are eliminated. The disadvantage is that some symmetries such as Lorentz symmetry become obscured (they become non-manifest, i.e. hard to prove).
In gauge theory , light-cone gauge refers to the condition A + = 0 {\displaystyle A^{+}=0} , where A + {\displaystyle A^{+}} is the null (light-cone) component of the gauge field A {\displaystyle A} .
It is a method to get rid of the redundancies implied by Yang–Mills symmetry.
In string theory , light-cone gauge fixes the reparameterization invariance on the world sheet by setting X + = p + τ {\displaystyle X^{+}=p^{+}\tau } ,
where p + {\displaystyle p^{+}} is a constant and τ {\displaystyle \tau } is the worldsheet time.
This article about theoretical physics is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Light_cone_gauge |
Light crude oil is liquid petroleum that has a low density and flows freely at room temperature . [ 1 ] It has a low viscosity , low specific gravity and high API gravity due to the presence of a high proportion of light hydrocarbon fractions . [ 2 ] It generally has a low wax content. Light crude oil receives a higher price than heavy crude oil on commodity markets because it produces a higher percentage of gasoline and diesel fuel when converted into products by an oil refinery .
The clear-cut definition of light and heavy crude varies because the classification is based more on practical than theoretical grounds. The New York Mercantile Exchange (NYMEX) defines light crude oil for domestic U.S. oil as having an API gravity between 37° API (840 kg/m 3 ) and 42° API (816 kg/m 3 ), while it defines light crude oil for non-U.S. oil as being between 32° API (865 kg/m 3 ) and 42° API (816 kg/m 3 ). [ 3 ] The National Energy Board of Canada defines light crude oil as having a density less than 875.7 kg/m 3 (API gravity greater than 30.1° API). [ 4 ] The government of Alberta, the province which produces most of the oil in Canada, disagrees and defines it as oil with a density less than 850 kg/m 3 (API gravity greater than 35° API). [ 5 ] The Mexican state oil company, Pemex , defines light crude oil as being between 27° API (893 kg/m 3 ) and 38° API (835 kg/m 3 ). [ 6 ] This variation in definition occurred because countries such as Canada and Mexico tend to have heavier crude oils than are commonly found in the United States, whose large oil fields historically produced lighter oils than are found in many other countries. Canada also uses SI units to measure oil rather than the conventional units of the American oil industry, and the base temperature for density calculations is different in Canada (15 °C, 59 °F) than in the US (60 °F, 15.56 °C), resulting in slightly different density values.
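The API gravity figures quoted above can be related to densities with the standard conversion API = 141.5/SG − 131.5, where SG is the specific gravity relative to water at 60 °F. The sketch below assumes a water density of roughly 999 kg/m 3 at 60 °F, so it only approximately reproduces the rounded values in the definitions above.

```python
WATER_DENSITY_60F = 999.0  # kg/m^3, approximate density of water at 60 deg F

def api_from_density(rho_kg_m3):
    """Convert an oil density in kg/m^3 to degrees API gravity."""
    sg = rho_kg_m3 / WATER_DENSITY_60F          # specific gravity at 60 deg F
    return 141.5 / sg - 131.5

def density_from_api(api_deg):
    """Convert degrees API gravity back to density in kg/m^3."""
    sg = 141.5 / (api_deg + 131.5)
    return sg * WATER_DENSITY_60F

# NYMEX light-crude band for domestic U.S. oil, 37 to 42 deg API:
print(round(density_from_api(37.0)))      # ~839 kg/m^3 (the article quotes 840)
print(round(density_from_api(42.0)))      # ~815 kg/m^3 (the article quotes 816)
print(round(api_from_density(875.7), 1))  # ~29.9 deg API, close to the NEB's 30.1 deg
                                          # boundary (reference temperatures differ)
```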
A wide variety of benchmark crude oils worldwide are considered to be light. The most prominent in North America is West Texas Intermediate which has an API gravity of 39.6° API (827 kg/m 3 ). It is often referred to by publications when quoting oil prices. The most commonly referenced benchmark oil from Europe is Brent Crude , which is 38.06° API (835 kg/m 3 ). The third most commonly quoted benchmark is Dubai Crude , which is 31° API (871 kg/m 3 ). This is considered light by Arabian standards but would not be considered light if produced in the U.S.
The largest oil field in the world, Saudi Arabia's Ghawar field, produces light crude oils ranging from 33° API (860 kg/m 3 ) to 40° API (825 kg/m 3 ).
In the United States, the price of the front month light sweet crude oil futures contract , traded on the NYMEX commodity exchange (symbol CL), is widely reported as a proxy for the cost of imported crude oil. These contracts have delivery dates in all 12 months of the year. [ 7 ] From below $20 a barrel in early 2002, it rose to an intraday peak of $70.85 at the end of August 2005 in the aftermath of Hurricane Katrina . A new intraday record high of $78.40 was set on July 14, 2006, prompted by the firing of at least six missiles by North Korea on July 4–5, 2006, and escalating Middle East violence.
Subsequently, the price declined until on October 11, 2006, the price closed at $66.04. But, by August 2007, the price had reached a record high of $78.71, amid production output concerns in the North Sea and Nigeria. On November 29, 2007, the price peaked at $98.70 intraday after closing at $98.03 the previous day. [ 8 ] The price of light crude set a new intraday high on May 21, 2008, of $133.45 and closed at $133.17. A new high was reached on July 11, 2008, as prices temporarily reached $147.27 a barrel.
Light crude oil is traded on the CME Globex, CME ClearPort, ( CME Group ) and Open Outcry (New York) futures exchange venues and is quoted in U.S. dollars and cents per barrel. Its product symbol is "CL" and its contract size is 1,000 barrels (160 m 3 ) with a minimum fluctuation of $0.01 per barrel. [ 9 ] | https://en.wikipedia.org/wiki/Light_crude_oil |
In botany , a light curve shows the photosynthetic response of leaf tissue or algal communities to varying light intensities. The shape of the curve illustrates the principle of limiting factors ; in low light levels, the rate of photosynthesis is limited by the concentration of chlorophyll and the efficiency of the light-dependent reactions , but in higher light levels it is limited by the efficiency of RuBisCo and the availability of carbon dioxide . The point on the curve where these two differing slopes meet is called the light saturation point and is where the light-dependent reactions are producing more ATP and NADPH than can be utilized by the light-independent reactions . Since photosynthesis is also limited by ambient carbon dioxide levels, light curves are often repeated at several different constant carbon dioxide concentrations. [ 1 ]
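The two limiting regimes and the light saturation point can be illustrated with a simple saturating light-response model. The sketch below uses a rectangular-hyperbola (Michaelis–Menten-type) form, which is only one of several parameterizations used in practice; all parameter values are illustrative rather than taken from any particular study.

```python
def net_photosynthesis(irradiance, p_max=20.0, k=250.0, r_d=1.5):
    """Illustrative rectangular-hyperbola light-response model.

    irradiance : photosynthetic photon flux density (umol m^-2 s^-1)
    p_max      : light-saturated gross photosynthetic rate
    k          : irradiance at which gross photosynthesis is half of p_max
    r_d        : dark respiration rate
    Returns the net assimilation rate in the same units as p_max.
    """
    gross = p_max * irradiance / (irradiance + k)
    return gross - r_d

for i in (0, 50, 100, 500, 1000, 2000):
    print(i, round(net_photosynthesis(i), 2))
# At low irradiance the response is nearly linear (light-limited);
# at high irradiance it flattens toward p_max - r_d (CO2/RuBisCO-limited).
```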
This photosynthesis article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Light_curve_(botany) |
A light echo is a physical phenomenon caused by light reflected off surfaces distant from the source, and arriving at the observer with a delay relative to this distance. The phenomenon is analogous to an echo of sound , but due to the much faster speed of light , it mostly manifests itself only over astronomical distances.
For example, a light echo is produced when a sudden flash from a nova is reflected off a cosmic dust cloud , and arrives at the viewer after a longer duration than it otherwise would have taken with a direct path. Because of their geometries , light echoes can produce the illusion of superluminal motion . [ 1 ]
Light echoes are produced when the initial flash from a rapidly brightening object such as a nova is reflected off intervening interstellar dust which may or may not be in the immediate vicinity of the source of the light. Light from the initial flash arrives at the viewer first, while light reflected from dust or other objects between the source and the viewer begins to arrive shortly afterward. Because this reflected light has travelled forward towards the viewer as well as outward from the star, its path is only slightly longer than that of the direct flash, and it produces the illusion of an echo expanding faster than the speed of light . [ 3 ]
In the first illustration above, light following path A is emitted from the original source and arrives at the observer first. Light which follows path B is reflected off a part of the gas cloud at a point between the source and the observer, and light following path C is reflected off a part of the gas cloud perpendicular to the direct path. Although light following paths B and C appear to come from the same point in the sky to the observer, B is actually significantly closer. As a result, the echo of the event in an evenly distributed (spherical) cloud for example will appear to the observer to expand at a rate approaching or faster than the speed of light, because the observer may assume the light from B is actually the light from C.
All reflected light rays that originate from the flash and arrive at Earth together will have traveled the same distance. When the rays of light are reflected, the possible paths between the source and Earth that arrive at the same time correspond to reflections on an ellipsoid , with the origin of the flash and Earth as its two foci (see animation to the right). This ellipsoid naturally expands over time.
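For an observer far from the source, the ellipsoid is well approximated by a paraboloid: dust that reflects the flash with a delay t after the direct light lies on the surface z = ρ²/(2ct) − ct/2, where z is the distance from the source toward the observer and ρ is the projected (on-sky) radius. The sketch below, with purely illustrative numbers, uses this approximation to show how the echo ring on a thin dust sheet in front of the source can appear to expand faster than light.

```python
import math

C = 1.0  # speed of light in light-years per year

def ring_radius(t_years, sheet_distance_ly):
    """Projected radius (ly) of the echo ring on a thin dust sheet a distance
    `sheet_distance_ly` in front of the source, `t_years` after the flash is
    first seen, in the distant-observer (paraboloid) approximation:
        rho^2 = 2 z c t + (c t)^2,  with z = sheet_distance_ly."""
    z, ct = sheet_distance_ly, C * t_years
    return math.sqrt(2.0 * z * ct + ct**2)

# Illustrative numbers only: a dust sheet 2 light-years in front of the source.
for t in (0.1, 0.5, 1.0, 2.0):
    rho_now = ring_radius(t, 2.0)
    rho_next = ring_radius(t + 0.01, 2.0)
    apparent_speed = (rho_next - rho_now) / 0.01    # in units of c
    print(f"t = {t:4.1f} yr  radius = {rho_now:5.2f} ly  "
          f"apparent expansion ~ {apparent_speed:4.1f} c")
# The ring's apparent expansion exceeds c at early times even though
# no light ever travels faster than c.
```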
The variable star V838 Monocerotis experienced a significant outburst in 2002 as observed by the Hubble Space Telescope . The outburst proved surprising to observers when the object appeared to expand at a rate far exceeding the speed of light as it grew from an apparent visual size of 4 to 7 light years in a matter of months. [ 3 ] [ 4 ]
Using light echoes, it is sometimes possible to see the faint reflections of historical supernovae . Astronomers calculate the ellipsoid which has Earth and a supernova remnant at its focal points to locate clouds of dust and gas at its boundary. Identification can be done using laborious comparisons of photos taken months or years apart, and spotting changes in the light rippling across the interstellar medium . By analyzing the spectra of reflected light, astronomers can discern chemical signatures of supernovae whose light reached Earth long before the invention of the telescope and compare the explosion with its remnants, which may be centuries or millennia old. The first recorded instance of such an echo was in 1936, but it was not studied in detail. [ 4 ]
An example is supernova SN 1987A , the closest supernova in modern times. Its light echoes have aided in mapping the morphology of the immediate vicinity [ 5 ] as well as in characterizing dust clouds lying further away but close to the line of sight from Earth. [ 6 ]
Another example is the SN 1572 supernova observed on Earth in 1572, where in 2008, faint light-echoes were seen on dust in the northern part of the Milky Way . [ 7 ] [ 8 ]
Light echoes have also been used to study the supernova that produced the supernova remnant Cassiopeia A . [ 7 ] The light from Cassiopeia A would have been visible on Earth around 1660, but went unnoticed, probably because dust obscured the direct view. Reflections from different directions allow astronomers to determine if a supernova was asymmetrical and shone more brightly in some directions than in others. The progenitor of Cassiopeia A has been suspected as being asymmetric, [ 9 ] and looking at the light echoes of Cassiopeia A allowed for the first detection of supernova asymmetry in 2010. [ 10 ]
Yet other examples are supernovae SN 1993J [ 11 ] and SN 2014J . [ 12 ]
Light echoes from the 1838–1858 Great Eruption of Eta Carinae were used to study this supernova imposter . A study from 2012, which used light echo spectra from the Great Eruption, found that the eruption was cooler than those of other supernova imposters. [ 13 ]
Light echoes were used to determine the distance to the Cepheid variable RS Puppis to an accuracy of 1%. [ 14 ] Pierre Kervella at the European Southern Observatory described this measurement as so far "the most accurate distance to a Cepheid". [ 15 ]
In 1939, French astronomer Paul Couderc published a study entitled "Les Auréoles Lumineuses des Novae" (Luminous Haloes of the Novae). [ 16 ] Within this study, Couderc published the derivation of echo locations and time delays in the paraboloid, rather than ellipsoid, approximation of infinite distance. [ 16 ] However, in his 1961 study, Y.K. Gulak queried Couderc's theories: "It is shown that there is an essential error in the proof according to which Couderc assumed the possibility of expansion of the bright ring (nebula) around Nova Persei 1901 with a velocity exceeding that of light." [ 17 ] He continues: "The comparison of the formulas obtained by the author, with the conclusions and formulas of Couderc, shows that the coincidence of the parallax calculated according to Coudrec's scheme, with parallaxes derived by other methods, could have been accidental." [ 17 ]
The ShaSS 622-073 system is composed of the larger galaxy ShaSS 073 (seen in yellow in the image on the right) and the smaller galaxy ShaSS 622 (seen in blue) that are at the very beginning of a merger. The bright core of ShaSS 073 has excited with its radiation a region of gas within the disc of ShaSS 622; even though the core has faded over the last 30,000 years, the region still glows brightly as it re-emits the light. [ 18 ]
Since 2009 objects known either as quasar light echoes or quasar ionisation echoes have been investigated. [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] A well studied example of a quasar light echo is the object known as Hanny's Voorwerp (HsV). [ 25 ]
HsV is made entirely of gas so hot – about 10,000 degrees Celsius – that astronomers felt it had to be illuminated by something powerful. [ 26 ] After several studies of light and ionisation echoes, it is thought they are likely caused by the 'echo' of a previously-active AGN that has shut down. Kevin Schawinski , a co-founder of the website Galaxy Zoo , stated: "We think that in the recent past the galaxy IC 2497 hosted an enormously bright quasar. Because of the vast scale of the galaxy and the Voorwerp, light from that past still lights up the nearby Voorwerp even though the quasar shut down sometime in the past 100,000 years, and the galaxy's black hole itself has gone quiet." [ 26 ] Chris Lintott , also a co-founder of Galaxy Zoo, stated: "From the point of view of the Voorwerp, the galaxy looks as bright as it would have before the black hole turned off – it's this light echo that has been frozen in time for us to observe." [ 26 ] The analysis of HsV in turn has led to the study of objects called Voorwerpjes and Green bean galaxies . | https://en.wikipedia.org/wiki/Light_echo |
Light field microscopy ( LFM ) is a scanning-free 3-dimensional (3D) microscopic imaging method based on the theory of light field . This technique allows sub-second (~10 Hz) imaging of large volumes (~0.1 to 1 mm on a side) with ~1 μm spatial resolution under conditions of weak scattering and semi-transparency, which has not been achieved by other methods. Just as in traditional light field rendering , there are two steps for LFM imaging: light field capture and processing. In most setups, a microlens array is used to capture the light field. As for processing, it can be based on two kinds of representations of light propagation: the ray optics picture [ 1 ] and the wave optics picture. [ 2 ] The Stanford University Computer Graphics Laboratory published its first prototype LFM in 2006 [ 1 ] and has been working at the cutting edge since then.
A light field is a collection of all the rays flowing through some free space, where each ray can be parameterized with four variables. [ 3 ] In many cases, two 2D coordinates–denoted as ( s , t ) {\displaystyle (s,t)} & ( u , v ) {\displaystyle (u,v)} –on two parallel planes with which the rays intersect are applied for parameterization. Accordingly, the intensity of the 4D light field can be described as a scalar function: L f ( s , t , u , v ) {\textstyle L_{f}(s,t,u,v)} , where f {\displaystyle f} is the distance between two planes.
LFM can be built upon the traditional setup of a wide-field fluorescence microscope and a standard CCD camera or sCMOS . [ 1 ] A light field is generated by placing a microlens array at the intermediate image plane of the objective (or the rear focal plane of an optional relay lens) and is further captured by placing the camera sensor at the rear focal plane of the microlenses. As a result, the coordinates of the microlenses ( s , t ) {\displaystyle (s,t)} conjugate with those on the object plane (if additional relay lenses are added, then on the front focal plane of the objective) ( s ′ , t ′ ) {\displaystyle (s',t')} ; the coordinates of the pixels behind each microlens ( u , v ) {\displaystyle (u,v)} conjugate with those on the objective plane ( u ′ , v ′ ) {\displaystyle (u',v')} . For uniformity and convenience, we shall call the plane ( s ′ , t ′ ) {\displaystyle (s',t')} the original focus plane in this article. Correspondingly, f {\displaystyle f} is the focal length of the microlenses (i.e., the distance between microlens array plane and the sensor plane).
In addition, the apertures and the focal-length of each lens and the dimensions of the sensor and microlens array should all be properly chosen to ensure that there is neither overlap nor empty areas between adjacent subimages behind the corresponding microlenses.
This section mainly introduces the work of Levoy et al ., 2006. [ 1 ]
Owing to the conjugated relationships as mentioned above, any certain pixel ( u j , v j ) {\displaystyle (u_{j},v_{j})} behind a certain microlens ( s i , t i ) {\displaystyle (s_{i},t_{i})} corresponds to the ray passing through the point ( s i ′ , t i ′ ) {\displaystyle (s_{i}',t_{i}')} towards the direction ( u j ′ , v j ′ ) {\displaystyle (u_{j}',v_{j}')} . Therefore, by extracting the pixel ( u j , v j ) {\displaystyle (u_{j},v_{j})} from all subimages and stitching them together, a perspective view from the certain angle is obtained: L f ( : , : , u j , v j ) {\textstyle L_{f}(:,:,u_{j},v_{j})} . In this scenario, spatial resolution is determined by the number of microlenses; angular resolution is determined by the number of pixels behind each microlens.
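As a minimal illustration of this pixel-stitching operation, the sketch below extracts perspective (sub-aperture) views from a light field stored as a 4D array indexed as L[s, t, u, v]; the array contents and dimensions are placeholders.

```python
import numpy as np

# Illustrative light field: 50 x 50 microlenses, 9 x 9 pixels behind each one,
# indexed as L[s, t, u, v] following the (s, t, u, v) parameterization above.
L = np.random.rand(50, 50, 9, 9)

def perspective_view(light_field, u_j, v_j):
    """Stitch the (u_j, v_j) pixel from behind every microlens into one image.
    The result is a view of the scene from the corresponding direction; its
    spatial resolution equals the number of microlenses."""
    return light_field[:, :, u_j, v_j]

view_center = perspective_view(L, 4, 4)   # on-axis view
view_corner = perspective_view(L, 0, 0)   # oblique view
print(view_center.shape)                  # (50, 50)
```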
Synthetic focusing uses the captured light field to compute the photograph focusing at any arbitrary section. By simply summing all the pixels in each subimage behind the microlens (equivalent to collecting all radiation coming from different angles that falls on the same position), the image is focused exactly on the plane that conjugates with the microlens array plane:
E f ( s , t ) = 1 f 2 ∬ L f ( s , t , u , v ) cos 4 ϕ d u d v {\displaystyle E_{f}(s,t)={1 \over f^{2}}\iint L_{f}(s,t,u,v)\cos ^{4}\phi ~dudv} ,
where ϕ {\displaystyle \phi } is the angle between the ray and the normal of the sensor plane, and ϕ = arctan ( u 2 + v 2 / f ) {\textstyle \phi =\arctan({\sqrt {u^{2}+v^{2}}}/f)} if the origin of the coordinate system of each subimage is located on the principal optic axis of the corresponding microlens. Now, a new function can be defined to absorb the effective projection factor cos 4 ϕ {\displaystyle \cos ^{4}\phi } into the light field intensity L f {\displaystyle L_{f}} and obtain the actual radiance collection of each pixel: L ¯ f = L f cos 4 ϕ {\displaystyle {\bar {L}}_{f}=L_{f}\cos ^{4}\phi } .
In order to focus on some other plane besides the front focal plane of the objective, say, the plane whose conjugated plane is f ′ = α f {\displaystyle f'=\alpha f} away from the sensor plane, the conjugated plane can be moved from f {\displaystyle f} to α f {\displaystyle \alpha f} and its light field reparameterized back to the original one at f {\displaystyle f} :
L ¯ α f ( s , t , u , v ) = L ¯ f ( u + ( s − u ) / α , v + ( t − v ) / α , u , v ) {\displaystyle {\bar {L}}_{\alpha f}(s,t,u,v)={\bar {L}}_{f}(u+(s-u)/\alpha ,v+(t-v)/\alpha ,u,v)} .
Thereby, the refocused photograph can be computed with the following formula:
E α f ( s , t ) = 1 α 2 f 2 ∬ L ¯ f ( u ( 1 − 1 / α ) + s / α , v ( 1 − 1 / α ) + t / α , u , v ) d u d v {\displaystyle E_{\alpha f}(s,t)={1 \over \alpha ^{2}f^{2}}\iint {\bar {L}}_{f}(u(1-1/\alpha )+s/\alpha ,v(1-1/\alpha )+t/\alpha ,u,v)~dudv} .
Consequently, a focal stack is generated to recapitulate the instant 3D imaging of the object space. Furthermore, tilted or even curved focal planes are also synthetically possible. [ 5 ] In addition, any reconstructed 2D image focused at an arbitrary depth corresponds to a 2D slice of a 4D light field in the Fourier domain , where the algorithm complexity can be reduced from O ( n 4 ) {\displaystyle O(n^{4})} to O ( n 2 log n ) {\displaystyle O(n^{2}\log n)} . [ 4 ]
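A minimal numerical sketch of the refocusing formula is given below: each sub-aperture view is shifted in proportion to (1 − 1/α) times its angular offset and the views are averaged. The overall 1/α magnification of the (s, t) coordinates is neglected, so this shift-and-sum version only approximates the integral above; the array names, sizes, and the use of scipy for interpolation are illustrative choices.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

L = np.random.rand(50, 50, 9, 9)   # illustrative light field, indexed L[s, t, u, v]

def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a light field L[s, t, u, v].

    Each sub-aperture view L[:, :, u, v] is translated by (1 - 1/alpha) times
    its angular offset from the central pixel and the views are averaged,
    approximating E_{alpha f}(s, t) above (the 1/alpha magnification of (s, t)
    is neglected).  Sub-pixel shifts use linear interpolation."""
    n_s, n_t, n_u, n_v = light_field.shape
    u0, v0 = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((n_s, n_t))
    for u in range(n_u):
        for v in range(n_v):
            dy = -(u - u0) * (1.0 - 1.0 / alpha)
            dx = -(v - v0) * (1.0 - 1.0 / alpha)
            out += nd_shift(light_field[:, :, u, v], (dy, dx), order=1)
    return out / (n_u * n_v)

# alpha = 1 reproduces focus on the original focal plane; alpha != 1 focuses on
# a plane whose conjugate lies at alpha*f behind the microlens array.
focal_stack = [refocus(L, a) for a in (0.9, 1.0, 1.1)]
```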
Due to diffraction and defocus, however, the focal stack F S {\displaystyle FS} differs from the actual intensity distribution of voxels V {\displaystyle V} , which is what is actually desired. Instead, F S {\displaystyle FS} is a convolution of V {\displaystyle V} and a point spread function (PSF): F S = V ∗ P S F {\textstyle FS=V*PSF} .
Thus, the 3D shape of the PSF has to be measured in order to subtract its effect and to obtain voxels' net intensity. This measurement can be easily done by placing a fluorescent bead at the center of the original focus plane and recording its light field, based on which the PSF's 3D shape is ascertained by synthetically focusing on varied depth. Given that the PSF is acquired with the same LFM setup and digital refocusing procedure as the focal stack, this measurement correctly reflects the angular range of rays captured by the objective (including any falloff in intensity); therefore, this synthetic PSF is actually free of noise and aberrations. The shape of the PSF can be considered identical everywhere within our desired field of view (FOV); hence, multiple measurements can be avoided.
In the Fourier domain, the actual intensity of voxels has a very simple relation with the focal stack and the PSF:
F ( V ) = F ( F S ) F ( P S F ) {\displaystyle {\mathcal {F}}(V)={\frac {{\mathcal {F}}(FS)}{{\mathcal {F}}(PSF)}}} ,
where F {\displaystyle {\mathcal {F}}} is the operator of the Fourier transform . However, it may not be possible to directly solve the equation above, given the fact that the aperture is of limited size, resulting in the PSF being bandlimited (i.e., its Fourier transform has zeros). Instead, an iterative algorithm called constrained iterative deconvolution in the spatial domain is much more practical here: [ 6 ]
This idea is based on constrained gradient descent: the estimation of V {\displaystyle V} is improved iteratively by calculating the difference between the actual focal stack F S {\displaystyle FS} and the estimated focal stack V ∗ P S F {\textstyle V*PSF} and correcting V {\displaystyle V} with the current difference ( V {\displaystyle V} is constrained to be non-negative).
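A compact sketch of this constrained update is shown below, assuming the focal stack and the synthetic PSF are stored as 3D arrays; the step size, iteration count, and the use of FFT-based convolution are illustrative choices rather than part of the original algorithm's specification.

```python
import numpy as np
from scipy.signal import fftconvolve

def constrained_iterative_deconvolution(focal_stack, psf, n_iter=20, step=0.5):
    """Estimate the voxel intensities V from FS ~ V * PSF (a 3-D convolution).

    Each iteration adds a fraction of the residual FS - V*PSF to the current
    estimate and clips negative values to zero, i.e. the constrained
    gradient-descent idea described in the text."""
    v = np.clip(focal_stack.astype(float), 0, None)       # initial guess
    for _ in range(n_iter):
        estimate = fftconvolve(v, psf, mode="same")        # forward model V * PSF
        v = np.clip(v + step * (focal_stack - estimate), 0, None)
    return v

# Illustrative toy usage with random arrays standing in for FS and the PSF:
rng = np.random.default_rng(0)
psf = rng.random((5, 5, 5))
psf /= psf.sum()
v_true = rng.random((32, 32, 16))
fs = fftconvolve(v_true, psf, mode="same")
v_est = constrained_iterative_deconvolution(fs, psf)
```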
The formula of E α f ( s , t ) {\displaystyle E_{\alpha f}(s,t)} can be rewritten by adopting the concept of the Fourier Projection-Slice Theorem. [ 7 ] Because the photography operator P α [ L f ¯ ] ( s , t ) = E α f ( s , t ) {\displaystyle {\mathcal {P}}_{\alpha }[{\bar {L_{f}}}](s,t)=E_{\alpha f}(s,t)} can be viewed as a shear followed by projection, the result should be proportional to a dilated 2D slice of the 4D Fourier transform of a light field. Precisely, a refocused image can be generated from the 4D Fourier spectrum of a light field by extracting a 2D slice, applying an inverse 2D transform, and scaling. Before the proof, we first introduce some operators:
By these definitions, we can rewrite P α [ L f ¯ ] ( s , t ) = 1 α 2 f 2 ∬ L ¯ f ( u ( 1 − 1 / α ) + s / α , v ( 1 − 1 / α ) + t / α , u , v ) d u d v ≡ 1 α 2 f 2 I 2 4 ∘ B α [ L f ] {\displaystyle {\mathcal {P}}_{\alpha }[{\bar {L_{f}}}](s,t)={1 \over \alpha ^{2}f^{2}}\iint {\bar {L}}_{f}(u(1-1/\alpha )+s/\alpha ,v(1-1/\alpha )+t/\alpha ,u,v)~dudv\equiv {\frac {1}{\alpha ^{2}f^{2}}}{\mathcal {I}}_{2}^{4}\circ {\mathcal {B}}_{\alpha }[L_{f}]} .
According to the generalized Fourier-slice theorem, [ 7 ] we have
F M ∘ I M N ∘ B ≡ S M N ∘ B − T | B − T | ∘ F N {\displaystyle {\mathcal {F}}^{M}\circ {\mathcal {I}}_{M}^{N}\circ {\mathcal {B}}\equiv {\mathcal {S}}_{M}^{N}\circ {\frac {{\mathcal {B}}^{-T}}{|{\mathcal {B}}^{-T}|}}\circ {\mathcal {F}}^{N}} ,
and hence the photography operator has the form
P α [ L f ¯ ] ( s , t ) ≡ 1 f 2 F − 2 ∘ S 2 4 ∘ B α − T ∘ F 4 {\displaystyle {\mathcal {P}}_{\alpha }[{\bar {L_{f}}}](s,t)\equiv {\frac {1}{f^{2}}}{\mathcal {F}}^{-2}\circ {\mathcal {S}}_{2}^{4}\circ {\mathcal {B}}_{\alpha }^{-T}\circ {\mathcal {F}}^{4}} .
According to the formula, we know a photograph is the inverse 2D Fourier transform of a dilated 2D slice in the 4D Fourier transform of the light field.
If all we have available are samples of the light field, then instead of using the Fourier slice theorem for continuous signals mentioned above, we adopt the discrete Fourier slice theorem, which is a generalization of the discrete Radon transform, to compute the refocused image. [ 8 ]
Assume that a lightfield L ¯ f {\displaystyle {\bar {L}}_{f}} is periodic with periods T s , T t , T u , T v {\displaystyle T_{s},T_{t},T_{u},T_{v}} and is defined on the hypercube H = [ − T s / 2 , T s / 2 ] × [ − T t / 2 , T t / 2 ] × [ − T u / 2 , T u / 2 ] × [ − T v / 2 , T v / 2 ] {\displaystyle H=[-T_{s}/2,T_{s}/2]\times [-T_{t}/2,T_{t}/2]\times [-T_{u}/2,T_{u}/2]\times [-T_{v}/2,T_{v}/2]} . Also, assume there are N s × N t × N u × N v {\displaystyle N_{s}\times N_{t}\times N_{u}\times N_{v}} known samples of the light field ( s ^ Δ s , t ^ Δ t , u ^ Δ u , v ^ Δ v ) {\displaystyle ({\hat {s}}\Delta s,{\hat {t}}\Delta t,{\hat {u}}\Delta u,{\hat {v}}\Delta v)} , where s ^ , t ^ , u ^ , v ^ ∈ Z {\displaystyle {\hat {s}},{\hat {t}},{\hat {u}},{\hat {v}}\in \mathbb {Z} } and Δ s , Δ t , Δ u , Δ v = T s N s , T t N t , T u N u , T v N v {\displaystyle \Delta s,\Delta t,\Delta u,\Delta v={\frac {T_{s}}{N_{s}}},{\frac {T_{t}}{N_{t}}},{\frac {T_{u}}{N_{u}}},{\frac {T_{v}}{N_{v}}}} , respectively. Then, we can define L ¯ f d {\displaystyle {\bar {L}}_{f}^{d}} using trigonometric interpolation with these sample points:
L ¯ f d ( s ^ , t ^ , u ^ , v ^ ) = ∑ ω s ^ ∑ ω t ^ ∑ ω u ^ ∑ ω v ^ L ¯ f d ( ω s ^ , ω t ^ , ω u ^ , ω v ^ ) e 2 π i ( s ^ ω s ^ + t ^ ω t ^ + u ^ ω u ^ + v ^ ω v ^ ) {\displaystyle {\bar {L}}_{f}^{d}({\hat {s}},{\hat {t}},{\hat {u}},{\hat {v}})=\sum _{\omega _{\hat {s}}}\sum _{\omega _{\hat {t}}}\sum _{\omega _{\hat {u}}}\sum _{\omega _{\hat {v}}}{\mathcal {\bar {L}}}_{f}^{d}(\omega _{\hat {s}},\omega _{\hat {t}},\omega _{\hat {u}},\omega _{\hat {v}})e^{2\pi i({\hat {s}}\omega _{\hat {s}}+{\hat {t}}\omega _{\hat {t}}+{\hat {u}}\omega _{\hat {u}}+{\hat {v}}\omega _{\hat {v}})}} ,
where
L ¯ f d ( ω s ^ , ω t ^ , ω u ^ , ω v ^ ) = ∑ s ^ ∑ t ^ ∑ u ^ ∑ v ^ L ¯ f d ( s ^ , t ^ , u ^ , v ^ ) e 2 π i ( s ^ ω s ^ + t ^ ω t ^ + u ^ ω u ^ + v ^ ω v ^ ) {\displaystyle {\mathcal {\bar {L}}}_{f}^{d}(\omega _{\hat {s}},\omega _{\hat {t}},\omega _{\hat {u}},\omega _{\hat {v}})=\sum _{\hat {s}}\sum _{\hat {t}}\sum _{\hat {u}}\sum _{\hat {v}}{\bar {L}}_{f}^{d}({\hat {s}},{\hat {t}},{\hat {u}},{\hat {v}})e^{2\pi i({\hat {s}}\omega _{\hat {s}}+{\hat {t}}\omega _{\hat {t}}+{\hat {u}}\omega _{\hat {u}}+{\hat {v}}\omega _{\hat {v}})}} .
Note that the constant factors are dropped for simplicity.
To compute its refocused photograph, we replace the infinite integral in the formula of P α {\displaystyle {\mathcal {P}}_{\alpha }} with a summation whose bounds are [ − T u / 2 , T u / 2 ] {\displaystyle [-T_{u}/2,T_{u}/2]} and [ − T v / 2 , T v / 2 ] {\displaystyle [-T_{v}/2,T_{v}/2]} . That is,
P α d [ L f d ¯ ] ( s ^ , t ^ ) = 1 α 2 f 2 ∑ − N u / 2 N u / 2 ∑ − N v / 2 N v / 2 L ¯ f d ( u ^ Δ u ( 1 − 1 / α ) + s ^ Δ s / α , v ^ Δ v ( 1 − 1 / α ) + t ^ Δ t / α , u ^ Δ u , v ^ Δ v ) {\displaystyle {\mathcal {P}}_{\alpha }^{d}[{\bar {L_{f}^{d}}}]({\hat {s}},{\hat {t}})={1 \over \alpha ^{2}f^{2}}\sum _{-N_{u}/2}^{N_{u}/2}\sum _{-N_{v}/2}^{N_{v}/2}{\bar {L}}_{f}^{d}({\hat {u}}\Delta u(1-1/\alpha )+{\hat {s}}\Delta s/\alpha ,{\hat {v}}\Delta v(1-1/\alpha )+{\hat {t}}\Delta t/\alpha ,{\hat {u}}\Delta u,{\hat {v}}\Delta v)} .
Then, as the discrete Fourier slice theorem indicates, we can represent the photograph using a Fourier slice:
P α d [ L f d ¯ ] ( s ^ , t ^ ) = ∑ − N u / 2 N u / 2 ∑ − N v / 2 N v / 2 e 2 π i ( s ^ ω s ^ + t ^ ω t ^ ) L ¯ f d ( α ω s ^ , α ω t ^ , ( 1 − α ) ω s ^ , ( 1 − α ) ω t ^ ) {\displaystyle {\mathcal {P}}_{\alpha }^{d}[{\bar {L_{f}^{d}}}]({\hat {s}},{\hat {t}})=\sum _{-N_{u}/2}^{N_{u}/2}\sum _{-N_{v}/2}^{N_{v}/2}{e^{2\pi i({\hat {s}}\omega _{\hat {s}}+{\hat {t}}\omega _{\hat {t}})}{\mathcal {\bar {L}}}_{f}^{d}}{(\alpha \omega _{\hat {s}},\alpha \omega _{\hat {t}},(1-\alpha )\omega _{\hat {s}},(1-\alpha )\omega _{\hat {t}})}}
Although the ray-optics-based plenoptic camera has demonstrated favorable performance in the macroscopic world, diffraction places a limit on LFM reconstruction when one stays within the ray-optics picture. Hence, it may be much more convenient to switch to wave optics. (This section mainly introduces the work of Broxton et al ., 2013. [ 2 ] )
The FOV of interest is segmented into N v {\displaystyle N_{v}} voxels, each with a label i {\textstyle i} . Thus, the whole FOV can be discretely represented with a vector g {\displaystyle \mathbf {g} } with a dimension of N v × 1 {\displaystyle N_{v}\times 1} . Similarly, a N p × 1 {\displaystyle N_{p}\times 1} vector f {\displaystyle \mathbf {f} } represents the sensor plane, where each element f j {\displaystyle \mathrm {f} _{j}} denotes one sensor pixel. Under the condition of incoherent propagation among different voxels, the light field transmission from the object space to the sensor can be linearly linked by a N p × N v {\displaystyle N_{p}\times N_{v}} measurement matrix, in which the information of the PSF is incorporated: f = H g {\textstyle \mathbf {f} =\mathrm {H} \mathbf {g} } .
In the ray-optics scenario, a focal stack is generated via synthetic focusing of rays, and then deconvolution with a synthesized PSF is applied to diminish the blurring caused by the wave nature of light. In the wave optics picture, on the other hand, the measurement matrix H {\displaystyle \mathrm {H} } –describing light field transmission–is directly calculated based on propagation of waves. Unlike traditional optical microscopes, whose PSF shape is invariant (e.g., an Airy pattern ) with respect to the position of the emitter, an emitter in each voxel generates a unique pattern on the sensor of an LFM. In other words, each column in H {\displaystyle \mathrm {H} } is distinct. In the following sections, the calculation of the whole measurement matrix will be discussed in detail.
The optical impulse response h ( x , p ) {\textstyle h(\mathbf {x} ,\mathbf {p} )} is the intensity of an electric field at a 2D position x ∈ R 2 {\displaystyle \mathbf {x} \in \mathbb {R^{2}} } on the sensor plane when an isotropic point source of unit amplitude is placed at some 3D position p ∈ R 3 {\displaystyle \mathbf {p} \in \mathbb {R^{3}} } in the FOV. There are three steps along the electric-field propagation: traveling from a point source to the native image plane (i.e., the microlens array plane), passing through the microlens array, and propagating onto the sensor plane.
For an objective with a circular aperture, the wavefront at the native image plane x = ( x 1 , x 2 ) {\displaystyle \mathbf {x} =(x_{1},x_{2})} initiated from an emitter at p = ( p 1 , p 2 , p 3 ) {\textstyle \mathbf {p} =(p_{1},p_{2},p_{3})} can be computed using the scalar Debye theory: [ 9 ]
U i ( x , p ) = M f o b j 2 λ 2 exp ( − i u 4 sin 2 ( α / 2 ) ) ∫ 0 α d θ P ( θ ) exp ( − i u sin 2 ( θ / 2 ) 2 sin 2 ( α / 2 ) ) J 0 ( sin θ sin α ν ) sin θ {\displaystyle U_{i}(\mathbf {x} ,\mathbf {p} )={\frac {\mathrm {M} }{f_{obj}^{2}\lambda ^{2}}}\exp {\biggl (}-{\frac {iu}{4\sin ^{2}(\alpha /2)}}{\biggr )}\int _{0}^{\alpha }d\theta ~P(\theta )\exp {\biggl (}-{\frac {iu\sin ^{2}(\theta /2)}{2\sin ^{2}(\alpha /2)}}{\biggr )}J_{0}{\biggl (}{\frac {\sin \theta }{\sin \alpha }}\nu {\biggr )}\sin \theta } ,
where f o b j {\displaystyle f_{obj}} is the focal length of the objective; M {\displaystyle \mathrm {M} } is its magnification. λ {\displaystyle \lambda } is the wavelength. α = arcsin ( N A / n ) {\displaystyle \alpha =\arcsin(\mathrm {NA} /n)} is the half-angle of the numerical aperture ( n {\displaystyle n} is the index of refraction of the sample). P ( θ ) {\displaystyle P(\theta )} is the apodization function of the microscope ( P ( θ ) = cos θ {\displaystyle P(\theta )={\sqrt {\cos \theta }}} for Abbe-sine corrected objectives). J 0 ( ⋅ ) {\displaystyle J_{0}(\cdot )} is the zeroth order Bessel function of the first kind. ν {\displaystyle \nu } and u {\displaystyle u} are the normalized radial and axial optical coordinates, respectively:
ν ≈ k ( x 1 − p 1 ) 2 + ( x 2 − p 2 ) 2 sin α {\displaystyle \nu \thickapprox k{\sqrt {(x_{1}-p_{1})^{2}+(x_{2}-p_{2})^{2}}}\sin \alpha }
u ≈ 4 k p 3 sin 2 ( α / 2 ) {\displaystyle u\thickapprox 4kp_{3}\sin ^{2}(\alpha /2)} ,
where k = 2 π n / λ {\displaystyle k=2\pi n/\lambda } is the wave number.
Each microlens can be regarded as a phase mask:
ϕ ( x ) = exp ( − i k 2 f ‖ Δ x ‖ 2 2 ) {\displaystyle \phi (\mathbf {x} )=\exp {\biggl (}{\frac {-ik}{2f}}\|\Delta \mathbf {x} \|_{2}^{2}{\biggr )}} ,
where f {\textstyle f} is the focal length of microlenses and Δ x = x − x μ l e n s {\displaystyle \Delta \mathbf {x} =\mathbf {x} -\mathbf {x} _{\mu lens}} is the vector pointing from the center of the microlens to a point x {\displaystyle \mathbf {x} } on the microlens. It is worth noticing that ϕ ( x ) {\textstyle \phi (\mathbf {x} )} is non-zero only when x {\displaystyle \mathbf {x} } is located at the effective transmission area of a microlens.
Thereby, the transmission function of the overall microlens array can be represented as ϕ ( x ) {\displaystyle \phi (\mathbf {x} )} convoluted with a 2D comb function:
Φ ( x ) = ϕ ( x ) ∗ c o m b ( x / d ) {\displaystyle \Phi (\mathbf {x} )=\phi (\mathbf {x} )*\mathrm {comb} (\mathbf {x} /d)} ,
where d {\displaystyle d} is the pitch (say, the dimension) of microlenses.
The propagation of wave front with distance f {\displaystyle f} from the native image plane to the sensor plane can be computed with a Fresnel diffraction integral:
E ( x ) | z = f = e i k f i λ f ∬ E ( x ′ ) | z = 0 exp ( i k 2 f ‖ x − x ′ ‖ 2 2 ) d x ′ {\displaystyle E(\mathbf {x} )|_{z=f}={\frac {e^{ikf}}{i\lambda f}}\iint E(\mathbf {x} ')|_{z=0}\exp {\biggl (}{\frac {ik}{2f}}\|\mathbf {x} -\mathbf {x} '\|_{2}^{2}{\biggr )}d\mathbf {x} '} ,
where E ( x ′ ) | z = 0 = U i ( x ′ , p ) Φ ( x ′ ) {\textstyle E(\mathbf {x} ')|_{z=0}=U_{i}(\mathbf {x} ',\mathbf {p} )\Phi (\mathbf {x} ')} is the wave front immediately passing the native imaging plane.
Therefore, the whole optical impulse response can be expressed in terms of a convolution:
h ( x , p ) = ( U i ( x , p ) Φ ( x ) ) ∗ ( e i k f i λ f e i k 2 f ‖ x ‖ 2 2 ) {\displaystyle h(\mathbf {x} ,\mathbf {p} )={\biggl (}U_{i}(\mathbf {x} ,\mathbf {p} )\Phi (\mathbf {x} ){\biggr )}*{\biggl (}{\frac {e^{ikf}}{i\lambda f}}e^{{\frac {ik}{2f}}\mathbf {\|} \mathbf {x} \|_{2}^{2}}{\biggr )}} .
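Because the Fresnel integral above is a convolution with a quadratic-phase kernel, it is commonly evaluated with FFTs using the Fresnel transfer function. The sketch below illustrates this step under the assumption that the paraxial and sampling conditions are satisfied; the grid size, pitch, wavelength, and propagation distance are placeholder values.

```python
import numpy as np

def fresnel_propagate(field, wavelength, distance, dx):
    """Propagate a sampled complex field by `distance` using the Fresnel
    transfer function H(fx, fy) = exp(i k z) exp(-i pi lambda z (fx^2 + fy^2)).
    `dx` is the sample spacing; paraxial and sampling conditions are assumed."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    h = np.exp(1j * k * distance) * np.exp(
        -1j * np.pi * wavelength * distance * (fxx**2 + fyy**2))
    return np.fft.ifft2(np.fft.fft2(field) * h)

# Illustrative use: propagate the field just behind the microlens array by f.
n, dx = 512, 2e-6                                # 512 x 512 samples, 2 um pitch
field_at_mla = np.ones((n, n), dtype=complex)    # placeholder for U_i * Phi
sensor_field = fresnel_propagate(field_at_mla, 0.52e-6, 125e-6, dx)
impulse_response = np.abs(sensor_field) ** 2     # |h(x, p)|^2 on the sensor
```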
Having acquired the optical impulse response, any element h i j {\displaystyle h_{ij}} in the measurement matrix H {\displaystyle \mathrm {H} } can be calculated as:
h i j = ∫ α j ∫ β i w i ( p ) | h ( x , p ) | 2 d p d x {\displaystyle h_{ij}=\int _{\alpha _{j}}\int _{\beta _{i}}w_{i}(\mathbf {p} )|h(\mathbf {x} ,\mathbf {p} )|^{2}d\mathbf {p} d\mathbf {x} } ,
where α j {\displaystyle \alpha _{j}} is the area for pixel j {\displaystyle j} and β i {\displaystyle \beta _{i}} is the volume for voxel i {\displaystyle i} . The weight filter w i ( p ) {\textstyle w_{i}(\mathbf {p} )} is added to match the fact that a PSF contributes more at the center of a voxel than at the edges. The linear superposition integral is based on the assumption that fluorophores in each infinitesimal volume d p {\textstyle d\mathbf {p} } experience an incoherent, stochastic emission process, considering their rapid, random fluctuations.
Again, due to the limited bandwidth, the photon shot noise , and the huge matrix dimension, it is impossible to directly solve the inverse problem as: g = H − 1 f {\textstyle \mathbf {g} =\mathrm {H} ^{-1}\mathbf {f} } . Instead, the relation between the discrete light field and the FOV is better described stochastically:
f ^ ∼ P o i s ( H g + b ) {\displaystyle {\hat {\mathbf {f} }}\sim \mathrm {Pois} (\mathrm {H} \mathbf {g} +\mathbf {b} )} ,
where b {\displaystyle \mathbf {b} } is the background fluorescence measured prior to imaging; P o i s ( ⋅ ) {\displaystyle \mathrm {Pois} (\cdot )} is the Poisson noise. Therefore, f ^ {\textstyle {\hat {\mathbf {f} }}} now becomes a random vector with Poisson-distributed values in units of photoelectrons e − .
Based on the idea of maximizing the likelihood of the measured light field f ^ {\textstyle {\hat {\mathbf {f} }}} given a particular FOV g {\textstyle \mathbf {g} } and background b {\textstyle \mathbf {b} } , the Richardson-Lucy iteration scheme provides an effective 3D deconvolution algorithm here:
g ( k + 1 ) = d i a g ( H T 1 ) − 1 d i a g ( H T d i a g ( H g ( k ) + b ) − 1 f ) g ( k ) {\displaystyle \mathbf {g} ^{(k+1)}=\mathrm {diag} (\mathrm {H} ^{T}\mathbf {1} )^{-1}\mathrm {diag} (\mathrm {H} ^{T}\mathrm {diag} (\mathrm {H} \mathbf {g} ^{(k)}+\mathbf {b} )^{-1}\mathbf {f} )\mathbf {g} ^{(k)}} .
where the operator d i a g ( ⋅ ) {\displaystyle \mathrm {diag} (\cdot )} retains the diagonal elements of a matrix and sets its off-diagonal elements to zero.
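The update above can be sketched compactly if the measurement matrix is held as a dense array, which is feasible only for toy problems (in practice H is far too large to store densely and its products are computed on the fly); all sizes below are illustrative.

```python
import numpy as np

def richardson_lucy(H, f, b, n_iter=30, eps=1e-12):
    """Richardson-Lucy update  g <- g * [H^T (f / (H g + b))] / (H^T 1),
    i.e. the iteration written above with the diagonal operators expanded."""
    g = np.full(H.shape[1], f.mean(), dtype=float)   # non-negative initial guess
    norm = H.T @ np.ones(H.shape[0])                 # H^T 1 (sensitivity of each voxel)
    for _ in range(n_iter):
        ratio = f / (H @ g + b + eps)                # elementwise f / (H g + b)
        g = g * (H.T @ ratio) / (norm + eps)
    return g

# Illustrative toy problem: 200 sensor pixels, 50 voxels.
rng = np.random.default_rng(0)
H = rng.random((200, 50))
g_true = rng.random(50)
b = np.full(200, 0.1)
f = rng.poisson(H @ g_true + b).astype(float)        # Poisson-noisy measurement
g_est = richardson_lucy(H, f, b)
```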
Starting with initial work at Stanford University applying Light Field Microscopy to calcium imaging in larval zebrafish ( Danio Rerio ), [ 10 ] a number of articles have now applied Light Field Microscopy to functional neural imaging including measuring the neuron dynamic activities across the whole brain of C. elegans , [ 11 ] whole-brain imaging in larval zebrafish, [ 11 ] [ 12 ] imaging calcium and voltage activity sensors across the brain of fruit flies ( Drosophila ) at up to 200 Hz, [ 13 ] and fast imaging of 1mm x 1mm x 0.75mm volumes in the hippocampus of mice navigating a virtual environment. [ 14 ] This area of application is a rapidly developing area at the intersection of computational optics and neuroscience. [ 15 ] | https://en.wikipedia.org/wiki/Light_field_microscopy |
A light fixture (US English), light fitting (UK English), or luminaire is an electrical lighting device containing one or more light sources, such as lamps , and all the accessory components required for its operation to provide illumination to the environment. [ 1 ] All light fixtures have a fixture body and one or more lamps. The lamps may be in sockets for easy replacement—or, in the case of some LED fixtures, hard-wired in place.
Fixtures may also have a switch to control the light, either attached to the lamp body or attached to the power cable. Permanent light fixtures, such as dining room chandeliers , may have no switch on the fixture itself, but rely on a wall switch.
Fixtures require an electrical connection to a power source, typically AC mains power, but some run on battery power for camping or emergency lights. Permanent lighting fixtures are directly wired. Movable lamps have a plug and cord that plugs into a wall socket.
Light fixtures may also have other features, such as reflectors for directing the light, an aperture (with or without a lens ), an outer shell or housing for lamp alignment and protection, an electrical ballast or power supply , and a shade to diffuse the light or direct it towards a workspace (e.g., a desk lamp). A wide variety of special light fixtures are created for use in the automotive lighting industry, aerospace , marine and medicine sectors. [ 2 ] [ 3 ]
Portable light fixtures are often called lamps , as in table lamp or desk lamp . In technical terminology , the lamp is the light source, which, in casual terminology, is called the light bulb . Both the International Electrotechnical Commission (IEC) and the Illuminating Engineering Society (IES) recommend the term luminaire for technical use. [ 4 ]
Fixture manufacturing began soon after production of the incandescent light bulb . [ citation needed ] When practical uses of fluorescent lighting were realized after 1924, the three leading companies to produce various fixtures were Lightolier , Artcraft Fluorescent Lighting Corporation , and Globe Lighting in the United States. [ 5 ]
Light fixtures are classified by how the fixture is installed, the light function or lamp type.
There are various types of devices used to manage the amount of light used: [ 6 ] | https://en.wikipedia.org/wiki/Light_fixture |
The light-front quantization [ 1 ] [ 2 ] [ 3 ] of quantum field theories provides a useful alternative to ordinary equal-time quantization . In particular, it can lead to a relativistic description of bound systems in terms of quantum-mechanical wave functions . The quantization is based on the choice of light-front coordinates , [ 4 ] where x + ≡ c t + z {\displaystyle x^{+}\equiv ct+z} plays the role of time and the corresponding spatial coordinate is x − ≡ c t − z {\displaystyle x^{-}\equiv ct-z} . Here, t {\displaystyle t} is the ordinary time, z {\displaystyle z} is one Cartesian coordinate , and c {\displaystyle c} is the speed of light. The other two Cartesian coordinates, x {\displaystyle x} and y {\displaystyle y} , are untouched and often called transverse or perpendicular, denoted by symbols of the type x → ⊥ = ( x , y ) {\displaystyle {\vec {x}}_{\perp }=(x,y)} . The choice of the frame of reference where the time t {\displaystyle t} and z {\displaystyle z} -axis are defined can be left unspecified in an exactly soluble relativistic theory, but in practical calculations some choices may be more suitable than others.
In practice, virtually all measurements are made at fixed light-front time. For example, when an electron scatters on a proton as in the famous SLAC experiments that discovered the quark structure of hadrons , the interaction with the constituents occurs at a single light-front time. When one takes a flash photograph, the recorded image shows the object as the front of the light wave from the flash crosses the object. Thus Dirac used the terminology "light-front" and "front form" in contrast to ordinary instant time and "instant form". [ 4 ] Light waves traveling in the negative z {\displaystyle z} direction continue to propagate in x − {\displaystyle x^{-}} at a single light-front time x + {\displaystyle x^{+}} .
As emphasized by Dirac, Lorentz boosts of states at fixed light-front time are simple kinematic transformations. The description of physical systems in light-front coordinates is unchanged by light-front boosts to frames moving with respect to the one specified initially. This also means that there is a separation of external and internal coordinates (just as in nonrelativistic systems), and the internal wave functions are independent of the external coordinates, if there is no external force or field. In contrast, it is a difficult dynamical problem to calculate the effects of boosts of states defined at a fixed instant time t {\displaystyle t} .
The description of a bound state in a quantum field theory, such as an atom in quantum electrodynamics (QED) or a hadron in quantum chromodynamics (QCD), generally requires multiple wave functions, because quantum field theories include processes which create and annihilate particles. The state of the system then does not have a definite number of particles, but is instead a quantum-mechanical linear combination of Fock states , each with a definite particle number. Any single measurement of particle number will return a value with a probability determined by the amplitude of the Fock state with that number of particles. These amplitudes are the light-front wave functions. The light-front wave functions are each frame-independent and independent of the total momentum .
The wave functions are the solution of a field-theoretic analog of the Schrödinger equation H ψ = E ψ {\displaystyle H\psi =E\psi } of nonrelativistic quantum mechanics. In the nonrelativistic theory the Hamiltonian operator H {\displaystyle H} is just a kinetic piece − ℏ 2 2 m ∇ 2 {\displaystyle -{\frac {\hbar ^{2}}{2m}}\nabla ^{2}} and a potential piece V ( r → ) {\displaystyle V({\vec {r}})} . The wave function ψ {\displaystyle \psi } is a function of the coordinate r → {\displaystyle {\vec {r}}} , and E {\displaystyle E} is the energy . In light-front quantization, the formulation is
usually written in terms of light-front momenta p _ i = ( p i + , p → ⊥ i ) {\displaystyle {\underline {p}}_{i}=(p_{i}^{+},{\vec {p}}_{\perp i})} , with i {\displaystyle i} a particle index, p i + ≡ p i 2 + m i 2 + p i z {\displaystyle p_{i}^{+}\equiv {\sqrt {p_{i}^{2}+m_{i}^{2}}}+p_{iz}} , p → ⊥ i = ( p i x , p i y ) {\displaystyle {\vec {p}}_{\perp i}=(p_{ix},p_{iy})} , and m i {\displaystyle m_{i}} the particle mass , and light-front
energies p i − ≡ p i 2 + m i 2 − p i z {\displaystyle p_{i}^{-}\equiv {\sqrt {p_{i}^{2}+m_{i}^{2}}}-p_{iz}} . They satisfy the mass-shell condition m i 2 = p i + p i − − p → ⊥ i 2 {\displaystyle m_{i}^{2}=p_{i}^{+}p_{i}^{-}-{\vec {p}}_{\perp i}^{2}}
The analog of the nonrelativistic Hamiltonian H {\displaystyle H} is the light-front operator P − {\displaystyle {\mathcal {P}}^{-}} , which generates translations in light-front time. It is constructed from the Lagrangian for the chosen quantum field theory. The total light-front momentum of the system, P _ ≡ ( P + , P → ⊥ ) {\displaystyle {\underline {P}}\equiv (P^{+},{\vec {P}}_{\perp })} , is the sum of the single-particle light-front momenta. The total light-front energy P − {\displaystyle P^{-}} is fixed by the mass-shell condition to be ( M 2 + P ⊥ 2 ) / P + {\displaystyle (M^{2}+P_{\perp }^{2})/P^{+}} , where M {\displaystyle M} is the invariant mass of the system. The Schrödinger-like equation of light-front quantization is then P − ψ = M 2 + P ⊥ 2 P + ψ {\displaystyle {\mathcal {P}}^{-}\psi ={\frac {M^{2}+P_{\perp }^{2}}{P^{+}}}\psi } . This provides a foundation for a nonperturbative analysis of quantum field theories that is quite distinct from the lattice approach. [ 5 ] [ 6 ] [ 7 ]
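The kinematic relations above are straightforward to check numerically. The sketch below, in units with c = 1 and with illustrative momentum values, builds the light-front components from an ordinary on-shell four-momentum and verifies the mass-shell condition and, for a single free particle, the dispersion relation P⁻ = (M² + P⊥²)/P⁺.

```python
import math

def light_front_momentum(m, px, py, pz):
    """Return (p_plus, p_perp, p_minus) for an on-shell particle of mass m,
    using p_pm = E +/- p_z with E = sqrt(p^2 + m^2) and c = 1."""
    e = math.sqrt(px**2 + py**2 + pz**2 + m**2)
    return e + pz, (px, py), e - pz

# Illustrative numbers (arbitrary units):
m, px, py, pz = 0.94, 0.3, -0.1, 2.0
p_plus, (qx, qy), p_minus = light_front_momentum(m, px, py, pz)

# Mass-shell condition  m^2 = p^+ p^- - p_perp^2 :
print(round(p_plus * p_minus - (qx**2 + qy**2), 6))          # = m^2 = 0.8836

# For a single free particle the bound-state relation reduces to
# P^- = (M^2 + P_perp^2) / P^+ with M = m:
print(round((m**2 + qx**2 + qy**2) / p_plus - p_minus, 12))  # = 0
```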
Quantization on the light-front provides the rigorous field-theoretical realization of the intuitive ideas of the parton model which is formulated at fixed t {\displaystyle t} in the infinite-momentum frame. [ 8 ] [ 9 ] (see #Infinite momentum frame ). The same results are obtained in the front form for any frame; e.g., the structure functions and other probabilistic parton distributions measured in deep inelastic scattering are obtained from the squares of the boost-invariant light-front wave functions, [ 10 ] the eigensolution of the light-front Hamiltonian. The Bjorken kinematic variable x b j {\displaystyle x_{bj}} of deep inelastic scattering becomes identified with the light-front fraction at small x {\displaystyle x} . The Balitsky–Fadin–Kuraev–Lipatov
(BFKL) [ 11 ] Regge behavior of structure functions can be demonstrated from the behavior of light-front wave functions at small x {\displaystyle x} . The Dokshitzer–Gribov–Lipatov–Altarelli–Parisi ( DGLAP ) evolution [ 12 ] of structure functions and the Efremov–Radyushkin–Brodsky–Lepage (ERBL) evolution [ 13 ] [ 14 ] of distribution amplitudes in log Q 2 {\displaystyle \log Q^{2}} are properties of the light-front wave functions at high transverse momentum.
Computing hadronic matrix elements of currents is particularly simple on the light-front, since they can be obtained rigorously as overlaps of light-front wave functions as in the Drell–Yan–West formula. [ 15 ] [ 16 ] [ 17 ]
The gauge -invariant meson and baryon distribution amplitudes which control hard exclusive and direct reactions are the valence light-front wave functions integrated over transverse momentum at fixed x i = k i + / P + {\displaystyle x_{i}={k_{i}^{+}/P^{+}}} . The "ERBL" evolution [ 13 ] [ 14 ] of distribution amplitudes and the factorization theorems for hard exclusive processes can be derived most easily using light-front methods. Given the frame-independent light-front wave functions, one can compute a large range of hadronic observables including generalized parton distributions, Wigner distributions, etc. For example, the "handbag" contribution to the generalized parton distributions for deeply virtual Compton scattering , which can be computed from the overlap of light-front wave functions, automatically satisfies the known sum rules .
The light-front wave functions contain information about novel features of QCD. These include effects suggested from other approaches, such as color transparency , hidden color, intrinsic charm , sea-quark symmetries, dijet diffraction, direct hard processes, and hadronic spin dynamics.
One can also prove fundamental theorems for relativistic quantum field theories using the front form, including:
(a) the cluster decomposition theorem [ 18 ] and (b) the vanishing of the anomalous gravitomagnetic moment for any Fock state of a
hadron; [ 19 ] one also can show that a nonzero anomalous magnetic moment of a bound state requires nonzero angular momentum of the constituents. The cluster properties [ 20 ] of light-front time-ordered perturbation theory , together with J z {\displaystyle J^{z}} conservation, can be used to elegantly derive the Parke–Taylor rules for multi- gluon scattering amplitudes. [ 21 ] The counting-rule [ 22 ] behavior of structure functions at large x {\displaystyle x} and Bloom–Gilman duality [ 23 ] [ 24 ] have also been derived in light-front QCD (LFQCD). The existence of "lensing effects" at leading twist, such as the T {\displaystyle T} -odd "Sivers effect" in spin-dependent semi-inclusive deep-inelastic scattering, was first demonstrated using light-front methods. [ 25 ]
Light-front quantization is thus the natural framework for the description of the nonperturbative relativistic bound-state structure of hadrons in quantum chromodynamics. The formalism is rigorous, relativistic, and frame-independent. However, there exist subtle problems in LFQCD that require thorough investigation. For example, the complexities of the vacuum in the usual instant-time formulation, such as the Higgs mechanism and condensates in ϕ 4 {\displaystyle \phi ^{4}} theory, have their counterparts in zero modes or, possibly, in additional terms in the LFQCD Hamiltonian that are allowed by power counting. [ 26 ] Light-front considerations of the vacuum as well as the problem of achieving full covariance in LFQCD require close attention to the light-front singularities and zero-mode contributions. [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] The truncation of the light-front Fock-space calls for the introduction of effective quark and gluon degrees of freedom to overcome truncation effects. Introduction of such effective degrees of freedom is what one desires in seeking the dynamical connection between canonical (or current) quarks and effective (or constituent) quarks that Melosh sought, and Gell-Mann advocated, as a method for truncating QCD.
The light-front Hamiltonian formulation thus opens access to QCD at the amplitude level and is poised to become the foundation for a common treatment of spectroscopy and the parton structure of hadrons in a single covariant formalism, providing a unifying connection between low-energy and high-energy experimental data that so far remain largely disconnected.
Front-form relativistic quantum mechanics was introduced by Paul Dirac in a 1949 paper published in Reviews of Modern Physics. [ 4 ] Light-front quantum field theory is the front-form representation of local relativistic quantum field theory.
The relativistic invariance of a quantum theory means that the observables (probabilities, expectation values and ensemble averages) have the same values in all inertial coordinate systems. Since different inertial coordinate systems are related by inhomogeneous Lorentz transformations ( Poincaré transformations), this requires that the Poincaré group is a symmetry group of the theory. Wigner [ 38 ] and Bargmann [ 39 ] showed that this symmetry must be realized by a unitary representation of the connected component of the Poincaré group on the Hilbert space of the quantum theory. The Poincaré symmetry is a dynamical symmetry because Poincaré transformations mix both space and time variables. The dynamical nature of this symmetry is most easily seen by noting that the Hamiltonian appears on the right-hand side of three of the commutators of the Poincaré generators, [ K j , P k ] = i δ j k H {\displaystyle [K^{j},P^{k}]=i\delta ^{jk}H} , where P k {\displaystyle P^{k}} are components of the linear momentum and K j {\displaystyle K^{j}} are components of rotation-less boost generators. If the Hamiltonian includes interactions, i.e. H = H 0 + V {\displaystyle H=H_{0}+V} , then the commutation relations cannot be satisfied unless at least three of the Poincaré generators also include interactions.
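The counting behind this statement can be spelled out. Splitting the Hamiltonian as H = H_0 + V, the quoted commutator gives, for each diagonal pair j = k,

```latex
% With H = H_0 + V the commutator must close on the full Hamiltonian,
[K^{j},P^{k}] = i\,\delta^{jk}\,(H_0 + V),
% while purely free generators only reproduce the free part,
[K_0^{j},P_0^{k}] = i\,\delta^{jk}\,H_0 ,
```

so for each of the three values j = k = 1, 2, 3 at least one of K^j or P^j must itself contain the interaction, which is the minimum of three interacting generators besides H referred to above.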
Dirac's paper [ 4 ] introduced three distinct ways to minimally include interactions in the Poincaré Lie algebra . He referred to the different minimal choices as the "instant-form", "point-form" and "front-form" of the dynamics. Each "form of dynamics" is characterized by a different interaction-free (kinematic) subgroup of the Poincaré group. In Dirac's instant-form dynamics the kinematic subgroup is the three-dimensional Euclidean subgroup generated by spatial translations and rotations, in Dirac's point-form dynamics the kinematic subgroup is the Lorentz group and in Dirac's "light-front dynamics" the kinematic subgroup is the group of transformations that leave a three-dimensional hyperplane tangent to the light cone invariant.
A light front is a three-dimensional hyperplane defined by the condition x + = x 0 + n ^ ⋅ x → = 0 {\displaystyle x^{+}=x^{0}+{\hat {n}}\cdot {\vec {x}}=0} ,
with x 0 = c t {\displaystyle x^{0}=ct} , where the usual convention is to choose n ^ = z ^ {\displaystyle {\hat {n}}={\hat {z}}} .
Coordinates of points on the light-front hyperplane are x _ = ( x − , x → ⊥ ) {\displaystyle {\underline {x}}=(x^{-},{\vec {x}}_{\perp })} , where x − = x 0 − n ^ ⋅ x → {\displaystyle x^{-}=x^{0}-{\hat {n}}\cdot {\vec {x}}} and x → ⊥ {\displaystyle {\vec {x}}_{\perp }} are the components of x → {\displaystyle {\vec {x}}} perpendicular to n ^ {\displaystyle {\hat {n}}} .
The Lorentz invariant inner product of two four-vectors , x {\displaystyle x} and y {\displaystyle y} , can be expressed in terms of their light-front components as x ⋅ y = 1 2 ( x + y − + x − y + ) − x → ⊥ ⋅ y → ⊥ {\displaystyle x\cdot y={\tfrac {1}{2}}(x^{+}y^{-}+x^{-}y^{+})-{\vec {x}}_{\perp }\cdot {\vec {y}}_{\perp }} .
In a front-form relativistic quantum theory the three interacting generators of the Poincaré group are P − := H − P → ⋅ n ^ {\displaystyle P^{-}:=H-{\vec {P}}\cdot {\hat {n}}} , the generator of translations normal to the light front, and J → ⊥ := J → − n ^ ( n ^ ⋅ J → ) {\displaystyle {\vec {J}}_{\perp }:={\vec {J}}-{\hat {n}}({\hat {n}}\cdot {\vec {J}})} , the generators of rotations transverse to the light-front. P − {\displaystyle P^{-}} is called the "light-front" Hamiltonian.
The kinematic generators, which generate transformations tangent to the light front, are free of interaction. These include P + := H + P → ⋅ n ^ {\displaystyle P^{+}:=H+{\vec {P}}\cdot {\hat {n}}} and P → ⊥ := P → − n ^ ( n ^ ⋅ P → ) {\displaystyle {\vec {P}}_{\perp }:={\vec {P}}-{\hat {n}}({\hat {n}}\cdot {\vec {P}})} ,
which generate translations tangent to the light front, J 3 := n ^ ⋅ J → {\displaystyle J_{3}:={\hat {n}}\cdot {\vec {J}}} which generates rotations about the n ^ {\displaystyle {\hat {n}}} axis, and the generators K 3 := n ^ ⋅ K → {\displaystyle K_{3}:={\hat {n}}\cdot {\vec {K}}} , E 1 {\displaystyle E_{1}} and E 2 {\displaystyle E_{2}} of light-front preserving boosts,
which form a closed subalgebra .
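The closure of this subalgebra can be checked numerically. The sketch below is illustrative: it writes the generators as real 4x4 matrices acting on (t, x, y, z), adopts the convention n̂ = ẑ, and identifies the light-front transverse boost generators with the combinations E_1 = K_1 + J_2 and E_2 = K_2 - J_1 (a choice of convention, not taken from the text); it then verifies that K_3, E_1, E_2 close under commutation and that E_1, E_2 leave x^+ = t + z unchanged while K_3 only rescales it.

```python
import numpy as np

def gen(pairs):
    """Build a real 4x4 Lorentz generator from (row, col, value) entries."""
    m = np.zeros((4, 4))
    for i, j, v in pairs:
        m[i, j] = v
    return m

# Boost and rotation generators acting on column vectors (t, x, y, z).
K1 = gen([(0, 1, 1), (1, 0, 1)])
K2 = gen([(0, 2, 1), (2, 0, 1)])
K3 = gen([(0, 3, 1), (3, 0, 1)])
J1 = gen([(2, 3, -1), (3, 2, 1)])
J2 = gen([(3, 1, -1), (1, 3, 1)])

# Transverse light-front boost generators for the convention x^+ = t + z.
E1 = K1 + J2
E2 = K2 - J1

comm = lambda a, b: a @ b - b @ a

# Closure of {K3, E1, E2}: [E1, E2] = 0, [K3, E1] = -E1, [K3, E2] = -E2
# (the signs of the structure constants depend on conventions).
print(np.allclose(comm(E1, E2), np.zeros((4, 4))))   # True
print(np.allclose(comm(K3, E1), -E1))                # True
print(np.allclose(comm(K3, E2), -E2))                # True

# Action on x^+ = t + z: E1 and E2 leave it unchanged, K3 merely rescales it.
n_plus = np.array([1.0, 0.0, 0.0, 1.0])              # row vector extracting t + z
x = np.array([0.7, 0.3, -1.2, 0.5])
print(np.isclose(n_plus @ (E1 @ x), 0.0))            # True
print(np.isclose(n_plus @ (E2 @ x), 0.0))            # True
print(np.isclose(n_plus @ (K3 @ x), n_plus @ x))     # True
```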
Light-front quantum theories have the following distinguishing properties:
These properties have consequences that are useful in applications.
There is no loss of generality in using light-front relativistic quantum theories. For systems of a finite number of degrees of freedom there are explicit S {\displaystyle S} -matrix-preserving unitary transformations that transform theories with light-front kinematic subgroups to equivalent theories with instant-form or point-form kinematic subgroups. One expects that this is true in quantum field theory, although establishing the equivalence requires a nonperturbative definition of the theories in different forms of dynamics.
Canonical commutation relations at equal time are the centerpiece of the canonical method for quantizing fields. In the standard quantization method (the "Instant Form" in Dirac's classification of relativistic dynamics [ 4 ] ), the relations are, for example for a spin-0 field ϕ {\displaystyle \phi } and its canonical conjugate π {\displaystyle \pi } :
I n s t a n t F o r m : [ ϕ ( t , x → ) , ϕ ( t , y → ) ] = 0 , [ π ( t , x → ) , π ( t , y → ) ] = 0 , [ ϕ ( t , x → ) , π ( t , y → ) ] = i ℏ δ 3 ( x → − y → ) , {\displaystyle {\rm {Instant~Form:}}~~[\phi (t,{\vec {x}}),\phi (t,{\vec {y}})]=0,\ \ [\pi (t,{\vec {x}}),\pi (t,{\vec {y}})]=0,\ \ [\phi (t,{\vec {x}}),\pi (t,{\vec {y}})]=i\hbar \delta ^{3}({\vec {x}}-{\vec {y}}),}
where the relations are taken at equal time t {\displaystyle t} , and x → {\displaystyle {\vec {x}}} and y → {\displaystyle {\vec {y}}} are the space variables. The equal-time requirement imposes that x → − y → {\displaystyle {\vec {x}}-{\vec {y}}} is a spacelike quantity. The non-zero value of the commutator [ ϕ ( t , x → ) , π ( t , y → ) ] {\displaystyle [\phi (t,{\vec {x}}),\pi (t,{\vec {y}})]} expresses the fact that when ϕ {\displaystyle \phi } and π {\displaystyle \pi } are separated by a spacelike distance, they cannot communicate with each other and thus commute, except when their separation x → − y → → 0 {\displaystyle {\vec {x}}-{\vec {y}}\to 0} . [ 40 ]
In the Light-Front form, however, fields at equal Light-Front time x + {\displaystyle x^{+}} are causally linked (i.e., they can communicate), since the Light-Front time x + ≡ c t + z {\displaystyle x^{+}\equiv ct+z} is along the light cone. Consequently, the Light-Front canonical commutation relations are different. For instance: [ 41 ]
L i g h t − F r o n t f o r m : [ ϕ ( x + , x → ) , ϕ ( x + , y → ) ] = i 4 ϵ ( x − − y − ) δ 2 ( x ⊥ → − y ⊥ → ) , {\displaystyle {\rm {Light-Front~form:}}~~[\phi (x^{+},{\vec {x}}),\phi (x^{+},{\vec {y}})]={\frac {i}{4}}\epsilon (x^{-}-y^{-})\delta ^{2}({\vec {x_{\bot }}}-{\vec {y_{\bot }}}),}
where ϵ ( x ) = θ ( x ) − θ ( − x ) {\displaystyle \epsilon (x)=\theta (x)-\theta (-x)} is the sign function , written as an antisymmetric combination of Heaviside step functions .
On the other hand, the commutation relations for the creation and annihilation operators are similar for both the Instant and Light-Front forms:
I n s t a n t F o r m : [ a ( t , k → ) , a ( t , l → ) ] = 0 , [ a † ( t , k → ) , a † ( t , l → ) ] = 0 , [ a ( t , k → ) , a † ( t , l → ) ] = ℏ δ 3 ( k → − l → ) . {\displaystyle {\rm {Instant~Form:}}~~[a(t,{\vec {k}}),a(t,{\vec {l}})]=0,\ \ [a^{\dagger }(t,{\vec {k}}),a^{\dagger }(t,{\vec {l}})]=0,\ \ [a(t,{\vec {k}}),a^{\dagger }(t,{\vec {l}})]=\hbar \delta ^{3}({\vec {k}}-{\vec {l}}).}
L i g h t − F r o n t f o r m : [ a ( x + , k → ) , a ( x + , l → ) ] = 0 , [ a † ( x + , k → ) , a † ( x + , l → ) ] = 0 , [ a ( x + , k → ) , a † ( x + , l → ) ] = ℏ δ ( k + − l + ) δ 2 ( k ⊥ → − l ⊥ → ) . {\displaystyle {\rm {Light-Front~form:}}~~[a(x^{+},{\vec {k}}),a(x^{+},{\vec {l}})]=0,\ \ [a^{\dagger }(x^{+},{\vec {k}}),a^{\dagger }(x^{+},{\vec {l}})]=0,\ \ [a(x^{+},{\vec {k}}),a^{\dagger }(x^{+},{\vec {l}})]=\hbar \delta (k^{+}-l^{+})\delta ^{2}({\vec {k_{\bot }}}-{\vec {l_{\bot }}}).}
where k → {\displaystyle {\vec {k}}} and l → {\displaystyle {\vec {l}}} are the wavevectors of the fields, k + = k 0 + k 3 {\displaystyle k^{+}=k_{0}+k_{3}} and l + = l 0 + l 3 {\displaystyle l^{+}=l_{0}+l_{3}} .
In general if one multiplies a Lorentz boost on the right by a momentum-dependent rotation, which leaves the rest vector unchanged, the result is a different type of boost. In principle there are as many different kinds of boosts as there are momentum-dependent rotations. The most common choices are rotation-less boosts, helicity boosts, and light-front boosts. The light-front boost ( 4 ) is a Lorentz boost that leaves the light front invariant.
The light-front boosts are not only members of the light-front kinematic subgroup, but they also form a closed three-parameter subgroup. This has two consequences. First, because the boosts do not involve interactions, the unitary representations of light-front boosts of an interacting system of particles are tensor products of single-particle representations of light-front boosts. Second, because these boosts form a subgroup, arbitrary sequences of light-front boosts that return to the starting frame do not generate Wigner rotations.
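The absence of Wigner rotations can be made concrete numerically. The sketch below is illustrative and uses the same matrix conventions and generator combinations as the previous sketch (E_1 = K_1 + J_2, E_2 = K_2 - J_1, x^+ = t + z), together with scipy's matrix exponential: it composes two light-front boosts, reads off the four-momentum they give to a particle initially at rest, and checks that the light-front boost constructed directly for that momentum reproduces the composition exactly, with no residual rotation.

```python
import numpy as np
from scipy.linalg import expm

def gen(pairs):
    m = np.zeros((4, 4))
    for i, j, v in pairs:
        m[i, j] = v
    return m

# Real 4x4 generators on (t, x, y, z); convention x^+ = t + z as before.
K1 = gen([(0, 1, 1), (1, 0, 1)]); K2 = gen([(0, 2, 1), (2, 0, 1)])
K3 = gen([(0, 3, 1), (3, 0, 1)])
J1 = gen([(2, 3, -1), (3, 2, 1)]); J2 = gen([(3, 1, -1), (1, 3, 1)])
E1, E2 = K1 + J2, K2 - J1

def lf_boost(p, m):
    """Light-front boost taking the rest vector (m,0,0,0) to p = (E,px,py,pz)."""
    E, px, py, pz = p
    p_plus = E + pz
    return expm((px / p_plus) * E1 + (py / p_plus) * E2) @ expm(np.log(p_plus / m) * K3)

m = 1.0
rest = np.array([m, 0.0, 0.0, 0.0])

# Two arbitrary elements of the light-front boost subgroup and their product.
A = expm(0.4 * E1 - 0.7 * E2 + 0.3 * K3)
B = expm(-0.2 * E1 + 0.5 * E2 - 0.6 * K3)
C = B @ A

# The light-front boost built directly for the resulting momentum reproduces the
# product exactly: the composition leaves no residual (Wigner) rotation behind.
p = C @ rest
residual = np.linalg.inv(lf_boost(p, m)) @ C
print(np.allclose(residual, np.eye(4), atol=1e-10))  # True
```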
The spin of a particle in a relativistic quantum theory is the angular momentum of the particle in its rest frame . Spin observables are defined by boosting the particle's angular momentum tensor to the particle's rest frame
where Λ − 1 ( p ) μ ν {\displaystyle \Lambda ^{-1}(p)^{\mu }{}_{\nu }} is a Lorentz boost that
transforms p μ {\displaystyle p^{\mu }} to ( m , 0 → ) {\displaystyle (m,{\vec {0}})} .
The components of the resulting spin vector, j → {\displaystyle {\vec {j}}} , always
satisfy S U ( 2 ) {\displaystyle SU(2)} commutation relations, but the individual components will
depend on the choice of boost Λ − 1 ( P ) μ ν {\displaystyle \Lambda ^{-1}(P)^{\mu }{}_{\nu }} .
The light-front components of the spin are obtained by choosing Λ − 1 ( P ) k μ {\displaystyle \Lambda ^{-1}(P)^{k}{}_{\mu }} to be the inverse of the light-front
preserving boost, ( 4 ).
The light-front components of the spin are the components of the spin measured in the particle's rest frame after transforming the particle to its rest frame with the light-front preserving boost ( 4 ).
The light-front spin is invariant with respect to light-front preserving-boosts because these boosts do not generate Wigner rotations. The component of this spin along the n ^ {\displaystyle {\hat {n}}} direction is called the light-front helicity. In addition to being invariant, it is also a kinematic observable, i.e. free of interactions. It is called a helicity because the spin quantization axis is determined by the orientation of the light front. It differs from the Jacob–Wick helicity, where the quantization axis is determined by the direction of the momentum.
These properties simplify the computation of current matrix elements because (1) initial and final states in different frames are related by kinematic Lorentz transformations, (2) the one-body contributions to the current matrix, which are important for hard scattering, do not mix with the interaction-dependent parts of the current under light front boosts and (3) the light-front helicities remain invariant with respect to the light-front boosts. Thus, light-front helicity is conserved by every interaction at every vertex.
Because of these properties, front-form quantum theory is the only form of relativistic dynamics that has true "frame-independent" impulse approximations, in the sense that one-body current operators remain one-body operators in all frames related by light-front boosts and the momentum transferred to the system is identical to the momentum transferred to the constituent particles. Dynamical constraints, which follow from rotational covariance and current covariance, relate matrix elements with different magnetic quantum numbers . This means that consistent impulse approximations can only be applied to linearly independent current matrix elements.
A second unique feature of light-front quantum theory follows because the operator P + {\displaystyle P^{+}} is non-negative and kinematic. The kinematic feature means that the generator P + {\displaystyle P^{+}} is the sum of the non-negative single-particle P i + {\displaystyle P_{i}^{+}} generators, ( P + = ∑ i P i + ) {\displaystyle P^{+}=\sum _{i}P_{i}^{+})} . It follows that if P + {\displaystyle P^{+}} is zero on a state, then each of the individual P i + {\displaystyle P_{i}^{+}} must also vanish on the state.
In perturbative light-front quantum field theory this property leads to a suppression of a large class of diagrams, including all vacuum diagrams, which have zero internal P + {\displaystyle P^{+}} . The condition P + = 0 {\displaystyle P^{+}=0} corresponds to infinite momentum ( − P 3 → H ) {\displaystyle (-P^{3}\to H)} . Many of the simplifications of light-front quantum field theory are realized in the infinite momentum limit [ 42 ] [ 43 ] of ordinary canonical field theory (see #Infinite momentum frame ).
An important consequence of the spectral condition on P + {\displaystyle P^{+}} and the subsequent suppression of the vacuum diagrams in perturbative field theory is that the perturbative vacuum is the same as the free-field vacuum. This results in one of the great simplifications of light-front quantum field theory, but it also leads to some puzzles with regard to the formulation of theories with spontaneously broken symmetries .
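A simple way to see why the positivity of P^+ trivializes the vacuum is discretized light-cone quantization (DLCQ), in which each constituent carries a positive integer number of units of P^+. The counting sketch below is generic (it is not tied to any particular theory): it enumerates the Fock configurations available at a fixed total integer momentum K and shows that the K = 0 sector contains only the empty, free-vacuum configuration.

```python
from itertools import combinations_with_replacement

def fock_configurations(K, max_partons=6):
    """Unordered assignments of positive integer k^+ units to partons summing to K."""
    configs = []
    for n in range(max_partons + 1):
        if n == 0:
            if K == 0:
                configs.append(())            # the bare Fock vacuum
            continue
        for ks in combinations_with_replacement(range(1, K + 1), n):
            if sum(ks) == K:
                configs.append(ks)
    return configs

# Total K = 0: only the empty configuration survives, because every constituent
# must carry k^+ >= 1, so the P^+ = 0 sector contains nothing but the free vacuum.
print(fock_configurations(0))   # [()]

# A nonzero total momentum, in contrast, supports many Fock configurations.
print(fock_configurations(4))   # [(4,), (1, 3), (2, 2), (1, 1, 2), (1, 1, 1, 1)]
```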
Sokolov [ 44 ] [ 45 ] demonstrated that relativistic quantum theories based on different forms of dynamics are related by S {\displaystyle S} -matrix-preserving unitary transformations. The equivalence in field theories is more complicated because the definition of the field theory requires a redefinition of the ill-defined local operator products that appear in the dynamical generators. This is achieved through renormalization. At the perturbative level, the ultraviolet divergences of a canonical field theory are replaced by a mixture of ultraviolet and infrared ( P + = 0 ) {\displaystyle (P^{+}=0)} divergences in light-front field theory. These have to be renormalized in a manner that recovers the full rotational covariance and maintains the S {\displaystyle S} -matrix equivalence. The renormalization of light front field theories is discussed in Light-front computational methods#Renormalization group .
One of the properties of the classical wave equation is that the light-front is a characteristic surface for the initial value problem. This means the data on the light front is insufficient to generate a unique evolution off of the light front. If one thinks in purely classical terms one might anticipate that this problem could lead to an ill-defined quantum theory upon quantization.
In the quantum case the problem is to find a set of ten self-adjoint operators that satisfy the Poincaré Lie algebra. In the absence of interactions, Stone's theorem applied to tensor products of known unitary irreducible representations of the Poincaré group gives a set of self-adjoint light-front generators with all of the required properties. The problem of adding interactions is no different [ 46 ] than it is in non-relativistic quantum mechanics, except that the added interactions also need to preserve the commutation relations.
There are, however, some related observations. One is that if one takes seriously the classical picture of evolution off of surfaces with different values of x + {\displaystyle x^{+}} , one finds that the surfaces with x + ≠ 0 {\displaystyle x^{+}\not =0} are only invariant under a six parameter subgroup. This means that if one chooses a quantization surface with a fixed non-zero value of x + {\displaystyle x^{+}} , the resulting quantum theory would require a fourth interacting generator. This does not happen in light-front quantum mechanics; all seven kinematic generators remain kinematic. The reason is that the choice of light front is more closely related to the choice of kinematic subgroup, than the choice of an initial value surface.
In quantum field theory, the vacuum expectation values of products of two fields restricted to the light front are not well-defined distributions on test functions restricted to the light front. They become well-defined distributions only on functions of all four space-time variables. [ 47 ] [ 48 ]
The dynamical nature of rotations in light-front quantum theory means
that preserving full rotational invariance is non-trivial. In field
theory, Noether's theorem provides explicit expressions for the
rotation generators, but truncations to a finite number of degrees of
freedom can lead to violations of rotational invariance. The general
problem is how to construct dynamical rotation generators that satisfy
Poincaré commutation relations with P − {\displaystyle P^{-}} and the rest of the
kinematic generators. A related problem is how the rotational symmetry of the theory is recovered, given that the choice of orientation of the light front manifestly breaks it.
Given a dynamical unitary representation of rotations, U ( R ) {\displaystyle U(R)} , the product U 0 ( R ) U † ( R ) {\displaystyle U_{0}(R)U^{\dagger }(R)} of a kinematic rotation with the inverse of the corresponding dynamical rotation is a unitary operator that (1) preserves the S {\displaystyle S} -matrix and (2) changes the kinematic subgroup to a kinematic subgroup with a rotated light front, n ^ ′ = R n ^ {\displaystyle {\hat {n}}'=R{\hat {n}}} . Conversely, if the S {\displaystyle S} -matrix is invariant with respect to changing the orientation of the light-front, then the dynamical unitary representation of rotations, U ( R ) {\displaystyle U(R)} , can be constructed using the generalized wave operators for different orientations of the light front [ 49 ] [ 50 ] [ 51 ] [ 52 ] [ 53 ] and the kinematic representation of rotations
Because the dynamical input to the S {\displaystyle S} -matrix is P − {\displaystyle P^{-}} , the invariance of the S {\displaystyle S} -matrix with respect to changing the orientation of the light front implies the existence of a consistent dynamical rotation generator without the need to explicitly construct that generator. The success or failure of this approach is related to ensuring the correct rotational properties of the asymptotic states used to construct the wave operators, which in turn requires that the subsystem bound states transform irreducibly with respect to S U ( 2 ) {\displaystyle SU(2)} .
These observations make it clear that the rotational covariance of the theory is encoded in the choice of light-front Hamiltonian. Karmanov [ 54 ] [ 55 ] [ 56 ] introduced a covariant formulation of light-front quantum theory, where the orientation of the light front is treated as a degree of freedom. This formalism can be used to identify observables that do not depend on the orientation, n ^ {\displaystyle {\hat {n}}} , of the light front (see #Covariant formulation ).
While the light-front components of the spin are invariant under light-front boosts, they Wigner rotate under rotation-less boosts and ordinary rotations. Under rotations the light-front components of the single-particle spins of different particles experience different Wigner rotations. This means that the light-front spin components cannot be directly coupled using the standard rules of angular momentum addition. Instead, they must first be transformed to the more standard canonical spin components, which have the property that the Wigner rotation associated with an ordinary rotation is that rotation itself. The spins can then be added using the standard rules of angular momentum addition and the resulting composite canonical spin components can be transformed back to the light-front composite spin components. The transformations between the different types of spin components are called Melosh rotations. [ 57 ] [ 58 ] They are the momentum-dependent rotations constructed by composing a light-front boost with the inverse of the corresponding rotation-less boost. In order to also add the relative orbital angular momenta, these must likewise be converted to a representation where they Wigner rotate with the spins.
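The construction just described can be carried out explicitly in the four-vector representation. The sketch below is illustrative and reuses the generator conventions of the earlier sketches (n̂ = ẑ, x^+ = t + z): it builds the Melosh-type rotation as the light-front boost for a given momentum followed by the inverse of the corresponding rotation-less boost, and checks that the result is a pure spatial rotation which reduces to the identity when the transverse momentum vanishes.

```python
import numpy as np
from scipy.linalg import expm

def gen(pairs):
    m = np.zeros((4, 4))
    for i, j, v in pairs:
        m[i, j] = v
    return m

K1 = gen([(0, 1, 1), (1, 0, 1)]); K2 = gen([(0, 2, 1), (2, 0, 1)])
K3 = gen([(0, 3, 1), (3, 0, 1)])
J1 = gen([(2, 3, -1), (3, 2, 1)]); J2 = gen([(3, 1, -1), (1, 3, 1)])
E1, E2 = K1 + J2, K2 - J1          # light-front transverse boosts, x^+ = t + z

def lf_boost(p, m):
    """Light-front boost taking (m,0,0,0) to p = (E,px,py,pz)."""
    E, px, py, pz = p
    p_plus = E + pz
    return expm((px / p_plus) * E1 + (py / p_plus) * E2) @ expm(np.log(p_plus / m) * K3)

def canonical_boost(p, m):
    """Rotation-less (canonical) boost taking (m,0,0,0) to p."""
    E, *pv = p
    pv = np.asarray(pv)
    B = np.eye(4)
    B[0, 0] = E / m
    B[0, 1:] = B[1:, 0] = pv / m
    B[1:, 1:] += np.outer(pv, pv) / (m * (E + m))
    return B

m = 1.0
p3 = np.array([0.6, -0.3, 1.1])                    # an arbitrary three-momentum
p = np.concatenate(([np.sqrt(m**2 + p3 @ p3)], p3))

# Melosh-type rotation: light-front boost followed by the inverse canonical boost.
R = np.linalg.inv(canonical_boost(p, m)) @ lf_boost(p, m)
print(np.allclose(R @ np.array([1.0, 0, 0, 0]), [1, 0, 0, 0]))  # fixes the time axis
print(np.allclose(R[1:, 1:] @ R[1:, 1:].T, np.eye(3)))          # spatial block is orthogonal

# With vanishing transverse momentum the two boosts coincide and R is the identity.
pz_only = np.array([np.sqrt(m**2 + 0.8**2), 0.0, 0.0, 0.8])
R0 = np.linalg.inv(canonical_boost(pz_only, m)) @ lf_boost(pz_only, m)
print(np.allclose(R0, np.eye(4)))                               # True
```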
While the problem of adding spins and internal orbital angular momenta is more complicated, [ 59 ] it is only total angular
momentum that requires interactions; the total spin does not necessarily require an interaction dependence. Where the interaction dependence explicitly appears is in the relation between the total spin and the total angular momentum [ 58 ] [ 60 ]
where here P − {\displaystyle P^{-}} and M {\displaystyle M} contain interactions. The transverse components of the light-front spin, j → ⊥ {\displaystyle {\vec {j}}_{\perp }} may or may not have an interaction dependence; however, if one also demands cluster properties, [ 61 ] then the transverse components of
total spin necessarily have an interaction dependence. The result is that by choosing the light front components of the spin to be kinematic it is possible to realize full rotational invariance at the expense of cluster properties. Alternatively it is easy to realize cluster properties at the expense of full rotational symmetry. For models of a finite number of degrees of freedom there are constructions that realize both full rotational covariance and cluster properties; [ 62 ] these realizations all have additional many-body interactions in the generators that are functions of fewer-body interactions.
The dynamical nature of the rotation generators means that tensor and spinor operators, whose commutation relations with the rotation generators are linear in the components of these operators, impose dynamical constraints that relate different components of these operators.
The strategy for performing nonperturbative calculations in light-front field theory is similar to the strategy used in lattice calculations. In both cases a nonperturbative regularization and renormalization are used to try to construct effective theories of a finite number of degrees of freedom that are insensitive to the eliminated degrees of freedom. In both cases the success of the renormalization program requires that the theory has a fixed point of the renormalization group; however, the details of the two approaches differ. The renormalization methods used in light-front field theory are discussed in Light-front computational methods#Renormalization group . In the lattice case the computation of observables in the effective theory involves the evaluation of large-dimensional integrals, while in the case of light-front field theory solutions of the effective theory involve solving large systems of linear equations. In both cases multi-dimensional integrals and linear systems are sufficiently well understood to formally estimate numerical errors. In practice such calculations can only be performed for the simplest systems. Light-front calculations have the special advantage that the calculations are all in Minkowski space and the results are wave
functions and scattering amplitudes.
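To give a feel for the linear-algebra side of such calculations, the sketch below is a generic stand-in (not a real light-front QCD Hamiltonian): it assembles a large sparse symmetric matrix playing the role of a truncated mass-squared operator and extracts its lowest eigenvalues with a Lanczos-type iterative solver, the class of method mentioned later in connection with basis-function approaches.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Stand-in for a truncated light-front mass-squared operator: a large, sparse,
# symmetric matrix whose banded off-diagonal mimics an interaction coupling
# nearby basis states (all numbers here are arbitrary placeholders).
n = 20000
diag = 1.0 + 0.001 * np.arange(n)
coupling = 0.05 * np.ones(n - 1)
M2 = sp.diags([coupling, diag, coupling], offsets=[-1, 0, 1], format="csr")

# A Lanczos-type iterative solver (ARPACK) extracts the few lowest eigenvalues
# without ever storing the matrix densely.
lowest_m2, eigvecs = eigsh(M2, k=5, which="SA")
print(np.sort(lowest_m2))
```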
While most applications of light-front quantum mechanics are to the light-front formulation of quantum field theory, it is also possible to formulate relativistic quantum mechanics of finite systems of directly interacting particles with a light-front kinematic subgroup. Light-front relativistic quantum mechanics is formulated on the direct sum of tensor products of single-particle Hilbert spaces. The kinematic representation U 0 ( Λ , a ) {\displaystyle U_{0}(\Lambda ,a)} of the Poincaré group on this space is the direct sum of tensor products of the single-particle unitary irreducible representations of the Poincaré group. A front-form dynamics on this space is defined by a dynamical representation of the Poincaré group U ( Λ , a ) {\displaystyle U(\Lambda ,a)} on this space where U ( g ) = U 0 ( g ) {\displaystyle U(g)=U_{0}(g)} when g {\displaystyle g} is in the kinematic subgroup of the Poincare group.
One of the advantages of light-front quantum mechanics is that it is possible to realize exact rotational covariance for system of a finite number of degrees of freedom. The way that this is done is to start with the non-interacting generators of the full Poincaré group, which are sums of single-particle generators, construct the kinematic invariant mass operator, the three kinematic generators of translations tangent to the light-front, the three kinematic light-front boost generators and the three components of the light-front spin operator. The generators are well-defined functions of these operators [ 60 ] [ 63 ] given by ( 1 )
and P − = ( P → ⊥ 2 + M 2 ) / P + {\displaystyle P^{-}=({\vec {P}}_{\perp }^{2}+M^{2})/P^{+}} . Interactions that commute with all of these operators except the kinematic mass are added to the kinematic mass operator to construct a dynamical mass operator. Using this mass operator in ( 1 ) and the expression for P − {\displaystyle P^{-}} gives a set of dynamical Poincare generators with a light-front kinematic subgroup. [ 62 ]
A complete set of irreducible eigenstates can be found by diagonalizing the interacting mass operator in a basis of simultaneous eigenstates of the light-front components of the kinematic momenta, the kinematic mass, the kinematic spin and the projection of the kinematic spin on the n ^ {\displaystyle {\hat {n}}} axis. This is equivalent to solving the center-of-mass Schrödinger equation in non-relativistic quantum mechanics. The resulting mass eigenstates transform irreducibly under the action of the Poincare group. These irreducible representations define the dynamical representation of the Poincare group on the Hilbert space.
This representation fails to satisfy cluster properties, [ 61 ] but this can be restored using a front-form generalization [ 58 ] [ 62 ] of the recursive construction given by Sokolov. [ 44 ]
The infinite momentum frame (IMF) was originally introduced [ 42 ] [ 43 ] to provide a physical interpretation of the Bjorken variable x b j = Q 2 2 M ν {\displaystyle x_{bj}={\frac {Q^{2}}{2M\nu }}} measured in deep inelastic lepton -proton scattering ℓ p → ℓ ′ X {\displaystyle \ell p\to \ell ^{\prime }X} in Feynman's parton model. (Here Q 2 = − q 2 {\displaystyle Q^{2}=-q^{2}} is the square of the spacelike momentum transfer imparted by the lepton and ν = E ℓ − E ℓ ′ {\displaystyle \nu =E_{\ell }-E_{\ell ^{\prime }}} is the energy transferred in the proton's rest frame.) If one considers a hypothetical Lorentz frame where the observer is moving at infinite momentum, P → ∞ {\displaystyle P\to \infty } , in the negative z ^ {\displaystyle {\hat {z}}} direction, then x b j {\displaystyle x_{bj}} can be interpreted as the longitudinal momentum fraction x = k z P z {\displaystyle x={\frac {k^{z}}{P^{z}}}} carried by the struck quark (or "parton") in the incoming fast-moving proton. The structure function of the proton measured in the experiment is then given by the square of its instant-form wave function boosted to infinite momentum.
Formally, there is a simple connection between the Hamiltonian formulation of quantum field theories quantized at fixed time t {\displaystyle t} (the "instant form" ) where the observer is moving at infinite momentum and light-front Hamiltonian theory quantized at fixed light-front time τ = t + z / c {\displaystyle \tau =t+z/c} (the "front form"). A typical energy denominator in the instant-form is 1 / [ E i n i t i a l − E i n t e r m e d i a t e + i ϵ ] {\displaystyle {1/[E_{initial}-E_{intermediate}+i\epsilon ]}} where E i n t e r m e d i a t e = ∑ j E j = ∑ j m 2 + k → j 2 {\displaystyle E_{intermediate}=\sum _{j}E_{j}=\sum _{j}{\sqrt {m^{2}+{\vec {k}}_{j}^{2}}}} is the sum of energies of the particles in the intermediate state. In the IMF, where the observer moves at high momentum P {\displaystyle P} in the negative z ^ {\displaystyle {\hat {z}}} direction, the leading terms in P {\displaystyle P} cancel, and the energy denominator becomes 2 P / [ M 2 − ∑ j [ k ⊥ 2 + m 2 x i ] j + i ϵ ] {\displaystyle 2P/[{\mathcal {M}}^{2}-\sum _{j}{\big [}{k_{\perp }^{2}+{\frac {m^{2}}{x_{i}}}}{\big ]}_{j}+i\epsilon ]} where M 2 {\displaystyle {\mathcal {M}}^{2}} is invariant mass squared of the initial state. Thus, by keeping the terms in 1 P {\displaystyle {\frac {1}{P}}} in the instant form, one recovers the energy denominator which appears in light-front Hamiltonian theory. This correspondence has a physical meaning: measurements made by an observer moving at infinite momentum is analogous to making observations approaching the speed of light—thus matching to the front form where measurements are made along the front of a light wave. An example of an application to quantum electrodynamics
can be found in the work of Brodsky, Roskies and
Suaya. [ 64 ]
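The matching of denominators quoted above follows from expanding the instant-form energies at large observer momentum P. Writing the longitudinal momentum of particle j as x_j P, with the momentum fractions summing to one, a short expansion gives:

```latex
\begin{aligned}
E_j &= \sqrt{m_j^{2} + k_{\perp j}^{2} + x_j^{2}P^{2}}
     = x_j P + \frac{k_{\perp j}^{2} + m_j^{2}}{2\,x_j P} + O(1/P^{3}),
     \qquad \textstyle\sum_j x_j = 1,\\[4pt]
E_{\mathrm{initial}} &= \sqrt{{\mathcal M}^{2} + P^{2}}
     = P + \frac{{\mathcal M}^{2}}{2P} + O(1/P^{3}),\\[4pt]
\frac{1}{E_{\mathrm{initial}}-E_{\mathrm{intermediate}}+i\epsilon}
    &\longrightarrow
      \frac{2P}{\;{\mathcal M}^{2} - \sum_j \dfrac{k_{\perp j}^{2}+m_j^{2}}{x_j} + i\epsilon\;}.
\end{aligned}
```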
The vacuum state in the instant form defined at fixed t {\displaystyle t} is acausal and infinitely complicated. For example, in quantum electrodynamics, bubble graphs of all orders, starting with the e + e − γ {\displaystyle e^{+}e^{-}\gamma } intermediate state, appear in the ground state vacuum; however, as shown by Weinberg, [ 43 ] such vacuum graphs are frame-dependent and formally vanish by powers of 1 / P 2 {\displaystyle 1/P^{2}} as the observer moves at P → ∞ {\displaystyle P\to \infty } . Thus, one can again match the instant form to the front-form formulation where such vacuum loop diagrams do not appear in the QED ground state. This is because the + {\displaystyle +} momentum of each constituent is positive, but must sum to zero in the vacuum state since the + {\displaystyle +} momenta are conserved. However, unlike the instant form, no dynamical boosts are required, and the front form formulation is causal and frame-independent. The infinite momentum frame formalism is useful as an intuitive tool; however, the limit P → ∞ {\displaystyle P\to \infty } is not a rigorous limit, and the need to boost the instant-form wave function introduces complexities.
In light-front coordinates, x + = c t + z {\displaystyle x^{+}=ct+z} , x − = c t − z {\displaystyle x^{-}=ct-z} , the spatial coordinates x , y , z {\displaystyle x,y,z} do not enter symmetrically: the coordinate z {\displaystyle z} is distinguished, whereas x {\displaystyle x} and y {\displaystyle y} do not appear at all. This non-covariant definition destroys the spatial symmetry, which in turn results in difficulties related to the fact that a transformation of the reference frame may change the orientation of the light-front plane. That is, transformations of the reference frame and variations of the orientation of the light-front plane are not decoupled from each other. Since the wave function depends dynamically on the orientation of the plane on which it is defined, under these transformations the light-front wave function is transformed by dynamical operators (depending on the interaction). Therefore, in general, one must know the interaction to go from a given reference frame to a new one. The loss of symmetry between the coordinates z {\displaystyle z} and x , y {\displaystyle x,y} also complicates the construction of states with definite angular momentum, since the latter is a property of the wave function under rotations, which affect all the coordinates x , y , z {\displaystyle x,y,z} .
To overcome this inconvenience, an explicitly covariant version [ 54 ] [ 55 ] [ 56 ] of light-front quantization was developed (reviewed by Carbonell et al. [ 65 ] ), in which the state vector is defined on the light-front plane of
general orientation: ω ⋅ x = ω 0 c t − ω → ⋅ x → = ω 0 t − ω x x − ω y y − ω z z = 0 {\displaystyle \omega \cdot x=\omega _{0}ct-{\vec {\omega }}\cdot {\vec {x}}=\omega _{0}t-\omega _{x}x-\omega _{y}y-\omega _{z}z=0} (instead of c t + z = 0 {\displaystyle ct+z=0} ),
where x = ( c t , x → ) {\displaystyle x=(ct,{\vec {x}})} is a four-dimensional vector in the four-dimensional space-time and ω = ( ω 0 , ω → ) {\displaystyle \omega =(\omega _{0},{\vec {\omega }})} is also a four-dimensional vector with the property ω 2 = ω 0 2 − ω → 2 = 0 {\displaystyle \omega ^{2}=\omega _{0}^{2}-{\vec {\omega }}^{2}=0} . In the particular case ω = ( 1 / c , 0 , 0 , − 1 / c ) {\displaystyle \omega =(1/c,0,0,-1/c)} we come back to the standard construction. In the explicitly covariant formulation the
transformation of the reference frame and the change of orientation of the light-front plane
are decoupled. All the rotations and the Lorentz transformations are purely
kinematical (they do not require knowledge of the interaction), whereas the
(dynamical) dependence on the orientation of the light-front plane is covariantly parametrized
by the wave function dependence on the four-vector ω {\displaystyle \omega } .
Rules of graph techniques have been formulated which, for a given Lagrangian, allow one to calculate the perturbative decomposition of the state vector evolving in the light-front time σ = ω ⋅ x {\displaystyle \sigma =\omega \cdot x} (in contrast to the evolution in the direction x + {\displaystyle x^{+}} or t {\displaystyle t} ). For the instant form of dynamics,
these rules were first developed by Kadyshevsky. [ 66 ] [ 67 ] By these rules, the light-front amplitudes are represented as the integrals over the momenta of particles in intermediate states. These integrals are three-dimensional, and all the four-momenta k i {\displaystyle k_{i}} are on the corresponding mass shells k i 2 = m i 2 {\displaystyle k_{i}^{2}=m_{i}^{2}} ,
in contrast to the Feynman rules containing four-dimensional integrals over the off-mass-shell momenta. However, the calculated light-front amplitudes, being on the mass shell, are in general the off-energy-shell amplitudes. This means that the on-mass-shell four-momenta, which these amplitudes depend on, are not conserved in the direction x − {\displaystyle x^{-}} (or, in general, in the direction ω {\displaystyle \omega } ).
The off-energy shell amplitudes do not coincide with the Feynman amplitudes, and they depend on the orientation of the light-front plane. In the covariant formulation, this dependence is explicit:
the amplitudes are functions of ω {\displaystyle \omega } . This allows one to apply to them in full measure the well known techniques developed for the covariant Feynman amplitudes (constructing the invariant variables, similar to the Mandelstam variables, on which the amplitudes depend; the decompositions, in the case of particles with spins, in invariant amplitudes; extracting electromagnetic form factors; etc.). The irreducible off-energy-shell amplitudes serve as the kernels of equations for the light-front wave functions. The latter ones are found from these equations and used to analyze hadrons and nuclei.
For spinless particles, and in the particular case of ω = ( 1 / c , 0 , 0 , − 1 / c ) {\displaystyle \omega =(1/c,0,0,-1/c)} , the amplitudes found by the rules of covariant graph techniques, after replacement of variables, are reduced to the amplitudes given by the Weinberg rules [ 43 ] in the infinite momentum frame . The dependence on orientation of the light-front plane manifests itself in the dependence of the off-energy-shell Weinberg amplitudes on the variables k → ⊥ i , x i {\displaystyle {\vec {k}}_{\perp i},x_{i}} taken separately but not in some particular combinations like the Mandelstam variables s , t {\displaystyle s,t} .
On the energy shell, the amplitudes do not depend on the four-vector ω {\displaystyle \omega } determining orientation of the corresponding light-front plane. These on-energy-shell amplitudes coincide with the on-mass-shell amplitudes given by the Feynman rules. However, the dependence on ω {\displaystyle \omega } can survive because of approximations.
The covariant formulation is especially useful for constructing the states with definite angular momentum. In this construction, the four-vector ω {\displaystyle \omega } participates on equal footing with other four-momenta, and, therefore, the main part of this problem is reduced to the well known one. For example, as is well known, the wave function of a non-relativistic system, consisting of two spinless particles with the relative momentum k → {\displaystyle {\vec {k}}} and with total angular momentum l {\displaystyle l} , is proportional to the spherical function Y l m ( k → ^ ) {\displaystyle Y_{lm}({\hat {\vec {k}}})} : ψ l m ( k → ) = f ( k ) Y l m ( k ^ ) {\displaystyle \psi _{lm}({\vec {k}})=f(k)Y_{lm}({\hat {k}})} ,
where k ^ = k → / k {\displaystyle {\hat {k}}={\vec {k}}/k} and f ( k ) {\displaystyle f(k)} is a function depending on the modulus k = | k → | {\displaystyle k=|{\vec {k}}|} . The angular momentum operator reads: J → = − i [ k → × ∂ k → ] {\displaystyle {\vec {J}}=-i[{\vec {k}}\times \partial {\vec {k}}]} .
Then the wave function of a relativistic system in the covariant formulation of light-front dynamics takes the analogous form ψ l m ( k → , n ^ ) = f 1 ( k , k → ⋅ n ^ ) Y l m ( k ^ ) + f 2 ( k , k → ⋅ n ^ ) Y l m ( n ^ ) {\displaystyle \psi _{lm}({\vec {k}},{\hat {n}})=f_{1}(k,{\vec {k}}\cdot {\hat {n}})\,Y_{lm}({\hat {k}})+f_{2}(k,{\vec {k}}\cdot {\hat {n}})\,Y_{lm}({\hat {n}})} ,
where n ^ = ω → / | ω → | {\displaystyle {\hat {n}}={\vec {\omega }}/|{\vec {\omega }}|} and f 1 , 2 ( k , k → ⋅ n ^ ) {\displaystyle f_{1,2}(k,{\vec {k}}\cdot {\hat {n}})} are functions depending, in addition to k {\displaystyle k} , on the scalar product k → ⋅ n ^ {\displaystyle {\vec {k}}\cdot {\hat {n}}} .
The variables k {\displaystyle k} , k → ⋅ n ^ {\displaystyle {\vec {k}}\cdot {\hat {n}}} are invariant not only under rotations
of the vectors k → {\displaystyle {\vec {k}}} , n ^ {\displaystyle {\hat {n}}} but also under rotations and the Lorentz
transformations of initial four-vectors k {\displaystyle k} , ω {\displaystyle \omega } . The second contribution ∝ Y l m ( n ^ ) {\displaystyle \propto Y_{lm}({\hat {n}})} means that the operator of the total angular momentum in explicitly covariant light-front dynamics obtains an additional term: J → = − i [ k → × ∂ k → ] − i [ n ^ × ∂ n ^ ] {\displaystyle {\vec {J}}=-i[{\vec {k}}\times \partial {\vec {k}}]-i[{\hat {n}}\times \partial {\hat {n}}]} .
For non-zero spin particles this operator obtains the contribution of the spin operators: [ 49 ] [ 50 ] [ 51 ] [ 52 ] [ 68 ] [ 69 ]
J → = − i [ k → × ∂ k → ] − i [ n ^ × ∂ n ^ ] + s → 1 + s → 2 . {\displaystyle {\vec {J}}=-i[{\vec {k}}\times \partial {\vec {k}}]-i[{\hat {n}}\times \partial {\hat {n}}]+{\vec {s}}_{1}+{\vec {s}}_{2}.}
The fact that the transformations changing the orientation of the light-front plane are dynamical (the corresponding generators of the Poincare group contain interaction) manifests itself in the dependence of the coefficients f 1 , 2 {\displaystyle f_{1,2}} on the scalar product k → ⋅ n ^ {\displaystyle {\vec {k}}\cdot {\hat {n}}} varying when the orientation of the unit vector n ^ {\displaystyle {\hat {n}}} changes (for fixed k → {\displaystyle {\vec {k}}} ). This dependence (together with the dependence on k {\displaystyle k} ) is found from the dynamical equation for the wave function.
A peculiarity of this construction is in the fact that there exists the operator A = ( n ^ ⋅ J → ) 2 {\displaystyle A=({\hat {n}}\cdot {\vec {J}})^{2}} which commutes both with the Hamiltonian and with J → 2 , J z {\displaystyle {\vec {J}}^{2},J_{z}} . Then the states are labeled also by the eigenvalue a {\displaystyle a} of the operator A {\displaystyle A} : ψ = ψ l m a ( k → , n ^ ) {\displaystyle \psi =\psi _{lma}({\vec {k}},{\hat {n}})} .
For given angular momentum l {\displaystyle l} , there are l + 1 {\displaystyle l+1} such states. All of them are degenerate, i.e. they belong to the same mass (in the absence of approximations). However, the wave function should also satisfy the so-called angular condition. [ 55 ] [ 56 ] [ 70 ] [ 71 ] [ 72 ] Once it is satisfied, the solution takes the form of a unique superposition of the states ψ l m a ( k → , n ^ ) {\displaystyle \psi _{lma}({\vec {k}},{\hat {n}})} with different eigenvalues a {\displaystyle a} . [ 56 ] [ 65 ]
The extra contribution − i [ n ^ × ∂ n ^ ] {\displaystyle -i[{\hat {n}}\times \partial {\hat {n}}]} in the light-front angular momentum operator increases the number of spin components in the light-front wave function. For example, the non-relativistic deuteron wave function is determined by two components ( S {\displaystyle S} - and D {\displaystyle D} -waves). In contrast, the relativistic light-front deuteron wave function is determined by six components. [ 68 ] [ 69 ] These components were calculated in the one-boson exchange model. [ 73 ]
The central issue for light-front quantization is the rigorous description of hadrons, nuclei, and systems thereof from first principles in QCD. The main goals of the research using light-front dynamics are:
The nonperturbative analysis of light-front QCD requires the following:
[ 89 ] finite elements, function expansions, [ 90 ] and the complete orthonormal wave functions obtained from AdS/QCD. This will build on the Lanczos-based MPI code developed for nonrelativistic nuclear physics applications and similar codes for Yukawa theory and lower-dimensional supersymmetric Yang–Mills theories.
Understand the role of renormalization group methods, asymptotic
freedom and spectral properties of P + {\displaystyle P^{+}} in quantifying truncation
errors.
x {\displaystyle x} and y {\displaystyle y} , are dynamical. To solve the angular momentum classification problem, the eigenstates and spectra of the sum of squares of these generators must be constructed. This is the price to pay for having more kinematical generators than in equal-time quantization, where all three boosts are dynamical. In light-front quantization, the boost along z {\displaystyle z} is kinematic, and this greatly simplifies the calculation of matrix elements that involve boosts, such as the ones needed to calculate form factors. The relation to covariant Bethe–Salpeter approaches projected on the light-front may help in understanding the angular momentum issue and its relationship to the Fock-space truncation of the light-front Hamiltonian. Model-independent constraints from the general angular condition, which must be satisfied by the light-front helicity amplitudes, should also be explored. The contribution from the zero mode appears necessary for the hadron form factors to satisfy angular momentum conservation, as expressed by the angular condition. The relation to light-front quantum mechanics, where it is possible to exactly realize full rotational covariance and construct explicit representations of the dynamical rotation generators, should also be investigated.
The approximate duality in the limit of massless quarks motivates few-body analyses of meson and baryon spectra based on a one-dimensional light-front Schrödinger equation in terms of the modified transverse coordinate ζ {\displaystyle \zeta } . Models that extend the approach to massive quarks have been proposed, but a more fundamental understanding within QCD is needed. The nonzero quark masses introduce a non-trivial dependence on the longitudinal momentum, and thereby highlight the need to understand the representation of rotational symmetry within the formalism. Exploring AdS/QCD wave functions as part of a physically motivated Fock-space basis set to diagonalize the LFQCD Hamiltonian should shed light on both issues. The complementary Ehrenfest
interpretation [ 97 ] can be used to introduce effective
degrees of freedom such as diquarks in
baryons. | https://en.wikipedia.org/wiki/Light_front_quantization |
A light metal is any metal of relatively low density. [ 1 ] These may be pure elements, but more commonly are metallic alloys . Lithium and then potassium are the two lightest metallic elements.
Magnesium , aluminium and titanium alloys are light metals of significant commercial importance. [ 2 ] Their densities of 1.7, 2.7 and 4.5 g/cm³ range from 19 to 56% of the densities of other structural metals, [ 3 ] such as iron (7.9 g/cm³) and copper (8.9 g/cm³). | https://en.wikipedia.org/wiki/Light_metal
A light non-aqueous phase liquid (LNAPL) is a groundwater contaminant that is not soluble in water and has a lower density than water , in contrast to a DNAPL , which has a higher density than water. Once an LNAPL infiltrates the ground , it stops at the depth of the water table because of its positive buoyancy . Locating and removing LNAPLs is easier and less expensive than for DNAPLs because LNAPLs float on top of the water table.
Examples of LNAPLs are benzene , toluene , xylene , and other hydrocarbons .
This water supply –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Light_non-aqueous_phase_liquid |
Light Scanning Photomacrography (LSP), also known as Scanning Light Photomacrography (SLP) or Deep-Field Photomacrography , is a photographic film technique that allows for high magnification light imaging with exceptional depth of field (DOF). This method overcomes the limitations of conventional macro photography , which typically only keeps a portion of the subject in acceptable focus at high magnifications.
The principles of LSP were first documented in the early 1960s by Dan McLachlan Jr., who highlighted its capability for extreme focal depth in microscopy [ 1 ] and in 1968 patented the process. [ 2 ]
The technique was revived and further developed in the 1980s by photographers such as Darwin Dale and Nile Root, a faculty member at the Rochester Institute of Technology . [ 3 ] In the early 1990s, William Sharp and Charles Kazilek , both researchers at Arizona State University , also published articles describing their technique and system setup for capturing SLP images. [ 4 ]
Light Scanning Photomacrography offered a powerful analog tool for high-detail imaging in the age of film photography . It provided a comprehensive depth of field, making it invaluable in scientific and biomedical photography. [ 5 ] As technology and techniques continue to evolve, LSP has been replaced by digital image focus stacking . This technique uses a collection of images captured in series at different focal depths, which are then processed using computer software to create a single image with a greater focus depth than any single image.
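Focus stacking of this kind can be sketched in a few lines. The example below is a minimal illustration using OpenCV and NumPy, with hypothetical file names: for every pixel it selects the frame of the stack with the strongest local Laplacian response, a common sharpness criterion, and assembles the composite from those selections.

```python
import cv2
import numpy as np

# Hypothetical stack of pre-aligned images taken at different focal depths.
paths = ["slice_00.png", "slice_01.png", "slice_02.png"]
stack = [cv2.imread(p) for p in paths]

def sharpness(img):
    """Per-pixel sharpness: absolute Laplacian of the smoothed grayscale image."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    return np.abs(cv2.Laplacian(gray, cv2.CV_64F))

sharp = np.stack([sharpness(img) for img in stack])   # (n_slices, H, W)
best = np.argmax(sharp, axis=0)                       # sharpest slice index per pixel

# Assemble the composite by taking each pixel from its sharpest slice.
images = np.stack(stack)                              # (n_slices, H, W, 3)
rows, cols = np.indices(best.shape)
composite = images[best, rows, cols]

cv2.imwrite("focus_stacked.png", composite)
```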
LSP involves the use of a thin plane of light that scans across the subject, which is mounted on a stage moving perpendicular to the film plane. The technique utilizes traditional optics and is governed by the physical laws of depth of field. By moving the subject through a narrow band of illumination, the entire subject can be recorded in sharp focus from the nearest details to the farthest ones. This analog process produces sharp and detailed images by slowly recording the image on film as the specimen passes through the sheet of light that is thinner than the effective DOF. [ 4 ]
Because every part of the image is captured at the same relative distance from the camera lens, the resulting images are axonometric projections rather than perspective projections (the latter being what the human eye sees and what a conventional camera records). Because all parts of an LSP image are captured at the same distance from the lens, relative measurements can be taken from an LSP photograph and used for comparison. [ 6 ]
A typical LSP setup includes:
In 1991, Sharp and Kazilek described their SLP system that used three Kodak Ektagraphic slide projectors with zoom lenses to create a thin plane of light. The projectors each had a slide mount with two razor blades placed edge-to-edge to create a thin slit for the light to pass through. The image was captured using a Nikon FE-2 SLR camera mounted above the specimen. Kodachrome 25 slide film was used to record the image and to minimize film grain size and maximize image sharpness [ 7 ]
A commercial SLP instrument was produced by the Irvine Optical Corp. Their DYNAPHOT system was based on a photomacroscope and could capture images on 4x5 film. The instrument came with two or three illumination sources and a motorized specimen stage. The system advertised a 2X – 40X magnification range and the ability to capture images in black and white and color. [ 4 ] Other systems have been developed by Nile Root and Theodore Clarke and reported higher magnification (up to 100X). [ 3 ]
LSP was particularly useful in biomedical photography, where it was used to document magnified subjects with increased depth of field over traditional macro and micro photography. It has been employed to capture detailed images of biological specimens, such as imaging small insects and their parts. SLP has been used to document shell collections for scientific documentation and research. Other applications include forensic science , mineralogy , and the imaging of fractured surfaces and parts [ 8 ] [ 9 ] [ 7 ] [ 10 ]
Enthusiasts and researchers have contributed to the development and accessibility of LSP by creating and sharing DIY guides. These contributions have enabled others to build their own LSP systems using readily available materials and components. Nile Root's publications provide detailed instructions and recommendations for constructing an LSP setup. These DIY systems have allowed a wider audience to explore and utilize the benefits of LSP imaging in various fields. [ 6 ] [ 10 ] [ 4 ] | https://en.wikipedia.org/wiki/Light_scanning_photomacrography |
Light sheet fluorescence microscopy ( LSFM ) is a fluorescence microscopy technique with an intermediate-to-high [ 1 ] optical resolution , but good optical sectioning capabilities and high speed. In contrast to epifluorescence microscopy only a thin slice (usually a few hundred nanometers to a few micrometers) of the sample is illuminated perpendicularly to the direction of observation. For illumination, a laser light-sheet is used, i.e. a laser beam which is focused only in one direction (e.g. using a cylindrical lens). A second method uses a circular beam scanned in one direction to create the lightsheet. As only the actually observed section is illuminated, this method reduces the photodamage and stress induced on a living sample. Also the good optical sectioning capability reduces the background signal and thus creates images with higher contrast, comparable to confocal microscopy . Because light sheet fluorescence microscopy scans samples by using a plane of light instead of a point (as in confocal microscopy), it can acquire images at speeds 100 to 1,000 times faster than those offered by point-scanning methods.
This method is used in cell biology [ 2 ] and for microscopy of intact, often chemically cleared, organs, embryos, and organisms. [ 3 ]
Starting in 1994, light sheet fluorescence microscopy was developed as orthogonal plane fluorescence optical sectioning microscopy or tomography (OPFOS) [ 4 ] mainly for large samples and later as the selective/single plane illumination microscopy (SPIM) also with sub-cellular resolution. [ 5 ] This introduced an illumination scheme into fluorescence microscopy, which has already been used successfully for dark field microscopy under the name ultramicroscopy . [ 6 ]
In this type of microscopy, [ 7 ] the illumination is done perpendicularly to the direction of observation (see schematic image at the top of the article). The expanded beam of a laser is focused in only one direction by a cylindrical lens, or by a combination of a cylindrical lens and a microscope objective as the latter is available in better optical quality and with higher numerical aperture than the first. This way a thin sheet of light or lightsheet is created in the focal region that can be used to excite fluorescence only in a thin slice (usually a few micrometers thin) of the sample.
The fluorescence light emitted from the lightsheet is then collected perpendicularly with a standard microscope objective and projected onto an imaging sensor (usually a CCD , electron-multiplying CCD or CMOS camera ). In order to leave enough space for the excitation optics/lightsheet, an observation objective with a long working distance is used. In most light sheet fluorescence microscopes the detection objective and sometimes also the excitation objective are fully immersed in the sample buffer, so usually the sample and excitation/detection optics are embedded into a buffer-filled sample chamber, which can also be used to control the environmental conditions (temperature, carbon dioxide level ...) during the measurement. The sample mounting in light sheet fluorescence microscopy is described below in more detail.
As both the excitation lightsheet and the focal plane of the detection optics have to coincide to form an image, focusing different parts of the sample can not be done by translating the detection objective, but usually the whole sample is translated and rotated instead.
In recent years, several extensions to this scheme have been developed:
The separation of the illumination and detection beampaths in light sheet fluorescence microscopy (except in oblique plane microscopy ) creates a need for specialized sample mounting methods. To date most light sheet fluorescence microscopes are built in such a way that the illumination and detection beampath lie in a horizontal plane (see illustrations above), thus the sample is usually hanging from the top into the sample chamber or is resting on a vertical support inside the sample chamber. Several methods have been developed to mount all sorts of samples:
Some light sheet fluorescence microscopes have been developed where the sample is mounted as in standard microscopy (e.g. cells grow horizontally on the bottom of a petri dish ) and the excitation and detection optics are constructed in an upright plane from above. This also allows combining a light sheet fluorescence microscope with a standard inverted microscope and avoids the requirement for specialized sample mounting procedures. [ 20 ] [ 30 ] [ 31 ] [ 32 ]
Most light sheet fluorescence microscopes are used to produce 3D images of the sample by moving the sample through the image plane. If the sample is larger than the field of view of the image sensor, the sample also has to be shifted laterally. An alternative approach is to move the image plane through the sample to create the image stack. [ 32 ]
Long experiments can be carried out, for example with stacks recorded every 10 sec–10 min over the timespan of days. This allows study of changes over time in 3D, or so-called 4D microscopy.
After the image acquisition the different image stacks are registered to form one single 3D dataset. Multiple views of the sample can be collected, either by interchanging the roles of the objectives [ 32 ] or by rotating the sample. [ 8 ] Having multiple views can yield more information than a single stack; for example occlusion of some parts of the sample may be overcome. Multiple views also improve 3D image resolution by overcoming the poor axial resolution, as described below.
Some studies also use a selective plane illumination microscope to image only one slice of the sample, but at much higher temporal resolution. This makes it possible, for example, to observe the beating heart of a zebrafish embryo in real time. [ 33 ] Together with fast translation stages for the sample, high-speed 3D particle tracking has been implemented. [ 34 ]
The lateral resolution of a selective plane illumination microscope is comparable to that of a standard (epi) fluorescence microscope, as it is determined fully by the detection objective and the wavelength of the detected light (see Abbe limit ). E.g. for detection in the green spectral region around 525 nm, a resolution of 250–500 nm can be reached. [ 7 ] The axial resolution is worse than the lateral (about a factor of 4), but it can be improved by using a thinner lightsheet in which case nearly isotropic resolution is possible. [ 20 ] Thinner light sheets are either thin only in a small region (for Gaussian beams ) or else specialized beam profiles such as Bessel beams must be used (besides added complexity, such schemes add side lobes which can be detrimental). [ 13 ] Alternatively, isotropic resolution can be achieved by computationally combining 3D image stacks taken from the same sample under different angles. Then the depth-resolution information lacking in one stack is supplied from another stack; for example with two orthogonal stacks the (poor-resolution) axial direction in one stack is a (high-resolution) lateral direction in the other stack.
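As a rough numerical check of the quoted figures, the Abbe limit d = λ / (2·NA) can be evaluated for green emission around 525 nm; the numerical apertures used in this short sketch are illustrative assumptions, not values from a particular instrument.

```python
# Rough Abbe-limit estimate for lateral resolution: d = wavelength / (2 * NA)
wavelength_nm = 525  # green emission, as in the text

for na in (0.5, 0.8, 1.0):  # assumed detection numerical apertures
    d = wavelength_nm / (2 * na)
    print(f"NA = {na:.1f}: lateral resolution ~ {d:.0f} nm")
# Yields values between roughly 260 nm and 525 nm, broadly consistent
# with the quoted 250-500 nm range.
```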
The lateral resolution of light sheet fluorescence microscopy can be improved beyond the Abbe limit by using super resolution microscopy techniques, e.g. by exploiting the fact that single fluorophores can be located with much higher spatial precision than the nominal resolution of the used optical system (see stochastic localization microscopy techniques ). [ 23 ] In Structured Illumination Light Sheet Microscopy , structured illumination techniques have been applied to further improve the optical sectioning capacity of light sheet fluorescence microscopy. [ 24 ]
As the illumination typically penetrates the sample from one side, obstacles lying in the way of the lightsheet can disturb its quality by scattering and/or absorbing the light. This typically leads to dark and bright stripes in the images. If parts of the samples have a significantly higher refractive index (e.g. lipid vesicles in cells), they can also lead to a focusing effect resulting in bright stripes behind these structures. To overcome this artifact, the lightsheet can, for example, be "pivoted". That means that the lightsheet's direction of incidence is changed rapidly (~1 kHz rate) by a few degrees (~10°), so light also hits the regions behind the obstacles. Illumination can also be performed with two (pivoted) lightsheets (see above) to further reduce these artifacts. [ 8 ] Alternatively, the Variational Stationary Noise Remover (VSNR) algorithm has been developed and is available as a free Fiji plugin. [ 35 ]
At the beginning of the 20th century, R. A. Zsigmondy introduced the ultramicroscope as a new illumination scheme into dark-field microscopy. Here sunlight or a white lamp is used to illuminate a precision slit. The slit is then imaged by a condenser lens into the sample to form a lightsheet. Scattering (sub-diffractive) particles can be observed perpendicularly with a microscope. This setup allowed the observation of particles with sizes smaller than the microscope's resolution and led to a Nobel prize for Zsigmondy in 1925. [ 36 ]
The first application of this illumination scheme for fluorescence microscopy was published in 1993 by Voie et al. under the name orthogonal-plane fluorescence optical sectioning (OPFOS) [ 4 ] for imaging of the internal structure of the cochlea . The resolution at that time was limited to 10 µm laterally and 26 µm longitudinally, but at a sample size in the millimeter range. The orthogonal-plane fluorescence optical sectioning microscope used a simple cylindrical lens for illumination. Further development and improvement of the selective plane illumination microscope started in 2004. [ 5 ] After this publication by Huisken et al. the technique found wide application and is still adapted to new measurement situations today (see above). Since 2010 a first ultramicroscope with fluorescence excitation and limited resolution [ 37 ] and since 2012 a first selective plane illumination microscope have been available commercially. [ 38 ] A good overview of the development of selective plane illumination microscopy is given in ref. [ 39 ] Since 2012, open source projects have also started to appear that freely publish complete construction plans for light sheet fluorescence microscopes and also the required software suites. [ 40 ] [ 41 ] [ 42 ] [ 43 ]
Selective plane illumination microscopy/light sheet fluorescence microscopy is often used in developmental biology, where it enables long-term (several days) observations of embryonic development (even with full lineage tree reconstruction). [ 5 ] [ 44 ] In transparent creatures such as the larval zebrafish , light sheet microscopy can image the activity of the nervous system of a live, behaving animal. [ 45 ] Selective plane illumination microscopy can also be combined with techniques like fluorescence correlation spectroscopy to allow spatially resolved mobility measurements of fluorescing particles (e.g. fluorescent beads, quantum dots or fluorescent proteins) inside living biological samples. [ 20 ] [ 21 ]
Strongly scattering biological tissue such as brain or kidney has to be chemically fixed and cleared before it can be imaged in a selective plane illumination microscope. [ 46 ] Special tissue clearing techniques have been developed for this purpose, e.g. 3DISCO , CUBIC and CLARITY . Depending on the index of refraction of the cleared sample, matching immersion fluids and special long-distance objectives must be used during imaging. | https://en.wikipedia.org/wiki/Light_sheet_fluorescence_microscopy |
The light travel time effect is the variation that occurs in the observed timing of the periodic eclipses of binary stars when the binary is disturbed by another massive object.
The periods of the orbits in an undisturbed eclipsing binary star system stay relatively stable, since the center of mass does not change in position. A more massive object can disturb the center of mass of the binary system and thus change the periodic nature of the orbits in the binary. The disturbance caused by this larger object causes the system to be farther away or closer to the observer at times, causing the timings of the eclipses in the binary to change. [ 1 ]
If the binary systems have planets, the more massive object can cause transit-timing variations in the orbiting planets. [ 2 ]
This astronomy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Light_travel_time_effect |
Lightening holes are holes in structural components of machines and buildings used by a variety of engineering disciplines to make structures lighter. The edges of the hole may be flanged to increase the rigidity and strength of the component. [ 1 ] The holes can be circular, triangular, elliptical, or rectangular; they should have rounded edges and never sharp corners, to avoid the risk of stress risers , and they must not be too close to the edge of a structural component. [ 2 ] [ 3 ]
Lightening holes are often used in the aviation industry. This allows an aircraft to be as lightweight as possible, retaining the durability and airworthiness of the aircraft structure. [ 4 ] [ 5 ]
Lightening holes have also been used in marine engineering to increase seaworthiness of the vessel. [ 6 ] [ 7 ] [ 8 ]
Lightening holes became a prominent feature of motor racing in the 1920s and 1930s. Chassis members, suspension components, engine housings and even connecting rods were drilled with a range of holes, of sizes almost as large as the component.
"[The] wisdom of the day was to make everything along the lines of a brick shithouse [...] and then drill holes in the bits to lighten them."
Lightening holes have been used in various military vehicles, aircraft, equipment and weaponry platforms. [ citation needed ] This allows equipment to be lighter in weight as well as more rugged and durable. [ citation needed ] They are usually made by drilling, [ citation needed ] press stamping or machining, and can also save strategic materials and cost during wartime production.
Lightening holes have been used in various architectural designs. [ 10 ] During the 1980s and early 1990s, lightening holes were fashionable and somewhat seen as futuristic, and were used in the likes of industrial units, car showrooms, shopping precincts, sports centres etc. Parsons House in London is a notable building that has used lightening holes since its renovation in 1988. [ 11 ] [ 12 ] Ringwood Health & Leisure Centre in Hampshire is another notable example. [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Lightening_holes
Lightfastness is a property of a colourant such as dye or pigment that describes its resistance to fading when exposed to light. [ 1 ] [ 2 ] [ 3 ] Dyes and pigments are used, for example, for dyeing fabrics , plastics or other materials and for manufacturing paints or printing inks .
The bleaching of the color is caused by the impact of ultraviolet radiation on the chemical structure of the molecules that give the subject its color. The part of a molecule responsible for its color is called the chromophore . [ 4 ] [ 5 ]
Light encountering a painted surface can either alter or break the chemical bonds of the pigment, causing the colors to bleach or change in a process known as photodegradation . [ 6 ] Materials that resist this effect are said to be lightfast . The electromagnetic spectrum of the sun contains wavelengths from gamma waves to radio waves. The high energy of ultraviolet radiation in particular accelerates the fading of the dye. [ 7 ]
The photon energy of UVA radiation, which is not absorbed by atmospheric ozone, exceeds the dissociation energy of the carbon-carbon single bond , resulting in the cleavage of the bond and fading of the color. [ 7 ] Inorganic colourants are considered to be more lightfast than organic colourants. [ 8 ] Black colourants are usually considered the most lightfast. [ 9 ]
Lightfastness is measured by exposing a sample to a light source for a predefined period of time and then comparing it to an unexposed sample. [ 2 ] [ 3 ] [ 10 ]
During exposure to light, colourant molecules undergo various chemical processes which result in fading.
When a UV photon reacts with a molecule acting as a colourant, the molecule is excited from the ground state to an excited state. The excited molecule is highly reactive and unstable. During the quenching of the molecule from the excited state to the ground state, atmospheric triplet oxygen reacts with the colourant molecule to form singlet oxygen and a superoxide radical . The singlet oxygen and the superoxide radical resulting from the reaction are both highly reactive and capable of destroying the colourant. [ 7 ]
Photolysis , i.e., photochemical decomposition, is a chemical reaction in which the compound is broken down by photons. This decomposition occurs when a photon of sufficient energy encounters a colorant molecule bond with a suitable dissociation energy. The reaction causes homolytic cleavage in the chromophoric system, resulting in the fading of the colourant. [ 7 ]
Photo-oxidation , i.e., photochemical oxidation . A colorant molecule, when excited by a photon of sufficient energy, undergoes an oxidation process. In the process the chromophoric system of the colorant molecule reacts with the atmospheric oxygen to form a non-chromophoric system, resulting in fading. Colorants which contain a carbonyl group as the chromophore are particularly vulnerable to oxidation. [ 7 ]
Photo-reduction , i.e., photochemical reduction . A colorant molecule with an unsaturated double bond (typical to alkenes ) or triple bond (typical to alkynes ) acting as a chromophore undergoes reduction in the presence of hydrogen and photons of sufficient energy, forming a saturated chromophoric system. Saturation reduces the length of the chromophoric system, resulting in the fading of the colorant. [ 7 ]
Photosensitization , i.e., photochemical sensitization. Exposing dyed cellulosic material, such as plant-based fibers, to sunlight allows dyes to remove hydrogen from the cellulose, resulting in photoreduction on the cellulosic substrate. Simultaneously, the colorant will undergo oxidation in the presence of the atmospheric oxygen, resulting in photo-oxidation of the colourant. These processes result in both fading of the colorant and strength loss of the substrate. [ 7 ]
Phototendering , i.e., photochemical tendering. As a result of UV light, the substrate material supplies hydrogen to the colourant molecules, reducing the colorant molecule. As the hydrogen is removed, the material undergoes oxidation. [ 7 ]
Some organizations publish standards for rating the lightfastness of pigments and materials. Testing is typically done by controlled exposure to sunlight , or to artificial light generated by a xenon arc lamp . [ 11 ] Watercolors , inks , pastels , and colored pencils are particularly susceptible to fading over time, so choosing lightfast pigments is especially important in these media. [ 1 ]
The most well known scales measuring lightfastness are the Blue Wool Scale , the Grey Scale and the scale defined by ASTM (the American Society for Testing and Materials). [ 11 ] [ 12 ] [ 13 ] [ 14 ] On the Blue Wool Scale the lightfastness is rated between 1–8, 1 being very poor and 8 being excellent lightfastness. On the Grey Scale the lightfastness is rated between 1–5, 1 being very poor and 5 being excellent lightfastness. [ 1 ] [ 2 ] [ 10 ] On the ASTM scale the lightfastness is rated between I–V. I is excellent lightfastness and corresponds to ratings 7–8 on the Blue Wool Scale. V is very poor lightfastness and corresponds to a Blue Wool Scale rating of 1. [ 10 ]
The actual lightfastness depends on the strength of the sun's radiation, so lightfastness is relative to geographic location, season, and exposure direction. The following table lists approximate relations between the lightfastness ratings on different measurement scales, and relates them to time in direct sunlight and to normal conditions of display: away from a window, under indirect sunlight and properly framed behind UV-protective glass. [ 10 ]
The relative amount of fading can be measured and studied by using standard test strips. In the workflow of the Blue Wool test, one reference strip set is stored protected from any exposure to light. Simultaneously, another equivalent test strip set is exposed under a light source defined in the standard. For example, if the lightfastness of the colourant is indicated to be 5 on the Blue Wool scale, it can be expected to fade by a similar amount as strip number 5 in the Blue Wool test strip set. The success of the test can be confirmed by comparing the test strip set with the reference set that was stored protected from the light. [ 12 ] [ 13 ]
In printing, organic pigments are mainly used in the inks, so the shifting or bleaching of the color of a printing product due to the presence of UV light is usually just a matter of time. The use of organic pigments is justified primarily by their low cost compared to inorganic pigments. The particle size of inorganic pigments is often larger than that of organic pigments, so inorganic pigments are often not suitable for offset printing . [ 15 ]
In screen printing , the particle size of the pigment is not the limiting factor. Thus it is the preferred printing method for printing jobs requiring extreme lightfastness. The thickness of the ink layer affects the lightfastness by the amount of pigment laid on the substrate. The ink layer printed by screen printing is thicker than that printed by offset printing. In other words, it contains more pigment per area. This leads to better lightfastness even though the printing ink used in both methods would be based on the same pigment. [ 7 ]
When mixing printing inks, the ink with the weakest lightfastness defines the lightfastness of the whole mixed color. The fading of one of the pigments leads to a tone shift towards the component with better lightfastness. If something must remain visible in the print even after its dominant pigment fades, a small amount of a pigment with excellent lightfastness can be mixed in. | https://en.wikipedia.org/wiki/Lightfastness
In aeroacoustics , Lighthill's eighth power law states that the power of the sound created by a turbulent motion, far from the turbulence, is proportional to the eighth power of the characteristic turbulent velocity, derived by Sir James Lighthill in 1952. [ 1 ] [ 2 ] This is used to calculate the total acoustic power of the jet noise . The law reads as

P ∝ ρ₀ U⁸ ℓ² / c₀⁵,

where P is the radiated acoustic power, ρ₀ is the density of the ambient fluid, U is the characteristic turbulent velocity, ℓ is a characteristic length scale of the turbulence, and c₀ is the speed of sound in the ambient medium.
The eighth power law has been experimentally verified and found to be accurate for low-speed flows, i.e., when the Mach number is small (M < 1). Also, the source has to be compact for this law to apply. | https://en.wikipedia.org/wiki/Lighthill's_eighth_power_law
A lighting control system is intelligent network-based lighting control that incorporates communication between various system inputs and outputs related to lighting control with the use of one or more central computing devices. Lighting control systems are widely used on both indoor and outdoor lighting of commercial, industrial, and residential spaces. Lighting control systems are sometimes referred to under the term smart lighting . Lighting control systems serve to provide the right amount of light where and when it is needed. [ 1 ]
Lighting control systems are employed to maximize the energy savings from the lighting system, satisfy building codes , or comply with green building and energy conservation programs. Lighting control systems may include a lighting technology designed for energy efficiency , convenience and security. This may include high efficiency fixtures and automated controls that make adjustments based on conditions such as occupancy or daylight availability. Lighting is the deliberate application of light to achieve some aesthetic or practical effect (e.g. illumination of a security breach). It includes task lighting , accent lighting , and general lighting.
The term lighting controls is typically used to indicate stand-alone control of the lighting within a space. This may include occupancy sensors , timeclocks, and photocells that are hard-wired to control fixed groups of lights independently. Adjustment occurs manually at each device's location. The efficiency of and market for residential lighting controls has been characterized by the Consortium for Energy Efficiency . [ 2 ]
The term lighting control system refers to an intelligent networked system of devices related to lighting control. These devices may include relays , occupancy sensors , photocells , light control switches or touchscreens , and signals from other building systems (such as fire alarm or HVAC ). Adjustment of the system occurs both at device locations and at central computer locations via software programs or other interface devices.
The major advantage of a lighting control system over stand-alone lighting controls or conventional manual switching is the ability to control individual lights or groups of lights from a single user interface device. This ability to control multiple light sources from a user device allows complex lighting scenes to be created. A room may have multiple scenes pre-set, each one created for different activities in the room. A major benefit of lighting control systems is reduced energy consumption. Longer lamp life is also gained when dimming and switching off lights when not in use. Wireless lighting control systems provide additional benefits including reduced installation costs and increased flexibility over where switches and sensors may be placed. [ 3 ]
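As a sketch of how pre-set scenes might be represented in such a system, the snippet below maps scene names to dimming levels per fixture group; the group names, levels and the set_level driver call are illustrative assumptions, not part of any particular product.

```python
# A "scene" maps each fixture group in a room to a dimming level (0-100 %).
scenes = {
    "presentation": {"downlights": 20, "wall_washers": 60, "screen_zone": 0},
    "meeting":      {"downlights": 80, "wall_washers": 70, "screen_zone": 80},
    "cleaning":     {"downlights": 100, "wall_washers": 100, "screen_zone": 100},
}

def recall_scene(name, set_level):
    """Apply a pre-set scene; set_level(group, percent) is an assumed driver call."""
    for group, percent in scenes[name].items():
        set_level(group, percent)

# Example: recall the "meeting" scene, printing each command instead of driving hardware.
recall_scene("meeting", lambda group, percent: print(f"{group} -> {percent}%"))
```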
Lighting applications represent 19% of the world's energy use and 6% of all greenhouse emissions . [ 4 ] In the United States, 65 percent of energy consumption is used by the commercial and industrial sectors, and 22 percent of this is used for lighting.
Smart lighting enables households and users to remotely control cooling, heating, lighting and appliances, minimizing unnecessary light and energy use. This ability saves energy and provides a level of comfort and convenience. The future success of lighting will require the involvement of a number of stakeholders and stakeholder communities from outside the traditional lighting industry. The concept of smart lighting also involves utilizing natural light from the sun to reduce the use of man-made lighting, and the simple concept of people turning off lighting when they leave a room. [ 5 ]
A smart lighting system can ensure that dark areas are illuminated when in use. The lights actively respond to the activities of the occupants based on sensors and intelligence (logic) that anticipates the lighting needs of an occupant. This can enhance comfort, improve safety, reduce manual effort, and improve energy efficiency.
Lights can be used to deter people from entering areas where they should not be. A security breach, for example, is an event that could trigger floodlights at the breach point. Preventative measures include illuminating key access points (such as walkways) at night and automatically adjusting the lighting when a household is away to make it appear as though there are occupants.
Lighting control systems typically provide the ability to automatically adjust a lighting device's output based on:
Chronological time schedules incorporate specific times of the day, week, month or year.
Solar time schedules incorporate sunrise and sunset times, often used to switch outdoor lighting. Solar time scheduling requires that the location of the building be set. This is accomplished using the building's geographic location via either latitude and longitude or by picking the nearest city in a given database giving the approximate location and corresponding solar times.
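A minimal sketch of a solar-time switching rule is shown below. The sunrise and sunset times are assumed to be computed elsewhere from the building's latitude and longitude (for example by an ephemeris library or the controller's firmware), and the 15-minute offset is an illustrative choice rather than a standard value.

```python
from datetime import datetime, timedelta

def outdoor_lights_should_be_on(now, sunrise, sunset, offset=timedelta(minutes=15)):
    """Solar-time rule: switch on shortly before sunset, off shortly after sunrise."""
    return now >= sunset - offset or now <= sunrise + offset

# Example: with sunset at 22:00, the lights are already on at 21:50.
print(outdoor_lights_should_be_on(
    datetime(2024, 6, 21, 21, 50),
    sunrise=datetime(2024, 6, 21, 4, 30),
    sunset=datetime(2024, 6, 21, 22, 0),
))  # True
```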
Space occupancy is primarily determined with occupancy sensors . Smart lighting that utilizes occupancy sensors can work in unison with other lighting connected to the same network to adjust lighting per various conditions. [ 6 ] The table below shows potential electricity savings from using occupancy sensors to control lighting in various types of spaces. [ 7 ]
The advantages of ultrasonic devices are that they are sensitive to all types of motion and generally there are zero coverage gaps, since they can detect movements not within the line of sight. [ 8 ] [ 7 ]
Electric lighting energy use can be adjusted by automatically dimming and/or switching electric lights in response to the level of available daylight . Reducing the amount of electric lighting used when daylight is available is known as daylight harvesting .
In response to daylighting technology, daylight-linked automated response systems have been developed to further reduce energy consumption. [ 9 ] [ 10 ] These technologies are helpful, but they do have their downfalls. Many times, rapid and frequent switching of the lights on and off can occur, particularly during unstable weather conditions or when daylight levels are changing around the switching illuminance. Not only does this disturb occupants, it can also reduce lamp life. A variation of this technology is the 'differential switching' or 'dead-band' photoelectric control, which uses separate switching illuminances (an upper and a lower threshold) to reduce disturbance to occupants. [ 11 ] [ 12 ]
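The dead-band idea can be sketched as simple hysteresis: the electric lights switch off only above an upper daylight threshold and back on only below a lower one, so small fluctuations around a single set point do not cause rapid toggling. The threshold values here are illustrative assumptions, not standard figures.

```python
class DeadBandDaylightSwitch:
    """Differential (dead-band) photoelectric switching with separate on/off thresholds."""

    def __init__(self, off_above_lux=600, on_below_lux=300):
        self.off_above = off_above_lux  # daylight level above which electric light switches off
        self.on_below = on_below_lux    # daylight level below which it switches back on
        self.lights_on = True

    def update(self, daylight_lux):
        if self.lights_on and daylight_lux > self.off_above:
            self.lights_on = False
        elif not self.lights_on and daylight_lux < self.on_below:
            self.lights_on = True
        return self.lights_on
```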
Alarm conditions typically include inputs from other building systems such as the fire alarm or HVAC system, which may trigger an emergency 'all lights on' or 'all lights flashing' command, for example.
Program logic can tie all of the above elements together using constructs such as if-then-else statements and logical operators . Digital Addressable Lighting Interface (DALI) is specified in the IEC 62386 standard.
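A sketch of how such program logic might combine the inputs described above is given below: a zone is lit only when it is occupied, within its scheduled hours, and short of daylight, while an alarm condition overrides everything. The schedule and threshold values are illustrative assumptions, not taken from any particular system or standard.

```python
from datetime import datetime, time

def zone_lights_on(now, occupied, daylight_lux, alarm_active=False,
                   schedule=(time(7, 0), time(19, 0)), daylight_threshold_lux=500):
    # Alarm conditions override everything else (an "all lights on" command).
    if alarm_active:
        return True
    in_schedule = schedule[0] <= now.time() <= schedule[1]
    enough_daylight = daylight_lux >= daylight_threshold_lux
    # Lights on only when the zone is occupied, within scheduled hours,
    # and there is not enough daylight to work by.
    return occupied and in_schedule and not enough_daylight

print(zone_lights_on(datetime(2024, 1, 15, 9, 30), occupied=True, daylight_lux=120))  # True
```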
The use of automatic light dimming is an aspect of smart lighting that serves to reduce energy consumption. [ 13 ] Manual light dimming also has the same effect of reducing energy use.
In the paper "Energy savings due to occupancy sensors and personal controls: a pilot field study", Galasiu, A.D. and Newsham, G.R have confirmed that automatic lighting systems including occupancy sensors and individual (personal) controls are suitable for open-plan office environments and can save a significant amount of energy (about 32%) when compared to a conventional lighting system, even when the installed lighting power density of the automatic lighting system is ~50% higher than that of the conventional system. [ 14 ]
A complete sensor consists of a motion detector , an electronic control unit, and a controllable switch/relay. The detector senses motion and determines whether there are occupants in the space. [ 9 ] It also has a timer that signals the electronic control unit after a set period of inactivity. The control unit uses this signal to activate the switch/relay to turn equipment on or off. For lighting applications, there are three main sensor types: passive infrared , ultrasonic , [ 8 ] and hybrid.
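The inactivity timer can be sketched as follows: the logic records the time of the last detected motion and reports the space as occupied until a hold period passes with no further motion. The 10-minute hold time is an illustrative setting, not a standard value.

```python
import time

class OccupancySensorLogic:
    """Inactivity timer: report 'occupied' until a hold period elapses with no motion."""

    def __init__(self, hold_seconds=600):
        self.hold_seconds = hold_seconds
        self.last_motion = time.monotonic()

    def motion_detected(self):
        # Called by the detector whenever motion is seen.
        self.last_motion = time.monotonic()

    def occupied(self):
        # Polled by the control unit to decide whether to keep the lights on.
        return (time.monotonic() - self.last_motion) < self.hold_seconds
```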
Sensing technologies include motion-detecting (microwave), heat-sensing (infrared) and sound-sensing devices, as well as optical cameras, infrared motion detectors, optical trip wires, door contact sensors, thermal cameras, micro radars and daylight sensors. [ 15 ]
In the 1980s there was a strong requirement to make commercial lighting more controllable so that it could become more energy efficient. Initially this was done with analog control, allowing fluorescent ballasts and dimmers to be controlled from a central source. This was a step in the right direction, but cabling was complicated and therefore not cost effective.
Tridonic was an early company to go digital with its broadcast protocol, DSI , in 1991. DSI was a basic protocol, as it transmitted one control value to change the brightness of all the fixtures attached to the line. What made this protocol more attractive, and able to compete with the established analog option, was the simple wiring.
There are two types of lighting control systems: analog and digital.
Examples of analog lighting control systems are:
In production lighting, the 0-10 V system was replaced by analog multiplexed systems such as D54 and AMX192, which themselves have been almost completely replaced by DMX512 . For dimmable fluorescent lamps (where the system operates instead at 1-10 V, with 1 V as minimum and 0 V as off), it is being replaced by DSI, which itself is in the process of being replaced by DALI.
Examples of digital lighting control systems are:
These are all wired lighting control systems.
There are also wireless lighting control systems based on standard protocols such as MIDI , ZigBee , Bluetooth Mesh , and others. The standard for the digital addressable lighting interface, mostly in professional and commercial deployments, is IEC 62386-104 . This standard specifies the underlying technologies, which in wireless deployments are VEmesh , operating in the industrial sub-1 GHz frequency band, and Bluetooth Mesh , operating in the 2.4 GHz frequency band.
Other notable protocols, standards and systems include:
A newer type of lighting control uses a Bluetooth connection directly to the lighting system. It was introduced by Philips Hue, a brand of Signify (formerly known as Philips Lighting ). This system needs a smartphone or tablet on which the user installs the dedicated Philips Hue Bluetooth app. The Bluetooth bulbs do not need a Philips Hue bridge to function, and no Wi-Fi or data connection is required to control the lights.
Smart lighting systems can be controlled using the internet to adjust lighting brightness and schedules. [ 6 ] One technology involves a smart lighting network that assigns IP addresses to light bulbs. [ 16 ]
Schubert predicts that revolutionary lighting systems will provide an entirely new means of sensing and broadcasting information. By blinking far too rapidly for any human to notice, the light will pick up data from sensors and carry it from room to room, reporting such information as the location of every person within a high-security building. A major focus of the Future Chips Constellation is smart lighting, a revolutionary new field in photonics based on efficient light sources that are fully tunable in terms of such factors as spectral content, emission pattern, polarization, color temperature, and intensity. Schubert, who leads the group, says smart lighting will not only offer better, more efficient illumination; it will provide "totally new functionalities."
Architectural lighting control systems can integrate with a theater 's on-off and dimmer controls, and are often used for house lights and stage lighting , and can include worklights , rehearsal lighting, and lobby lighting. Control stations can be placed in several locations in the building and range in complexity from single buttons that bring up preset options ("looks"), to in-wall or desktop LCD touchscreen consoles. Much of the technology is related to residential and commercial lighting control systems.
The benefit of architectural lighting control systems in the theater is the ability for theater staff to turn worklights and house lights on and off without having to use a lighting control console . Alternately, the light designer can control these same lights with light cues from the lighting control console so that, for instance, the transition from houselights being up before a show starts and the first light cue of the show is controlled by one system.
The function of a traditional emergency lighting system is to supply a minimum level of illumination when a line voltage failure occurs. Therefore, emergency lighting systems have to store energy in a battery module to supply the lamps in case of failure. In this kind of lighting system, internal faults, for example battery overcharging, damaged lamps and starting circuit failures, must be detected and repaired by specialist workers.
For this reason, the smart lighting prototype can check its functional state every fourteen days and write the result to an LED display. With this feature the units can test themselves, checking their functional state and reporting internal faults. The maintenance cost can also be decreased. [ 17 ]
The main idea is to replace the simple line voltage sensing block of traditional systems with a more complex one based on a microcontroller. This new circuit assumes, on one side, the functions of line voltage sensing and inverter activation and, on the other side, the supervision of the whole system: lamp and battery state, battery charging, external communications, correct operation of the power stage, etc.
The system has great flexibility; for instance, several devices could communicate with a master computer, which would know the state of each device at all times.
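A minimal sketch of the supervision loop such a microcontroller might run is shown below. The fourteen-day check interval follows the prototype described above, while the individual check routines are placeholders standing in for hardware-specific code (line voltage sensing, battery and lamp checks, inverter switching, the LED display).

```python
import time

CHECK_INTERVAL_S = 14 * 24 * 3600  # self-test every fourteen days, as in the prototype

# Placeholder stand-ins for hardware-specific routines.
def mains_present(): return True
def battery_ok(): return True
def lamp_ok(): return True
def set_inverter(on): pass
def show_status(message): print(message)

def supervision_loop():
    next_self_test = time.monotonic() + CHECK_INTERVAL_S
    while True:
        # Core emergency function: run the lamps from the battery when the mains fails.
        set_inverter(on=not mains_present())
        # Periodic self-test, with the result written to the LED display.
        if time.monotonic() >= next_self_test:
            show_status("OK" if battery_ok() and lamp_ok() else "FAULT")
            next_self_test += CHECK_INTERVAL_S
        time.sleep(1)
```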
A new emergency lighting system based on an intelligent module has been developed. The microcontroller, as a control and supervision device, guarantees increased installation security and maintenance cost savings.
Another important advantage is the cost saving in mass production, especially when a microcontroller with the program in ROM memory is used.
The advances achieved in photonics are already transforming society just as electronics revolutionized the world in recent decades and it will continue to contribute more in the future. From the statistics, North America's optoelectronics market grew to more than $20 billion in 2003. The LED ( light-emitting diode ) market is expected to reach $5 billion in 2007, and the solid-state lighting market is predicted to be $50 billion in 15–20 years, as stated by E. Fred Schubert, [ 18 ] Wellfleet Senior Distinguished Professor of the Future Chips Constellation at Rensselaer. | https://en.wikipedia.org/wiki/Lighting_control_system |
Lighting ratio in photography refers to the comparison of key light (the main source of light from which shadows fall) to the total fill light (the light that fills in the shadow areas). [ 1 ] The higher the lighting ratio, the higher the contrast of the image; the lower the ratio, the lower the contrast. The lighting ratio is the ratio of the light levels on the brightest-lit to the least-lit parts of the subject; the brightest-lit areas are lit by both key (K) and fill (F). The American Society of Cinematographers (ASC) defines lighting ratio as (key + fill):fill, or (key + Σfill):Σfill, where Σfill is the sum of all fill lights.
Light can be measured in footcandles . A key light of 200 footcandles and a fill light of 100 footcandles have a 3:1 ratio (a ratio of three to one): (200 + 100):100 = 300:100 = 3:1.
A key light of 800 footcandles and a fill light of 200 footcandles have a ratio of 5:1 according to the lighting ratio formula: (800 + 200):200 = 1000:200 = 5:1.
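A short sketch of this arithmetic, together with the f-stop relation described in the next paragraph, is given below; it simply encodes the ASC formula and the doubling of light per stop, with no assumptions beyond those two statements.

```python
def key_to_fill_factor(stop_difference):
    # Each f-stop doubles the light: 0 stops -> 1x, 2 stops -> 4x, 3 stops -> 8x.
    return 2 ** stop_difference

def lighting_ratio(key, fill):
    # ASC definition: (key + fill) : fill, normalised so the second term is 1.
    return (key + fill) / fill

print(key_to_fill_factor(2))        # 4
print(lighting_ratio(200, 100))     # 3.0 -> a 3:1 lighting ratio
print(lighting_ratio(800, 200))     # 5.0 -> a 5:1 lighting ratio
```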
The ratio can be determined in relation to F stops since each increase in f-stop is equal to double the amount of light: 2 to the power of the difference in f stops is equal to the first factor in the ratio. For example, a difference in two f-stops between key and fill is 2 squared, or 4:1 ratio. A difference in 3 stops is 2 cubed, or an 8:1 ratio. No difference is equal to 2 to the power of 0, for a 1:1 ratio. | https://en.wikipedia.org/wiki/Lighting_ratio |
Lightricks , founded in January 2013, is a company that develops video and image editing mobile apps and software, known particularly for its selfie-editing app, Facetune . [ 1 ] [ 2 ] [ 3 ] Headquartered in Jerusalem, the firm has approximately 600 employees. [ 4 ] [ 5 ] As of 2023, its apps had been downloaded over 730 million times. As of 2021, Lightricks was valued at $1.8 billion. [ 6 ] As of 2025, Lightricks has over 6.6 million monthly paying users, over 50 million monthly users, and its apps have been downloaded over 730 million times. [ 7 ]
The company was created in 2013 by 5 founders, Ph.D. students Zeev Farbman , Nir Pochter, Yaron Inger, Amit Goldstein, and former Supreme Court of Israel clerk Itai Tsiddon who were all studying at the Hebrew University of Jerusalem . [ 2 ] [ 8 ] Lightricks began life as a bootstrapped company, which was the subject of a case study from the Harvard Business School "Bootstrapping at Lightricks". [ 9 ]
In 2015 the company received its first funding round of $10 million, led by Viola Ventures. [ 10 ] [ 11 ] It received its second round of funding of $60 million in November 2018, led by Insight Venture Partners and with participation from Israeli VC company ClalTech. [ 8 ] In July 2019, it secured $135 million in series C funding led by Goldman Sachs, with participation from Insight Partners and ClalTech; this was reported to imply a $1 billion valuation. [ 12 ] [ 13 ] [ 14 ] This put the total raised to date at $205 million. [ 15 ] [ 16 ] Lightricks ended 2018 with over $50 million in revenue. [ 2 ] In September 2021, the company received $100 million in primary and $30 million in secondary Series D funding. This valued the company at $1.8 billion. [ 17 ] In 2024, Lightricks introduced LTX Studio, a platform for creating and editing videos using AI. [ 18 ]
After beginning in the Hebrew University campus, the company outgrew its space a number of times. It remains based in Jerusalem, Israel , with offices in Haifa , London and Chicago ; it has a total of approximately 600 employees. [ 19 ] [ 4 ] [ 20 ]
Once Apple Inc allowed it, Lightricks was one of the first app companies to offer subscriptions. Most of its apps are now published under a freemium model. [ 21 ] [ 22 ] [ 23 ] | https://en.wikipedia.org/wiki/Lightricks |
Lightweight Extensible Authentication Protocol ( LEAP ) is a proprietary wireless LAN authentication method developed by Cisco Systems . Important features of LEAP are dynamic WEP keys and mutual authentication (between a wireless client and a RADIUS server). LEAP allows for clients to re-authenticate frequently; upon each successful authentication, the clients acquire a new WEP key (with the hope that the WEP keys don't live long enough to be cracked). LEAP may be configured to use TKIP instead of dynamic WEP.
Some 3rd party vendors also support LEAP through the Cisco Compatible Extensions Program. [ 1 ]
An unofficial description of the protocol is available. [ 2 ]
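The security discussion in the next paragraph notes that LEAP's MS-CHAP-based exchange lacks a salt. The generic sketch below (which is not the MS-CHAP or LEAP algorithm itself, only an illustration) shows why a salt matters: without one, identical passwords always produce identical digests and can be attacked offline with precomputed tables, whereas a random salt forces an attacker to work on each captured value separately.

```python
import hashlib, os

def unsalted_digest(password):
    # Identical passwords always give identical digests, so captured values can be
    # attacked offline with precomputed dictionaries or rainbow tables.
    return hashlib.sha256(password.encode()).hexdigest()

def salted_digest(password, salt=None):
    # A random salt makes each digest unique even for reused passwords, forcing an
    # attacker to brute-force every captured exchange individually.
    salt = salt or os.urandom(16)
    return salt.hex(), hashlib.sha256(salt + password.encode()).hexdigest()
```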
Cisco LEAP, similar to WEP , has had well-known security weaknesses since 2003 involving offline password cracking . [ 3 ] LEAP uses a modified version of MS-CHAP , an authentication protocol in which user credentials are not strongly protected. Stronger authentication protocols employ a salt to strengthen the credentials against eavesdropping during the authentication process. Cisco's response to the weaknesses of LEAP suggests that network administrators either force users to have stronger, more complicated passwords or move to another authentication protocol also developed by Cisco, EAP-FAST , to ensure security. [ 4 ] Automated tools like ASLEAP demonstrate the simplicity of getting unauthorized access in networks protected by LEAP implementations. [ 5 ] | https://en.wikipedia.org/wiki/Lightweight_Extensible_Authentication_Protocol |
A lightweight markup language ( LML ), also termed a simple or humane markup language , is a markup language with simple, unobtrusive syntax. It is designed to be easy to write using any generic text editor and easy to read in its raw form. Lightweight markup languages are used in applications where it may be necessary to read the raw document as well as the final rendered output.
For instance, a person downloading a software library might prefer to read the documentation in a text editor rather than a web browser. Another application for such languages is to provide for data entry in web-based publishing, such as blogs and wikis , where the input interface is a simple text box . The server software then converts the input into a common document markup language like HTML .
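As a sketch of that server-side conversion step, the widely used third-party Python "Markdown" package (assumed to be installed via pip) turns lightweight markup entered in a text box into HTML:

```python
import markdown  # third-party package, assumed installed: pip install markdown

user_input = "A **bold** claim with *emphasis* and a [link](https://example.com)."
html = markdown.markdown(user_input)
print(html)
# <p>A <strong>bold</strong> claim with <em>emphasis</em> and a <a href="https://example.com">link</a>.</p>
```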
Lightweight markup languages were originally used on text-only displays which could not display characters in italics or bold , so informal methods to convey this information had to be developed. This formatting choice was naturally carried forth to plain-text email communications. Console browsers may also resort to similar display conventions.
In 1986 international standard SGML provided facilities to define and parse lightweight markup languages using grammars and tag implication. The 1998 W3C XML is a profile of SGML that omits these facilities. However, no SGML document type definition (DTD) for any of the languages listed below is known.
Lightweight markup languages can be categorized by their tag types. Like HTML ( <b> bold </b> ), some languages use named elements that share a common format for start and end tags (e.g. BBCode [b] bold [/b] ), whereas proper lightweight markup languages are restricted to ASCII -only punctuation marks and other non-letter symbols for tags, but some also mix both styles (e.g. Textile bq. ) or allow embedded HTML (e.g. Markdown ), possibly extended with custom elements (e.g. MediaWiki <ref>'''source'''</ref> ).
Most languages distinguish between markup for lines or blocks and for shorter spans of texts, but some only support inline markup.
Some markup languages are tailored for a specific purpose, such as documenting computer code (e.g. POD , reST , RD ) or being converted to a certain output format (usually HTML or LaTeX ) and nothing else; others are more general in application. General-purpose languages can further be distinguished by whether they are oriented towards textual presentation or towards data serialization.
Presentation oriented languages include AsciiDoc , atx , BBCode , Creole , Crossmark, Djot, Epytext, Haml , JsonML , MakeDoc , Markdown , Org-mode , POD (Perl) , reST (Python) , RD (Ruby) , Setext , SiSU , SPIP , Xupl, Texy! , Textile, txt2tags , UDO and Wikitext .
Data serialization oriented languages include Curl ( homoiconic , but also reads JSON; every object serializes), JSON , and YAML .
Markdown's own syntax does not support class attributes or id attributes; however, since Markdown supports the inclusion of native HTML code, these features can be implemented using direct HTML. (Some extensions may support these features.)
txt2tags' own syntax does not support class attributes or id attributes; however, since txt2tags supports inclusion of native HTML code in tagged areas, these features can be implemented using direct HTML when saving to an HTML target. [ 26 ]
DokuWiki does not support HTML import natively, but HTML to DokuWiki converters and importers exist and are mentioned in the official documentation. [ 27 ] DokuWiki does not support class or id attributes, but can be set up to support HTML code, which does support both features. HTML code support was built-in before release 2023-04-04. [ 28 ] In later versions, HTML code support can be achieved through plugins, though it is discouraged. [ 28 ]
Although usually documented as yielding italic and bold text, most lightweight markup processors output semantic HTML elements em and strong instead. Monospaced text may either result in semantic code or presentational tt elements. Few languages make a distinction, e.g. Textile, or allow the user to configure the output easily, e.g. Texy.
LMLs sometimes differ for multi-word markup where some require the markup characters to replace the inter-word spaces ( infix ).
Some languages require a single character as prefix and suffix, other need doubled or even tripled ones or support both with slightly different meaning, e.g. different levels of emphasis.
Gemtext does not have any inline formatting, monospaced text (called preformatted text in the context of Gemtext) must have the opening and closing ``` on their own lines.
In HTML, text is emphasized with the <em> and <strong> element types, whereas <i> and <b> traditionally mark up text to be italicized or bold-faced, respectively.
Microsoft Word and Outlook, and accordingly other word processors and mail clients that strive for a similar user experience, support the basic convention of using asterisks for boldface and underscores for italic style. While Word removes the characters, Outlook retains them.
In HTML, removed or deleted and inserted text is marked up with the <del> and <ins> element types, respectively. However, legacy element types <s> or <strike> and <u> are still also available for stricken and underlined spans of text.
AsciiDoc, ATX, Creole, MediaWiki, PmWiki, reST, Slack, Textile, Texy! and WhatsApp do not support dedicated markup for underlining text. Textile does, however, support insertion via the +inserted+ syntax.
ATX, Creole, MediaWiki, PmWiki, reST, Setext and Texy! do not support dedicated markup for striking through text.
DokuWiki supports HTML-like <del>stricken</del> syntax, even with embedded HTML disabled.
AsciiDoc supports stricken text through a built-in text span [ c ] prefix: [.line-through]#stricken# .
Quoted computer code is traditionally presented in typewriter-like fonts where each character occupies the same fixed width. HTML offers the semantic <code> and the deprecated, presentational <tt> element types for this task.
Mediawiki and Gemtext do not provide lightweight markup for inline code spans.
Headings are usually available in up to six levels, but the top one is often reserved to contain the same as the document title, which may be set externally. Some documentation may associate levels with divisional types, e.g. part, chapter, section, article or paragraph. This article uses 1 as the top level, but index of heading levels may begin at 1 or 0 in official documentation.
Most LMLs follow one of two styles for headings, either Setext -like underlines or atx -like [ 50 ] line markers, or they support both.
The first style uses underlines, i.e. repeated characters (e.g. equals = , hyphen - or tilde ~ , usually at least two or four times) in the line below the heading text.
Headings may optionally be overlined in reStructuredText , in addition to being underlined.
The second style is based on repeated markers (e.g. hash # , equals = or asterisk * ) at the start of the heading itself, where the number of repetitions indicates the (sometimes inverse) heading level. Most languages also support the reduplication of the markers at the end of the line, but whereas some make them mandatory, others do not even expect their numbers to match.
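Both heading styles just described can be seen side by side in the sketch below, again using the Python "Markdown" package (assumed installed); in Markdown's dialect the underline style yields only two levels, while the hash style goes deeper.

```python
import markdown  # assumed installed: pip install markdown

setext = "Top level\n=========\n\nSecond level\n------------\n"
atx = "# Top level\n\n## Second level\n\n### Third level\n"

print(markdown.markdown(setext))  # <h1>Top level</h1> ... <h2>Second level</h2>
print(markdown.markdown(atx))     # <h1>...</h1> <h2>...</h2> <h3>...</h3>
```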
Org-mode supports indentation as a means of indicating the level.
BBCode does not support section headings at all.
POD and Textile choose the HTML convention of numbered heading levels instead.
Microsoft Word supports auto-formatting paragraphs as headings if they do not contain more than a handful of words, no period at the end and the user hits the enter key twice. For lower levels, the user may press the tabulator key the according number of times before entering the text, i.e. one through eight tabs for heading levels two through nine.
Hyperlinks can either be added inline, which may clutter the code because of long URLs, or with named alias or numbered id references to lines containing nothing but the address and related attributes and often may be located anywhere in the document.
Most languages allow the author to specify text Text to be displayed instead of the plain address http://example.com and some also provide methods to set a different link title Title which may contain more information about the destination.
LMLs that are tailored for special setups, e.g. wikis or code documentation, may automatically generate named anchors (for headings, functions etc.) inside the document, link to related pages (possibly in a different namespace) or provide a textual search for linked keywords.
Most languages employ (double) square or angular brackets to surround links, but hardly any two languages are completely compatible. Many can automatically recognize and parse absolute URLs inside the text without further markup.
Gemtext and setext links must be on a line by themselves, they cannot be used inline.
Org-mode's normal link syntax does a text search of the file. You can also put in dedicated targets with <<id>> .
HTML requires an explicit element for the list, specifying its type, and one for each list item, but most lightweight markup languages need only different line prefixes for the bullet points or enumerated items. Some languages rely on indentation for nested lists, others use repeated parent list markers.
Microsoft Word automatically converts paragraphs that start with an asterisk * , hyphen-minus - or greater-than bracket > followed by a space or horizontal tabulator as bullet list items. It will also start an enumerated list for the digit 1 and the case-insensitive letters a (for alphabetic lists) or i (for roman numerals), if they are followed by a period . , a closing round parenthesis ) , a greater-than sign > or a hyphen-minus - and a space or tab; in case of the round parenthesis an optional opening one ( before the list marker is also supported.
Languages differ on whether they support optional or mandatory digits in numbered list items, which kinds of enumerators they understand (e.g. decimal digit 1 , roman numerals i or I , alphabetic letters a or A ) and whether they support to keep explicit values in the output format. Some Markdown dialects, for instance, will respect a start value other than 1, but ignore any other explicit value.
Slack assists the user in entering enumerated and bullet lists, but does not actually format them as such, i.e. it just includes a leading digit followed by a period and a space or a bullet character • in front of a line.
The following lightweight markup languages, while similar to some of those already mentioned, have not yet been added to the comparison tables in this article: | https://en.wikipedia.org/wiki/Lightweight_markup_language |
In computing , lightweight software, [ 1 ] also called a lightweight program or lightweight application , is a computer program that is designed to have a small memory footprint (RAM usage) and low CPU usage, that is, an overall low usage of system resources [ citation needed ] . To achieve this, the software should avoid software bloat and code bloat and aim for good algorithmic efficiency .
This software article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lightweight_software |
Lightweighting is a concept in the auto industry about building cars and trucks that are less heavy as a way to achieve better fuel efficiency , battery range , acceleration, braking and handling. [ 1 ] [ 2 ] In addition, lighter vehicles can tow and haul larger loads because the engine is not carrying unnecessary weight. [ 3 ] Excessive vehicle weight is also a contributing factor to particulate emissions from tyre and brake wear. [ 4 ] [ 5 ]
Carmakers make body structure parts from aluminium sheet, aluminium extrusions, press hardening steel and carbon fibers , windshields from plastic, and bumpers out of aluminum foam, as ways to lessen vehicle load. [ 6 ] Replacing car parts with lighter materials does not lessen overall safety for drivers, according to one view, since many grades of aluminium and plastics have a high strength-to-weight ratio, and aluminum has high energy absorption properties for its weight. [ 6 ]
The search to replace car parts with lighter ones is not limited to any one type of part; according to a spokesman for Ford Motor Company , engineers strive for lightweighting "anywhere we can." [ 7 ] Using lightweight materials such as plastics , high strength steels and aluminium can mean less strain on the engine and better gas mileage as well as improved handling. [ 8 ] One material sometimes used to reduce weight for structures that can accept the cost premium is carbon fiber . [ 9 ] The auto industry has used the term for many years, as the effort to keep making cars lighter is ongoing. [ 2 ]
Another common material used for lightweighting is aluminum . [ 10 ] The use of aluminum has grown continuously, not only to meet CAFE standards but also to improve automotive performance. A lightweighting magazine finds: "Even though aluminum is light, it does not sacrifice strength. Aluminum body structure is equal in strength to steel and can absorb twice as much crash-induced energy." [ 11 ] The use of aluminium for lightweighting can be limited for the higher strength grades by their low formability, and in response to this forming challenge new techniques such as roll forming and hot forming ( Hot Form Quench ) have been introduced in recent years.
Many other materials are used to meet lightweighting goals. [ 12 ] Cost of lightweighting, and increasingly sustainability of materials, is becoming an issue in solution selection - with the viable cost increase of a part per kilogram saved being between $5 and $15, [ 13 ] depending on the price point and performance needs of the vehicle. | https://en.wikipedia.org/wiki/Lightweighting |
Lightwood's law is the principle that, in medicine , bacterial infections will tend to localise while viral infections will tend to spread. [ 1 ] This is based on the observation that while bacterial sepsis tends, despite affecting the whole body, to have a clear site of origin or 'focus', the opposite may be true of viral infections. [ 2 ] There may be multiple sites across the body which are affected including dermatological manifestations, respiratory symptoms and gastrointestinal symptoms. [ citation needed ] It is named for Reginald Cyril Lightwood .
This principle is by no means infallible and in clinical practice a variety of diagnostic tests are used to distinguish between bacterial and viral infections. [ citation needed ]
This infectious disease article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lightwood's_law |
A lignocolous lichen is a lichen that grows on wood that has the bark stripped from it. [ 1 ] This contrasts with a corticolous lichen that grows on the bark, [ 2 ] and saxicolous lichens that grow on rock. [ 3 ]
This article about lichens or lichenology is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lignicolous_lichen |
Lignin is a class of complex organic polymers that form key structural materials in the support tissues of most plants. [ 1 ] Lignins are particularly important in the formation of cell walls , especially in wood and bark , because they lend rigidity and do not rot easily. Chemically, lignins are polymers made by cross-linking phenolic precursors. [ 2 ]
Lignin was first mentioned in 1813 by the Swiss botanist A. P. de Candolle , who described it as a fibrous, tasteless material, insoluble in water and alcohol but soluble in weak alkaline solutions, and which can be precipitated from solution using acid. [ 3 ] He named the substance "lignine", which is derived from the Latin word lignum , [ 4 ] meaning wood. It is one of the most abundant organic polymers on Earth , exceeded only by cellulose and chitin . Lignin constitutes 30% of terrestrial non- fossil organic carbon [ 5 ] on Earth, and 20 to 35% of the dry mass of wood. [ 6 ]
Lignin is present in red algae , which suggests that the common ancestor of plants and red algae may have been pre-adapted to synthesize lignin. This finding also suggests that the original function of lignin may have been structural, as it plays this role in the red alga Calliarthron , where it supports joints between calcified segments. [ 7 ]
The composition of lignin varies from species to species. An example of composition from an aspen [ 8 ] sample is 63.4% carbon, 5.9% hydrogen, 0.7% ash (mineral components), and 30% oxygen (by difference), [ 9 ] corresponding approximately to the formula (C 31 H 34 O 11 ) n .
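A quick arithmetic check, using standard atomic masses (the short script below is only an illustration, not part of the cited analysis), shows that the quoted empirical formula is consistent with the quoted elemental percentages:

```python
# Mass fractions implied by the repeating unit C31H34O11, using standard atomic masses.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}
counts = {"C": 31, "H": 34, "O": 11}

total = sum(ATOMIC_MASS[el] * n for el, n in counts.items())
for el, n in counts.items():
    print(f"{el}: {100 * ATOMIC_MASS[el] * n / total:.1f}%")
# C: 63.9%, H: 5.9%, O: 30.2% - close to the measured 63.4% C, 5.9% H and ~30% O.
```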
Lignin is a collection of highly heterogeneous polymers derived from a handful of precursor lignols. Heterogeneity arises from the diversity and degree of crosslinking between these lignols. The lignols that crosslink are of three main types, all derived from phenylpropane: coniferyl alcohol (3-methoxy-4-hydroxyphenylpropane; its radical, G, is sometimes called guaiacyl), sinapyl alcohol (3,5-dimethoxy-4-hydroxyphenylpropane; its radical, S, is sometimes called syringyl), and paracoumaryl alcohol (4-hydroxyphenylpropane; its radical, H, is sometimes called 4-hydroxyphenyl). [ citation needed ]
The relative amounts of the precursor "monomers" (lignols or monolignols) vary according to the plant source. [ 5 ] Lignins are typically classified according to their syringyl/guaiacyl (S/G) ratio. Lignin from gymnosperms is derived from the coniferyl alcohol , which gives rise to G upon pyrolysis. In angiosperms some of the coniferyl alcohol is converted to S. Thus, lignin in angiosperms has both G and S components. [ 10 ] [ 11 ]
Lignin's molecular masses exceed 10,000 u . It is hydrophobic as it is rich in aromatic subunits. The degree of polymerisation is difficult to measure, since the material is heterogeneous. Different types of lignin have been described depending on the means of isolation. [ 12 ]
Many grasses have mostly G, while some palms have mainly S. [ 13 ] All lignins contain small amounts of incomplete or modified monolignols, and other monomers are prominent in non-woody plants. [ 14 ]
Lignin fills the spaces in the cell wall between cellulose , hemicellulose , and pectin components, especially in vascular and support tissues: xylem tracheids , vessel elements and sclereid cells. [ citation needed ]
Lignin plays a crucial part in conducting water and aqueous nutrients in plant stems. The polysaccharide components of plant cell walls are highly hydrophilic and thus permeable to water, whereas lignin is more hydrophobic . The crosslinking of polysaccharides by lignin is an obstacle for water absorption to the cell wall. Thus, lignin makes it possible for the plant's vascular tissue to conduct water efficiently. [ 15 ] Lignin is present in all vascular plants , but not in bryophytes , supporting the idea that the original function of lignin was restricted to water transport.
It is covalently linked to hemicellulose and therefore cross-links different plant polysaccharides , conferring mechanical strength to the cell wall and by extension the plant as a whole. [ 16 ] Its most commonly noted function is the support through strengthening of wood (mainly composed of xylem cells and lignified sclerenchyma fibres) in vascular plants. [ 17 ] [ 18 ] [ 19 ]
Finally, lignin also confers disease resistance by accumulating at the site of pathogen infiltration, making the plant cell less accessible to cell wall degradation. [ 20 ]
Global commercial production of lignin is a consequence of papermaking. In 1988, more than 220 million tons of paper were produced worldwide. [ 21 ] Much of this paper was delignified; lignin comprises about 1/3 of the mass of lignocellulose, the precursor to paper. Lignin is an impediment to papermaking as it is colored, it yellows in air, and its presence weakens the paper. Once separated from the cellulose, it is burned as fuel. Only a fraction is used in a wide range of low volume applications where the form but not the quality is important. [ 22 ]
Mechanical, or high-yield pulp , which is used to make newsprint , still contains most of the lignin originally present in the wood. This lignin is responsible for newsprint's yellowing with age. [ 4 ] High quality paper requires the removal of lignin from the pulp. These delignification processes are core technologies of the papermaking industry as well as the source of significant environmental concerns. [ citation needed ]
In sulfite pulping , lignin is removed from wood pulp as lignosulfonates , for which many applications have been proposed. [ 23 ] They are used as dispersants , humectants , emulsion stabilizers , and sequestrants ( water treatment ). [ 24 ] Lignosulfonate was also the first family of water reducers or superplasticizers to be added in the 1930s as an admixture to fresh concrete in order to decrease the water-to-cement ( w/c ) ratio, the main parameter controlling the concrete porosity , and thus its mechanical strength , its diffusivity and its hydraulic conductivity , all parameters essential for its durability. It also has application as an environmentally sustainable dust suppression agent for roads. Lignin can also be used, together with cellulose , to make biodegradable plastic as an alternative to hydrocarbon-derived plastics, provided lignin extraction is achieved through a more environmentally viable process than generic plastic manufacturing. [ 25 ]
Lignin removed by the kraft process is usually burned for its fuel value, providing energy to power the paper mill. Two commercial processes exist to remove lignin from black liquor for higher value uses: LignoBoost (Sweden) and LignoForce (Canada). Higher quality lignin presents the potential to become a renewable source of aromatic compounds for the chemical industry, with an addressable market of more than $130bn. [ 26 ]
Given that it is the most prevalent biopolymer after cellulose , lignin has been investigated as a feedstock for biofuel production and can become a crucial plant extract in the development of a new class of biofuels. [ 27 ] [ 28 ]
Lignin biosynthesis begins in the cytosol with the synthesis of glycosylated monolignols from the amino acid phenylalanine . These first reactions are shared with the phenylpropanoid pathway. The attached glucose renders them water-soluble and less toxic . Once transported through the cell membrane to the apoplast , the glucose is removed, and the polymerisation commences. [ 29 ] Much about its anabolism is not understood even after more than a century of study. [ 5 ]
The polymerisation step, which is a radical-radical coupling, is catalysed by oxidative enzymes . Both peroxidase and laccase enzymes are present in the plant cell walls , and it is not known whether one or both of these groups participates in the polymerisation. Low molecular weight oxidants might also be involved. The oxidative enzyme catalyses the formation of monolignol radicals . These radicals are often said to undergo uncatalyzed coupling to form the lignin polymer . [ 30 ] An alternative theory invokes an unspecified biological control. [ 1 ]
In contrast to other bio-polymers (e.g. proteins, DNA, and even cellulose), lignin resists degradation. It is immune to both acid- and base-catalyzed hydrolysis. The degradability varies with species and plant tissue type. For example, syringyl (S) lignin is more susceptible to degradation by fungal decay as it has fewer aryl-aryl bonds and a lower redox potential than guaiacyl units. [ 31 ] [ 32 ] Because it is cross-linked with the other cell wall components, lignin minimizes the accessibility of cellulose and hemicellulose to microbial enzymes, leading to a reduced digestibility of biomass. [ 15 ]
Some ligninolytic enzymes include heme peroxidases such as lignin peroxidases , manganese peroxidases , versatile peroxidases , and dye-decolourizing peroxidases as well as copper-based laccases . Lignin peroxidases oxidize non-phenolic lignin, whereas manganese peroxidases only oxidize the phenolic structures. Dye-decolorizing peroxidases, or DyPs, exhibit catalytic activity on a wide range of lignin model compounds, but their in vivo substrate is unknown. In general, laccases oxidize phenolic substrates but some fungal laccases have been shown to oxidize non-phenolic substrates in the presence of synthetic redox mediators. [ 33 ] [ 34 ]
Well-studied ligninolytic enzymes are found in Phanerochaete chrysosporium [ 35 ] and other white rot fungi . Some white rot fungi, such as Ceriporiopsis subvermispora , can degrade the lignin in lignocellulose , but others lack this ability. Most fungal lignin degradation involves secreted peroxidases . Many fungal laccases are also secreted, which facilitate degradation of phenolic lignin-derived compounds, although several intracellular fungal laccases have also been described. An important aspect of fungal lignin degradation is the activity of accessory enzymes to produce the H 2 O 2 required for the function of lignin peroxidase and other heme peroxidases . [ 33 ]
Bacteria lack most of the enzymes employed by fungi to degrade lignin, and lignin derivatives (aliphatic acids, furans, and solubilized phenolics) inhibit the growth of bacteria. [ 36 ] Yet, bacterial degradation can be quite extensive, [ 37 ] especially in aquatic systems such as lakes, rivers, and streams, where inputs of terrestrial material (e.g. leaf litter ) can enter waterways. The ligninolytic activity of bacteria has not been studied extensively even though it was first described in 1930. Many bacterial DyPs have been characterized. Bacteria do not express any of the plant-type peroxidases (lignin peroxidase, Mn peroxidase, or versatile peroxidases), but three of the four classes of DyP are only found in bacteria. In contrast to fungi, most bacterial enzymes involved in lignin degradation are intracellular, including two classes of DyP and most bacterial laccases. [ 34 ]
In the environment, lignin can be degraded either biotically via bacteria or abiotically via photochemical alteration, and oftentimes the latter assists the former. [ 38 ] In addition to the presence or absence of light, several environmental factors affect the biodegradability of lignin, including bacterial community composition, mineral associations, and redox state. [ 39 ] [ 40 ]
In shipworms , ingested lignin is digested by " Alteromonas-like sub-group " bacterial symbionts in the typhlosole sub-organ of the cecum . [ 41 ]
Pyrolysis of lignin during the combustion of wood or charcoal production yields a range of products, of which the most characteristic ones are methoxy -substituted phenols . Of those, the most important are guaiacol and syringol and their derivatives. Their presence can be used to trace a smoke source to a wood fire. In cooking , lignin in the form of hardwood is an important source of these two compounds, which impart the characteristic aroma and taste to smoked foods such as barbecue . The main flavor compounds of smoked ham are guaiacol , and its 4-, 5-, and 6-methyl derivatives as well as 2,6-dimethylphenol. These compounds are produced by thermal breakdown of lignin in the wood used in the smokehouse. [ 42 ]
The conventional method for lignin quantitation in the pulp industry is the Klason lignin and acid-soluble lignin test, which follows standardized procedures. The cellulose is digested thermally in the presence of acid. The residue is termed Klason lignin. Acid-soluble lignin (ASL) is quantified from its absorbance measured by Ultraviolet spectroscopy . The carbohydrate composition may also be analyzed from the Klason liquors, although there may be sugar breakdown products (furfural and 5-hydroxymethylfurfural ). [ 43 ]
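As a rough illustration of the arithmetic behind such a determination, the sketch below computes Klason (acid-insoluble) lignin gravimetrically and acid-soluble lignin from a UV absorbance via the Beer-Lambert law. All numerical inputs, including the absorptivity, are hypothetical placeholder values rather than prescribed ones.

```python
# Minimal sketch of Klason + acid-soluble lignin (ASL) arithmetic.
# Every numeric input below is a hypothetical example value.
sample_mass_g = 0.300          # oven-dry mass of extractive-free sample
residue_mass_g = 0.082         # acid-insoluble residue after hydrolysis

absorbance = 0.65              # UV absorbance of the diluted hydrolysate
dilution_factor = 10
path_length_cm = 1.0
absorptivity_L_per_g_cm = 110  # assumed absorptivity for acid-soluble lignin
filtrate_volume_L = 0.087

klason_pct = 100 * residue_mass_g / sample_mass_g

# Beer-Lambert: concentration = A / (absorptivity * path length)
asl_g_per_L = absorbance * dilution_factor / (absorptivity_L_per_g_cm * path_length_cm)
asl_pct = 100 * asl_g_per_L * filtrate_volume_L / sample_mass_g

print(f"Klason lignin:       {klason_pct:.1f} %")
print(f"Acid-soluble lignin: {asl_pct:.1f} %")
print(f"Total lignin:        {klason_pct + asl_pct:.1f} %")
```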
A solution of hydrochloric acid and phloroglucinol is used for the detection of lignin (Wiesner test). A brilliant red color develops, owing to the presence of coniferaldehyde groups in the lignin. [ 44 ]
Thioglycolysis is an analytical technique for lignin quantitation . [ 45 ] Lignin structure can also be studied by computational simulation. [ 46 ]
Thermochemolysis (chemical breakdown of a substance under vacuum and at high temperature) with tetramethylammonium hydroxide (TMAH) or cupric oxide [ 47 ] has also been used to characterize lignins. The ratio of syringyl lignol (S) to vanillyl lignol (V) and cinnamyl lignol (C) to vanillyl lignol (V) is variable based on plant type and can therefore be used to trace plant sources in aquatic systems (woody vs. non-woody and angiosperm vs. gymnosperm). [ 48 ] Ratios of carboxylic acid (Ad) to aldehyde (Al) forms of the lignols (Ad/Al) reveal diagenetic information, with higher ratios indicating a more highly degraded material. [ 31 ] [ 32 ] Increases in the (Ad/Al) value indicate that an oxidative cleavage reaction has occurred on the alkyl lignin side chain, which has been shown to be a step in the decay of wood by many white-rot and some soft rot fungi . [ 31 ] [ 32 ] [ 49 ] [ 50 ] [ 51 ]
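For orientation, the sketch below shows how such S/V, C/V and Ad/Al ratios would be computed from a set of oxidation-product concentrations; the compound list and the numbers are hypothetical example data, not reference values.

```python
# Minimal sketch: lignin phenol ratios from hypothetical oxidation-product data.
# Concentrations are example values in arbitrary but consistent units.
phenols = {
    "vanillin": 0.45, "acetovanillone": 0.12, "vanillic_acid": 0.18,        # vanillyl (V)
    "syringaldehyde": 0.30, "acetosyringone": 0.08, "syringic_acid": 0.10,  # syringyl (S)
    "p_coumaric_acid": 0.05, "ferulic_acid": 0.07,                          # cinnamyl (C)
}

V = phenols["vanillin"] + phenols["acetovanillone"] + phenols["vanillic_acid"]
S = phenols["syringaldehyde"] + phenols["acetosyringone"] + phenols["syringic_acid"]
C = phenols["p_coumaric_acid"] + phenols["ferulic_acid"]

print(f"S/V = {S / V:.2f}  (angiosperm vs. gymnosperm indicator)")
print(f"C/V = {C / V:.2f}  (non-woody vs. woody indicator)")
# Ad/Al for the vanillyl family: acid form over aldehyde form.
print(f"(Ad/Al)v = {phenols['vanillic_acid'] / phenols['vanillin']:.2f}  (degradation state)")
```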
Lignin and its models have been well examined by 1 H and 13 C NMR spectroscopy. Owing to the structural complexity of lignins, the spectra are poorly resolved and quantitation is challenging. [ 52 ] | https://en.wikipedia.org/wiki/Lignin |
The term " lignin characterization " (or " lignin analysis ") refers to a group of activities within lignin research aiming at describing the characteristics of a lignin by determination of its most important properties. [ 1 ] Most often, this term is used to describe the characterization of technical lignins by means of chemical or thermo-chemical analysis. Technical lignins are lignins isolated from various biomasses during various kinds of technical processes such as wood pulping . The most common technical lignins include lignosulphonates (isolated from sulfite pulping), kraft lignins (isolated from kraft pulping black liquor ), organosolv lignins (isolated from organosolv pulping), soda lignins (isolated from soda pulping) and lignin residue after enzymatic treatment of biomass.
Lignins can be characterized by determination of their purity, molecular structure and thermal properties. [ 2 ] [ 3 ] [ 4 ] For certain applications, other properties such as electrical properties or color may be relevant to determine. [ citation needed ]
The dry matter content of lignins is the residue after drying under specified conditions. Any matter that is volatile at the drying conditions is not included in the dry matter content. The moisture content can be approximated by 100% minus the dry matter content. To determine the dry matter content, the sample is dried at a temperature of 105±2 °C. The mass before and after drying is determined gravimetrically . The dry matter content of the sample is calculated as the ratio of the mass after drying to the mass before drying.
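The calculation itself is a simple ratio of the two weighings; a minimal sketch with hypothetical masses:

```python
# Minimal sketch: dry matter and moisture content from two weighings.
mass_before_g = 2.000   # sample mass before drying (hypothetical value)
mass_after_g = 1.894    # sample mass after drying at 105 +/- 2 degrees C

dry_matter_pct = 100 * mass_after_g / mass_before_g
moisture_pct = 100 - dry_matter_pct
print(f"Dry matter: {dry_matter_pct:.1f} %, moisture: {moisture_pct:.1f} %")
# Dry matter: 94.7 %, moisture: 5.3 %
```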
The lignin content can be defined as the sum of the amounts of acid-insoluble matter and acid-soluble matter (absorbing at 205 nm) after sulphuric acid hydrolysis under specified conditions, as determined by gravimetry and spectrophotometry, in milligrams per gram. In the determination, the samples are hydrolyzed with sulphuric acid using a two-step technique. The amount of lignin is determined using gravimetry and spectrophotometry. [ 5 ]
The carbohydrate content can be defined as the sum of the amounts of the five principal neutral wood monosaccharides : arabinose , galactose , glucose , mannose and xylose , in anhydrous form, in a sample, in milligrams per gram.
In the determination, the samples are hydrolyzed with sulphuric acid using a two-step technique. The amounts of the different monosaccharides are determined using ion chromatography (IC). [ citation needed ]
The ash content can be defined as the gravimetrically determined residue after ignition at a defined temperature, in a sample, in percent (weight / weight dry matter of sample).
In the determination, a sample is weighed in a heat-resistant crucible, dried at 105±2 °C, and ignited in a muffle furnace at 525±25 °C. The ash content is then determined, on a moisture-free basis, from the weight of residue after ignition and the moisture content of the sample. [ citation needed ]
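Expressing the result on a moisture-free basis means referring the ignition residue to the dry matter of the sample rather than to the as-weighed mass; a minimal sketch with hypothetical weighings:

```python
# Minimal sketch: ash content on a moisture-free (dry) basis.
sample_mass_g = 1.500        # as-weighed sample mass (hypothetical value)
dry_matter_fraction = 0.947  # from the dry matter determination
residue_mass_g = 0.021       # residue after ignition at 525 +/- 25 degrees C

ash_pct_dry_basis = 100 * residue_mass_g / (sample_mass_g * dry_matter_fraction)
print(f"Ash content: {ash_pct_dry_basis:.2f} % (on dry matter)")
# Ash content: 1.48 % (on dry matter)
```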
The metal elements content (including sulphur) may be determined as the sum of the elements Al, Ba, Ca, Cu, Fe, K, Mg, Mn, Na, P, Si, S and Zn after oxidation and acid digestion.
The metal elements can be determined by inductively coupled plasma optical emission spectroscopy (ICP-OES) after wet digestion. In such a determination, the samples are oxidized by hydrogen peroxide and subsequently acid digested in a closed vessel using a microwave acid digestion apparatus. After cooling, the samples are diluted and the concentration of each element determined by the ICP-OES. [ citation needed ]
The extractives content can be defined as the sum of matter that can be extracted by petroleum ether , and that does not evaporate during drying. This material consists mainly of fatty acids , resin acids , fatty alcohols , sterols , glycerides and steryl esters . In the determination, the samples are extracted with petroleum ether in, for instance, a Soxtec apparatus. After extraction, the solvents are evaporated and the residue is dried. Note that petroleum ether extracts may also contain elemental sulphur, S8, if present in the lignin sample. If the dried extracts contain a yellowish precipitate, this indicates that sulphur is present. [ citation needed ]
The main hydroxyl groups in lignin are aliphatic (R–OH), phenolic (Ph–OH) and carboxylic acid (R–COOH) hydroxyl groups. Phenolic hydroxyl groups are syringyl (S), guaiacyl (G) and p-hydroxyphenyl (H) structures and C5-substituted (i.e. having β-5, 4-O-5 and 5-5 inter-unit linkages) structures.
The hydroxyl groups may be determined by 31P nuclear magnetic resonance spectroscopy . In such a determination, the lignin sample is dissolved using a mixture of DMF and pyridine (in excess for a quantitative reaction), in the presence of an internal standard (IS) and a relaxation reagent (RR), and then phosphitylated using a mixture of a derivatisation reagent (DR) and deuterated chloroform . The phosphitylated sample is then scanned using liquid-state 31P NMR spectroscopy and the hydroxyl groups are quantified by integration of the corresponding signals in the obtained 31P NMR spectra. [ citation needed ]
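The quantification step is essentially a ratio of signal integrals against the known amount of internal standard; the sketch below illustrates that arithmetic with hypothetical integrals and a hypothetical standard amount (the actual reagents, standards and integration regions depend on the method used).

```python
# Minimal sketch: hydroxyl group contents (mmol/g) from 31P NMR integrals.
# All integrals and amounts below are hypothetical example values.
sample_mass_g = 0.030
internal_standard_mmol = 0.025   # known amount of internal standard in the tube

integral_is = 1.00               # integral of the internal-standard signal
integrals = {"aliphatic OH": 2.10, "phenolic OH": 3.45, "carboxylic acid OH": 0.55}

for group, integral in integrals.items():
    mmol_per_g = (integral / integral_is) * internal_standard_mmol / sample_mass_g
    print(f"{group}: {mmol_per_g:.2f} mmol/g")
```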
Structural elements in lignins are the building blocks in the macromolecule corresponding to the monomers and the intra-molecular bonds.
For lignins, the structural elements are often determined by pyrolysis-gas chromatography-mass spectrometry (py-GC-MS) or nuclear magnetic resonance spectroscopy (NMR) . [ citation needed ]
The molar mass distribution of lignin describes the relationship between the number of moles of each lignin molecular species and the molar mass of that species. Different average values can be defined, depending on the statistical method applied. For lignins, the weight-average molar mass (Mw) and number-average molar mass (Mn) are often determined. In addition, the peak molar mass (Mp) is often determined.
For kraft lignins, the molar mass distribution can be determined by aqueous phase or organic phase size-exclusion chromatography . [ citation needed ]
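Once an SEC trace has been converted into weight fractions per molar-mass slice, the averages are simple weighted sums; the sketch below uses a hypothetical, coarsely binned distribution for illustration.

```python
# Minimal sketch: number-average (Mn), weight-average (Mw) and peak (Mp)
# molar mass from a hypothetical binned molar mass distribution.
molar_mass = [500, 1000, 2000, 4000, 8000, 16000]    # g/mol, slice centres
weight_frac = [0.05, 0.15, 0.30, 0.28, 0.15, 0.07]   # weight fractions, sum to 1

Mn = sum(weight_frac) / sum(w / M for w, M in zip(weight_frac, molar_mass))
Mw = sum(w * M for w, M in zip(weight_frac, molar_mass)) / sum(weight_frac)
Mp = molar_mass[weight_frac.index(max(weight_frac))]

print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, Mp = {Mp} g/mol")
print(f"Dispersity Mw/Mn = {Mw / Mn:.2f}")
```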
The glass transition temperature (Tg) can be defined as the temperature at which an amorphous polymeric material undergoes a reversible transition from a hard, solid state to a more rubbery state, as determined from the inflection point of the heat capacity-temperature curve recorded by differential scanning calorimetry (DSC) . In the determination, the samples are often dried at 105 °C and subsequently analyzed by DSC in a hermetic aluminum pan by increasing the temperature above the Tg, and recording the heat capacity-temperature curve. [ 6 ]
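Numerically, locating the inflection point amounts to finding where the heat capacity changes most steeply with temperature in the transition region; a minimal sketch, assuming an already smoothed Cp(T) trace (the synthetic data below are for illustration only):

```python
# Minimal sketch: Tg as the inflection point of a heat capacity vs.
# temperature curve (synthetic, smoothed data for illustration).
import numpy as np

temperature_C = np.linspace(100, 200, 201)
# Synthetic sigmoidal heat-capacity step centred near 150 degrees C.
cp_J_per_gK = 1.2 + 0.4 / (1 + np.exp(-(temperature_C - 150) / 5))

dcp_dT = np.gradient(cp_J_per_gK, temperature_C)   # numerical derivative
Tg = temperature_C[np.argmax(dcp_dT)]               # steepest rise = inflection point
print(f"Estimated Tg (inflection point): {Tg:.1f} degrees C")  # ~150 C here
```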
Carbonized lignin can be used in electrical applications such as batteries and supercapacitors. The electrical properties of carbonized lignin can be assessed with techniques such as the two- and four-point methods, impedance spectroscopy, galvanostatic charge-discharge and cyclic voltammetry. [ 7 ] | https://en.wikipedia.org/wiki/Lignin_characterization
LignoSat is a small Japanese wooden satellite . It is credited as the world's first satellite to be made of wood. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
LignoSat was developed by Kyoto University and logging firm Sumitomo Forestry as a demonstration of using wood for space exploration uses. [ 4 ]
The satellite is named after the Latin word for "wood", "ligno". LignoSat is made of wood from honoki , a magnolia tree native to Japan whose wood is customarily used for sword sheaths. The choice of material was determined through a 10-month experiment aboard the International Space Station . The satellite was assembled using a traditional Japanese craft technique without screws or glue. [ 4 ] It still has some conventional aluminium structures and electronic components. [ 3 ]
LignoSat 1 is a CubeSat that measures 10 centimetres (3.9 in) on each side [ 5 ] and weighs 900 grams (32 oz). [ 3 ]
The satellite was launched to space on November 5, 2024 by SpaceX 's Falcon 9 Block 5 rocket inside the uncrewed Cargo Dragon from the Kennedy Space Center in Florida to the International Space Station . [ 6 ] [ 7 ]
It was deployed into orbit from the ISS on 9 December 2024, [ 8 ] but it could not establish communication with its ground station. [ 9 ]
LignoSat 2 is a 2U CubeSat. As of 2023 [update] , it is planned for launch in 2026. [ 10 ] | https://en.wikipedia.org/wiki/LignoSat |
Ligroin is the petroleum fraction consisting mostly of C 7 and C 8 hydrocarbons and boiling in the range 90‒140 °C (194–284 °F). The fraction is also called heavy naphtha . [ 1 ] [ 2 ] Ligroin is used as a laboratory solvent . Products under the name ligroin can have boiling ranges as low as 60‒80 °C and may be called light naphtha. [ 3 ]
The name ligroin (or ligroine or ligroïne ) appeared as early as 1866 . [ note 1 ]
Ligroin is assigned the CAS Registry Number 8032-32-4, which is also applied to many other products, particularly the lower boiling ones, called petroleum spirit , petroleum ether and petroleum benzine . [ 3 ]
Ligroin was used to refuel the world's first production automobile, the Benz Patent-Motorwagen , on a long distance journey between Mannheim and Pforzheim . Bertha Benz added ligroin to the vehicle at a pharmacy in Wiesloch , making it the first filling station in history.
The first functional diesel engine could also run on ligroin. [ 4 ] | https://en.wikipedia.org/wiki/Ligroin |
A like button , like option , or recommend button is a feature in communication software such as social networking services , Internet forums , news websites and blogs where the user can express that they like or support certain content . [ 1 ] Internet services that feature like buttons usually display the number of users who liked the content, and may show a full or partial list of them. This is a quantitative alternative to other methods of expressing reaction to content, like writing a reply text. It is the most used feature on social media. [ 2 ]
Some websites also include a dislike button , so the user can either vote in favor, against or neutrally. Other websites include more complex web content voting systems; for example, five stars or reaction buttons to show a wider range of emotion to the content.
Video sharing site Vimeo added a "like" button in November 2005. [ 3 ] Developer Andrew Pile describes it as an iteration of the "digg" button from the site Digg.com , saying "We liked the Digg concept, but we didn't want to call it 'Diggs,' so we came up with 'Likes ' ". [ 3 ]
The like button on FriendFeed was announced as a feature on October 30, 2007, and was popularized within that community. [ 4 ] Later the feature was integrated into Facebook before FriendFeed was acquired by Facebook on August 10, 2009. [ 5 ]
The Facebook like button is designed as a hand giving a " thumbs up ". Early proposals included a star or a plus sign, and during development the feature was referred to as "awesome" instead of "like". [ citation needed ] It was introduced on 9 February 2009. [ 6 ] In February 2016, Facebook introduced reactions, a new way for people to express emotions about Facebook posts. These reactions include "Love", "Haha", "Wow", "Sad", and "Angry".
The like button is a significant sharing tool: one "like" makes a post show up in friends' feeds, boosting it in the ranking algorithm so that the post continues to be seen and interacted with, sustaining the cycle of engagement. [ 7 ] On the other hand, a study highlights a disadvantage of the "like" reaction in algorithmic content ranking on Facebook: the "like" button can increase engagement but can decrease organic reach, acting as a "brake effect of viral reach". [ 8 ]
In early 2010, as part of a broader redesign of the service, YouTube switched from a star-based rating system to Like/Dislike buttons. Under the previous system, users could rate videos on a scale from 1 to 5 stars; YouTube staff argued that this change reflected common usage of the system, as 2-, 3-, and 4-star ratings were not used as often. [ 9 ] [ 10 ] In 2012, YouTube briefly experimented with replacing the Like and Dislike buttons with a Google+ +1 button. [ 11 ]
In 2019, after the backlash against YouTube Rewind 2018 , YouTube began considering options to combat "dislike mobs", including an option to completely remove the dislike button. [ 12 ] That video became the most disliked video on YouTube , surpassing the music video for Justin Bieber's "Baby" .
On November 12, 2021, YouTube announced it will make dislike counts private, with only the content creator being able to view the number of dislikes on the back end, in what the company says is an effort to combat targeted dislike and harassment campaigns and encourage smaller content creators. [ 13 ]
With a website update on October 17, 2023, views and likes on a new video are updated periodically during its first 24 hours. Additionally, the Like button "glows" when a creator asks their viewers to press it. [ 14 ]
In addition to videos, each user comment has also had its own set of Like and Dislike buttons since August 2007. [ 15 ] The feature was originally implemented in a fashion similar to Reddit 's system of upvotes and downvotes, until a larger redesign of the comment system in September 2013 (initially oriented around Google+); since then, comments continue to show their like count, but dislikes are not made public and thus have no visible effect on a comment's rating. [ 16 ] [ 17 ]
Google+ had a like button called the +1 ( Internet slang for "I like that" or "I agree"), which was introduced in June 2011. [ 18 ] In August 2011, the +1 button also became a share icon. [ 19 ]
On Reddit (a system of message boards ), users can upvote and downvote posts (and comments on posts). The votes contribute to posters' and commenters' "karma" (Reddit's name for a user's overall rating). [ 20 ]
Alongside reposts (commonly known as retweets), X (formerly Twitter) users can like posts made on the service, indicated by a heart. Until November 2015, the equivalent of "liking a post" was "favoriting a post", and favorites were symbolized by a gold star. To alleviate user confusion and bring the function more in line with other social networks, the favorite function was renamed to like. [ 21 ]
Previously, users were able to see which posts others had liked under a likes tab on a user's profile. In June 2024, this feature was removed across the site, making likes private for all users. [ 22 ]
In July 2024 it was reported that a "dislike" button featuring a broken heart icon was being tested as an addition to the site. [ 23 ]
VK like buttons for posts, comments, media and external sites operate in a different way from Facebook. Liked content doesn't get automatically pushed to the user's wall, but is saved in the (private) Favorites section instead.
The Instagram like button is indicated by a heart symbol. In addition to tapping the heart symbol on a post, users can double tap an image to "like" it. In May 2019, Instagram began tests wherein the number of likes on a user's post is hidden from other users. [ 24 ]
The TikTok like button is indicated by a heart symbol, and users can like a post by double tapping it, similar to YouTube Shorts and Instagram. Liked content can be accessed via the "Liked" tab on a user's profile.
Additionally, in 2022 TikTok implemented a Dislike button for user comments, intended to give users the power to identify comments they consider "irrelevant or inappropriate". As on YouTube since its late-2013 comment system overhaul, these dislikes are not visible to others. [ 25 ] [ 26 ]
XWiki , the application wiki and open source collaborative platform, added the "Like" button in version 12.7. This button allows users to like wiki pages. It is possible to see all liked pages and the Like counter for each page.
The business and employment social media LinkedIn includes a "like" button. In 2019 the platform added reaction options such as "celebrate", "love", "insightful" and "support". [ 27 ] [ 28 ]
In 2012, following the death of Indian political leader Bal Thackeray , two women were arrested related to a Facebook post about the death. One of the women posted the status update, and her friend had liked it. [ 29 ] The arrest under sections of the Indian Penal Code and the Information Technology Act caused a national outrage against freedom of speech and misuse of the Information Technology laws. [ 30 ] After an enquiry that concluded that the arrests were avoidable and not justified, and recommended action against the arresting policemen, [ 31 ] the allegations were dropped, the police officers suspended, and the magistrate involved in the case was transferred. [ 32 ]
In 2017, a man was fined 4,000 Swiss francs by a Swiss regional court for liking defamatory messages on Facebook written by other people which criticized an activist. According to the court, the defendant "clearly endorsed the unseemly content and made it his own". [ 33 ] | https://en.wikipedia.org/wiki/Like_button |
In mathematics , like terms are summands in a sum that differ only by a numerical factor. [ 1 ] Like terms can be regrouped by adding their coefficients.
Typically, in a polynomial expression , like terms are those that contain the same variables to the same powers , possibly with different coefficients .
More generally, when some variables are considered as parameters, like terms are defined similarly, but "numerical factors" must be replaced by "factors depending only on the parameters".
For example, when considering a quadratic equation , one often considers the expression
( x − r ) ( x − s ) {\displaystyle (x-r)(x-s)}
where r {\displaystyle r} and s {\displaystyle s} are the roots of the equation and may be considered as parameters. Then, expanding the above product and regrouping the like terms gives
x 2 − ( r + s ) x + r s {\displaystyle x^{2}-(r+s)x+rs} ,
in which the coefficients − ( r + s ) {\displaystyle -(r+s)} and r s {\displaystyle rs} depend only on the parameters.
In this discussion, a "term" will refer to a collection of numbers and variables multiplied or divided together (division being simply multiplication by a reciprocal). Terms within the same expression are combined by either addition or subtraction. For example, take the expression:
a x + b x {\displaystyle ax+bx}
There are two terms in this expression. Notice that the two terms have a common factor, that is, both terms have an x {\displaystyle x} . This means that the common factor variable can be factored out, resulting in
( a + b ) x {\displaystyle (a+b)x}
If the expression in parentheses may be calculated, that is, if the variables in the expression in the parentheses are known numbers, then it is simpler to carry out the calculation a + b {\displaystyle a+b} and juxtapose that new number with the remaining unknown factor. Terms combined in an expression with a common unknown factor (or multiple unknown factors) are called like terms.
To provide an example for above, let a {\displaystyle a} and b {\displaystyle b} have numerical values, so that their sum may be calculated. For ease of calculation, let a = 5 {\displaystyle a=5} and b = 3 {\displaystyle b=3} . The original expression becomes
5 x + 3 x {\displaystyle 5x+3x}
which may be factored into
( 5 + 3 ) x {\displaystyle (5+3)x}
or, equally,
8 x {\displaystyle 8x} .
This demonstrates that
5 x + 3 x = 8 x {\displaystyle 5x+3x=8x}
The known numerical values that multiply the common unknown factors of two or more terms are called coefficients. As this example shows, when like terms exist in an expression, they may be combined by adding or subtracting (whatever the expression indicates) the coefficients, and maintaining the common factor of both terms. Such combination is called combining like terms or collecting like terms, and it is an important tool used for solving equations.
Take the expression, which is to be simplified:
3 ( 4 x 2 y − 6 y ) + 7 x 2 y − 3 y 2 + 2 ( 8 y − 4 y 2 − 4 x 2 y ) {\displaystyle 3(4x^{2}y-6y)+7x^{2}y-3y^{2}+2(8y-4y^{2}-4x^{2}y)}
The first step to grouping like terms in this expression is to get rid of the parentheses. Do this by distributing (multiplying) each number in front of a set of parentheses to each term in that set of parentheses:
12 x 2 y − 18 y + 7 x 2 y − 3 y 2 + 16 y − 8 y 2 − 8 x 2 y {\displaystyle 12x^{2}y-18y+7x^{2}y-3y^{2}+16y-8y^{2}-8x^{2}y}
The like terms in this expression are the terms that can be grouped together by having exactly the same set of unknown factors. Here, the sets of unknown factors are x 2 y , {\displaystyle x^{2}y,} y 2 , {\displaystyle y^{2},} and y {\displaystyle y} . By the rule in the first example, all terms with the same set of unknown factors, that is, all like terms, may be combined by adding or subtracting their coefficients, while maintaining the unknown factors. Thus, the expression becomes
11 x 2 y − 2 y − 11 y 2 {\displaystyle 11x^{2}y-2y-11y^{2}}
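The same simplification can be verified with a computer algebra system; a minimal sketch using SymPy (assuming the library is available):

```python
# Minimal sketch: combining like terms with SymPy.
import sympy as sp

x, y = sp.symbols("x y")
expr = 3*(4*x**2*y - 6*y) + 7*x**2*y - 3*y**2 + 2*(8*y - 4*y**2 - 4*x**2*y)

# expand() distributes over the parentheses and collects like terms.
print(sp.expand(expr))  # 11*x**2*y - 11*y**2 - 2*y
```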
The expression is considered simplified when all like terms have been combined, and all terms present are unlike. In this case, all terms now have different unknown factors, and are thus unlike, and so the expression is completely simplified. | https://en.wikipedia.org/wiki/Like_terms |