In mathematics and the field of number theory, the Landau–Ramanujan constant is the positive real number b that occurs in a theorem proved by Edmund Landau in 1908, [1] stating that for large x, the number of positive integers below x that are the sum of two square numbers behaves asymptotically as b x / √(ln x). This constant b was rediscovered in 1913 by Srinivasa Ramanujan, in the first letter he wrote to G. H. Hardy. [2] By the sum of two squares theorem, the numbers that can be expressed as a sum of two squares of integers are the ones for which each prime number congruent to 3 mod 4 appears with an even exponent in their prime factorization. For instance, 45 = 36 + 9 is a sum of two squares; in its prime factorization, 3² × 5, the prime 3 appears with an even exponent, and the prime 5 is congruent to 1 mod 4, so its exponent can be odd. Landau's theorem states that if N(x) is the number of positive integers less than x that are the sum of two squares, then

lim_{x→∞} N(x) √(ln x) / x = b,

where b is the Landau–Ramanujan constant. The Landau–Ramanujan constant can also be written as an infinite product:

b = (1/√2) ∏_{p ≡ 3 (mod 4)} (1 − 1/p²)^{−1/2} = (π/4) ∏_{p ≡ 1 (mod 4)} (1 − 1/p²)^{1/2}.

This constant was stated by Landau in the limit form above; Ramanujan instead approximated N(x) as an integral, with the same constant of proportionality and a slowly growing error term. [3]
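The two-squares criterion and the slow approach of N(x)√(ln x)/x to b ≈ 0.76422 can be checked numerically. A minimal sketch (the function names are illustrative; trial division keeps it simple rather than fast):

```python
import math

def is_sum_of_two_squares(n):
    # Sum of two squares theorem: n is a sum of two squares iff every prime
    # p ≡ 3 (mod 4) appears to an even power in the factorization of n.
    d, m = 2, n
    while d * d <= m:
        if m % d == 0:
            exp = 0
            while m % d == 0:
                m //= d
                exp += 1
            if d % 4 == 3 and exp % 2 == 1:
                return False
        d += 1
    return not (m > 1 and m % 4 == 3)

def landau_ratio(x):
    # N(x) * sqrt(ln x) / x, which tends (slowly) to b ≈ 0.76422 as x grows.
    count = sum(1 for n in range(1, x) if is_sum_of_two_squares(n))
    return count * math.sqrt(math.log(x)) / x
```

On modest ranges the ratio sits a few percent above b, reflecting the lower-order correction terms in 1/ln x.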
https://en.wikipedia.org/wiki/Landau–Ramanujan_constant
In fluid dynamics, the Landau–Squire jet or submerged Landau jet describes a round submerged jet issuing from a point source of momentum into an infinite fluid medium of the same kind. It is an exact solution of the incompressible Navier–Stokes equations, first discovered by Lev Landau in 1944 [1][2] and later by Herbert Squire in 1951. [3] The self-similar equation was in fact first derived by N. A. Slezkin in 1934, [4] but never applied to the jet. Following Landau's work, V. I. Yatseyev obtained the general solution of the equation in 1950. [5] In the presence of solid walls, the problem is described by the Schneider flow. The problem is described in spherical coordinates (r, θ, φ) with velocity components (u, v, 0). The flow is axisymmetric, i.e., independent of φ. The continuity equation and the incompressible Navier–Stokes equations then reduce to a set of equations for the velocity components and the pressure. A self-similar description of the solution is available, [6] in which the velocities scale as ν/r times functions of θ alone and the pressure as ρν²/r² times a function of θ. Substituting this self-similar form into the governing equations and using the boundary conditions u = v = p − p∞ = 0 at infinity, one finds the angular form of the pressure up to a constant c₁. Using this pressure, the momentum equation yields an equation for the angular profile f. Replacing θ by μ = cos θ as the independent variable, the velocities become functions of f and its derivative with respect to μ (for brevity, the same symbol is used for f(θ) and f(μ) even though they are the same quantity expressed in different variables), and after two integrations the equation reduces to a first-order equation with constants of integration c₂ and c₃. The resulting equation is a Riccati equation.
After some calculation, the general solution can be shown to involve three constants α, β, c. The physically relevant solution for the jet corresponds to the case α = β = 0 (equivalently, c₁ = c₂ = c₃ = 0), so that the solution is free from singularities on the axis of symmetry, except at the origin. [7] Therefore,

f(μ) = 2(1 − μ²)/(1 + c − μ).

The function f is related to the stream function by ψ = νrf, so contours of f for different values of c provide the streamlines. The constant c describes the force at the origin acting in the direction of the jet (this force is equal to the rate of momentum transfer across any sphere around the origin plus the force in the jet direction exerted by the sphere due to pressure and viscous forces). The solution describes a jet of fluid moving away from the origin rapidly and entraining the slowly moving fluid outside of the jet. The edge of the jet can be defined as the location where the streamlines are at minimum distance from the axis, and the force can be expressed alternatively using the semi-angle of this conical boundary of the jet. When the force becomes large, the semi-angle of the jet becomes small, and the solutions inside and outside of the jet approach limiting forms; the jet in this limiting case is called the Schlichting jet. On the other extreme, when the force is small, the semi-angle approaches 90 degrees (there is no separate inside and outside region; the whole domain is treated as a single region), and the solution approaches its small-force limit.
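Assuming the closed-form profile f(μ) = 2(1 − μ²)/(1 + c − μ) above, a short numerical sketch (function names are illustrative) recovers the radial velocity from the stream function ψ = νrf via u = −(ν/r) df/dμ in this convention; on the axis this gives u ≈ 4ν/(cr), so a small c corresponds to a strong jet:

```python
def landau_squire_f(mu, c):
    # Assumed closed-form angular profile (regular everywhere except the origin):
    # f(mu) = 2 (1 - mu^2) / (1 + c - mu), with mu = cos(theta).
    return 2.0 * (1.0 - mu ** 2) / (1.0 + c - mu)

def radial_velocity(r, mu, c, nu=1.0):
    # From the stream function psi = nu * r * f(mu):
    #   u = (1 / (r^2 sin(theta))) d(psi)/d(theta) = -(nu / r) * df/dmu.
    # The derivative is taken numerically (central difference).
    dmu = 1e-6
    dfdmu = (landau_squire_f(mu + dmu, c) - landau_squire_f(mu - dmu, c)) / (2.0 * dmu)
    return -nu * dfdmu / r
```

The profile vanishes at μ = ±1, consistent with a stream function that is zero on the axis, while the axial velocity itself remains finite and decays like 1/r.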
https://en.wikipedia.org/wiki/Landau–Squire_jet
The Landau–Zener formula is an analytic solution to the equations of motion governing the transition dynamics of a two-state quantum system, with a time-dependent Hamiltonian varying such that the energy separation of the two states is a linear function of time. The formula, giving the probability of a diabatic (not adiabatic) transition between the two energy states, was published separately by Lev Landau, [1] Clarence Zener, [2] Ernst Stueckelberg, [3] and Ettore Majorana, [4] in 1932. If the system starts, in the infinite past, in the lower energy eigenstate, we wish to calculate the probability of finding the system in the upper energy eigenstate in the infinite future (a so-called Landau–Zener transition). For infinitely slow variation of the energy difference (that is, a Landau–Zener velocity of zero), the adiabatic theorem tells us that no such transition will take place, as the system will always be in an instantaneous eigenstate of the Hamiltonian at that moment in time. At non-zero velocities, transitions occur with a probability described by the Landau–Zener formula. Such transitions occur between states of the entire system; hence any description of the system must include all external influences, including collisions and external electric and magnetic fields. In order that the equations of motion for the system may be solved analytically, a set of simplifications is made, known collectively as the Landau–Zener approximation. The first simplification makes this a semi-classical treatment. In the case of an atom in a magnetic field, the field strength becomes a classical variable which can be precisely measured during the transition. This requirement is quite restrictive, as a linear change will not, in general, be the optimal profile to achieve the desired transition probability.
The second simplification allows us to make the substitution E₂(t) − E₁(t) = αt, where E₁(t) and E₂(t) are the energies of the two states at time t, given by the diagonal elements of the Hamiltonian matrix, and α is a constant. For the case of an atom in a magnetic field this corresponds to a linear change in magnetic field. For a linear Zeeman shift this follows directly from point 1. The final simplification requires that the time-dependent perturbation does not couple the diabatic states; rather, the coupling must be due to a static deviation from a 1/r Coulomb potential, commonly described by a quantum defect. The details of Zener's solution are somewhat opaque, relying on a set of substitutions to put the equation of motion into the form of the Weber equation [5] and using the known solution. A more transparent solution is provided by Curt Wittig [6] using contour integration. The key figure of merit in this approach is the Landau–Zener velocity,

v_LZ = ∂(E₂ − E₁)/∂t = (∂(E₂ − E₁)/∂q)(dq/dt),

where q is the perturbation variable (electric or magnetic field, molecular bond length, or any other perturbation to the system), and E₁ and E₂ are the energies of the two diabatic (crossing) states. A large v_LZ results in a large diabatic transition probability and vice versa. Using the Landau–Zener formula, the probability P_D of a diabatic transition is given by

P_D = exp(−2π a² / (ħ |∂(E₂ − E₁)/∂t|)).

The quantity a is the off-diagonal element of the two-level system's Hamiltonian coupling the basis states, and as such it is half the gap between the two adiabatic eigenenergies at the avoided crossing, where E₁ = E₂. The simplest generalization of the two-state Landau–Zener model is a multistate system with a Hamiltonian of the form H(t) = A + Bt, where A and B are Hermitian N×N matrices with time-independent elements.
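The formula can be checked against direct integration of the two-level Schrödinger equation. The sketch below sets ħ = 1 and uses the standard linear-sweep Hamiltonian H(t) = [[αt/2, a], [a, −αt/2]] (a common but not unique parametrization), comparing the closed-form probability with a fixed-step numerical solution:

```python
import math

def lz_diabatic_probability(a, alpha, hbar=1.0):
    # Landau–Zener formula: P_D = exp(-2*pi*a^2 / (hbar*|alpha|)), where a is the
    # off-diagonal coupling (half the avoided-crossing gap) and
    # alpha = d(E2 - E1)/dt is the sweep rate.
    return math.exp(-2.0 * math.pi * a ** 2 / (hbar * abs(alpha)))

def lz_numerical(a, alpha, t_max=60.0, n_steps=120000):
    # Fixed-step RK4 integration of i dc/dt = H(t) c with
    # H(t) = [[alpha*t/2, a], [a, -alpha*t/2]] (hbar = 1), starting in diabatic
    # state 1 at t = -t_max; the population left in state 1 at t = +t_max
    # approximates the diabatic transition probability.
    def deriv(t, c1, c2):
        return (-1j * (0.5 * alpha * t * c1 + a * c2),
                -1j * (a * c1 - 0.5 * alpha * t * c2))
    dt = 2.0 * t_max / n_steps
    t, c1, c2 = -t_max, 1.0 + 0j, 0.0 + 0j
    for _ in range(n_steps):
        k1 = deriv(t, c1, c2)
        k2 = deriv(t + dt / 2, c1 + dt / 2 * k1[0], c2 + dt / 2 * k1[1])
        k3 = deriv(t + dt / 2, c1 + dt / 2 * k2[0], c2 + dt / 2 * k2[1])
        k4 = deriv(t + dt, c1 + dt * k3[0], c2 + dt * k3[1])
        c1 += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        c2 += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return abs(c1) ** 2
```

Because the integration window is finite, the numerical result carries small residual oscillations around the asymptotic value, so agreement to a couple of percent is the expected behavior rather than exact equality.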
The goal of the multistate Landau–Zener theory is to determine elements of the scattering matrix and the transition probabilities between states of this model after evolution with such a Hamiltonian from negative infinite to positive infinite time. The transition probabilities are the absolute values squared of the scattering matrix elements. There are exact formulas, called hierarchy constraints, that provide analytical expressions for special elements of the scattering matrix in any multistate Landau–Zener model. [7] Special cases of these relations are known as the Brundobler–Elser (BE) formula [8][9][10] and the no-go theorem. [11][12] Discrete symmetries often lead to constraints that reduce the number of independent elements of the scattering matrix. [13][14] There are also integrability conditions that, when satisfied, lead to exact expressions for the entire scattering matrices of multistate Landau–Zener models; numerous such completely solvable models have been identified. Applications of the Landau–Zener solution to the problems of quantum state preparation and manipulation with discrete degrees of freedom have stimulated the study of noise and decoherence effects on the transition probability in a driven two-state system. Several compact analytical results have been derived to describe these effects, including the Kayanuma formula [28] for strong diagonal noise and the Pokrovsky–Sinitsyn formula [29] for coupling to a fast colored noise with off-diagonal components. Using the Schwinger–Keldysh Green's function, a rather complete and comprehensive study of the effect of quantum noise in all parameter regimes was performed by Ao and Rammer in the late 1980s, from weak to strong coupling, low to high temperature, and slow to fast passage. Concise analytical expressions were obtained in various limits, showing the rich behavior of this problem.
[30] The effects of nuclear spin bath and heat bath coupling on the Landau–Zener process were explored by Sinitsyn and Prokof'ev [31] and by Pokrovsky and Sun, [32][33][34] respectively. Exact results in multistate Landau–Zener theory (the no-go theorem and the BE formula) can be applied to Landau–Zener systems which are coupled to baths composed of infinitely many oscillators and/or to spin baths (dissipative Landau–Zener transitions). They provide exact expressions for transition probabilities averaged over final bath states if the evolution begins from the ground state at zero temperature; see Ref. [35] for oscillator baths and Ref. [36] for universal results including spin baths.
https://en.wikipedia.org/wiki/Landau–Zener_formula
In physics, Landau–de Gennes theory describes the NI transition, i.e., the phase transition between nematic liquid crystals and isotropic liquids; it is based on classical Landau theory and was developed by Pierre-Gilles de Gennes in 1969. [1][2] The phenomenological theory uses the Q tensor as an order parameter in expanding the free energy density. [3][4] The NI transition is a first-order phase transition, albeit a very weak one. The order parameter is the Q tensor, which is a symmetric, traceless, second-order tensor and vanishes in the isotropic liquid phase. We shall consider a uniaxial Q tensor, defined by

Q = S (n ⊗ n − I/3),

where S = S(T) is the scalar order parameter and n is the director. The Q tensor is zero in the isotropic liquid phase, since the scalar order parameter S is zero, but becomes non-zero in the nematic phase. Near the NI transition, the (Helmholtz or Gibbs) free energy density F is expanded in powers of Q, with coefficients (A, B, C), functions of temperature, multiplying the quadratic, cubic, and quartic invariants of Q. Near the phase transition, we can expand A(T) = a(T − T∗) + ⋯, B(T) = b + ⋯ and C(T) = c + ⋯, with (a, b, c) being three positive constants. Now substituting the uniaxial Q tensor results in a quartic polynomial in S. [5] This is minimized when its derivative with respect to S vanishes. The two required solutions of this equation are the isotropic solution S = 0 and a non-zero nematic solution S(T). The NI transition temperature T_NI is not simply equal to T∗ (which would be the case in a second-order phase transition), but lies slightly above it; S_NI is the scalar order parameter at the transition.
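The first-order character can be illustrated by minimizing the quartic free energy in S numerically. The sketch below assumes one common convention, F(S) = (a/2)(T − T∗)S² − (b/3)S³ + (c/4)S⁴, with hypothetical coefficient values; in that convention the global minimum jumps from S = 0 to S_NI = 2b/(3c) at T_NI = T∗ + 2b²/(9ac):

```python
import numpy as np

def free_energy(S, T, a=1.0, b=3.0, c=3.0, T_star=300.0):
    # Landau free-energy density for a uniaxial nematic, in one common convention
    # (hypothetical coefficients a, b, c > 0; T_star is the supercooling limit):
    #   F(S) = (a/2)(T - T_star) S^2 - (b/3) S^3 + (c/4) S^4
    return 0.5 * a * (T - T_star) * S ** 2 - (b / 3.0) * S ** 3 + (c / 4.0) * S ** 4

def equilibrium_S(T, **kw):
    # Global minimizer of F over a dense grid of scalar order parameters.
    S = np.linspace(0.0, 1.5, 30001)
    F = free_energy(S, T, **kw)
    return S[np.argmin(F)]

# First-order transition: T_NI = T_star + 2 b^2 / (9 a c), jump S_NI = 2 b / (3 c)
a, b, c, T_star = 1.0, 3.0, 3.0, 300.0
T_NI = T_star + 2.0 * b ** 2 / (9.0 * a * c)
```

Just below T_NI the equilibrium order parameter sits near S_NI rather than growing continuously from zero, which is the hallmark of a weakly first-order transition.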
https://en.wikipedia.org/wiki/Landau–de_Gennes_theory
In number theory, the Lander, Parkin, and Selfridge conjecture concerns the integer solutions of equations which contain sums of like powers. The equations are generalisations of those considered in Fermat's Last Theorem. The conjecture is that if the sum of some k-th powers equals the sum of some other k-th powers, then the total number of terms in both sums combined must be at least k. Diophantine equations, such as the integer version of the equation a² + b² = c² that appears in the Pythagorean theorem, have been studied for their integer solution properties for centuries. Fermat's Last Theorem states that for powers greater than 2, the equation a^k + b^k = c^k has no solutions in non-zero integers a, b, c. Extending the number of terms on either or both sides, and allowing for higher powers than 2, led Leonhard Euler to propose in 1769 that for all integers n and k greater than 1, if the sum of n k-th powers of positive integers is itself a k-th power, then n is greater than or equal to k. In symbols, if ∑_{i=1}^{n} a_i^k = b^k, where n > 1 and a₁, a₂, …, a_n, b are positive integers, then his conjecture was that n ≥ k. In 1966, a counterexample to Euler's sum of powers conjecture was found by Leon J. Lander and Thomas R. Parkin for k = 5: [1]

27⁵ + 84⁵ + 110⁵ + 133⁵ = 144⁵.

In subsequent years, further counterexamples were found, including for k = 4. The latter disproved the more specific Euler quartic conjecture, namely that a⁴ + b⁴ + c⁴ = d⁴ has no positive integer solutions. In fact, the smallest solution, found in 1988, is

95800⁴ + 217519⁴ + 414560⁴ = 422481⁴.

In 1967, L. J. Lander, T. R. Parkin, and John Selfridge conjectured [2] that if ∑_{i=1}^{n} a_i^k = ∑_{j=1}^{m} b_j^k, where a_i ≠ b_j are positive integers for all 1 ≤ i ≤ n and 1 ≤ j ≤ m, then m + n ≥ k. The equal sum of like powers formula is often abbreviated as (k, m, n).
Small examples with m = n = k/2 (related to generalized taxicab numbers) include 59⁴ + 158⁴ = 133⁴ + 134⁴ and 3⁶ + 19⁶ + 22⁶ = 10⁶ + 15⁶ + 23⁶. The conjecture implies in the special case of m = 1 that if ∑_{i=1}^{n} a_i^k = b^k (under the conditions given above) then n ≥ k − 1. For this special case of m = 1, a number of solutions are known satisfying the proposed constraint with n ≤ k, where the terms are positive integers, hence giving a partition of a power into like powers. [3] Fermat's Last Theorem implies that for k = 4 the conjecture is true. It is not known if the conjecture is true, or if nontrivial solutions exist that would be counterexamples, such as a^k + b^k = c^k + d^k for k ≥ 5. [5][6]
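The counterexamples above, and a small (k, m, n) = (4, 2, 2) search, are easy to verify directly in integer arithmetic (the function name is illustrative):

```python
from itertools import combinations_with_replacement

# Counterexample to Euler's conjecture found by Lander and Parkin (k = 5):
assert 27 ** 5 + 84 ** 5 + 110 ** 5 + 133 ** 5 == 144 ** 5

# Smallest counterexample to the Euler quartic conjecture (k = 4, found in 1988):
assert 95800 ** 4 + 217519 ** 4 + 414560 ** 4 == 422481 ** 4

def find_4_2_2(limit=200):
    # Brute-force search for a (4, 2, 2) identity a^4 + b^4 = c^4 + d^4
    # with distinct pairs, all terms below `limit`.
    sums = {}
    for a, b in combinations_with_replacement(range(1, limit), 2):
        s = a ** 4 + b ** 4
        if s in sums and sums[s] != (a, b):
            return sums[s], (a, b), s
        sums[s] = (a, b)
    return None
```

Within this search range the only hit is 59⁴ + 158⁴ = 133⁴ + 134⁴ = 635318657, the m = n = 2 example quoted above.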
https://en.wikipedia.org/wiki/Lander,_Parkin,_and_Selfridge_conjecture
Landfarming is an ex situ waste treatment process that is performed in the upper soil zone or in biotreatment cells. Contaminated soils, sediments, or sludges are transported to the landfarming site, mixed into the soil surface and periodically turned over (tilled) to aerate the mixture. [1] Landfarming commonly uses a clay or composite liner to intercept leaching contaminants and prevent groundwater pollution; [2] however, a liner is not a universal requirement. [3] This technique has been used for years in the management and disposal of drill cuttings, oily sludge and other petroleum refinery wastes. The equipment employed in landfarming is typical of that used in agricultural operations. These landfarming activities cultivate and enhance microbial degradation of hazardous compounds. As a rule of thumb, the higher the molecular weight (i.e., the more rings within a polycyclic aromatic hydrocarbon), the slower the degradation rate. Also, the more chlorinated or nitrated the compound, the more difficult it is to degrade. [4] Several factors may limit the applicability and effectiveness of the process. Hydrocarbon compounds that have been identified as not readily degraded by landfarming include creosote, pentachlorophenol (PCP), and bunker C oil. [citation needed]
https://en.wikipedia.org/wiki/Landfarming
Landfill gas is a mix of different gases created by the action of microorganisms within a landfill as they decompose organic waste, including, for example, food waste and paper waste. Landfill gas is approximately forty to sixty percent methane, with the remainder being mostly carbon dioxide; trace amounts of other volatile organic compounds (VOCs) make up the rest (<1%). These trace gases include a large array of species, mainly simple hydrocarbons. [1] Landfill gases have an influence on climate change. The major components are CO₂ and methane, both of which are greenhouse gases. Methane is a far more potent greenhouse gas, with each molecule having twenty-five times the effect of a molecule of carbon dioxide; methane, however, makes up a smaller share of the atmosphere than carbon dioxide does. Landfills are the third-largest source of methane in the US. [2] Because of the significant negative effects of these gases, regulatory regimes have been set up to monitor landfill gas, reduce the amount of biodegradable content in municipal waste, and create landfill gas utilization strategies, which include gas flaring or capture for electricity generation. Landfill gases are the result of three processes: evaporation of volatile organic compounds, chemical reactions between waste components, and microbial action. [1] The first two depend strongly on the nature of the waste. The dominant process in most landfills is the third, whereby anaerobic bacteria decompose organic waste to produce biogas, which consists of methane and carbon dioxide together with traces of other compounds. [3] Despite the heterogeneity of waste, the evolution of gases follows a well-defined kinetic pattern. Formation of methane and CO₂ commences about six months after the landfill material is deposited. The evolution of gas reaches a maximum at about 20 years, then declines over the course of decades.
[1] Conditions and changes within the landfill can be observed with electrical resistivity tomography (ERT) to detect sources of landfill gas [4] and leachate movements and pathways. [5] Conditions at different locations, such as temperature, moisture levels and the fraction of biodegradable material, can be inferred, and this information can be used to improve gas production through optimal well locations over hotspots and interventions such as heap irrigation. When landfill gas permeates through a soil cover, a fraction of the methane in the gas is oxidized microbially to CO₂. [6] Because gases produced by landfills are both valuable and sometimes hazardous, monitoring techniques have been developed. Flame ionization detectors can be used to measure methane levels as well as total VOC levels. Surface monitoring and sub-surface monitoring, as well as monitoring of the ambient air, are carried out. In the U.S., under the Clean Air Act of 1990, many large landfills are required to install gas collection and control systems, which means that at the very least the facilities must collect and flare the gas. U.S. federal regulations under Subtitle D of RCRA, formed in October 1979, regulate the siting, design, construction, operation, monitoring, and closure of municipal solid waste landfills. Subtitle D now requires controls on the migration of methane in landfill gas. Monitoring requirements must be met at landfills during their operation, and for an additional 30 years after. The landfills affected by Subtitle D of RCRA must control gas by checking periodically for methane emissions and preventing off-site migration. Landfill owners and operators must ensure that the concentration of methane gas does not exceed 25% of the lower explosive limit (LEL) for methane in the facility's structures, or the LEL for methane at the facility boundary. [7] The gases produced within a landfill can be collected and used in various ways.
The landfill gas can be utilized directly on-site by a boiler or any type of combustion system, providing heat. Electricity can also be generated on-site through the use of microturbines, steam turbines, or fuel cells. [8] The landfill gas can also be sold off-site and sent into natural gas pipelines. This approach requires the gas to be processed to pipeline quality, e.g., by removing various contaminants and components. [9] Landfill gas can also be used to evaporate leachate, another byproduct of the landfill process; this application displaces the fuel that was previously used for the same purpose. [10] The efficiency of gas collection at landfills directly impacts the amount of energy that can be recovered: closed landfills (those no longer accepting waste) collect gas more efficiently than open landfills (those still accepting waste). A comparison of collection efficiency at closed and open landfills found about a 17 percentage point difference between the two. [11] Capture and use of landfill gas can be expensive. Some environmental groups claim that the projects do not produce "renewable power" because trash (their source) is not renewable. The Sierra Club opposes government subsidies for such projects. [12] The Natural Resources Defense Council (NRDC) argues that government incentives should be directed more towards solar, wind, and energy-efficiency efforts. [12] Landfill gas emissions can lead to environmental, hygiene and security problems in the landfill. [13][14] Several accidents have occurred: for example, at Loscoe, England, in 1986, [15] migrating landfill gas accumulated and partially destroyed a property, and in 1991 an explosion in a house adjacent to the Skellingsted landfill in Denmark caused two deaths. [16] Due to the risk presented by landfill gas, there is a clear need to monitor gas produced by landfills.
In addition to the risk of fire and explosion, gas migration in the subsurface can result in contact of landfill gas with groundwater. This, in turn, can result in contamination of groundwater by organic compounds present in nearly all landfill gas. [17] Although usually evolved only in trace amounts, landfills do release some aromatics and chlorocarbons. Landfill gas can migrate due to pressure differentials and diffusion, which can create an explosion hazard if the gas reaches sufficiently high concentrations in adjacent buildings. A United States Environmental Protection Agency (EPA) report indicates that as of 2016, counts of operational municipal solid waste landfills ranged between 1,900 and 2,000. In a nationwide study done by the Environmental Research and Education Foundation in 2013, only 1,540 operational municipal solid waste landfills were counted throughout the United States. Decomposing waste in these landfills produces landfill gas, a mixture of about half methane and half carbon dioxide. Landfills are the third-largest source of methane emissions in the United States, with municipal solid waste landfills representing 95 percent of this fraction. [18][19] In the U.S., the number of landfill gas projects increased from 399 in 2005 to 594 in 2012, [20] according to the Environmental Protection Agency. These projects are popular because they control energy costs and reduce greenhouse gas emissions. These projects collect the methane gas and treat it so it can be used for electricity or upgraded to pipeline-grade gas. (Methane gas has twenty-one times the global warming potential of carbon dioxide.) [21] For example, in the U.S., Waste Management uses landfill gas as an energy source at 110 landfill gas-to-energy facilities. This energy production offsets almost two million tons of coal per year, creating energy equivalent to that needed by four hundred thousand homes.
These projects also reduce greenhouse gas emissions into the atmosphere. [22]
https://en.wikipedia.org/wiki/Landfill_gas
A landfill liner, or composite liner, is intended to be a low-permeability barrier laid down under engineered landfill sites. Until it deteriorates, the liner retards migration of leachate and its toxic constituents into underlying aquifers or nearby rivers, preventing potentially irreversible contamination of the local waterway and its sediments. Modern landfills generally require a layer of compacted clay with a minimum required thickness and a maximum allowable hydraulic conductivity, overlaid by a high-density polyethylene geomembrane. The United States Environmental Protection Agency has stated that the barriers "will ultimately fail," while sites remain threats for "thousands of years," suggesting that modern landfill designs delay but do not prevent ground and surface water pollution. [1] Chipped or waste tires are used to support and insulate the liner. [2] Different types of liquid trash vary in their chemical properties and the threat they pose to the local environment, so individual landfills may use a variety of different liner systems depending on the type of trash collected there. There are two main types of liner systems in use: single-liner systems and double-liner systems. Single-liner systems are generally used in landfills which hold rubble waste from construction; they are not designed to contain harmful liquid wastes such as paint or tar that could easily seep through a single liner. Double-liner systems are usually found in municipal solid waste landfills, as well as all hazardous waste landfills. The first layer is constructed to collect the leachate, while the second layer is engineered to be a leak-detection system to ensure that no contaminants seep into the ground.
[3] Composite liners are required in municipal solid waste landfills and use a double-liner system built around a leachate collection system; leachate is the liquid that collects dissolved and suspended material from the waste as it passes through it. The leachate collection system is surrounded by a solid drainage layer such as gravel, which is enclosed by a geomembrane and compacted clay, also known as a geosynthetic clay liner. The geosynthetic clay liner is usually made of sodium bentonite compacted between two thick pieces of geotextile. The next material surrounding the composite liner is a leak-detection system composed of another material such as gravel with an additional geomembrane or complex liner. [4] The geomembranes within the composite liner consist of high-density polyethylene, which minimizes flow and provides an effective barrier against inorganic contaminants. [5] It can be used as a substitute for sand or gravel and also has a very high transmissivity and low storage. The lower surface helps provide an effective leak test once correctly installed; it is also a low-permeability vapor and liquid barrier. Geosynthetic clay liners are factory-manufactured, and the purpose of making them from sodium bentonite is that they regulate the movement of liquids and gases within the waste. [6] Geocomposites, which combine the geomembrane and geosynthetic clay liner materials, also include a layer of bentonite between the layers of geotextile; however, airspace is allowed within the assembly. The system is then topped off with a final cover. [citation needed] The main role a composite liner performs in a municipal solid waste landfill is reducing the amount of leakage through small seep holes that sometimes form in the geomembrane part of the composite liner.
The protection layer helps prevent these holes from forming inside the geomembrane, which would otherwise allow waste to leak through the entire liner. It also relieves the pressure and stress that can cause cracking and holes to form in the membrane. [7] An effective liner in a landfill system should control the movement of water and protect the environment: it should regulate flow away from the waste area and contain the waste as it enters the landfill. Landfills are placed on slopes so that water streams downhill and, in an emergency, into the landfill itself. Water moves through the landfill and downward through the composite liner. The main purpose of all of this is to keep the movement lateral, which lessens the probability of slope failure and of the waste leaking down and freely contaminating whatever is in its path. The final cover functions to keep water away from the contaminated material and to control runoff from entering the system. This helps prevent plants and animals from being harmed by the waste-contaminated water, the leachate. Using gravity and pumps, the leachate is moved to a sump, where it is removed by a pump. When developing composite liners it is extremely important to take into account risk factors such as earthquakes and other slope-failure problems that could occur. [8] Composite liners are used in municipal solid waste (MSW) landfills to reduce water pollution. A composite liner is made of a geomembrane along with a geosynthetic clay liner. Composite-liner systems are better at reducing leachate migration into the subsoil than either a clay liner or a single geomembrane layer alone.
[9] The primary forms of mechanical degradation associated with geomembranes result from insufficient tensile strength, tear resistance, impact resistance, and puncture resistance, and from susceptibility to environmental stress cracking (ESC). The ideal method of assessing the amount of liner degradation would be to examine field samples over their service life. Because of the lengths of time required for field sampling tests, various laboratory-accelerated ageing tests have been developed to measure the important mechanical properties. [10] Tensile strength represents the ability of a geomembrane to resist tensile stress. Geomembranes are most commonly tested for tensile strength using one of three methods: the uniaxial tensile test described in ASTM D639-94, the wide-strip tensile test described in ASTM D4885-88, and the multiaxial tension test described in ASTM D5617-94. The difference between these three methods lies in the boundary conditions imposed on the test specimens. Uniaxial tests do not provide lateral restraint during testing and thus test the sample under uniaxial stress conditions. During the wide-strip test the sample is restrained laterally while the middle portion is unrestrained. The multiaxial tensile test provides a plane-stress boundary condition at the edges of the sample. [11] A typical range of tensile strengths in the machine direction is from 225 to 245 lb/in for 60-mil HDPE and 280 to 325 lb/in for 80-mil HDPE. [12] Tear resistance of a geomembrane becomes important when it is exposed to high winds or handling stress during installation. There are various ASTM methods for measuring tear resistance of geomembranes, with most common reports using ASTM D1004. Typical tear resistance values are 40 to 45 lb for 60-mil HDPE and 50 to 60 lb for 80-mil HDPE. [12] Impact resistance provides an assessment of the effects of impacts from falling objects, which can either tear or weaken the geomembrane.
As with the previous mechanical properties, there are various ASTM methods for assessment. Significantly higher impact resistances are realized when geotextiles are placed above or below the geomembrane . Thicker geomembranes also display higher impact resistances. [ 12 ] Puncture resistance of a geomembrane is important because of the heterogeneous material above and below a typical liner. Rough surfaces, such as stones or other sharp objects, may puncture a membrane if it does not have sufficient puncture resistance. Various methods beyond standard ASTM tests are available; one such method, the critical cone height test, measures the maximum height of a cone on which a compressed geomembrane, subjected to increasing pressure, does not fail. HDPE samples typically have a critical cone height of around 1 cm. [ 13 ] Environmental stress cracking is defined as external or internal cracking in a plastic induced by an applied tensile stress less than its short-term tensile strength. ESC is a fairly common observation in HDPE geomembranes and thus needs to be evaluated carefully. Proper polymeric properties, such as molecular weight, orientation, and distribution, aid in ESC resistance. ASTM D5397 [standard test method for evaluation of stress crack resistance of polyolefin geomembranes using notched constant tensile load (NCTL)] provides the necessary procedure for measuring the ESC resistance of most HDPE geomembranes. The current recommended transition time for an acceptable HDPE geomembrane is around 100 h. [ 12 ]
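The typical HDPE property values quoted above can be collected into a small lookup table, e.g. to screen a laboratory result against the cited ranges. This is a minimal illustrative sketch: the data structure and function name are hypothetical, and the ranges are simply the figures given in the text, not acceptance criteria from any ASTM standard.

```python
# Typical machine-direction tensile strengths (lb/in) and tear
# resistances (lb) for HDPE geomembranes, keyed by thickness in mils,
# as quoted in the text above. Names here are illustrative only.
TYPICAL_HDPE = {
    60: {"tensile_lb_per_in": (225, 245), "tear_lb": (40, 45)},
    80: {"tensile_lb_per_in": (280, 325), "tear_lb": (50, 60)},
}

def within_typical(thickness_mil, property_name, measured):
    """Return True if a measured value falls inside the typical range."""
    low, high = TYPICAL_HDPE[thickness_mil][property_name]
    return low <= measured <= high

# A 60-mil sample measured at 230 lb/in lies in the typical range:
print(within_typical(60, "tensile_lb_per_in", 230))  # True
# An 80-mil sample tearing at 48 lb falls below the typical range:
print(within_typical(80, "tear_lb", 48))             # False
```

A result outside the typical range does not by itself indicate failure; it only flags a sample for closer inspection against the relevant test standard.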
https://en.wikipedia.org/wiki/Landfill_liner
A landlord's gas safety certificate , also referred to as the landlord's gas safety record , is required by law to be held for all rental accommodation in the UK where there are gas appliances present. The requirement is enshrined in the Gas Safety (Installation and Use) Regulations 1998 . The law requires all gas appliances in a rented property to be checked annually, [ 1 ] with a gas safety record being completed and a copy provided to tenants. [ 2 ] [ 3 ] The definition of "rented" is broad, covering accommodation that is provided under a contractual arrangement for domestic staff as well as rented properties in general. [ 4 ] Gas safety records, sometimes referred to as a CP12 (from CORGI Proforma 12, when CORGI was the UK body for gas safety matters), [ 5 ] [ 6 ] are completed by engineers who must be registered with the Gas Safe Register scheme, which took over from the previous CORGI scheme in 2009. Gas safety checks should be carried out on any boilers, ovens, pipework, flues, chimneys and other fixtures and fittings that burn or exhaust gas. The checklist includes: Once the inspection has been carried out and the certificate is issued, [ 7 ] it will contain the following information:
https://en.wikipedia.org/wiki/Landlord's_gas_safety_certificate
Landolt–Börnstein is a collection of property data in materials science and the closely related fields of chemistry , physics and engineering published by Springer Nature . [ 1 ] [ 2 ] [ 3 ] On July 28, 1882, Dr. Hans Heinrich Landolt and Dr. Richard Börnstein , both professors at the " Landwirtschaftliche Hochschule " (Agricultural College) at Berlin, signed a contract with the publisher Ferdinand Springer for the publication of a collection of tables of physical-chemical data. The title of this book, "Physikalisch-chemische Tabellen" (Physical-Chemical Tables), published in 1883, was soon forgotten: owing to its success, the data collection has been known to scientists for more than a hundred years simply as "The Landolt-Börnstein". [ 4 ] 1,250 copies of the 1st Edition were printed and sold. The 2nd Edition was published in 1894, the 3rd in 1905, the 4th in 1912, and finally the 5th in 1923; supplementary volumes of the latter were printed until as late as 1936. Each new edition brought a large expansion in the number of volumes and authors, an updated structure, additional tables, and coverage of new areas of physics and chemistry. The 5th Edition of 1923 consisted of two volumes comprising a total of 1,695 pages, with sixty-three contributing authors. The growth already noticed in previous editions continued, and it was clear that "another edition in approximately 10 years" was no solution: a complete conceptual change of the Landolt–Börnstein had become necessary. In the meantime, supplementary volumes published at two-year intervals were to fill the gaps and add the latest data. The first supplementary volume of the 5th Edition was published in 1927, the second in 1931 and the third in 1935/36; the latter consisted of three sub-volumes with a total of 3,039 pages and contributions from 82 authors. The 6th Edition (1950) was published in line with the revised general framework.
The basic idea was to have four volumes instead of one, each covering different fields of the Landolt–Börnstein under different editors. Each volume was given a detailed table of contents. Two major restrictions were also imposed. The author of a contribution was asked to choose a "Bestwert" (optimum value) from the mass of statements of an experimental value in the publications of different authors, or to derive a "wahrscheinlichster Wert" (most probable value). The other important change was that diagrams became as important as tables, and text became necessary to explain the presented data. [ 5 ] The New Series comprises over 520 books published between 1961 and 2018 and includes more than 220,000 pages covering mechanical , optical , acoustical , thermal , spectroscopic , electrical and magnetic properties, among others. The New Series offers critically evaluated data by over 1,000 expert authors and editors in materials science. The Group 1 volumes on elementary particles, nuclei and atoms—volumes 21A, B1, B2, and C—have been updated (2020) and published open access in an independent handbook series, Particle Physics Reference Library , following a joint CERN –Springer initiative. [ 6 ] These volumes are Theory and experiments , [ 7 ] Detectors for particles and radiation , [ 8 ] and Accelerators and colliders . [ 9 ] Landolt–Börnstein books have gone through various digitization initiatives, from CD-ROM to FTP and PDF formats. The content of the Landolt–Börnstein books is now available on SpringerMaterials.
https://en.wikipedia.org/wiki/Landolt–Börnstein
Landomycins are angucycline antibiotics isolated from Streptomyces .
https://en.wikipedia.org/wiki/Landomycins
A landrace is a domesticated , locally adapted, [ 2 ] [ 3 ] [ 4 ] often traditional [ 5 ] variety of a species of animal or plant that has developed over time, through adaptation to its natural and cultural environment of agriculture and pastoralism , and due to isolation from other populations of the species. [ 2 ] Landraces are distinct from cultivars and from standard breeds . [ 6 ] A significant proportion of farmers around the world grow landrace crops , [ 4 ] and most plant landraces are associated with traditional agricultural systems. [ 5 ] Landraces of many crops have probably been grown for millennia. [ 7 ] Increasing reliance upon modern plant cultivars that are bred to be uniform has led to a reduction in biodiversity , [ 8 ] [ 9 ] [ 10 ] because most of the genetic diversity of domesticated plant species lies in landraces and other traditionally used varieties. [ 9 ] Some farmers using scientifically improved varieties also continue to raise landraces for agronomic reasons that include better adaptation to the local environment, lower fertilizer requirements, lower cost, and better disease resistance. Cultural and market preferences for landraces include culinary uses and product attributes such as texture, color, or ease of use. [ 8 ] [ 9 ] Plant landraces have been the subject of more academic research than animal landraces, and the majority of academic literature about landraces is focused on botany in agriculture , not animal husbandry . Animal landraces are distinct from ancestral wild species of modern animal stock, and are also distinct from separate species or subspecies derived from the same ancestor as modern domestic stock. Not all landraces derive from wild or ancient animal stock; in some cases, notably dogs and horses, domestic animals have escaped in sufficient numbers in an area to breed feral populations that form new landraces through evolutionary pressure .
There are differences between authoritative sources on the specific criteria which describe landraces, although there is broad consensus about the existence and utility of the classification. Individual criteria may be weighted differently depending on a given source's focus (e.g., governmental regulation , biological sciences , agribusiness , anthropology and culture, environmental conservation , pet-keeping and -breeding , etc.). Additionally, not all cultivars agreed to be landraces exhibit every characteristic of a landrace. [ 5 ] General features that characterize a landrace may include: Landrace literally means 'country-breed' (German: Landrasse ) [ 14 ] and close cognates of it are found in various Germanic languages . The first known reference to the role of landraces as genetic resources was made in 1890 at an agriculture and forestry congress in Vienna , Austria . The term was first defined by Kurt von Rümker in 1908, [ 7 ] and more clearly described in 1909 by U. J. Mansholt, who wrote that landraces have more stable characteristics and better resistance to adverse conditions, but have lower production capacity than cultivars, and are apt to change genetically when moved to another environment. [ 7 ] H. Kiessling added in 1912 that a landrace is a mixture of phenotypic forms despite relative outward uniformity, and a great adaptability to its natural and human environment. [ 7 ] The word landrace entered non-academic English in the early 1930s, by way of the Danish Landrace pig , a particular breed of lop-eared swine. [ 14 ] Many other languages do not use separate terms, like landrace and breed , but instead rely on extended description to convey such distinctions. Spanish is one such language. [ citation needed ] Geneticist D. Phillip Sponenberg described animal breeds within these classes: the landrace, the standardized breed, modern "type" breeds, industrial strains, and feral populations.
He describes landraces as an early stage of breed development, created by a combination of founder effect , isolation, and environmental pressures. Human selection for production goals is also typical of landraces. [ 15 ] As discussed in more detail in breed , that term itself has several definitions from various scientific and animal husbandry perspectives. Some of those senses of breed relate to the concept of landraces. A Food and Agriculture Organization of the United Nations (FAO) guideline defines landrace and landrace breed as "a breed that has largely developed through adaptation to the natural environment and traditional production system in which it has been raised." [ 6 ] This is in contrast to its definition of a standardized breed : "a breed of livestock that was developed according to a strict programme of genetic isolation and formal artificial selection to achieve a particular phenotype." In various domestic species (including pigs, goats, sheep and geese) some standardized breeds include "Landrace" in their names, but do not meet widely used definitions of landraces. For example, the British Landrace pig is a standardized breed, derived from earlier breeds with "Landrace" names. [ 16 ] Farmers' variety , usually applied to local cultivars, or seen as intermediate between a landrace and a cultivar, [ 17 ] may also include landraces when referring to plant varieties not subjected to formal breeding programs. [ 12 ] A landrace native to, or produced for a long time within the agricultural system in which it is found is referred to as an autochthonous landrace , while a more recently introduced one is termed an allochthonous landrace . [ 7 ] [ 5 ] [ 18 ] Within academic agronomy , the term autochthonous landrace is sometimes used with a more technical, productivity-related definition, synthesized by A. C. 
Zeven from previous definitions beginning with Mansholt's: "an autochthonous landrace is a variety with a high capacity to tolerate biotic and abiotic stress, resulting in a high yield stability and an intermediate yield level under a low input agricultural system." [ 7 ] The terms autochthonous and allochthonous are most often applied to plants, with animals more often being referred to as indigenous or native . Examples of references in sources to long-term local landraces of livestock include constructions such as "indigenous landraces of sheep", [ 19 ] and "Leicester Longwool sheep were bred to the native landraces of the region". [ 20 ] Some usage of autochthonous does occur in reference to livestock, e.g. "autochthonous races of cattle such as the Asturian mountain cattle – Ratina and Casina – and Tudanca cattle." [ 21 ] A significant proportion of farmers around the world grow landrace crops . [ 4 ] However, as industrialized agriculture spreads, cultivars , which are selectively bred for high yield, rapid growth, disease and drought resistance, and other commercial production values, are supplanting landraces, putting more and more of them at risk of extinction . [ citation needed ] In 1927 at the International Agricultural Congress, organized by the predecessor of the FAO, an extensive discussion was held on the need to conserve landraces. A recommendation that members organize nation-by-nation landrace conservation did not succeed in leading to widespread conservation efforts. [ 7 ] Landraces are often free from many intellectual property and other regulatory encumbrances. 
However, in some jurisdictions, a focus on their production may result in missing out on some benefits afforded to producers of genetically selected and homogeneous organisms, including breeders' rights legislation, easier availability of loans and other business services, and even the right to share seed or stock with others, depending on how favorable the laws in the area are to high-yield agribusiness interests. [ 9 ] As Regine Andersen of the Fridtjof Nansen Institute (Norway) and the Farmers' Rights Project puts it, "Agricultural biodiversity is being eroded. This trend is putting at risk the ability of future generations to feed themselves. In order to reverse the trend, new policies must be implemented worldwide. The irony of the matter is that the poorest farmers are the stewards of genetic diversity." [ 9 ] Protecting farmer interests and protecting biodiversity are at the heart of the International Treaty on Plant Genetic Resources for Food and Agriculture (the "Plant Treaty" for short), under the Food and Agriculture Organization of the United Nations (FAO), though its concerns are not exclusively limited to landraces. [ 9 ] Landraces played a basic role in the development of the standardized breeds but are today threatened by the market success of those breeds. In developing countries, landraces still play an important role, especially in traditional production systems. [ 6 ] Specimens within an animal landrace tend to be genetically similar, though more diverse than members of a standardized or formal breed. [ 2 ] Two approaches have been used to conserve plant landraces: [ 10 ] [ 22 ] As the amount of agricultural land dedicated to growing landrace crops declines, such as in the example of wheat landraces in the Fertile Crescent , landraces can become extinct in cultivation. Therefore, ex situ landrace conservation practices are considered a way to avoid losing the genetic diversity completely.
Research published in 2020 suggested that existing ways of cataloging diversity within ex situ genebanks fall short of cataloging the appropriate information for landrace crops. [ 22 ] An in situ conservation effort to save the Berrettina di Lungavilla squash landrace made use of participatory plant breeding practices in order to incorporate the local community into the work. [ 23 ] Preservation efforts for cereal strains are ongoing, both in situ and in online-searchable germplasm collections ( seed banks ), coordinated by Bioversity International and the National Institute of Agricultural Botany (NIAB, UK). [ 4 ] However, more may need to be done, because plant genetic variety, the source of crop health and seed quality, depends on a diversity of landraces and other traditionally used varieties. [ 9 ] Efforts (as of 2008) were mostly focused on Iberia , the Balkans , and European Russia , and dominated by species from mountainous areas. [ 4 ] Despite their incompleteness, these efforts have been described as "crucial in preventing the extinction of many of these local ecotypes". [ 4 ] An agricultural study published in 2008 showed that landrace cereal crops began to decline in Europe in the 19th century such that cereal landraces "have largely fallen out of use" in Europe. [ 4 ] Landrace cultivation in central and northwest Europe was almost eradicated by the early 20th century, due to economic pressure to grow improved, modern cultivars. [ 24 ] While many in the region are already extinct, [ 4 ] some have survived by being passed from generation to generation, [ 4 ] and have also been revived by enthusiasts outside Europe to preserve European agriculture and food culture elsewhere. [ 4 ] These survivals are usually for specific uses, such as thatch , and traditional European cuisine and craft beer brewing.
[ 4 ] The label landrace includes regional cultigens that are genetically heterogeneous , but with enough characteristics in common to permit their recognition as a group. These characteristics are used by farmers to manage diversity and purity within landraces. [ 25 ] In some cultures, the development of new landraces is typically limited to members of specific social groups, such as women or shamans. Maintaining existing landraces, like developing new landraces, requires that farmers be able to identify crop-specific characteristics and that those characteristics are passed on to following generations. [ 25 ] Over time, the process of identifying the distinguishing characteristics or features of a new landrace is reinforced by cultivation processes; for example, descendants of a plant that is notably drought tolerant may become iteratively more so through selective breeding as farmers regard it as better for dry areas and prioritize planting it in those locations. This is one way in which farming systems can develop a portfolio of landraces over time that have specific ecological niches and uses. [ 25 ] Conversely, modern cultivars can also be developed into a landrace over time when farmers save seed and practice selective breeding . [ 12 ] Although landraces are often discussed once they have become endemic to a particular geographical region, landraces have always been moved over long and short distances. Some landraces can adapt to various environments, while others only thrive within specific conditions. Self-fertilizing and vegetatively propagated species adapt by changing the frequencies of phenotypes. Outbreeding crops absorb new genotypes through intentional and unintentional hybridization, or through mutation. [ 7 ] A clear example of a plant landrace is the diverse adaptation of wheat to differing artificial selection pressures.
[ 26 ] Members of a landrace variety, selected for uniformity with regard to a unique feature over a period of time, can be developed into a farmers' variety or cultivar . [ 17 ] Traits from landraces are valuable for incorporation into elite lines . [ 27 ] Crop disease resistance genes from landraces can provide continually needed resistances for more widely used, modern varieties. [ 27 ] Some standardized animal breeds originate from attempts to make landraces more consistent through selective breeding , and a landrace may become a more formal breed with the creation of a breed registry or publication of a breed standard . In such a case, one may think of the landrace as a "stage" in breed development. However, in other cases, formalizing a landrace may result in the genetic resource of a landrace being lost through crossbreeding . [ 2 ] While many landrace animals are associated with farming, other domestic animals have been put to use as modes of transportation, as companion animals , for sporting purposes, and for other non-farming uses, so their geographic distribution may differ. For example, horse landraces are less common because human use of them for transport has meant that they have moved with people more commonly and constantly than most other domestic animals, reducing the incidence of populations locally genetically isolated for extensive periods of time. [ 2 ] Many standardized breeds have rather recently (within a century or less) been derived from landraces. Examples, often called natural breeds , include Arabian Mau , Egyptian Mau , Korat , Kurilian Bobtail , Maine Coon , Manx , Norwegian Forest Cat , Siberian , and Siamese . In some cases, such as the Turkish Angora and Turkish Van breeds and their possible derivation from the Van cat landrace, the relationships are not entirely clear. Dog landraces and the selectively bred dog breeds that follow breed standards vary widely depending on their origins and purpose.
[ 50 ] Landraces are distinguished from dog breeds which have breed standards, breed clubs and registries. [ 51 ] Landrace dogs have more variety in their appearance than do standardized dog breeds. [ 51 ] An example of a dog landrace with a related standardized breed with a similar name is the collie . The Scotch Collie is a landrace, while the Rough Collie and the Border Collie are standardized breeds. They can be very different in appearance, though the Rough Collie in particular was developed from the Scotch Collie by inbreeding to fix certain highly desired traits. In contrast to the landrace, in the various standardized Collie breeds, purebred individuals closely match a breed-standard appearance but might have lost other useful characteristics and have developed undesirable traits linked to inbreeding. [ 52 ] The ancient landrace dogs of the Fertile Crescent that led to the Saluki breed excel in running down game across open tracts of hot desert, but conformation-bred individuals of the breed are not necessarily able to chase and catch desert hares . [ citation needed ] Some standardized breeds that are derived from landraces include the Dutch Landrace , Swedish Landrace and Finnish Landrace goats . The Danish Landrace is a modern mix of three different breeds, one of which was a "Landrace"-named breed. The wild progenitor of the domestic horse is extinct. [ 2 ] It is rare for landraces among domestic horses to remain isolated, due to human use of horses for transportation, thus causing horses to move from one local population to another. The heavy 'draft' type of domestic horse, developed in Europe, has differentiated into many separate landraces or breeds. [ citation needed ] Examples of horse landraces also include insular populations in Greece and Indonesia, and, on a broader scale, New World populations derived from the founder stock of Colonial Spanish horse . [ 2 ] The Yakutian and Mongolian Horses of Asia have "unimproved" characteristics.
[ 54 ] The standardized swine breeds named "Landrace" are often not actually landraces or derived from landraces. The Danish Landrace pig breed, pedigreed in 1896 from an actual local landrace, is the principal ancestor of the American Landrace (1930s). In this way, the Swedish Landrace is derived from the Danish and from other Scandinavian breeds, as is the British Landrace breed. Many standardized goose breeds named "Landrace", e.g. the Twente Landrace goose , are not actually true landraces, but may be derived from them.
https://en.wikipedia.org/wiki/Landrace
Landscape architecture is the design of outdoor areas, landmarks , and structures to achieve environmental, social-behavioural, or aesthetic outcomes. [ 2 ] It involves the systematic design and general engineering of various structures for construction and human use, investigation of existing social, ecological, and soil conditions and processes in the landscape, and the design of other interventions that will produce desired outcomes. The scope of the profession is broad and can be subdivided into several sub-categories including professional or licensed landscape architects who are regulated by governmental agencies and possess the expertise to design a wide range of structures and landforms for human use; landscape design which is not a licensed profession; site planning ; stormwater management ; erosion control ; environmental restoration ; public realm, parks, recreation and urban planning ; visual resource management; green infrastructure planning and provision; and private estate and residence landscape master planning and design; all at varying scales of design, planning and management. A practitioner in the profession of landscape architecture may be called a landscape architect ; however, in jurisdictions where professional licenses are required it is often only those who possess a landscape architect license who can be called a landscape architect. Modern landscape architecture is a multi-disciplinary field, incorporating aspects of urban design , architecture , geography , ecology , civil engineering , structural engineering , horticulture , environmental psychology , industrial design , soil sciences , botany , and fine arts . 
The activities of a landscape architect can range from the creation of public parks and parkways to site planning for campuses and corporate office parks; from the design of residential estates to the design of civil infrastructure ; and from the management of large wilderness areas to reclamation of degraded landscapes such as mines or landfills . Landscape architects work on structures and external spaces in the landscape aspect of the design – large or small, urban , suburban and rural , and with "hard" (built) and "soft" (planted) materials, while integrating ecological sustainability . The most valuable contribution can be made at the first stage of a project to generate ideas with technical understanding and creative flair for the design, organization, and use of spaces. The landscape architect can conceive the overall concept and prepare the master plan, from which detailed design drawings and technical specifications are prepared. They can also review proposals to authorize and supervise contracts for the construction work. Other skills include preparing design impact assessments, conducting environmental assessments and audits, and serving as an expert witness at inquiries on land use issues. The majority of their time will most likely be spent inside an office building designing and preparing models for clients. [ citation needed ] For the period before 1800, the history of landscape gardening (later called landscape architecture) is largely that of master planning and garden design for manor houses , palaces and royal properties. An example is the extensive work by André Le Nôtre for King Louis XIV of France on the Gardens of Versailles . The first person to write of making a landscape was Joseph Addison in 1712. The term landscape architecture was invented by Gilbert Laing Meason in 1828, and John Claudius Loudon (1783–1843) was instrumental in the adoption of the term landscape architecture by the modern profession. 
He took up the term from Meason and gave it publicity in his Encyclopedias and in his 1840 book on the Landscape Gardening and Landscape Architecture of the Late Humphry Repton . [ 6 ] John Claudius Loudon was an established and influential horticultural journalist and Scottish landscape architect whose writings were instrumental in shaping Victorian taste in gardens, public parks, and architecture . [ 7 ] In the Landscape Gardening and Landscape Architecture of the Late Humphry Repton, Loudon describes two distinct styles of landscape gardening existing at the beginning of the 19th century: geometric and natural. [ 6 ] Loudon wrote that each style reflected a different stage of society. The geometric style was "most striking and pleasing," displaying wealth and taste in an "early state of society" and in "countries where the general scenery was wild, irregular, and natural, and man, comparatively, uncultivated and unrefined." [ 6 ] The natural style was used in "modern times" and in countries where "society is in a higher state of cultivation," displaying wealth and taste through the sacrifice of profitable lands to make room for such designs. [ 6 ] The prominent English landscape designer Humphry Repton (1752–1818) echoed similar ideas in his work and designs. In his writings on the use of delineated spaces (e.g. courtyards , terrace walls , fences), Repton states that while the motive for defense no longer exists, the features are still useful in separating "the gardens, which belong to man, and the forest, or desert, which belongs to the wild denizens." [ 6 ] Repton refers to Indigenous peoples as "uncivilized human beings, against whom some decided line of defense was absolutely necessary." [ 6 ] The practice of landscape architecture spread from the Old to the New World.
The term "landscape architect" was used as a professional title by Frederick Law Olmsted in the United States in 1863 [ citation needed ] and Andrew Jackson Downing , another early American landscape designer , was editor of The Horticulturist magazine (1846–52). In 1841 his first book, A Treatise on the Theory and Practice of Landscape Gardening, Adapted to North America , was published to great success; it was the first book of its kind published in the United States. [ 8 ] During the latter 19th century, the term landscape architect began to be used by professional landscape designers, and was firmly established after Frederick Law Olmsted Jr. and Beatrix Jones (later Farrand) with others founded the American Society of Landscape Architects (ASLA) in 1899. The International Federation of Landscape Architects (IFLA) was founded at Cambridge , England , in 1948 with Sir Geoffrey Jellicoe as its first president, representing 15 countries from Europe and North America. Later, in 1978, IFLA's headquarters were established in Versailles . [ 9 ] [ 10 ] [ 11 ] The variety of the professional tasks that landscape architects collaborate on is very broad, but some examples of project types include: [ 12 ] Landscape managers use their knowledge of landscape processes to advise on the long-term care and development of the landscape. They often work in forestry , nature conservation and agriculture . [ citation needed ] Landscape scientists have specialist skills such as soil science , hydrology , geomorphology or botany that they relate to the practical problems of landscape work. Their projects can range from site surveys to the ecological assessment of broad areas for planning or management purposes. They may also report on the impact of development or the importance of particular species in a given area. [ citation needed ] Landscape planners are concerned with landscape planning for the location, scenic, ecological and recreational aspects of urban, rural, and coastal land use.
Their work is embodied in written statements of policy and strategy, and their remit includes master planning for new developments, landscape evaluations and assessments, and preparing countryside management or policy plans. Some may also apply an additional specialism such as landscape archaeology or law to the process of landscape planning. [ citation needed ] Green roof (or more specifically, vegetative roof) designers design extensive and intensive roof gardens for stormwater management, evapo-transpirative cooling, sustainable architecture , aesthetics, and habitat creation. [ 13 ] Through the 19th century, urban planning became a focal point and central issue in cities. The combination of the tradition of landscape gardening and the emerging field of urban planning offered landscape architecture an opportunity to serve these needs. [ 14 ] In the second half of the century, Frederick Law Olmsted completed a series of parks that continue to have a significant influence on the practices of landscape architecture today. Among these were Central Park in New York City , Prospect Park in Brooklyn, New York and Boston's Emerald Necklace park system. Jens Jensen designed sophisticated and naturalistic urban and regional parks for Chicago , Illinois , and private estates for the Ford family including Fair Lane and Gaukler Point . One of the original eleven founding members of the American Society of Landscape Architects (ASLA), and the only woman, was Beatrix Farrand . She was design consultant for over a dozen universities including: Princeton in Princeton, New Jersey ; Yale in New Haven, Connecticut ; and the Arnold Arboretum for Harvard in Boston , Massachusetts . Her numerous private estate projects include the landmark Dumbarton Oaks in the Georgetown neighborhood of Washington, D.C. [ 15 ] Since that time, other architects – most notably Ruth Havey and Alden Hopkins – changed certain elements of the Farrand design. 
[ citation needed ] Since this period urban planning has developed into a separate independent profession that has incorporated important contributions from other fields such as civil engineering , architecture and public administration . Urban planners are qualified to perform tasks independent of landscape architects, and in general, the curricula of landscape architecture programs do not prepare students to become urban planners. [ 16 ] Landscape architecture continues to develop as a design discipline and to respond to the various movements in architecture and design throughout the 20th and 21st centuries. Thomas Church was a pioneering mid-century landscape architect known for shaping modern American garden design. [ 17 ] Roberto Burle Marx in Brazil combined the International style and native Brazilian plants and culture for a new aesthetic. Innovation continues today solving challenging problems with contemporary design solutions for master planning , landscapes, and gardens . [ citation needed ] Ian McHarg was known for introducing environmental concerns in landscape architecture. [ 18 ] [ 19 ] He popularized a system of analyzing the layers of a site in order to compile a complete understanding of the qualitative attributes of a place. This system became the foundation of today's Geographic Information Systems (GIS) . McHarg would give every qualitative aspect of the site a layer, such as the history, hydrology, topography, vegetation, etc. GIS software is ubiquitously used in the landscape architecture profession today to analyze materials in and on the Earth's surface and is similarly used by urban planners , geographers , forestry and natural resources professionals, etc. [ citation needed ] European nations enabled the widespread circulation of urban planning strategies by transferring landscaping ideas and practices to overseas colonies. 
The green belt was a popular landscape practice exported by Britain onto colonial territories such as Haifa (1918-1948). [ 20 ] Spatial mechanisms like the green belt, implemented through the Haifa Bay Plan and the British "Grand Model," were used to enforce political control and civic order and extend western ideas of progress and development. [ 20 ] The Greater London Regional Planning Committee accepted the green belt concept which formed the basis of the 1938 Green Belt Act . The planning prototype demarcated open spaces, distinguished between city and countryside, limited urban growth , and created zoning divisions . [ 20 ] It was used extensively in the British colonies to facilitate British rule through the organized division of landscape and populations. [ 20 ] Indigenous land management practices create constantly changing landscapes through the use of vegetation and natural systems, contrasting with western epistemologies of the discipline that separate ornament from function. [ 21 ] The discipline of landscape architecture favors western designs made from structured materials and geometric forms. [ 21 ] Landscape architecture history books tend to include projects that contain constructed architectural elements that persist over time, excluding many Indigenous landscape-based designs. [ 21 ] Landscape architecture textbooks often place Indigenous peoples as a prefix to the official start of the discipline. The widely read landscape history text The Landscape of Man (1964) offers a global history of the designed landscape from past to present, featuring African and other Indigenous peoples in its discussions of Paleolithic man between 500,000 and 8,000 BCE in relation to human migration . [ 21 ] Indigenous land-management practices are described as archaeological rather than a part of contemporary practice. Gardens in Time (1980) also places Indigenous practice as prehistory at the beginning of the landscape architecture timeline. 
Authors John and Ray Oldham describe Aborigines of Australia as “survivors of an ancient way of life” who provide an opportunity to examine western Australia as a “meeting place of a prehistoric man.” [ 21 ] In the late 18th century, the landscapes created by aboriginal land and fire management practices appealed to English settlers in Australia . [ 21 ] Journals from the period of early white settlement note the landscape resembling parks and popular designs in English landscape gardens of the same period. [ 21 ] In England, these designs were considered sophisticated and celebrated for the intentional sacrifice of usable land. In Australia, the park-like condition was used to justify British control, citing its emptiness and lack of productive use as a basis for the dispossession of Aboriginal people. [ 21 ] Landscape architects are generally required to have university or graduate education from an accredited landscape architecture degree program, which can vary in length and degree title. They learn how to create projects from scratch, such as residential or commercial planting and the design of outdoor living spaces. [ 22 ] They also learn to collaborate with others to achieve better outcomes for clients, to complete projects on schedule, to interact with clients, and to explain a design from first concept through final presentation. [ 23 ] In many countries, a professional institute , comprising members of the professional community, exists in order to protect the standing of the profession and promote its interests, and sometimes also regulate the practice of landscape architecture. The standard and strength of legal regulations governing landscape architecture practice varies from nation to nation, with some requiring licensure in order to practice; and some having little or no regulation. 
In Europe , North America , parts of South America , Australia , India , and New Zealand , landscape architecture is a regulated profession. [ 24 ] In Argentina, since 1889, when the French architect and urbanist landscaper Carlos Thays was engaged to recreate the National Capital's parks and public gardens, an apprenticeship and training program in landscaping was consolidated that eventually became a regulated profession. Currently the leading academic institution is the University of Buenos Aires, whose "UBA Facultad de Arquitectura, Diseño y Urbanismo" (Faculty of Architecture, Design and Urbanism) offers a Bachelor's degree in Urban Landscaping Design and Planning; the profession itself is regulated by the National Ministry of Urban Planning of Argentina and the Institute of the Buenos Aires Botanical Garden . [ citation needed ] The Australian Institute of Landscape Architects (AILA) provides accreditation of university degrees and non-statutory professional registration for landscape architects. Once recognized by AILA, landscape architects use the title 'Registered Landscape Architect' across the six states and territories within Australia. [ citation needed ] AILA's system of professional recognition is a national system overseen by the AILA National Office in Canberra. To apply for AILA Registration, an applicant usually needs to satisfy a number of pre-requisites, including a university qualification, a minimum number of years of practice and a record of professional experience. [ 25 ] Landscape architecture within Australia covers a broad spectrum of planning, design, management, and research, from specialist design services for government and private sector developments through to specialist professional advice as an expert witness. [ citation needed ] In Canada, landscape architecture, like law and medicine, is a self-regulating profession pursuant to provincial statute. 
For example, Ontario's profession is governed by the Ontario Association of Landscape Architects pursuant to the Ontario Association of Landscape Architects Act . Landscape architects in Ontario, British Columbia, and Alberta must complete the specified components of the L.A.R.E. (Landscape Architecture Registration Examination) as a prerequisite to full professional standing. Provincial regulatory bodies are members of a national organization, the Canadian Society of Landscape Architects / L'Association des Architectes Paysagistes du Canada (CSLA-AAPC), and individual membership in the CSLA-AAPC is obtained through joining one of the provincial or territorial components. [ 26 ] ISLA (Indonesia Society of Landscape Architects) is the Indonesian society for professional landscape architects, formed on 4 February 1978, and is a member of IFLA APR and IFLA World. Its main aim is to raise the standing of its professional members by increasing their role in community service and in national and international development. The management of IALI consists of National Administrators who are supported by 20 Regional Administrators (provincial level) and 3 Branch Managers at city level throughout Indonesia. [ citation needed ] Landscape architecture education in Indonesia is offered at 18 universities, which produce D3 (diploma), Bachelor's and Master's graduates. These programs are organized in the Association of Indonesian Landscape Architecture Education. [ citation needed ] AIAPP (Associazione Italiana Architettura del Paesaggio) is the Italian association of professional landscape architects, formed in 1950, and is a member of IFLA and IFLA Europe (formerly known as EFLA). AIAPP is in the process of contesting a new law which has given the Architects' Association the new title of Architects, Landscape Architects, Planners and Conservationists, whether or not its members have had any training or experience in any of these fields other than architecture. 
In Italy, there are several different professions involved in landscape architecture: The New Zealand Institute of Landscape Architects (NZILA) is the professional body for Landscape Architects in NZ. [ 27 ] In April 2013, NZILA jointly with AILA hosted the 50th International Federation of Landscape Architects (IFLA) World Congress in Auckland, New Zealand. The World Congress is an international conference where Landscape Architects from all around the globe meet to share ideas around a particular topic. [ citation needed ] Within NZ, members of NZILA, when they achieve their professional standing, can use the title Registered Landscape Architect NZILA. [ citation needed ] NZILA provides an education policy and an accreditation process to review education programme providers; currently there are three accredited undergraduate Landscape Architecture programmes in New Zealand. Lincoln University also has an accredited masters programme in landscape architecture. [ citation needed ] Landscape architecture in Norway was established in 1919 at the Norwegian University of Life Sciences (NMBU) at Ås. The Norwegian School of Landscape Architecture at the Faculty of Landscape and Society is responsible for Europe's oldest landscape architecture education at an academic level. The department's areas include the design of cities and places, garden art history, landscape engineering, greenery, zone planning, site development, place making and place keeping. [ citation needed ] In May 1962, Joane Pim , Ann Sutton, Peter Leutscher and Roelf Botha (considered the forefathers of the profession in South Africa) established the Institute for Landscape Architects, now known as the Institute for Landscape Architecture in South Africa (ILASA). [ 28 ] ILASA is a voluntary organisation registered with the South African Council for the Landscape Architectural Profession (SACLAP). [ 29 ] It consists of three regional bodies, namely Gauteng, KwaZulu-Natal and the Western Cape. 
ILASA's mission is to advance the profession of landscape architecture and uphold high standards of professional service to its members, and to represent the profession of landscape architecture in any matter which may affect the interests of the members of the institute. ILASA holds the country's membership with The International Federation of Landscape Architects (IFLA). [ 30 ] In South Africa, the profession is regulated by SACLAP, established as a statutory council in terms of Section 2 of the South African Council for the Landscape Architectural Profession Act – Act 45 of 2000. The Council evolved out of the Board of Control for Landscape Architects (BOCLASA), which functioned under the Council of Architects in terms of The Architectural Act, Act 73 of 1970. SACLAP's mission is to establish, direct, sustain and ensure a high level of professional responsibilities and ethical conduct within the art and science of landscape architecture with honesty, dignity and integrity in the broad interest of public health, safety and welfare of the community. [ citation needed ] After completion of an accredited under-graduate and/or post-graduate qualification in landscape architecture at either the University of Cape Town or the University of Pretoria , or landscape technology at the Cape Peninsula University of Technology , professional registration is attained via a mandatory mentored candidacy period (minimum of two years) and sitting of the professional registration exam. After successfully completing the exam, the individual is entitled to the status of Professional Landscape Architect or Professional Landscape Technologist. [ citation needed ] Architects Sweden, Sveriges Arkitekter, is the collective trade union and professional organisation for all architects, including landscape architects, in Sweden. The professional body is a member of IFLA ( International Federation of Landscape Architects ) as well as IFLA Europe. 
As a landscape architect, anyone can become a member of Architects Sweden if they have a national or international university degree that is approved by the association. If the degree is from within the European Union, Architects Sweden approves the landscape architecture programmes listed by IFLA Europe . For degrees from outside the EU, the association makes an assessment based on a statement from the Swedish Council for Higher Education (UHR). The UK's professional body is the Landscape Institute (LI). It is a chartered body that accredits landscape professionals and university courses. At present there are fifteen accredited programmes in the UK. Membership of the LI is available to students, academics and professionals, and there are over 3,000 professionally qualified members. [ citation needed ] The Institute provides services to assist members including support and promotion of the work of landscape architects; information and guidance to the public and industry about the specific expertise offered by those in the profession; and training and educational advice to students and professionals looking to build upon their experience. [ citation needed ] In 2008, the LI launched a major recruitment drive entitled "I want to be a Landscape Architect" to encourage the study of Landscape Architecture. The campaign aimed to raise the profile of landscape architecture and highlight its valuable role in building sustainable communities and fighting climate change . [ 31 ] As of July 2018, the "I want to be a Landscape Architect" initiative was replaced by a brand new careers campaign entitled #ChooseLandscape , which aims to raise awareness of landscape as a profession; improve and increase access to landscape education; and inspire young people to choose landscape as a career. [ 32 ] This new campaign includes other landscape-related professions such as landscape management, landscape planning, landscape science and urban design. 
[ 33 ] In the United States, landscape architecture is regulated by individual state governments. For a landscape architect, obtaining licensure requires advanced education and work experience, plus passage of the national examination called the Landscape Architect Registration Examination (L.A.R.E.). Licensing is overseen at the national level by the Council of Landscape Architectural Registration Boards (CLARB). Several states require passage of a state exam as well. Landscape architecture has been identified as an above-average growth profession by the US Bureau of Labor Statistics and was listed in U.S. News & World Report's list of Best Jobs to Have in 2006, 2007, 2008, 2009 and 2010. [ 34 ] The national trade association for United States landscape architects is the American Society of Landscape Architects . Frederick Law Olmsted , who designed Central Park in New York City, is known as the "father of American landscape architecture". [ 35 ]
https://en.wikipedia.org/wiki/Landscape_architecture
Landscape ecology is the science of studying and improving relationships between ecological processes in the environment and particular ecosystems. This is done at a variety of landscape scales, spatial patterns of development, and organizational levels of research and policy. [ 1 ] [ 2 ] [ 3 ] Landscape ecology can be described as the science of "landscape diversity" as the synergetic result of biodiversity and geodiversity . [ 4 ] As a highly interdisciplinary field in systems science , landscape ecology integrates biophysical and analytical approaches with humanistic and holistic perspectives across the natural sciences and social sciences . Landscapes are spatially heterogeneous geographic areas characterized by diverse interacting patches or ecosystems, ranging from relatively natural terrestrial and aquatic systems such as forests, grasslands, and lakes to human-dominated environments including agricultural and urban settings. [ 2 ] [ 5 ] [ 6 ] The most salient characteristics of landscape ecology are its emphasis on the relationship among pattern, process and scales , and its focus on broad-scale ecological and environmental issues. These necessitate the coupling between biophysical and socioeconomic sciences. Key research topics in landscape ecology include ecological flows in landscape mosaics, land use and land cover change, scaling, relating landscape pattern analysis with ecological processes, and landscape conservation and sustainability . [ 7 ] Landscape ecology also studies the role of human impacts on landscape diversity in the development and spreading of new human pathogens that could trigger epidemics . [ 8 ] [ 9 ] The German term Landschaftsökologie – thus landscape ecology – was coined by German geographer Carl Troll in 1939. 
[ 10 ] He developed this terminology and many early concepts of landscape ecology as part of his early work, which consisted of applying aerial photograph interpretation to studies of interactions between environment and vegetation. Heterogeneity is the measure of how parts of a landscape differ from one another. Landscape ecology looks at how this spatial structure affects organism abundance at the landscape level, as well as the behavior and functioning of the landscape as a whole. This includes studying the influence of pattern, or the internal order of a landscape, on process, or the continuous operation of functions of organisms. [ 11 ] Landscape ecology also includes geomorphology as applied to the design and architecture of landscapes. [ 12 ] Geomorphology is the study of how geological formations are responsible for the structure of a landscape. One central landscape ecology theory originated from MacArthur & Wilson's The Theory of Island Biogeography . This work considered the biodiversity on islands as the result of competing forces of colonization from a mainland stock and stochastic extinction . The concepts of island biogeography were generalized from physical islands to abstract patches of habitat by Levins' metapopulation model (which can be applied e.g. to forest islands in the agricultural landscape [ 13 ] ). This generalization spurred the growth of landscape ecology by providing conservation biologists a new tool to assess how habitat fragmentation affects population viability. Recent growth of landscape ecology owes much to the development of geographic information systems (GIS) [ 14 ] and the availability of large-extent habitat data (e.g. remotely sensed datasets). Landscape ecology developed in Europe from historical planning on human-dominated landscapes. Concepts from general ecology theory were integrated in North America . [ when? 
] While general ecology theory and its sub-disciplines focused on the study of more homogenous, discrete community units organized in a hierarchical structure (typically as ecosystems , populations , species , and communities), landscape ecology built upon heterogeneity in space and time. It frequently included human-caused landscape changes in theory and application of concepts. [ 15 ] By 1980, landscape ecology was a discrete, established discipline. It was marked by the organization of the International Association for Landscape Ecology (IALE) in 1982. Landmark book publications defined the scope and goals of the discipline, including Naveh and Lieberman [ 16 ] and Forman and Godron. [ 17 ] [ 18 ] Forman [ 6 ] wrote that although study of "the ecology of spatial configuration at the human scale" was barely a decade old, there was strong potential for theory development and application of the conceptual framework. Today, theory and application of landscape ecology continues to develop through a need for innovative applications in a changing landscape and environment. Landscape ecology relies on advanced technologies such as remote sensing, GIS, and models . There has been associated development of powerful quantitative methods to examine the interactions of patterns and processes. [ 5 ] An example would be determining the amount of carbon present in the soil based on landform over a landscape, derived from GIS maps, vegetation types, and rainfall data for a region. Remote sensing work has been used to extend landscape ecology to the field of predictive vegetation mapping, for instance by Janet Franklin . 
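The kind of layer-overlay calculation described above (estimating soil carbon from landform, vegetation, and rainfall layers) can be sketched with small rasters. Every layer value, vegetation-class baseline, and coefficient below is invented purely for illustration; a real analysis would use calibrated GIS data and a fitted model.

```python
import numpy as np

# Hypothetical 4x4 raster layers for a tiny landscape (illustrative values only).
# slope: terrain steepness in degrees; rainfall: mm/yr; veg: 0=bare, 1=grass, 2=forest
slope = np.array([[ 2,  5, 12, 20],
                  [ 3,  8, 15, 25],
                  [ 1,  4, 10, 18],
                  [ 2,  6, 11, 22]], dtype=float)
rainfall = np.full((4, 4), 800.0)        # uniform rainfall layer
veg = np.array([[2, 2, 1, 0],
                [2, 1, 1, 0],
                [2, 2, 1, 1],
                [1, 1, 0, 0]])

# Invented per-class carbon baselines (kg C / m^2); a real model would be
# calibrated against field samples rather than assumed.
base_c = np.array([0.5, 2.0, 6.0])[veg]  # look up the baseline by vegetation class
carbon = base_c * (1 + rainfall / 4000.0) * np.clip(1 - slope / 45.0, 0.2, 1.0)

print(carbon.round(2))
print("landscape mean C:", round(float(carbon.mean()), 2))
```

The core idea is simply raster algebra: each thematic layer is an aligned grid, and a per-cell formula combines them into a derived surface, which is how GIS overlay analysis is typically expressed.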
Nowadays, at least six different conceptions of landscape ecology can be identified: one group tending toward the more disciplinary concept of ecology (subdiscipline of biology ; in conceptions 2, 3, and 4) and another group—characterized by the interdisciplinary study of relations between human societies and their environment—inclined toward the integrated view of geography (in conceptions 1, 5, and 6): [ 19 ] Some research programmes of landscape ecology theory, namely those standing in the European tradition, may be slightly outside of the "classical and preferred domain of scientific disciplines" because of the large, heterogeneous areas of study. However, general ecology theory is central to landscape ecology theory in many aspects. Landscape ecology consists of four main principles: the development and dynamics of spatial heterogeneity, interactions and exchanges across heterogeneous landscapes, influences of spatial heterogeneity on biotic and abiotic processes, and the management of spatial heterogeneity. The main difference from traditional ecological studies, which frequently assume that systems are spatially homogenous, is the consideration of spatial patterns . [ 33 ] Landscape ecology not only created new terms, but also incorporated existing ecological terms in new ways. Many of the terms used in landscape ecology are as interconnected and interrelated as the discipline itself. Certainly, 'landscape' is a central concept in landscape ecology. It is, however, defined in quite different ways. For example: [ 19 ] Carl Troll conceives of landscape not as a mental construct but as an objectively given 'organic entity', a harmonic individuum of space . [ 34 ] Ernst Neef [ 20 ] [ 21 ] defines landscapes as sections within the uninterrupted earth-wide interconnection of geofactors which are defined as such on the basis of their uniformity in terms of a specific land use, and are thus defined in an anthropocentric and relativistic way. 
According to Richard Forman and Michel Godron , [ 22 ] a landscape is a heterogeneous land area composed of a cluster of interacting ecosystems that is repeated in similar form throughout, whereby they list woods, meadows, marshes and villages as examples of a landscape's ecosystems, and state that a landscape is an area at least a few kilometres wide. John A. Wiens [ 24 ] [ 25 ] opposes the traditional view expounded by Carl Troll , Isaak S. Zonneveld, Zev Naveh, Richard T. T. Forman/Michel Godron and others that landscapes are arenas in which humans interact with their environments on a kilometre-wide scale; instead, he defines 'landscape'—regardless of scale—as "the template on which spatial patterns influence ecological processes". [ 25 ] [ 35 ] Some define 'landscape' as an area containing two or more ecosystems in close proximity. [ 15 ] A main concept in landscape ecology is scale . Scale represents the real world as translated onto a map, relating distance on a map image and the corresponding distance on earth. [ 36 ] Scale is also the spatial or temporal measure of an object or a process, [ 33 ] or amount of spatial resolution. [ 6 ] Components of scale include composition, structure, and function, which are all important ecological concepts. Applied to landscape ecology, composition refers to the number of patch types (see below) represented on a landscape and their relative abundance. For example, the amount of forest or wetland , the length of forest edge, or the density of roads can be aspects of landscape composition. Structure is determined by the composition, the configuration, and the proportion of different patches across the landscape, while function refers to how each element in the landscape interacts based on its life cycle events. [ 33 ] Pattern is the term for the contents and internal order of a heterogeneous area of land. 
[ 17 ] A landscape with structure and pattern implies that it has spatial heterogeneity , or the uneven distribution of objects across the landscape. [ 6 ] Heterogeneity is a key element of landscape ecology that separates this discipline from other branches of ecology. Landscape heterogeneity can also be quantified with agent-based methods. [ 37 ] Patch , a term fundamental to landscape ecology, is defined as a relatively homogeneous area that differs from its surroundings. [ 6 ] Patches are the basic unit of the landscape that change and fluctuate, a process called patch dynamics . Patches have a definite shape and spatial configuration, and can be described compositionally by internal variables such as number of trees, number of tree species, height of trees, or other similar measurements. [ 6 ] Matrix is the "background ecological system" of a landscape with a high degree of connectivity . Connectivity is the measure of how connected or spatially continuous a corridor, network, or matrix is. [ 6 ] For example, a forested landscape (matrix) with fewer gaps in forest cover (open patches) will have higher connectivity. Corridors have important functions as strips of a particular type of landscape differing from adjacent land on both sides. [ 6 ] A network is an interconnected system of corridors while mosaic describes the pattern of patches, corridors, and matrix that form a landscape in its entirety. [ 6 ] Landscape patches have a boundary between them which can be defined or fuzzy. [ 15 ] The zone composed of the edges of adjacent ecosystems is the boundary . [ 6 ] Edge means the portion of an ecosystem near its perimeter, where influences of the adjacent patches can cause an environmental difference between the interior of the patch and its edge. This edge effect includes a distinctive species composition or abundance. 
[ 6 ] For example, when a landscape is a mosaic of perceptibly different types, such as a forest adjacent to a grassland , the edge is the location where the two types adjoin. In a continuous landscape, such as a forest giving way to open woodland, the exact edge location is fuzzy and is sometimes determined by a local gradient exceeding a threshold, such as the point where the tree cover falls below thirty-five percent. [ 33 ] A type of boundary is the ecotone , or the transitional zone between two communities. [ 12 ] Ecotones can arise naturally, such as a lakeshore , or can be human-created, such as an agricultural field cleared from a forest. [ 12 ] The ecotonal community retains characteristics of each bordering community and often contains species not found in the adjacent communities. Classic examples of ecotones include fencerows , forest to marshland transitions, forest to grassland transitions, or land-water interfaces such as riparian zones in forests. Characteristics of ecotones include vegetational sharpness, physiognomic change, occurrence of a spatial community mosaic, many exotic species , ecotonal species , spatial mass effect , and species richness higher or lower than either side of the ecotone. [ 38 ] An ecocline is another type of landscape boundary, but it is a gradual and continuous change in the environmental conditions of an ecosystem or community. Ecoclines help explain the distribution and diversity of organisms within a landscape because certain organisms survive better under certain conditions, which change along the ecocline. They contain heterogeneous communities which are considered more environmentally stable than those of ecotones. [ 39 ] An ecotope is a spatial term representing the smallest ecologically distinct unit in the mapping and classification of landscapes. [ 6 ] Relatively homogeneous, they are spatially explicit landscape units used to stratify landscapes into ecologically distinct features. 
They are useful for the measurement and mapping of landscape structure, function, and change over time, and to examine the effects of disturbance and fragmentation. Disturbance is an event that significantly alters the pattern of variation in the structure or function of a system. Fragmentation is the breaking up of a habitat, ecosystem, or land-use type into smaller parcels. [ 6 ] Disturbance is generally considered a natural process. Fragmentation causes land transformation, an important process in landscapes as development occurs. An important consequence of repeated, random clearing (whether by natural disturbance or human activity) is that contiguous cover can break down into isolated patches. This happens when the area cleared exceeds a critical level, which means that landscapes exhibit two phases: connected and disconnected. [ 40 ] Landscape ecology theory stresses the role of human impacts on landscape structures and functions. It also proposes ways for restoring degraded landscapes. [ 16 ] Landscape ecology explicitly includes humans as entities that cause functional changes on the landscape. [ 15 ] Landscape ecology theory includes the landscape stability principle, which emphasizes the importance of landscape structural heterogeneity in developing resistance to disturbances, recovery from disturbances , and promoting total system stability. [ 17 ] This principle is a major contribution to general ecological theories which highlight the importance of relationships among the various components of the landscape. Integrity of landscape components helps maintain resistance to external threats, including development and land transformation by human activity. [ 5 ] Analysis of land use change has included a strongly geographical approach which has led to the acceptance of the idea of multifunctional properties of landscapes. 
[ 18 ] There are still calls for a more unified theory of landscape ecology due to differences in professional opinion among ecologists and its interdisciplinary approach (Bastian 2001). An important related theory is hierarchy theory, which refers to how systems of discrete functional elements operate when linked at two or more scales. For example, a forested landscape might be hierarchically composed of drainage basins , which in turn are composed of local ecosystems, which are in turn composed of individual trees and gaps. [ 6 ] Recent theoretical developments in landscape ecology have emphasized the relationship between pattern and process, as well as the effect that changes in spatial scale have on the potential to extrapolate information across scales. [ 33 ] Several studies suggest that the landscape has critical thresholds at which ecological processes will show dramatic changes, such as the complete transformation of a landscape by an invasive species due to small changes in temperature characteristics which favor the invasive's habitat requirements. [ 33 ] Developments in landscape ecology illustrate the important relationships between spatial patterns and ecological processes. These developments incorporate quantitative methods that link spatial patterns and ecological processes at broad spatial and temporal scales. This linkage of time, space, and environmental change can assist managers in applying plans to solve environmental problems. [ 5 ] The increased attention in recent years on spatial dynamics has highlighted the need for new quantitative methods that can analyze patterns, determine the importance of spatially explicit processes, and develop reliable models. [ 33 ] Multivariate analysis techniques are frequently used to examine landscape level vegetation patterns. Studies use statistical techniques, such as cluster analysis , canonical correspondence analysis (CCA), or detrended correspondence analysis (DCA), for classifying vegetation. 
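The critical-threshold behaviour of fragmented landscapes (connected versus disconnected phases as cover is cleared) can be illustrated with a toy percolation-style simulation: habitat cells are removed at random and the largest 4-connected patch is measured. The grid size and clearing fractions below are arbitrary choices for illustration, not values from the landscape ecology literature.

```python
import random

def largest_cluster_fraction(grid):
    """Fraction of remaining habitat cells contained in the largest 4-connected patch."""
    n = len(grid)
    seen, best = set(), 0
    habitat = sum(row.count(1) for row in grid)
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 1 and (i, j) not in seen:
                stack, size = [(i, j)], 0
                seen.add((i, j))
                while stack:                              # flood-fill one patch
                    x, y = stack.pop()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if (0 <= nx < n and 0 <= ny < n
                                and grid[nx][ny] == 1 and (nx, ny) not in seen):
                            seen.add((nx, ny))
                            stack.append((nx, ny))
                best = max(best, size)
    return best / habitat if habitat else 0.0

random.seed(1)
n = 40
for cleared in (0.2, 0.4, 0.6, 0.8):   # fraction of cells cleared at random
    grid = [[0 if random.random() < cleared else 1 for _ in range(n)]
            for _ in range(n)]
    print(f"{cleared:.0%} cleared -> largest patch holds "
          f"{largest_cluster_fraction(grid):.0%} of habitat")
```

At low clearing levels nearly all habitat sits in one spanning patch; past a critical clearing fraction, the landscape abruptly breaks into many small isolated patches, which is the two-phase behaviour the theory describes.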
Gradient analysis is another way to determine the vegetation structure across a landscape or to help delineate critical wetland habitat for conservation or mitigation purposes (Choesin and Boerner 2002). [ 41 ] Climate change is another major component in structuring current research in landscape ecology. [ 42 ] Ecotones, as a basic unit in landscape studies, may have significance for management under climate change scenarios , since change effects are likely to be seen at ecotones first because of the unstable nature of a fringe habitat. [ 38 ] Research in northern regions has examined landscape ecological processes, such as the accumulation of snow, melting, freeze-thaw action, percolation, soil moisture variation, and temperature regimes through long-term measurements in Norway. [ 43 ] The study analyzes gradients across space and time between ecosystems of the central high mountains to determine how the distribution patterns of animals relate to their environment. Looking at where animals live, and how vegetation shifts over time, may provide insight into changes in snow and ice over long periods of time across the landscape as a whole. Other landscape-scale studies maintain that human impact is likely the main determinant of landscape pattern over much of the globe. [ 44 ] [ 45 ] Landscapes may become substitutes for biodiversity measures because plant and animal composition differs between samples taken from sites within different landscape categories. Taxa, or different species, can "leak" from one habitat into another, which has implications for landscape ecology. As human land use practices expand and continue to increase the proportion of edges in landscapes, the effects of this leakage across edges on assemblage integrity may become more significant in conservation. This is because taxa may be conserved across landscape levels, if not at local levels. 
[ 46 ] Land change modeling is an application of landscape ecology designed to predict future changes in land use . Land change models are used in urban planning , geography, GIS , and other disciplines to gain a clear understanding of the course of a landscape. [ 47 ] In recent years, much of the Earth's land cover has changed rapidly, whether from deforestation or the expansion of urban areas . [ 48 ] Landscape ecology has been incorporated into a variety of ecological subdisciplines. For example, it is closely linked to land change science , the interdisciplinary study of land use and land cover change and their effects on surrounding ecology. Another recent development has been the more explicit consideration of spatial concepts and principles applied to the study of lakes, streams, and wetlands in the field of landscape limnology . Seascape ecology is a marine and coastal application of landscape ecology. [ 49 ] In addition, landscape ecology has important links to application-oriented disciplines such as agriculture and forestry . In agriculture, landscape ecology has introduced new options for the management of environmental threats brought about by the intensification of agricultural practices. Agriculture has always been a strong human impact on ecosystems. [ 18 ] In forestry, from structuring stands for fuelwood and timber to ordering stands across landscapes to enhance aesthetics, consumer needs have affected conservation and use of forested landscapes. Landscape forestry provides methods, concepts, and analytic procedures for managing forested landscapes. [ 50 ] Landscape ecology has been cited as a contributor to the development of fisheries biology as a distinct biological science discipline, [ 51 ] and is frequently incorporated in study design for wetland delineation in hydrology . [ 39 ] It has helped shape integrated landscape management . 
[ 52 ] Lastly, landscape ecology has been very influential for progressing sustainability science and sustainable development planning. For example, a recent study assessed sustainable urbanization across Europe using evaluation indices, country-landscapes, and landscape ecology tools and methods. [ 53 ] Landscape ecology has also been combined with population genetics to form the field of landscape genetics, which addresses how landscape features influence the population structure and gene flow of plant and animal populations across space and time [ 54 ] and how the quality of the intervening landscape, known as the "matrix", influences spatial variation. [ 55 ] After the term was coined in 2003, the field of landscape genetics had expanded to over 655 studies by 2010, [ 56 ] and continues to grow today. As genetic data has become more readily accessible, it is increasingly being used by ecologists to answer novel evolutionary and ecological questions, [ 57 ] many with regard to how landscapes affect evolutionary processes, especially in human-modified landscapes, which are experiencing biodiversity loss . [ 58 ]
https://en.wikipedia.org/wiki/Landscape_ecology
Landscape epidemiology draws some of its roots from the field of landscape ecology . [ 1 ] Just as the discipline of landscape ecology is concerned with analyzing both patterns and processes in ecosystems across time and space, landscape epidemiology can be used to analyze both risk patterns and environmental risk factors. This field emerges from the theory that most vectors, hosts, and pathogens are commonly tied to the landscape as environmental determinants control their distribution and abundance. [ 2 ] In 1966, Evgeniy Pavlovsky introduced the concept of natural nidality or focality, defined by the idea that microscale disease foci are determined by the entire ecosystem. [ 3 ] With the recent availability of new computing technologies such as geographic information systems , remote sensing , statistical methods including spatial statistics and theories of landscape ecology , the concept of landscape epidemiology has been applied analytically to a variety of disease systems, including malaria , [ 4 ] hantavirus , Lyme disease and Chagas' disease . [ 5 ]
https://en.wikipedia.org/wiki/Landscape_epidemiology
Landscape limnology is the spatially explicit study of lakes , streams , and wetlands as they interact with freshwater, terrestrial, and human landscapes to determine the effects of pattern on ecosystem processes across temporal and spatial scales. Limnology is the study of inland water bodies inclusive of rivers, lakes, and wetlands; landscape limnology seeks to integrate all of these ecosystem types. The terrestrial component represents spatial hierarchies of landscape features that influence which materials, whether solutes or organisms, are transported to aquatic systems; aquatic connections represent how these materials are transported; and human activities reflect features that influence how these materials are transported as well as their quantity and temporal dynamics. [ 1 ] The core principles or themes of landscape ecology provide the foundation for landscape limnology. These ideas can be synthesized into a set of four landscape ecology themes that are broadly applicable to any aquatic ecosystem type, and that consider the unique features of such ecosystems. A landscape limnology framework begins with a premise traceable to Thienemann (1925) and restated by Wiens (2002): [ 2 ] freshwater ecosystems can be considered patches. As such, the location of these patches and their placement relative to other elements of the landscape is important to the ecosystems and their processes. Therefore, the four main themes of landscape limnology are: Findings from landscape limnology research are contributing to many facets of aquatic ecosystem research, management, and conservation. Landscape limnology is especially relevant for geographical areas with thousands of ecosystems (i.e. lake-rich regions of the world), in situations with a range of human disturbances, or when considering lakes, streams, and wetlands that are connected to other such ecosystems. 
For example, landscape limnology perspectives have contributed to the development of nutrient criteria for lakes, [ 10 ] formation of classification systems that can be used to monitor the health of aquatic ecosystems, [ 11 ] understanding ecosystem responses to environmental stressors, [ 12 ] or explaining biogeographic patterns of community composition. [ 7 ]
https://en.wikipedia.org/wiki/Landscape_limnology
A landship is a large vehicle that travels exclusively on land. Its name is meant to distinguish it from vehicles that travel through other mediums, such as conventional ships , airships , and spaceships . The British Landship Committee formed during World War I to develop armored vehicles for use in trench warfare . The British proposed building "landships," super-heavy tanks capable of crossing the trench systems of the Western Front. The committee originated from the armored car division of the Royal Naval Air Service . [ 1 ] It gained the notable support of Winston Churchill . [ 2 ] The tank was originally referred to as the landship, owing to its continuous development by the Landship Committee . The concept of a 1,000-ton armored fighting machine on land quickly became too impractical and too costly to be realistically conceived. [ 3 ] As such, the landship project proposed a smaller vehicle. The first conceptual tank prototype was for a 300-ton vehicle that would be made by suspending a "sort of Crystal Palace body" between three enormous wheels, allegedly inspired by the Great Wheel at Earls Court in London. [ 4 ] Six of these 'Big Wheel' landships were eventually commissioned. However, even at a revised weight, 300 tons was considered impractical given the technology of the time, but the influence of the big wheel would persist in the "creeping grip" tracks of the first tanks, which were wrapped around the entire body of the machine. [ 4 ] The constant revision eventually led to the creation of the first tank . The Mark I and later variations were smaller than the initial behemoths engineers envisioned but still used naval guns, including the QF 6-pounder Hotchkiss , later shortened to become the QF 6-pounder guns . Schwerer Gustav was a German super-heavy railway gun developed in the late 1930s. It was the largest-caliber rifled weapon ever used in combat and, in terms of overall weight, the heaviest mobile artillery piece ever built. 
With a length of 47.3 meters (155 feet, 2 inches), a width of 7.1 meters (23 feet, 4 inches) and a height of 11.6 meters (38 feet, 1 inch), the Schwerer Gustav weighed 1,350 tonnes. The gun's massive size required its own diesel-powered generator, a special railway track and an oversized crew of 2,750 (250 to assemble and fire the gun in 3 days and 2,500 to lay the tracks). By definition, the Schwerer Gustav would have qualified as a landship, albeit one limited to rails. Super-heavy tanks are massive tanks, concepts of which led to gargantuan vehicles akin to naval warships. Super-heavy tanks such as the British TOG 2 and the Soviet T-42 were built in a similar layout as naval battleships, albeit on a smaller scale. The T-35 was a Soviet multi-turreted heavy tank. Nicknamed the "Land Battleship," it continues to be one of few armored historical vehicles named as such. [ 5 ] The Maus was a German super-heavy tank from the Second World War, weighing in at 188 tons. It was the heaviest tank ever built. Although 141 were ordered, only one finished prototype and one partially finished prototype were in working order by the end of the war due to the Allies bombing the only factory capable of producing the tank. [ 6 ] The Landkreuzer P.1000 was a super-heavy tank designed by Edward Grote for Nazi Germany in 1942. If completed, the P.1000 would have been 35 meters (115 feet) long and 14 meters (46 feet) wide, with a weight ranging from 800 to 1,000 tons depending on the variant. The latest variant would have been armed with twin 28 cm guns housed in a central turret and two turrets with twin 12.8 cm cannons mounted towards the front of the hull. [ 7 ] Extremely large hovercraft such as the Zubr-class LCAC used by both the Russian Navy and the PLAN could also technically cover some aspects of landship design, since they are also capable of traversing land as partial-terrestrial vehicles. 
At over 50 meters long with a max tonnage weight of 555 tons, it is the closest one could get to a modern military landcraft, although it is more of an amphibious hovership than anything else. Siege towers were ancient forms of superheavy ground vehicles and siege engines that grew in prominence during the ancient world right up to the Renaissance. They required dozens of men or beasts of burden to move their bulk. They were exceptionally tall, had multiple decks, staircases and ladders, and some were armed internally with emplaced weapons such as ballistas , catapults or onagers and cannons . The largest of them all was the ancient Helepolis , a superheavy siege tower from ancient Greece that was 40 meters tall, 20 meters wide, 160 tons in weight and required a crew of 3,400 men. By definition, siege towers were effectively the medieval equivalent of a ground-based attack transport troopship . [ 8 ] Even though they are technically a conglomeration of individual vehicles, armoured trains are often the closest one would get to a modern landship design. Armoured trains are often extraordinarily fast for their size and commonly measure over a hundred meters long, carrying hundreds of passengers. Likewise, armoured trains are incredibly variable and often used as mobile headquarters on rails. They are powerful enough to mount naval weapons, with many railroad guns being comparable to actual naval calibers. Likewise, like ships, the majority of armoured trains were often christened with a name. Currently, the only country utilizing armoured trains in the modern era is Russia , where, as of December 2023, they are used more akin to land-based landing ships on rails. [ 9 ] [ 10 ] The vast majority of the world's largest terrestrial vehicles come from the engineering and mining sector. 
As their role involves the collection of vast underground resources in large bulk, their physical dimensions dramatically increased to accommodate the transferral of those materials and easily dwarf any other ground vehicles by several orders of magnitude. The vehicles listed here include: An unsuccessful vehicle designed to explore Antarctica . A large civilian mining vehicle. Their large size has been compared to that of ocean liners on land. The SRs 8000-class or Type SRs 8000 bucket-wheel excavators (of which Bagger 293 , the lead SRs 8000, is the heaviest land vehicle ever made) remain the only ground vehicles to be referred to with a naval classification. [ 11 ] [ 12 ] Large mining vehicles used in open-pit mining. The Overburden Conveyor Bridge F60 is considered the largest vehicle in physical dimensions of any type and has been referred to as a "lying Eiffel Tower." [ 13 ] [ 14 ] Similar in size to bucket-wheel excavators and used in surface mining and dredging , the largest of which are the Type Es 3750s . Massive excavators that move by "walking" on two pneumatic feet. The Big Muskie was one of the largest terrestrial vehicles ever built. Extremely large power shovels – The Captain rivaled bucket-wheel excavators and dragline excavators in sheer size. [ 15 ] Spreaders are incredibly large ground vehicles that are meant to 'spread' overburden in a neat, consistent and orderly manner. They closely resemble both a bucket-wheel excavator and a stacker in appearance. They are identifiable by their long discharge boom, which can reach 195 meters in length. [ 16 ] Stackers are mining vehicles that exclusively run on rails and are imposing in size, with some stacker-reclaimer hybrids having a boom length of 25 to 60 meters. [ 17 ] These vehicles may resemble a spreader; however, a stacker's role is to pile bulk material onto a stockpile so that a reclaimer can collect and redistribute the materials. Stackers, therefore, often work in conjunction with reclaimers. 
Reclaimers are mining vehicles that, like stackers, run exclusively on rails. Reclaimers are traditionally very wide vehicles that come in various shapes and types, from bridge reclaimers to overarching portal reclaimers and the bucket-wheel reclaimers which superficially resemble a bucket-wheel excavator in appearance. Reclaimers, as their name implies, 'reclaim' bulk material such as ores and cereals from a stockpile dumped by a stacker and are quite large, with bucket-wheel types usually having a boom length of 25 to 60 meters. [ 16 ] As such, these two vehicles often work in conjunction with each other. Large underground vehicles designed to drill and create subterranean subway transits, some of which weigh about 5,000 tons. An ultra-heavy transporter used to ferry spacecraft to the launching pad. At 2,000 tons each, they are the second largest ground vehicles that still use an internal combustion engine as their source of propulsion rather than relying on an external power source. Mobile gantry cranes and container cranes are notable for their large, imposing size and dimensions, with weights varying from 900 tons up to 2,000 tons. These vehicles are either driven by wheels or rails and require a small crew for their size. The largest gantry cranes such as Samson and Goliath are known to be among the largest movable land machines in the world, with the Honghai Crane being the largest and most powerful of its kind at 150 m tall, with a span of 124 m and a total weight of 11,000 tons, and the strength to lift up to 22,000 tons. Certain crawler cranes are known to reach gargantuan size. Whilst not of the same extent as gantry or container cranes, the very largest, such as the XGC88000 crawler crane, remains the largest self-propelled ground vehicle to date, beating out the crawler-transporters in both gross tonnage and sheer dimensions. A proposed civilian railway line envisioned by Adolf Hitler . 
These greatly enlarged transit lines would have accommodated ultra-wide trains that would be 500 meters (1,640 feet) long.
https://en.wikipedia.org/wiki/Landship
Landslides , also known as landslips , skyfalls or rockslides , [ 3 ] [ 4 ] [ 5 ] are several forms of mass wasting that may include a wide range of ground movements, such as rockfalls , mudflows , shallow or deep-seated slope failures and debris flows . [ 6 ] Landslides occur in a variety of environments, characterized by either steep or gentle slope gradients, from mountain ranges to coastal cliffs or even underwater, [ 7 ] in which case they are called submarine landslides . Gravity is the primary driving force for a landslide to occur, but there are other factors affecting slope stability that produce specific conditions that make a slope prone to failure. In many cases, the landslide is triggered by a specific event (such as heavy rainfall , an earthquake , a slope cut to build a road, and many others), although this is not always identifiable. Landslides are frequently made worse by human development (such as urban sprawl ) and resource exploitation (such as mining and deforestation ). Land degradation frequently leads to less stabilization of soil by vegetation . [ 8 ] Additionally, global warming caused by climate change and other human impacts on the environment can increase the frequency of natural events (such as extreme weather ) which trigger landslides. [ 9 ] Landslide mitigation describes the policy and practices for reducing the risk of landslide impacts on people and property, thereby reducing the risk of natural disaster . Landslides occur when the slope (or a portion of it) undergoes processes that change its condition from stable to unstable. This is essentially due to a decrease in the shear strength of the slope material, an increase in the shear stress borne by the material, or a combination of the two. A change in the stability of a slope can be caused by a number of factors, acting together or alone. 
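The balance between shear strength and shear stress described above is commonly quantified as a factor of safety, the ratio of the strength available to resist sliding to the stress driving it. The sketch below uses the standard infinite-slope model with Mohr-Coulomb strength; the parameter values are illustrative, not taken from the article.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, m=0.0, gamma_w=9.81):
    """Factor of safety of an infinite slope (Mohr-Coulomb strength).

    c         effective cohesion (kPa)
    phi_deg   effective friction angle (degrees)
    gamma     soil unit weight (kN/m^3)
    z         depth of the slip surface (m)
    beta_deg  slope angle (degrees)
    m         fraction of z below the water table (0 = dry, 1 = saturated)
    gamma_w   unit weight of water (kN/m^3)
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    u = gamma_w * m * z * math.cos(beta) ** 2          # pore water pressure
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Rising pore pressure (e.g. a water table raised by heavy rain) lowers
# the factor of safety; FS < 1 predicts failure.
print(infinite_slope_fs(c=5, phi_deg=30, gamma=19, z=2, beta_deg=35, m=0.0))
print(infinite_slope_fs(c=5, phi_deg=30, gamma=19, z=2, beta_deg=35, m=1.0))
```

With these illustrative values the dry slope is marginally stable (FS just above 1), while full saturation drops FS below 1, which is one reason heavy rainfall is such a common landslide trigger.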
Natural causes of landslides include: Landslides are aggravated by human activities, such as: In traditional usage, the term landslide has at one time or another been used to cover almost all forms of mass movement of rocks and regolith at the Earth's surface. In 1978, geologist David Varnes noted this imprecise usage and proposed a new, much tighter scheme for the classification of mass movements and subsidence processes. [ 26 ] This scheme was later modified by Cruden and Varnes in 1996, [ 27 ] and refined by Hutchinson (1988), [ 28 ] Hungr et al. (2001), [ 29 ] and finally by Hungr, Leroueil and Picarelli (2014). [ 6 ] The classification resulting from the latest update is provided below. Under this classification, six types of movement are recognized. Each type can be seen both in rock and in soil. A fall is a movement of isolated blocks or chunks of soil in free-fall. The term topple refers to blocks coming away by rotation from a vertical face. A slide is the movement of a body of material that generally remains intact while moving over one or several inclined surfaces or thin layers of material (also called shear zones) in which large deformations are concentrated. Slides are also sub-classified by the form of the surface(s) or shear zone(s) on which movement happens. The planes may be broadly parallel to the surface ("planar slides") or spoon-shaped ("rotational slides"). Slides can occur catastrophically, but movement on the surface can also be gradual and progressive. Spreads are a form of subsidence, in which a layer of material cracks, opens up, and expands laterally. Flows are the movement of fluidised material, which can be either dry or rich in water (such as in mud flows). Flows can move imperceptibly for years, or accelerate rapidly and cause disasters. Slope deformations are slow, distributed movements that can affect entire mountain slopes or portions of them. 
Some landslides are complex in the sense that they feature different movement types in different portions of the moving body, or they evolve from one movement type to another over time. For example, a landslide can initiate as a rock fall or topple and then, as the blocks disintegrate upon impact, transform into a debris slide or flow. An avalanching effect can also be present, in which the moving mass entrains additional material along its path. Slope material that becomes saturated with water may produce a debris flow or mud flow . However, dry debris can also exhibit flow-like movement. [ 30 ] Flowing debris or mud may pick up trees, houses and cars, and block bridges and rivers, causing flooding along its path. This phenomenon is particularly hazardous in alpine areas, where narrow gorges and steep valleys are conducive to faster flows. Debris and mud flows may initiate on the slopes or result from the fluidization of landslide material as it gains speed or incorporates further debris and water along its path. River blockages as the flow reaches a main stream can generate temporary dams. As the impoundments fail, a domino effect may be created, with a remarkable growth in the volume of the flowing mass, and in its destructive power. An earthflow is the downslope movement of mostly fine-grained material. Earthflows can move at speeds within a very wide range, from as low as 1 mm/yr [ 15 ] [ 16 ] to many km/h. Though earthflows resemble mudflows , they are generally slower-moving and are covered with solid material carried along by the flow from within. Clay, fine sand and silt, and fine-grained, pyroclastic material are all susceptible to earthflows. These flows are usually controlled by the pore water pressures within the mass, which must be high enough to produce a low shearing resistance. On the slopes, some earthflows may be recognized by their elongated shape, with one or more lobes at their toes. 
As these lobes spread out, drainage of the mass increases and the margins dry out, lowering the overall velocity of the flow. This process also causes the flow to thicken. Earthflows occur more often during periods of high precipitation, which saturates the ground and builds up water pressures. However, earthflows that keep advancing even during dry seasons are not uncommon. Fissures may develop during the movement of clayey materials, which facilitate the intrusion of water into the moving mass and produce faster responses to precipitation. [ 31 ] A rock avalanche, sometimes referred to as sturzstrom , is a large and fast-moving landslide of the flow type. It is rarer than other types of landslides but is often very destructive. It typically exhibits a long runout, flowing very far over a low-angle, flat, or even slightly uphill terrain. The mechanisms favoring the long runout can be different, but they typically result in the weakening of the sliding mass as the speed increases. [ 32 ] [ 33 ] [ 34 ] The causes of this weakening are not completely understood. Especially for the largest landslides, it may involve the very quick heating of the shear zone due to friction, which may even cause the water that is present to vaporize and build up a large pressure, producing a sort of hovercraft effect. [ 35 ] In some cases, the very high temperature may even cause some of the minerals to melt. [ 36 ] During the movement, the rock in the shear zone may also be finely ground, producing a nanometer-size mineral powder that may act as a lubricant, reducing the resistance to motion and promoting larger speeds and longer runouts. [ 37 ] The weakening mechanisms in large rock avalanches are similar to those occurring in seismic faults. [ 34 ] Slides can occur in any rock or soil material and are characterized by the movement of a mass over a planar or curvilinear surface or shear zone. 
A debris slide is a type of slide characterized by the chaotic movement of material mixed with water and/or ice. It is usually triggered by the saturation of thickly vegetated slopes which results in an incoherent mixture of broken timber, smaller vegetation and other debris. [ 31 ] Debris flows and avalanches differ from debris slides because their movement is fluid-like and generally much more rapid. This is usually a result of lower shear resistances and steeper slopes. Typically, debris slides start with the detachment of large rock fragments high on the slopes, which break apart as they descend. Clay and silt slides are usually slow but can experience episodic acceleration in response to heavy rainfall or rapid snowmelt. They are often seen on gentle slopes and move over planar surfaces, such as over the underlying bedrock. Failure surfaces can also form within the clay or silt layer itself, and they usually have concave shapes, resulting in rotational slides. Slope failure mechanisms often contain large uncertainties and could be significantly affected by heterogeneity of soil properties. [ 38 ] A landslide in which the sliding surface is located within the soil mantle or weathered bedrock (typically to a depth from few decimeters to some meters) is called a shallow landslide. Debris slides and debris flows are usually shallow. Shallow landslides can often happen in areas that have slopes with high permeable soils on top of low permeable soils. The low permeable soil traps the water in the shallower soil generating high water pressures. As the top soil is filled with water, it can become unstable and slide downslope. Deep-seated landslides are those in which the sliding surface is mostly deeply located, for instance well below the maximum rooting depth of trees. They usually involve deep regolith , weathered rock, and/or bedrock and include large slope failures associated with translational, rotational, or complex movements. 
[ 39 ] They tend to form along a plane of weakness such as a fault or bedding plane . They can be visually identified by concave scarps at the top and steep areas at the toe. [ 40 ] Deep-seated landslides also shape landscapes over geological timescales and produce sediment that strongly alters the course of fluvial streams . [ 41 ] Landslides that occur undersea, or that impact water (e.g., a significant rockfall or volcanic collapse into the sea), [ 42 ] can generate tsunamis . Massive landslides can also generate megatsunamis , which are usually hundreds of meters high. In 1958, one such tsunami occurred in Lituya Bay in Alaska. [ 43 ] [ 44 ] Landslide hazard analysis and mapping can provide useful information for catastrophic loss reduction, and assist in the development of guidelines for sustainable land-use planning . The analysis is used to identify the factors that are related to landslides, estimate the relative contribution of factors causing slope failures, establish a relation between the factors and landslides, and predict the landslide hazard in the future based on such a relationship. [ 45 ] The factors that have been used for landslide hazard analysis can usually be grouped into geomorphology , geology , land use/land cover, and hydrogeology . Since many factors are considered for landslide hazard mapping, GIS is an appropriate tool because it has functions of collection, storage, manipulation, display, and analysis of large amounts of spatially referenced data which can be handled fast and effectively. [ 46 ] Cardenas reported evidence on the exhaustive use of GIS in conjunction with uncertainty modelling tools for landslide mapping. [ 47 ] [ 48 ] Remote sensing techniques are also highly employed for landslide hazard assessment and analysis. 
Before and after aerial photographs and satellite imagery are used to gather landslide characteristics, like distribution and classification, and factors like slope, lithology , and land use/land cover to be used to help predict future events. [ 49 ] Before and after imagery also helps to reveal how the landscape changed after an event, what may have triggered the landslide, and shows the process of regeneration and recovery. [ 50 ] Using satellite imagery in combination with GIS and on-the-ground studies, it is possible to generate maps of likely occurrences of future landslides. [ 51 ] [ 52 ] Such maps should show the locations of previous events as well as clearly indicate the probable locations of future events. In general, to predict landslides, one must assume that their occurrence is determined by certain geologic factors, and that future landslides will occur under the same conditions as past events. [ 53 ] Therefore, it is necessary to establish a relationship between the geomorphologic conditions in which the past events took place and the expected future conditions. [ 54 ] Natural disasters are a dramatic example of people living in conflict with the environment. Early predictions and warnings are essential for the reduction of property damage and loss of life. Because landslides occur frequently and can represent some of the most destructive forces on earth, it is imperative to have a good understanding as to what causes them and how people can either help prevent them from occurring or simply avoid them when they do occur. Sustainable land management and development is also an essential key to reducing the negative impacts felt by landslides. GIS offers a superior method for landslide analysis because it allows one to capture, store, manipulate, analyze, and display large amounts of data quickly and effectively. 
Because so many variables are involved, it is important to be able to overlay the many layers of data to develop a full and accurate portrayal of what is taking place on the Earth's surface. Researchers need to know which variables are the most important factors that trigger landslides in any given location. Using GIS, extremely detailed maps can be generated to show past events and likely future events which have the potential to save lives, property, and money. Since the 1990s, GIS has also been successfully used in conjunction with decision support systems , to show on a map real-time risk evaluations based on monitoring data gathered in the area of the Val Pola disaster (Italy). [ 56 ] Evidence of past landslides has been detected on many bodies in the Solar System , but since most observations are made by probes that only observe for a limited time and most bodies in the Solar System appear to be geologically inactive, not many landslides are known to have happened in recent times. Both Venus and Mars have been subject to long-term mapping by orbiting satellites, and examples of landslides have been observed on both planets. Landslide mitigation refers to several human-made activities on slopes with the goal of lessening the effect of landslides. Landslides can be triggered by many, sometimes concomitant causes. In addition to shallow erosion or reduction of shear strength caused by seasonal rainfall , landslides may be triggered by anthropic activities, such as adding excessive weight above the slope, or digging at mid-slope or at the foot of the slope. Often, individual phenomena join to generate instability over time, which often does not allow a reconstruction of the evolution of a particular landslide. Therefore, landslide hazard mitigation measures are not generally classified according to the phenomenon that might cause a landslide. 
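The layer-overlay idea can be sketched as a weighted combination of factor rasters (a hypothetical toy example: the factor layers, weights, and class breaks below are invented for illustration and do not represent a validated hazard model):

```python
# Minimal weighted-overlay susceptibility sketch: each factor layer is a
# 2 x 2 grid of normalized scores in [0, 1]; cells are combined by weight.
slope     = [[0.9, 0.4], [0.7, 0.1]]   # steepness score per cell
lithology = [[0.8, 0.8], [0.2, 0.2]]   # rock-weakness score per cell
land_use  = [[0.5, 0.3], [0.9, 0.1]]   # clearing/development score per cell

weights = {"slope": 0.5, "lithology": 0.3, "land_use": 0.2}

def susceptibility(i, j):
    """Weighted sum of factor scores for one raster cell."""
    return (weights["slope"] * slope[i][j]
            + weights["lithology"] * lithology[i][j]
            + weights["land_use"] * land_use[i][j])

def classify(s):
    """Bin a continuous score into a map legend class."""
    return "high" if s >= 0.6 else "moderate" if s >= 0.3 else "low"

for i in range(2):
    for j in range(2):
        s = susceptibility(i, j)
        print((i, j), round(s, 2), classify(s))
```

Real susceptibility mapping runs this kind of cell-by-cell combination over millions of raster cells in a GIS, with weights calibrated statistically against an inventory of past landslides rather than assigned by hand.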
[ 65 ] Instead, they are classified by the sort of slope stabilization method used. The monitoring of landslides is essential for assessing dangerous situations, making it possible to issue alerts on time, to avoid losses of lives and property, and to have proper planning and risk-reduction measures in place. Several types of techniques currently exist for monitoring landslides, including: • Geophones and accelerometers, which detect seismic vibrations or movements that might indicate slope instability. Climate-change impacts on temperature, on both average rainfall and rainfall extremes, and on evapotranspiration may affect landslide distribution, frequency and intensity (62). However, this impact shows strong variability in different areas (63). Therefore, the effects of climate change on landslides need to be studied on a regional scale. Climate change can have both positive and negative impacts on landslides. Temperature rise may increase evapotranspiration, leading to a reduction in soil moisture, and may stimulate vegetation growth, also due to a CO2 increase in the atmosphere. Both effects may reduce landslides in some conditions. On the other hand, temperature rise can also increase landslide activity. Since average precipitation is expected to decrease or increase regionally (63), rainfall-induced landslides may change accordingly, due to changes in infiltration, groundwater levels and river bank erosion. Weather extremes, including heavy precipitation, are expected to increase due to climate change (63). This has negative effects on landslides due to focused infiltration in soil and rock (66) and an increase of runoff events, which may trigger debris flows.
https://en.wikipedia.org/wiki/Landslide
Landsupport (spelling: LANDSUPPORT) is a pilot consulting project funded by the European Union that supports near-natural modeling of different types and methods of land use while at the same time protecting the environment. [ 1 ] [ 2 ] In the long term, sustainable use of the soil must be guaranteed in order to meet the needs of the world's population. The project brings together numerous universities, research institutions, companies and stakeholders with the aim of creating a web-based, free system to support practical agriculture and land users in making decisions about sustainable land use, environmental protection and agricultural use. With the active participation of numerous and varied stakeholders in and outside Europe, the consortium also aims to inform legislation at the European level, based on scientific data that is processed and modeled in the system. Within the Horizon 2020 research framework programme, the project is organized under the direction of Fabio Terribile at the University of Naples Federico II . The Landsupport consortium consists of the following partners: The results of the investigations are evaluated internationally by the members in specialist committees and made available to practitioners and to the responsible bodies at regional and state level, as well as to the European Union for legislative and approval procedures.
https://en.wikipedia.org/wiki/Landsupport
In physics , the Landé g -factor is a particular example of a g -factor , namely for an electron with both spin and orbital angular momenta . It is named after Alfred Landé , who first described it in 1921. [ 1 ] In atomic physics , the Landé g -factor is a multiplicative term appearing in the expression for the energy levels of an atom in a weak magnetic field . The quantum states of electrons in atomic orbitals are normally degenerate in energy , with these degenerate states all sharing the same angular momentum. When the atom is placed in a weak magnetic field, however, the degeneracy is lifted. The factor comes about during the calculation of the first-order perturbation in the energy of an atom when a weak uniform magnetic field (that is, weak in comparison to the system's internal magnetic field) is applied to the system. Formally we can write the factor as [ 2 ] g J = g L J ( J + 1 ) − S ( S + 1 ) + L ( L + 1 ) 2 J ( J + 1 ) + g S J ( J + 1 ) + S ( S + 1 ) − L ( L + 1 ) 2 J ( J + 1 ) {\displaystyle g_{J}=g_{L}{\frac {J(J+1)-S(S+1)+L(L+1)}{2J(J+1)}}+g_{S}{\frac {J(J+1)+S(S+1)-L(L+1)}{2J(J+1)}}.} The orbital g L {\displaystyle g_{L}} is equal to 1, and under the approximation g S = 2 {\displaystyle g_{S}=2} , the above expression simplifies to g J = 3 2 + S ( S + 1 ) − L ( L + 1 ) 2 J ( J + 1 ) {\displaystyle g_{J}={\frac {3}{2}}+{\frac {S(S+1)-L(L+1)}{2J(J+1)}}.} Here, J is the total electronic angular momentum , L is the orbital angular momentum, and S is the spin angular momentum . Because S = 1 / 2 {\displaystyle S=1/2} for electrons, one often sees this formula written with 3/4 in place of S ( S + 1 ) {\displaystyle S(S+1)} . The quantities g L and g S are other g -factors of an electron. For an S = 0 {\displaystyle S=0} atom, g J = 1 {\displaystyle g_{J}=1} and for an L = 0 {\displaystyle L=0} atom, g J = 2 {\displaystyle g_{J}=2} . If we wish to know the g -factor for an atom with total atomic angular momentum F → = I → + J → {\displaystyle {\vec {F}}={\vec {I}}+{\vec {J}}} (nucleus + electrons), such that the total atomic angular momentum quantum number can take values of F = J + I , J + I − 1 , … , | J − I | {\displaystyle F=J+I,J+I-1,\dots ,|J-I|} , an analogous projection argument gives g F = g J F ( F + 1 ) − I ( I + 1 ) + J ( J + 1 ) 2 F ( F + 1 ) + g I μ N μ B F ( F + 1 ) + I ( I + 1 ) − J ( J + 1 ) 2 F ( F + 1 ) ≈ g J F ( F + 1 ) − I ( I + 1 ) + J ( J + 1 ) 2 F ( F + 1 ) . {\displaystyle g_{F}=g_{J}{\frac {F(F+1)-I(I+1)+J(J+1)}{2F(F+1)}}+g_{I}{\frac {\mu _{\text{N}}}{\mu _{\text{B}}}}{\frac {F(F+1)+I(I+1)-J(J+1)}{2F(F+1)}}\approx g_{J}{\frac {F(F+1)-I(I+1)+J(J+1)}{2F(F+1)}}.} Here μ B {\displaystyle \mu _{\text{B}}} is the Bohr magneton and μ N {\displaystyle \mu _{\text{N}}} is the nuclear magneton . 
This last approximation is justified because μ N {\displaystyle \mu _{N}} is smaller than μ B {\displaystyle \mu _{B}} by the ratio of the electron mass to the proton mass. The following working is a common derivation. [ 3 ] [ 4 ] Both the orbital angular momentum and the spin angular momentum of the electron contribute to the magnetic moment. In particular, each of them alone contributes to the magnetic moment in the form μ → L = − g L μ B L → / ℏ {\displaystyle {\vec {\mu }}_{L}=-g_{L}\mu _{\text{B}}{\vec {L}}/\hbar } and μ → S = − g S μ B S → / ℏ {\displaystyle {\vec {\mu }}_{S}=-g_{S}\mu _{\text{B}}{\vec {S}}/\hbar } , where μ B {\displaystyle \mu _{\text{B}}} is the Bohr magneton. Note that the negative signs in the above expressions arise because an electron carries negative charge, and the value of g S {\displaystyle g_{S}} can be derived naturally from Dirac's equation . The total magnetic moment μ → J {\displaystyle {\vec {\mu }}_{J}} , as a vector operator, does not lie along the direction of the total angular momentum J → = L → + S → {\displaystyle {\vec {J}}={\vec {L}}+{\vec {S}}} , because the g -factors for the orbital and spin parts are different. However, due to the Wigner–Eckart theorem , its expectation value does effectively lie along the direction of J → {\displaystyle {\vec {J}}} , which can be employed in the determination of the g -factor according to the rules of angular momentum coupling . In particular, the g -factor is defined by projecting the magnetic moment onto J → {\displaystyle {\vec {J}}} . Therefore, using the expectation values ⟨ L → ⋅ J → ⟩ = [ J ( J + 1 ) + L ( L + 1 ) − S ( S + 1 ) ] ℏ 2 / 2 {\displaystyle \langle {\vec {L}}\cdot {\vec {J}}\rangle =[J(J+1)+L(L+1)-S(S+1)]\hbar ^{2}/2} and ⟨ S → ⋅ J → ⟩ = [ J ( J + 1 ) + S ( S + 1 ) − L ( L + 1 ) ] ℏ 2 / 2 {\displaystyle \langle {\vec {S}}\cdot {\vec {J}}\rangle =[J(J+1)+S(S+1)-L(L+1)]\hbar ^{2}/2} , one gets the expression for g J {\displaystyle g_{J}} quoted above. The following table gives the calculated Landé g -factors for some common term symbols in the approximation g S = 2 {\displaystyle g_{S}=2} .
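As a quick numerical check of the formula above, here is a short Python sketch (not part of the article; the function name and interface are invented) that evaluates the Landé g-factor with exact rational arithmetic:

```python
from fractions import Fraction

def lande_g(J, L, S, gL=1, gS=2):
    """Landé g-factor:
    g_J = gL*(J(J+1) - S(S+1) + L(L+1)) / (2 J(J+1))
        + gS*(J(J+1) + S(S+1) - L(L+1)) / (2 J(J+1))."""
    J, L, S = Fraction(J), Fraction(L), Fraction(S)
    jj, ll, ss = J * (J + 1), L * (L + 1), S * (S + 1)
    return gL * (jj - ss + ll) / (2 * jj) + gS * (jj + ss - ll) / (2 * jj)

# Term symbols in the gS = 2 approximation:
print(lande_g(Fraction(1, 2), 0, Fraction(1, 2)))  # 2S_{1/2} → 2
print(lande_g(Fraction(1, 2), 1, Fraction(1, 2)))  # 2P_{1/2} → 2/3
print(lande_g(Fraction(3, 2), 1, Fraction(1, 2)))  # 2P_{3/2} → 4/3
```

With gL = 1 and gS = 2 the results match the simplified expression 3/2 + [S(S+1) − L(L+1)]/(2J(J+1)); an S = 0 state gives g_J = 1 and an L = 0 state gives g_J = 2, as stated above.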
https://en.wikipedia.org/wiki/Landé_g-factor
Lane Wyatt Martin is an American materials scientist and engineer specializing in complex oxide thin films, their physics and properties, and applications of the same. He is best known for his work on ferroelectric and multiferroic thin films. Currently he is the Robert A. Welch Professor of Materials Science and NanoEngineering, Chemistry, and Physics and Astronomy, and serves as the director of the Rice Advanced Materials Institute (RAMI) at Rice University . Martin was born in Lincoln, Nebraska , grew up primarily in Indiana, Pennsylvania , and graduated from Indiana Area Senior High School . He earned his Bachelor of Science in Materials Science and Engineering from Carnegie Mellon University in December 2003, in just three and a half years. He then pursued graduate studies at the University of California, Berkeley , obtaining a Master of Science (M.S., May 2006) and a Doctor of Philosophy (Ph.D., May 2008) in Materials Science and Engineering. After completing his doctorate, Martin served as a postdoctoral fellow in the Quantum Materials Program at Lawrence Berkeley National Laboratory from 2008 to 2009. He began his academic career as an assistant professor in the Department of Materials Science and Engineering at the University of Illinois, Urbana-Champaign . As an assistant professor of materials science and engineering, Martin received a National Science Foundation CAREER Award for his proposal, "Enhanced Pyroelectric and Electrocaloric Effects in Complex Oxide Thin Film Heterostructures." [ 1 ] He also helped devise a method to make thin films of ferroelectric material with twice the strain of traditional methods, giving the films exceptional electric properties. 
[ 2 ] In 2013, Martin was nominated for a Presidential Early Career Award for Scientists and Engineers by the United States Department of Defense "for his research accomplishments in the synthesis and study of multifunctional materials that have enabled the development and understanding of fundamentally new materials phenomena and potential for advanced devices." [ 3 ] In 2014, Martin returned to the University of California, Berkeley as an associate professor , was promoted to professor in July 2018, and served as vice chair and associate chair of the Department of Materials Science and Engineering from 2018 to 2021. From 2021 to 2023, Martin was a Chancellor's Professor and chair of the Department of Materials Science and Engineering at the University of California, Berkeley. He received the 2015 American Association for Crystal Growth Young Author Award for his "outstanding accomplishments in the heteroepitaxial crystal growth of complex oxide thin films" [ 4 ] and the 2016 Robert L. Coble Award for Young Scholars from the American Ceramic Society for outstanding contributions in ceramics research. [ 5 ] In 2021, Martin was elected a Fellow of the American Physical Society for his seminal contributions to the science of ferroelectrics. [ 6 ] During his tenure as chair, the University of California, Berkeley's Materials Science and Engineering (MSE) program was consistently ranked among the top in the nation; in the 2023 U.S. News & World Report rankings, the program was tied for the #2 position. Martin joined Rice University in July 2023 as the Robert A. Welch Professor of Materials Science and NanoEngineering, Chemistry, and Physics and Astronomy and as inaugural director of the Rice Advanced Materials Institute (RAMI), a hub for interdisciplinary research in advanced materials. 
RAMI brings together experts from materials science, chemistry, physics, and engineering to address pressing global challenges through innovations in material design and application. Under Martin's leadership, the institute focuses on exploring a diverse array of materials to enable transformative advances in areas such as next-generation low-power electronics and communications, energy storage and conversion, and catalysis, separations, and storage, while fostering collaboration across academic, industry, and governmental sectors. RAMI aims to advance the frontiers of science while promoting sustainable and impactful technological solutions. Martin's research focuses on the design and characterization of functional materials, particularly dielectric , piezoelectric , pyroelectric , ferroelectric , and multiferroic materials. His work centers on the synthesis (growth), characterization, and utilization of emergent functions in such materials, particularly in epitaxial thin films. By applying innovative approaches to making materials, he is able to access new states of matter and to explore fundamental materials physics through growth and epitaxy, strain, defect, and interfacial engineering. In turn, his work explores the unique properties of these materials, including their ability to generate electric charge under mechanical stress and to change physical dimensions when subjected to an electric field. Martin investigates the fundamental mechanisms governing the behavior of these materials at the atomic scale, aiming to enhance their performance for a wide range of applications. His research has significant implications for energy harvesting and conversion, advanced sensors, next-generation logic, and data-storage technologies, addressing global challenges in energy efficiency and the development of sustainable technologies. 
Throughout his career, Martin has made significant contributions to understanding how to produce unexpected properties and phenomena in ferroelectric materials. For example, his team used strain gradients induced by compositional gradients to create built-in potentials that can give rise to properties not found in the bulk. [ 7 ] Using similar approaches, his team greatly expanded the range of operating temperatures for a given ferroelectric system by creating polarization gradients. [ 8 ] In this work, the team directly measured the gradient and found that it expanded the temperature range for optimal performance of the material across a 500-degree-Celsius window (nearly an order of magnitude larger than for standard materials). In other systems, Martin's ability to finely control materials at the unit-cell level has led to him and colleagues discovering new states of matter in layered versions of materials. For example, by repeatedly layering a ferroelectric and a dielectric (with each layer just a few nanometers thick), it was found that entirely unexpected polarization textures could form, including so-called polar vortices [ 9 ] and polar skyrmions. [ 10 ] These emergent features are topologically protected states that were not expected to form, and it was only the team's ability to control materials at these exacting sizes that allowed such effects to be produced. In turn, these emergent structures exhibit a range of novel properties and respond in intriguing ways under excitation, including being highly light-sensitive [ 11 ] and undergoing dramatic evolution of their phases under illumination. [ 12 ] The materials Martin works on are being widely considered for an array of applications and devices. Among other contributions, he has helped develop pathways to reduce the energy costs of, and speed up, the switching of ferroelectric materials, [ 13 ] which could enable their more ready utilization in logic and memory applications. 
Likewise, Martin has developed ways to make such materials with properties rarely obtained in thin films, again showing a pathway to potential low-power logic and memory applications. [ 14 ] Martin has also demonstrated how the same classes of materials could be useful for everything from waste-heat energy conversion via a process called pyroelectric energy conversion [ 15 ] to solid-state capacitive energy storage [ 16 ] suited to pulsed-energy needs, and he has worked to develop new understanding of fuel-cell materials. [ 17 ] His work on relaxor ferroelectrics and antiferroelectrics is also bringing new understanding of how these materials are structured and how they respond to applied fields. In the former, among other contributions, he and his colleagues developed new understanding of the nanoscale structure of these materials and how it relates to their properties, [ 18 ] and in the latter, he demonstrated that an electric-field-driven phase transition in these materials results in a large volume change that can be used for actuation. [ 19 ] Both of these classes of materials are being considered for applications in micro- and nano-electromechanical systems. As of January 2025, Martin has authored over 300 papers, with his work cited approximately 32,500 times, resulting in an h-index of 81. [ 20 ] Martin and his wife Sophi have one son together. [ 33 ]
https://en.wikipedia.org/wiki/Lane_W._Martin
In road-transport terminology, lane centering , also known as lane centering assist , lane assist , auto steer or autosteer , is an advanced driver-assistance system that keeps a road vehicle centered in the lane, relieving the driver of the task of steering. Lane centering is similar to lane departure warning and lane keeping assist , but rather than warning the driver or bouncing the car away from the lane edge, it keeps the car centered in the lane. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Together with adaptive cruise control (ACC), this feature may allow unassisted driving for some length of time. [ 5 ] [ 6 ] [ 7 ] It is also part of automated lane keeping systems . Starting in 2019, semi-trailer trucks have also been fitted with this technology. [ 8 ] [ 9 ] Lane centering keeps the vehicle centered in the lane and almost always comes with steering assist to help the vehicle take gentle turns at highway speeds. [ 10 ] Lane departure warning generates a warning when the vehicle crosses a line, while lane keeping assist helps the vehicle to avoid crossing a line, as standardized in ISO 11270:2014. [ 11 ] In farming, "machine autosteer" is a technology that automates the steering and positioning of a machine in a landscape. [ 12 ] Lane departure avoidance (LDA), usually after an initial command or confirmation by the driver, automatically applies steering to move the vehicle to an adjacent lane. The first commercially available lane centering systems were based on off-the-shelf systems created by Mobileye , such as Tesla Autopilot and Nissan ProPilot , [ 14 ] although Tesla switched to an in-house design when Mobileye ended their partnership. [ 15 ] A handful of companies like Bosch, Delphi, ZF and Mobileye provide sensors, control units, or algorithms to car makers, who then integrate and refine those systems. 
[ 16 ] While not directly attributable to lane centering, crash rates on the Tesla Model S and Model X equipped with the Mobileye system were reduced by almost 40% while Tesla Autopilot was in use. [ 17 ] [ 18 ] [ 19 ] The lane detection system used by the lane departure warning system uses image processing techniques to detect lane lines from real-time camera images fed from cameras mounted on the automobile. Examples of image processing techniques used include the Hough transform , Canny edge detector , Gabor filter and deep learning . A basic flowchart of how a lane detection algorithm works to produce lane departure warning is shown in the figures. Features that differentiate systems include how well they perform on turns, speed limitations, and whether the system resumes from a stop. [ 20 ] [ 21 ] Current lane centering systems rely on visible lane markings. They typically cannot decipher faded, missing, incorrect or overlapping lane markings. Markings covered in snow, or obsolete lane markings left visible, can affect the accuracy of the system. [ 22 ] GM's Super Cruise only works on known freeways that have been previously mapped, [ 23 ] as it uses a combination of these maps and a precise GNSS position provided by Trimble's RTX GNSS correction service to determine if Super Cruise can be enabled or not. [ 23 ] Most vehicles require the driver's hands to remain on the wheel, but GM's Super Cruise monitors the driver's eyes to ensure human attention to the road, [ 24 ] and thus allows hands-free driving. Mobileye claimed in 2018 that 11 automakers would incorporate their EyeQ4 chip that enables L2+ and L3 autonomous systems ; this would collectively represent more than 50% of the auto industry. [ 25 ] Level 2 automation is also known as "hands off": this system takes full control of the vehicle (accelerating, braking, and steering). Level 3 is also known as "eyes off": the driver can safely turn their attention away from driving, e.g. 
the driver can text or watch a movie. [ 26 ] In 2018, the average selling price of the EyeQ4 chip to automakers was about 450 U.S. dollars. [ 27 ] Nissan uses the EyeQ4 chip for its hands-off ProPilot 2.0 system. [ 28 ] In the United States, as of 2018, lane centering systems are not covered by any Federal Motor Vehicle Safety Standards , according to the NHTSA. [ 29 ] Territories such as the European Union, Japan, Russia, Turkey, Egypt and the United Kingdom follow the UNECE 79 regulation. [ 30 ] In territories following the UNECE 79 regulation, automatically commanded steering functions are classified in several categories. While all those functions relate to automated steering, lane centering is close to the concept of category B2, while LKA is closer to category B1. Current international regulations require assistance systems to monitor that the driver keeps their hands on the steering wheel, with escalating warnings and eventual disengagement if they fail to do so. In North America, some manufacturers have "hands-off" systems which instead monitor whether the driver is paying attention to the road ahead. [ 32 ] Because all of these vehicles also have adaptive cruise control that can work in tandem with lane centering, they meet the SAE standard for level 2 automation . Adaptive cruise control and lane centering are often only available in more expensive trim levels rather than the base trim. An example is the Hyundai Kona EV, which only has adaptive cruise control available on the "Ultimate" edition. [ 33 ] Nissan ProPilot is based on Mobileye technology [ 135 ] and assists with acceleration, steering and braking input under single-lane highway driving conditions. [ 136 ] ProPilot keeps the car centered in the lane and will deactivate below 31 mph if not tracking a car in front of it. 
[ 104 ] Adaptive cruise control handles stop-and-go traffic if stopped for less than 4 seconds [ 137 ] and helps maintain a set vehicle speed and a safe distance from the vehicle ahead. ProPilot, which can follow curves, [ 138 ] uses a forward-facing camera, forward-facing radar and other sensors. A traffic sign recognition system provides drivers with the most recent speed limit information detected by a camera on the windshield, in front of the rear-view mirror. In a review by ExtremeTech, ProPilot worked well in 1,000 miles of testing, and only on some twisty sections did it require driver intervention. [ 139 ] During Euro NCAP 2018 testing, ProPilot failed some tests, as did all other systems tested. [ 140 ] [ 141 ] Consumer Reports indicates that ProPilot is especially helpful in stop-and-go traffic. [ 142 ] Honda Sensing and AcuraWatch are suites of advanced driver assistance features including a Lane Keeping Assist System (LKAS), which helps keep the vehicle centered in a lane by applying mild steering torque if the vehicle is deviating from the center of a detected lane with no turn-signal activation by the driver. [ 143 ] [ 144 ] The Lane Keeping Assist System (LKAS) does not work at speeds below 45 mph (72 km/h). However, certain vehicles equipped with Traffic Jam Assist (TJA) will have that system take over the lane-keeping task below 45 mph (72 km/h) down to a stop, automatically switching back to LKAS when the speed exceeds 45 mph (72 km/h). The Honda Sensing and AcuraWatch packages also include: Quote from David Zuby, chief research officer at the Insurance Institute for Highway Safety : [ 145 ] We're not ready to say yet which company has the safest implementation of Level 2 driver assistance, but it's important to note that none of these vehicles is capable of driving safely on its own... The report indicated that only the Tesla Model 3 stayed within the lane on all 18 trials. 
Quote from the report: The evidence for safety benefits of active lane-keeping systems isn't as pronounced as for ACC. Still, the potential to prevent crashes and save lives is large. IIHS research shows that preventing lane-departure crashes could save nearly 8,000 lives in a typical year...
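The control idea behind lane centering, steering so that the measured lateral offset from the lane center is driven to zero, can be illustrated with a toy proportional-derivative loop. Everything below (the gains, the one-line vehicle model, the saturation limit) is invented for demonstration and is not any manufacturer's actual controller.

```python
# Toy sketch only: a proportional-derivative lane-centering loop.
# Gains, limits, and the crude lateral dynamics are invented;
# production systems fuse camera, map and vehicle data.

def centering_steer(offset_m, offset_rate_mps, kp=0.5, kd=0.2, limit=0.1):
    """Steering command (rad) pushing the car toward the lane center.
    Positive offset means the car sits right of center."""
    cmd = -(kp * offset_m + kd * offset_rate_mps)
    return max(-limit, min(limit, cmd))  # actuator saturation

# Simulate a car starting 0.5 m right of center at 30 m/s
offset, rate, dt, speed = 0.5, 0.0, 0.05, 30.0
for _ in range(200):
    steer = centering_steer(offset, rate)
    rate += steer * speed * dt   # crude lateral dynamics
    offset += rate * dt
# after 10 simulated seconds the offset has decayed essentially to zero
```

The derivative term damps the correction so the car settles on the center line instead of oscillating across it; real systems additionally preview upcoming curvature, schedule gains with speed, and layer on the driver-monitoring logic described above.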
https://en.wikipedia.org/wiki/Lane_centering
In road-transport terminology, a lane departure warning system ( LDWS ) is a mechanism designed to warn the driver when the vehicle begins to move out of its lane (unless a turn signal is on in that direction) on freeways and arterial roads . These systems are designed to minimize accidents by addressing the main causes of collisions: driver error, distraction and drowsiness. In 2009 the U.S. National Highway Traffic Safety Administration (NHTSA) began studying whether to mandate lane departure warning systems and frontal collision warning systems on automobiles. [ 1 ] [ 2 ] There are four types of systems: Another system is emergency lane keeping (ELK), which applies a correction to a vehicle that drifts beyond a solid lane marking. [ 3 ] One of the main causes of single-vehicle crashes and frontal crashes is lane departure. The goal of lateral support systems (LSS) is to help avoid such crashes. [ 3 ] Without LSS, lane departure can be unintentional; the car drifts towards and across the edge of the lane and may then reach a potentially dangerous situation. [ 3 ] This system does not work when the edge of the lane is not marked by a line. The lane detection system behind the lane departure warning system uses the Hough transform and the Canny edge detector to detect lane lines from real-time camera images fed from the front-end camera of the automobile. A basic flowchart of how a lane detection algorithm works to support lane departure warning is shown in the figures. Lane warning/keeping systems are based on: In Europe, the lane departure warning system should be compatible with a visible lane-marking identification standard such as Commission Regulation (EU) 351/2012. The concept and a working model of this technology was invented and fitted to a Rover SD1 in England by British inventor Nick Parish in 1988. Patent application No 8911571.1 was made in 1989. 
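The Hough-transform step mentioned above can be sketched in a few lines: each edge pixel votes for every line (ρ, θ) that could pass through it, and the most-voted bin is taken as the lane line. This toy example (the 100×100 image size and the synthetic 45° "lane line" are invented for illustration) skips the Canny stage and fabricates the edge points directly.

```python
import numpy as np

# Minimal Hough-transform sketch, not production code. Edge detection
# (e.g. Canny) is assumed already done; here we fabricate edge points
# lying on a synthetic diagonal lane line, col = row.

h, w = 100, 100
edge_points = [(y, y) for y in range(h)]     # (row, col) pairs on the line

thetas = np.deg2rad(np.arange(0, 180))       # candidate line angles
cos_t, sin_t = np.cos(thetas), np.sin(thetas)
diag = int(np.hypot(h, w))                   # max possible |rho|
accumulator = np.zeros((2 * diag, len(thetas)), dtype=int)

# Each edge point votes once per theta, at rho = x*cos(theta) + y*sin(theta)
for y, x in edge_points:
    rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
    accumulator[rhos, np.arange(len(thetas))] += 1

# The dominant bin identifies the line in normal form (rho, theta)
rho_idx, theta_idx = np.unravel_index(accumulator.argmax(), accumulator.shape)
print(f"theta = {np.rad2deg(thetas[theta_idx]):.0f} deg, rho = {rho_idx - diag}")
# dominant bin corresponds to theta = 135 degrees, rho = 0 (the line col = row)
```

A real pipeline would run edge detection on a cropped, perspective-corrected camera frame first, keep several strong peaks (left and right lane markings), and track them across frames.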
The first production lane departure warning system in Europe was developed by the United States company Iteris for Mercedes Actros commercial trucks. The system debuted in 2000, and is now available on many new cars, SUVs, and trucks. [ 5 ] In 2002, the Iteris system became available on Freightliner Trucks ' North American vehicles. In both these systems, the driver is warned of unintentional lane departures by an audible rumble strip sound generated on the side of the vehicle drifting out of the lane. No warnings are generated if, before crossing the lane, an active turn signal is given by the driver. [ citation needed ] Nissan Motors began offering a lane-keeping support system on the Cima 450XV Limited (F500) sold in Japan. [ 6 ] Toyota introduced its lane monitoring system [ 7 ] on models such as the Caldina and Alphard [ 8 ] sold in Japan; this system warns the driver if it appears the vehicle is beginning to drift out of its lane. [ 9 ] Honda launched its Lane Keep Assist System (LKAS) on the Inspire . [ 10 ] [ 11 ] It provides up to 80% of steering torque to keep the car in its lane on the highway. It is also designed to make highway driving less cumbersome, by minimizing the driver's steering input. [ 12 ] A camera, mounted at the top of the windshield just above the rear-view mirror , scans the road ahead in a 40-degree radius, picking up the dotted white lines used to divide lane boundaries on the highway. The computer recognizes that the driver is "locked into" a particular lane, monitors how sharp a curve is, and uses factors such as yaw and vehicle speed to calculate the steering input required. [ 13 ] In 2004, the first passenger-vehicle system available in North America was jointly developed by Iteris and Valeo for Nissan on the Infiniti FX and (in 2005) the M vehicles. [ 14 ] In this system, a camera (mounted in the overhead console above the mirror) monitors the lane markings on a roadway. 
A warning tone is triggered to alert the driver when the vehicle begins to drift over the markings. Also in 2004, Toyota added a lane keeping assist feature to the Crown Majesta which can apply a small counter-steering force to aid in keeping the vehicle in its lane. [ 15 ] Citroën became the first in Europe to offer LDWS on its 2005 C4 and C5 models, and its C6 . This system uses infrared sensors to monitor lane markings on the road surface, and a vibration mechanism in the seat alerts the driver of deviations. [ 4 ] Lexus introduced a multi-mode lane keeping assist system on the LS 460 , which utilizes stereo cameras and more sophisticated object- and pattern-recognition processors. This system can issue an audiovisual warning and also (using the electric power steering or EPS) steer the vehicle to hold its lane. It also applies counter-steering torque to help ensure the driver does not over-correct or "saw" the steering wheel while attempting to return the vehicle to its proper lane. [ 16 ] If the radar cruise control system is engaged, the Lane Keep function works to help reduce the driver's steering-input burden by providing steering torque; however, the driver must remain active or the system will deactivate. [ 17 ] [ 18 ] In 2007, Audi began offering its Audi lane assist feature [ 19 ] for the first time on the Q7 . This system, unlike the Japanese "assist" systems, will not intervene in actual driving; rather, it will vibrate the steering wheel if the vehicle appears to be exiting its lane. The LDW System in Audi is based on a forward-looking video-camera in its visible range, instead of the downward-looking infrared sensors in the Citroën. [ 20 ] Also in 2007, Infiniti offered a newer version of its 2004 system, which it called the Lane Departure Prevention (LDP) system. This feature utilizes the vehicle stability control system to help assist the driver maintain lane position by applying gentle brake pressure on the appropriate wheels. 
[ 21 ] General Motors introduced Lane Departure Warning on its 2008 model-year Cadillac STS , DTS , and Buick Lucerne models. The General Motors system warns the driver with an audible tone and a warning indicator on the dashboard. BMW also introduced Lane Departure Warning on the 5 Series (E60) and 6 Series , using a vibrating steering wheel to warn the driver of unintended departures. In late 2013 BMW updated the system with Traffic Jam Assistant appearing first on the redesigned BMW X5 , this system works below 25 miles per hour (40 km/h). Volvo introduced the lane departure warning system and the driver alert control on its 2008 model-year S80 , the V70 , and XC70 executive cars . [ 22 ] Volvo's lane departure warning system uses a camera to track road markings and sound an alarm when drivers depart their lane without signaling. The systems used by BMW, Volvo, and General Motors are based on core technology from Mobileye . Mercedes-Benz began offering a Lane Keeping Assist function on the new E-class . [ 23 ] This system warns the driver (with a steering-wheel vibration) if it appears the vehicle is beginning to leave its lane. Another feature will automatically deactivate and reactivate if it ascertains the driver is intentionally leaving his lane (for instance, aggressively cornering). A newer version will use the braking system to assist in maintaining the vehicle's lane. Kia Motors offered the 2011 Cadenza premium sedan with an optional lane departure warning system (LDWS) in limited markets. This system uses a flashing dashboard icon and emits an audible warning when a white lane marking is being crossed, and emits a louder audible warning when a yellow-line marking is crossed. This system is canceled when a turn signal is operating, or by pressing a deactivation switch on the dashboard; it works by using an optical sensor on both sides of the car. Audi A7 introduces Audi active lane assist. 
Mobileye developed a system that detected lane markings and identified when a vehicle departed from its driving lane without use of the turn signal. [ 24 ] [ non-primary source needed ] Mercedes introduced Distronic Plus with Steering Assist and Stop&Go Pilot on the redesigned S-Class in 2013. The Tesla Model S came with advanced lane-assistance systems with its 2014 release. [ 25 ] It was also released with a speed-assist feature in which the front-facing camera reads traffic speed limits using computer-vision character recognition and conveys them to the car; on roads where traffic signs are absent, it relies on GPS data. When the car moves away from a lane at above 30 miles per hour (48 km/h), the system beeps and the steering wheel vibrates, alerting the driver of an unintended lane change. This happens during speed-limit non-compliance as well. Fiat launched its lane keep assist feature based on TRW's lane keeping assist system (also known as the haptic lane feedback system). This system integrates the lane-detection camera with TRW's electric power-steering system; when an unintended lane departure is detected (the turn signal is not engaged to indicate the driver's desire to change lanes), the electric power-steering system introduces a gentle torque that helps guide the driver back toward the center of the lane. Introduced on the Lancia Delta in 2008, this system earned the Italian Automotive Technical Association's Best Automotive Innovation of the Year Award for 2008. Peugeot introduced the same system as Citroën in its new 308 . Lane departure warning systems combine prevention with risk reports in the transportation industry. Viewnyx applies video-based technology to assist fleets in lowering their driving liability costs. By providing safety managers with driver- and fleet-risk assessment reports and tools, it facilitates proactive coaching and training to eliminate high-risk behaviors. 
The Lookout Solution is used by North American fleets, and there is research on implementing a lane departure warning system via a mobile phone. [ 26 ] The Insurance Institute for Highway Safety raised concerns that drivers may be less vigilant when relying on automated safety systems, or may become distracted by dashboard displays that monitor how the systems are performing. Two separate studies found that vehicles with lane-keeping systems and blind spot monitoring systems had lower crash rates than the same vehicles without the systems. Police crash data from 25 states between 2009 and 2015, for vehicle models on which the systems were sold as options, showed reductions of 11 percent in the rates of single-vehicle, sideswipe, and head-on crashes, and of 21 percent in injuries in such crashes. The sample size was not large enough to control for demographic and other variables. [ 27 ] Lane keeping assist (LKA) is a feature that, in addition to the lane departure warning system, automatically takes steps to ensure the vehicle stays in its lane. Some vehicles combine adaptive cruise control with lane keeping systems to provide additional safety. While the combination of these features creates a semi-autonomous vehicle [ non sequitur ] , most require the driver to remain in control of the vehicle while it is in use, because of the limitations associated with the lane-keeping feature. [ 28 ] Lane keeping assist is achieved in modern vehicle systems using image-processing techniques such as the Hough transform and Canny edge detection . These techniques derive lane data from forward-facing cameras attached to the front of the vehicle. Real-time image processing on powerful computers like Nvidia 's Drive PX1 is used by many vehicle OEMs to work toward fully autonomous vehicles, in which the lane-detection algorithm plays a key part. Advanced lane-detection algorithms are also being developed using deep learning and neural network techniques. 
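The Hough-transform step mentioned above is straightforward to sketch. The following is an illustrative toy, not production driver-assistance code: it assumes a binary edge map (the kind of output a Canny detector produces) and votes in (rho, theta) accumulator space; library routines such as OpenCV's `HoughLines` work the same way, but heavily optimized.

```python
import math

def hough_lines(edge_points, n_theta=180):
    """Vote edge pixels into a (rho, theta) accumulator and return the
    strongest line in normal form rho = x*cos(theta) + y*sin(theta)."""
    acc = {}
    for x, y in edge_points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    (rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, t * math.pi / n_theta, votes

# Synthetic "edge map": a vertical lane marking at x = 40,
# standing in for the output of a Canny edge detector.
edges = [(40, y) for y in range(0, 100, 2)]
rho, theta, votes = hough_lines(edges)
print(rho, theta, votes)  # 40 0.0 50
```

All 50 edge pixels of the vertical marking fall into the single bin (rho = 40, theta = 0), so the accumulator peak directly identifies the lane boundary; a real system then checks the recovered line against the vehicle's position to decide whether a departure warning or corrective torque is needed.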
[ 29 ] Nvidia has achieved high accuracy in developing self-driving features, including lane keeping, using a neural-network-based training mechanism: a front-facing camera in a car is run through a route, and the steering input and camera images of the road are fed into the neural network to make it 'learn'. The trained network is then able to adjust the steering angle based on the course of the lane and keep the car in the middle of the lane. [ 30 ] A lane keeping assist mechanism can either reactively turn a vehicle back into the lane if it starts to leave, or proactively keep the vehicle in the center of the lane. Vehicle companies often use the term "lane keep(ing) assist" to refer to both reactive lane keep assist (LKA) and proactive lane centering assist (LCA), but the terms are beginning to be differentiated. In 2020, UNECE released an automated lane keeping system (ALKS) regulation which includes features such as lane keeping and adaptive speed control for specific roads at up to 60 km/h. Lane keeping assist is mandatory for new cars and vans in the European Union as of 2022 under the name Emergency Lane Keeping System. [ 82 ] Lane departure warning systems and lane keeping systems rely on visible lane markings. They typically cannot decipher faded, missing, or incorrect lane markings. Markings covered in snow, or old lane markings left visible, can hinder the ability of the system. [ 28 ] UNECE regulation 130 does not require the LDWS of heavy vehicles to work below 60 km/h or in curves with a radius smaller than 250 meters. 
[ 83 ] Lane departure warning systems also face many legal limitations regarding autonomous driving. As stated previously, the system requires constant driver input; vehicles with this technology are limited to assisting the driver, not driving the vehicle. The biggest limitation of lane departure warning systems is that they are not in complete control of the vehicle. The system does not take into account other vehicles on the road and "cannot replace good driving habits". [ 84 ] American Automobile Association testers found advanced driver assistance systems inconsistent and dangerous. Systems performed 'mostly as expected', but when approaching a simulated disabled vehicle a collision occurred 66 percent of the time, with an average impact speed of 25 mph (40 km/h). [ 85 ]
https://en.wikipedia.org/wiki/Lane_departure_warning_system
The Lane hydrogen producer was an apparatus for hydrogen production based on the steam-iron process and water gas [ 1 ] invented in 1903 [ 2 ] by a British engineer, Howard Lane. The first commercial Lane hydrogen producer was commissioned in 1904. By 1913, 850,000,000 cubic feet (24,000,000 m 3 ) of hydrogen was manufactured annually by this process. [ 3 ] In the early part of the 20th century, the process found some use as a means of producing hydrogen lifting gas for airships , as it could produce large volumes of gas cheaply. Lane producers were installed at some British airship stations so the gas could be manufactured on-site. To work efficiently, however, the plant required skilled operators and had to be run as a quasi-continuous process. A competing process, referred to as the Silicol process , reacted ferrosilicon with a strong sodium hydroxide solution and had the advantage of flexibility. [ 4 ] In the 1940s the Lane process was superseded by cheaper methods of hydrogen production that used oil or natural gas as a feedstock. [ 3 ] Whereas hydrogen was commonly produced with single-retort generators like the Messerschmitt [ 5 ] and the Bamag type, Lane introduced the multiple-retort type. In the Lane generator, water gas was used to heat the retorts up to 600-800 °C, after which a water gas-air mixture was used in the retorts. In the steam-iron process the iron oxidizes and has to be replaced with fresh metal; in the Lane hydrogen producer the iron is reduced with water gas back to its metallic condition, after which the process restarts. The chemical reactions are [ 3 ] the oxidation of the iron by steam, 3 Fe + 4 H 2 O → Fe 3 O 4 + 4 H 2 , and the regeneration of the iron by the water gas, Fe 3 O 4 + 4 CO → 3 Fe + 4 CO 2 (and likewise with H 2 ). The net chemical reaction is: H 2 O + CO → H 2 + CO 2 . 
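For illustration, the oxidation step of the steam-iron process (3 Fe + 4 H2O → Fe3O4 + 4 H2) fixes the hydrogen yield per unit of iron charged. The sketch below is a back-of-the-envelope check assuming ideal-gas molar volume at 0 °C and 1 atm; it says nothing about the actual throughput or efficiency of a Lane plant.

```python
# Oxidation step of the steam-iron process: 3 Fe + 4 H2O -> Fe3O4 + 4 H2
M_FE = 55.845      # g/mol, molar mass of iron
V_MOLAR = 22.414   # L/mol, ideal-gas molar volume at 0 degC and 1 atm

def hydrogen_litres_per_kg_iron():
    mol_fe = 1000.0 / M_FE       # moles of Fe in 1 kg
    mol_h2 = mol_fe * 4.0 / 3.0  # 4 mol H2 released per 3 mol Fe oxidized
    return mol_h2 * V_MOLAR

print(round(hydrogen_litres_per_kg_iron()))  # ~535 L of H2 per kg of iron
```

Since the iron is regenerated by the water gas rather than consumed, the same charge could be cycled repeatedly, which is what made the process economical for its time.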
https://en.wikipedia.org/wiki/Lane_hydrogen_producer
In astrophysics , the Lane–Emden equation is a dimensionless form of Poisson's equation for the gravitational potential of a Newtonian self-gravitating, spherically symmetric, polytropic fluid. It is named after astrophysicists Jonathan Homer Lane and Robert Emden . [ 1 ] The equation reads 1 ξ 2 d d ξ ( ξ 2 d θ d ξ ) + θ n = 0 , {\displaystyle {\frac {1}{\xi ^{2}}}{\frac {d}{d\xi }}\left({\xi ^{2}{\frac {d\theta }{d\xi }}}\right)+\theta ^{n}=0,} where ξ {\displaystyle \xi } is a dimensionless radius and θ {\displaystyle \theta } is related to the density, and thus the pressure, by ρ = ρ c θ n {\displaystyle \rho =\rho _{c}\theta ^{n}} for central density ρ c {\displaystyle \rho _{c}} . The index n {\displaystyle n} is the polytropic index that appears in the polytropic equation of state, P = K ρ 1 + 1 n {\displaystyle P=K\rho ^{1+{\frac {1}{n}}}\,} where P {\displaystyle P} and ρ {\displaystyle \rho } are the pressure and density, respectively, and K {\displaystyle K} is a constant of proportionality. The standard boundary conditions are θ ( 0 ) = 1 {\displaystyle \theta (0)=1} and θ ′ ( 0 ) = 0 {\displaystyle \theta '(0)=0} . Solutions thus describe the run of pressure and density with radius and are known as polytropes of index n {\displaystyle n} . If an isothermal fluid (polytropic index tends to infinity) is used instead of a polytropic fluid, one obtains the Emden–Chandrasekhar equation . Physically, hydrostatic equilibrium connects the gradient of the potential, the density, and the gradient of the pressure, whereas Poisson's equation connects the potential with the density. Thus, if we have a further equation that dictates how the pressure and density vary with respect to one another, we can reach a solution. The particular choice of a polytropic gas as given above makes the mathematical statement of the problem particularly succinct and leads to the Lane–Emden equation. 
The equation is a useful approximation for self-gravitating spheres of plasma such as stars, but the assumption of a polytropic equation of state is typically rather limiting. Consider a self-gravitating, spherically symmetric fluid in hydrostatic equilibrium . Mass is conserved and thus described by the continuity equation d m d r = 4 π r 2 ρ {\displaystyle {\frac {dm}{dr}}=4\pi r^{2}\rho } where ρ {\displaystyle \rho } is a function of r {\displaystyle r} . The equation of hydrostatic equilibrium is 1 ρ d P d r = − G m r 2 {\displaystyle {\frac {1}{\rho }}{\frac {dP}{dr}}=-{\frac {Gm}{r^{2}}}} where m {\displaystyle m} is also a function of r {\displaystyle r} . Differentiating again gives d d r ( 1 ρ d P d r ) = 2 G m r 3 − G r 2 d m d r = − 2 ρ r d P d r − 4 π G ρ {\displaystyle {\begin{aligned}{\frac {d}{dr}}\left({\frac {1}{\rho }}{\frac {dP}{dr}}\right)&={\frac {2Gm}{r^{3}}}-{\frac {G}{r^{2}}}{\frac {dm}{dr}}\\&=-{\frac {2}{\rho r}}{\frac {dP}{dr}}-4\pi G\rho \end{aligned}}} where the continuity equation has been used to replace the mass gradient. Multiplying both sides by r 2 {\displaystyle r^{2}} and collecting the derivatives of P {\displaystyle P} on the left, one can write r 2 d d r ( 1 ρ d P d r ) + 2 r ρ d P d r = d d r ( r 2 ρ d P d r ) = − 4 π G r 2 ρ {\displaystyle r^{2}{\frac {d}{dr}}\left({\frac {1}{\rho }}{\frac {dP}{dr}}\right)+{\frac {2r}{\rho }}{\frac {dP}{dr}}={\frac {d}{dr}}\left({\frac {r^{2}}{\rho }}{\frac {dP}{dr}}\right)=-4\pi Gr^{2}\rho } Dividing both sides by r 2 {\displaystyle r^{2}} yields, in some sense, a dimensional form of the desired equation. 
If, in addition, we substitute for the polytropic equation of state with P = K ρ c 1 + 1 n θ n + 1 {\displaystyle P=K\rho _{c}^{1+{\frac {1}{n}}}\theta ^{n+1}} and ρ = ρ c θ n {\displaystyle \rho =\rho _{c}\theta ^{n}} , we have 1 r 2 d d r ( r 2 K ρ c 1 n ( n + 1 ) d θ d r ) = − 4 π G ρ c θ n {\displaystyle {\frac {1}{r^{2}}}{\frac {d}{dr}}\left(r^{2}K\rho _{c}^{\frac {1}{n}}(n+1){\frac {d\theta }{dr}}\right)=-4\pi G\rho _{c}\theta ^{n}} Gathering the constants and substituting r = α ξ {\displaystyle r=\alpha \xi } , where α 2 = ( n + 1 ) K ρ c 1 n − 1 / 4 π G , {\displaystyle \alpha ^{2}=(n+1)K\rho _{c}^{{\frac {1}{n}}-1}/4\pi G,} we have the Lane–Emden equation, 1 ξ 2 d d ξ ( ξ 2 d θ d ξ ) + θ n = 0 {\displaystyle {\frac {1}{\xi ^{2}}}{\frac {d}{d\xi }}\left({\xi ^{2}{\frac {d\theta }{d\xi }}}\right)+\theta ^{n}=0} Equivalently, one can start with Poisson's equation , ∇ 2 Φ = 1 r 2 d d r ( r 2 d Φ d r ) = 4 π G ρ {\displaystyle \nabla ^{2}\Phi ={\frac {1}{r^{2}}}{\frac {d}{dr}}\left(r^{2}{\frac {d\Phi }{dr}}\right)=4\pi G\rho } One can replace the gradient of the potential using the hydrostatic equilibrium, via d Φ d r = − 1 ρ d P d r {\displaystyle {\frac {d\Phi }{dr}}=-{\frac {1}{\rho }}{\frac {dP}{dr}}} which again yields the dimensional form of the Lane–Emden equation. For a given value of the polytropic index n {\displaystyle n} , denote the solution to the Lane–Emden equation as θ n ( ξ ) {\displaystyle \theta _{n}(\xi )} . In general, the Lane–Emden equation must be solved numerically to find θ n {\displaystyle \theta _{n}} . There are exact, analytic solutions for certain values of n {\displaystyle n} , in particular: n = 0 , 1 , 5 {\displaystyle n=0,1,5} . For n {\displaystyle n} between 0 and 5, the solutions are continuous and finite in extent, with the radius of the star given by R = α ξ 1 {\displaystyle R=\alpha \xi _{1}} , where θ n ( ξ 1 ) = 0 {\displaystyle \theta _{n}(\xi _{1})=0} . 
For a given solution θ n {\displaystyle \theta _{n}} , the density profile is given by ρ = ρ c θ n n . {\displaystyle \rho =\rho _{c}\theta _{n}^{n}.} The total mass M {\displaystyle M} of the model star can be found by integrating the density over radius, from 0 to ξ 1 {\displaystyle \xi _{1}} . The pressure can be found using the polytropic equation of state, P = K ρ 1 + 1 n {\displaystyle P=K\rho ^{1+{\frac {1}{n}}}} , i.e. P = K ρ c 1 + 1 n θ n n + 1 {\displaystyle P=K\rho _{c}^{1+{\frac {1}{n}}}\theta _{n}^{n+1}} Finally, if the gas is ideal , the equation of state is P = k B ρ T / μ {\displaystyle P=k_{B}\rho T/\mu } , where k B {\displaystyle k_{B}} is the Boltzmann constant and μ {\displaystyle \mu } the mean molecular weight. The temperature profile is then given by T = K μ k B ρ c 1 / n θ n {\displaystyle T={\frac {K\mu }{k_{B}}}\rho _{c}^{1/n}\theta _{n}} In spherically symmetric cases, the Lane–Emden equation is integrable for only three values of the polytropic index n {\displaystyle n} . If n = 0 {\displaystyle n=0} , the equation becomes 1 ξ 2 d d ξ ( ξ 2 d θ d ξ ) + 1 = 0 {\displaystyle {\frac {1}{\xi ^{2}}}{\frac {d}{d\xi }}\left(\xi ^{2}{\frac {d\theta }{d\xi }}\right)+1=0} Re-arranging and integrating once gives ξ 2 d θ d ξ = C 1 − 1 3 ξ 3 {\displaystyle \xi ^{2}{\frac {d\theta }{d\xi }}=C_{1}-{\frac {1}{3}}\xi ^{3}} Dividing both sides by ξ 2 {\displaystyle \xi ^{2}} and integrating again gives θ ( ξ ) = C 0 − C 1 ξ − 1 6 ξ 2 {\displaystyle \theta (\xi )=C_{0}-{\frac {C_{1}}{\xi }}-{\frac {1}{6}}\xi ^{2}} The boundary conditions θ ( 0 ) = 1 {\displaystyle \theta (0)=1} and θ ′ ( 0 ) = 0 {\displaystyle \theta '(0)=0} imply that the constants of integration are C 0 = 1 {\displaystyle C_{0}=1} and C 1 = 0 {\displaystyle C_{1}=0} . 
Therefore, θ ( ξ ) = 1 − 1 6 ξ 2 {\displaystyle \theta (\xi )=1-{\frac {1}{6}}\xi ^{2}} When n = 1 {\displaystyle n=1} , the equation can be expanded in the form d 2 θ d ξ 2 + 2 ξ d θ d ξ + θ = 0 {\displaystyle {\frac {d^{2}\theta }{d\xi ^{2}}}+{\frac {2}{\xi }}{\frac {d\theta }{d\xi }}+\theta =0} One assumes a power series solution: θ ( ξ ) = ∑ n = 0 ∞ a n ξ n {\displaystyle \theta (\xi )=\sum _{n=0}^{\infty }a_{n}\xi ^{n}} This leads to a recursive relationship for the expansion coefficients: a n + 2 = − a n ( n + 3 ) ( n + 2 ) {\displaystyle a_{n+2}=-{\frac {a_{n}}{(n+3)(n+2)}}} This relation can be solved leading to the general solution: θ ( ξ ) = a 0 sin ⁡ ξ ξ + a 1 cos ⁡ ξ ξ {\displaystyle \theta (\xi )=a_{0}{\frac {\sin \xi }{\xi }}+a_{1}{\frac {\cos \xi }{\xi }}} The boundary condition for a physical polytrope demands that θ ( ξ ) → 1 {\displaystyle \theta (\xi )\rightarrow 1} as ξ → 0 {\displaystyle \xi \rightarrow 0} . This requires that a 0 = 1 , a 1 = 0 {\displaystyle a_{0}=1,a_{1}=0} , thus leading to the solution: θ ( ξ ) = sin ⁡ ξ ξ {\displaystyle \theta (\xi )={\frac {\sin \xi }{\xi }}} This exact solution was found by accident when searching for zero values of the related TOV Equation . [ 2 ] We consider a series expansion around ξ = 0 {\displaystyle \xi =0} : θ = ∑ m = 0 ∞ a m ξ m {\displaystyle \theta =\sum \limits _{m=0}^{\infty }a_{m}\xi ^{m}} with initial values θ | ξ = 0 = θ 0 {\displaystyle \theta |_{\xi =0}=\theta _{0}} and d θ d ξ | ξ = 0 = 0 {\displaystyle \left.{\frac {d\theta }{d\xi }}\right|_{\xi =0}=0} . Plugging this into the Lane-Emden equation, we can show that all odd coefficients of the series vanish, a 2 m + 1 = 0 {\displaystyle a_{2m+1}=0} . Furthermore, we obtain a recursive relationship between the even coefficients b m = a 2 m {\displaystyle b_{m}=a_{2m}} of the series. 
b m + 1 = − 1 ( 2 m + 2 ) ( 2 m + 3 ) ∑ k = 0 m b m − k b k {\displaystyle b_{m+1}=-{\frac {1}{(2m+2)(2m+3)}}\sum \limits _{k=0}^{m}b_{m-k}b_{k}} It was proven that this series converges at least for ξ ≤ 1 {\displaystyle \xi \leq 1} , but numerical results showed good agreement for much larger values. We start with the Lane–Emden equation for n = 5 {\displaystyle n=5} : 1 ξ 2 d d ξ ( ξ 2 d θ d ξ ) + θ 5 = 0 {\displaystyle {\frac {1}{\xi ^{2}}}{\frac {d}{d\xi }}\left(\xi ^{2}{\frac {d\theta }{d\xi }}\right)+\theta ^{5}=0} and verify that θ ( ξ ) = ( 1 + ξ 2 / 3 ) − 1 / 2 {\displaystyle \theta (\xi )=(1+\xi ^{2}/3)^{-1/2}} solves it. Differentiating produces: d θ d ξ = − ξ 3 [ 1 + ξ 2 3 ] 3 / 2 {\displaystyle {\frac {d\theta }{d\xi }}=-{\frac {\xi }{3\left[1+{\frac {\xi ^{2}}{3}}\right]^{3/2}}}} Differentiating ξ 2 d θ d ξ {\displaystyle \xi ^{2}{\frac {d\theta }{d\xi }}} with respect to ξ and dividing by − ξ 2 {\displaystyle -\xi ^{2}} leads to: − 1 ξ 2 d d ξ ( ξ 2 d θ d ξ ) = 1 [ 1 + ξ 2 3 ] 3 / 2 − ξ 2 3 [ 1 + ξ 2 3 ] 5 / 2 = 1 [ 1 + ξ 2 3 ] 5 / 2 {\displaystyle -{\frac {1}{\xi ^{2}}}{\frac {d}{d\xi }}\left(\xi ^{2}{\frac {d\theta }{d\xi }}\right)={\frac {1}{\left[1+{\frac {\xi ^{2}}{3}}\right]^{3/2}}}-{\frac {\xi ^{2}}{3\left[1+{\frac {\xi ^{2}}{3}}\right]^{5/2}}}={\frac {1}{\left[1+{\frac {\xi ^{2}}{3}}\right]^{5/2}}}} which equals θ 5 = 1 [ 1 + ξ 2 3 ] 5 / 2 {\displaystyle \theta ^{5}={\frac {1}{\left[1+{\frac {\xi ^{2}}{3}}\right]^{5/2}}}} Therefore, the Lane–Emden equation has the solution θ ( ξ ) = 1 1 + ξ 2 / 3 {\displaystyle \theta (\xi )={\frac {1}{\sqrt {1+\xi ^{2}/3}}}} when n = 5 {\displaystyle n=5} . This solution is finite in mass but infinite in radial extent, and therefore the complete polytrope does not represent a physical solution. Chandrasekhar believed for a long time that finding other solutions for n = 5 {\displaystyle n=5} "is complicated and involves elliptic integrals". In 1962, Sambhunath Srivastava found an explicit solution when n = 5 {\displaystyle n=5} . 
[ 3 ] His solution is given by θ = sin ⁡ ( ln ⁡ ξ ) 3 ξ − 2 ξ sin 2 ⁡ ( ln ⁡ ξ ) , {\displaystyle \theta ={\frac {\sin(\ln {\sqrt {\xi }})}{\sqrt {3\xi -2\xi \sin ^{2}(\ln {\sqrt {\xi }})}}},} and from this solution, a family of solutions θ ( ξ ) → A θ ( A ξ ) {\displaystyle \theta (\xi )\rightarrow {\sqrt {A}}\,\theta (A\xi )} can be obtained using homology transformation. Since this solution does not satisfy the conditions at the origin (in fact, it is oscillatory with amplitudes growing indefinitely as the origin is approached), it can be used in composite stellar models. In applications, the main role is played by analytic solutions that are expressible as a convergent power series expanded around some initial point. Typically the expansion point is ξ = 0 {\displaystyle \xi =0} , which is also a singular point (fixed singularity) of the equation, and some initial data θ ( 0 ) {\displaystyle \theta (0)} are provided at the centre of the star. One can prove [ 4 ] [ 5 ] that the equation has a convergent power series/analytic solution around the origin of the form θ ( ξ ) = θ ( 0 ) − θ ( 0 ) n 6 ξ 2 + O ( ξ 3 ) , ξ ≈ 0. {\displaystyle \theta (\xi )=\theta (0)-{\frac {\theta (0)^{n}}{6}}\xi ^{2}+O(\xi ^{3}),\quad \xi \approx 0.} The radius of convergence of this series is limited due to the existence [ 5 ] [ 7 ] of two singularities on the imaginary axis in the complex plane . These singularities are located symmetrically with respect to the origin. Their positions change when the equation parameters and the initial condition θ ( 0 ) {\displaystyle \theta (0)} change, and therefore they are called movable singularities, following the classification of the singularities of non-linear ordinary differential equations in the complex plane by Paul Painlevé . A similar structure of singularities appears in other non-linear equations that result from the reduction of the Laplace operator in spherical symmetry, e.g., the isothermal sphere equation. 
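The quadratic convolution in the recursion above (the sum over b_{m−k} b_k comes from the θ² term, i.e. the case n = 2) makes the even coefficients easy to generate exactly. The following sketch uses exact rational arithmetic; the first coefficients reproduce the general expansion θ = 1 − ξ²/6 + nξ⁴/120 − … with n = 2.

```python
from fractions import Fraction

def lane_emden_coeffs(theta0, terms):
    """Even-power coefficients b_m (theta = sum_m b_m xi^(2m)) for the
    Lane-Emden equation with n = 2, from the recursion
    b_{m+1} = -(sum_k b_{m-k} b_k) / ((2m+2)(2m+3))."""
    b = [Fraction(theta0)]
    for m in range(terms - 1):
        conv = sum(b[m - k] * b[k] for k in range(m + 1))
        b.append(-conv / ((2 * m + 2) * (2 * m + 3)))
    return b

b = lane_emden_coeffs(1, 4)
print(b[:3])  # [Fraction(1, 1), Fraction(-1, 6), Fraction(1, 60)]

def theta_series(xi, coeffs):
    """Evaluate the truncated series at xi."""
    return float(sum(c * Fraction(xi) ** (2 * m) for m, c in enumerate(coeffs)))

# Inside the proven radius of convergence, xi <= 1:
print(theta_series(1, lane_emden_coeffs(1, 12)))
```

With θ0 = 1 the coefficients 1, −1/6, 1/60, −11/7560, … match the known n = 2 expansion term by term, and the truncated series at ξ = 1 converges rapidly, consistent with the convergence statement above.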
[ 7 ] Analytic solutions can be extended along the real line by an analytic continuation procedure, resulting in the full profile of the star or molecular cloud cores. Two analytic solutions with overlapping circles of convergence can also be matched on the overlap to a larger-domain solution, which is a commonly used method of constructing profiles with required properties. The series solution is also used in the numerical integration of the equation: it is used to shift the initial data for the analytic solution slightly away from the origin, since at the origin the numerical methods fail due to the singularity of the equation. In general, solutions are found by numerical integration. Many standard methods require that the problem is formulated as a system of first-order ordinary differential equations , for example [ 8 ] d θ d ξ = − φ ξ 2 , d φ d ξ = θ n ξ 2 . {\displaystyle {\frac {d\theta }{d\xi }}=-{\frac {\varphi }{\xi ^{2}}},\qquad {\frac {d\varphi }{d\xi }}=\theta ^{n}\xi ^{2}.} Here, φ ( ξ ) {\displaystyle \varphi (\xi )} is interpreted as the dimensionless mass, defined by m ( r ) = 4 π α 3 ρ c φ ( ξ ) {\displaystyle m(r)=4\pi \alpha ^{3}\rho _{c}\varphi (\xi )} . The relevant initial conditions are φ ( 0 ) = 0 {\displaystyle \varphi (0)=0} and θ ( 0 ) = 1 {\displaystyle \theta (0)=1} . The first equation represents hydrostatic equilibrium and the second represents mass conservation. It is known that if θ ( ξ ) {\displaystyle \theta (\xi )} is a solution of the Lane–Emden equation, then so is C 2 / ( n − 1 ) θ ( C ξ ) {\displaystyle C^{2/(n-1)}\theta (C\xi )} . [ 9 ] Solutions that are related in this way are called homologous ; the process that transforms them is homology . If one chooses variables that are invariant to homology, then we can reduce the order of the Lane–Emden equation by one. A variety of such variables exist. 
A suitable choice is U = d log ⁡ m d log ⁡ r = ξ 3 θ n φ {\displaystyle U={\frac {d\log m}{d\log r}}={\frac {\xi ^{3}\theta ^{n}}{\varphi }}} and V = d log ⁡ P d log ⁡ r = ( n + 1 ) φ ξ θ {\displaystyle V={\frac {d\log P}{d\log r}}=(n+1){\frac {\varphi }{\xi \theta }}} We can differentiate the logarithms of these variables with respect to ξ {\displaystyle \xi } , which gives 1 U d U d ξ = 1 ξ ( 3 − n ( n + 1 ) − 1 V − U ) {\displaystyle {\frac {1}{U}}{\frac {dU}{d\xi }}={\frac {1}{\xi }}\left(3-n(n+1)^{-1}V-U\right)} and 1 V d V d ξ = 1 ξ ( − 1 + U + ( n + 1 ) − 1 V ) . {\displaystyle {\frac {1}{V}}{\frac {dV}{d\xi }}={\frac {1}{\xi }}\left(-1+U+(n+1)^{-1}V\right).} Finally, we can divide these two equations to eliminate the dependence on ξ {\displaystyle \xi } , which leaves d V d U = − V U ( U + ( n + 1 ) − 1 V − 1 U + n ( n + 1 ) − 1 V − 3 ) . {\displaystyle {\frac {dV}{dU}}=-{\frac {V}{U}}\left({\frac {U+(n+1)^{-1}V-1}{U+n(n+1)^{-1}V-3}}\right).} This is now a single first-order equation. The homology-invariant equation can be regarded as the autonomous pair of equations d U d log ⁡ ξ = − U ( U + n ( n + 1 ) − 1 V − 3 ) {\displaystyle {\frac {dU}{d\log \xi }}=-U\left(U+n(n+1)^{-1}V-3\right)} and d V d log ⁡ ξ = V ( U + ( n + 1 ) − 1 V − 1 ) . {\displaystyle {\frac {dV}{d\log \xi }}=V\left(U+(n+1)^{-1}V-1\right).} The behaviour of solutions to these equations can be determined by linear stability analysis. The critical points of the equation (where d V / d log ⁡ ξ = d U / d log ⁡ ξ = 0 {\displaystyle dV/d\log \xi =dU/d\log \xi =0} ) and the eigenvalues and eigenvectors of the Jacobian matrix are tabulated below. [ 10 ]
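For a concrete numerical integration in the spirit of the first-order formulation discussed above, the sketch below integrates dθ/dξ = −φ/ξ², dφ/dξ = θⁿξ² with a fixed-step RK4 scheme, started slightly off the origin with the series θ ≈ 1 − ξ²/6 to avoid the singularity. It is an illustration, not production stellar-structure code; the step size and cutoff are arbitrary choices.

```python
import math

def lane_emden_xi1(n, h=1e-4, xi_max=50.0):
    """First zero xi_1 of the Lane-Emden solution theta_n (for 0 <= n < 5).

    Integrates dtheta/dxi = -phi/xi**2, dphi/dxi = theta**n * xi**2
    with fixed-step RK4, started just off the origin from the series
    theta ~ 1 - xi**2/6, phi ~ xi**3/3 to avoid the singularity at xi = 0.
    """
    def rhs(xi, y):
        theta, phi = y
        t = max(theta, 0.0)  # guard: theta**n undefined for theta < 0
        return (-phi / xi ** 2, t ** n * xi ** 2)

    xi = 1e-3
    theta = 1.0 - xi * xi / 6.0
    phi = xi ** 3 / 3.0
    while theta > 0.0 and xi < xi_max:
        prev_xi, prev_theta = xi, theta
        y = (theta, phi)
        k1 = rhs(xi, y)
        k2 = rhs(xi + h / 2, tuple(v + h / 2 * k for v, k in zip(y, k1)))
        k3 = rhs(xi + h / 2, tuple(v + h / 2 * k for v, k in zip(y, k2)))
        k4 = rhs(xi + h, tuple(v + h * k for v, k in zip(y, k3)))
        theta += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        phi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        xi += h
    # linear interpolation across the sign change of theta
    return prev_xi + prev_theta * h / (prev_theta - theta)

print(round(lane_emden_xi1(0), 4))  # sqrt(6) ~ 2.4495
print(round(lane_emden_xi1(1), 4))  # pi     ~ 3.1416
```

For n = 0 and n = 1 the computed first zeros agree with the analytic values ξ1 = √6 and ξ1 = π; for n = 5 there is no finite zero, so the cutoff would be reached instead.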
https://en.wikipedia.org/wiki/Lane–Emden_equation
The Lang Factor is an estimated ratio of the total cost of creating a process within a plant to the cost of all its major technical components. It is widely used in industrial engineering to calculate the capital and operating costs of a plant. [ 1 ] [ 2 ] [ 3 ] The factors were introduced by H. J. Lang and Dr Michael Bird in Chemical Engineering magazine in 1947 as a method for estimating the total installation cost for plants and equipment. These factors are widely used in the refining and petrochemical industries to help estimate the cost of new facilities. A typical multiplier for a new unit within a refinery would be in the range of 5.0. When the purchase price of all the pumps, heat exchangers, pressure vessels, and other process equipment is multiplied by 5.0, a rough estimate of the total installed cost of the plant, including equipment, materials, construction, and engineering, is obtained. The accuracy of this estimating method is usually +/- 35%. The factors change over time because construction labor, bulk materials ( concrete , pipe , etc.), engineering design, indirect costs, and major process equipment prices often do not change at the same rate. In the late 1960s and early 1970s Kenneth Guthrie further expanded on this concept, generating different factors for different types of process equipment (pumps, exchangers, vessels, etc.). These are sometimes referred to as "Guthrie factors".
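The arithmetic of the method is a single multiplication; the snippet below is a toy illustration in which the equipment list and prices are invented, with the factor 5.0 and the +/- 35% accuracy taken from the figures quoted above.

```python
# Hypothetical purchase costs of the major process equipment, in USD
equipment = {
    "pumps": 1_200_000,
    "heat exchangers": 2_500_000,
    "pressure vessels": 3_300_000,
}

LANG_FACTOR = 5.0  # typical multiplier for a new unit within a refinery
ACCURACY = 0.35    # the +/- 35% accuracy quoted for the method

base = sum(equipment.values())
estimate = base * LANG_FACTOR
low, high = estimate * (1 - ACCURACY), estimate * (1 + ACCURACY)
print(f"Total installed cost: ${estimate:,.0f} "
      f"(range ${low:,.0f} to ${high:,.0f})")
```

The wide range is the point of the technique: it trades precision for speed, turning a short equipment list into an order-of-magnitude capital estimate before any detailed engineering is done.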
https://en.wikipedia.org/wiki/Lang_factor
In algebraic geometry , Lange's conjecture is a theorem about stability of vector bundles over curves, introduced by Herbert Lange [ de ] [ 1 ] and proved by Montserrat Teixidor i Bigas and Barbara Russo in 1999. Let C be a smooth projective curve of genus greater than or equal to 2. For generic vector bundles E 1 {\displaystyle E_{1}} and E 2 {\displaystyle E_{2}} on C of ranks and degrees ( r 1 , d 1 ) {\displaystyle (r_{1},d_{1})} and ( r 2 , d 2 ) {\displaystyle (r_{2},d_{2})} , respectively, a generic extension 0 → E 1 → E → E 2 → 0 {\displaystyle 0\rightarrow E_{1}\rightarrow E\rightarrow E_{2}\rightarrow 0} has E stable provided that μ ( E 1 ) < μ ( E 2 ) {\displaystyle \mu (E_{1})<\mu (E_{2})} , where μ ( E i ) = d i / r i {\displaystyle \mu (E_{i})=d_{i}/r_{i}} is the slope of the respective bundle. The notion of a generic vector bundle here is a generic point in the moduli space of semistable vector bundles on C , and a generic extension is one that corresponds to a generic point in the vector space Ext 1 {\displaystyle \operatorname {Ext} ^{1}} ( E 2 , E 1 ) {\displaystyle (E_{2},E_{1})} . An original formulation by Lange is that for pairs of integers ( r 1 , d 1 ) {\displaystyle (r_{1},d_{1})} and ( r 2 , d 2 ) {\displaystyle (r_{2},d_{2})} such that d 1 / r 1 < d 2 / r 2 {\displaystyle d_{1}/r_{1}<d_{2}/r_{2}} , there exists a short exact sequence as above with E stable. This formulation is equivalent because the existence of a short exact sequence like that is an open condition on E in the moduli space of semistable vector bundles on C .
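The slope condition is elementary to evaluate for candidate rank/degree pairs. The snippet below is nothing more than bookkeeping for μ(E) = deg(E)/rk(E) in exact rational arithmetic; the sample pairs are chosen arbitrarily.

```python
from fractions import Fraction

def slope(rank, degree):
    """Slope mu(E) = deg(E)/rk(E), kept exact as a rational number."""
    return Fraction(degree, rank)

def generic_extension_stable(r1, d1, r2, d2):
    """True when mu(E1) < mu(E2), the condition under which a generic
    extension of E2 by E1 has stable middle term E."""
    return slope(r1, d1) < slope(r2, d2)

print(generic_extension_stable(2, 1, 3, 2))  # 1/2 < 2/3 -> True
print(generic_extension_stable(2, 3, 3, 2))  # 3/2 < 2/3 -> False
```

Exact fractions avoid the floating-point pitfalls of comparing slopes such as 1/3 and 3333/10000, which matters when the two slopes are close.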
https://en.wikipedia.org/wiki/Lange's_conjecture
The Langendorff heart or isolated perfused heart assay is an ex vivo technique used in pharmacological and physiological research on animal and human hearts. [ 1 ] Named after the German physiologist Oskar Langendorff , this technique allows the examination of cardiac contractile strength and heart rate without the complications of an intact animal or human. [ 2 ] After more than 100 years, this method is still being used. [ 3 ] In the Langendorff preparation, the heart is removed from the animal's or human's body, severing the blood vessels ; it is then perfused in a reverse fashion ( retrograde perfusion ) via the aorta , usually with a nutrient-rich, oxygenated solution (e.g. Krebs–Henseleit solution or Tyrode's solution ). The backwards pressure causes the aortic valve to shut, forcing the solution into the coronary vessels, which normally supply the heart tissue with blood. This feeds nutrients and oxygen to the cardiac muscle, allowing it to continue beating for several hours after its removal from the body. This is a useful preparation because it allows the addition of drugs (via the perfusate) and observation of their effect on the heart without the complications involved with in vivo experimentation, such as neuronal and hormonal effects from the living animal or human. [ 4 ] This preparation also allows the organ to be digested into individual cells by adding collagenase to the perfusate. This can be done before the experiment as a technique for cell harvesting, or after the experiment to measure its effects at the cellular level. The first isolated perfused heart was created using frog tissue in 1866. [ 5 ]
https://en.wikipedia.org/wiki/Langendorff_heart
The Langer correction , named after the mathematician Rudolf Ernest Langer , is a correction to the WKB approximation for problems with radial symmetry. When applying the WKB approximation method to the radial Schrödinger equation , − ℏ 2 2 m d 2 R ( r ) d r 2 + V eff ( r ) R ( r ) = E R ( r ) , {\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}R(r)}{dr^{2}}}+V_{\textrm {eff}}(r)R(r)=ER(r),} where the effective potential is given by V eff ( r ) = V ( r ) + ℏ 2 ℓ ( ℓ + 1 ) 2 m r 2 {\displaystyle V_{\textrm {eff}}(r)=V(r)+{\frac {\hbar ^{2}\ell (\ell +1)}{2mr^{2}}}} ( ℓ {\displaystyle \ell } the azimuthal quantum number related to the angular momentum operator ), the eigenenergies and the wave function behaviour obtained are different from the real solution. In 1937, Rudolf E. Langer suggested a correction ℓ ( ℓ + 1 ) → ( ℓ + 1 2 ) 2 {\displaystyle \ell (\ell +1)\rightarrow \left(\ell +{\frac {1}{2}}\right)^{2}} which is known as the Langer correction or Langer replacement . [ 1 ] This manipulation is equivalent to adding a constant 1/4 wherever ℓ ( ℓ + 1 ) {\displaystyle \ell (\ell +1)} appears. Heuristically, it is said that this factor arises because the range of the radial Schrödinger equation is restricted from 0 to infinity, as opposed to the entire real line. With such a change of the constant term in the effective potential, the results obtained by the WKB approximation reproduce the exact spectrum for many potentials. That the Langer replacement is correct follows from the WKB calculation of the Coulomb eigenvalues with the replacement, which reproduces the well-known result. [ 2 ] Note that for 2D systems, the effective potential takes the form V eff ( r ) = V ( r ) + ℏ 2 ( ℓ 2 − 1 4 ) 2 m r 2 , {\displaystyle V_{\textrm {eff}}(r)=V(r)+{\frac {\hbar ^{2}(\ell ^{2}-{\frac {1}{4}})}{2mr^{2}}},} so the Langer correction goes: [ 3 ] ( ℓ 2 − 1 4 ) → ℓ 2 . 
{\displaystyle \left(\ell ^{2}-{\frac {1}{4}}\right)\rightarrow \ell ^{2}.} This manipulation is also equivalent to inserting a constant 1/4 wherever ℓ 2 {\displaystyle \ell ^{2}} appears. An even more convincing calculation is the derivation of Regge trajectories (and hence eigenvalues) of the radial Schrödinger equation with the Yukawa potential , both by a perturbation method (with the old ℓ ( ℓ + 1 ) {\displaystyle \ell (\ell +1)} factor) and, independently, by the WKB method (with the Langer replacement), in both cases even to higher orders. For the perturbation calculation see the Müller-Kirsten book [ 4 ] and for the WKB calculation Boukema. [ 5 ] [ 6 ]
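The effect of the replacement is easiest to see for the Coulomb potential, where the leading-order WKB radial action gives the textbook closed form E = −1/(2(n_r + 1/2 + √c)²) in atomic units, c being the coefficient of the centrifugal term. The sketch below assumes that closed form (not derived here) and compares the two choices of c against the exact hydrogen ground state:

```python
import math

def wkb_coulomb_energy(n_r, l, langer=True):
    """Leading-order WKB energy (atomic units) for V(r) = -1/r, from the
    closed form E = -1/(2*(n_r + 1/2 + sqrt(c))**2), where c is the
    coefficient of the centrifugal term: c = l*(l+1) without the Langer
    correction and c = (l + 1/2)**2 with it."""
    c = (l + 0.5) ** 2 if langer else l * (l + 1)
    return -0.5 / (n_r + 0.5 + math.sqrt(c)) ** 2

# Hydrogen ground state (n_r = 0, l = 0); exact energy is -0.5 hartree
print(wkb_coulomb_energy(0, 0, langer=True))   # -0.5 (exact)
print(wkb_coulomb_energy(0, 0, langer=False))  # -2.0 (far off)
```

With the Langer replacement, √c = ℓ + 1/2, so the WKB energies collapse onto the exact spectrum −1/(2n²) with n = n_r + ℓ + 1; without it, even the ground state is off by a factor of four.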
https://en.wikipedia.org/wiki/Langer_correction
Langerin ( CD207 ) is a type II transmembrane protein which is encoded by the CD207 gene in humans. [ 5 ] [ 6 ] It was discovered by the scientists Sem Saeland and Jenny Valladeau as a main component of Birbeck granules . Langerin is a C-type lectin receptor on Langerhans cells (LCs) and, in mice, also on dermal interstitial CD103+ dendritic cells (DC) and on resident CD8+ DC in lymph nodes . [ 6 ] [ 7 ] [ 8 ] Langerin consists of a relatively short intracellular domain and an extracellular domain which consists of a neck region and a carbohydrate recognition domain (CRD). The intracellular part contains a proline -rich domain (PRD). The neck region consists of alpha-helixes and mediates the formation of langerin homotrimers via a coiled-coil interaction. The homotrimer formation increases the avidity and specificity of antigen binding. [ 9 ] The CRD of langerin is similar to the CRDs of other C-type lectins. It contains an EPN motif – a Glu-Pro-Asn-rich region. The CRD is divided into two lobes by 2 anti-parallel beta-sheets . The upper lobe creates the primary Ca2+-dependent carbohydrate binding site. [ 9 ] In contrast to other lectins, for instance DC-SIGN / DC-SIGNR and MBP , langerin has only one binding site for Ca2+. [ 5 ] In the upper lobe, two other binding sites have been discovered by crystallization. These sites are not dependent on Ca2+ and their relation to the primary binding site is not completely understood. All the binding sites are flanked by positively charged amino acids (K299 and K313) which enable binding of negatively charged sulphated carbohydrates. These amino acids are not present in DC-SIGN. [ 9 ] Langerin is expressed in LCs, which are located in the epidermis and in the vaginal and oral mucosa . 
LCs are immune cells closely related to macrophages , but in function they are more like conventional dendritic cells (cDCs). [ 10 ] Langerin recognizes and binds carbohydrates such as mannose , fucose and N-acetylglucosamine . Thus, LCs may react against pathogens such as HIV-1 , Mycobacterium leprae and Candida albicans . After a pathogen binds to langerin, its fate is not yet fully understood. It has been proposed that the pathogen is internalised into a cytoplasmic organelle called the Birbeck granule , where degradation and antigen processing for presentation to T-cells take place. For instance, langerin binds lipoarabinomannans of mycobacteria, and inside the Birbeck granules it contributes to the binding of the antigen to the CD1a molecule. In mice , langerin is involved in antigen binding to MHC II glycoproteins and to MHC I glycoproteins during cross-presentation . [ 9 ] An intracellular Src homology domain of langerin appears to be important for the formation of Birbeck granules. These organelles contain Rab11a , a molecule participating in langerin recycling. [ 9 ] Langerin is similar in function and structure to the DC surface protein DC-SIGN (CD209). Both receptors bind similar antigens via the CRD, for instance Mycobacterium tuberculosis and HIV-1. However, whereas HIV-1 binding to langerin leads to elimination of the virus, HIV-1 binding to DC-SIGN leads to infection of the cell . [ 9 ] In human vaginal mucosa, LCs bind the strongly glycosylated glycoprotein gp120 of the HIV-1 envelope via langerin. Subsequently, the virus is internalised into the Birbeck granule, where it is degraded and processed for presentation. Thus, langerin has antiviral activity and protects the cell against HIV-1 infection. If langerin is defective or viral titres are too high, HIV-1 infection may occur. [ 9 ] [ 11 ] [ 12 ] Langerin also binds mannose in the outer membrane of fungi, and beta-glucans in fungal membrane folds. 
In this way, LCs can protect themselves against pathogens like Candida , Saccharomyces and Malassezia furfur . Furthermore, langerin recognizes the Gal-6-sulfated lactosamine of glioblastoma . [ 9 ] [ 13 ] In the respiratory epithelium , LCs recognize measles virus via langerin, then degrade it and present it to CD4+ T-cells . [ 13 ] Single nucleotide polymorphisms (SNPs) in the langerin gene may affect the stability of the protein as well as its affinity for some carbohydrates. The most common polymorphism replaces the alanine at position 278 with valine (rs741326). The allelic frequency of this polymorphism is up to 48%, but it probably has no influence on the stability or affinity of langerin. Replacement of the asparagine at position 288 with aspartic acid leads to a 10-fold reduction in the ability to recognize mannose-BSA. Replacement of the tryptophan at position 264 with arginine leads to a loss of Birbeck granules . [ 9 ]
https://en.wikipedia.org/wiki/Langerin
In physics , Langevin dynamics is an approach to the mathematical modeling of the dynamics of molecular systems using the Langevin equation . It was originally developed by French physicist Paul Langevin . The approach is characterized by the use of simplified models while accounting for omitted degrees of freedom by the use of stochastic differential equations . Langevin dynamics simulations are a kind of Monte Carlo simulation . [ 1 ] Real world molecular systems occur in air or solvents, rather than in isolation in a vacuum. Jostling of solvent or air molecules causes friction, and occasional high-velocity collisions perturb the system. Langevin dynamics attempts to extend molecular dynamics to allow for these effects. Langevin dynamics also allows temperature to be controlled as with a thermostat, thus approximating the canonical ensemble . Langevin dynamics mimics the viscous aspect of a solvent. It does not fully model an implicit solvent ; specifically, the model accounts for neither electrostatic screening nor the hydrophobic effect . For denser solvents, hydrodynamic interactions are not captured by Langevin dynamics. 
For a system of N {\displaystyle N} particles with masses M {\displaystyle M} , with coordinates X = X ( t ) {\displaystyle X=X(t)} that constitute a time-dependent random variable , the resulting Langevin equation is [ 2 ] [ 3 ] M X ¨ = − ∇ U ( X ) − γ M X ˙ + 2 M γ k B T R ( t ) , {\displaystyle M\,{\ddot {\mathbf {X} }}=-\mathbf {\nabla } U(\mathbf {X} )-\gamma \,M\,{\dot {\mathbf {X} }}+{\sqrt {2\,M\,\gamma \,k_{\rm {B}}T}}\,\mathbf {R} (t)\,,} where U ( X ) {\displaystyle U(\mathbf {X} )} is the particle interaction potential; ∇ {\displaystyle \nabla } is the gradient operator such that − ∇ U ( X ) {\displaystyle -\mathbf {\nabla } U(\mathbf {X} )} is the force calculated from the particle interaction potentials; the dot is a time derivative such that X ˙ {\displaystyle {\dot {\mathbf {X} }}} is the velocity and X ¨ {\displaystyle {\ddot {\mathbf {X} }}} is the acceleration; γ {\displaystyle \gamma } is the damping constant (units of reciprocal time), also known as the collision frequency; T {\displaystyle T} is the temperature, k B {\displaystyle k_{\rm {B}}} is the Boltzmann constant ; and R ( t ) {\displaystyle \mathbf {R} (t)} is a delta-correlated stationary Gaussian process with zero-mean, called Gaussian white noise , satisfying ⟨ R ( t ) ⟩ = 0 {\displaystyle \left\langle \mathbf {R} (t)\right\rangle =0} ⟨ R ( t ) ⋅ R ( t ′ ) ⟩ = δ ( t − t ′ ) {\displaystyle \left\langle \mathbf {R} (t)\cdot \mathbf {R} (t')\right\rangle =\delta (t-t')} Here, δ {\displaystyle \delta } is the Dirac delta . 
Considering the covariance of standard Brownian motion (the Wiener process ) W t {\displaystyle W_{t}} , we find that E ( W t W τ ) = min ( t , τ ) {\displaystyle \mathbb {E} (W_{t}W_{\tau })=\min(t,\tau )} Define the covariance matrix of the derivative as E ( W t ˙ W τ ˙ ) = ∂ ∂ t ∂ ∂ τ E ( W t W τ ) = ∂ ∂ t ∂ ∂ τ min ( t , τ ) = δ ( t − τ ) {\displaystyle \mathbb {E} ({\dot {W_{t}}}{\dot {W_{\tau }}})={\frac {\partial }{\partial t}}{\frac {\partial }{\partial \tau }}\mathbb {E} (W_{t}W_{\tau })={\frac {\partial }{\partial t}}{\frac {\partial }{\partial \tau }}\min(t,\tau )=\delta (t-\tau )} So, in the sense of covariance, we can write d W t = R ( t ) d t {\displaystyle {\rm {d}}W_{t}=\mathbf {R} (t){\rm {d}}t} Without loss of generality, let the mass M = 1 {\displaystyle M=1} and σ = M γ k B T {\displaystyle \sigma ={\sqrt {M\gamma k_{\rm {B}}T}}} ; then the original SDE becomes d X ˙ = − ∇ U ( X ) d t − γ d X + 2 σ d W ( t ) {\displaystyle {\rm {d}}{\dot {\mathbf {X} }}=-\nabla U(\mathbf {X} ){\rm {d}}t-\gamma {\rm {d}}{\mathbf {X} }+{\sqrt {2}}\sigma {\rm {d}}\mathbf {W} (t)} If the main objective is to control temperature, care should be exercised to use a small damping constant γ {\displaystyle \gamma } . As γ {\displaystyle \gamma } grows, the behavior spans from the inertial all the way to the diffusive ( Brownian ) regime. The non-inertial limit of Langevin dynamics is commonly described as Brownian dynamics . Brownian dynamics can be considered as overdamped Langevin dynamics, i.e. Langevin dynamics where no average acceleration takes place. 
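The covariance identity E(W_t W_τ) = min(t, τ) above is easy to check numerically. The following minimal sketch (not part of the original article; parameters are illustrative) builds Wiener paths from independent Gaussian increments and averages the product at two fixed times:

```python
import numpy as np

# Numerical check of E[W_t W_tau] = min(t, tau):
# a Wiener path is a cumulative sum of independent N(0, dt) increments.
rng = np.random.default_rng(0)
n_paths, n_steps, dt = 20_000, 100, 0.01
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)

t_idx, tau_idx = 49, 99          # times t = 0.5 and tau = 1.0
cov = np.mean(W[:, t_idx] * W[:, tau_idx])
print(cov)                       # should be near min(0.5, 1.0) = 0.5
```

The sample covariance converges to min(t, τ) as the number of paths grows, consistent with the delta-correlated derivative used in the text.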
Under this limit we have d X ˙ = 0 {\displaystyle {\rm {d}}{\dot {X}}=0} , and the original SDE becomes d X = − 1 γ ∇ U ( X ) d t + 2 σ γ d W ( t ) {\displaystyle {\rm {d}}{\mathbf {X} }=-{\frac {1}{\gamma }}\nabla U(\mathbf {X} ){\rm {d}}t+{\frac {{\sqrt {2}}\sigma }{\gamma }}{\rm {d}}\mathbf {W} (t)} The translational Langevin equation can be solved using various numerical methods [ 4 ] [ 5 ] that differ in the sophistication of analytical solutions, the allowed time-steps, time-reversibility ( symplectic methods ), behavior in the limit of zero friction, etc. The Langevin equation can be generalized to rotational dynamics of molecules , Brownian particles, etc. A standard (according to NIST [ 6 ] ) way to do this is to leverage a quaternion -based description of the stochastic rotational motion. [ 7 ] [ 8 ] The Langevin thermostat [ 9 ] is a type of thermostat algorithm in molecular dynamics used to simulate a canonical ensemble (NVT) at a desired temperature. It integrates the following Langevin equation of motion: M X ¨ = − ∇ U ( X ) − γ X ˙ + 2 γ k B T R ( t ) {\displaystyle M{\ddot {\mathbf {X} }}=-\nabla U(\mathbf {X} )-\gamma {\dot {\mathbf {X} }}+{\sqrt {2\gamma k_{B}T}}{\textbf {R}}(t)} Here − ∇ U ( X ) {\displaystyle -\nabla U(\mathbf {X} )} is the deterministic force term; γ {\displaystyle \gamma } is the friction coefficient and γ X ˙ {\displaystyle \gamma {\dot {X}}} is the friction or damping term; the last term is the random force term ( k B {\displaystyle k_{B}} : Boltzmann constant , T {\displaystyle T} : temperature). This equation allows the system to couple to an imaginary "heat bath": the kinetic energy of the system is dissipated by the friction/damping term and replenished by the random force/fluctuation; the strength of the coupling is controlled by γ {\displaystyle \gamma } . 
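The thermostat described above can be sketched in a few lines. The following minimal example (not from the article; all parameters, the harmonic potential U(x) = x²/2, and M = 1 are illustrative assumptions) integrates the thermostatted equation with a simple Euler–Maruyama step and checks equipartition:

```python
import numpy as np

# Minimal Euler-Maruyama sketch of a Langevin thermostat (M = 1).
# U(x) = x**2 / 2 is a stand-in potential; parameters are illustrative.
rng = np.random.default_rng(2)
gamma, kBT, dt = 1.0, 0.5, 1e-3
n_steps, n = 20_000, 1_000

x = np.zeros(n)
v = np.zeros(n)
for _ in range(n_steps):
    force = -x                                   # -dU/dx for U = x**2 / 2
    # dv = (force - gamma * v) dt + sqrt(2 * gamma * kBT * dt) * N(0, 1)
    v += (force - gamma * v) * dt \
         + np.sqrt(2 * gamma * kBT * dt) * rng.normal(size=n)
    x += v * dt

# Equipartition check: <v**2> should approach kBT for one degree of freedom.
print(np.mean(v**2))
```

After many relaxation times, the mean-squared velocity settles near k_BT, i.e. the bath drives the system to the target temperature regardless of the initial conditions.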
This equation can be simulated with SDE solvers such as the Euler–Maruyama method , in which the random force term is replaced by a Gaussian random number at every integration step (variance σ 2 = 2 γ k B T / Δ t {\displaystyle \sigma ^{2}=2\gamma k_{B}T/\Delta t} , Δ t {\displaystyle \Delta t} : time step), or Langevin leapfrog integration , etc. This method is also known as a Langevin integrator. [ 10 ] The overdamped Langevin equation gives d x t = − D k B T ∇ x U ( x t ) d t + 2 D d W t {\displaystyle {\rm {d}}\mathbf {x} _{t}=-{\frac {D}{k_{B}T}}\nabla _{\mathbf {x} }U(\mathbf {x} _{t}){\rm {d}}t+{\sqrt {2D}}{\rm {d}}W_{t}} Here, D = k B T / γ {\displaystyle D=k_{B}T/\gamma } is the diffusion coefficient from the Einstein relation . As can be shown with the Fokker-Planck equation , under appropriate conditions the stationary distribution of x t {\displaystyle \mathbf {x} _{t}} is the Boltzmann distribution p ( x ) ∝ e − U ( x ) / k B T {\displaystyle p(\mathbf {x} )\propto e^{-U(\mathbf {x} )/k_{B}T}} . Since ∇ log ⁡ p ( x ) = − ∇ U ( x ) / k B T {\displaystyle \nabla \log p(\mathbf {x} )=-\nabla U(\mathbf {x} )/k_{B}T} , this equation is equivalent to the following form: d x t = ϵ ∇ x log ⁡ p ( x t ) d t + 2 ϵ d W t {\displaystyle {\rm {d}}\mathbf {x} _{t}=\epsilon \nabla _{\mathbf {x} }\log p(\mathbf {x} _{t}){\rm {d}}t+{\sqrt {2\epsilon }}{\rm {d}}W_{t}} The distribution of x t ( t → ∞ ) {\displaystyle \mathbf {x} _{t}(t\to \infty )} follows p ( x ) {\displaystyle p(\mathbf {x} )} . In other words, Langevin dynamics drives particles towards the stationary distribution p ( x ) {\displaystyle p(\mathbf {x} )} along a gradient flow, due to the ∇ log ⁡ p ( x ) {\displaystyle \nabla \log p(\mathbf {x} )} term, while still allowing for some random fluctuations. This provides a Markov chain Monte Carlo method that can be used to sample data x {\displaystyle \mathbf {x} } from a target distribution p ( x ) {\displaystyle p(\mathbf {x} )} , known as Langevin Monte Carlo . 
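The sampling idea can be illustrated with a target whose score is known in closed form. In the following sketch (not from the article; the Gaussian target N(μ, s²) and all parameters are illustrative assumptions), iterating the discretized overdamped dynamics drives an arbitrary initial ensemble toward the target distribution:

```python
import numpy as np

# Langevin Monte Carlo sketch: sample p(x) using only grad log p.
# Target here is N(mu, s**2), so grad log p(x) = -(x - mu) / s**2.
rng = np.random.default_rng(4)
mu, s, eps = 3.0, 2.0, 0.01
score = lambda x: -(x - mu) / s**2

x = np.zeros(5_000)              # 5,000 independent chains, all started at 0
for _ in range(5_000):
    x += eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=x.shape)

print(x.mean(), x.std())         # should approach mu = 3 and s = 2
```

For small step size ε, the empirical mean and spread of the chains converge to those of the target, as the stationary-distribution argument in the text predicts.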
In many applications, we have a desired distribution p ( x ) {\displaystyle p(\mathbf {x} )} from which we would like to sample x {\displaystyle \mathbf {x} } , but direct sampling might be challenging or inefficient. Langevin Monte Carlo offers another way to sample x ∼ p ( x ) {\displaystyle \mathbf {x} \sim p(\mathbf {x} )} by simulating a Markov chain in accordance with the Langevin dynamics whose stationary state is p ( x ) {\displaystyle p(\mathbf {x} )} . The Metropolis-adjusted Langevin algorithm (MALA) is an example: given a current state x t {\displaystyle \mathbf {x} _{t}} , the MALA method proposes a new state x ~ t + 1 {\displaystyle {\tilde {x}}_{t+1}} using the Langevin dynamics above. The proposal is then accepted or rejected based on the Metropolis-Hastings algorithm . The incorporation of the Langevin dynamics in the choice of x ~ t + 1 {\displaystyle {\tilde {x}}_{t+1}} provides greater computational efficiency, since the dynamics drive the particles into regions of higher probability p ( x ) {\displaystyle p(\mathbf {x} )} , where proposals are more likely to be accepted. Read more in Metropolis-adjusted Langevin algorithm . Langevin dynamics is one of the foundations of score-based generative models . [ 11 ] [ 12 ] From (overdamped) Langevin dynamics, d x t = ϵ ∇ x log ⁡ p ( x t ) d t + 2 ϵ d W t {\displaystyle {\rm {d}}\mathbf {x} _{t}=\epsilon \nabla _{\mathbf {x} }\log p(\mathbf {x} _{t}){\rm {d}}t+{\sqrt {2\epsilon }}{\rm {d}}W_{t}} A generative model aims to generate samples that follow the (unknown) data distribution p ( x ) {\displaystyle p(\mathbf {x} )} . To achieve that, a score-based model learns an approximate score function s θ ( x ) ≈ ∇ x log ⁡ p ( x ) {\displaystyle \mathbf {s} _{\theta }(\mathbf {x} )\approx \nabla _{\mathbf {x} }\log p(\mathbf {x} )} (a process called score matching ). 
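The MALA propose-then-accept step described above can be sketched as follows. This is a minimal illustration (not from the article): the 1-D standard-normal target, step size, and chain length are all assumptions chosen so the Gaussian proposal density can be written explicitly:

```python
import numpy as np

# MALA sketch targeting a 1-D standard normal p(x) prop. to exp(-x**2 / 2),
# so grad log p(x) = -x. Step size eps is illustrative.
rng = np.random.default_rng(3)
eps = 0.1

def log_p(x):
    return -0.5 * x * x

def log_q(a, b):
    # Log density (up to a constant) of proposing a from b:
    # a ~ N(b + eps * grad log p(b), 2 * eps)
    mean = b + eps * (-b)
    return -(a - mean) ** 2 / (4 * eps)

x, samples = 0.0, []
for _ in range(20_000):
    prop = x + eps * (-x) + np.sqrt(2 * eps) * rng.normal()   # Langevin proposal
    log_alpha = log_p(prop) + log_q(x, prop) - log_p(x) - log_q(prop, x)
    if np.log(rng.uniform()) < log_alpha:
        x = prop                                  # accept; otherwise keep state
    samples.append(x)

samples = np.array(samples)
print(samples.mean(), samples.var())              # should be near 0 and 1
```

Because the Metropolis-Hastings correction makes the chain exactly invariant for p(x), the sample mean and variance approach the target's 0 and 1 even at a finite step size.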
With access to a score function, samples are generated by the following iteration, [ 11 ] [ 12 ] x i + 1 ← x i + ϵ ∇ x log ⁡ p ( x i ) + 2 ϵ z i , i = 0 , 1 , ⋯ , K {\displaystyle \mathbf {x} _{i+1}\gets \mathbf {x} _{i}+\epsilon \nabla _{\mathbf {x} }\log p(\mathbf {x} _{i})+{\sqrt {2\epsilon }}\mathbf {z} _{i},\quad i=0,1,\cdots ,K} with z i ∼ N ( 0 , 1 ) {\displaystyle \mathbf {z} _{i}\sim N(0,1)} . As ϵ → 0 {\displaystyle \epsilon \to 0} and K → ∞ {\displaystyle K\to \infty } , the generated x K {\displaystyle \mathbf {x} _{K}} converges to the target distribution p ( x ) {\displaystyle p(\mathbf {x} )} . Score-based models use s θ ( x ) ≈ ∇ x log ⁡ p ( x ) {\displaystyle \mathbf {s} _{\theta }(\mathbf {x} )\approx \nabla _{\mathbf {x} }\log p(\mathbf {x} )} as an approximation. [ 11 ] As a stochastic differential equation (SDE), the Langevin dynamics equation has a corresponding partial differential equation (PDE), the Klein-Kramers equation , a special Fokker–Planck equation that governs the probability distribution of the particles in phase space . The original Langevin dynamics equation can be reformulated as the following first-order SDEs: d X = P d t {\displaystyle {\rm {d}}\mathbf {X} =\mathbf {P} {\rm {d}}t} d P = − γ P d t − ∇ U ( X ) d t + 2 σ d W ( t ) {\displaystyle {\rm {d}}\mathbf {P} =-\gamma \mathbf {P} {\rm {d}}t-\nabla U(\mathbf {X} ){\rm {d}}t+{\sqrt {2}}\sigma {\rm {d}}\mathbf {W} (t)} Now consider the following two descriptions and the law of ( X , P ) {\displaystyle (\mathbf {X} ,\mathbf {P} )} under each: 1. d X = P d t , d P = − γ P d t − ∇ U ( X ) d t + 2 σ d W ( t ) {\displaystyle \mathbf {{\rm {d}}{X}} =\mathbf {P} {\rm {d}}t,\mathbf {{\rm {d}}{P}} =-\gamma \mathbf {P} {\rm {d}}t-\nabla U(\mathbf {X} ){\rm {d}}t+{\sqrt {2}}\sigma {\rm {d}}\mathbf {W} (t)} with ( X 0 , P 0 ) ∼ ρ 0 {\displaystyle (\mathbf {X} _{0},\mathbf {P} _{0})\sim \rho _{0}} 2. 
∂ ρ ∂ t = − P ∇ X ρ + ∇ P ( γ P ρ + ∇ X U ( X ) ρ ) + ∇ P 2 ( σ T 2 ρ ) {\displaystyle {\frac {\partial \rho }{\partial t}}=-\mathbf {P} \nabla _{\mathbf {X} }\rho +\nabla _{\mathbf {P} }(\gamma \mathbf {P} \rho +\nabla _{\mathbf {X} }U(\mathbf {X} )\rho )+\nabla _{\mathbf {P} }^{2}(\sigma _{T}^{2}\rho )} with ρ ( t = 0 , X , P ) = ρ 0 {\displaystyle \rho (t=0,\mathbf {X} ,\mathbf {P} )=\rho _{0}} Consider a general function of momentum and position Ψ t = Ψ ( X , P ) {\displaystyle \Psi _{t}=\Psi (\mathbf {X} ,\mathbf {P} )} The expectation value of the function is E [ Ψ t ] = ∫ ρ ( t , X , P ) Ψ ( X , P ) d P d X {\displaystyle \mathbb {E} [\Psi _{t}]=\int \rho (t,\mathbf {X} ,\mathbf {P} )\Psi (\mathbf {X} ,\mathbf {P} ){\rm {d}}\mathbf {P} {\rm {d}}\mathbf {X} } Taking the derivative with respect to time t {\displaystyle t} and applying Itô's formula , we have E [ d d t Ψ ( X , P ) ] = E [ ∇ X Ψ d X d t + ∇ P Ψ d P d t + σ T 2 ∇ P 2 Ψ 1 d t ( d W ( t ) ) 2 ] {\displaystyle \mathbb {E} [{\frac {\rm {d}}{{\rm {d}}t}}\Psi (\mathbf {X} ,\mathbf {P} )]=\mathbb {E} [\nabla _{\mathbf {X} }\Psi {\frac {{\rm {d}}\mathbf {X} }{{\rm {d}}t}}+\nabla _{\mathbf {P} }\Psi {\frac {{\rm {d}}\mathbf {P} }{{\rm {d}}t}}+\sigma _{T}^{2}\nabla _{\mathbf {P} }^{2}\Psi {\frac {1}{{\rm {d}}t}}({\rm {d}}\mathbf {W} (t))^{2}]} which can be simplified to ∫ ( ∂ ∂ t ρ ) Ψ ( X , P ) d X d P = E [ ( ∇ X Ψ ) P + ∇ P Ψ ( − γ P − ∇ X U ( X ) ) + σ T 2 ∇ P 2 Ψ ] {\displaystyle \int ({\frac {\partial }{\partial t}}\rho )\Psi (\mathbf {X} ,\mathbf {P} ){\rm {d}}\mathbf {X} {\rm {d}}\mathbf {P} =\mathbb {E} [(\nabla _{\mathbf {X} }\Psi )\mathbf {P} +\nabla _{\mathbf {P} }\Psi (-\gamma \mathbf {P} -\nabla _{\mathbf {X} }U(\mathbf {X} ))+\sigma _{T}^{2}\nabla _{\mathbf {P} }^{2}\Psi ]} Integrating by parts on the right-hand side, and using that the density vanishes at infinite momentum or velocity, we have ∫ ( ∂ ∂ t ρ ) Ψ ( X , P ) d X d P = ∫ ( − P ∇ X ρ + ∇ P ( γ P ρ + ∇ X U ( X ) ρ ) + ∇ P 2 ( σ T 2 ρ ) ) Ψ ( X , 
P ) d X d P {\displaystyle \int ({\frac {\partial }{\partial t}}\rho )\Psi (\mathbf {X} ,\mathbf {P} ){\rm {d}}\mathbf {X} {\rm {d}}\mathbf {P} =\int (-\mathbf {P} \nabla _{\mathbf {X} }\rho +\nabla _{\mathbf {P} }(\gamma \mathbf {P} \rho +\nabla _{\mathbf {X} }U(\mathbf {X} )\rho )+\nabla _{\mathbf {P} }^{2}(\sigma _{T}^{2}\rho ))\Psi (\mathbf {X} ,\mathbf {P} ){\rm {d}}\mathbf {X} {\rm {d}}\mathbf {P} } This equation holds for arbitrary Ψ {\displaystyle \Psi } , so we require the density to satisfy ∂ ρ ∂ t = − P ∇ X ρ + ∇ P ( γ P ρ + ∇ X U ( X ) ρ ) + ∇ P 2 ( σ T 2 ρ ) {\displaystyle {\frac {\partial \rho }{\partial t}}=-\mathbf {P} \nabla _{\mathbf {X} }\rho +\nabla _{\mathbf {P} }(\gamma \mathbf {P} \rho +\nabla _{\mathbf {X} }U(\mathbf {X} )\rho )+\nabla _{\mathbf {P} }^{2}(\sigma _{T}^{2}\rho )} This equation is called the Klein-Kramers equation , a special version of the Fokker-Planck equation . It is a partial differential equation that describes the evolution of the probability density of the system in phase space. For the overdamped limit, we have d P = 0 {\displaystyle {\rm {d}}\mathbf {P} =0} , so the evolution of the system can be reduced to the position subspace. Following similar logic, we can show that the SDE for position, d X = − 1 γ ∇ U ( X ) d t + 2 σ γ R ( t ) d t {\displaystyle {\rm {d}}\mathbf {X} =-{\frac {1}{\gamma }}\nabla U(\mathbf {X} ){\rm {d}}t+{\sqrt {2}}{\frac {\sigma }{\gamma }}\mathbf {R} (t){\rm {d}}t} corresponds to the Fokker-Planck equation for the probability density ∂ ρ ( t , X ) ∂ t = ∇ X ( 1 γ ∇ X U ( X ) ρ ( t , X ) ) + Δ X ( σ 2 γ 2 ρ ( t , X ) ) {\displaystyle {\frac {\partial \rho (t,\mathbf {X} )}{\partial t}}=\nabla _{\mathbf {X} }({\frac {1}{\gamma }}\nabla _{\mathbf {X} }U(\mathbf {X} )\rho (t,\mathbf {X} ))+\Delta _{\mathbf {X} }({\frac {\sigma ^{2}}{\gamma ^{2}}}\rho (t,\mathbf {X} ))} Consider Langevin dynamics of a free particle (i.e. 
U ( X ) = 0 {\displaystyle U(\mathbf {X} )=0} ), then the equation for the momentum becomes d P = − γ P d t + 2 σ d W t {\displaystyle {\rm {d}}\mathbf {P} =-\gamma \mathbf {P} {\rm {d}}t+{\sqrt {2}}\sigma {\rm {d}}\mathbf {W} _{t}} The analytical solution to this SDE is P = P 0 e − γ t + 2 σ ∫ 0 t e − γ ( t − t ′ ) d W t ′ {\displaystyle \mathbf {P} =\mathbf {P} _{0}e^{-\gamma t}+{\sqrt {2}}\sigma \int _{0}^{t}{\rm {e}}^{-\gamma (t-t')}{\rm {d}}\mathbf {W} _{t'}} and thus the second moment of the momentum becomes (here we apply the Itô isometry ) E ( P 2 ) = P 0 2 e − 2 γ t + σ 2 γ ( 1 − e − 2 γ t ) → t → ∞ σ 2 γ {\displaystyle \mathbb {E} (\mathbf {P} ^{2})=\mathbf {P} _{0}^{2}{\rm {e}}^{-2\gamma t}+{\frac {\sigma ^{2}}{\gamma }}(1-{\rm {e}}^{-2\gamma t}){\overset {t\to \infty }{\to }}{\frac {\sigma ^{2}}{\gamma }}} That is, as time approaches infinity, the momentum fluctuation of the system is set by its energy dissipation (the friction parameter γ {\displaystyle \gamma } ). Combining this result with the equipartition theorem , which relates the average kinetic energy of particles to temperature, ⟨ v 2 ⟩ = k B T {\displaystyle \langle v^{2}\rangle =k_{\rm {B}}T} we can determine the value of the noise amplitude σ {\displaystyle \sigma } in applications like the Langevin thermostat: σ 2 / γ = k B T → σ = k B T γ {\displaystyle \sigma ^{2}/\gamma =k_{B}T\to \sigma ={\sqrt {k_{\rm {B}}T\gamma }}} This is consistent with the original definition assuming M = 1 {\displaystyle M=1} . The path integral formulation comes from quantum mechanics, but for a Langevin SDE we can also construct a corresponding path integral. 
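The free-particle relaxation result can be verified numerically. The sketch below (not from the article; γ, σ, P₀ and the step size are illustrative assumptions) integrates the momentum SDE in the first-order convention dP = -γP dt + √2 σ dW and checks that the second moment relaxes toward σ²/γ:

```python
import numpy as np

# Free particle (U = 0): the momentum is an Ornstein-Uhlenbeck process
# dP = -gamma * P dt + sqrt(2) * sigma * dW, whose stationary second
# moment is sigma**2 / gamma. Parameters below are illustrative.
rng = np.random.default_rng(5)
gamma, sigma, dt = 1.0, 1.5, 1e-3
n_steps, n = 10_000, 2_000

P = np.full(n, 2.0)                      # P_0 = 2 for every realization
for _ in range(n_steps):
    P += -gamma * P * dt \
         + np.sqrt(2.0) * sigma * np.sqrt(dt) * rng.normal(size=n)

print(np.mean(P**2))                     # should approach sigma**2 / gamma = 2.25
```

After roughly ten relaxation times the contribution of the initial condition P₀² e^(-2γt) is negligible, and the ensemble average of P² sits at the fluctuation-dissipation value σ²/γ.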
Consider the following overdamped Langevin equation, where without loss of generality we take γ = σ = 1 {\displaystyle \gamma =\sigma =1} : d X = − ∇ U ( X ) d t + 2 d W t {\displaystyle {\rm {d}}{X}=-\nabla U({X}){\rm {d}}t+{\sqrt {2}}{\rm {d}}W_{t}} Discretizing in time with t n = n Δ t {\displaystyle t_{n}=n\Delta t} , we get X n + 1 − X n + ∇ U ( X ) Δ t = 2 ( W t n − W t n − 1 ) ∼ N ( 0 , 2 Δ t ) {\displaystyle {X}_{n+1}-{X}_{n}+\nabla U({X})\Delta t={\sqrt {2}}(W_{t_{n}}-W_{t_{n-1}})\sim {\mathcal {N}}(0,2\Delta t)} Therefore the propagation probability is P ( X n + 1 | X n ) = ∫ d ξ 1 2 π Δ t e − ξ 2 4 Δ t δ ( X n + 1 − X n + ∇ U ( X ) Δ t − ξ ) {\displaystyle P({X}_{n+1}|{X}_{n})=\int {\rm {d}}\xi {\frac {1}{2{\sqrt {\pi \Delta t}}}}{\rm {e}}^{-{\frac {\xi ^{2}}{4\Delta t}}}\delta ({X}_{n+1}-{X}_{n}+\nabla U({X})\Delta t-\xi )} Applying the Fourier transform representation of the delta function , we get P = ∫ d k 2 π e i k ( X n + 1 − X n + ∇ U ( X ) Δ t ) ∫ d ξ 1 2 π Δ t e − ξ 2 4 Δ t e − i k ξ {\displaystyle P=\int {\frac {{\rm {d}}k}{2\pi }}{\rm {e}}^{{\rm {i}}k({X}_{n+1}-{X}_{n}+\nabla U({X})\Delta t)}\int {\rm {d}}\xi {\frac {1}{2{\sqrt {\pi \Delta t}}}}{\rm {e}}^{-{\frac {\xi ^{2}}{4\Delta t}}}{\rm {e}}^{-{\rm {i}}k\xi }} The second factor is a Gaussian integral , which yields P = ∫ d k 2 π e i k ( X n + 1 − X n + ∇ U ( X ) Δ t ) e − k 2 Δ t {\displaystyle P=\int {\frac {{\rm {d}}k}{2\pi }}{\rm {e}}^{{\rm {i}}k({X}_{n+1}-{X}_{n}+\nabla U({X})\Delta t)}{\rm {e}}^{-k^{2}\Delta t}} Now consider the probability from the initial X 0 {\displaystyle X_{0}} to the final X n {\displaystyle X_{n}} . 
P ( X n | X 0 ) = ∫ 1 2 π ∏ i N − 1 d k i e ( i k i ( X ˙ + ∇ U ( X ) ) − k i 2 ) Δ t {\displaystyle P(\mathbf {X} _{n}|\mathbf {X} _{0})=\int {\frac {1}{2\pi }}\prod _{i}^{N-1}{\rm {d}}k_{i}{\rm {e}}^{({\rm {i}}k_{i}({\dot {X}}+\nabla U(X))-k_{i}^{2})\Delta t}} Taking the limit Δ t → 0 {\displaystyle \Delta t\to 0} , we get P ( X n | X 0 ) = ∫ D [ k ] e ∫ 0 t n ( i k ( X ˙ + ∇ U ( X ) ) − k 2 ) d t {\displaystyle P(\mathbf {X} _{n}|\mathbf {X} _{0})=\int {\mathcal {D}}[k]{\rm {e}}^{\int _{0}^{t_{n}}({\rm {i}}k({\dot {X}}+\nabla U(X))-k^{2}){\rm {d}}t}}
https://en.wikipedia.org/wiki/Langevin_dynamics
Langgan ( Chinese : 琅玕 ; pinyin : lánggān ) is the ancient Chinese name of a gemstone which remains an enigma in the history of mineralogy ; it has been identified, variously, as blue-green malachite , blue coral , white coral , whitish chalcedony , red spinel , and red jade . It is also the name of a mythological langgan tree of immortality found in the western paradise of Kunlun Mountain , and the name of the classic waidan alchemical elixir of immortality 琅玕華丹 ; langgan huadan ; "Elixir Efflorescence of Langgan". The Chinese characters 琅 and 玕 used to write the gemstone name lánggān are classified as radical-phonetic characters that combine the semantically significant " jade radical " 玉 or 王 (commonly used to write names of jades or gemstones) and phonetic elements hinting at pronunciation. 琅 ; Láng combines the "jade radical" with 良 ; liáng ; "good; fine" (interpreted to denote "fine jade") and 玕 ; gān combines it with the phonetic 干 ; gān ; "stem; trunk". The Chinese word 玉 ; yù is usually translated as "jade" but in some contexts translates as "fine ornamental stone; gemstone; precious stone", and can refer to a variety of rocks that carve and polish well, including jadeite , nephrite , agalmatolite , bowenite , and serpentine . [ 1 ] Modern written Chinese 琅 ; láng and 玕 ; gān have variant Chinese characters . 琅 ; Láng is occasionally transcribed as 瑯 ; láng (with 郞 ; láng ; "gentleman") or 瓓 ; lán ( 闌 ; lán ; "railing"); and 玕 ; gān is rarely written as 玵 ; gān (with a 甘 ; gān ; "sweet" phonetic). Guwen "ancient script" variants were 𤨜 ; láng or 𤦴 and 𤥚 ; gān . Berthold Laufer proposed that langgan was an onomatopoetic word "descriptive of the sound yielded by the sonorous stone when struck". [ 2 ] Lang occurs in several imitative words meaning "tinkling of jade pendants/ornaments": 琅琅 ; lángláng ; "tinkling/jingling sound", 玲琅 ; língláng ; "tinkling/jangling of jade", 琳琅 ; línláng ; "beautiful jade; sound of jade", and 琅璫 ; lángdāng ; "tinkling sound". 
Laufer further suggests this etymology would explain the transference of the name langgan from a stone to a coral; Du Wan's 杜綰 c. 1125 Yunlin shipu ( 雲林石譜 ; "Stone Catalogue of the Cloudy Forest") (below) expressly states that the coral langgan "when struck develops resonant properties". The name langgan has undergone remarkable semantic change . The first references to langgan are found in Chinese classics from the Warring States period (475-221 BCE) and Han dynasty (206 BCE-220 CE), which describe it as a valuable gemstone and mineral drug, as well as the mythological fruit of the langgan tree of immortality on Kunlun Mountain. Texts from the turbulent Six Dynasties period (220-589) and Sui dynasty (581-618) used the langgan gemstone as a literary metaphor and as an ingredient in alchemical elixirs of immortality, many of which were poisonous . During the Tang dynasty (618-907), langgan was reinterpreted as a type of coral. Several early texts (including the Shujing , Guanzi , and Erya below) recorded langgan in context with the obscure gemstone(s) 璆琳 ; qiúlín . In Classical Chinese syntax, 璆琳 can be parsed as two qiu and lin types of jade or as one qiulin type. A recent dictionary of Classical Chinese says 璆 ( qiú ; "fine jade, jade lithophone") is cognate with 球 ( qiú ; "precious gem, fine jade; jade chime or lithophone (which later came to mean "ball; sphere")") , and 琳 ( lín ; "blue-gem; sapphire"). [ 3 ] In what may be the earliest record, [ 4 ] the c. 5th-3rd centuries BCE Yu Gong "Tribute of Yu the Great " chapter of the Shujing "Classic of Documents" says the tributary products from Yong Province (located in the Wei River plain, one of the ancient Nine Provinces ) included qiulin and langgan jade-like gemstones: "Its articles of tribute were the k'ew and lin gem-stones , and the lang-kan precious stones ". 
[ 5 ] Legge quotes Kong Anguo 's commentary that langgan is "a stone, but like a pearl", and suggests it was possibly lazulite or lapis lazuli , which Laufer calls "purely conjectural". [ 6 ] The c. 4th-3rd centuries BCE Guanzi encyclopedic text, named for and attributed to the 7th century BCE philosopher Guan Zhong , who served as Prime Minister to Duke Huan of Qi (r. 685-643 BCE), uses 璧 ( bi ; "a flat jade disc with a hole in the center"), 璆琳 ( qiulin ; "lapis lazuli"), and 琅玕 ( langgan ) as examples of how establishing diverse local commodities as fiat currencies will encourage foreign economic cooperation. When Duke Huan asks Guanzi about how to politically control the " Four Yi " (meaning "all foreigners" on China's borders), he replies: Since the Yuzhi [i.e., Yuezhi/Kushans in Central Asia] have not paid court, I request our use of white jade discs [ 白璧 ] as money. Since those in the Kunlun desert (modern-day Xinjiang and Tibet) have not paid court, I request our use of lapis lazuli and langgan gems as money. … Since a white jade held tight unseen against one's chest or under one's armpit will be used as a thousand pieces of gold, we can obtain the Yuezhi eight thousand li away and make them pay court. Since a lapis lazuli and langgan gem (fashioned in) a hair clasp and earring will be used as a thousand gold pieces, we can obtain [i.e., defeat] [the inhabitants] of the Kunlun deserts eight thousand li away and make them pay court. Therefore if resources are not commandeered, economies will not connect, those distant from each other will have nothing to use for their common interest and the four yi will not be obtained and come to court. [ 7 ] Xun Kuang 's 3rd century BCE Confucian classic Xunzi has a context criticizing elaborate burials that uses 丹矸 ; dan'gan (with 丹 ; dān ; "cinnabar" and 矸 ; gān ; "waste rock", with the " stone radical " and same 干 ; gān phonetic) and 琅玕 ; langgan . 
In these ancient times, the body was covered with pearls and jades, the inner coffin was filled with beautifully ornamented embroideries, and the outer coffin was filled with yellow gold and decorated with cinnabar [ 丹矸 ] with added layers of laminar verdite. [In the outer tomb chamber were] rhinoceros and elephant ivory fashioned into trees, with precious rubies [ 琅玕 ], magnetite lodestones, and flowering aconite for their fruit." (18.7) [ 8 ] John Knoblock translates langgan as "rubies", noting that perhaps genuine rubies, or balas spinels , were connected with the cult of immortality, and cites the Shanhaijing saying that they grow on Mount Kunlun's Fuchang trees, and the Zhen'gao saying that adepts swallow "ruby blossoms" to feign death and become transcendents. [ 9 ] Early Chinese dictionaries define langgan . The c. 4th-3rd century BCE Erya geography section (9 Shidi 釋地 ) lists valuable products from the various regions of ancient China: "The beautiful things of the northwest are the qiulin [ 璆琳 ] and langgan gemstones from the wastelands [ 虛 ] of Kunlun Mountain". The 121 CE Shuowen jiezi (Jade Radical section 玉部 ) has two consecutive definitions for 琅 ; lang and 玕 ; gan . Lang is [used in] langgan , which "resembles a pearl [ 似珠者 ]"; Gan is [used in] langgan , paraphrasing the Yu Gong : "Yong Province [using the ancient 雝 ; yōng character for 雍 ; yōng ] [produces] qiulin and langgan [gems] [ 球琳琅玕 ]". Three sections about western Chinese mountains in the c. 4th-2nd centuries BCE Shanhaijing "Classic of Mountains and Seas" record early geographic legends associating langgan with Xi Wang Mu "Queen Mother of the West", who lives on Jade Mountain in the mythological axis mundi Kunlun Mountain paradise. Two mention langgan gems and one mentions langganshu 琅玕樹 trees. The Shanhaijing translator Anne Birrell exemplifies the difficulties of translating the word langgan in three ways: "pearl-like gems", "red jade", and "precious gem [tree]". 
First, the "Classic of the Mountains: West" section says Huaijiang 槐江 (lit. " pagoda-tree river") Mountain, located 400 li northeast of Kunlun Mountain, has abundant langgan and other valuable minerals. "On the summit of Mount Carobriver are quantities of green male-yellow [ 多青雄黃 ], precious pearl-like gems [ 藏琅玕 ], and yellow gold and jade. Granular cinnabar is abundant on its south face and there are quantities of speckled yellow gold and silver on its north face." (2) [ 10 ] "Male-yellow" overliterally translates 雄黃 ; xiónghuáng ; " realgar ; red orpiment "—Compare Richard Strassberg's translation, "On the mountain’s heights is much green realgar, the finest quality of Langgan-Stone, yellow gold, and jade. On its southern slope are many grains of cinnabar, while on its northern slope are much glittering yellow gold and silver.". [ 11 ] Guo Pu 's 4th century CE Shanhaijing commentary says langgan 石 ; shi ; "stone/gem" (cf. 子 ; zi ; "seeds" in the third section) resembles a pearl, and 藏 ; cáng ; "store; conceal, hide" means 隱 ; yǐn ; "conceal; hide". However, Hao Yixing's 郝懿行 1822 commentary says 藏 ; cáng was originally written 臧 ; zāng ; "good", that is, Huaijiang Mountain has the "best" quality langgan . Second, the "Classic of the Great Wilderness: West" section records that on [Xi] Wang Mu 王母 "Queen Mother [of the West]" Mountain: "Here are the sweet-bloom tree, sweet quince, white weeping willow, the look-flesh creature, the triply-grey horse, precious jade [ 琁瑰 ], dark green jade gemstone [ 瑤碧 ], the white tree, red jade [ 琅玕 ], white cinnabar, green cinnabar, and quantities of silver and iron." (16) [ 12 ] Third, the "Classic of Regions Within the Seas: West" section refers to a mythical tricephalic creature dwelling in a fuchangshu ( 服常樹 ; 'serve constant tree') who guards a langganshu tree south of Kunlun: "The wears-ever fruit tree—on its crown there is a three-headed person who is in charge of the precious gem tree [ 琅玕樹 ]." 
(11) [ 13 ] Interpreters disagree about whether the langgan tree grows alongside the fuchang tree or grows on it. [ 9 ] Guo Pu's commentary admits unfamiliarity with the 服常 ; fuchang tree; Wu Renchen 's 17th-century commentary notes the similarity with the 沙棠 ; shachang ; "sand-plum tree" that the Huainanzi lists with langgan , but doubts they are the same. Guo's commentary says langgan zi 子 "seeds" [ 14 ] or "fruits" [ 9 ] resemble pearls (cf. the Shuowen definition) and quotes the Erya that it is found on Kunlun Mountain. The c. 120 BCE Huainanzi "Terrestrial Forms" chapter (4 墬形 ) describes langgan trees and langgan jade both found on Mt. Kunlun. The first context describes how Yu the Great controlled the Great Flood and "excavated the wastelands of Kunlun [ 昆侖之球 ] to make level ground". "Atop the heights of Kunlun are treelike cereal plants [ 木禾 ] thirty-five feet tall. Growing to the west of these are pearl trees [ 珠樹 ], jade trees [ 玉樹 ], carnelian trees [ 琁樹 ], and no-death trees [ 不死樹 ]. To the east are found sand-plum trees [ 沙棠 ] and malachite trees [ 琅玕 ]. To the south are crimson trees [ 絳樹 ]. To the north are bi jade trees [ 碧樹 ] and yao jade trees [ 瑤樹 ]." (4.3), [ 15 ] (translating with Schafer's "malachite" instead of "coral"). The second context paraphrases the Erya definition (above) of langgan : "The beautiful things of the northwest are the qiu , lin , and langgan jades [ 球琳琅玕 ] of the Kunlun Mountains [ 昆侖 ]" (4.7), [ 15 ] noting that qiu , lin , and langgan are "types of jade, mostly not identifiable with certainty". Several early classics of traditional Chinese medicine mention langgan . The c. 1st century BCE Huangdi Neijing ' s Suwen 素問 "Basic Questions" section uses langgan beads to describe a healthy pulse. "When man is serene and healthy the pulse of the heart flows and connects, just as pearls are joined together or like a string of red jade [ 如循琅玕 ]—then one can speak of a healthy heart". [ 16 ] The c. 
2nd century CE Nan Jing explains this langgan bead simile: "[If the qi in] the vessels comes tied together like rings, or as if they were following [in their movement a chain of] lang gan stones [ 如循琅玕 ], that implies a normal state." Commentaries elaborate that langgan stones "resemble pearls" and their movement is like a "string of jade- or pearl-like beads". [ 17 ] The c. 3rd century CE Shennong Bencaojing lists qīng lánggān ( 青琅玕 ; "blue-green langgan ") or shízhū ( 石珠 ; 'rock pearl') as a mineral drug used to treat ailments such as itchy skin, carbuncle, and ALS . This is one of the rare early references to langgan that treats it as a real substance, while many others make it a feature of the divine world. [ 18 ] The langgan huadan ( 琅玕華丹 ; "Elixir Efflorescence of Langgan") name of the waidan "external alchemy" elixir of immortality is the best-known usage of the word langgan . [ 19 ] Some other translations are "Elixir of Langgan Efflorescence", [ 20 ] "Lang-Kan (Gem) Radiant Elixir", [ 21 ] and "Elixir Flower of Langgan". [ 22 ] The earliest method of compounding the elixir is found in the Taiwei lingshu ziwen langgan huadan shenzhen shangjing ( 太微靈書紫文琅玕華丹神真上經 ; "Supreme Scripture on the Elixir of Langgan Efflorescence, from the Purple Texts Inscribed by the Spirits of Grand Tenuity"). [ 23 ] This text was originally part of the Daoist Shangqing School scriptural corpus supposedly revealed to Yang Xi (330-c. 386 CE) between 364 and 370. [ 24 ] The Purple Texts alchemical recipe for preparing Elixir of Langgan Efflorescence involves nine steps in four stages carried out over thirteen years. The first stage produces the Langgan Efflorescence proper, which when ingested is said to make "one's complexion similar to gold and jade and enables one to summon divine beings". 
The next three stages further refine and transform the Langgan Elixir, repeatedly plant it in the earth, and eventually generate a tree whose fruits confer immortality when eaten, just like those of the legendary langgan tree on Mount Kunlun. [ 19 ] Upon completing any of the nine successive steps in producing the elixir, the alchemist (or adept in the neidan interpretation) can choose either to ingest the products and obtain immortality by ascending into the realm of the Shangqing heavens or to continue on to the next step with the promise of ever-increasing rewards. [ 25 ] The first stage has one complex waidan step of compounding the primary Langgan Efflorescence. After performing ritual zhāi ( 齋 ; "purification practices") for 40 days, the adept spends 60 days acquiring and preparing the elixir's fourteen ingredients, placing them in a crucible, adding mercury on top of them, luting the crucible with several layers of mud, and, after sacrificing wine to the divinities, heating the crucible for 100 days. The elixir's fourteen reagents , given in exalted code names such as "White-Silk Flying Dragon" for quartz, are: cinnabar , realgar , milky quartz , azurite , amethyst , graphite , saltpeter , sulfur , asbestos , mica , iron pyrite , lead carbonate , Turkestan salt (desert lake precipitates containing gypsum , anhydrite , and halite ), and orpiment . [ 26 ] Based upon these ingredients, Schafer says the end product was probably bluish flint glass with a high lead content. [ 27 ] The alchemist can either leave the crucible closed and proceed to the next stage or break it open and consume the langgan elixir that is said to yield marvelous results. The efflorescence should have thirty-seven hues. It is a volatile liquid both brilliant and mottled, a purple aurora darkly flashing. This is called the Elixir of Langgan Efflorescence. 
If, just at dawn on the first day of the eleventh, fourth, or eighth month, you bow repeatedly and ingest one ounce of this elixir with the water from an east-flowing stream, seven-colored pneumas will rise from your head and your face will have the jadelike glow of metallic efflorescence. If you hold your breath, immediately a chariot from the eight shrouded extents of the universe will arrive. When you spit on the ground, your saliva will transform into a flying dragon. When you whistle to your left, divine Transcendents will pay court to you; when you point to the right, the vapors of Three Elementals will join with the wind. Then, in thousands of conveyances, with myriad outriders, you will fly up to Upper Clarity. [ 28 ] The second stage comprises two iterative 100-day waidan alchemical steps transforming the elixir. Firing the unopened stage-one crucible of Langgan Efflorescence for another 100 days will produce the Lunar Efflorescence of the Yellow Solution [ 黄水月華 ], which when consumed will make you "change forms ten thousand times, your eyes will become luminous moons, and you will float above in the Grand Void to fly off to the Palace of Purple Tenuity". The next step of firing the closed crucible for an additional 100 days will produce three giant pearls called the Jade Essence of the Swirling Solution [ 徊水玉精 ]. Ingesting one alchemical pearl supposedly causes you to immediately give off liquid and fire, form gems with your breath, and your body "will become a sun, and the Thearchs of Heaven will descend to greet you. You will rise as a glowing orb to Upper Clarity." [ 29 ] The third stage involves four 3-year steps utilizing the elixirs produced in the first two stages to create fantastic seeds that are replanted and grow into increasingly perfected "spirit trees" with fruits of immortality. 
This stage falls between conventional waidan alchemy and the horticultural art of growing marvelous zhi ( 芝 ; "plants of longevity; fungi") such as the lingzhi mushroom . [ 21 ] Initially, the adept mixes the Elixir of Langgan Efflorescence with Jade Essence of the Swirling Solution, transforming the jīng ( 精 ; "essence; sperm; seed") in the latter name into an actual seed that is planted in an irrigated field. After three years it grows into the Tree of Ringed Adamant [ 環剛樹子 ] or Hidden Polypore of the Grand Bourne [ 太極隱芝 ], which has a ring-shaped fruit like a red jujube. Next, the adept plants one of the ringed fruits and waters it with the Yellow Solution, and after three years a plant called the Phoenix-Brain Polypore [ 鳳腦芝 ; fengnao zhi ] will grow like a calabash, with pits like five-colored peaches. Then, a phoenix-brain fruit is planted and watered with Yellow Solution, which after three years will grow into a red tree, like a pine, five or six feet in height, with a jade-white fruit like a pear [ 赤樹白子 ]. Lastly, the adept plants the seed of the red tree, waters it with Swirling Solution, and waits another three years for the growth of a vermilion tree like a plum, six or seven feet in height, with a halcyon-blue fruit like the jujube [ 絳樹青實 ]. Upon eating this fruit, the adept will ascend to the heaven of Purple Tenuity. [ 30 ] The fourth stage involves two comparatively quicker waidan steps. The adept repeatedly boils equal parts of the Yellow Solution and the Swirling Solution, and transforms them into the Blue Florets of Aqueous Yang [ 水陽青映 ]. If you drink this at dawn, your body will issue a blue and gemmy light, your mouth will spew forth purple vapors, and you will rise above to Upper Clarity [ Shangqing ]. 
But before departing earth, the adept's last step is to mix the remaining Elixir of Langgan Efflorescence with liquefied lead and mercury to produce 50-pound ingots of alchemical silver and purple gold, make incantations to the water spirits, and throw both oblatory ingots into a stream. [ 31 ] Despite the carefully detailed Purple Texts' waidan recipe for preparing langgan elixirs, scholars have doubted that the authors actually meant for it to be produced and consumed. Some interpret the impractical 13-year elixir recipe as symbolic instructions for what later came to be known as neidan meditative visualization, seeing it more as a "product of religious imagination", drawing on the respected metaphors of alchemical language, than as a laboratory manual drawing on the metaphors of meditation. [ 32 ] Others believe this "extravagantly impractical recipe" is an attempt to assimilate into conventional waidan alchemy the ancient legends about langgan gems that grow on trees in the paradise of Kunlun. [ 21 ] The Shangqing Daoist patriarch Tao Hongjing compiled and edited both the c. 370 Taiwei lingshu ziwen langgan huadan shenzhen shangjing and the c. 499 真誥 ; Zhen'gao ; "Declarations of the Perfected", which also mentions langgan elixirs in some of the same terminology. One context records that the early Daoist masters Yan Menzi 衍門子 , Gao Qiuzi 高丘子 , and Master Hongyai 洪涯先生 swallowed langgan hua ( 琅玕華 ; "langgan blossoms") to feign death and become xian transcendents and enter the "dark region" beyond the world. Needham and Lu proposed that this langgan hua probably refers to a red or green poisonous mushroom, [ 33 ] and Knoblock surmised that these "ruby blossoms" were a species of hallucinogenic mushroom connected with the elixir of immortality. 
[ 9 ] Another Zhen'gao context describes how in the Shangqing latter days before the apocalypse (predicted to be in 507) people will practice alchemy to create immortality drugs, including the Langgan Elixir that "will flow and flower in thick billows" and Cloud Langgan . If the adept takes one spatula full of elixir, "their spiritual feathers will spread forth like pinions. Then will they (be able to) peruse the pattern figured on the Vault of Space, and glow forth in the Chamber of Primal Commencement". [ 34 ] Several ingredients in the Elixir of Langgan Efflorescence are toxic heavy metals including mercury, lead, and arsenic, and alchemical elixir poisoning was common knowledge in China. Academics have puzzled over why Daoist adepts would knowingly consume a compound of mineral poisons, and Michel Strickmann, a scholar of Daoist and Buddhist studies, proposes that langgan elixir was believed to be an agent of self-liberation that guaranteed immortality to the faithful through a kind of ritual suicide . Since early Daoist literature thoroughly, "even rapturously", described the deadly toxic qualities of many elixirs, Strickmann concluded that scholars need to reexamine the Western stereotype of "accidental elixir poisoning" that supposedly applied to "misguided alchemists and their unwitting imperial patrons". [ 35 ] Chinese authors extended the classical descriptions of langgan meaning "a highly valued gem from western China; a mythical tree of immortality on Kunlun Mountain" into a literary and poetic metaphor for the exotic beauties of an idealized natural world. Several early writers described langgan jewelry, both real and fictional. The 2nd-century scholar and scientist Zhang Heng described a party for the Han nobility at which guests were delighted with the presentation of bowls overflowing with zhēnxiū ( 珍羞 ; "delicacies; exotic foods") including langgan fruits of paradise. 
The 3rd-century poet Cao Zhi described hanging "halcyon blue" ( 翠 ; cuì ) langgan from the waist of his "beautiful person", and the 5th-century poet Jiang Yan adorned a goddess with gems of langgan . Some other authors reinforced use of its name to refer to divine fruits on heavenly trees. Ruan Ji , one of the Seven Sages of the Bamboo Grove , wrote a 3rd-century poem titled "Dining at Sunrise on Langgan Fruit". The 8th-century poet Li Bai wrote about a famished but proud fenghuang that would not deign to peck at bird food , but like a Daoist adept, would scorn all but a diet of langgan . This represents a literary transition from glittering fruit of distant Kunlun, to aristocratic fare in golden bowls, eventually to an elixir of immortality. [ 36 ] A further extension of the langgan metaphor was to describe natural images of beautiful crystals and lush vegetation. For example, Ban Zhao 's poem on "The Arrival of Winter" says, "The long [Yellow River] forms (crystalline) langgan [written 瓓玕 ; langan ] / Layered ice is like banked-up jade". Two of Du Fu 's poems figuratively used the word langgan in reference to the vegetation around the forest home of a Daoist recluse, and to the splendid grass that provided seating for guests at a royal picnic near a mysterious grotto. [ 37 ] Bamboo was the most typical representative of blue-green langgan in the plant world; compare 筤 ; láng ; " bamboo radical " and the liáng phonetic in 琅 ; láng ; "young bamboo; blue". [ 38 ] Liu Yuxi wrote that the famous spotted bamboo of South China was " langgan colored". [ 39 ] Chinese texts list many diverse locations where langgan occurred. Several classical works associate mythical langgan trees with Kunlun Mountain (far west or northwest China), and two give sources of actual langgan gemstones: the Shujing says it was tribute from Yong Province (present-day Gansu and Shaanxi ) and the Guanzi says it came from the Kunlun desert ( Xinjiang and Tibet ). 
Official Chinese histories record langgan coming from different sources. The 3rd-century Weilüe , 5th-century Hou Hanshu , 6th-century Wei shu , and 7th-century Liang shu list langgan among the products of Daqin , which depending on context meant the Near East or the Eastern Roman Empire , especially Syria . The Liang shu also says it was found in Kucha (modern Aksu Prefecture , Xinjiang ), the 7th-century Jinshu says in Shaanxi , and the 10th-century Tangshu says in India . The Jiangnan Bielu history of the Southern Tang (937–976) says langgan was mined at Pingze 平澤 in Shu ( Sichuan Province ). [ 6 ] The Daoist scholar and alchemist Tao Hongjing (456-536) notes that the langgan gemstone was traditionally associated with Sichuan . The Tang pharmacologist Su Jing 蘇敬 (d. 674) reports that it came from the distant Man tribes of the Yunnan–Guizhou Plateau and Hotan /Khotan. [ 40 ] Accurately identifying geographic sources may be complicated by langgan referring to more than one mineral, as discussed next. The precise referent of the Chinese name langgan 琅玕 is uncertain in the present day. Scholars have described it as an "enigmatic archaism of politely pleasant or poetic usage", [ 41 ] and "one of the most elusive terms in Chinese mineralogy". [ 33 ] Identifications of langgan fall into at least three categories: blue-green langgan , first recorded circa the 4th century BCE; coral langgan , from the 8th century; and red langgan , of uncertain date. Edward H. Schafer , an eminent scholar of Tang dynasty literature and history, discussed langgan in several books and articles. His proposed identifications gradually changed from Mediterranean red coral, [ 42 ] to coral or a glass-like gem, [ 43 ] to chrysoprase or demantoid, [ 44 ] to coral or red spinel, [ 45 ] and ultimately to malachite. [ 46 ] Langgan was a 青 ; qīng ; "green; blue; greenish black" (see Blue–green distinction in language ) gemstone of lustrous appearance mentioned in numerous classical texts. 
They listed it among historical imperial tribute products presented from the far western regions of China, and as the mineral-fruit of the legendary langgan trees of immortality on Mount Kunlun. [ 47 ] Schafer's 1978 monograph on langgan [ 46 ] sought to identify the treasured blue-green gemstone, if it ever had a unique identity, and concluded the most plausible identification is malachite , a bright green mineral that was anciently used as a copper ore and an ornamental stone. Two early Chinese mineralogical authorities [ 48 ] [ 49 ] identified langgan as malachite, commonly called 孔雀石 ; kǒngquèshí ; 'peacock stone' or 石綠 ; shílǜ ; 'stone green'. Comparing blue-green stones that were known in early East Asia, Schafer disqualified several conceivable identities; demantoid garnet and green tourmaline are rarely of gem quality, while neither apple-green chrysoprase nor light greenish-blue turquoise typically have dark hues. This leaves malachite: This handsome green carbonate of copper has important credentials. It is often found in copper mines, and is therefore regularly at the disposal of copper- and bronze-producing peoples. It has, in certain varieties, a lovely silky luster, caused by its fibrous structure. It is soft and easily cut. It takes a good polish. It was commonly made into beads both in the western and eastern worlds. Above all, even uncut malachite often has a nodular or botryoidal structure, like little clumps of bright green beads, one of the classical forms attributed to lang-kan . Sometimes, too, it is stalactitic, like little stone trees. [ 50 ] Furthermore, archeology confirms that malachite was an important gemstone of pre-Han China. Inlays of malachite and turquoise decorated many early Chinese bronze weapons and ritual vessels. [ 51 ] Tang sources continued to record blue-green langgan . 
Su Jing's 652 新修本草 ; Xinxiu bencao said it was a glassy substance similar to liúli ( 琉璃 ; "colored glaze; glass; glossy gem") that was imported from the Man tribes in the Southwest and from Khotan . [ 52 ] In 762, Emperor Daizong of Tang proclaimed a new era name of 寶應 ; Baoying ; "Treasure Response" in honor of the discovery of thirteen auspicious treasures in Jiangsu , one of which was glassy langgan beads. [ 53 ] Tang dynasty herbalists and pharmacists changed the denotation of langgan from the traditional blue-green gemstone to a kind of coral. Chen Cangqi's c. 720 Bencao shiyi ( 本草拾遺 ; "Collected Addenda to the Pharmacopoeia") described it as a pale red coral, growing like a branched tree on the bottom of the sea, fished by means of nets, which after coming out of the water gradually darkens and turns blue. [ 54 ] Langgan already had an established connection with coral. Chinese mythology matches two antipodean paradises of Mount Kunlun in the far west and Mount Penglai located on an island in the far eastern Bohai Sea . Both mountains had mythic plants and trees of immortality that attracted Daoist xian transcendents; Kunlun's red langgan trees with blue-green fruits were paralleled by Penglai's shanhu shu ( 珊瑚樹 ; "red coral trees"). [ 52 ] It is unclear what variety of blue or green branching coral was identified as this "mineralized subaqueous shrub" langgan . Since it must have been a coral attractive enough to be comparable with the extravagant myths of Kunlun, Schafer suggests considering the blue coral Heliopora coerulea . It is the only living species in the family Helioporidae, the only octocoral known to produce a massive skeleton, and is found throughout the Pacific and Indian Oceans, although the IUCN currently considers it a vulnerable species . [ 38 ] Du Wan's c. 1124 Yunlin shipu mineralogy book has a section (100) on 琅玕石 ; langgan shi that mentions shanhu "coral". A coral-like stone found in shallow water along the coast of Ningbo, Zhejiang . 
Some specimens are two or three feet high. They must be pulled up by ropes let down from rafts. Though white when first taken from the water, they turn a dull purple after a while. They are patterned everywhere with circles, like ginger branches, and are rather brittle. Though the natives hold … [ 55 ] Li Shizhen 's 1578 Bencao Gangmu classic pharmacopeia objects to applying the term langgan to these marine invertebrates , which should properly be called shanhu while langgan should only be applied to the stone occurring in the mountains. Li's commentary suggests that the terminological confusion arose from the Shuowen jiezi definition of shanhu : "coral is red colored and grows in the ocean or in the mountains" ( 珊瑚: 色赤生於海或生於山 ). This puzzling description of mountain corals was more likely a textual misunderstanding than a reference to coral fossils . [ 6 ] The most recent, and least historically documented, identification of langgan is a red gemstone. The Chinese geologist Chang Hung-Chao (Zhang Hongzhao) propagated this explanation when his book about geological terms in Chinese literature identified langgan as malachite, and noted an alternative construal of reddish spinel or balas ruby from the famous mines at Badakhshan . [ 56 ] Some authors have cited Chang's balas ruby identification of langgan ; [ 57 ] others have used, or even confused, it with ruby , in translations (e.g., "precious rubies"). [ 8 ] However, Schafer demonstrates that Chang's "supposed" textual evidence for red langgan is tenuous and suggests that Guo Pu's Shanhai jing commentary created this mineralogical confusion. Guo glosses the langgan tree as red, but is unclear whether this refers to the tree itself or its gem-like fruit. Compare Birrell's and Bokenkamp's Shanhai jing translations of "red jade" and "green kernels from scarlet gem trees". 
[ 58 ] Chang misquotes dan'gan ( 丹矸 ; "cinnabar rock") from the Xunzi as dan'gan ( 丹玕 ; "cinnabar 'gan"), and cites one textual occurrence of the term. The Shangqing Daoist Dadong zhenjing ( 大洞真經 ; "Authentic Scripture of the Great Cavern") records a heavenly palace named Dan'gan dian ( 丹玕殿 ; "Basilica of the Cinnabar 'Gan"). Admitting the possibility of interpreting 玕 ; gan as a monosyllabic truncation for 琅玕 ; langgan , comparable with reading 红珀 ; hongpo for honghupo ( 红琥珀 ; "red amber"), Schafer concludes there is insufficient dan'gan evidence for an explicit red variety of langgan . [ 59 ] The lyrical term langgan occurs 87 times in the huge Complete Tang Poems collection of Tang poetry , with only two 紅琅玕 ; hong langgan ; "red 'langgan" usages by the Buddhist monk-poets Guanxiu (831-912) and Ji Qi 齊己 (863-937). Both poems use langgan to describe "red coral"; the latter ( 贈念法華經 ) uses shanhu in the same line: 珊瑚捶打紅琅玕 ; "coral beating on red 'langgan" in cold waters. Chinese-English dictionaries illustrate the multifaceted difficulties of identifying langgan . Most bilingual Chinese dictionaries cross-reference lang and gan to langgan , but a few translate lang and gan independently. In terms of Chinese word morphology , 琅 ; láng is a free morpheme that can appear alone (for instance, a surname) or in other compound words (such as 琺琅 ; fàláng ; "enamel" and 琅琊山 ; Lángyá shān ; " Mount Langya (Anhui) ") while 玕 ; gān is a bound morpheme that only occurs in the compound langgan and does not have independent meaning. The origin of Giles' lang translation "a kind of white carnelian" is unknown, unless it derives from Williams' "a whitish stone". It was copied in Mathews' and various other Chinese dictionaries up to the online standard Unihan Database 's gloss "a variety of white carnelian; pure". "White carnelian" is a marketing name for "white or whitish chalcedony of faint carnelian color". 
[ 69 ] Carnelian is usually reddish-brown while common chalcedony colors are white, grey, brown, and blue.
https://en.wikipedia.org/wiki/Langgan
Langhans giant cells (LGC) are giant cells found in granulomatous conditions. They are formed by the fusion of epithelioid cells ( macrophages ), and contain nuclei arranged in a horseshoe-shaped pattern in the cell periphery. [ 1 ] Although traditionally their presence was associated with tuberculosis , they are not specific for tuberculosis or even for mycobacterial disease. In fact, they are found in nearly every form of granulomatous disease, regardless of etiology. Langhans giant cells are named after Theodor Langhans (1839–1915), a German pathologist. [ 2 ] In 2012, a research paper showed that when activated CD4+ T cells and monocytes are in close contact, interaction of CD40 - CD40L between these two cells and subsequent IFNγ secretion by the T cells causes upregulation and secretion of fusion-related molecule DC-STAMP ( dendritic cell -specific transmembrane protein) by the monocytes, which results in LGC formation. [ 3 ] Langhans giant cells are often found in transbronchial lung biopsies or lymph node biopsies in patients with sarcoidosis . [ 4 ] They are also commonly found in tuberculous granulomas of tuberculosis. [ 5 ]
https://en.wikipedia.org/wiki/Langhans_giant_cell
Langley's Adventitious Angles is a puzzle in which one must infer an angle in a geometric diagram from other given angles. It was posed by Edward Mann Langley in The Mathematical Gazette in 1922. [ 1 ] [ 2 ] In its original form the problem was as follows: "ABC is an isosceles triangle. B = C = 80 degrees. CF at 30 degrees to AC cuts AB in F. BE at 20 degrees to AB cuts AC in E. Prove angle BEF = 30 degrees." The problem of calculating angle ∠ B E F {\displaystyle \angle {BEF}} is a standard application of Hansen's resection . Such calculations can establish that ∠ B E F {\displaystyle \angle {BEF}} is within any desired precision of 30 ∘ {\displaystyle 30^{\circ }} , but being of only finite precision, always leave doubt about the exact value. A direct proof using classical geometry was developed by James Mercer in 1923. [ 2 ] This solution involves drawing one additional line, and then making repeated use of the fact that the internal angles of a triangle add up to 180° to prove that several triangles drawn within the large triangle are all isosceles. Many other solutions are possible. Cut the Knot lists twelve different solutions and several alternative problems with the same 80-80-20 triangle but different internal angles. [ 4 ] A quadrilateral such as BCEF is called an adventitious quadrangle when the angles between its diagonals and sides are all rational angles, angles that give rational numbers when measured in degrees or other units for which the whole circle is a rational number. Numerous adventitious quadrangles beyond the one appearing in Langley's puzzle have been constructed. They form several infinite families and an additional set of sporadic examples. [ 5 ] Classifying the adventitious quadrangles (which need not be convex) turns out to be equivalent to classifying all triple intersections of diagonals in regular polygons. This was solved by Gerrit Bol in 1936 (Beantwoording van prijsvraag # 17, Nieuw-Archief voor Wiskunde 18, pages 14–66). He in fact classified (though with a few errors) all multiple intersections of diagonals in regular polygons. 
His results (all done by hand) were confirmed with computer, and the errors corrected, by Bjorn Poonen and Michael Rubinstein in 1998. [ 6 ] The article contains a history of the problem and a picture featuring the regular triacontagon and its diagonals. In 2015, an anonymous Japanese woman using the pen name "aerile re" published the first known method (the method of 3 circumcenters) to construct a proof in elementary geometry for a special class of adventitious quadrangle problems. [ 7 ] [ 8 ] [ 9 ] This work solves the first of the three unsolved problems listed by Rigby in his 1978 paper. [ 5 ]
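The numerical approach mentioned above, which pins ∠BEF to any desired precision without constituting an exact proof, can be sketched with plain coordinate geometry. The construction and helper functions below are illustrative, not taken from any published solution: the 80-80-20 triangle is placed with B and C on the x-axis, the two cevians are drawn as rays at the stated angles, and the angle at E is measured.

```python
import math

def direction(deg):
    """Unit vector at the given angle (degrees) from the positive x-axis."""
    r = math.radians(deg)
    return (math.cos(r), math.sin(r))

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def intersect(p, dp, q, dq):
    """Intersection point of line p + t*dp with line q + s*dq."""
    diff = (q[0] - p[0], q[1] - p[1])
    t = cross(diff, dq) / cross(dp, dq)
    return (p[0] + t * dp[0], p[1] + t * dp[1])

def angle_at(v, p1, p2):
    """Angle (degrees) at vertex v between the rays toward p1 and p2."""
    u = (p1[0] - v[0], p1[1] - v[1])
    w = (p2[0] - v[0], p2[1] - v[1])
    cos_a = (u[0] * w[0] + u[1] * w[1]) / (math.hypot(*u) * math.hypot(*w))
    return math.degrees(math.acos(cos_a))

# The 80-80-20 triangle: B and C on the x-axis, apex A above.
B, C = (0.0, 0.0), (1.0, 0.0)
A = intersect(B, direction(80), C, direction(100))
# BE at 20 deg to AB, i.e. 60 deg to BC, meeting line AC at E.
E = intersect(B, direction(60), A, (C[0] - A[0], C[1] - A[1]))
# CF at 30 deg to AC, i.e. 50 deg to CB, meeting line AB at F.
F = intersect(C, direction(130), A, (B[0] - A[0], B[1] - A[1]))

print(round(angle_at(E, B, F), 6))  # 30.0
```

As the text notes, such a computation only shows the angle equals 30° to floating-point precision; Mercer's 1923 argument is what establishes it exactly.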
https://en.wikipedia.org/wiki/Langley's_Adventitious_Angles
Langmuir is a peer-reviewed scientific journal that was established in 1985 and is published by the American Chemical Society . It is the leading journal focusing on the science and application of systems and materials in which the interface dominates structure and function. Research areas covered include surface and colloid chemistry . Langmuir publishes original research articles, invited feature articles, perspectives, and editorials. The title honors Irving Langmuir , winner of the 1932 Nobel Prize for Chemistry . The founding editor-in-chief was Arthur W. Adamson . [ 1 ] Langmuir is indexed in Chemical Abstracts Service , Scopus , EBSCOhost , British Library , PubMed , Web of Science , and SwetsWise.
https://en.wikipedia.org/wiki/Langmuir_(journal)
The langmuir (symbol: L ) is a unit of exposure (or dosage) to a surface ( e.g. of a crystal ) and is used in ultra-high vacuum (UHV) surface physics to study the adsorption of gases . It is a practical unit, and is not dimensionally homogeneous , and so is used only in this field. It is named after American physicist Irving Langmuir . The langmuir is defined by multiplying the pressure of the gas by the time of exposure. One langmuir corresponds to an exposure of 10 −6 Torr during one second . [ 1 ] [ 2 ] For example, exposing a surface to a gas pressure of 10 −8 Torr for 100 seconds corresponds to 1 L. Similarly, keeping the pressure of oxygen gas at 2.5·10 −6 Torr for 40 seconds will give a dose of 100 L. Since both different pressures and exposure times can give the same langmuir (see definition) it can be difficult to convert Langmuir (L) to exposure pressure × time (Torr·s) and vice versa. The following equation can be used to convert between the two easily: x y [ L ] = x × 10 − n [ T o r r ] ⋅ y × 10 n − 6 [ s ] {\displaystyle xy[{\rm {L}}]=x\times 10^{-n}[{\rm {Torr}}]\cdot y\times 10^{n-6}[{\rm {s}}]} Here, x {\displaystyle x} and y {\displaystyle y} are any two numbers whose product equals the desired Langmuir value, n {\displaystyle n} is an integer allowing different magnitudes of pressure or exposure time to be used in conversion. The units are represented in the [square brackets]. Using the prior example, for a dose of 100 L a pressure of 2.5 × 10 −6 Torr can be applied for 40 seconds, thus, x = 2.5 {\displaystyle x=2.5} , y = 40 {\displaystyle y=40} and n = 6 {\displaystyle n=6} . However, this dosage could also be gained with 8 × 10 −8 Torr for 1250 seconds, here x = 8 {\displaystyle x=8} , y = 12.5 {\displaystyle y=12.5} , n = 8 {\displaystyle n=8} . In both scenarios x y = 100 {\displaystyle xy=100} . 
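Because the langmuir is just a pressure–time product with a fixed scale factor (1 L = 10⁻⁶ Torr·s), the conversion can be captured in a couple of lines. The helper names below are illustrative; the numbers reproduce the worked examples from the text.

```python
TORR_S_PER_LANGMUIR = 1e-6  # 1 L = 1e-6 Torr·s by definition

def exposure_langmuir(pressure_torr, time_s):
    """Exposure in langmuirs for a constant pressure held for a given time."""
    return pressure_torr * time_s / TORR_S_PER_LANGMUIR

def time_for_dose(dose_langmuir, pressure_torr):
    """Seconds needed at a constant pressure to reach a target dose."""
    return dose_langmuir * TORR_S_PER_LANGMUIR / pressure_torr

print(exposure_langmuir(1e-8, 100))   # ~1.0 L   (10^-8 Torr for 100 s)
print(exposure_langmuir(2.5e-6, 40))  # ~100 L   (2.5e-6 Torr for 40 s)
print(time_for_dose(100, 8e-8))       # ~1250 s  (100 L at 8e-8 Torr)
```

The last call mirrors the x = 8, y = 12.5, n = 8 decomposition in the text: many pressure–time pairs yield the same dose.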
Exposure of a surface in surface physics is a type of fluence , that is the integral of number flux ( J N ) with respect to exposed time ( t ) to give a number of particles per unit area ( Φ ): Φ = ∫ J N d t {\displaystyle \Phi =\int J_{N}\,dt} The number flux for an ideal gas, that is the number of gas molecules passing through (in a single direction) a surface of unit area in unit time, can be derived from kinetic theory : [ 3 ] J N = C u ¯ 4 {\displaystyle J_{N}={\frac {C{\bar {u}}}{4}}} where C is the number density of the gas, and u ¯ {\displaystyle {\bar {u}}} is the mean speed of the molecules ( not the root-mean-square speed, although the two are related). The number density of an ideal gas depends on the thermodynamic temperature ( T ) and the pressure ( p ): C = p k B T {\displaystyle C={\frac {p}{k_{\text{B}}T}}} The mean speed of the gas molecules can also be derived from kinetic theory: [ 4 ] u ¯ = 8 k B T π m {\displaystyle {\bar {u}}={\sqrt {\frac {8k_{\text{B}}T}{\pi m}}}} where m is the mass of a gas molecule. Hence J N = p 2 π m k B T {\displaystyle J_{N}={\frac {p}{\sqrt {2\pi mk_{\text{B}}T}}}} The proportionality between number flux and pressure is only strictly valid for a given temperature and a given molecular mass of adsorbing gas. However, the dependence is only on the square roots of m and T . Gas adsorption experiments typically operate around ambient temperature with light gases. Hence, the langmuir remains useful as a practical unit. Assuming that every gas molecule hitting the surface sticks to it (that is, the sticking coefficient is 1), one langmuir (1 L) leads to a coverage of about one monolayer of the adsorbed gas molecules on the surface [ citation needed ] . In general, the sticking coefficient varies depending on the reactivity of the surface and the molecules, so that the langmuir gives a lower limit of the time it needs to completely cover a surface. This also illustrates why ultra-high vacuum (UHV) must be used to study solid-state surfaces, nanostructures or even single molecules. The typical time to perform physical experiments on sample surfaces is in the range of one to several hours. To keep the surface free of contamination , the pressure of the residual gas in a UHV chamber should be below 10 −10 Torr.
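As a numerical illustration of the kinetic-theory flux J_N = p / √(2π m k_B T), the sketch below estimates the impingement rate for nitrogen at 10⁻⁶ Torr (an exposure rate of 1 L/s) and 300 K, and the resulting monolayer time for a sticking coefficient of 1. The site density of ~10¹⁵ sites/cm² is a common rough figure assumed here, not a value from the text.

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27   # atomic mass unit, kg
TORR_TO_PA = 133.3223684  # pascals per torr

def impingement_flux(p_torr, T_kelvin, mass_amu):
    """Flux J_N = p / sqrt(2*pi*m*kB*T), in molecules per m^2 per s."""
    p = p_torr * TORR_TO_PA
    m = mass_amu * AMU
    return p / math.sqrt(2 * math.pi * m * K_B * T_kelvin)

# N2 (28 amu) at 1e-6 Torr and 300 K:
J = impingement_flux(1e-6, 300, 28.0)
print(f"{J / 1e4:.2e} molecules/cm^2/s")  # ~3.8e14

# Time to deposit one monolayer, assuming ~1e15 sites/cm^2 and sticking coefficient 1:
print(f"{1e15 / (J / 1e4):.1f} s")        # ~2.6 s
```

The result is consistent with the text's statement that 1 L gives roughly one monolayer: at this pressure a monolayer arrives in a few seconds rather than exactly one, since the precise time depends on the gas, the temperature, and the assumed site density.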
https://en.wikipedia.org/wiki/Langmuir_(unit)
The Langmuir adsorption model explains adsorption by assuming an adsorbate behaves as an ideal gas at isothermal conditions. According to the model, adsorption and desorption are reversible processes. The model also captures the effect of pressure: under these conditions, the adsorbate 's partial pressure p A {\displaystyle p_{A}} is related to the volume V of it adsorbed onto a solid adsorbent . The adsorbent, as indicated in the figure, is assumed to be an ideal solid surface composed of a series of distinct sites capable of binding the adsorbate. The adsorbate binding is treated as a chemical reaction between the adsorbate gaseous molecule A g {\displaystyle A_{\text{g}}} and an empty sorption site S . This reaction yields an adsorbed species A ad {\displaystyle A_{\text{ad}}} with an associated equilibrium constant K eq {\displaystyle K_{\text{eq}}} : From these basic hypotheses the mathematical formulation of the Langmuir adsorption isotherm can be derived in various independent and complementary ways: by the kinetics , the thermodynamics , and the statistical mechanics approaches respectively (see below for the different demonstrations). The Langmuir adsorption equation is where θ A {\displaystyle \theta _{A}} is the fractional occupancy of the adsorption sites, i.e., the ratio of the volume V of gas adsorbed onto the solid to the volume V m {\displaystyle V_{\text{m}}} of a monolayer of gas molecules covering the whole surface of the solid, completely occupied by the adsorbate. A continuous monolayer of adsorbate molecules covering a homogeneous flat solid surface is the conceptual basis for this adsorption model. [ 1 ] In 1916, Irving Langmuir presented his model for the adsorption of species onto simple surfaces. Langmuir was awarded the Nobel Prize in 1932 for his work concerning surface chemistry. He hypothesized that a given surface has a certain number of equivalent sites to which a species can "stick", either by physisorption or chemisorption .
His theory began when he postulated that gaseous molecules do not rebound elastically from a surface, but are held by it in a similar way to groups of molecules in solid bodies. [ 2 ] Langmuir published two papers that confirmed the assumption that adsorbed films do not exceed one molecule in thickness. The first experiment involved observing electron emission from heated filaments in gases. [ 3 ] The second, providing more direct evidence, examined and measured films of liquid on an adsorbent surface layer. He also noted that generally the attractive strength between the surface and the first layer of adsorbed substance is much greater than the strength between the first and second layer. However, there are instances where the subsequent layers may condense given the right combination of temperature and pressure. [ 4 ] Inherent within this model, the following assumptions [ 5 ] are valid specifically for the simplest case: the adsorption of a single adsorbate onto a series of equivalent sites on the surface of the solid. The mathematical expression of the Langmuir adsorption isotherm involving only one sorbing species can be demonstrated in different ways: the kinetics approach, the thermodynamics approach, and the statistical mechanics approach respectively. In the case of two competing adsorbed species, the competitive adsorption model is required, while when a sorbed species dissociates into two distinct entities, the dissociative adsorption model needs to be used. This section [ 5 ] provides a kinetic derivation for a single-adsorbate case. The kinetic derivation applies to gas-phase adsorption. The multiple-adsorbate case is covered in the competitive adsorption sub-section.
The model assumes adsorption and desorption as being elementary processes, where the rate of adsorption r ad and the rate of desorption r d are given by r ad = k ad p A [ S ] and r d = k d [ A ad ], where p A is the partial pressure of A over the surface, [ S ] is the concentration of free sites in number/m 2 , [ A ad ] is the surface concentration of A in molecules/m 2 (concentration of occupied sites), and k ad and k d are the rate constants of the forward adsorption reaction and the backward desorption reaction. At equilibrium, the rate of adsorption equals the rate of desorption. Setting r ad = r d and rearranging, we obtain [ A ad ] = ( k ad / k d ) p A [ S ] = K eq p A [ S ]. The concentration of sites is given by dividing the total number of sites ( S 0 ) covering the whole surface by the area of the adsorbent ( a ): We can then calculate the concentration of all sites by summing the concentration of free sites [ S ] and occupied sites: Combining this with the equilibrium equation, we get We define now the fraction of the surface sites covered with A as This, applied to the previous equation that combined site balance and equilibrium, yields the Langmuir adsorption isotherm: θ A = K eq p A / ( 1 + K eq p A ). In condensed phases (solutions), adsorption to a solid surface is a competitive process between the solvent ( A ) and the solute ( B ) to occupy the binding site. The thermodynamic equilibrium is described as If we designate the solvent by the subscript "1" and the solute by "2", and the bound state by the superscript "s" (surface/bound) and the free state by the "b" (bulk solution / free), then the equilibrium constant can be written as a ratio between the activities of products over reactants: For dilute solutions the activity of the solvent in bulk solution a 1 b ≃ 1 , {\displaystyle a_{1}^{\text{b}}\simeq 1,} and the activity coefficients ( γ {\displaystyle \gamma } ) are also assumed to be ideal on the surface.
Thus, a 2 s = X 2 s = θ , {\displaystyle a_{2}^{\text{s}}=X_{2}^{\text{s}}=\theta ,} a 1 s = X 1 s , {\displaystyle a_{1}^{\text{s}}=X_{1}^{\text{s}},} and X 1 s + X 2 s = 1 , {\displaystyle X_{1}^{\text{s}}+X_{2}^{\text{s}}=1,} where X i {\displaystyle X_{i}} are mole fractions. Re-writing the equilibrium constant and solving for θ {\displaystyle \theta } yields Note that the concentration of the solute adsorbate can be used instead of the activity coefficient. However, the equilibrium constant will then no longer be dimensionless and will have units of reciprocal concentration instead. The difference between the kinetic and thermodynamic derivations of the Langmuir model is that the thermodynamic derivation uses activities as a starting point while the kinetic derivation uses rates of reaction. The thermodynamic derivation allows for the activity coefficients of adsorbates in their bound and free states to be included. The thermodynamic derivation is usually referred to as the "Langmuir-like equation". [ 6 ] [ 7 ] This derivation [ 8 ] [ 9 ] based on statistical mechanics was originally provided by Volmer and Mahnert [ 10 ] in 1925. The partition function of a finite number of adsorbates adsorbed on a surface, in a canonical ensemble , is given by where ζ L {\displaystyle \zeta _{L}} is the partition function of a single adsorbed molecule, N S {\displaystyle N_{S}} is the number of adsorption sites (both occupied and unoccupied), and N A {\displaystyle N_{A}} is the number of adsorbed molecules, which should be less than or equal to N S {\displaystyle N_{S}} . The terms in the bracket give the total partition function of the N A {\displaystyle N_{A}} adsorbed molecules by taking a product of the individual partition functions (refer to Partition function of subsystems ). The 1 / N A ! {\displaystyle 1/N_{A}!} factor accounts for the overcounting arising due to the indistinguishable nature of the adsorbates.
The grand canonical partition function is given by where μ A {\displaystyle \mu _{A}} is the chemical potential of an adsorbed molecule. As it has the form of a binomial series , the summation is reduced to where x = ζ L exp ⁡ ( μ A k B T ) . {\displaystyle x=\zeta _{L}\exp \left({\frac {\mu _{A}}{k_{\rm {B}}T}}\right).} The grand canonical potential follows, based on which the average number of occupied sites is calculated, which in turn gives the coverage Now, invoking the condition that the system is in equilibrium, that is, the chemical potential of the adsorbed molecules is equal to that of the molecules in the gas phase, we have The chemical potential of an ideal gas is where A g = − k B T ln ⁡ Z g {\displaystyle A_{g}=-k_{\rm {B}}T\ln Z_{g}} is the Helmholtz free energy of an ideal gas with its partition function, and q {\displaystyle q} is the partition function of a single particle in the volume V {\displaystyle V} (considering only the translational freedom here). We thus have μ g = − k B T ln ⁡ ( q / N ) {\displaystyle \mu _{g}=-k_{\rm {B}}T\ln(q/N)} , where we use Stirling's approximation. Plugging μ g {\displaystyle \mu _{g}} into the expression for x {\displaystyle x} , we have which gives the coverage By defining and using the identity P V = N k B T {\displaystyle PV=Nk_{\rm {B}}T} , finally, we have The result is plotted in the figure alongside, demonstrating that the surface coverage increases quite rapidly with the partial pressure of the adsorbates, but levels off after P reaches P 0 . The previous derivations assumed that there is only one species, A , adsorbing onto the surface. This section [ 11 ] considers the case when there are two distinct adsorbates present in the system. Consider two species A and B that compete for the same adsorption sites.
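Before moving to the competitive case, it is worth noting that all three derivations arrive at the same functional form, θA = Keq pA / (1 + Keq pA) (equivalently θ = P/(P0 + P) with Keq = 1/P0). A minimal numeric sketch (the function name is illustrative):

```python
def langmuir_coverage(p: float, K: float) -> float:
    """Single-species Langmuir isotherm: theta = K*p / (1 + K*p)."""
    return K * p / (1.0 + K * p)

# Coverage rises steeply at low pressure and saturates toward 1;
# theta = 0.5 exactly when K*p = 1.
```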
The following hypotheses are made here: As derived using kinetic considerations, the equilibrium constants for both A and B are given by and The site balance states that the concentration of total sites [ S 0 ] is equal to the sum of free sites, sites occupied by A and sites occupied by B : Inserting the equilibrium equations and rearranging in the same way we did for the single-species adsorption, we get similar expressions for both θ A and θ B : The other case of special importance is when a molecule D 2 dissociates into two atoms upon adsorption. [ 11 ] Here, the following assumptions are taken to hold: Using similar kinetic considerations, we get The 1/2 exponent on p D 2 arises because one gas-phase molecule produces two adsorbed species. Applying the site balance as done above, The formation of Langmuir monolayers by adsorption onto a surface dramatically reduces the entropy of the molecular system. To find the entropy decrease, we find the entropy of the molecule when in the adsorbed state. [ 12 ] Using Stirling's approximation , we have On the other hand, the entropy of a molecule of an ideal gas is where λ {\displaystyle \lambda } is the thermal de Broglie wavelength of the gas molecule. The Langmuir adsorption model deviates significantly in many cases, primarily because it fails to account for the surface roughness of the adsorbent. Rough inhomogeneous surfaces have multiple site types available for adsorption, with some parameters varying from site to site, such as the heat of adsorption. Moreover, specific surface area is a scale-dependent quantity, and no single true value exists for this parameter. [ 1 ] Thus, the use of alternative probe molecules can often result in different numerical values for surface area, rendering comparison problematic. The model also ignores adsorbate–adsorbate interactions. Experimentally, there is clear evidence for adsorbate–adsorbate interactions in heat of adsorption data.
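The competitive and dissociative coverages described above reduce to simple closed forms. A hedged sketch (function names illustrative): for two competing species, θA = KA pA / (1 + KA pA + KB pB); for dissociative adsorption of D2, θ = √(K pD2) / (1 + √(K pD2)), with the 1/2 exponent noted in the text:

```python
import math

def competitive_coverage(p_a: float, p_b: float, K_a: float, K_b: float):
    """theta_A, theta_B for two species competing for the same sites."""
    denom = 1.0 + K_a * p_a + K_b * p_b
    return K_a * p_a / denom, K_b * p_b / denom

def dissociative_coverage(p_d2: float, K: float) -> float:
    """theta for D2 dissociating into two adsorbed atoms; note the
    square root (i.e., the 1/2 exponent on the pressure)."""
    s = math.sqrt(K * p_d2)
    return s / (1.0 + s)
```

Note that the two competitive coverages always sum to less than one, since some sites remain free at any finite pressure.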
There are two kinds of adsorbate–adsorbate interactions: direct interaction and indirect interaction. Direct interactions are between adjacent adsorbed molecules, which could make adsorbing near another adsorbate molecule more or less favorable and greatly affect high-coverage behavior. In indirect interactions, the adsorbate changes the surface around the adsorbed site, which in turn affects the adsorption of other adsorbate molecules nearby. The modifications try to account for the points mentioned in the above section, such as surface roughness, inhomogeneity, and adsorbate–adsorbate interactions. Also known as the two-site Langmuir equation. This equation describes the adsorption of one adsorbate to two or more distinct types of adsorption sites. Each binding site can be described with its own Langmuir expression, as long as the adsorption at each binding site type is independent of the rest. where This equation works well for adsorption of some drug molecules to activated carbon, in which some adsorbate molecules interact with hydrogen bonding while others interact with a different part of the surface by hydrophobic interactions ( hydrophobic effect ). The equation was modified to account for the hydrophobic effect (also known as entropy-driven adsorption): [ 13 ] The hydrophobic effect is independent of concentration, since K 2 a 2 b ≫ 1. {\displaystyle K_{2}a_{2}^{\text{b}}\gg 1.} Therefore, the capacity of the adsorbent for hydrophobic interactions q HB {\displaystyle q_{\text{HB}}} can be obtained by fitting to experimental data. The entropy-driven adsorption originates from the restriction of translational motion of bulk water molecules by the adsorbate, which is alleviated upon adsorption. The Freundlich isotherm is the most important multi-site adsorption isotherm for rough surfaces. where α F and C F are fitting parameters. [ 14 ] This equation implies that if one makes a log–log plot of adsorption data, the data will fit a straight line.
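The straight-line behavior of the Freundlich isotherm on a log–log plot can be checked directly: since log q = log CF + αF log c, the slope between any two points recovers αF. A minimal sketch (parameter values are arbitrary):

```python
import math

def freundlich(c: float, C_F: float, alpha_F: float) -> float:
    """Freundlich isotherm: adsorbed amount q = C_F * c**alpha_F."""
    return C_F * c ** alpha_F

# Slope of the log-log plot between two arbitrary concentrations:
c1, c2 = 1.0, 100.0
slope = (math.log(freundlich(c2, 2.0, 0.5)) - math.log(freundlich(c1, 2.0, 0.5))) \
        / (math.log(c2) - math.log(c1))
# slope recovers alpha_F = 0.5 regardless of the points chosen
```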
The Freundlich isotherm has two parameters, while Langmuir's equation has only one: as a result, it often fits the data on rough surfaces better than the Langmuir isotherm. However, the Freundlich equation is not unique; consequently, a good fit of the data points does not offer sufficient proof that the surface is heterogeneous. The heterogeneity of the surface can be confirmed with calorimetry . Homogeneous surfaces (or heterogeneous surfaces that exhibit homogeneous adsorption (single-site)) have a constant Δ H {\displaystyle \Delta H} of adsorption as a function of the occupied-sites fraction. On the other hand, heterogeneous adsorbents (multi-site) have a variable Δ H {\displaystyle \Delta H} of adsorption depending on the fraction of sites occupied. When the adsorbate pressure (or concentration) is low, the fractional occupation is small and, as a result, only low-energy sites are occupied, since these are the most stable. As the pressure increases, the higher-energy sites become occupied, resulting in a smaller Δ H {\displaystyle \Delta H} of adsorption, given that adsorption is an exothermic process. [ 15 ] A related equation is the Toth equation . Rearranging the Langmuir equation, one can obtain an equivalent form. J. Toth [ 16 ] modified this equation by adding two parameters α T 0 and C T 0 to formulate the Toth equation : This isotherm takes into account indirect adsorbate–adsorbate interactions on adsorption isotherms. Temkin [ 17 ] noted experimentally that heats of adsorption would more often decrease than increase with increasing coverage. The heat of adsorption Δ H ad is defined as He derived a model assuming that as the surface is loaded up with adsorbate, the heat of adsorption of all the molecules in the layer would decrease linearly with coverage due to adsorbate–adsorbate interactions: where α T is a fitting parameter.
Assuming the Langmuir adsorption isotherm still applies to the adsorbed layer, K eq A {\displaystyle K_{\text{eq}}^{A}} is expected to vary with coverage as follows: Langmuir's isotherm can be rearranged to Substituting the expression of the equilibrium constant and taking the natural logarithm: Brunauer, Emmett and Teller (BET) [ 18 ] derived the first isotherm for multilayer adsorption. It assumes a random distribution of sites that are empty or that are covered by one monolayer, two layers and so on, as illustrated alongside. The main equation of this model is where and [ A ] is the total concentration of molecules on the surface, given by where [ A ] 0 is the number of bare sites, and [ A ] i is the number of surface sites covered by i molecules. This section describes the surface coverage when the adsorbate is in liquid phase and is a binary mixture. [ 19 ] For both phases ideal – no lateral interactions, homogeneous surface – the composition of a surface phase for a binary liquid system in contact with a solid surface is given by a classic Everett isotherm equation (being a simple analogue of the Langmuir equation), where the components are interchangeable (i.e., "1" may be exchanged for "2") without change of equation form: where the normal definitions for a multi-component system apply: By simple rearrangement, we get This equation describes the competition of components "1" and "2".
https://en.wikipedia.org/wiki/Langmuir_adsorption_model
A Langmuir probe is a device used to determine the electron temperature, electron density, and electric potential of a plasma . It works by inserting one or more electrodes into a plasma, with a constant or time-varying electric potential between the various electrodes or between them and the surrounding vessel. The measured currents and potentials in this system allow the determination of the physical properties of the plasma. The beginning of Langmuir probe theory is the I–V characteristic of the Debye sheath , that is, the current density flowing to a surface in a plasma as a function of the voltage drop across the sheath. The analysis presented here indicates how the electron temperature, electron density, and plasma potential can be derived from the I–V characteristic. In some situations a more detailed analysis can yield information on the ion density ( n i {\displaystyle n_{i}} ), the ion temperature T i {\displaystyle T_{i}} , or the electron energy distribution function (EEDF), f e ( v ) {\displaystyle f_{e}(v)} . Consider first a surface biased to a large negative voltage. If the voltage is large enough, essentially all electrons (and any negative ions) will be repelled. The ion velocity will satisfy the Bohm sheath criterion , which is, strictly speaking, an inequality, but which is usually marginally fulfilled. The Bohm criterion in its marginal form says that the ion velocity at the sheath edge is simply the sound speed, given by c s = k B ( Z T e + γ i T i ) / m i {\displaystyle c_{s}={\sqrt {k_{B}(ZT_{e}+\gamma _{i}T_{i})/m_{i}}}} . The ion temperature term is often neglected, which is justified if the ions are cold. Z is the (average) charge state of the ions, and γ i {\displaystyle \gamma _{i}} is the adiabatic coefficient for the ions. The proper choice of γ i {\displaystyle \gamma _{i}} is a matter of some contention.
Most analyses use γ i = 1 {\displaystyle \gamma _{i}=1} , corresponding to isothermal ions, but some kinetic theory suggests that γ i = 3 {\displaystyle \gamma _{i}=3} . For Z = 1 {\displaystyle Z=1} and T i = T e {\displaystyle T_{i}=T_{e}} , using the larger value results in the conclusion that the density is 2 {\displaystyle {\sqrt {2}}} times smaller. Uncertainties of this magnitude arise in several places in the analysis of Langmuir probe data and are very difficult to resolve. The charge density of the ions depends on the charge state Z , but quasineutrality allows one to write it simply in terms of the electron density as q e n e {\displaystyle q_{e}n_{e}} , where q e {\displaystyle q_{e}} is the charge of an electron and n e {\displaystyle n_{e}} is the number density of electrons. Using these results, we have the current density to the surface due to the ions. The current density at large negative voltages is due solely to the ions and, except for possible sheath expansion effects, does not depend on the bias voltage, so it is referred to as the ion saturation current density and is given by j i m a x = q e n e c s {\displaystyle j_{i}^{max}=q_{e}n_{e}c_{s}} where c s {\displaystyle c_{s}} is as defined above. The plasma parameters, in particular the density, are those at the sheath edge.
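The ion saturation current density jimax = qe ne cs is straightforward to evaluate numerically. A sketch with temperatures expressed in eV (constants are CODATA values; the helper name is illustrative):

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
M_PROTON = 1.67262192369e-27  # proton mass, kg

def ion_sat_current_density(n_e: float, Te_eV: float, mu_i: float = 1.0,
                            Z: int = 1, Ti_eV: float = 0.0,
                            gamma_i: float = 1.0) -> float:
    """j_i_max = e * n_e * c_s, with c_s = sqrt(e*(Z*Te + gamma_i*Ti)/m_i).

    n_e in m^-3, temperatures in eV, ion mass as mu_i proton masses;
    result in A/m^2. Default gamma_i = 1 (isothermal ions), cold ions.
    """
    m_i = mu_i * M_PROTON
    c_s = math.sqrt(E_CHARGE * (Z * Te_eV + gamma_i * Ti_eV) / m_i)
    return E_CHARGE * n_e * c_s

# Hydrogen plasma, n_e = 1e18 m^-3, Te = 10 eV: j_i ~ 5e3 A/m^2.
```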
The mean velocity of the electrons which are able to overcome the voltage of the sheath is ⟨ v e ⟩ = ∫ v e 0 ∞ f ( v x ) v x d v x ∫ − ∞ ∞ f ( v x ) d v x {\displaystyle \langle v_{e}\rangle ={\frac {\int _{v_{e0}}^{\infty }f(v_{x})\,v_{x}\,dv_{x}}{\int _{-\infty }^{\infty }f(v_{x})\,dv_{x}}}} , where the cut-off velocity for the upper integral is v e 0 = 2 q e Δ V / m e {\displaystyle v_{e0}={\sqrt {2q_{e}\Delta V/m_{e}}}} . Δ V {\displaystyle \Delta V} is the voltage across the Debye sheath, that is, the potential at the sheath edge minus the potential of the surface. For a large voltage compared to the electron temperature, the result is ⟨ v e ⟩ = k B T e 2 π m e e − q e Δ V / k B T e {\displaystyle \langle v_{e}\rangle ={\sqrt {\frac {k_{B}T_{e}}{2\pi m_{e}}}}\,e^{-q_{e}\Delta V/k_{B}T_{e}}} . With this expression, we can write the electron contribution to the current to the probe in terms of the ion saturation current as j e = j i m a x m i / 2 π m e e − q e Δ V / k B T e {\displaystyle j_{e}=j_{i}^{max}{\sqrt {m_{i}/2\pi m_{e}}}\,e^{-q_{e}\Delta V/k_{B}T_{e}}} , valid as long as the electron current is not more than two or three times the ion current. The total current, of course, is the sum of the ion and electron currents: j = j i m a x ( − 1 + m i / 2 π m e e − q e Δ V / k B T e ) {\displaystyle j=j_{i}^{max}\left(-1+{\sqrt {m_{i}/2\pi m_{e}}}\,e^{-q_{e}\Delta V/k_{B}T_{e}}\right)} . We are using the convention that current from the surface into the plasma is positive. An interesting and practical question is the potential of a surface to which no net current flows. It is easily seen from the above equation that Δ V = ( k B T e / q e ) ( 1 / 2 ) ln ⁡ ( m i / 2 π m e ) {\displaystyle \Delta V=(k_{B}T_{e}/q_{e})\,(1/2)\ln(m_{i}/2\pi m_{e})} . 
If we introduce the ion mass expressed in proton masses (approximately the ion mass in atomic mass units), μ i = m i / m p {\displaystyle \mu _{i}=m_{i}/m_{p}} , we can write Δ V = ( k B T e / q e ) ( 2.8 + 0.5 ln ⁡ μ i ) {\displaystyle \Delta V=(k_{B}T_{e}/q_{e})\,(2.8+0.5\ln \mu _{i})} Since the floating potential is the experimentally accessible quantity, the current (below electron saturation) is usually written as j = j i m a x ( − 1 + e q e ( V 0 − Δ V ) / k B T e ) {\displaystyle j=j_{i}^{max}\left(-1+\,e^{q_{e}(V_{0}-\Delta V)/k_{B}T_{e}}\right)} . When the electrode potential is equal to or greater than the plasma potential, then there is no longer a sheath to reflect electrons, and the electron current saturates. Using the Boltzmann expression for the mean electron velocity given above with v e 0 = 0 {\displaystyle v_{e0}=0} and setting the ion current to zero, the electron saturation current density would be j e m a x = j i m a x m i / π m e = j i m a x ( 24.2 μ i ) {\displaystyle j_{e}^{max}=j_{i}^{max}{\sqrt {m_{i}/\pi m_{e}}}=j_{i}^{max}\left(24.2\,{\sqrt {\mu _{i}}}\right)} Although this is the expression usually given in theoretical discussions of Langmuir probes, the derivation is not rigorous and the experimental basis is weak. The theory of double layers [ 1 ] typically employs an expression analogous to the Bohm criterion , but with the roles of electrons and ions reversed, namely j e m a x = q e n e k B ( γ e T e + T i ) / m e = j i m a x m i / m e = j i m a x ( 42.8 μ i ) {\displaystyle j_{e}^{max}=q_{e}n_{e}{\sqrt {k_{B}(\gamma _{e}T_{e}+T_{i})/m_{e}}}=j_{i}^{max}{\sqrt {m_{i}/m_{e}}}=j_{i}^{max}\left(42.8\,{\sqrt {\mu _{i}}}\right)} where the numerical value was found by taking T i = T e and γ i = γ e . In practice, it is often difficult and usually considered uninformative to measure the electron saturation current experimentally. When it is measured, it is found to be highly variable and generally much lower (a factor of three or more) than the value given above. Often a clear saturation is not seen at all.
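The floating-potential sheath drop ΔV = (kBTe/qe)·(1/2)·ln(mi/2πme) scales linearly with Te and is easy to evaluate. A numeric sketch, taking μi as the ion mass in proton masses, which reproduces the 2.8 coefficient quoted above for hydrogen:

```python
import math

MP_OVER_ME = 1836.152673  # proton-to-electron mass ratio

def floating_sheath_drop(Te_eV: float, mu_i: float = 1.0) -> float:
    """Sheath voltage drop at a floating surface, in volts:
    Delta V = Te * (1/2) * ln(m_i / (2*pi*m_e)), Te in eV,
    with m_i = mu_i proton masses (mu_i ~ ion mass in amu)."""
    return Te_eV * 0.5 * math.log(mu_i * MP_OVER_ME / (2.0 * math.pi))

# Hydrogen (mu_i = 1): the coefficient is ~2.84, matching 2.8 + 0.5*ln(1).
```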
Understanding electron saturation is one of the most important outstanding problems of Langmuir probe theory. The Debye sheath theory explains the basic behavior of Langmuir probes, but is not complete. Merely inserting an object like a probe into a plasma changes the density, temperature, and potential at the sheath edge and perhaps everywhere. Changing the voltage on the probe will also, in general, change various plasma parameters. Such effects are less well understood than sheath physics, but they can at least in some cases be roughly accounted for. The Bohm criterion requires the ions to enter the Debye sheath at the sound speed. The potential drop that accelerates them to this speed is called the pre-sheath . It has a spatial scale that depends on the physics of the ion source but which is large compared to the Debye length and often of the order of the plasma dimensions. The magnitude of the potential drop is equal to (at least) Φ p r e = 1 2 m i c s 2 Z e = k B ( Z T e + γ i T i ) / ( 2 Z e ) {\displaystyle \Phi _{pre}={\frac {{\frac {1}{2}}m_{i}c_{s}^{2}}{Ze}}=k_{B}(ZT_{e}+\gamma _{i}T_{i})/(2Ze)} The acceleration of the ions also entails a decrease in the density, usually by a factor of about 2 depending on the details. Collisions between ions and electrons will also affect the I-V characteristic of a Langmuir probe. When an electrode is biased to any voltage other than the floating potential, the current it draws must pass through the plasma, which has a finite resistivity. The resistivity and current path can be calculated with relative ease in an unmagnetized plasma. In a magnetized plasma, the problem is much more difficult. In either case, the effect is to add a voltage drop proportional to the current drawn, which shears the characteristic. The deviation from an exponential function is usually not possible to observe directly, so that the flattening of the characteristic is usually misinterpreted as a larger plasma temperature.
Looking at it from the other side, any measured I-V characteristic can be interpreted as a hot plasma, where most of the voltage is dropped in the Debye sheath, or as a cold plasma, where most of the voltage is dropped in the bulk plasma. Without quantitative modeling of the bulk resistivity, Langmuir probes can only give an upper limit on the electron temperature. It is not enough to know the current density as a function of bias voltage, since it is the absolute current which is measured. In an unmagnetized plasma, the current-collecting area is usually taken to be the exposed surface area of the electrode. In a magnetized plasma, the projected area is taken, that is, the area of the electrode as viewed along the magnetic field. If the electrode is not shadowed by a wall or other nearby object, then the area must be doubled to account for current coming along the field from both sides. If the electrode dimensions are not small in comparison to the Debye length, then the size of the electrode is effectively increased in all directions by the sheath thickness. In a magnetized plasma, the electrode is sometimes assumed to be increased in a similar way by the ion Larmor radius . The finite Larmor radius allows some ions to reach the electrode that would have otherwise gone past it. The details of the effect have not been calculated in a fully self-consistent way. If we refer to the probe area including these effects as A e f f {\displaystyle A_{eff}} (which may be a function of the bias voltage), make the usual simplifying assumptions, and ignore the effects just discussed, then the I-V characteristic becomes I = I i m a x ( − 1 + e q e ( V p r − V f l ) / ( k B T e ) ) {\displaystyle I=I_{i}^{max}(-1+e^{q_{e}(V_{pr}-V_{fl})/(k_{B}T_{e})})} , where I i m a x = q e n e k B T e / m i A e f f {\displaystyle I_{i}^{max}=q_{e}n_{e}{\sqrt {k_{B}T_{e}/m_{i}}}\,A_{eff}} . The theory of Langmuir probes is much more complex when the plasma is magnetized.
The simplest extension of the unmagnetized case is simply to use the projected area rather than the surface area of the electrode. For a long cylinder far from other surfaces, this reduces the effective area by a factor of π/2 = 1.57. As mentioned before, it might be necessary to increase the radius by about the thermal ion Larmor radius, but not above the effective area for the unmagnetized case. The use of the projected area seems to be closely tied with the existence of a magnetic sheath . Its scale is the ion Larmor radius at the sound speed, which is normally between the scales of the Debye sheath and the pre-sheath. The Bohm criterion for ions entering the magnetic sheath applies to the motion along the field, while at the entrance to the Debye sheath it applies to the motion normal to the surface. This results in a reduction of the density by the sine of the angle between the field and the surface. The associated increase in the Debye length must be taken into account when considering ion non-saturation due to sheath effects. Especially interesting and difficult to understand is the role of cross-field currents. Naively, one would expect the current to be parallel to the magnetic field along a flux tube . In many geometries, this flux tube will end at a surface in a distant part of the device, and this spot should itself exhibit an I-V characteristic. The net result would be the measurement of a double-probe characteristic; in other words, electron saturation current equal to the ion saturation current. When this picture is considered in detail, it is seen that the flux tube must charge up and the surrounding plasma must spin around it. The current into or out of the flux tube must be associated with a force that slows down this spinning. Candidate forces are viscosity, friction with neutrals, and inertial forces associated with plasma flows, either steady or fluctuating. 
It is not known which force is strongest in practice, and in fact it is generally difficult to find any force that is powerful enough to explain the characteristics actually measured. It is also likely that the magnetic field plays a decisive role in determining the level of electron saturation, but no quantitative theory is as yet available. Once one has a theory of the I-V characteristic of an electrode, one can proceed to measure it and then fit the data with the theoretical curve to extract the plasma parameters. The straightforward way to do this is to sweep the voltage on a single electrode, but, for a number of reasons, configurations using multiple electrodes or exploring only a part of the characteristic are used in practice. The most straightforward way to measure the I-V characteristic of a plasma is with a single probe , consisting of one electrode biased with a voltage ramp relative to the vessel. The advantages are simplicity of the electrode and redundancy of information, i.e. one can check whether the I-V characteristic has the expected form. Potentially additional information can be extracted from details of the characteristic. The disadvantages are more complex biasing and measurement electronics and a poor time resolution. If fluctuations are present (as they always are) and the sweep is slower than the fluctuation frequency (as it usually is), then the I-V is the average current as a function of voltage, which may result in systematic errors if it is analyzed as though it were an instantaneous I-V . The ideal situation is to sweep the voltage at a frequency above the fluctuation frequency but still below the ion cyclotron frequency. This, however, requires sophisticated electronics and a great deal of care. An electrode can be biased relative to a second electrode, rather than to the ground. The theory is similar to that of a single probe, except that the current is limited to the ion saturation current for both positive and negative voltages. 
In particular, if V b i a s {\displaystyle V_{bias}} is the voltage applied between two identical electrodes, the current is given by: I = I i m a x ( − 1 + e q e ( V 2 − V f l ) / k B T e ) = − I i m a x ( − 1 + e q e ( V 1 − V f l ) / k B T e ) {\displaystyle I=I_{i}^{max}\left(-1+\,e^{q_{e}(V_{2}-V_{fl})/k_{B}T_{e}}\right)=-I_{i}^{max}\left(-1+\,e^{q_{e}(V_{1}-V_{fl})/k_{B}T_{e}}\right)} , which can be rewritten using V b i a s = V 2 − V 1 {\displaystyle V_{bias}=V_{2}-V_{1}} as a hyperbolic tangent : I = I i m a x tanh ⁡ ( 1 2 q e V b i a s k B T e ) {\displaystyle I=I_{i}^{max}\tanh \left({\frac {1}{2}}\,{\frac {q_{e}V_{bias}}{k_{B}T_{e}}}\right)} . One advantage of the double probe is that neither electrode is ever very far above floating, so the theoretical uncertainties at large electron currents are avoided. If it is desired to sample more of the exponential electron portion of the characteristic, an asymmetric double probe may be used, with one electrode larger than the other. If the ratio of the collection areas is larger than the square root of the ion-to-electron mass ratio, then this arrangement is equivalent to the single-tip probe. If the ratio of the collection areas is smaller than that, the characteristic will lie between the symmetric double-tip configuration and the single-tip configuration. If A 1 {\displaystyle A_{1}} is the area of the larger tip, then: I = A 1 J i m a x [ coth ⁡ ( q e V b i a s 2 k B T e ) + ( A 1 A 2 − 1 ) e − q e V b i a s / 2 k B T e 2 sinh ⁡ ( q e V b i a s 2 k B T e ) ] − 1 {\displaystyle I=A_{1}J_{i}^{max}\left[\coth \left({\frac {q_{e}V_{bias}}{2k_{B}T_{e}}}\right)+{\frac {\left({\frac {A_{1}}{A_{2}}}-1\right)\,e^{-q_{e}V_{bias}/2k_{B}T_{e}}}{2\sinh \left({\frac {q_{e}V_{bias}}{2k_{B}T_{e}}}\right)}}\right]^{-1}} Another advantage is that there is no reference to the vessel, so it is to some extent immune to the disturbances in a radio frequency plasma.
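The tanh form of the symmetric double-probe characteristic gives a direct way to simulate a probe trace. A sketch with Te expressed in eV (function name illustrative):

```python
import math

def double_probe_current(V_bias: float, I_sat: float, Te_eV: float) -> float:
    """Symmetric double-probe characteristic:
    I = I_sat * tanh(V_bias / (2*Te)), with Te in eV and V_bias in volts."""
    return I_sat * math.tanh(V_bias / (2.0 * Te_eV))

# The current saturates at +/- I_sat; the slope at the origin,
# I_sat / (2*Te), is commonly used to extract the electron temperature.
```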
On the other hand, it shares the limitations of a single probe concerning complicated electronics and poor time resolution. In addition, the second electrode not only complicates the system, but also makes it susceptible to disturbance by gradients in the plasma. An elegant electrode configuration is the triple probe, [ 2 ] consisting of two electrodes biased with a fixed voltage and a third which is floating. The bias voltage is chosen to be a few times the electron temperature so that the negative electrode draws the ion saturation current, which, like the floating potential, is directly measured. A common rule of thumb for this voltage bias is 3/e times the expected electron temperature. Because the biased tip pair is floating as a whole, the positive probe can draw an electron current at most equal in magnitude and opposite in polarity to the ion saturation current drawn by the negative probe, given by: − I + = I − = I i m a x {\displaystyle -I_{+}=I_{-}=I_{i}^{max}} and, as before, the floating tip draws effectively no current: I f l = 0 {\displaystyle I_{fl}=0} . Assuming that (1) the electron energy distribution in the plasma is Maxwellian, (2) the mean free path of the electrons is greater than the ion sheath about the tips and larger than the probe radius, and (3) the probe sheath sizes are much smaller than the probe separation, the current to any probe can be considered composed of two parts: the high-energy tail of the Maxwellian electron distribution, and the ion saturation current: I p r o b e = − I e e − q e V p r o b e / ( k T e ) + I i m a x {\displaystyle I_{probe}=-I_{e}e^{-q_{e}V_{probe}/(kT_{e})}+I_{i}^{max}} where I e is the electron thermal current. Specifically, I e = S J e = S n e q e k T e / 2 π m e {\displaystyle I_{e}=SJ_{e}=Sn_{e}q_{e}{\sqrt {kT_{e}/2\pi m_{e}}}} , where S is the surface area, J e is the electron current density, and n e is the electron density. 
[ 3 ] Assuming that the ion and electron saturation current is the same for each probe, then the formulas for current to each of the probe tips take the form I + = − I e e − q e V + / ( k T e ) + I i m a x {\displaystyle I_{+}=-I_{e}e^{-q_{e}V_{+}/(kT_{e})}+I_{i}^{max}} I − = − I e e − q e V − / ( k T e ) + I i m a x {\displaystyle I_{-}=-I_{e}e^{-q_{e}V_{-}/(kT_{e})}+I_{i}^{max}} I f l = − I e e − q e V f l / ( k T e ) + I i m a x {\displaystyle I_{fl}=-I_{e}e^{-q_{e}V_{fl}/(kT_{e})}+I_{i}^{max}} . It is then simple to show ( I + − I f l ) / ( I + − I − ) = ( 1 − e − q e ( V f l − V + ) / ( k T e ) ) / ( 1 − e − q e ( V − − V + ) / ( k T e ) ) {\displaystyle \left(I_{+}-I_{fl})/(I_{+}-I_{-}\right)=\left(1-e^{-q_{e}(V_{fl}-V_{+})/(kT_{e})}\right)/\left(1-e^{-q_{e}(V_{-}-V_{+})/(kT_{e})}\right)} but the relations from above specifying that I + =-I − and I fl =0 give 1 / 2 = ( 1 − e − q e ( V f l − V + ) / ( k T e ) ) / ( 1 − e − q e ( V − − V + ) / ( k T e ) ) {\displaystyle 1/2=\left(1-e^{-q_{e}(V_{fl}-V_{+})/(kT_{e})}\right)/\left(1-e^{-q_{e}(V_{-}-V_{+})/(kT_{e})}\right)} , a transcendental equation in terms of applied and measured voltages and the unknown T e that in the limit q e V Bias = q e (V + -V − ) >> k T e , becomes ( V + − V f l ) = ( k B T e / q e ) ln ⁡ 2 {\displaystyle (V_{+}-V_{fl})=(k_{B}T_{e}/q_{e})\ln 2} . That is, the voltage difference between the positive and floating electrodes is proportional to the electron temperature. (This was especially important in the sixties and seventies before sophisticated data processing became widely available.) More sophisticated analysis of triple probe data can take into account such factors as incomplete saturation, non-saturation, unequal areas. Triple probes have the advantage of simple biasing electronics (no sweeping required), simple data analysis, excellent time resolution, and insensitivity to potential fluctuations (whether imposed by an rf source or inherent fluctuations). 
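The limiting relation (V+ − V_fl) = (k_B T_e/q_e) ln 2 derived above gives a one-line electron-temperature estimate from two directly measured voltages. A minimal sketch (function name ours; voltages in volts, result in eV):

```python
import math

def triple_probe_te_ev(v_plus, v_float):
    """Electron temperature from a triple probe, using
    (V+ - V_fl) = (k_B*T_e/q_e) * ln 2, the limit valid when
    q_e*V_bias >> k_B*T_e. Voltages in V, result in eV."""
    return (v_plus - v_float) / math.log(2)
```

This simplicity (no voltage sweep, no curve fitting) is the reason for the excellent time resolution of the triple probe noted above.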
Like double probes, they are sensitive to gradients in plasma parameters. Arrangements with four electrodes ( tetra probe ) or five ( penta probe ) have sometimes been used, but the advantage over triple probes has never been entirely convincing. The spacing between probes must be larger than the Debye length of the plasma to prevent an overlapping Debye sheath . A pin-plate probe consists of a small electrode directly in front of a large electrode, the idea being that the voltage sweep of the large probe can perturb the plasma potential at the sheath edge and thereby aggravate the difficulty of interpreting the I-V characteristic. The floating potential of the small electrode can be used to correct for changes in potential at the sheath edge of the large probe. Experimental results from this arrangement look promising, but experimental complexity and residual difficulties in the interpretation have prevented this configuration from becoming standard. Various geometries have been proposed for use as ion temperature probes , for example, two cylindrical tips that rotate past each other in a magnetized plasma. Since shadowing effects depend on the ion Larmor radius, the results can be interpreted in terms of ion temperature. The ion temperature is an important quantity that is very difficult to measure. Unfortunately, it is also very difficult to analyze such probes in a fully self-consistent way. Emissive probes use an electrode heated either electrically or by exposure to the plasma. When the electrode is biased more positive than the plasma potential, the emitted electrons are pulled back to the surface, so the I-V characteristic is hardly changed. As soon as the electrode is biased negative with respect to the plasma potential, the emitted electrons are repelled and contribute a large negative current. 
The onset of this current or, more sensitively, the onset of a discrepancy between the characteristics of an unheated and a heated electrode, is a sensitive indicator of the plasma potential. To measure fluctuations in plasma parameters, arrays of electrodes are used, usually one- but occasionally two-dimensional. A typical array has a spacing of 1 mm and a total of 16 or 32 electrodes. A simpler arrangement to measure fluctuations is a negatively biased electrode flanked by two floating electrodes. The ion saturation current is taken as a surrogate for the density and the floating potential as a surrogate for the plasma potential. This allows a rough measurement of the turbulent particle flux Φ t u r b = ⟨ n ~ e v ~ E × B ⟩ ∝ ⟨ I ~ i m a x ( V ~ f l , 2 − V ~ f l , 1 ) ⟩ {\displaystyle \Phi _{turb}=\langle {\tilde {n}}_{e}{\tilde {v}}_{E\times B}\rangle \propto \langle {\tilde {I}}_{i}^{max}({\tilde {V}}_{fl,2}-{\tilde {V}}_{fl,1})\rangle } Most often, the Langmuir probe is a small electrode inserted into a plasma and connected to an external circuit that measures the properties of the plasma with respect to ground. The ground is typically an electrode with a large surface area, usually in contact with the same plasma (very often the metallic wall of the chamber). This allows the probe to measure the I-V characteristic of the plasma. The probe measures the characteristic current i ( V ) {\displaystyle i(V)} of the plasma when the probe is biased with a potential V {\displaystyle V} . Relations between the probe I-V characteristic and the parameters of an isotropic plasma were found by Irving Langmuir [ 4 ] and can be derived most simply for a planar probe of large surface area S z {\displaystyle S_{z}} (ignoring the edge-effects problem). 
Let us choose a point O {\displaystyle O} in the plasma at a distance h {\displaystyle h} from the probe surface where the electric field of the probe is negligible, while each plasma electron passing this point can reach the probe surface without collisions with plasma components: λ D ≪ λ T e {\displaystyle \lambda _{D}\ll \lambda _{Te}} , where λ D {\displaystyle \lambda _{D}} is the Debye length and λ T e {\displaystyle \lambda _{Te}} is the electron free path calculated for its total cross section with plasma components. In the vicinity of the point O {\displaystyle O} we can imagine a small element of surface area Δ S {\displaystyle \Delta S} parallel to the probe surface. The elementary current d i {\displaystyle di} of plasma electrons passing through Δ S {\displaystyle \Delta S} in the direction of the probe surface can be written in the form where v {\displaystyle v} is the magnitude of the electron thermal velocity vector v → {\displaystyle {\vec {v}}} , 2 π sin ⁡ ϑ d ϑ {\displaystyle 2\pi \sin \vartheta d\vartheta } is the element of solid angle, with relative value 2 π sin ⁡ ϑ d ϑ / 4 π {\displaystyle 2\pi \sin \vartheta d\vartheta /4\pi } , ϑ {\displaystyle \vartheta } is the angle between the normal to the probe surface drawn from the point O {\displaystyle O} and the electron thermal velocity vector v → {\displaystyle {\vec {v}}} , which sweeps a spherical layer of thickness d v {\displaystyle dv} in velocity space, and f ( v ) {\displaystyle f(v)} is the electron distribution function, normalized to unity. Taking into account uniform conditions along the probe surface (boundaries excluded), Δ S → S z {\displaystyle \Delta S\rightarrow S_{z}} , we can take the double integral of expression ( 1 ), with respect to the angle ϑ {\displaystyle \vartheta } and with respect to the velocity v {\displaystyle v} , after substituting Eq. 
( 2 ) into it, to calculate the total electron current on the probe where V {\displaystyle V} is the probe potential with respect to the plasma potential V = 0 {\displaystyle V=0} , 2 q e V / m {\displaystyle {\sqrt {2q_{e}V/m}}} is the lowest electron speed at which an electron can still reach the probe surface charged to the potential V {\displaystyle V} , and ζ {\displaystyle \zeta } is the upper limit of the angle ϑ {\displaystyle \vartheta } at which an electron with initial velocity v {\displaystyle v} can still reach the probe surface with zero velocity at that surface; that is, the value ζ {\displaystyle \zeta } is defined by the condition Deriving the value ζ {\displaystyle \zeta } from Eq. ( 5 ) and substituting it into Eq. ( 4 ), we obtain the probe I-V characteristic (neglecting the ion current) in the range of probe potentials − ∞ < V ≤ 0 {\displaystyle -\infty <V\leq 0} in the form Differentiating Eq. ( 6 ) twice with respect to the potential V {\displaystyle V} , one finds the expression for the second derivative of the probe I-V characteristic (first obtained by M. J. Druyvesteyn [ 5 ] ), which defines the electron distribution function over velocity f ( 2 q e V / m ) {\displaystyle f\left({\sqrt {2q_{e}V/m}}\right)} explicitly. Druyvesteyn showed in particular that Eqs. ( 6 ) and ( 7 ) are valid for describing the operation of a probe of any arbitrary convex geometrical shape. Substituting the Maxwellian distribution function, where v p = ⟨ v ⟩ π / 2 {\displaystyle v_{p}=\langle v\rangle {\sqrt {\pi }}/2} is the most probable velocity, into Eq. ( 6 ), we obtain the expression from which follows a relation, very useful in practice, that allows one to derive the electron energy E p = k B T {\displaystyle {\mathcal {E}}_{p}=k_{B}T} (for a Maxwellian distribution function only!) from the slope of the probe I-V characteristic on a semilogarithmic scale. 
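The semilogarithmic-slope relation just mentioned can be sketched numerically. In the sketch below (function name and synthetic data are ours), the retarding-field part of the characteristic, i ∝ exp(q_e V/(k_B T)), is fitted with a straight line in ln i versus V, and the inverse slope gives the electron energy in eV:

```python
import numpy as np

def te_ev_from_semilog_slope(v, i):
    """Fit ln(i) against V in the retarding region (V <= 0, electron
    current only); for a Maxwellian, i ~ exp(q_e*V/(k_B*T)), so the
    inverse of the fitted slope is k_B*T/q_e in eV."""
    slope, _intercept = np.polyfit(v, np.log(i), 1)
    return 1.0 / slope

# Synthetic retarding-region data for an electron temperature of 3 eV:
v = np.linspace(-10.0, 0.0, 100)
i = 2.5e-4 * np.exp(v / 3.0)
```

On real data one would first subtract the ion current and restrict the fit to the retarding region, since the exponential behavior holds only there (and only for a Maxwellian distribution).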
Thus in plasmas with isotropic electron distributions, the electron current i t h ( 0 ) {\displaystyle i_{th}(0)} on a surface S z = 2 π r z l z {\displaystyle S_{z}=2\pi r_{z}l_{z}} of a cylindrical Langmuir probe at plasma potential V = 0 {\displaystyle V=0} is defined by the average electron thermal velocity ⟨ v ⟩ {\displaystyle \langle v\rangle } and can be written down (see Eqs. ( 6 ), ( 9 ) at V = 0 {\displaystyle V=0} ) as i t h ( 0 ) = 1 4 q e n ⟨ v ⟩ S z {\displaystyle i_{th}(0)={\tfrac {1}{4}}q_{e}n\langle v\rangle S_{z}} , where n {\displaystyle n} is the electron concentration, r z {\displaystyle r_{z}} is the probe radius, and l z {\displaystyle l_{z}} is its length. If, instead, the plasma electrons form an electron wind ( flow ) across the cylindrical probe axis with a velocity v d ≫ ⟨ v ⟩ {\displaystyle v_{d}\gg \langle v\rangle } , the corresponding convective expression, Eq. ( 12 ), holds instead. In plasmas produced by gas-discharge arc sources as well as inductively coupled sources, the electron wind can develop a Mach number M ( 0 ) = v d / ⟨ v ⟩ = ( π / 2 ) α ≳ 1 {\displaystyle M^{(0)}=v_{d}/\langle v\rangle =({\sqrt {\pi }}/2)\alpha \gtrsim 1} . Here the parameter α {\displaystyle \alpha } is introduced along with the Mach number to simplify the mathematical expressions. Note that ( π / 2 ) ⟨ v ⟩ = v p {\displaystyle ({\sqrt {\pi }}/2)\langle v\rangle =v_{p}} , where v p {\displaystyle v_{p}} is the most probable velocity for the Maxwellian distribution function, so that α = v d / v p {\displaystyle \alpha =v_{d}/v_{p}} . Thus the general case where α ≳ 1 {\displaystyle \alpha \gtrsim 1} is of theoretical and practical interest. The corresponding physical and mathematical considerations presented in Refs. 
[9,10] have shown that, for a Maxwellian distribution function of the electrons in a reference frame moving with velocity v d {\displaystyle v_{d}} across the axis of a cylindrical probe set at plasma potential V = 0 {\displaystyle V=0} , the electron current on the probe can be written in the form where I 0 {\displaystyle I_{0}} and I 1 {\displaystyle I_{1}} are Bessel functions of imaginary argument, and Eq. ( 13 ) reduces to Eq. ( 11 ) as α → 0 {\displaystyle \alpha \rightarrow 0} and to Eq. ( 12 ) as α → ∞ {\displaystyle \alpha \rightarrow \infty } . The second derivative of the probe I-V characteristic i ′ ′ ( V ) {\displaystyle i^{\prime \prime }(V)} with respect to the probe potential V {\displaystyle V} can in this case be presented in the form (see Fig. 3), where the electron energy E p / e {\displaystyle {\mathcal {E}}_{p}/e} is expressed in eV. All parameters of the electron population ( n {\displaystyle n} , α {\displaystyle \alpha } , ⟨ v ⟩ {\displaystyle \langle v\rangle } and v p {\displaystyle v_{p}} ) in the plasma can be derived from the second derivative i ′ ′ ( V ) {\displaystyle i^{\prime \prime }(V)} of the experimental probe I-V characteristic by its least-squares best fit with the theoretical curve expressed by Eq. ( 14 ). For details, and for the general case of non-Maxwellian electron distribution functions, see [ 6 ] [ 7 ] . For laboratory and technical plasmas, the electrodes are most commonly tungsten or tantalum wires several thousandths of an inch thick, because they have a high melting point but can be made small enough not to perturb the plasma. Although the melting point is somewhat lower, molybdenum is sometimes used because it is easier to machine and solder than tungsten. 
For fusion plasmas, graphite electrodes with dimensions from 1 to 10 mm are usually used because they can withstand the highest power loads (sublimating at high temperatures rather than melting) and result in reduced bremsstrahlung radiation (with respect to metals) due to the low atomic number of carbon. The electrode surface exposed to the plasma must be defined, e.g. by insulating all but the tip of a wire electrode. If there can be significant deposition of conducting materials (metals or graphite), then the insulator should be separated from the electrode by a meander to prevent short-circuiting. In a magnetized plasma, it appears to be best to choose a probe size a few times larger than the ion Larmor radius. A point of contention is whether it is better to use proud probes , where the angle between the magnetic field and the surface is at least 15°, or flush-mounted probes , which are embedded in the plasma-facing components and generally have an angle of 1 to 5°. Many plasma physicists feel more comfortable with proud probes, which have a longer tradition and possibly are less perturbed by electron saturation effects, although this is disputed. Flush-mounted probes, on the other hand, being part of the wall, are less perturbative. Knowledge of the field angle is necessary with proud probes to determine the fluxes to the wall, whereas it is necessary with flush-mounted probes to determine the density. In very hot and dense plasmas, as found in fusion research, it is often necessary to limit the thermal load to the probe by limiting the exposure time. A reciprocating probe is mounted on an arm that is moved into and back out of the plasma, usually in about one second, by means of either a pneumatic drive or an electromagnetic drive using the ambient magnetic field. Pop-up probes are similar, but the electrodes rest behind a shield and are only moved the few millimeters necessary to bring them into the plasma near the wall. 
A Langmuir probe can be purchased off the shelf for on the order of 15,000 U.S. dollars, or one can be built by an experienced researcher or technician. When working at frequencies under 100 MHz, it is advisable to use blocking filters and to take the necessary grounding precautions. In low-temperature plasmas, in which the probe does not get hot, surface contamination may become an issue. This effect can cause hysteresis in the I-V curve and may limit the current collected by the probe. [ 8 ] A heating mechanism or a glow-discharge plasma may be used to clean the probe and prevent misleading results.
https://en.wikipedia.org/wiki/Langmuir_probe
In quantum mechanics , Langmuir states are certain quantum states of helium that in the classical limit correspond to two parallel circular orbits of the electrons, one above the other, with the nucleus in between. [ 1 ] They are constructed in analogy to the circular states of hydrogen , in which the electron has the maximum angular momentum and moves on a circle. Because the helium nucleus has the special charge 2e, the nucleus–electron–electron triangle that sweeps the configuration space during the circular motion is equilateral. [ 2 ]
https://en.wikipedia.org/wiki/Langmuir_states
In fluid dynamics and oceanography , Langmuir turbulence is a turbulent flow with coherent Langmuir circulation structures that exist and evolve over a range of spatial and temporal scales. [ 1 ] These structures arise through an interaction between the ocean surface waves and the currents. In the upper ocean, Langmuir circulations are a special case in which the turbulent structures exhibit a dominant cell size. In general, Langmuir turbulence is expected to be a global ocean phenomenon, not confined to gentle wind conditions or shallow waterways (as with most observations of Langmuir circulation ). [ 2 ] An important consequence of Langmuir turbulence is the formation of deeply penetrating jets. [ 3 ] These features occur between counter-rotating Langmuir circulations and can inject turbulent kinetic energy to depths well below the depth scale of the surface waves (the Stokes drift depth scale). [ 4 ] Langmuir turbulence could have an important impact on our understanding of climate. [ 2 ] In particular, it could affect the global ocean's sea surface temperature, as the deeply penetrating Langmuir jets modify the depth of the ocean mixed layer.
https://en.wikipedia.org/wiki/Langmuir_turbulence
A Langmuir–Blodgett (LB) film is an emerging kind of 2D material used to fabricate heterostructures for nanotechnology, formed when Langmuir films—or Langmuir monolayers (LM)—are transferred from the liquid–gas interface to solid supports during the vertical passage of the support through the monolayers. LB films can contain one or more monolayers of an organic material, deposited from the surface of a liquid onto a solid by immersing (or emersing) the solid substrate into (or from) the liquid. A monolayer is adsorbed homogeneously with each immersion or emersion step, so films of very accurately known thickness can be formed: since the thickness of each monolayer is known, the total thickness of a Langmuir–Blodgett film is simply the sum over the deposited monolayers. The monolayers are assembled vertically and are usually composed either of amphiphilic molecules (see chemical polarity ) with a hydrophilic head and a hydrophobic tail (example: fatty acids ), or, nowadays commonly, of nanoparticles . [ 1 ] Langmuir–Blodgett films are named after Irving Langmuir and Katharine B. Blodgett , who invented this technique while working in Research and Development for General Electric Co. The advances leading to the discovery of LB and LM films began with Benjamin Franklin in 1773, when he dropped about a teaspoon of oil onto a pond. Franklin noticed that the waves were calmed almost instantly and that the calming of the waves spread over about half an acre . [ 2 ] What Franklin did not realize was that the oil had formed a monolayer on top of the pond surface. Over a century later, Lord Rayleigh quantified what Benjamin Franklin had seen. Knowing that the oil, oleic acid , had spread evenly over the water, Rayleigh calculated from the volume of oil dropped and the area of coverage that the thickness of the film was 1.6 nm. With the help of her kitchen sink, Agnes Pockels showed that the area of a film can be controlled with barriers. 
She also showed that surface tension varies with contamination of the water, and she used different oils to deduce that the surface pressure does not change until the area is confined to about 0.2 nm 2 . This work was originally written as a letter to Lord Rayleigh, who then helped Agnes Pockels get it published in the journal Nature in 1891. Agnes Pockels' work set the stage for Irving Langmuir, who continued the work and confirmed Pockels' results. Using Pockels' idea, he developed the Langmuir (or Langmuir–Blodgett ) trough. His observations indicated that chain length did not impact the area occupied, since the organic molecules were arranged vertically. Langmuir's breakthrough did not occur until he hired Katharine Blodgett as his assistant. Blodgett initially sought a job at General Electric ( GE ) with Langmuir during the Christmas break of her senior year at Bryn Mawr College , where she received a BA in physics . Langmuir advised Blodgett to continue her education before working for him, and she thereafter attended the University of Chicago for an MA in chemistry . After she completed her master's degree, Langmuir hired her as his assistant. Her breakthroughs in surface chemistry, however, came after she received her PhD from Cambridge University in 1926. While working for GE, Langmuir and Blodgett discovered that when a solid surface is inserted into an aqueous solution containing organic moieties, the organic molecules deposit a homogeneous monolayer over the surface. This is the Langmuir–Blodgett film deposition process. Through this work in surface chemistry, and with the help of Blodgett, Langmuir was awarded the Nobel Prize in 1932. In addition, Blodgett used Langmuir–Blodgett films to create 99% transparent anti-reflective glass by coating glass with fluorinated organic compounds, forming a simple anti-reflective coating . 
Langmuir films are formed when amphiphilic molecules (surfactants) or nanoparticles are spread on water at the air–water interface. Surfactants (surface-active agents) are molecules with hydrophobic 'tails' and hydrophilic 'heads'. When the surfactant concentration is below the surface concentration at which the monolayer collapses, and the surfactant is completely insoluble in water, the surfactant molecules arrange themselves as shown in Figure 1 below. This tendency can be explained by surface-energy considerations. Since the tails are hydrophobic, their exposure to air is favoured over exposure to water. Similarly, since the heads are hydrophilic, the head–water interaction is more favourable than the head–air interaction. The overall effect is a reduction in the surface energy (or, equivalently, the surface tension of water). At very small concentrations, far from the surface density compatible with the collapse of the monolayer (which leads to polylayer structures), the surfactant molecules execute a random motion on the water–air interface. This motion can be thought of as similar to the motion of ideal-gas molecules enclosed in a container. The corresponding thermodynamic variables for the surfactant system are the surface pressure ( Π {\displaystyle \Pi } ), the surface area (A) and the number of surfactant molecules (N). This system behaves similarly to a gas in a container. The density of surfactant molecules, as well as the surface pressure, increases upon reducing the surface area A ('compression' of the 'gas'). Further compression of the surfactant molecules on the surface shows behavior similar to phase transitions : the 'gas' gets compressed into a 'liquid' and ultimately into a perfectly close-packed array of surfactant molecules on the surface, corresponding to a 'solid' state. The liquid state is usually divided into the liquid-expanded and liquid-condensed states. 
All Langmuir film states are classified according to the compression modulus of the film, defined as -A(d ( Π {\displaystyle \Pi } )/dA), which characterizes the in-plane elasticity of the monolayer. Condensed Langmuir films (at surface pressures usually higher than 15 mN/m, typically 30 mN/m) can subsequently be transferred onto a solid substrate to create highly organized thin-film coatings. Besides LB films from the surfactants depicted in Figure 1, similar monolayers can also be made from inorganic nanoparticles. [ 3 ] Adding a monolayer to the surface reduces the surface tension , and the surface pressure Π {\displaystyle \Pi } is given by Π = γ 0 − γ {\displaystyle \Pi =\gamma _{0}-\gamma } , where γ 0 {\displaystyle \gamma _{0}} is equal to the surface tension of the water and γ {\displaystyle \gamma } is the surface tension due to the monolayer. The concentration dependence of the surface tension (similar to the Langmuir isotherm ) then leads, for a dilute monolayer, to a relationship similar to the ideal gas law , Π A = N k B T {\displaystyle \Pi A=Nk_{B}T} . However, this concentration dependence of the surface tension is valid only when the solutions are dilute and the concentrations are low. Hence, at very low concentrations of the surfactant, the molecules behave like ideal-gas molecules. Experimentally, the surface pressure is usually measured using the Wilhelmy plate . A pressure sensor/electrobalance arrangement detects the force exerted by the monolayer. Also monitored is the area on the side of the barrier on which the monolayer resides. A simple force balance on the plate leads to the equation Π = Δ F / 2 w p {\displaystyle \Pi =\Delta F/2w_{p}} for the surface pressure, valid only when w p ≫ t p {\displaystyle w_{p}\gg t_{p}} . Here, ℓ p , w p {\displaystyle \ell _{p},w_{p}} and t p {\displaystyle t_{p}} are the dimensions of the plate, and Δ F {\displaystyle \Delta F} is the difference in forces. 
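The two relations above, the definition Π = γ₀ − γ and the dilute-limit 2D ideal-gas law ΠA = Nk_BT, can be sketched as follows (function names and the numerical values are illustrative only):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def surface_pressure(gamma_water, gamma_film):
    """Pi = gamma_0 - gamma: the reduction of the surface tension
    caused by the monolayer (units follow the inputs, e.g. mN/m)."""
    return gamma_water - gamma_film

def ideal_2d_gas_pressure(n_molecules, area_m2, temp_k):
    """Dilute ('gaseous') limit of the monolayer: Pi = N*k_B*T/A,
    the 2D analogue of the ideal gas law (result in N/m)."""
    return n_molecules * K_B * temp_k / area_m2
```

For example, with the commonly quoted surface tension of clean water near 72.8 mN/m and a film-covered value of 45 mN/m, the surface pressure would be 27.8 mN/m; the second function simply makes the linear dependence of Π on surface density explicit in the dilute limit.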
The Wilhelmy plate measurements give pressure–area isotherms that show the phase-transition-like behaviour of LM films, as mentioned before (see figure below). In the gaseous phase, there is minimal pressure increase for a decrease in area. This continues until the first transition occurs, after which there is a proportional increase in pressure with decreasing area. Moving into the solid region is accompanied by another sharp transition to a more strongly area-dependent pressure. This trend continues up to a point where the molecules are relatively close-packed and have very little room to move. Applying further pressure at this point makes the monolayer unstable and destroys it, forming polylayer structures towards the air phase. The surface pressure during the monolayer collapse may remain approximately constant (in a process near equilibrium) or may drop abruptly (out of equilibrium, when the surface pressure has been increased too far because the lateral compression was too fast for monomolecular rearrangements). Many possible applications have been suggested over the years for LM and LB films. Their characteristic features are extreme thinness and a high degree of structural order. Depending on the specific organic compounds of which they are composed, the films exhibit a variety of optical, electrical and biological properties. Organic compounds usually respond more strongly than inorganic materials to outside factors (changes in pressure , temperature or gas composition). LM films can also be used as models for one half of a cellular membrane.
https://en.wikipedia.org/wiki/Langmuir–Blodgett_film
A Langmuir–Taylor detector , also called surface ionization detector or hot wire detector , is a kind of ionization detector used in mass spectrometry , developed by John Taylor [ 1 ] based on the work of Irving Langmuir and K. H. Kingdon . [ 2 ] This detector usually consists of a heated thin filament or ribbon of a metal with a high work function (typically tungsten or rhenium ). Neutral atoms or molecules that strike the filament can boil off as positive ions in a process known as surface ionization , and these may be either measured as a current or detected, individually, using an electron multiplier and particle counting electronics. This detector is mostly used with alkali atoms, having a low ionization potential, with applications in mass spectrometry and atomic clocks .
https://en.wikipedia.org/wiki/Langmuir–Taylor_detector
In computer science , language-based security ( LBS ) is a set of techniques that may be used to strengthen the security of applications at a high level by using the properties of programming languages. LBS is considered to enforce computer security at the application level, making it possible to prevent vulnerabilities which traditional operating system security is unable to handle. Software applications are typically specified and implemented in certain programming languages , and in order to protect against attacks, flaws and bugs to which an application's source code might be vulnerable, there is a need for application-level security: security that evaluates the application's behavior with respect to the programming language. This area is generally known as language-based security. Large software systems, such as SCADA , are in use all around the world, [ 1 ] and computer systems constitute the core of many infrastructures. Society relies greatly on infrastructure such as water, energy, communication and transportation, all of which in turn rely on fully functioning computer systems. There are several well-known examples of critical systems failing due to bugs or errors in software, such as when a shortage of computer memory caused LAX computers to crash and hundreds of flights to be delayed (April 30, 2014). [ 2 ] [ 3 ] Traditionally, the mechanisms used to control the correct behavior of software are implemented at the operating system level. The operating system handles several possible security violations, such as memory access violations, stack overflow violations, access control violations, and many others. This is a crucial part of security in computer systems; however, by securing the behavior of software at a more specific level, even stronger security can be achieved. Since many properties and behaviors of the software are lost in compilation, it is significantly more difficult to detect vulnerabilities in machine code. 
By evaluating the source code before compilation, the theory and implementation of the programming language can also be taken into account, and more vulnerabilities can be uncovered. "So why do developers keep making the same mistakes? Instead of relying on programmers' memories, we should strive to produce tools that codify what is known about common security vulnerabilities and integrate it directly into the development process." — D. Evans and D. Larochelle, 2002 By using LBS, the security of software can be increased in several areas, depending on the techniques used. Common programming errors, such as allowing buffer overflows and illegal information flows, can be detected and disallowed in the software used by the consumer. It is also desirable to provide some proof to the consumer about the security properties of the software, making it possible for the consumer to trust the software without having to receive the source code and check it for errors themselves. A compiler, taking source code as input, performs several language-specific operations on the code in order to translate it into machine-readable code. Lexical analysis , preprocessing , parsing , semantic analysis , code generation , and code optimization are all commonly used operations in compilers. By analyzing the source code and using the theory and implementation of the language, the compiler attempts to correctly translate the high-level code into low-level code, preserving the behavior of the program. During compilation of programs written in a type-safe language, such as Java , the source code must type-check successfully before compilation. If the type check fails, compilation is not performed, and the source code must be modified. This means that, given a correct compiler, any code compiled from a successfully type-checked source program should be free of invalid assignment errors. 
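As a toy illustration of the guarantee that type checking provides (sketched in Python for uniformity; this is a hypothetical checker we wrote for illustration, not how Java's compiler is implemented), the following refuses to run a call whose arguments do not match the function's declared types:

```python
from typing import get_type_hints

def typecheck_call(fn, *args):
    """Verify argument types against fn's annotations before calling it:
    a sketch of the check a static type checker performs before a
    program is allowed to run."""
    hints = get_type_hints(fn)
    expected_types = [t for name, t in hints.items() if name != "return"]
    for value, expected in zip(args, expected_types):
        if not isinstance(value, expected):
            raise TypeError(
                f"expected {expected.__name__}, got {type(value).__name__}")
    return fn(*args)

def concat(a: str, b: str) -> str:
    return a + b
```

Here typecheck_call(concat, "type", "safe") succeeds, while typecheck_call(concat, 1, "safe") is rejected before concat ever executes: this is the sense in which successfully type-checked code should be free of invalid assignment errors.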
This is information which can be of value to the code consumer, as it provides some degree of guarantee that the program will not crash due to some specific error. A goal of LBS is to ensure the presence of certain properties in the source code corresponding to the safety policy of the software. Information gathered during the compilation can be used to create a certificate that can be provided to the consumer as a proof of safety in the given program. Such a proof requires that the consumer can trust the compiler used by the supplier and that the certificate, i.e. the information about the source code, can be verified. The figure illustrates how certification and verification of low-level code could be established by the use of a certifying compiler. The software supplier gains the advantage of not having to reveal the source code, and the consumer is left with the task of verifying the certificate, which is an easy task compared to evaluation and compilation of the source code itself. Verifying the certificate only requires a limited trusted code base containing the compiler and the verifier. The main applications of program analysis are program optimization (running time, space requirements, power consumption etc.) and program correctness (bugs, security vulnerabilities etc.). Program analysis can be applied at compile time ( static analysis ), at run time ( dynamic analysis ), or both. In language-based security, program analysis can provide several useful features, such as: type checking (static and dynamic), monitoring , taint checking and control-flow analysis . Information flow analysis can be described as a set of tools used to analyze the information flow control in a program, in order to preserve confidentiality and integrity where regular access control mechanisms fall short. "By decoupling the right to access information from the right to disseminate it, the flow model goes beyond the access matrix model in its ability to specify secure information flow. 
A practical system needs both access and flow control to satisfy all security requirements." — D. Denning, 1976 Access control enforces checks on access to information, but is not concerned with what happens after that. An example: a system has two users, Alice and Bob. Alice has a file, secret.txt , which only she is allowed to read and edit, and she prefers to keep this information to herself. The system also contains a file, public.txt , which all users of the system are free to read and edit. Now suppose that Alice has accidentally downloaded a malicious program. This program can access the system as Alice, bypassing the access control check on secret.txt . The malicious program then copies the content of secret.txt and places it in public.txt , allowing Bob and all other users to read it. This constitutes a violation of the intended confidentiality policy of the system. Noninterference is the property that a program does not leak or reveal information about variables with a higher security classification through variables with a lower security classification. A program that satisfies noninterference produces the same output on the lower variables whenever the same inputs are given to the lower variables. This must hold for every possible value of the input. It implies that even if the higher variables in the program have different values from one execution to another, this should not be visible in the lower variables. An attacker could repeatedly and systematically execute a program that does not satisfy noninterference to try to map its behavior. Several iterations could lead to the disclosure of higher variables, letting the attacker learn sensitive information about, for example, the system's state. Whether a program satisfies noninterference or not can be evaluated during compilation, assuming the presence of security type systems . 
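The noninterference idea can be made concrete with a small sketch (not from the article; the programs `leaky` and `safe` and the sampling harness are hypothetical illustrations): fix the low-security input, vary the high-security input, and observe whether the low-visible output changes.

```python
# Illustrative sketch of noninterference: the low-visible output of a
# program must be identical for every value of the high-security input.

def leaky(high, low):
    # Low output depends on the high input: violates noninterference.
    return low + (1 if high > 100 else 0)

def safe(high, low):
    # Low output ignores the high input entirely.
    return low * 2

def satisfies_noninterference(program, high_inputs, low_input):
    """True if the low output is identical for every high input tried.
    (A real proof must cover *all* inputs; this only samples some.)"""
    outputs = {program(h, low_input) for h in high_inputs}
    return len(outputs) == 1

highs = [0, 50, 101, 999]
print(satisfies_noninterference(leaky, highs, low_input=7))  # False
print(satisfies_noninterference(safe, highs, low_input=7))   # True
```

This also shows why the attack described above works: by sampling several runs of `leaky`, an observer of the low output learns whether `high` exceeds 100.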
A security type system is a kind of type system that can be used by software developers to check the security properties of their code. In a language with security types, the types of variables and expressions relate to the security policy of the application, and programmers may be able to specify the application's security policy via type declarations. Types can be used to reason about various kinds of security policies, including authorization policies (such as access control or capabilities) and information flow security. Security type systems can be formally related to the underlying security policy, and a security type system is sound if all programs that type-check satisfy the policy in a semantic sense. For example, a security type system for information flow might enforce noninterference, meaning that type checking reveals whether there is any violation of confidentiality or integrity in the program. Vulnerabilities in low-level code are bugs or flaws that will lead the program into a state where further behavior of the program is undefined by the source programming language. The behavior of the low-level program will depend on compiler, runtime system or operating system details. This allows an attacker to drive the program towards an undefined state and exploit the behavior of the system. Common exploits of insecure low-level code let an attacker perform unauthorized reads or writes to memory addresses. The memory addresses can be either random or chosen by the attacker. An approach to achieving secure low-level code is to use safe high-level languages. A safe language is considered to be completely defined by its programmers' manual. [ 4 ] Any bug that could lead to implementation-dependent behavior in a safe language will either be detected at compile time or lead to a well-defined error behavior at run time. In Java , accessing an array out of bounds throws an exception. Examples of other safe languages are C# , Haskell and Scala . 
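As a rough illustration of how a security type system rejects illegal explicit flows, here is a minimal sketch (the two-level lattice, the `check_assignments` helper, and the variable names are assumptions for illustration, not the API of any real system):

```python
# Hypothetical minimal "security type system": every variable carries a
# label (LOW < HIGH), and an assignment target's label must be at least
# as high as the label of every variable read on the right-hand side.
LOW, HIGH = 0, 1

def check_assignments(labels, assignments):
    """assignments: list of (target, [sources]). Returns violating targets."""
    violations = []
    for target, sources in assignments:
        rhs_label = max((labels[s] for s in sources), default=LOW)
        if rhs_label > labels[target]:   # explicit flow from HIGH to LOW
            violations.append(target)
    return violations

labels = {"secret": HIGH, "public": LOW, "display": LOW, "audit": HIGH}
program = [
    ("audit",   ["secret", "public"]),  # HIGH := join(HIGH, LOW) - allowed
    ("display", ["secret"]),            # LOW  := HIGH            - rejected
]
print(check_assignments(labels, program))  # ['display']
```

A sound checker of this kind would also have to track implicit flows through control flow (e.g. branching on `secret`), which this sketch omits.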
During compilation of an unsafe language, run-time checks are added to the low-level code to detect source-level undefined behavior. An example is the use of canaries , which can terminate a program upon discovering bounds violations. A downside of run-time checks such as bounds checking is that they impose considerable performance overhead. Memory protection , such as using a non-executable stack and/or heap, can also be seen as an additional run-time check. This is used by many modern operating systems. The general idea is to separate sensitive code and data from the rest of the application by analyzing the source code. Once this is done, the different data is separated and placed in different modules. Assuming that each module has total control over the sensitive information it contains, it is possible to specify when and how information should leave the module. An example is a cryptographic module that can prevent keys from ever leaving the module unencrypted. Certifying compilation is the idea of producing a certificate during compilation of source code, using the information from the high-level programming language semantics. This certificate should be enclosed with the compiled code in order to provide a form of proof to the consumer that the source code was compiled according to a certain set of rules. The certificate can be produced in different ways, e.g. through Proof-carrying code (PCC) or Typed assembly language (TAL). The main aspects of PCC can be summarized in the following steps: [ 5 ] An example of a certifying compiler is the Touchstone compiler , which provides a PCC formal proof of type and memory safety for programs implemented in a type-safe subset of C. TAL is applicable to programming languages that make use of a type system . After compilation, the object code will carry a type annotation that can be checked by an ordinary type checker. The annotation produced here is in many ways similar to the annotations provided by PCC, with some limitations. 
However, TAL can handle any security policy that may be expressed by the restrictions of the type system, which can include memory safety and control flow, among others.
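The canary technique mentioned above can be modeled in a few lines (a deliberately simplified sketch: real canaries are placed into stack frames by the compiler, e.g. by GCC's stack protector, not implemented in application code, and the `GuardedBuffer` class here is a hypothetical illustration):

```python
# Sketch of the canary idea: place a known value just past a buffer and
# verify it after every write; a clobbered canary means an out-of-bounds
# write occurred, and execution is aborted instead of continuing in an
# undefined state.
CANARY = 0xDEADBEEF

class GuardedBuffer:
    def __init__(self, size):
        self.cells = [0] * size + [CANARY]   # canary sits after the buffer

    def write(self, index, value):
        # Deliberately unchecked write, modeling unsafe low-level code.
        self.cells[index] = value
        if self.cells[-1] != CANARY:
            raise RuntimeError("canary clobbered: buffer overflow detected")

buf = GuardedBuffer(4)
buf.write(2, 42)        # in bounds: fine
try:
    buf.write(4, 99)    # one past the end: overwrites the canary
except RuntimeError as e:
    print(e)            # canary clobbered: buffer overflow detected
```

The performance cost discussed above is visible even here: every write pays for an extra comparison.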
https://en.wikipedia.org/wiki/Language-based_security
Language engineering involves the creation of natural language processing systems whose cost and outputs are measurable and predictable. It is a field distinct from natural language processing and computational linguistics . [ 1 ] A recent trend in language engineering is the use of Semantic Web technologies for the creation, archiving, processing, and retrieval of machine-processable language data. [ 2 ]
https://en.wikipedia.org/wiki/Language_engineering
Language equations are mathematical statements that resemble numerical equations , but the variables assume values of formal languages rather than numbers. Instead of arithmetic operations in numerical equations, the variables are joined by language operations. Among the most common operations on two languages A and B are the set union A ∪ B , the set intersection A ∩ B , and the concatenation A ⋅ B . Finally, as an operation taking a single operand , the set A * denotes the Kleene star of the language A . Therefore, language equations can be used to represent formal grammars , since the languages generated by the grammar must be the solution of a system of language equations. Ginsburg and Rice [ 1 ] gave an alternative definition of context-free grammars by language equations. To every context-free grammar G = ( V , Σ , R , S ) {\displaystyle G=(V,\Sigma ,R,S)} is associated a system of equations in variables V {\displaystyle V} . Each variable X ∈ V {\displaystyle X\in V} is an unknown language over Σ {\displaystyle \Sigma } and is defined by the equation X = α 1 ∪ … ∪ α m {\displaystyle X=\alpha _{1}\cup \ldots \cup \alpha _{m}} where X → α 1 {\displaystyle X\to \alpha _{1}} , ..., X → α m {\displaystyle X\to \alpha _{m}} are all productions for X {\displaystyle X} . Ginsburg and Rice used a fixed-point iteration argument to show that a solution always exists, and proved that the assignment X = L G ( X ) {\displaystyle X=L_{G}(X)} is the least solution to this system, i.e. any other solution must be a superset of this one. For example, the grammar S → a S c ∣ b ∣ S {\displaystyle S\to aSc\mid b\mid S} corresponds to the equation system S = ( { a } ⋅ S ⋅ { c } ) ∪ { b } ∪ S {\displaystyle S=(\{a\}\cdot S\cdot \{c\})\cup \{b\}\cup S} , whose solutions are exactly the languages S {\displaystyle S} satisfying ( { a } ⋅ S ⋅ { c } ) ∪ { b } ⊆ S {\displaystyle (\{a\}\cdot S\cdot \{c\})\cup \{b\}\subseteq S} ; the least of these is { a n b c n ∣ n ∈ N } {\displaystyle \{a^{n}bc^{n}\mid n\in {\mathcal {N}}\}} . 
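The fixed-point iteration used by Ginsburg and Rice can be demonstrated concretely on the equation S = {a}·S·{c} ∪ {b} (dropping the redundant ∪ S term). Truncating to words of bounded length keeps every iterate finite; this bounded computation is an illustrative sketch, not part of the original construction:

```python
# Fixed-point iteration for the least solution of S = {a}.S.{c} ∪ {b},
# truncated to words of length <= max_len so each iterate is a finite set.
def step(S, max_len):
    new = {"a" + w + "c" for w in S} | {"b"}
    return {w for w in S | new if len(w) <= max_len}

def least_solution(max_len):
    S = set()                 # start from the empty language
    while True:
        nxt = step(S, max_len)
        if nxt == S:          # fixed point reached
            return S
        S = nxt

print(sorted(least_solution(7), key=len))  # ['b', 'abc', 'aabcc', 'aaabccc']
```

Each iteration adds the next word a^n b c^n, converging (up to the length bound) to the least solution named above.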
Language equations with added intersection analogously correspond to conjunctive grammars . [ citation needed ] Brzozowski and Leiss [ 2 ] studied left language equations where every concatenation is with a singleton constant language on the left, e.g. { a } ⋅ X {\displaystyle \{a\}\cdot X} with variable X {\displaystyle X} , but not X ⋅ Y {\displaystyle X\cdot Y} nor X ⋅ { a } {\displaystyle X\cdot \{a\}} . Each equation is of the form X i = F ( X 1 , . . . , X k ) {\displaystyle X_{i}=F(X_{1},...,X_{k})} with one variable on the left-hand side. Every nondeterministic finite automaton has such a corresponding system of equations using left-concatenation and union, see Fig. 1. If the intersection operation is allowed, equations correspond to alternating finite automata . Baader and Narendran [ 3 ] studied equations F ( X 1 , … , X k ) = G ( X 1 , … , X k ) {\displaystyle F(X_{1},\ldots ,X_{k})=G(X_{1},\ldots ,X_{k})} using left-concatenation and union and proved that their satisfiability problem is EXPTIME-complete . Conway [ 4 ] proposed the following problem: given a constant finite language L {\displaystyle L} , is the greatest solution of the equation L X = X L {\displaystyle LX=XL} always regular? This problem was studied by Karhumäki and Petre [ 5 ] [ 6 ] who gave an affirmative answer in a special case. A strongly negative answer to Conway's problem was given by Kunc [ 7 ] who constructed a finite language L {\displaystyle L} such that the greatest solution of this equation is not recursively enumerable. Kunc [ 8 ] also proved that the greatest solution of the inequality L X ⊆ X L {\displaystyle LX\subseteq XL} is always regular. Language equations with concatenation and Boolean operations were first studied by Parikh , Chandra , Halpern and Meyer [ 9 ] who proved that the satisfiability problem for a given equation is undecidable, and that if a system of language equations has a unique solution, then that solution is recursive. 
Later, Okhotin [ 10 ] proved that the unsatisfiability problem is RE-complete and that every recursive language is a unique solution of some equation. For a one-letter alphabet, Leiss [ 11 ] discovered the first language equation with a nonregular solution, using complementation and concatenation operations. Later, Jeż [ 12 ] showed that non-regular unary languages can be defined by language equations with union, intersection and concatenation, equivalent to conjunctive grammars . By this method Jeż and Okhotin [ 13 ] proved that every recursive unary language is a unique solution of some equation.
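The correspondence between finite automata and left language equations studied by Brzozowski and Leiss can be sketched on a two-state NFA (the automaton and the bounded-iteration solver below are illustrative assumptions, not taken from their paper): with transitions q0 -a-> {q0, q1} and q1 -b-> {q1}, and q1 accepting, letting X_i be the set of words leading from state i to acceptance gives the left equations X0 = {a}·X0 ∪ {a}·X1 and X1 = {b}·X1 ∪ {ε}, whose least solution assigns to X0 the accepted language aa*b*.

```python
# Bounded fixed-point iteration for the left language equations
#   X0 = {a}.X0 ∪ {a}.X1        X1 = {b}.X1 ∪ {ε}
# of the NFA above; the least solution gives X0 = aa*b*.
def solve_nfa_equations(max_len):
    X0, X1 = set(), set()
    while True:
        n0 = {w for w in X0 | {"a" + u for u in X0 | X1} if len(w) <= max_len}
        n1 = {w for w in X1 | {"b" + u for u in X1} | {""} if len(w) <= max_len}
        if (n0, n1) == (X0, X1):      # fixed point reached
            return X0
        X0, X1 = n0, n1

print(sorted(solve_nfa_equations(3)))   # all words of aa*b* up to length 3
```

Every right-hand side concatenates only singleton constants on the left, exactly the restricted shape Brzozowski and Leiss considered.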
https://en.wikipedia.org/wiki/Language_equation
A language workbench [ 1 ] [ 2 ] is a tool or set of tools that enables software development in the language-oriented programming [ 2 ] software development paradigm. A language workbench will typically include tools to support the definition, reuse and composition of domain-specific languages together with their integrated development environment . Language workbenches were introduced and popularized by Martin Fowler in 2005. Language workbenches usually support: [ 1 ]
https://en.wikipedia.org/wiki/Language_workbench
Slightly over half of the homepages of the most visited websites on the World Wide Web are in English, with varying amounts of information available in many other languages. [ 1 ] [ 2 ] Other top languages are Chinese, Spanish, Russian, Persian, French, German and Japanese. [ 1 ] [ 3 ] Of the more than 7,000 existing languages, only a few hundred are recognized as being in use for Web pages on the World Wide Web. [ 4 ] There is debate over the most-used languages on the Internet. A 2009 UNESCO report monitoring the languages of websites for 12 years, from 1996 to 2008, found a steady year-on-year decline in the percentage of webpages in English, from 75 percent in 1998 to 45 percent in 2005. [ 2 ] The authors found that English remained at 45 percent of content for 2005 to the end of the study but believe this was due to the bias of search engines indexing more English-language content rather than a true stabilization of the percentage of content in English on the World Wide Web. [ 2 ] The number of non-English web pages is rapidly expanding. The use of English online increased by around 281 percent from 2001 to 2011, a lower rate of growth than that of Spanish (743 percent), Chinese (1,277 percent), Russian (1,826 percent) or Arabic (2,501 percent) over the same period. [ 5 ] According to a 2000 study, the international auxiliary language Esperanto ranked 40 out of all languages in search engine queries, also ranking 27 out of all languages that rely on the Latin script . [ 6 ] W3Techs estimated percentages of the top 10 million websites on the World Wide Web using various content languages as of 18 March 2025: [ 1 ] All other languages are used in less than 0.1% of websites. Even including all languages, percentages may not sum to 100% because some websites contain multiple content languages. 
The figures from the W3Techs study are based on the one million most visited websites (i.e., approximately 0.27 percent of all websites according to December 2011 figures) as ranked by Alexa.com , and language is identified using only the home page of the sites in most cases (e.g., all of Wikipedia is based on the language detection of http://www.wikipedia.org ). [ 7 ] As a consequence, the figures show a significantly higher percentage for many languages (especially for English) as compared to the figures for all websites. [ 8 ] For all websites, estimates are between 20 and 50% for English. [ 9 ] [ 2 ] [ 10 ] [ 11 ] Of the top 250 YouTube channels, 66% of the content is in English, 15% in Spanish, 7% in Portuguese, 5% in Hindi, and 2% in Korean, while other languages make up 5%, [ 12 ] although other sources point to different percentages. [ 13 ] YouTube is available in over 80 languages with more than a hundred different local versions. [ 14 ] Of those popular YouTube channels that posted a video in the first week of 2019, just over half contained some content in a language other than English. [ 15 ] InternetWorldStats estimates of the number of Internet users by language as of March 31, 2020: [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] The Wikimedia Analytics API provides the most recent data on page views and page edits, among other statistics, for all language editions of Wikipedia.
https://en.wikipedia.org/wiki/Languages_used_on_the_Internet
The Laniakea Supercluster ( / ˌ l ɑː n i . ə ˈ k eɪ . ə / ; Hawaiian for "open skies" or "immense heaven") [ 2 ] or the Local Supercluster ( LSC or LS ) is the galaxy supercluster that is home to the Milky Way and approximately 100,000 other nearby galaxies. It was defined in September 2014, when a group of astronomers including R. Brent Tully of the University of Hawaiʻi , Hélène Courtois of the University of Lyon , Yehuda Hoffman of the Hebrew University of Jerusalem , and Daniel Pomarède of CEA Université Paris-Saclay published a new way of defining superclusters according to the relative velocities of galaxies. [ 3 ] [ 4 ] The new definition of the local supercluster subsumes the previously defined Virgo Supercluster and Hydra–Centaurus Supercluster as appendages; the former was the previously defined local supercluster. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Follow-up studies suggest that the Laniakea Supercluster is not gravitationally bound. It will disperse rather than continue to maintain itself as an overdensity relative to surrounding areas. [ 10 ] The name laniākea ( [ˈlɐnijaːˈkɛjə] ) means 'immense heaven' in Hawaiian , from lani ' heaven ' and ākea ' spacious, immeasurable ' . The name was suggested by Nawaʻa Napoleon , an associate professor of Hawaiian language at Kapiolani Community College . [ 11 ] The name honors Polynesian navigators , who used knowledge of the sky to navigate the Pacific Ocean . [ 12 ] The Laniakea Supercluster encompasses approximately 100,000 galaxies stretched out over 160 Mpc (520 million ly ). It has an approximate mass of 10¹⁷ solar masses, or 100,000 times that of our galaxy, which is almost the same as that of the Horologium Supercluster . [ 3 ] It consists of four subparts, which were known previously as separate superclusters: The most massive galaxy clusters of the Laniakea Supercluster are Virgo , Hydra , Centaurus , Abell 3565 , Abell 3574 , Abell 3521 , Fornax , Eridanus , and Norma . 
The entire supercluster consists of approximately 300 to 500 known galaxy clusters and groups. The real number may be much larger because some of these are traversing the Zone of Avoidance , an area of the sky that is partially obscured by gas and dust from the Milky Way galaxy, making them essentially undetectable. Superclusters are some of the universe 's largest structures and have boundaries that are difficult to define, especially from the inside. Within a given supercluster, most galaxy motions will be directed inward, toward the center of mass . This gravitational focal point, in the case of Laniakea, is called the Great Attractor , and influences the motions of the Local Group of galaxies, where the Milky Way galaxy resides, and all others throughout the supercluster. Unlike its constituent clusters, Laniakea is not gravitationally bound and is projected to be torn apart by dark energy . [ 7 ] Although the confirmation of the existence of the Laniakea Supercluster emerged in 2014, [ 3 ] early studies in the 1980s already suggested that several of the superclusters then known might be connected. For example, South African astronomer Tony Fairall stated in 1988 that redshifts suggested that the Virgo and Hydra–Centaurus superclusters may be connected. [ 14 ] The neighboring superclusters to the Laniakea Supercluster are the Shapley Supercluster , Hercules Supercluster , Coma Supercluster , and Perseus–Pisces Supercluster . The edges of the superclusters and Laniakea were not clearly known at the time of Laniakea's definition. [ 6 ] Since then, the study of the edges of the supercluster and of structures beyond them has substantially improved. [ 15 ] [ 16 ] Laniakea is itself a constituent part of the Pisces–Cetus Supercluster Complex , a galaxy filament . 
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Local Hole → Observable universe → Universe Each arrow ( → ) may be read as "within" or "part of".
https://en.wikipedia.org/wiki/Laniakea_Supercluster
Lanosterol synthase ( EC 5.4.99.7 ) is an oxidosqualene cyclase (OSC) enzyme that converts ( S )-2,3-oxidosqualene to a protosterol cation and finally to lanosterol . [ 5 ] Lanosterol is a key four-ringed intermediate in cholesterol biosynthesis. [ 6 ] [ 7 ] In humans, lanosterol synthase is encoded by the LSS gene . [ 8 ] [ 9 ] In eukaryotes , lanosterol synthase is an integral monotopic protein associated with the cytosolic side of the endoplasmic reticulum . [ 10 ] Some evidence suggests that the enzyme is a soluble, non- membrane bound protein in the few prokaryotes that produce it. [ 11 ] Due to the enzyme's role in cholesterol biosynthesis, there is interest in lanosterol synthase inhibitors as potential cholesterol-reducing drugs, to complement existing statins . [ 12 ] Though some data on the mechanism has been obtained by the use of suicide inhibitors , mutagenesis studies, and homology modeling , it is still not fully understood how the enzyme catalyzes the formation of lanosterol . [ 12 ] Before the acquisition of the protein's X-ray crystal structure , site-directed mutagenesis was used to determine residues key to the enzyme's catalytic activity. It was determined that an aspartic acid residue (D455) and two histidine residues (H146 and H234) were essential to enzyme function. Corey et al. hypothesized that the aspartic acid acts by protonating the substrate's epoxide ring, thus increasing its susceptibility to intramolecular attack by the nearest double bond , with H146 possibly intensifying the proton donor ability of the aspartic acid through hydrogen bonding . 
[ 13 ] After acquisition of the X-ray crystal structure of the enzyme, the role of D455 as a proton donor to the substrate's epoxide was confirmed, though it was found that D455 is more likely stabilized by hydrogen bonding from two cysteine residues (C456 and C533) than from the earlier suggested histidine. [ 12 ] Epoxide protonation activates the substrate, setting off a cascade of ring forming reactions. Four rings in total (A through D) are formed, producing the cholesterol backbone. [ 12 ] Though the idea of a concerted formation of all four rings had been entertained in the past, kinetic studies with ( S )-2,3-oxidosqualene analogs showed that product formation is achieved through discrete carbocation intermediates (see Figure 1 ). Isolation of monocyclic and bicyclic products from lanosterol synthase mutants has further weakened the hypothesis of a concerted mechanism. [ 14 ] [ 15 ] Evidence suggests that epoxide ring opening and A ring formation is concerted, though. [ 16 ] Lanosterol synthase is a two-domain monomeric protein [ 10 ] composed of two connected (α/α) barrel domains and three smaller β-structures . The enzyme active site is in the center of the protein, closed off by a constricted channel. Passage of the ( S )-2,3-epoxysqualene substrate through the channel requires a change in protein conformation . In eukaryotes , a hydrophobic surface (6% of the total enzyme surface area) is the ER membrane-binding region (see Figure 2 ). [ 12 ] The enzyme contains five fingerprint regions containing Gln - Trp motifs, which are also present in the highly analogous bacterial enzyme squalene-hopene cyclase . [ 12 ] Residues of these fingerprint regions contain stacked sidechains which are thought to contribute to enzyme stability during the highly exergonic cyclization reactions catalyzed by the enzyme. [ 17 ] Lanosterol synthase catalyzes the conversion of ( S )-2,3-epoxysqualene to lanosterol , a key four-ringed intermediate in cholesterol biosynthesis. 
[ 6 ] [ 7 ] Thus, it in turn provides the precursor to estrogens , androgens , progestogens , glucocorticoids , mineralocorticoids , and neurosteroids . In eukaryotes the enzyme is bound to the cytosolic side of the endoplasmic reticulum membrane. [ 10 ] While cholesterol synthesis is mostly associated with eukaryotes , a few prokaryotes have been found to express lanosterol synthase; it has been found as a soluble protein in Methylococcus capsulatus . [ 11 ] Lanosterol synthase also catalyzes the cyclization of 2,3;22,23-diepoxysqualene to 24( S ),25-epoxylanosterol, [ 18 ] which is later converted to 24( S ),25-epoxycholesterol. [ 19 ] Since the enzyme's affinity for this second substrate is greater than for the monoepoxy ( S )-2,3-epoxysqualene, under partial inhibition conversion of 2,3;22,23-diepoxysqualene to 24( S ),25-epoxylanosterol is favored over lanosterol synthesis. [ 20 ] This has relevance for disease prevention and treatment. Interest has grown in lanosterol synthase inhibitors as drugs to lower blood cholesterol and treat atherosclerosis . The widely popular statin drugs currently used to lower LDL (low-density lipoprotein) cholesterol function by inhibiting HMG-CoA reductase activity. [ 6 ] Because this enzyme catalyzes the formation of precursors far upstream of ( S )-2,3-epoxysqualene and cholesterol, statins may negatively influence amounts of intermediates required for other biosynthetic pathways (e.g. synthesis of isoprenoids , coenzyme Q ). Thus, lanosterol synthase, which is more closely tied to cholesterol biosynthesis than HMG-CoA reductase , is an attractive drug target. [ 21 ] Lanosterol synthase inhibitors are thought to lower LDL and VLDL cholesterol by a dual control mechanism. Studies in which lanosterol synthase is partially inhibited have shown both a direct decrease in lanosterol formation and a decrease in HMG-CoA reductase activity. 
The oxysterol 24( S ),25-epoxylanosterol, which is preferentially formed over lanosterol during partial lanosterol synthase inhibition, is believed to be responsible for this inhibition of HMG-CoA reductase activity. [ 22 ] It is believed that oxidosqualene cyclases (OSCs, the class to which lanosterol cyclase belongs) evolved from bacterial squalene-hopene cyclase (SHC), which is involved with the formation of hopanoids . Phylogenetic trees constructed from the amino acid sequences of OSCs in diverse organisms suggest a single common ancestor, and that the synthesis pathway evolved only once. [ 23 ] The discovery of steranes including cholestane in 2.7-billion year-old shales from Pilbara Craton , Australia , suggests that eukaryotes with OSCs and complex steroid machinery were present early in earth's history. [ 24 ]
https://en.wikipedia.org/wiki/Lanosterol_synthase
Lansweeper is an IT discovery and inventory platform that delivers insights into the status of users, devices, and software within IT environments. The platform inventories connected IT devices, enabling organizations to centrally manage their IT infrastructure . Lansweeper's automated processes identify and compile a list of connected devices, including computers , routers , servers , and printers . It furnishes device-specific information covering installed software, applied updates and patches , and user details. Lansweeper was founded in Belgium in 2004. In October 2020, Lansweeper announced the acquisition of Fing, a network scanning and device recognition platform. [ 3 ] In June 2021, Lansweeper received a €130 million investment from Insight Partners. [ 4 ] The core function of Lansweeper is a discovery phase that sweeps a local area network (LAN) and maintains an inventory of the hardware assets and the software deployed on those assets. [ 5 ] Reports from the inventory enable complete hardware and software reporting on the devices and can be used to identify problems. [ 6 ] Lansweeper can collect information on all Windows , Linux and macOS devices as well as IP -addressable network appliances. [ 5 ] The software incorporates an integrated ticket-based Help Desk module used to assist issues to be captured and tracked through to completion. [ 7 ] There is also a software module that allows Lansweeper to orchestrate software updates on Windows computers. [ 7 ] The Lansweeper central inventory database must be located on either an SQL LocalDB or SQL Server database on a Microsoft Windows machine. Lansweeper claims that while a minimum default configuration can be supported by placing all its components on a single server, the application has the capability to scale up to hundreds of thousands of devices. [ 8 ] While Lansweeper can be set up agentless, it may be recommended to use agents for more complex configurations. 
[ 9 ] Lansweeper has a freeware version of the product, but it is limited in the number of devices available and functionality provided unless appropriate commercial licenses are purchased. [ 10 ] [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Lansweeper
The lanthanide ( / ˈ l æ n θ ə n aɪ d / ) or lanthanoid ( / ˈ l æ n θ ə n ɔɪ d / ) series of chemical elements [ a ] comprises at least the 14 metallic chemical elements with atomic numbers 57–70, from lanthanum through ytterbium . In the periodic table, they fill the 4f orbitals. [ 2 ] [ 3 ] [ 4 ] Lutetium (element 71) is also sometimes considered a lanthanide, despite being a d-block element and a transition metal. The informal chemical symbol Ln is used in general discussions of lanthanide chemistry to refer to any lanthanide. [ 5 ] All but one of the lanthanides are f-block elements, corresponding to the filling of the 4f electron shell . Lutetium is a d-block element (thus also a transition metal ), [ 6 ] [ 7 ] and on this basis its inclusion has been questioned; however, like its congeners scandium and yttrium in group 3, it behaves similarly to the other 14. The term rare-earth element or rare-earth metal is often used to include the stable group 3 elements Sc, Y, and Lu in addition to the 4f elements. [ 8 ] All lanthanide elements form trivalent cations, Ln 3+ , whose chemistry is largely determined by the ionic radius , which decreases steadily from lanthanum (La) to lutetium (Lu). These elements are called lanthanides because the elements in the series are chemically similar to lanthanum . Because "lanthanide" means "like lanthanum", it has been argued that lanthanum cannot logically be a lanthanide, but the International Union of Pure and Applied Chemistry (IUPAC) acknowledges its inclusion based on common usage. [ 1 ] In presentations of the periodic table , the f-block elements are customarily shown as two additional rows below the main body of the table. [ 2 ] This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the 4f and 5f series in their proper places, as parts of the table's sixth and seventh rows (periods), respectively. The 1985 IUPAC "Red Book" (p. 
45) recommends using lanthanoid instead of lanthanide , as the ending -ide normally indicates a negative ion . However, owing to widespread current use, lanthanide is still allowed. The term "lanthanide" was introduced by Victor Goldschmidt in 1925. [ 9 ] [ 10 ] Despite their abundance, the technical term "lanthanides" is interpreted to reflect a sense of elusiveness on the part of these elements, as it comes from the Greek λανθανειν ( lanthanein ), "to lie hidden". [ 11 ] Rather than referring to their natural abundance, the word reflects their property of "hiding" behind each other in minerals. The term derives from lanthanum , first discovered in 1838, at that time a so-called new rare-earth element "lying hidden" or "escaping notice" in a cerium mineral, [ 12 ] and it is an irony that lanthanum was later identified as the first in an entire series of chemically similar elements and gave its name to the whole series. Together with the stable elements of group 3, scandium , yttrium , and lutetium , the trivial name " rare earths " is sometimes used to describe the set of lanthanides. The "earth" in the name "rare earths" arises from the minerals from which they were isolated, which were uncommon oxide-type minerals. However, these elements are neither rare in abundance nor "earths" (an obsolete term for water-insoluble strongly basic oxides of electropositive metals incapable of being smelted into metal using late 18th century technology). Group 2 is known as the alkaline earth elements for much the same reason. The "rare" in the name "rare earths" has more to do with the difficulty of separating the individual elements than with the scarcity of any of them. Element 66, dysprosium, was similarly named, from the Greek dysprositos , "hard to get at". The elements 57 (La) to 71 (Lu) are very similar chemically to one another and frequently occur together in nature. 
Often a mixture of three to all 15 of the lanthanides (along with yttrium as a 16th) occurs in minerals, such as monazite and samarskite (for which samarium is named). These minerals can also contain group 3 elements, and actinides such as uranium and thorium. [ 13 ] A majority of the rare earths were discovered at the same mine in Ytterby , Sweden, and four of them (yttrium, ytterbium, erbium, terbium) are named after the village; a fifth (holmium) is named after Stockholm, scandium after Scandinavia , thulium after the old name Thule , and the immediately following group 4 element (number 72), hafnium , for the Latin name of the city of Copenhagen. [ 13 ] The properties of the lanthanides arise from the order in which the electron shells of these elements are filled—the outermost (6s) has the same configuration for all of them, and a deeper (4f) shell is progressively filled with electrons as the atomic number increases from 57 towards 71. [ 13 ] For many years, mixtures of more than one rare earth were considered to be single elements; neodymium and praseodymium, for example, were long thought to be the single element didymium. [ 14 ] Very small differences in solubility are exploited in solvent-extraction and ion-exchange purification methods for these elements, which require repeated application to obtain a purified metal. The diverse applications of refined metals and their compounds can be attributed to the subtle and pronounced variations in their electronic, electrical, optical, and magnetic properties. [ 13 ] By way of example of the term meaning "hidden" rather than "scarce", cerium is almost as abundant as copper; [ 13 ] on the other hand promethium , with no stable or long-lived isotopes, is truly rare. 
[ 15 ] Gschneidner and Daane (1988) attribute the trend of increasing melting point across the series (lanthanum, 920 °C; lutetium, 1622 °C) to the extent of hybridization of the 6s, 5d, and 4f orbitals. The hybridization is believed to be at its greatest for cerium, which has the lowest melting point of all, 795 °C. [ 16 ] The lanthanide metals are soft; their hardness increases across the series. [ 1 ] Europium stands out, as it has the lowest density in the series at 5.24 g/cm 3 and the largest metallic radius in the series at 208.4 pm. It can be compared to barium, which has a metallic radius of 222 pm. It is believed that the metal contains the larger Eu 2+ ion and that there are only two electrons in the conduction band. Ytterbium also has a large metallic radius, and a similar explanation is suggested. [ 1 ] The resistivities of the lanthanide metals are relatively high, ranging from 29 to 134 μΩ·cm. These values can be compared to a good conductor such as aluminium, which has a resistivity of 2.655 μΩ·cm. With the exceptions of La, Yb, and Lu (which have no unpaired f electrons), the lanthanides are strongly paramagnetic, and this is reflected in their magnetic susceptibilities. Gadolinium becomes ferromagnetic below 16 °C (its Curie point ). The other heavier lanthanides – terbium, dysprosium, holmium, erbium, thulium, and ytterbium – become ferromagnetic at much lower temperatures. [ 17 ] f → f transitions are symmetry forbidden (or Laporte-forbidden), which is also true of transition metals . However, transition metals are able to use vibronic coupling to break this rule. 
The valence orbitals in lanthanides are almost entirely non-bonding, and as such little effective vibronic coupling takes place; hence the spectra from f → f transitions are much weaker and narrower than those from d → d transitions. In general this makes the colors of lanthanide complexes far fainter than those of transition metal complexes. Viewing the lanthanides from left to right in the periodic table, the seven 4f atomic orbitals become progressively more filled (see above and Periodic table § Electron configuration table ). The electronic configuration of most neutral gas-phase lanthanide atoms is [Xe]6s 2 4f n , where n is 56 less than the atomic number Z . Exceptions are La, Ce, Gd, and Lu, which have 4f n −1 5d 1 (though even then 4f n is a low-lying excited state for La, Ce, and Gd; for Lu, the 4f shell is already full, and the fifteenth electron has no choice but to enter 5d). With the exception of lutetium, the 4f orbitals are chemically active in all lanthanides and produce profound differences between lanthanide chemistry and transition metal chemistry. The 4f orbitals penetrate the [Xe] core and are isolated, and thus they do not participate much in bonding. This explains why crystal field effects are small and why the lanthanides do not form π bonds. [ 18 ] As there are seven 4f orbitals, the number of unpaired electrons can be as high as 7, which gives rise to the large magnetic moments observed for lanthanide compounds. Measuring the magnetic moment can be used to investigate the 4f electron configuration, and this is a useful tool in providing an insight into the chemical bonding. [ 22 ] The lanthanide contraction , i.e. the reduction in size of the Ln 3+ ion from La 3+ (103 pm) to Lu 3+ (86.1 pm), is often explained by the poor shielding of the 5s and 5p electrons by the 4f electrons. 
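The filling rule just described ([Xe]6s 2 4f n with n = Z − 56, with La, Ce, Gd, and Lu instead taking 4f n −1 5d 1 ) can be sketched in a few lines of Python. The element symbols and the list of exceptions follow the text above; this is only an illustrative sketch of the rule, not a general-purpose configuration calculator.

```python
# Ground-state gas-phase configurations of the neutral lanthanides,
# following the rule in the text: [Xe]6s2 4f^n with n = Z - 56,
# except La, Ce, Gd, and Lu, which take [Xe]6s2 4f^(n-1) 5d1.
LANTHANIDES = {
    57: "La", 58: "Ce", 59: "Pr", 60: "Nd", 61: "Pm", 62: "Sm", 63: "Eu",
    64: "Gd", 65: "Tb", 66: "Dy", 67: "Ho", 68: "Er", 69: "Tm", 70: "Yb",
    71: "Lu",
}
EXCEPTIONS = {"La", "Ce", "Gd", "Lu"}  # ground state is 4f^(n-1) 5d1

def configuration(z: int) -> str:
    n = z - 56
    if LANTHANIDES[z] in EXCEPTIONS:
        f_part = f"4f{n - 1} " if n > 1 else ""  # La has no 4f electrons
        return f"[Xe] {f_part}5d1 6s2"
    return f"[Xe] 4f{n} 6s2"

for z in sorted(LANTHANIDES):
    print(LANTHANIDES[z], configuration(z))
```

Running the loop reproduces, for example, [Xe] 4f7 5d1 6s2 for gadolinium, reflecting the extra stability of the half-filled 4f shell mentioned above.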
[ 18 ] The chemistry of the lanthanides is dominated by the +3 oxidation state, and in Ln III compounds the 6s electrons and (usually) one 4f electron are lost and the ions have the configuration [Xe]4f n −1 . [ 23 ] All the lanthanide elements exhibit the oxidation state +3. In addition, Ce 3+ can lose its single f electron to form Ce 4+ with the stable electronic configuration of xenon. Also, Eu 3+ can gain an electron to form Eu 2+ with the f 7 configuration that has the extra stability of a half-filled shell. Other than Ce(IV) and Eu(II), none of the lanthanides are stable in oxidation states other than +3 in aqueous solution. In terms of reduction potentials, the Ln 0/3+ couples are nearly the same for all lanthanides, ranging from −1.99 V (for Eu) to −2.35 V (for Pr). Thus these metals are highly reducing, with reducing power similar to alkaline earth metals such as Mg (−2.36 V). [ 1 ] The ionization energies for the lanthanides can be compared with aluminium. In aluminium the sum of the first three ionization energies is 5139 kJ·mol −1 , whereas the lanthanides fall in the range 3455 – 4186 kJ·mol −1 . This correlates with the highly reactive nature of the lanthanides. The sum of the first two ionization energies for europium, 1632 kJ·mol −1 , can be compared with that of barium, 1468.1 kJ·mol −1 , and europium's third ionization energy is the highest of the lanthanides. The sum of the first two ionization energies for ytterbium is the second lowest in the series and its third ionization energy is the second highest. The high third ionization energies for Eu and Yb correlate with the half-filled 4f 7 and completely filled 4f 14 configurations of the 4f subshell, and with the stability afforded by such configurations due to exchange energy. [ 18 ] Europium and ytterbium form salt-like compounds with Eu 2+ and Yb 2+ , for example the salt-like dihydrides. 
[ 24 ] Both europium and ytterbium dissolve in liquid ammonia forming solutions of Ln 2+ (NH 3 ) x , again demonstrating their similarities to the alkaline earth metals. [ 1 ] The relative ease with which the fourth electron can be removed in cerium and (to a lesser extent) praseodymium explains why Ce(IV) and Pr(IV) compounds can be formed; for example, CeO 2 is formed rather than Ce 2 O 3 when cerium reacts with oxygen. Tb also has a well-known +4 state, as removing the fourth electron in this case produces a half-filled 4f 7 configuration. The additional stable valences for Ce and Eu mean that their abundances in rocks sometimes vary significantly relative to the other rare earth elements: see cerium anomaly and europium anomaly . The similarity in ionic radius between adjacent lanthanide elements makes it difficult to separate them from each other in naturally occurring ores and other mixtures. Historically, the very laborious processes of cascading and fractional crystallization were used. Because the lanthanide ions have slightly different radii, the lattice energies of their salts and the hydration energies of the ions are slightly different, leading to a small difference in solubility . Salts of the formula Ln(NO 3 ) 3 ·2NH 4 NO 3 ·4H 2 O can be used. Industrially, the elements are separated from each other by solvent extraction . Typically an aqueous solution of nitrates is extracted into kerosene containing tri- n -butylphosphate . The strength of the complexes formed increases as the ionic radius decreases, so solubility in the organic phase increases. Complete separation can be achieved continuously by use of countercurrent exchange methods. The elements can also be separated by ion-exchange chromatography , making use of the fact that the stability constant for formation of EDTA complexes increases from log K ≈ 15.5 for [La(EDTA)] − to log K ≈ 19.8 for [Lu(EDTA)] − . 
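The stability constants quoted above give a feel for why ion-exchange chromatography works at all: the ratio of the two formation constants, K (Lu)/ K (La) = 10^(19.8 − 15.5), is a rough measure of the discrimination available between the two ends of the series. A quick check of the arithmetic (using only the log K values from the text):

```python
# Stability constants from the text: log K ≈ 15.5 for [La(EDTA)]-
# and log K ≈ 19.8 for [Lu(EDTA)]-. Their ratio indicates how much
# more strongly Lu3+ binds EDTA than La3+ does.
log_K_La = 15.5
log_K_Lu = 19.8
separation_factor = 10 ** (log_K_Lu - log_K_La)
print(f"K(Lu)/K(La) ≈ {separation_factor:.1e}")  # ≈ 2.0e+04
```

A factor of roughly 2 × 10 4 between the extremes of the series, spread over 14 adjacent-element steps, is consistent with the need for repeated (chromatographic) equilibration to separate neighbouring lanthanides.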
[ 1 ] [ 25 ] When in the form of coordination complexes , lanthanides exist overwhelmingly in their +3 oxidation state , although particularly stable 4f configurations can also give +4 (Ce, Pr, Tb) or +2 (Sm, Eu, Yb) ions. All of these forms are strongly electropositive and thus lanthanide ions are hard Lewis acids . [ 26 ] The oxidation states are also very stable; with the exceptions of SmI 2 [ 27 ] and cerium(IV) salts , [ 28 ] lanthanides are not used for redox chemistry. 4f electrons have a high probability of being found close to the nucleus and are thus strongly affected as the nuclear charge increases across the series ; this results in a corresponding decrease in ionic radii referred to as the lanthanide contraction . The low probability of the 4f electrons existing at the outer region of the atom or ion permits little effective overlap between the orbitals of a lanthanide ion and any binding ligand . Thus lanthanide complexes typically have little or no covalent character and are not influenced by orbital geometries. The lack of orbital interaction also means that varying the metal typically has little effect on the complex (other than size), especially when compared to transition metals . Complexes are held together by weaker electrostatic forces which are omni-directional and thus the ligands alone dictate the symmetry and coordination of complexes. Steric factors therefore dominate, with coordinative saturation of the metal being balanced against inter-ligand repulsion. This results in a diverse range of coordination geometries , many of which are irregular, [ 29 ] and also manifests itself in the highly fluxional nature of the complexes. As there is no energetic reason to be locked into a single geometry, rapid intramolecular and intermolecular ligand exchange will take place. This typically results in complexes that rapidly fluctuate between all possible configurations. Many of these features make lanthanide complexes effective catalysts . 
Hard Lewis acids are able to polarise bonds upon coordination and thus alter the electrophilicity of compounds, with a classic example being the Luche reduction . The large size of the ions coupled with their labile ionic bonding allows even bulky coordinating species to bind and dissociate rapidly, resulting in very high turnover rates; thus excellent yields can often be achieved with loadings of only a few mol%. [ 30 ] The lack of orbital interactions combined with the lanthanide contraction means that the lanthanides change in size across the series but that their chemistry remains much the same. This allows for easy tuning of the steric environments, and examples exist where this has been used to improve the catalytic activity of the complex [ 31 ] [ 32 ] [ 33 ] and change the nuclearity of metal clusters. [ 34 ] [ 35 ] Despite this, the use of lanthanide coordination complexes as homogeneous catalysts is largely restricted to the laboratory and there are currently few examples of them being used on an industrial scale. [ 36 ] Lanthanides exist in many forms other than coordination complexes and many of these are industrially useful. In particular, lanthanide metal oxides are used as heterogeneous catalysts in various industrial processes. The trivalent lanthanides mostly form ionic salts. The trivalent ions are hard acceptors and form more stable complexes with oxygen-donor ligands than with nitrogen-donor ligands. The larger ions are 9-coordinate in aqueous solution, [Ln(H 2 O) 9 ] 3+ , but the smaller ions are 8-coordinate, [Ln(H 2 O) 8 ] 3+ . There is some evidence that the later lanthanides have more water molecules in the second coordination sphere. [ 37 ] Complexation with monodentate ligands is generally weak because it is difficult to displace water molecules from the first coordination sphere. 
Stronger complexes are formed with chelating ligands because of the chelate effect , such as the tetra-anion derived from 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid ( DOTA ). The most common divalent derivatives of the lanthanides are those of Eu(II), which achieves a favorable f 7 configuration. Divalent halide derivatives are known for all of the lanthanides. They are either conventional salts or are Ln(III) " electride "-like salts. The simple salts include YbI 2 , EuI 2 , and SmI 2 . The electride-like salts, described as Ln 3+ , 2I − , e − , include LaI 2 , CeI 2 and GdI 2 . Many of the iodides form soluble complexes with ethers, e.g. TmI 2 (dimethoxyethane) 3 . [ 38 ] Samarium(II) iodide is a useful reducing agent. Ln(II) complexes can be synthesized by transmetalation reactions. The normal range of oxidation states can be expanded via the use of sterically bulky cyclopentadienyl ligands ; in this way many lanthanides can be isolated as Ln(II) compounds. [ 39 ] Ce(IV), as in ceric ammonium nitrate , is a useful oxidizing agent; Ce(IV) is the exception among the lanthanides owing to the stability of the resulting empty f shell. Otherwise tetravalent lanthanides are rare. However, recently Tb(IV) [ 40 ] [ 41 ] [ 42 ] and Pr(IV) [ 43 ] complexes have been shown to exist. Lanthanide metals react exothermically with hydrogen to form LnH 2 dihydrides. [ 24 ] With the exception of Eu and Yb, which resemble the Ba and Ca hydrides (non-conducting, transparent salt-like compounds), they form black, pyrophoric , conducting compounds [ 48 ] where the metal sub-lattice is face centred cubic and the H atoms occupy tetrahedral sites. [ 24 ] Further hydrogenation produces a trihydride which is non-stoichiometric , non-conducting, and more salt-like. The formation of the trihydride is associated with an increase in volume of 8–10%, and this is linked to greater localization of charge on the hydrogen atoms, which become more anionic (H − hydride anion) in character. 
[ 24 ] The only tetrahalides known are the tetrafluorides of cerium , praseodymium , terbium , neodymium and dysprosium, the last two known only under matrix isolation conditions. [ 1 ] [ 54 ] All of the lanthanides form trihalides with fluorine, chlorine, bromine and iodine. They are all high melting and predominantly ionic in nature. [ 1 ] The fluorides are only slightly soluble in water and are not sensitive to air, and this contrasts with the other halides, which are air sensitive, readily soluble in water and react at high temperature to form oxohalides. [ 55 ] The trihalides were important because the pure metals can be prepared from them. [ 1 ] In the gas phase the trihalides are planar or approximately planar; the lighter lanthanides have a lower proportion of dimers, the heavier lanthanides a higher proportion. The dimers have a similar structure to Al 2 Cl 6 . [ 56 ] Some of the dihalides are conducting while the rest are insulators. The conducting forms can be considered as Ln III electride compounds where the electron is delocalised into a conduction band, Ln 3+ (X − ) 2 (e − ). All of the diiodides have relatively short metal-metal separations. [ 49 ] The CuTi 2 structure of the lanthanum, cerium and praseodymium diiodides, along with HP-NdI 2 , contains 4 4 nets of metal and iodine atoms with short metal-metal bonds (393–386 pm, La–Pr). [ 49 ] These compounds should be considered to be two-dimensional metals (two-dimensional in the same way that graphite is). The salt-like dihalides include those of Eu, Dy, Tm, and Yb. The formation of a relatively stable +2 oxidation state for Eu and Yb is usually explained by the stability (exchange energy) of the half-filled (4f 7 ) and fully filled (4f 14 ) configurations. GdI 2 possesses the layered MoS 2 structure, is ferromagnetic and exhibits colossal magnetoresistance . 
[ 49 ] The sesquihalides Ln 2 X 3 and the Ln 7 I 12 compounds listed in the table contain metal clusters : discrete Ln 6 I 12 clusters in Ln 7 I 12 , and condensed clusters forming chains in the sesquihalides. Scandium forms a similar cluster compound with chlorine, Sc 7 Cl 12 . [ 1 ] Unlike many transition metal clusters, these lanthanide clusters do not have strong metal-metal interactions, owing to the low number of valence electrons involved; instead they are stabilised by the surrounding halogen atoms. [ 49 ] LaI and TmI are the only known monohalides. LaI, prepared from the reaction of LaI 3 and La metal, has a NiAs type structure and can be formulated La 3+ (I − )(e − ) 2 . [ 52 ] TmI is a true Tm(I) compound; however, it has not been isolated in a pure state. [ 53 ] All of the lanthanides form sesquioxides, Ln 2 O 3 . The lighter/larger lanthanides adopt a hexagonal 7-coordinate structure while the heavier/smaller ones adopt a cubic 6-coordinate "C-M 2 O 3 " structure. [ 50 ] All of the sesquioxides are basic, and absorb water and carbon dioxide from air to form carbonates, hydroxides and hydroxycarbonates. [ 57 ] They dissolve in acids to form salts. [ 18 ] Cerium forms a stoichiometric dioxide, CeO 2 , where cerium has an oxidation state of +4. CeO 2 is basic and dissolves with difficulty in acid to form Ce 4+ solutions, from which Ce IV salts can be isolated, for example the hydrated nitrate Ce(NO 3 ) 4 ·5H 2 O. CeO 2 is used as an oxidation catalyst in catalytic converters. [ 18 ] Praseodymium and terbium form non-stoichiometric oxides containing Ln IV , [ 18 ] although more extreme reaction conditions can produce stoichiometric (or near stoichiometric) PrO 2 and TbO 2 . [ 1 ] Europium and ytterbium form salt-like monoxides, EuO and YbO, which have a rock salt structure. [ 18 ] EuO is ferromagnetic at low temperatures, [ 1 ] and is a semiconductor with possible applications in spintronics . 
[ 58 ] A mixed Eu II /Eu III oxide Eu 3 O 4 can be produced by reducing Eu 2 O 3 in a stream of hydrogen. [ 57 ] Neodymium and samarium also form monoxides, but these are shiny conducting solids, [ 1 ] although the existence of samarium monoxide is considered dubious. [ 57 ] All of the lanthanides form hydroxides, Ln(OH) 3 . With the exception of lutetium hydroxide, which has a cubic structure, they have the hexagonal UCl 3 structure. [ 57 ] The hydroxides can be precipitated from solutions of Ln III . [ 18 ] They can also be formed by the reaction of the sesquioxide, Ln 2 O 3 , with water, but although this reaction is thermodynamically favorable it is kinetically slow for the heavier members of the series. [ 57 ] Fajans' rules indicate that the smaller Ln 3+ ions will be more polarizing and their salts correspondingly less ionic. The hydroxides of the heavier lanthanides become less basic; for example, Yb(OH) 3 and Lu(OH) 3 are still basic hydroxides but will dissolve in hot concentrated NaOH . [ 1 ] All of the lanthanides form Ln 2 Q 3 (Q = S, Se, Te). [ 18 ] The sesquisulfides can be produced by reaction of the elements or (with the exception of Eu 2 S 3 ) by sulfidizing the oxide (Ln 2 O 3 ) with H 2 S. [ 18 ] The sesquisulfides, Ln 2 S 3 , generally lose sulfur when heated and can form a range of compositions between Ln 2 S 3 and Ln 3 S 4 . The sesquisulfides are insulators, but some of the Ln 3 S 4 are metallic conductors (e.g. Ce 3 S 4 ), formulated (Ln 3+ ) 3 (S 2− ) 4 (e − ), while others (e.g. Eu 3 S 4 and Sm 3 S 4 ) are semiconductors. [ 18 ] The sesquisulfides adopt structures that vary according to the size of the Ln metal: the lighter and larger lanthanides favor 7-coordinate metal atoms, the heaviest and smallest lanthanides (Yb and Lu) favor 6-coordination, and the rest adopt structures with a mixture of 6- and 7-coordination. [ 18 ] Polymorphism is common amongst the sesquisulfides. 
[ 59 ] The colors of the sesquisulfides vary from metal to metal and depend on the polymorphic form. The colors of the γ-sesquisulfides are La 2 S 3 , white/yellow; Ce 2 S 3 , dark red; Pr 2 S 3 , green; Nd 2 S 3 , light green; Gd 2 S 3 , sand; Tb 2 S 3 , light yellow; and Dy 2 S 3 , orange. [ 60 ] The shade of γ-Ce 2 S 3 can be varied by doping with Na or Ca, with hues ranging from dark red to yellow, [ 49 ] [ 60 ] and Ce 2 S 3 based pigments are used commercially and are seen as low toxicity substitutes for cadmium based pigments. [ 60 ] All of the lanthanides form monochalcogenides, LnQ (Q = S, Se, Te). [ 18 ] The majority of the monochalcogenides are conducting, indicating a formulation Ln III Q 2− (e − ) where the electron is in a conduction band. The exceptions are SmQ, EuQ and YbQ, which are semiconductors or insulators but exhibit a pressure induced transition to a conducting state. [ 59 ] Compounds LnQ 2 are known, but these do not contain Ln IV ; they are Ln III compounds containing polychalcogenide anions. [ 61 ] Oxysulfides Ln 2 O 2 S are well known; they all have the same structure with 7-coordinate Ln atoms, with 3 sulfur and 4 oxygen atoms as near neighbours. [ 62 ] Doping these with other lanthanide elements produces phosphors. As an example, gadolinium oxysulfide , Gd 2 O 2 S doped with Tb 3+ , produces visible photons when irradiated with high energy X-rays and is used as a scintillator in flat panel detectors. [ 63 ] When mischmetal , an alloy of lanthanide metals, is added to molten steel to remove oxygen and sulfur, stable oxysulfides are produced that form an immiscible solid. [ 18 ] All of the lanthanides form a mononitride, LnN, with the rock salt structure. The mononitrides have attracted interest because of their unusual physical properties. SmN and EuN are reported as being " half metals ". [ 49 ] NdN, GdN, TbN and DyN are ferromagnetic; SmN is antiferromagnetic. [ 64 ] Applications in the field of spintronics are being investigated. 
[ 58 ] CeN is unusual as it is a metallic conductor, contrasting with the other nitrides and also with the other cerium pnictides. A simple description is Ce 4+ N 3− (e − ), but the interatomic distances are a better match for the trivalent state than for the tetravalent state. A number of different explanations have been offered. [ 65 ] The nitrides can be prepared by the reaction of lanthanide metals with nitrogen. Some nitride is produced along with the oxide when lanthanide metals are ignited in air. [ 18 ] Alternative methods of synthesis are a high temperature reaction of lanthanide metals with ammonia or the decomposition of lanthanide amides, Ln(NH 2 ) 3 . Achieving pure stoichiometric compounds, and crystals with low defect density, has proved difficult. [ 58 ] The lanthanide nitrides are sensitive to air and hydrolyse producing ammonia. [ 48 ] The other pnictogens phosphorus, arsenic, antimony and bismuth also react with the lanthanide metals to form monopnictides, LnQ, where Q = P, As, Sb or Bi. Additionally a range of other compounds can be produced with varying stoichiometries, such as LnP 2 , LnP 5 , LnP 7 , Ln 3 As, Ln 5 As 3 and LnAs 2 . [ 66 ] Carbides of varying stoichiometries are known for the lanthanides. Non-stoichiometry is common. All of the lanthanides form LnC 2 and Ln 2 C 3 , which both contain C 2 units. The dicarbides, with the exception of EuC 2 , are metallic conductors with the calcium carbide structure and can be formulated as Ln 3+ C 2 2− (e − ). The C-C bond length is longer than that in CaC 2 , which contains the C 2 2− anion, indicating that the antibonding orbitals of the C 2 2− anion are involved in the conduction band. These dicarbides hydrolyse to form hydrogen and a mixture of hydrocarbons. [ 67 ] EuC 2 and to a lesser extent YbC 2 hydrolyse differently, producing a higher percentage of acetylene (ethyne). [ 68 ] The sesquicarbides, Ln 2 C 3 , can be formulated as Ln 4 (C 2 ) 3 . 
These compounds adopt the Pu 2 C 3 structure, [ 49 ] which has been described as having C 2 2− anions in bisphenoid holes formed by eight near Ln neighbours. [ 69 ] The C-C bond is less elongated than in the dicarbides, with the exception of Ce 2 C 3 , [ 67 ] indicating that the delocalized metal electrons do not fill C-C antibonding orbitals. [ 70 ] Other carbon rich stoichiometries are known for some lanthanides: Ln 3 C 4 (Ho–Lu), containing C, C 2 and C 3 units; [ 71 ] Ln 4 C 7 (Ho–Lu), containing C atoms and C 3 units; [ 72 ] and Ln 4 C 5 (Gd–Ho), containing C and C 2 units. [ 73 ] Metal rich carbides contain interstitial C atoms and no C 2 or C 3 units. These are Ln 4 C 3 (Tb and Lu); Ln 2 C (Dy, Ho, Tm) [ 74 ] [ 75 ] and Ln 3 C [ 49 ] (Sm–Lu). These hydrolyze to methane . [ 76 ] All of the lanthanides form a number of borides. The "higher" borides (LnB x where x > 12) are insulators/semiconductors whereas the lower borides are typically conducting. The lower borides have stoichiometries of LnB 2 , LnB 4 , LnB 6 and LnB 12 . [ 77 ] The range of borides formed by the lanthanides can be compared to those formed by the transition metals. The boron rich borides are typical of the lanthanides (and groups 1–3), whereas the transition metals tend to form metal rich, "lower" borides. [ 78 ] The lanthanide borides are typically grouped together with the group 3 metals, with which they share many similarities of reactivity, stoichiometry and structure. Collectively these are then termed the rare earth borides. [ 77 ] Many methods of producing lanthanide borides have been used, amongst them direct reaction of the elements; the reduction of Ln 2 O 3 with boron; reduction of boron oxide, B 2 O 3 , and Ln 2 O 3 together with carbon; and reduction of metal oxide with boron carbide , B 4 C. [ 77 ] [ 78 ] [ 79 ] [ 80 ] Producing high purity samples has proved to be difficult. 
[ 80 ] Single crystals of the higher borides have been grown in a low melting metal (e.g. Sn, Cu, Al). [ 77 ] Diborides, LnB 2 , have been reported for Sm, Gd, Tb, Dy, Ho, Er, Tm, Yb and Lu. All have the same AlB 2 structure, containing a graphitic layer of boron atoms. Low temperature ferromagnetic transitions occur for Tb, Dy, Ho and Er; TmB 2 is ferromagnetic at 7.2 K. [ 49 ] Tetraborides, LnB 4 , have been reported for all of the lanthanides except Eu; all have the same UB 4 structure . The structure has a boron sub-lattice consisting of chains of octahedral B 6 clusters linked by boron atoms. The unit cell decreases in size successively from LaB 4 to LuB 4 . The tetraborides of the lighter lanthanides melt with decomposition to LnB 6 . [ 80 ] Attempts to make EuB 4 have failed. [ 79 ] The LnB 4 are good conductors [ 77 ] and typically antiferromagnetic. [ 49 ] Hexaborides, LnB 6 , have been reported for all of the lanthanides. They all have the CaB 6 structure , containing B 6 clusters. They are non-stoichiometric due to cation defects. The hexaborides of the lighter lanthanides (La – Sm) melt without decomposition, EuB 6 decomposes to boron and metal, and the heavier lanthanides decompose to LnB 4 , with the exception of YbB 6 , which decomposes forming YbB 12 . The stability has in part been correlated to differences in volatility between the lanthanide metals. [ 80 ] In EuB 6 and YbB 6 the metals have an oxidation state of +2, whereas in the rest of the lanthanide hexaborides it is +3. This rationalises the differences in conductivity, the extra electrons in the Ln III hexaborides entering conduction bands. EuB 6 is a semiconductor and the rest are good conductors. [ 49 ] [ 80 ] LaB 6 and CeB 6 are thermionic emitters, used, for example, in scanning electron microscopes . [ 81 ] Dodecaborides, LnB 12 , are formed by the heavier, smaller lanthanides, but not by the lighter, larger metals La – Eu. 
With the exception of YbB 12 (where Yb takes an intermediate valence and is a Kondo insulator ), the dodecaborides are all metallic compounds. They all have the UB 12 structure, containing a 3-dimensional framework of cubooctahedral B 12 clusters. [ 77 ] The higher boride LnB 66 is known for all lanthanide metals. The composition is approximate, as the compounds are non-stoichiometric. [ 77 ] They all have a similar complex structure with over 1600 atoms in the unit cell. The boron cubic sub-lattice contains super icosahedra made up of a central B 12 icosahedron surrounded by 12 others, B 12 (B 12 ) 12 . [ 77 ] Other complex higher borides LnB 50 (Tb, Dy, Ho, Er, Tm, Lu) and LnB 25 (Gd, Tb, Dy, Ho, Er) are known, and these contain boron icosahedra in the boron framework. [ 77 ] Lanthanide-carbon σ bonds are well known; however, as the 4f electrons have a low probability of existing at the outer region of the atom, there is little effective orbital overlap, resulting in bonds with significant ionic character. As such, organo-lanthanide compounds exhibit carbanion -like behavior, unlike the behavior in transition metal organometallic compounds. Because of their large size, lanthanides tend to form more stable organometallic derivatives with bulky ligands, giving compounds such as Ln[CH(SiMe 3 ) 2 ] 3 . [ 82 ] Analogues of uranocene are derived from dilithiocyclooctatetraene, Li 2 C 8 H 8 . Organic lanthanide(II) compounds are also known, such as Cp* 2 Eu. [ 38 ] All the trivalent lanthanide ions, except lanthanum and lutetium, have unpaired f electrons. (Ligand-to-metal charge transfer can nonetheless produce a nonzero f-occupancy even in La(III) compounds.) [ 83 ] However, the magnetic moments deviate considerably from the spin-only values because of strong spin–orbit coupling . The maximum number of unpaired electrons is 7, in Gd 3+ , with a magnetic moment of 7.94 B.M. , but the largest magnetic moments, at 10.4–10.7 B.M., are exhibited by Dy 3+ and Ho 3+ . 
However, in Gd3+ all the electrons have parallel spin, and this property is important for the use of gadolinium complexes as contrast reagents in MRI scans. Crystal field splitting is rather small for the lanthanide ions and is less important than spin–orbit coupling in regard to energy levels. [1] Transitions of electrons between f orbitals are forbidden by the Laporte rule. Furthermore, because of the "buried" nature of the f orbitals, coupling with molecular vibrations is weak. Consequently, the spectra of lanthanide ions are rather weak and the absorption bands are similarly narrow. Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid) have sharp optical absorption peaks in the spectral range 200–900 nm and can be used as a wavelength calibration standard for optical spectrophotometers, [84] and are available commercially. [85]

As f–f transitions are Laporte-forbidden, once an electron has been excited, decay to the ground state will be slow. This makes lanthanide ions suitable for use in lasers, as it makes the population inversion easy to achieve. The Nd:YAG laser is one that is widely used. Europium-doped yttrium vanadate was the first red phosphor to enable the development of color television screens. [86] Lanthanide ions have notable luminescent properties due to their unique 4f orbitals. Laporte-forbidden f–f transitions can be activated by excitation of a bound "antenna" ligand. This leads to sharp emission bands throughout the visible, NIR, and IR and relatively long luminescence lifetimes. [87]

Samarskite and similar minerals contain lanthanides in association with elements such as tantalum, niobium, hafnium, zirconium, vanadium and titanium, from group 4 and group 5, often in similar oxidation states. Monazite is a phosphate of numerous group 3 + lanthanide + actinide metals and is mined especially for its thorium content and for specific rare earths, especially lanthanum, yttrium and cerium.
Cerium and lanthanum, as well as other members of the rare-earth series, are often produced as a metal called mischmetal, containing a variable mixture of these elements with cerium and lanthanum predominating; it has direct uses, such as lighter flints and other spark sources, which do not require extensive purification of one of these metals. [13] There are also lanthanide-bearing minerals based on group-2 elements, such as yttrocalcite, yttrocerite and yttrofluorite, which vary in content of yttrium, cerium, lanthanum and others. [88] Other lanthanide-bearing minerals include bastnäsite, florencite, chernovite, perovskite, xenotime, cerite, gadolinite, lanthanite, fergusonite, polycrase, blomstrandine, håleniusite, miserite, loparite, lepersonnite and euxenite, all of which have a range of relative element concentrations and may be denoted by a predominating one, as in monazite-(Ce). Group 3 elements do not occur as native-element minerals in the fashion of gold, silver, tantalum and many others on Earth, but may occur in lunar soil. Very rare halide, feldspar and garnet minerals of cerium, lanthanum, and presumably other lanthanides are also known to exist. [89]

The lanthanide contraction is responsible for the great geochemical divide that splits the lanthanides into light-lanthanide-enriched and heavy-lanthanide-enriched minerals, the latter being almost inevitably associated with and dominated by yttrium. This divide is reflected in the first two "rare earths" that were discovered: yttria (1794) and ceria (1803). The geochemical divide has put more of the light lanthanides in the Earth's crust, but more of the heavy members in the Earth's mantle. The result is that although large rich ore-bodies enriched in the light lanthanides are found, correspondingly large ore-bodies for the heavy members are few. The principal ores are monazite and bastnäsite. Monazite sands usually contain all the lanthanide elements, but the heavier elements are lacking in bastnäsite.
The lanthanides obey the Oddo–Harkins rule: odd-numbered elements are less abundant than their even-numbered neighbors. Three of the lanthanide elements have radioactive isotopes with long half-lives (138La, 147Sm and 176Lu) that can be used to date minerals and rocks from Earth, the Moon and meteorites. [90] Promethium is effectively a man-made element, as all its isotopes are radioactive with half-lives shorter than 20 years.

Lanthanide elements and their compounds have many uses, but the quantities consumed are relatively small in comparison to other elements. About 15,000 tonnes per year of the lanthanides are consumed as catalysts and in the production of glasses, corresponding to about 85% of lanthanide production. From the perspective of value, however, applications in phosphors and magnets are more important. [91] Devices that use lanthanide elements include superconductors, samarium-cobalt and neodymium-iron-boron high-flux rare-earth magnets, magnesium alloys, electronic polishers, refining catalysts and hybrid car components (primarily batteries and magnets). [92] Lanthanide ions are used as the active ions in luminescent materials for optoelectronics applications, most notably the Nd:YAG laser. Erbium-doped fiber amplifiers are significant devices in optical-fiber communication systems. Phosphors with lanthanide dopants are also widely used in cathode-ray tube technology, such as television sets. The earliest color television CRTs had a poor-quality red; europium as a phosphor dopant made good red phosphors possible. Yttrium iron garnet (YIG) spheres can act as tunable microwave resonators. Lanthanide oxides are mixed with tungsten to improve its high-temperature properties for TIG welding, replacing thorium, which was mildly hazardous to work with. Many defense-related products also use lanthanide elements, such as night-vision goggles and rangefinders.
The SPY-1 radar used in some Aegis-equipped warships and the hybrid propulsion system of Arleigh Burke-class destroyers both use rare-earth magnets in critical capacities. [93] The price of lanthanum oxide used in fluid catalytic cracking rose from $5 per kilogram in early 2010 to $140 per kilogram in June 2011. [94] Most lanthanides are widely used in lasers and as (co-)dopants in doped-fiber optical amplifiers; for example, in Er-doped fiber amplifiers, which are used as repeaters in the terrestrial and submarine fiber-optic transmission links that carry internet traffic. These elements deflect ultraviolet and infrared radiation and are commonly used in the production of sunglass lenses. Other applications are summarized in the following table: [95]

The complex Gd(DOTA) is used in magnetic resonance imaging. Mixtures containing all of the lanthanides operating as single-atom catalysts have been proposed for the electroreduction of carbon dioxide (CO2) to carbon monoxide (CO) with a faradaic efficiency greater than 90%. [96] Lanthanide complexes can be used for optical imaging; applications are limited by the lability of the complexes. [97]

Some applications depend on the unique luminescence properties of lanthanide chelates or cryptates. [98] [99] These are well-suited to this application due to their large Stokes shifts and extremely long emission lifetimes (from microseconds to milliseconds) compared to more traditional fluorophores (e.g., fluorescein, allophycocyanin, phycoerythrin, and rhodamine). The biological fluids or serum commonly used in these research applications contain many compounds and proteins which are naturally fluorescent. Therefore, the use of conventional, steady-state fluorescence measurement presents serious limitations in assay sensitivity.
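Why long emission lifetimes overcome this background problem can be seen from simple exponential decay: after a gating delay t, the surviving fraction of emission with lifetime τ is exp(−t/τ). A sketch with illustrative lifetimes (the specific numbers are assumptions, not from the source):

```python
from math import exp

def remaining_fraction(delay_s: float, lifetime_s: float) -> float:
    """Fraction of the initial emission left after a gating delay: I(t)/I0 = exp(-t/tau)."""
    return exp(-delay_s / lifetime_s)

gate = 100e-6  # a 100-microsecond delay before detection (illustrative)
print(remaining_fraction(gate, 1e-3))  # millisecond-lifetime lanthanide: ~90% survives
print(remaining_fraction(gate, 5e-9))  # nanosecond organic fluorophore: effectively zero
```

After the delay, essentially only the lanthanide signal remains, which is the basis of the time-resolved detection described next.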
Long-lived fluorophores, such as lanthanides, combined with time-resolved detection (a delay between excitation and emission detection) minimize prompt fluorescence interference. Time-resolved fluorometry (TRF) combined with Förster resonance energy transfer (FRET) offers a powerful tool for drug discovery researchers: time-resolved Förster resonance energy transfer, or TR-FRET. TR-FRET combines the low-background aspect of TRF with the homogeneous assay format of FRET. The resulting assay provides an increase in flexibility, reliability and sensitivity, in addition to higher throughput and fewer false positive/false negative results. This method involves two fluorophores: a donor and an acceptor. Excitation of the donor fluorophore (in this case, the lanthanide ion complex) by an energy source (e.g. a flash lamp or laser) produces an energy transfer to the acceptor fluorophore if they are within a given proximity to each other (known as the Förster radius). The acceptor fluorophore in turn emits light at its characteristic wavelength. The two most commonly used lanthanides in life science assays are shown below, along with their corresponding acceptor dye as well as their excitation and emission wavelengths and resultant Stokes shift (separation of excitation and emission wavelengths).

Current research shows that lanthanide elements can be used as anticancer agents; their main role in these studies is to inhibit proliferation of the cancer cells. Cerium and lanthanum in particular have been studied for their role as anti-cancer agents. Studies have used a protein–cerium complex to observe the effect of cerium on cancer cells, the hope being to inhibit cell proliferation and promote cytotoxicity.
[100] Transferrin receptors in cancer cells, such as those in breast cancer cells and epithelial cervical cells, promote the cell proliferation and malignancy of the cancer. [100] Transferrin is a protein used to transport iron into cells and is needed to aid the cancer cells in DNA replication. Transferrin acts as a growth factor for the cancerous cells and is dependent on iron. Cancer cells have much higher levels of transferrin receptors than normal cells and are very dependent on iron for their proliferation. [100]

In the field of magnetic resonance imaging (MRI), compounds containing gadolinium are utilized extensively. [101] Complexes of the lanthanides with coumarin and related compounds have demonstrated photobiological, anticancer, anti-leukemia and anti-HIV activities. [102]

Cerium has shown promise as an anti-cancer agent because of its similarities in structure and biochemistry to iron. Cerium may bind in place of iron to transferrin and then be brought into the cancer cells by transferrin-receptor-mediated endocytosis. [100] Cerium binding to transferrin in place of iron inhibits the transferrin activity in the cell. This creates a toxic environment for the cancer cells and causes a decrease in cell growth. This is the proposed mechanism for cerium's effect on cancer cells, though the real mechanism by which cerium inhibits cancer cell proliferation may be more complex. In HeLa cancer cells studied in vitro, cell viability decreased after 48 to 72 hours of cerium treatment. Cells treated with cerium alone showed decreased viability, but cells treated with both cerium and transferrin showed more significant inhibition of cellular activity. [100] Another element that has been tested and used as an anti-cancer agent is lanthanum, more specifically lanthanum chloride (LaCl3).
The lanthanum ion is used to affect the levels of the microRNAs let-7a and miR-34a in a cell throughout the cell cycle. When the lanthanum ion was introduced to the cell in vivo or in vitro, it inhibited the rapid growth and induced apoptosis of the cancer cells (specifically cervical cancer cells). This effect was caused by the regulation of let-7a and miR-34a by the lanthanum ions. [103] The mechanism for this effect is still unclear, but it is possible that lanthanum acts in a similar way to cerium, binding to a ligand necessary for cancer cell proliferation.

Due to their sparse distribution in the Earth's crust and low aqueous solubility, the lanthanides have a low availability in the biosphere, and for a long time were not known to naturally form part of any biological molecules. In 2007 a novel methanol dehydrogenase that strictly uses lanthanides as enzymatic cofactors was discovered in a bacterium from the phylum Verrucomicrobiota, Methylacidiphilum fumariolicum. This bacterium was found to survive only if there are lanthanides present in the environment. [104] The same nutritional requirement has also been observed in Methylorubrum extorquens and Methylobacterium radiotolerans. Compared to most other nondietary elements, non-radioactive lanthanides are classified as having low toxicity. [91]
https://en.wikipedia.org/wiki/Lanthanide
The lanthanide contraction is the greater-than-expected decrease in atomic radii and ionic radii of the elements in the lanthanide series, from left to right. It is caused by the poor shielding of nuclear charge by the 4f electrons, along with the expected periodic trend of increasing electronegativity and nuclear charge on moving from left to right. About 10% of the lanthanide contraction has been attributed to relativistic effects. [1]

A decrease in atomic radii can be observed across the 4f elements from atomic number 57, lanthanum, to 70, ytterbium. This results in smaller than otherwise expected atomic radii and ionic radii for the subsequent d-block elements starting with 71, lutetium. [2] [3] [4] [5] This effect causes the radii of period-5 and period-6 transition metals in the same group to become unusually similar, as the expected increase in radius on descending a group is nearly cancelled out by the f-block insertion, and it has many other far-ranging consequences in post-lanthanide elements. The decrease in ionic radii (Ln3+) is much more uniform than the decrease in atomic radii.

The term was coined by the Norwegian geochemist Victor Goldschmidt in his series "Geochemische Verteilungsgesetze der Elemente" (Geochemical distribution laws of the elements). [6]

The effect results from poor shielding of nuclear charge (nuclear attractive force on electrons) by 4f electrons; the 6s electrons are drawn towards the nucleus, resulting in a smaller atomic radius. In single-electron atoms, the average separation of an electron from the nucleus is determined by the subshell it belongs to, and decreases with increasing charge on the nucleus; this, in turn, leads to a decrease in atomic radius. In multi-electron atoms, the decrease in radius brought about by an increase in nuclear charge is partially offset by increasing electrostatic repulsion among electrons.
In particular, a " shielding effect " operates: i.e., as electrons are added in outer shells, electrons already present shield the outer electrons from nuclear charge, making them experience a lower effective charge on the nucleus. The shielding effect exerted by the inner electrons decreases in the order s > p > d > f . Usually, as a particular subshell is filled in a period, the atomic radius decreases. This effect is particularly pronounced in the case of lanthanides, as the 4 f subshell which is filled across these elements is not very effective at shielding the outer shell (n=5 and n=6) electrons. Thus the shielding effect is less able to counter the decrease in radius caused by increasing nuclear charge. This leads to "lanthanide contraction". The ionic radius drops from 103 pm for lanthanum(III) to 86.1 pm for lutetium(III). About 10% of the lanthanide contraction has been attributed to relativistic effects . [ 1 ] The lanthanide contraction was experimentally observed in aqueous solutions of lanthanides, including radioactive promethium , through X-ray absorption spectroscopy measurements. [ 7 ] The results of the increased attraction of the outer shell electrons across the lanthanide period may be divided into effects on the lanthanide series itself including the decrease in ionic radii, and influences on the following or post-lanthanide elements. The ionic radii of the lanthanides decrease from 103 pm ( La 3+ ) to 86 pm ( Lu 3+ ) in the lanthanide series, electrons are added to the 4f shell. This first f shell is inside the full 5s and 5p shells (as well as the 6s shell in the neutral atom); the 4f shell is well-localized near the atomic nucleus and has little effect on chemical bonding. The decrease in atomic and ionic radii does affect their chemistry, however. Without the lanthanide contraction, a chemical separation of lanthanides would be extremely difficult. 
However, this contraction makes the chemical separation of period-5 and period-6 transition metals of the same group rather difficult. Even when the mass of an atomic nucleus is the same, a decrease in the atomic volume produces a corresponding increase in the density, as illustrated by alpha crystals of cerium (at 77 K) and gamma crystals of cerium (near room temperature), where the atomic volume of the latter is 120.3% of the former and the density of the former is 120.5% of the latter (i.e., 20.696 vs 17.2 cm3/mol and 8.16 vs 6.770 g/cm3, respectively). [8] As expected, when more mass (protons and neutrons) is packed into a space that is subject to "contraction", the density increases consistently with atomic number for the lanthanides (excluding the atypical 2nd, 7th, and 14th), culminating in the value for the last lanthanide (Lu) being 160% of that of the first (La). Melting points (in kelvins) also increase consistently across these 12 lanthanides, culminating in the value for the last being 161% of the first. This density–melting point association does not rest on just a comparison between these two lanthanides: the correlation coefficient (Pearson product-moment) between density and melting point is 0.982 for these 12 lanthanides and 0.946 for all 15. There is a general trend of increasing Vickers hardness, Brinell hardness, density and melting point from lanthanum to lutetium (with europium and ytterbium being the most notable exceptions; in the metallic state, they are divalent rather than trivalent). Cerium, along with europium and ytterbium, is atypical when its properties are compared with those of the other 12 lanthanides, as evidenced by the clearly lower values (than either adjacent element) for melting points (lower by 10–43%), Vickers hardness (lower by 32–82%), and densities (lower by 26–33% for Eu and Yb; the density of Ce is instead 10% higher than that of lanthanum).
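The correlation coefficients quoted here are ordinary Pearson product-moment coefficients. A minimal implementation, shown with toy data rather than the lanthanide values, is:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear toy data correlates at exactly 1.0:
print(pearson([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))
```

Feeding in the tabulated densities and melting points of the 12 typical lanthanides would reproduce the 0.982 figure quoted in the text.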
The lower densities of europium and ytterbium (than their adjacent lanthanides) are associated with larger atomic volumes, at 148% and 128% of the average volume of the typical 12 lanthanides (i.e., 28.979, 25.067, and 19.629 cm3/mol, respectively). [8] Because the atomic volume of Yb is 21% more than that of Ce, [8] it is understandable that the density of Ce (the 2nd lanthanide) is 98% of that of ytterbium (the 14th lanthanide) even though the latter has a 24% higher atomic weight, and that the melting point of Ce (1068 K) is nearly the same as the 1097 K of ytterbium and the 1099 K of europium. These three elements are the only lanthanides with melting points below the lowest of the other twelve, which is 1193 K for lanthanum. Europium has a half-filled 4f subshell, which may account for its atypical values compared with the data for the 12 typical lanthanides. Lutetium is the hardest and densest lanthanide and has the highest melting point, at 1925 K (coincidentally, 1925 is the year in which Goldschmidt published the term "Die Lanthaniden-Kontraktion").

Unlike the melting-point data for the lanthanides (where the values increase consistently when the 2nd, 7th and 14th are excluded), the boiling-point temperatures show a repeated pattern, at 162% and 165% for the 8th lanthanide relative to the 6th and the 15th relative to the 13th (ignoring the atypical 7th and 14th). The 8th and 15th are among the four lanthanides with one electron in the 5d shell (the others being the 1st and 2nd), and the boiling points of these four lie within ±2.6% of 3642 K. See the post-lanthanides section for more comments on the 5d-shell electrons. There is also a repeated boiling-point pattern at 66% and 71% for the 6th and 13th lanthanides (relative to the preceding elements), which differ by one electron in the 4f shell, i.e., 5 to 6 and 12 to 13.
It has been shown that the lanthanide contraction plays a crucial role in determining the magnetic phase diagram of the heavy rare-earth elements, [9] [10] i.e. those from gadolinium onwards.

The elements following the lanthanides in the periodic table are influenced by the lanthanide contraction. When the first three post-lanthanide elements (Hf, Ta, and W) are combined with the 12 lanthanides, the Pearson correlation coefficient increases from 0.982 to 0.997. On average for the 12 lanthanides, the melting point (on the Kelvin scale) is 192x the density (in g/cm^3), while the three elements following the lanthanides have similar ratios, at 188x, 197x, and 192x. The densities continue to increase but the melting points decrease for the next two elements, and both properties then decrease (at different rates) for the next eight elements. Hafnium is unusual in that not only do its density and melting point change proportionally (relative to lutetium, the last lanthanide), at 135% and 130%, but so does its boiling point, at 133%. The elements with 2, 3 and 4 electrons in the 5d shell (the post-lanthanides Hf, Ta, W) have increasing boiling points, such that the value for W (wolfram, aka tungsten) is 169% of that of the element with one 5d electron (Lu). The high melting point and two other properties of tungsten originate from strong covalent bonds formed between tungsten atoms by the 5d electrons. The elements with 5 to 10 electrons in the 5d shell (Re to Hg) have progressively lower boiling points, such that the element with ten 5d electrons (Hg) boils at 52% of the value for tungsten (with four 5d electrons). [citation needed]

The radii of the period-6 transition metals are smaller than would be expected if there were no lanthanides, and are in fact very similar to the radii of the period-5 transition metals, since the effect of the additional electron shell is almost entirely offset by the lanthanide contraction.
[ 4 ] For example, the atomic radius of the metal zirconium , Zr (a period-5 transition element), is 155 pm [ 11 ] ( empirical value ) and that of hafnium , Hf (the corresponding period-6 element), is 159 pm. [ 12 ] The ionic radius of Zr 4+ is 84 pm and that of Hf 4+ is 83 pm. [ 13 ] The radii are very similar even though the number of electrons increases from 40 to 72 and the atomic mass increases from 91.22 to 178.49 g/mol. The increase in mass and the unchanged radii lead to a steep increase in density from 6.51 to 13.35 g/cm 3 . Zirconium and hafnium, therefore, have very similar chemical behavior, having closely similar radii and electron configurations. Radius-dependent properties such as lattice energies , solvation energies , and stability constants of complexes are also similar. [ 3 ] Because of this similarity, hafnium is found only in association with zirconium, which is much more abundant. This also meant that hafnium was discovered as a separate element in 1923, 134 years after zirconium was discovered in 1789. Titanium , on the other hand, is in the same group, but differs enough from those two metals that it is seldom found with them.
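The density jump follows almost entirely from the mass change at nearly constant radius: if the molar volumes were exactly equal, the density ratio would equal the molar-mass ratio. A quick check with the figures quoted above:

```python
m_zr, m_hf = 91.22, 178.49      # molar masses in g/mol (from the text)
rho_zr, rho_hf = 6.51, 13.35    # densities in g/cm^3 (from the text)

print(round(m_hf / m_zr, 2))      # mass ratio: 1.96
print(round(rho_hf / rho_zr, 2))  # density ratio: 2.05, close to the mass ratio
```

The small residual difference between the two ratios reflects the slightly different molar volumes of the two metals.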
https://en.wikipedia.org/wiki/Lanthanide_contraction
Lanthanide probes are a non-invasive [1] analytical tool commonly used for biological and chemical applications. Lanthanides are metal ions with progressively filled 4f energy levels, and the term generally refers to the elements cerium to lutetium in the periodic table. [2] The fluorescence of lanthanide salts is weak because the energy absorption of the metallic ion is low; hence chelated complexes of lanthanides are most commonly used. [3] The term chelate derives from the Greek word for "claw" and is applied to ligands that attach to a metal ion with two or more donor atoms through dative bonds. The fluorescence is most intense when the metal ion is in the +3 oxidation state. Not all lanthanide metals can be used, and the most common are Sm(III), Eu(III), Tb(III) and Dy(III). [3]

It has been known since the early 1930s that the salts of certain lanthanides are fluorescent. [4] The reaction of lanthanide salts with nucleic acids was discussed in a number of publications during the 1930s and 1940s, where lanthanum-containing reagents were employed for the fixation of nucleic acid structures. [3] In 1942, complexes of europium, terbium and samarium were discovered to exhibit unusual luminescence properties when excited by UV light. [3] However, the first staining of biological cells with lanthanides occurred twenty years later, when bacterial smears of E. coli were treated with aqueous solutions of a europium complex, which under mercury-lamp illumination appeared as bright red spots. [1] Attention to lanthanide probes increased greatly in the mid-1970s, when Finnish researchers proposed Eu(III), Sm(III), Tb(III) and Dy(III) polyaminocarboxylates as luminescent sensors in time-resolved luminescent (TRL) immunoassays. [1] Optimization of analytical methods from the 1970s onward for lanthanide chelates and time-resolved luminescence microscopy (TRLM) resulted in the use of lanthanide probes in many scientific, medical and commercial fields.
[1] There are two main assaying techniques: heterogeneous and homogeneous. If two lanthanide chelates are used in the analysis one after the other, the method is called heterogeneous assaying. [4] The first analyte is linked to a specific binding agent on a solid support such as a polymer, and another reaction then couples the first, poorly luminescent lanthanide complex with a new, better one. [1] [4] This tedious method is used because the second, more luminescent compound would not bind without the first analyte already present. Subsequent time-resolved detection of the metal-centered luminescent probe yields the desired signal. Antigens, steroids and hormones are routinely assayed with heterogeneous techniques. Homogeneous assays rely on direct coupling of the lanthanide label with an organic acceptor. [1]

The relaxation of excited molecular states often occurs by the emission of light, which is called fluorescence. There are two ways of measuring this emitted radiation: as a function of frequency (inverse to wavelength) or of time. [4] Conventionally, the fluorescence spectrum shows the intensity of fluorescence at different wavelengths, but since lanthanides have relatively long fluorescence decay times (ranging from one microsecond to one millisecond), it is possible to record the fluorescence emission at different decay times after the excitation at time zero. This is called time-resolved fluorescence spectroscopy. [5]

Lanthanides can be used because their small size (ionic radius) gives them the ability to replace metal ions such as calcium or nickel inside protein complexes. The optical properties of lanthanide ions such as Ln(III) originate in the special features of their electronic [Xe]4f^n configurations. [4] These configurations generate many electronic levels, the number of which is given by 14!/[n!(14−n)!], translating into 3003 levels for Eu(III) and Tb(III).
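The level count quoted above is simply the binomial coefficient C(14, n), the number of ways of distributing n electrons among the 14 spin-orbitals of the 4f shell:

```python
from math import comb

# Number of microstates of a 4f^n configuration: C(14, n)
for n, ion in ((6, "Eu(III)"), (8, "Tb(III)")):
    print(ion, comb(14, n))  # both print 3003, as quoted in the text
```

Eu(III) is 4f^6 and Tb(III) is 4f^8, and C(14, 6) = C(14, 8) by the symmetry of the binomial coefficient, which is why both ions share the figure 3003.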
[1] The energies of these levels are well defined, due to the shielding of the 4f orbitals by the filled 5s and 5p sub-shells, [4] and are not very sensitive to the chemical environment in which the lanthanide ions are inserted. Inner-shell 4f–4f transitions span both the visible and near-infrared ranges. [1] They are sharp and easily recognizable. Since these transitions are parity-forbidden, the lifetimes of the excited states are long, which allows the use of time-resolved spectroscopy, [4] a definitive asset for bioassays and microscopy. The only drawback of f–f transitions is their faint oscillator strengths, which may in fact be turned into an advantage. [1] The energy absorbed by the organic receptor (ligand) is transferred to the Ln(III) excited states, and sharp emission bands originating from the metal ion are detected after rapid internal conversion to the emitting level. [1] The phenomenon is termed sensitization of the metal-centered complex (also referred to as the antenna effect) and is quite complex. [4] The energy migration path, however, goes through the long-lived triplet state of the ligand. Ln(III) ions are good quenchers of triplet states, so photobleaching is substantially reduced. The three types of transitions seen for lanthanide probes are LMCT, 4f–5d, and intraconfigurational 4f–4f. The former two usually occur at energies too high to be relevant for bio-applications. [1] [4]

Screening tools for the development of new cancer therapies are in high demand worldwide and often require the determination of enzyme kinetics. [1] The high sensitivity of lanthanide luminescence, particularly time-resolved luminescence, makes it an ideal candidate for this purpose. There are several ways of conducting this analysis: by the use of fluorogenic enzyme substrates, of substrates bearing donor/acceptor groups that allow fluorescence resonance energy transfer (FRET), and of immunoassays.
For example, guanine-nucleotide-binding proteins consist of several subunits, one of which comprises those of the Ras subfamily. [1] Ras GTPases act as binary switches by converting guanosine triphosphate (GTP) into guanosine diphosphate (GDP). Luminescence of the Tb(III) complex with norfloxacin is sensitive enough to determine the concentration of phosphate released by the GTP-to-GDP transformation. [1]

Protonation of basic sites in systems comprising a chromophore and a luminescent metal center leads the way to pH sensors. [4] Some initially proposed systems were based on pyridine derivatives, but these were not stable in water. [1] More robust sensors have been proposed in which the core is a substituted macrocycle, usually bearing phosphinate, carboxylate or four amide coordinating groups. It has been observed that lanthanide luminescent probe emission increases about six-fold when the pH of the solution is decreased from six to two. [1] Hydrogen peroxide can be detected with high sensitivity by the luminescence of lanthanide probes, though only at relatively high pH values. A lanthanide-based analytical procedure was proposed in 2002, based on the finding that the europium complex with various tetracyclines binds hydrogen peroxide, forming a luminescent complex. [1]

FRET in lanthanide probes is a widely used technique to measure the distance between two points separated by approximately 15–100 Å. [6] Measurements can be done under physiological conditions in vitro with genetically encoded dyes, and often in vivo as well. The technique relies on a distance-dependent transfer of energy from a donor fluorophore to an acceptor dye. Lanthanide probes have been used to study DNA–protein interactions (using a terbium chelate complex), measuring distances in DNA complexes bent by the CAP protein. [6] Lanthanide probes have also been used to detect conformational changes in proteins.
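The distance dependence underlying such FRET measurements is the standard Förster relation, E = 1/(1 + (r/R0)^6), where R0 is the Förster radius of the donor-acceptor pair. A sketch (the 50 Å Förster radius is an illustrative assumption, not from the source):

```python
def fret_efficiency(r: float, r0: float) -> float:
    """Förster transfer efficiency for donor-acceptor distance r and Förster radius r0 (same units)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

R0 = 50.0  # illustrative Förster radius in angstroms
print(fret_efficiency(50.0, R0))   # at r = R0 the efficiency is exactly 0.5
print(fret_efficiency(100.0, R0))  # at 2x R0, transfer is almost switched off
```

The steep sixth-power falloff is what confines useful measurements to the roughly 15–100 Å window quoted above.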
Recently the Shaker potassium ion channel, [6] a voltage-gated channel involved in nerve impulses, was measured using this technique. [7] Some scientists have also used lanthanide-based luminescence resonance energy transfer (LRET), which is very similar to FRET, to study conformational changes in RNA polymerase upon binding to DNA and transcription initiation in prokaryotes. LRET was also used to study the interaction of the proteins dystrophin and actin in muscle cells. Dystrophin is present in the inner muscle cell membrane and is believed to stabilize muscle fibers by binding to actin filaments. Dystrophin specifically labelled with Tb-labelled monoclonal antibodies was used. [6]

Traditional virus diagnostic procedures are being replaced by sensitive immunoassays with lanthanides. The time-resolved-fluorescence-based technique is generally applicable, and its performance has also been tested in the assay of viral antigens in clinical specimens. [6]

Several systems have been proposed which combine MRI capability with lanthanide probes in dual assays. [4] The luminescent probe may, for instance, serve to localize the MRI contrast agent. [8] This has helped to visualize the delivery of nucleic acids into cultured cells. In this role the lanthanides are used not for their fluorescence but for their magnetic qualities. [8] [9]

Lanthanide probes display unique fluorescence properties, including a long fluorescence lifetime, a large Stokes shift and narrow emission peaks. These properties are highly advantageous for developing analytical probes for receptor–ligand interactions. Many lanthanide-based fluorescence studies have been developed for GPCRs, including CXCR1, [10] insulin-like family peptide receptor 2, [11] protease-activated receptor 2, [12] the β2-adrenergic receptor [13] and the C3a receptor. [14]

The emitted photons from excited lanthanides are detected by highly sensitive devices and techniques such as single-photon detection.
If the lifetime of the excited emitting level is long enough, then time-resolved detection (TRD) can be used to enhance the signal-to-noise ratio. [ 5 ] The instrumentation used to perform LRET is relatively simple, although slightly more complex than conventional fluorimeters. The general requirements are a pulsed UV excitation source and time-resolved detection. Light sources which emit short-duration pulses can be divided into the following categories: [ 3 ] The most important factors in the selection of the pulsed light source are the duration and intensity of the light pulses. [ 3 ] Pulsed lasers for the 300 to 500 nm range have now replaced spark gaps in fluorescence spectroscopy. There are four general types of pulsed lasers used: lasers with pulsed excitation, Q-switched lasers, mode-locked lasers and cavity-dumped lasers. Pulsed nitrogen lasers (337 nm) have often been used as an excitation source in time-resolved fluorometry. [ 3 ] In time-resolved fluorometry the fast photomultiplier tube is the only practical single-photon detector. Good single-photon resolution is also an advantage in counting photons from long-decay fluorescent probes, such as lanthanide chelates. [ 4 ] These commercial instruments are available in the market today: Perkin-Elmer Micro Filter Fluorometer LS-2, Perkin-Elmer Luminescence Spectrometer Model LS 5, and LKB-Wallac Time-Resolved Fluorometer Model 1230. [ 3 ] Lanthanide probes' ligands must meet several chemical requirements for the probes to work properly. These qualities are: water solubility, large thermodynamic stability at physiological pH , kinetic inertness and absorption above 330 nm to minimize destruction of live biological materials. [ 1 ] The chelates which have been studied and utilized to date can be classified into the following groups: [ 3 ] The efficiency of the energy transfer from the ligand to the ion is determined by the ligand–metal bond.
The energy transfer is more efficient when the ligand is bonded covalently rather than ionically. [ 15 ] Electron-donating substituents in the ligand, such as hydroxy , methoxy and methyl groups, increase the fluorescence. [ 3 ] The opposite effect is seen when an electron-withdrawing group (such as nitro) is attached. [ 3 ] [ 4 ] Furthermore, the fluorescence intensity is increased by fluorine substitution on the ligand. The energy transfer to the metal ion increases because the electronegativity of the fluorinated group makes the europium–oxygen bond more covalent in nature. Increasing conjugation by replacing phenyl with naphthyl groups has been shown to enhance fluorescence. [ 15 ]
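The gain from the time-resolved detection scheme described earlier comes from the long lifetime of lanthanide emission: after a gate delay, nanosecond-scale autofluorescence has decayed away completely while millisecond-scale chelate emission remains. A minimal sketch with illustrative, assumed lifetimes:

```python
import math

def fraction_remaining(delay_s: float, tau_s: float) -> float:
    """Fraction of an exponentially decaying signal surviving past the delay."""
    return math.exp(-delay_s / tau_s)

TAU_PROBE = 1e-3   # s: lanthanide chelate lifetime, ~1 ms (assumed)
TAU_BKG = 1e-8     # s: autofluorescence background, ~10 ns (assumed)
GATE_DELAY = 1e-4  # s: 100 µs delay before photon counting begins

probe_left = fraction_remaining(GATE_DELAY, TAU_PROBE)  # ~0.90 of signal kept
bkg_left = fraction_remaining(GATE_DELAY, TAU_BKG)      # background effectively gone
print(f"probe remaining: {probe_left:.3f}, background remaining: {bkg_left:.1e}")
```

With these numbers the probe signal is barely attenuated while the short-lived background is suppressed below any detectable level, which is the signal-to-noise advantage the text describes.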
https://en.wikipedia.org/wiki/Lanthanide_probes
Lanthanum(III) iodide is an inorganic compound containing lanthanum and iodine with the chemical formula LaI 3 . [ 1 ] Lanthanum(III) iodide can be synthesised by the reaction of lanthanum metal with mercury(II) iodide : [ 2 ] [ 3 ] It can also be prepared from the elements, that is by the reaction of metallic lanthanum with iodine: [ 2 ] While lanthanum(III) iodide solutions can be generated by dissolving lanthanum oxide in hydroiodic acid , the product will hydrolyse and form polymeric hydroxy species: [ 4 ] Lanthanum(III) iodide adopts the same crystal structure as plutonium(III) bromide , with 8-coordinate metal centres arranged in layers. [ 4 ] [ 5 ] This orthorhombic structure is typical of the triiodides of the lighter lanthanides (La–Nd), whereas heavier lanthanides tend to adopt the hexagonal bismuth(III) iodide structure. [ 3 ] Lanthanum(III) iodide is very soluble in water and is deliquescent . [ 4 ] Anhydrous lanthanum(III) iodide reacts with tetrahydrofuran to form a photoluminescent complex, LaI 3 (THF) 4 , with an average La–I bond length of 3.16 Å. [ 6 ] [ 7 ] This complex is a starting material for amide and cyclopentadienyl complexes of lanthanum. [ 6 ] [ 8 ] Lanthanum also forms a diiodide , LaI 2 . It is an electride and is best formulated {La III ,2I − ,e − }, with the electron delocalised in a conduction band. [ 4 ] Several other lanthanides form similar compounds, including CeI 2 , PrI 2 and GdI 2 . [ 9 ] Lanthanum diiodide adopts the same tetragonal crystal structure as PrI 2 . [ 10 ] Lanthanum(III) iodide reacts with lanthanum metal under an argon atmosphere in a tantalum capsule at 1225 K to form the mixed-valence compound La 2 I 5 . [ 11 ] Reduction of LaI 2 or LaI 3 with metallic sodium in an argon atmosphere at 550 °C gives lanthanum monoiodide, LaI, which has a hexagonal crystal structure. [ 12 ]
https://en.wikipedia.org/wiki/Lanthanum(III)_iodide
Lanthanum forms several alloys with nickel , including LaNi 5 , La 2 Ni 7 , LaNi 2 , LaNi 3 , La 2 Ni 3 , LaNi and La 3 Ni . [ 1 ] LaNi 5 is an intermetallic compound with a CaCu 5 structure. It belongs to the hexagonal crystal system. [ 4 ] It can be oxidized by air above 200 °C, and reacts with hydrochloric acid , sulfuric acid or nitric acid above 20 °C. [ 5 ] LaNi 5 can be used as a catalyst for hydrogenation reactions. [ 6 ] [ 7 ] In addition to these stoichiometric alloys, there are nonstoichiometric alloys such as LaNi 2.286 ( tetragonal , space group I 4̄2m). [ 8 ] The nickel atoms in La x Ni y can also be replaced by other atoms, as in LaNi 2.5 Co 2.5 . [ 9 ]
https://en.wikipedia.org/wiki/Lanthanum-nickel_alloy
Lanthanum acetate is an inorganic compound , a salt of lanthanum with acetic acid with the chemical formula La(CH 3 CO 2 ) 3 . According to X-ray crystallography , anhydrous lanthanum acetate is a coordination polymer . Each La(III) center is nine-coordinate, with two bidentate acetate ligands and the remaining sites occupied by oxygens provided by bridging acetate ligands. The praseodymium and holmium compounds are isostructural. [ 1 ] Lanthanum acetate can be formed by the reaction of lanthanum(III) oxide with acetic anhydride , or by the reaction of lanthanum oxide with 50% acetic acid . Lanthanum acetate forms colorless crystals that dissolve in water. It forms hydrates of the composition La(CH 3 COO) 3 • n H 2 O , where n = 1 and 1.5. [ 2 ] [ 3 ] Lanthanum acetate and its hydrates decompose when heated. Lanthanum acetate is used in specialty glass manufacturing and in water treatment . It is also used to produce porous lanthanum oxyfluoride (LaOF) films, [ 4 ] as a component in the production of ceramic products, and as a catalyst in the pharmaceutical industry.
https://en.wikipedia.org/wiki/Lanthanum_acetate
Lanthanum aluminate is an inorganic compound with the formula LaAlO 3 , often abbreviated as LAO . It is an optically transparent ceramic oxide with a distorted perovskite structure . Crystalline LaAlO 3 has a relatively high relative dielectric constant of ~25. LAO's crystal structure is a rhombohedrally distorted perovskite with a pseudocubic lattice parameter of 3.787 angstroms at room temperature [ 2 ] (although one source gives a lattice parameter of 3.82 angstroms [ 3 ] ). Polished single-crystal LAO surfaces show twin defects visible to the naked eye. Epitaxially grown thin films of LAO can serve various purposes in correlated-electron heterostructures and devices. LAO is sometimes used as an epitaxial insulator between two conductive layers. Epitaxial LAO films can be grown by several methods, most commonly by pulsed laser deposition (PLD) and molecular beam epitaxy (MBE). [ citation needed ] The most important and common use for epitaxial LAO is at the lanthanum aluminate-strontium titanate interface . In 2004, it was discovered that when 4 or more unit cells of LAO are epitaxially grown on strontium titanate (SrTiO 3 , STO), a conductive 2-dimensional layer is formed at their interface. [ 4 ] Individually, LaAlO 3 and SrTiO 3 are non-magnetic insulators , yet LaAlO 3 /SrTiO 3 interfaces exhibit electrical conductivity , [ 4 ] superconductivity , [ 5 ] ferromagnetism , [ 6 ] large negative in-plane magnetoresistance , [ 7 ] and giant persistent photoconductivity . [ 8 ] The study of how these properties emerge at the LaAlO 3 /SrTiO 3 interface is a growing area of research in condensed matter physics . Single crystals of lanthanum aluminate are commercially available as a substrate for the epitaxial growth of perovskites, [ 1 ] [ 9 ] and particularly for cuprate superconductors . Thin films of lanthanum aluminate were considered as candidate materials for high-k dielectrics in the early-to-mid 2000s.
Despite their attractive relative dielectric constant of ~25, they were not stable enough in contact with silicon at the relevant temperatures (~1000 °C). [ 10 ]
https://en.wikipedia.org/wiki/Lanthanum_aluminate
The interface between lanthanum aluminate (LaAlO 3 ) and strontium titanate (SrTiO 3 ) is a notable materials interface because it exhibits properties not found in its constituent materials. Individually, LaAlO 3 and SrTiO 3 are non-magnetic insulators , yet LaAlO 3 /SrTiO 3 interfaces can exhibit metallic electrical conductivity , [ 1 ] superconductivity , [ 2 ] ferromagnetism , [ 3 ] large negative in-plane magnetoresistance , [ 4 ] and giant persistent photoconductivity . [ 5 ] The study of how these properties emerge at the LaAlO 3 /SrTiO 3 interface is a growing area of research in condensed matter physics . Under the right conditions, the LaAlO 3 /SrTiO 3 interface is electrically conductive, like a metal. The angular dependence of Shubnikov–de Haas oscillations indicates that the conductivity is two-dimensional, [ 6 ] leading many researchers to refer to it as a two-dimensional electron gas (2DEG). Two-dimensional does not mean that the conducting layer has zero thickness, but rather that the electrons are confined to move in only two directions. It is also sometimes called a two-dimensional electron liquid (2DEL) to emphasize the importance of inter-electron interactions. [ 7 ] Not all LaAlO 3 /SrTiO 3 interfaces are conductive; typically, conductivity is achieved only under particular growth and termination conditions. Conductivity can also be achieved when the SrTiO 3 is doped with oxygen vacancies; however, in that case, the interface is technically LaAlO 3 /SrTiO 3−x rather than LaAlO 3 /SrTiO 3 . The source of the conductivity at the LaAlO 3 /SrTiO 3 interface has been debated for years. SrTiO 3 is a wide-band-gap semiconductor that can be doped n-type in a variety of ways. Clarifying the mechanism behind the conductivity is a major goal of current research, and there are four leading hypotheses. Polar gating was the first mechanism used to explain the conductivity at LaAlO 3 /SrTiO 3 interfaces.
[ 1 ] It postulates that the LaAlO 3 , which is polar in the 001 direction (with alternating sheets of positive and negative charge), acts as an electrostatic gate on the semiconducting SrTiO 3 . [ 1 ] When the LaAlO 3 layer grows thicker than three unit cells, its valence band energy rises above the Fermi level , causing holes (or positively charged oxygen vacancies [ 9 ] ) to form on the outer surface of the LaAlO 3 . The positive charge on the surface of the LaAlO 3 attracts negative charge to nearby available states. In the case of the LaAlO 3 /SrTiO 3 interface, this means electrons accumulate in the surface of the SrTiO 3 , in the Ti d bands. The strengths of the polar gating hypothesis are that it explains why conductivity requires a critical thickness of four unit cells of LaAlO 3 and that it explains why conductivity requires the SrTiO 3 to be TiO 2 -terminated. The polar gating hypothesis also explains why alloying the LaAlO 3 increases the critical thickness for conductivity. [ 10 ] One weakness of the hypothesis is that it predicts that the LaAlO 3 films should exhibit a built-in electric field; so far, x-ray photoemission experiments [ 11 ] [ 12 ] [ 13 ] [ 14 ] and other experiments [ 15 ] [ 16 ] [ 17 ] have shown little to no built-in field in the LaAlO 3 films. The polar gating hypothesis also cannot explain why Ti 3+ is detected when the LaAlO 3 films are thinner than the critical thickness for conductivity. [ 12 ] The polar gating hypothesis is sometimes called the polar catastrophe hypothesis, [ 18 ] alluding to the counterfactual scenario where electrons don't accumulate at the interface and instead voltage in the LaAlO 3 builds up forever. The hypothesis has also been called the electronic reconstruction hypothesis, [ 18 ] highlighting the fact that electrons, not ions, move to compensate the building voltage. Another hypothesis is that the conductivity comes from free electrons left by oxygen vacancies in the SrTiO 3 . 
[ 19 ] SrTiO 3 is known to be easily doped by oxygen vacancies, so this was initially considered a promising hypothesis. However, electron energy loss spectroscopy measurements have bounded the density of oxygen vacancies well below the density necessary to supply the measured free electron densities. [ 20 ] Another proposed possibility is that oxygen vacancies in the surface of the LaAlO 3 are remotely doping the SrTiO 3 . [ 12 ] Under generic growth conditions, multiple mechanisms can coexist. A systematic study [ 21 ] across a wide growth parameter space demonstrated different roles played by oxygen vacancy formation and the polar gating at different interfaces. An obvious difference between oxygen vacancies and polar gating in creating the interface conductivity is that the carriers from oxygen vacancies are thermally activated as the donor level of oxygen vacancies is usually separated from the SrTiO 3 conduction band, consequently exhibiting the carrier freeze-out effect [ 22 ] at low temperatures; in contrast, the carriers originating from the polar gating are transferred into the SrTiO 3 conduction band (Ti 3d orbitals) and are therefore degenerate. [ 21 ] Lanthanum is a known dopant in SrTiO 3 , [ 23 ] so it has been suggested that La from the LaAlO 3 mixes into the SrTiO 3 and dopes it n-type. Multiple studies have shown that intermixing takes place at the interface; [ 24 ] however, it is not clear whether there is enough intermixing to provide all of the free carriers. For example, a flipped interface between a SrTiO 3 film and a LaAlO 3 substrate is insulating. [ 25 ] A fourth hypothesis is that the LaAlO 3 crystal structure undergoes octahedral rotations in response to the strain from the SrTiO 3 . These octahedral rotations in the LaAlO 3 induce octahedral rotations in the SrTiO 3 , increasing the Ti d-band width enough so that electrons are no longer localized. 
[ 26 ] Superconductivity was first observed in LaAlO 3 /SrTiO 3 interfaces in 2007, with a critical temperature of ~200 mK. [ 27 ] Like the conductivity, the superconductivity appears to be two-dimensional. [ 2 ] Hints of ferromagnetism in LaAlO 3 /SrTiO 3 were first seen in 2007, when Dutch researchers observed hysteresis in the magnetoresistance of LaAlO 3 /SrTiO 3 . [ 28 ] Follow-up measurements with torque magnetometry indicated that the magnetism in LaAlO 3 /SrTiO 3 persisted all the way to room temperature. [ 29 ] In 2011, researchers at Stanford University used a scanning SQUID to directly image the ferromagnetism, and found that it occurred in heterogeneous patches. [ 3 ] Like the conductivity in LaAlO 3 /SrTiO 3 , the magnetism only appeared when the LaAlO 3 films were thicker than a few unit cells. [ 30 ] However, unlike conductivity, magnetism was seen at SrO-terminated surfaces as well as TiO 2 -terminated surfaces. [ 30 ] The discovery of ferromagnetism in a materials system that also superconducts spurred a flurry of research and debate, because ferromagnetism and superconductivity almost never coexist. [ 3 ] Ferromagnetism requires electron spins to align, while superconductivity typically requires electron spins to anti-align. Magnetoresistance measurements are a major experimental tool for understanding the electronic properties of materials. The magnetoresistance of LaAlO 3 /SrTiO 3 interfaces has been used to reveal the 2D nature of conduction, carrier concentrations (through the Hall effect ), electron mobilities, and more. [ 6 ] At low magnetic field, the magnetoresistance of LaAlO 3 /SrTiO 3 is parabolic versus field, as expected for an ordinary metal. [ 31 ] However, at higher fields, the magnetoresistance appears to become linear versus field. [ 31 ] Linear magnetoresistance can have many causes, but so far there is no scientific consensus on the cause of linear magnetoresistance in LaAlO 3 /SrTiO 3 interfaces.
[ 31 ] Linear magnetoresistance has also been measured in pure SrTiO 3 crystals, [ 32 ] so it may be unrelated to the emergent properties of the interface. At low temperature (T < 30 K), the LaAlO 3 /SrTiO 3 interface exhibits negative in-plane magnetoresistance, [ 31 ] sometimes as large as −90%. [ 4 ] The large negative in-plane magnetoresistance has been ascribed to the interface's enhanced spin-orbit interaction. [ 4 ] [ 33 ] Experimentally, the charge density profile of the electron gas at the LaAlO 3 /SrTiO 3 interface has a strongly asymmetric shape, with a rapid initial decay over the first 2 nm and a pronounced tail that extends to about 11 nm. [ 34 ] [ 35 ] A wide variety of theoretical calculations support this result. Importantly, to obtain the electron distribution one has to take into account the field-dependent dielectric constant of SrTiO 3 . [ 36 ] [ 37 ] [ 38 ] The 2D electron gas that arises at the LaAlO 3 /SrTiO 3 interface is notable for two main reasons. First, it has a very high carrier concentration, on the order of 10 13 cm −2 . Second, if the polar gating hypothesis is true, the 2D electron gas has the potential to be totally free of disorder , unlike other 2D electron gases that require doping or gating to form. However, so far researchers have been unable to synthesize interfaces that realize the promise of low disorder. Most LaAlO 3 /SrTiO 3 interfaces are synthesized using pulsed laser deposition . A high-power laser ablates a LaAlO 3 target, and the plume of ejected material is deposited onto a heated SrTiO 3 substrate. Some LaAlO 3 /SrTiO 3 interfaces have also been synthesized by molecular beam epitaxy , sputtering , and atomic layer deposition . [ 40 ] To better understand the LaAlO 3 /SrTiO 3 interface, researchers have synthesized a number of analogous interfaces between other polar perovskite films and SrTiO 3 . Some of these analogues have properties similar to LaAlO 3 /SrTiO 3 , but some do not.
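The polar gating ("polar catastrophe") picture discussed earlier can be reduced to a toy electrostatic model: each polar LaAlO3 unit cell adds a fixed potential step, and charge transfer begins once the accumulated potential exceeds a threshold set by the band alignment. The step size and threshold below are illustrative assumptions chosen to reproduce the observed four-unit-cell critical thickness, not measured values:

```python
def critical_thickness(step_ev_per_cell: float, threshold_ev: float) -> int:
    """Smallest number of unit cells at which the accumulated potential
    (growing linearly with film thickness) exceeds the transfer threshold."""
    n = 1
    while n * step_ev_per_cell < threshold_ev:
        n += 1
    return n

# Illustrative assumptions: ~0.9 eV added per polar unit cell, and a
# ~3.3 eV offset (comparable to the SrTiO3 band gap) to trigger transfer.
print(critical_thickness(0.9, 3.3))  # -> 4
```

The linear potential build-up with thickness is the essential content of the hypothesis; the numerical values of the step and threshold are where the real physics (dielectric screening, band offsets, surface states) enters.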
As of 2015, there are no commercial applications of the LaAlO 3 /SrTiO 3 interface. However, speculative applications have been suggested, including field-effect devices, sensors, photodetectors, and thermoelectrics; [ 53 ] the related LaVO 3 /SrTiO 3 interface functions as a solar cell, [ 54 ] albeit hitherto with a low efficiency. [ 55 ]
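The carrier freeze-out contrast noted earlier (thermally activated donors versus degenerate transferred carriers) can be sketched with the standard activation law n ∝ exp(−Ea/kBT); the donor depth and sheet density used here are assumed, illustrative values:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activated_density(n0_cm2: float, ea_ev: float, t_kelvin: float) -> float:
    """Sheet carrier density for donors an energy ea_ev below the band edge:
    n = n0 * exp(-Ea / (kB * T))."""
    return n0_cm2 * math.exp(-ea_ev / (K_B * t_kelvin))

# Assumed: 1e13 cm^-2 of donors with a 50 meV activation energy.
for t in (300, 77, 4):
    print(f"T = {t:3d} K: n ≈ {activated_density(1e13, 0.05, t):.2e} cm^-2")
# Degenerate carriers supplied by polar gating, by contrast, remain in the
# conduction band and keep their ~1e13 cm^-2 density at low temperature.
```

The exponential collapse of the activated density at cryogenic temperatures is the freeze-out signature that distinguishes oxygen-vacancy doping from polar gating in transport measurements.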
https://en.wikipedia.org/wiki/Lanthanum_aluminate-strontium_titanate_interface
Lanthanum cuprate usually refers to the inorganic compound with the formula CuLa 2 O 4 . The name implies that the compound consists of a cuprate ([CuO n ] 2n− ) salt of lanthanum (La 3+ ); in fact it is a highly covalent solid. It is prepared by high-temperature reaction of lanthanum oxide and copper(II) oxide followed by annealing under oxygen. [ 1 ] The material adopts an orthorhombically distorted variant of the tetragonal potassium tetrafluoronickelate (K 2 NiF 4 ) structure. [ 1 ] [ 2 ] Replacement of some lanthanum by barium gives the quaternary phase CuLa 1.85 Ba 0.15 O 4 , called lanthanum barium copper oxide . That doped material displays superconductivity at −243 °C (30.1 K), which at the time of its discovery was a remarkably high transition temperature. This discovery initiated research on cuprate superconductors and was the basis of a Nobel Prize in Physics awarded to Georg Bednorz and K. Alex Müller . [ 3 ]
https://en.wikipedia.org/wiki/Lanthanum_cuprate
Lanthanum hydroxide is La(OH) 3 , a hydroxide of the rare-earth element lanthanum . Lanthanum hydroxide can be obtained by adding an alkali such as ammonia to aqueous solutions of lanthanum salts such as lanthanum nitrate . This produces a gel-like precipitate that can then be dried in air. [ 2 ] Alternatively, it can be produced by hydration (addition of water) of lanthanum oxide . [ 3 ] Lanthanum hydroxide does not react much with alkaline substances, but is slightly soluble in acidic solutions. [ 2 ] At temperatures above 330 °C it decomposes into lanthanum oxide hydroxide (LaOOH), which upon further heating decomposes into lanthanum oxide ( La 2 O 3 ): [ 4 ] Lanthanum hydroxide crystallizes in the hexagonal crystal system . Each lanthanum ion in the crystal structure is surrounded by nine hydroxide ions in a tricapped trigonal prism .
https://en.wikipedia.org/wiki/Lanthanum_hydroxide
Lanthanum oxalate is an inorganic compound , a salt of lanthanum and oxalic acid with the chemical formula La 2 (C 2 O 4 ) 3 . [ 1 ] [ 2 ] It can be prepared by the reaction of soluble lanthanum nitrate with an excess of oxalic acid , or by the reaction of lanthanum chloride with oxalic acid. Lanthanum(III) oxalate forms colorless crystals that are poorly soluble in water. [ 3 ] The compound forms various hydrates La 2 (C 2 O 4 ) 3 • n H 2 O , where n = 1, 2, 3, 7, and 10. [ 4 ] [ 5 ] These hydrates decompose when heated. [ 6 ]
https://en.wikipedia.org/wiki/Lanthanum_oxalate
Lanthanum(III) oxide , also known as lanthana , chemical formula La 2 O 3 , is an inorganic compound containing the rare earth element lanthanum and oxygen . It is used in some ferroelectric materials, as a component of optical materials, and as a feedstock for certain catalysts, among other uses. Lanthanum oxide is a white solid that is insoluble in water but dissolves in acidic solutions. La 2 O 3 absorbs moisture from air, converting to lanthanum hydroxide. [ 2 ] Lanthanum oxide has p-type semiconducting properties and a band gap of approximately 5.8 eV. [ 3 ] Its average room-temperature resistivity is 10 kΩ·cm, which decreases with increasing temperature. La 2 O 3 has the lowest lattice energy of the rare earth oxides, with a very high dielectric constant, ε = 27. At low temperatures, La 2 O 3 has an A- M 2 O 3 hexagonal crystal structure. Each La 3+ ion is surrounded by a 7-coordinate group of O 2− ions: the oxygen ions form an octahedron around the metal ion, with one additional oxygen ion above one of the octahedral faces. [ 4 ] At high temperatures, lanthanum oxide converts to a C- M 2 O 3 cubic crystal structure, in which the La 3+ ion is surrounded by six O 2− ions in a hexagonal configuration. [ 5 ] [ 6 ] Lanthanum oxide can crystallize in at least three polymorphs . [ 2 ] Hexagonal La 2 O 3 has been produced by spray pyrolysis of lanthanum chloride. [ 7 ] An alternative route to hexagonal La 2 O 3 involves precipitation of nominal La(OH) 3 from aqueous solution using a combination of 2.5% NH 3 and the surfactant sodium dodecyl sulfate, followed by heating and stirring for 24 hours at 80 °C. Lanthanum oxide is used as an additive to develop certain ferroelectric materials, such as La-doped bismuth titanate ( Bi 4 Ti 3 O 12 , BLT).
Lanthanum oxide is used in optical materials; optical glasses are often doped with La 2 O 3 to improve their refractive index, chemical durability, and mechanical strength. [ 8 ] Addition of La 2 O 3 to a glass melt raises the glass transition temperature from 658 °C to 679 °C, and also increases the density, microhardness, and refractive index of the glass. Lanthanum oxide is most useful as a precursor to other lanthanum compounds. [ 9 ] Neither the oxide nor any of the derived materials enjoys substantial commercial value, unlike some of the other lanthanides. Many reports describe efforts toward practical applications of La 2 O 3 , as described below. La 2 O 3 forms glasses of high density, refractive index, and hardness. Together with oxides of tungsten , tantalum , and thorium , La 2 O 3 improves the resistance of the glass to attack by alkali. La 2 O 3 is an ingredient in some piezoelectric and thermoelectric materials. La 2 O 3 has been examined as a catalyst for the oxidative coupling of methane . [ 10 ]
https://en.wikipedia.org/wiki/Lanthanum_oxide
LaNi 5 is a hexagonal intermetallic compound composed of the rare earth element lanthanum and the transition metal nickel . It adopts the calcium pentacopper (CaCu 5 ) crystal structure, melts congruently (to a liquid of the same composition), and has substantial hydrogen storage capacity. [ 3 ] In the CaCu 5 -type structure, with a hexagonal lattice and space group P6/mmm (No. 191), the lanthanum atom is located at the coordinate origin, site 1a (0,0,0); two nickel atoms are located at site 2c, (1/3,2/3,0) and (2/3,1/3,0); and the other three are at site 3g, (1/2,0,1/2), (0,1/2,1/2) and (1/2,1/2,1/2), with a = 511 pm and c = 397 pm. The unit cell contains one LaNi 5 formula unit and has a volume of 90×10 −24 cm 3 ; it contains six deformed tetrahedral voids that can be filled by hydrogen atoms. [ 4 ] As a hydrogen storage alloy, LaNi 5 absorbs hydrogen to form the hydride LaNi 5 H x (x ≈ 6) when the pressure is slightly elevated and the temperature is low; when the pressure decreases or the temperature increases, the hydrogen is released, allowing repeated cycles of absorption and release. Energy must be supplied for the dehydrogenation process to proceed, as it is endothermic; a decrease in temperature will cause the reaction to stop. [ 5 ] The hydrogen storage density per unit volume (crystal) of LaNi 5 H 6.5 at 2 bar is equal to the density of gaseous molecular hydrogen at 1800 bar, and all the hydrogen can be desorbed at 2 bar. Although the hydrogen storage density in practical applications is reduced by the aggregation of some LaNi 5 powders, it is still higher than the density of liquid hydrogen. This allows safe handling of hydrogen fuel. [ 5 ] To improve its hydrogen storage performance, metals such as lead or manganese are often used to partially replace the nickel.
Currently, LaNi 5 is commonly used in the storage and transportation of hydrogen, hydrogen-powered vehicles, fuel cells, the separation and purification of hydrogen, and as a propylene hydrogenation catalyst.
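The unit-cell data quoted above (one LaNi5 formula unit and six hydrogen sites in a volume of 90×10⁻²⁴ cm³) allow a quick check of the claim that the hydride packs hydrogen more densely than liquid hydrogen (about 0.071 g/cm³):

```python
N_A = 6.022e23   # Avogadro's number, mol^-1
M_H = 1.008      # molar mass of atomic hydrogen, g/mol
V_CELL = 90e-24  # LaNi5 unit-cell volume from the text, cm^3
H_PER_CELL = 6   # hydrogen atoms per unit cell in LaNi5H6

# Mass of hydrogen stored per unit volume of the hydride crystal:
rho_h = H_PER_CELL * M_H / (N_A * V_CELL)
print(f"H density in LaNi5H6: {rho_h:.3f} g/cm^3")  # ≈ 0.112 g/cm^3

RHO_LIQUID_H2 = 0.071  # g/cm^3, density of liquid hydrogen
print(rho_h > RHO_LIQUID_H2)  # True: denser than liquid hydrogen
```

The result, roughly 0.11 g of hydrogen per cm³ of crystal, is consistent with the article's statement that even powder aggregation leaves the practical storage density above that of liquid hydrogen.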
https://en.wikipedia.org/wiki/Lanthanum_pentanickel
Lanthanum strontium cobalt ferrite ( LSCF ), also called lanthanum strontium cobaltite ferrite, is a specific ceramic oxide derived from lanthanum cobaltite of the ferrite group. It is a phase containing lanthanum(III) oxide , strontium oxide , cobalt oxide and iron oxide with the formula La x Sr 1− x Co y Fe 1− y O 3 , where 0.1 ≤ x ≤ 0.4 and 0.2 ≤ y ≤ 0.8. [ 1 ] It is black in color and crystallizes in a distorted hexagonal perovskite structure . [ 2 ] LSCF undergoes phase transformations at various temperatures depending on the composition. This material is a mixed ionic-electronic conductor with comparatively high electronic conductivity (200+ S/cm) and good ionic conductivity (0.2 S/cm). [ 3 ] It is typically non-stoichiometric and can be reduced further at high temperature in low oxygen partial pressures or in the presence of a reducing agent such as carbon. [ 4 ] LSCF is being investigated as a material for intermediate-temperature solid oxide fuel cell cathodes and, potentially, as a direct carbon fuel cell anode. [ 2 ] LSCF is also investigated as a membrane material for separating oxygen from air , for use in, e.g., cleaner-burning power plants. [ 5 ]
https://en.wikipedia.org/wiki/Lanthanum_strontium_cobalt_ferrite
In mathematics , Laplace's method , named after Pierre-Simon Laplace , is a technique used to approximate integrals of the form where f {\displaystyle f} is a twice- differentiable function , M {\displaystyle M} is a large number , and the endpoints a {\displaystyle a} and b {\displaystyle b} could be infinite. This technique was originally presented in the book by Laplace (1774) . In Bayesian statistics , Laplace's approximation can refer to either approximating the posterior normalizing constant with Laplace's method or approximating the posterior distribution with a Gaussian centered at the maximum a posteriori estimate . [ 1 ] [ 2 ] Laplace approximations are used in the integrated nested Laplace approximations method for fast approximations of Bayesian inference . Let the function f ( x ) {\displaystyle f(x)} have a unique global maximum at x 0 {\displaystyle x_{0}} . M > 0 {\displaystyle M>0} is a constant here. The following two functions are considered: Then, x 0 {\displaystyle x_{0}} is the global maximum of g {\displaystyle g} and h {\displaystyle h} as well. Hence: As M increases, the ratio for h {\displaystyle h} will grow exponentially, while the ratio for g {\displaystyle g} does not change. Thus, significant contributions to the integral of this function will come only from points x {\displaystyle x} in a neighborhood of x 0 {\displaystyle x_{0}} , which can then be estimated. To state and motivate the method, one must make several assumptions. It is assumed that x 0 {\displaystyle x_{0}} is not an endpoint of the interval of integration and that the values f ( x ) {\displaystyle f(x)} cannot be very close to f ( x 0 ) {\displaystyle f(x_{0})} unless x {\displaystyle x} is close to x 0 {\displaystyle x_{0}} . f ( x ) {\displaystyle f(x)} can be expanded around x 0 {\displaystyle x_{0}} by Taylor's theorem , where R = O ( ( x − x 0 ) 3 ) {\displaystyle R=O\left((x-x_{0})^{3}\right)} (see: big O notation ). 
Since f {\displaystyle f} has a global maximum at x 0 {\displaystyle x_{0}} , and x 0 {\displaystyle x_{0}} is not an endpoint, it is a stationary point , i.e. f ′ ( x 0 ) = 0 {\displaystyle f'(x_{0})=0} . Therefore, the second-order Taylor polynomial approximating f ( x ) {\displaystyle f(x)} is Then, just one more step is needed to get a Gaussian distribution. Since x 0 {\displaystyle x_{0}} is a global maximum of the function f {\displaystyle f} it can be stated, by definition of the second derivative , that f ″ ( x 0 ) ≤ 0 {\displaystyle f''(x_{0})\leq 0} , thus giving the relation f ( x ) ≈ f ( x 0 ) − 1 2 | f ″ ( x 0 ) | ( x − x 0 ) 2 {\displaystyle f(x)\approx f(x_{0})-{\frac {1}{2}}|f''(x_{0})|(x-x_{0})^{2}} for x {\displaystyle x} close to x 0 {\displaystyle x_{0}} . The integral can then be approximated with: If f ″ ( x 0 ) < 0 {\displaystyle f''(x_{0})<0} this latter integral becomes a Gaussian integral if we replace the limits of integration by − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } ; when M {\displaystyle M} is large this creates only a small error because the exponential decays very fast away from x 0 {\displaystyle x_{0}} . Computing this Gaussian integral we obtain: A generalization of this method and extension to arbitrary precision is provided by the book Fog (2008) . Suppose f ( x ) {\displaystyle f(x)} is a twice continuously differentiable function on [ a , b ] , {\displaystyle [a,b],} and there exists a unique point x 0 ∈ ( a , b ) {\displaystyle x_{0}\in (a,b)} such that: Then: Lower bound: Let ε > 0 {\displaystyle \varepsilon >0} . Since f ″ {\displaystyle f''} is continuous there exists δ > 0 {\displaystyle \delta >0} such that if | x 0 − c | < δ {\displaystyle |x_{0}-c|<\delta } then f ″ ( c ) ≥ f ″ ( x 0 ) − ε . 
{\displaystyle f''(c)\geq f''(x_{0})-\varepsilon .} By Taylor's Theorem , for any x ∈ ( x 0 − δ , x 0 + δ ) , {\displaystyle x\in (x_{0}-\delta ,x_{0}+\delta ),} Then we have the following lower bound: where the last equality was obtained by a change of variables Remember f ″ ( x 0 ) < 0 {\displaystyle f''(x_{0})<0} so we can take the square root of its negation. If we divide both sides of the above inequality by and take the limit we get: since this is true for arbitrary ε {\displaystyle \varepsilon } we get the lower bound: Note that this proof works also when a = − ∞ {\displaystyle a=-\infty } or b = ∞ {\displaystyle b=\infty } (or both). Upper bound: The proof is similar to that of the lower bound but there are a few inconveniences. Again we start by picking an ε > 0 {\displaystyle \varepsilon >0} but in order for the proof to work we need ε {\displaystyle \varepsilon } small enough so that f ″ ( x 0 ) + ε < 0. {\displaystyle f''(x_{0})+\varepsilon <0.} Then, as above, by continuity of f ″ {\displaystyle f''} and Taylor's Theorem we can find δ > 0 {\displaystyle \delta >0} so that if | x − x 0 | < δ {\displaystyle |x-x_{0}|<\delta } , then Lastly, by our assumptions (assuming a , b {\displaystyle a,b} are finite) there exists an η > 0 {\displaystyle \eta >0} such that if | x − x 0 | ≥ δ {\displaystyle |x-x_{0}|\geq \delta } , then f ( x ) ≤ f ( x 0 ) − η {\displaystyle f(x)\leq f(x_{0})-\eta } . Then we can calculate the following upper bound: If we divide both sides of the above inequality by and take the limit we get: Since ε {\displaystyle \varepsilon } is arbitrary we get the upper bound: And combining this with the lower bound gives the result. Note that the above proof obviously fails when a = − ∞ {\displaystyle a=-\infty } or b = ∞ {\displaystyle b=\infty } (or both). To deal with these cases, we need some extra assumptions. 
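The asymptotic formula proved above can be checked numerically. Taking f(t) = ln t − t on (0, ∞), the maximum is at t0 = 1 with f(t0) = −1 and f''(t0) = −1, and the exact value of the integral of e^{M f(t)} is Γ(M+1)/M^{M+1}, so the Laplace approximation reproduces Stirling's formula. Working in log space avoids overflow for large M:

```python
import math

def log_laplace_estimate(m: float) -> float:
    """log of e^{M f(t0)} * sqrt(2*pi / (M * |f''(t0)|)) for f(t) = ln t - t,
    where t0 = 1, f(t0) = -1 and f''(t0) = -1."""
    return -m + 0.5 * math.log(2 * math.pi / m)

def log_exact(m: float) -> float:
    """log of the exact integral, Gamma(M+1) / M^(M+1)."""
    return math.lgamma(m + 1) - (m + 1) * math.log(m)

# The relative error shrinks roughly like 1/(12 M):
for m in (10, 100, 1000):
    rel_err = abs(math.exp(log_laplace_estimate(m) - log_exact(m)) - 1)
    print(f"M = {m:4d}: relative error = {rel_err:.1e}")
```

This illustrates the point made earlier that the method controls the relative error: the absolute values of both integrals shrink like e^{−M}, but their ratio approaches 1.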
A sufficient (not necessary) assumption is that for n = 1 , {\displaystyle n=1,} and that the number η {\displaystyle \eta } as above exists (note that this must be an assumption in the case when the interval [ a , b ] {\displaystyle [a,b]} is infinite). The proof proceeds otherwise as above, but with a slightly different approximation of integrals: When we divide by we get for this term whose limit as n → ∞ {\displaystyle n\to \infty } is 0 {\displaystyle 0} . The rest of the proof (the analysis of the interesting term) proceeds as above. The given condition in the infinite interval case is, as said above, sufficient but not necessary. However, the condition is fulfilled in many, if not in most, applications: the condition simply says that the integral we are studying must be well-defined (not infinite) and that the maximum of the function at x 0 {\displaystyle x_{0}} must be a "true" maximum (the number η > 0 {\displaystyle \eta >0} must exist). There is no need to demand that the integral is finite for n = 1 {\displaystyle n=1} but it is enough to demand that the integral is finite for some n = N . {\displaystyle n=N.} This method relies on four basic concepts. The "approximation" in this method is related to the relative error and not the absolute error . Therefore, if we set the integral can be written as where s {\displaystyle s} is small when M {\displaystyle M} is large, and the relative error will be Now, let us separate this integral into two parts: the y ∈ [ − D y , D y ] {\displaystyle y\in [-D_{y},D_{y}]} region and the rest. Taylor-expanding M ( f ( x ) − f ( x 0 ) ) {\displaystyle M(f(x)-f(x_{0}))} around x 0 {\displaystyle x_{0}} and translating x {\displaystyle x} to y {\displaystyle y} (since the comparison is done in y-space) gives Note that f ′ ( x 0 ) = 0 {\displaystyle f'(x_{0})=0} because x 0 {\displaystyle x_{0}} is a stationary point. 
This equation shows that the terms beyond the second derivative in this Taylor expansion are suppressed by the order of 1 M {\displaystyle {\tfrac {1}{\sqrt {M}}}} , so that exp ⁡ ( M ( f ( x ) − f ( x 0 ) ) ) {\displaystyle \exp(M(f(x)-f(x_{0})))} approaches the Gaussian function as shown in the figure. Moreover, because the comparison is done in y-space, y {\displaystyle y} is fixed in y ∈ [ − D y , D y ] {\displaystyle y\in [-D_{y},D_{y}]} , which corresponds to x ∈ [ − s D y , s D y ] {\displaystyle x\in [-sD_{y},sD_{y}]} ; since s {\displaystyle s} is inversely proportional to M {\displaystyle {\sqrt {M}}} , the chosen region of x {\displaystyle x} shrinks as M {\displaystyle M} increases. By the third concept, even if we choose a very large D y , sD y will still become very small as M {\displaystyle M} is increased. How, then, can we guarantee that the integral over the rest tends to 0 when M {\displaystyle M} is large enough? The basic idea is to find a function m ( x ) {\displaystyle m(x)} such that m ( x ) ≥ f ( x ) {\displaystyle m(x)\geq f(x)} on the rest of the region and the integral of e M m ( x ) {\displaystyle e^{Mm(x)}} tends to zero when M {\displaystyle M} grows. Since e M f ( x ) ≤ e M m ( x ) {\displaystyle e^{Mf(x)}\leq e^{Mm(x)}} there, the integral of e M f ( x ) {\displaystyle e^{Mf(x)}} over that region then tends to zero as well. For simplicity, choose m ( x ) {\displaystyle m(x)} as a tangent through the point x = s D y {\displaystyle x=sD_{y}} as shown in the figure: If the interval of integration of this method is finite, then as long as f ( x ) {\displaystyle f(x)} is continuous on the rest of the region, it will always be smaller than the m ( x ) {\displaystyle m(x)} shown above when M {\displaystyle M} is large enough. 
As will be proved later, the integral of e M m ( x ) {\displaystyle e^{Mm(x)}} tends to zero when M {\displaystyle M} is large enough. If the interval of integration of this method is infinite, m ( x ) {\displaystyle m(x)} and f ( x ) {\displaystyle f(x)} might cross each other repeatedly. If so, we cannot guarantee that the integral of e M f ( x ) {\displaystyle e^{Mf(x)}} will tend to zero. For example, in the case of f ( x ) = sin ⁡ ( x ) x , {\displaystyle f(x)={\tfrac {\sin(x)}{x}},} ∫ 0 ∞ e M f ( x ) d x {\displaystyle \int _{0}^{\infty }e^{Mf(x)}dx} always diverges. Therefore, we need to require that ∫ d ∞ e M f ( x ) d x {\displaystyle \int _{d}^{\infty }e^{Mf(x)}dx} converges for the infinite interval case. If so, this integral tends to zero when d {\displaystyle d} is large enough, and we can choose this d {\displaystyle d} as the crossing of m ( x ) {\displaystyle m(x)} and f ( x ) . {\displaystyle f(x).} One might ask why not require instead that ∫ d ∞ e f ( x ) d x {\displaystyle \int _{d}^{\infty }e^{f(x)}dx} converge. An example shows the reason: suppose the rest part of f ( x ) {\displaystyle f(x)} is − ln ⁡ x , {\displaystyle -\ln x,} then e f ( x ) = 1 x {\displaystyle e^{f(x)}={\tfrac {1}{x}}} and its integral diverges; however, when M = 2 , {\displaystyle M=2,} the integral of e M f ( x ) = 1 x 2 {\displaystyle e^{Mf(x)}={\tfrac {1}{x^{2}}}} converges. So the integral of some functions diverges when M {\displaystyle M} is not large, but converges once M {\displaystyle M} is large enough. Based on these four concepts, we can derive the relative error of this method. Laplace's approximation is sometimes written as where h {\displaystyle h} is positive. Importantly, the accuracy of the approximation depends on the variable of integration, that is, on what stays in g ( x ) {\displaystyle g(x)} and what goes into h ( x ) . 
{\displaystyle h(x).} [ 3 ] First, use x 0 = 0 {\displaystyle x_{0}=0} to denote the global maximum, which will simplify this derivation. We are interested in the relative error, written as | R | {\displaystyle |R|} , where So, if we let and A 0 ≡ e − π y 2 {\displaystyle A_{0}\equiv e^{-\pi y^{2}}} , we can get since ∫ − ∞ ∞ A 0 d y = 1 {\displaystyle \int _{-\infty }^{\infty }A_{0}\,dy=1} . For the upper bound, note that | A + B | ≤ | A | + | B | , {\displaystyle |A+B|\leq |A|+|B|,} thus we can separate this integration into 5 parts with 3 different types (a), (b) and (c), respectively. Therefore, Since ( a 1 ) {\displaystyle (a_{1})} and ( a 2 ) {\displaystyle (a_{2})} are similar, only ( a 1 ) {\displaystyle (a_{1})} will be calculated here; ( b 1 ) {\displaystyle (b_{1})} and ( b 2 ) {\displaystyle (b_{2})} are similar too, so only ( b 1 ) {\displaystyle (b_{1})} will be calculated. For ( a 1 ) {\displaystyle (a_{1})} , after the change of variable z ≡ π y 2 {\displaystyle z\equiv \pi y^{2}} , we can get This means that as long as D y {\displaystyle D_{y}} is large enough, it will tend to zero. For ( b 1 ) {\displaystyle (b_{1})} , we can get where and h ( x ) {\displaystyle h(x)} should have the same sign as h ( 0 ) {\displaystyle h(0)} in this region. Let us choose m ( x ) {\displaystyle m(x)} as the tangent through the point at x = s D y {\displaystyle x=sD_{y}} , i.e. m ( s y ) = g ( s D y ) − g ( 0 ) + g ′ ( s D y ) ( s y − s D y ) {\displaystyle m(sy)=g(sD_{y})-g(0)+g'(sD_{y})\left(sy-sD_{y}\right)} which is shown in the figure. The figure shows that as s {\displaystyle s} or D y {\displaystyle D_{y}} decreases, the region satisfying the above inequality grows. Therefore, if we want to find a suitable m ( x ) {\displaystyle m(x)} to cover the whole f ( x ) {\displaystyle f(x)} over the interval of ( b 1 ) {\displaystyle (b_{1})} , D y {\displaystyle D_{y}} will have an upper limit. 
Moreover, because the integral of e − α x {\displaystyle e^{-\alpha x}} is simple, it can be used to estimate the relative error contributed by ( b 1 ) {\displaystyle (b_{1})} . Based on Taylor expansion, we can get and and then substitute them back into the calculation of ( b 1 ) {\displaystyle (b_{1})} . The remainders of these two expansions are both inversely proportional to the square root of M {\displaystyle M} , so they are dropped here to simplify the calculation; keeping them would give a sharper but more cumbersome bound. Therefore, it will tend to zero when D y {\displaystyle D_{y}} gets larger, but the upper bound of D y {\displaystyle D_{y}} must be kept in mind during this calculation. For the integration near x = 0 {\displaystyle x=0} , we can also use Taylor's Theorem to calculate it. When h ′ ( 0 ) ≠ 0 {\displaystyle h'(0)\neq 0} we find that it is inversely proportional to the square root of M {\displaystyle M} . In fact, ( c ) {\displaystyle (c)} behaves the same way when h ( x ) {\displaystyle h(x)} is constant. In conclusion, the integral near the stationary point gets smaller as M {\displaystyle {\sqrt {M}}} gets larger, and the remaining parts tend to zero as long as D y {\displaystyle D_{y}} is large enough; however, D y {\displaystyle D_{y}} has an upper limit, determined by whether the function m ( x ) {\displaystyle m(x)} is always larger than g ( x ) − g ( 0 ) {\displaystyle g(x)-g(0)} in the remaining region. However, as long as we can find one m ( x ) {\displaystyle m(x)} satisfying this condition, the upper bound of D y {\displaystyle D_{y}} can be chosen as directly proportional to M {\displaystyle {\sqrt {M}}} since m ( x ) {\displaystyle m(x)} is a tangent through the point of g ( x ) − g ( 0 ) {\displaystyle g(x)-g(0)} at x = s D y {\displaystyle x=sD_{y}} . So, the bigger M {\displaystyle M} is, the bigger D y {\displaystyle D_{y}} can be. 
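The shrinking relative error described above can be illustrated numerically. The sketch below (the test function f(x) = sin x and the Simpson-rule quadrature are choices made for this illustration, not taken from the text) compares the exact value of ∫₀^π e^{M(f(x)−f(x₀))} dx with the leading Gaussian value √(2π/(M|f″(x₀)|)); working with f(x) − f(x₀) in the exponent avoids overflow for large M.

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def laplace_ratio(M):
    # f(x) = sin x has its maximum at x0 = pi/2 with f''(x0) = -1.
    integral = simpson(lambda x: math.exp(M * (math.sin(x) - 1.0)), 0.0, math.pi)
    gaussian = math.sqrt(2 * math.pi / M)   # sqrt(2*pi / (M * |f''(x0)|))
    return integral / gaussian

for M in (10, 100, 1000):
    print(M, laplace_ratio(M))   # ratio tends to 1 as M grows
```

The ratio approaches 1 roughly like 1 + 1/(8M) for this particular f, consistent with a relative error that vanishes as M grows.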
In the multivariate case, where x {\displaystyle \mathbf {x} } is a d {\displaystyle d} -dimensional vector and f ( x ) {\displaystyle f(\mathbf {x} )} is a scalar function of x {\displaystyle \mathbf {x} } , Laplace's approximation is usually written as: where H ( f ) ( x 0 ) {\displaystyle H(f)(\mathbf {x} _{0})} is the Hessian matrix of f {\displaystyle f} evaluated at x 0 {\displaystyle \mathbf {x} _{0}} and where | ⋅ | {\displaystyle |\cdot |} denotes matrix determinant . Analogously to the univariate case, the Hessian is required to be negative-definite . [ 4 ] By the way, although x {\displaystyle \mathbf {x} } denotes a d {\displaystyle d} -dimensional vector, the term d x {\displaystyle d\mathbf {x} } denotes an infinitesimal volume here, i.e. d x := d x 1 d x 2 ⋯ d x d {\displaystyle d\mathbf {x} :=dx_{1}dx_{2}\cdots dx_{d}} . In extensions of Laplace's method, complex analysis , and in particular Cauchy's integral formula , is used to find a contour of steepest descent for an (asymptotically with large M ) equivalent integral, expressed as a line integral . In particular, if no point x 0 where the derivative of f {\displaystyle f} vanishes exists on the real line, it may be necessary to deform the integration contour to an optimal one, where the above analysis will be possible. Again, the main idea is to reduce, at least asymptotically, the calculation of the given integral to that of a simpler integral that can be explicitly evaluated. See the book of Erdelyi (1956) for a simple discussion (where the method is termed steepest descents ). The appropriate formulation for the complex z -plane is for a path passing through the saddle point at z 0 . Note the explicit appearance of a minus sign to indicate the direction of the second derivative: one must not take the modulus. 
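The multivariate formula can be checked with a small numerical sketch (the function, domain, and grid size below are choices for this illustration only). Take f(x, y) = cos x + cos y on [−π, π]²: the maximum sits at the origin, the Hessian there is −I (negative-definite, as required), so after factoring out e^{M f(x₀)} the prediction is (2π/M)^{d/2}/√det(−H) = 2π/M.

```python
import math

def laplace_2d_ratio(M, n=400):
    # f(x, y) = cos x + cos y: maximum at (0, 0), Hessian = -I, det(-H) = 1.
    # Midpoint-rule double integral of exp(M*(f - f_max)) over [-pi, pi]^2.
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        x = -math.pi + (i + 0.5) * h
        ex = math.exp(M * (math.cos(x) - 1.0))
        for j in range(n):
            y = -math.pi + (j + 0.5) * h
            total += ex * math.exp(M * (math.cos(y) - 1.0))
    integral = total * h * h
    prediction = 2 * math.pi / M   # (2*pi/M)^(d/2) / sqrt(det(-H)), d = 2
    return integral / prediction

print(laplace_2d_ratio(50.0))   # close to 1
```
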
Also note that if the integrand is meromorphic , one may have to add residues corresponding to poles traversed while deforming the contour (see for example section 3 of Okounkov's paper Symmetric functions and random partitions ). An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method . Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems . Given a contour C in the complex sphere , a function f {\displaystyle f} defined on that contour and a special point, such as infinity, a holomorphic function M is sought away from C , with prescribed jump across C , and with a given normalization at infinity. If f {\displaystyle f} and hence M are matrices rather than scalars this is a problem that in general does not admit an explicit solution. An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour. The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, "steepest descent contours" solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov). The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models , random matrices and combinatorics . 
In the generalization, evaluation of the integral is considered equivalent to finding the norm of the distribution with density Denoting the cumulative distribution F ( x ) {\displaystyle F(x)} , if there is a diffeomorphic Gaussian distribution with density the norm is given by and the corresponding diffeomorphism is where Φ {\displaystyle \Phi } denotes cumulative standard normal distribution function. In general, any distribution diffeomorphic to the Gaussian distribution has density and the median -point is mapped to the median of the Gaussian distribution. Matching the logarithm of the density functions and their derivatives at the median point up to a given order yields a system of equations that determine the approximate values of γ {\displaystyle \gamma } and g {\displaystyle g} . The approximation was introduced in 2019 by D. Makogon and C. Morais Smith primarily in the context of partition function evaluation for a system of interacting fermions. [ 5 ] For complex integrals in the form: with t ≫ 1 , {\displaystyle t\gg 1,} we make the substitution t = iu and the change of variable s = c + i x {\displaystyle s=c+ix} to get the bilateral Laplace transform: We then split g ( c + ix ) in its real and complex part, after which we recover u = t / i . This is useful for inverse Laplace transforms , the Perron formula and complex integration. Laplace's method can be used to derive Stirling's approximation for a large integer N . From the definition of the Gamma function , we have Now we change variables, letting x = N z {\displaystyle x=Nz} so that d x = N d z . {\displaystyle dx=Ndz.} Plug these values back in to obtain This integral has the form necessary for Laplace's method with which is twice-differentiable: The maximum of f ( z ) {\displaystyle f(z)} lies at z 0 = 1, and the second derivative of f ( z ) {\displaystyle f(z)} has the value −1 at this point. 
Therefore, we obtain This article incorporates material from saddle point approximation on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
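The Stirling approximation obtained above can be checked directly; the short sketch below compares N! with √(2πN)(N/e)^N for a few values of N.

```python
import math

def stirling(n):
    """Laplace-method approximation to n! derived above."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (10, 20, 50):
    exact = math.factorial(n)
    print(n, stirling(n) / exact)   # ratio approaches 1 as n grows
```

The relative error shrinks roughly like 1/(12N), in line with the asymptotic nature of the approximation.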
https://en.wikipedia.org/wiki/Laplace's_method
Irrotational flow occurs where the curl of the velocity of the fluid is zero everywhere. That is when ∇ × v → = 0 {\displaystyle \nabla \times {\vec {v}}=0} Similarly, if it is assumed that the fluid is incompressible: ρ ( x , y , z , t ) = ρ (a constant) {\displaystyle \rho (x,y,z,t)=\rho {\text{ (a constant)}}} Then, starting with the continuity equation : ∂ ρ ∂ t + ∇ ⋅ ( ρ v → ) = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho {\vec {v}})=0} The condition of incompressibility means that the time derivative of the density is 0, and that the density can be pulled out of the divergence, and divided out, thus leaving the continuity equation for an incompressible system: ∇ ⋅ v → = 0 {\displaystyle \nabla \cdot {\vec {v}}=0} Now, the Helmholtz decomposition can be used to write the velocity as the sum of the gradient of a scalar potential and the curl of a vector potential. That is: v → = − ∇ ϕ + ∇ × A → {\displaystyle {\vec {v}}=-\nabla \phi +\nabla \times {\vec {A}}} Since the curl of a gradient is always 0, imposing the condition ∇ × v → = 0 {\displaystyle \nabla \times {\vec {v}}=0} implies ∇ × ( ∇ × A → ) = 0 {\displaystyle \nabla \times (\nabla \times {\vec {A}})=0} , which is satisfied in particular by taking the vector potential A → {\displaystyle {\vec {A}}} to be 0. So, by the condition of irrotational flow: v → = − ∇ ϕ {\displaystyle {\vec {v}}=-\nabla \phi } And then using the continuity equation ∇ ⋅ v → = 0 {\displaystyle \nabla \cdot {\vec {v}}=0} , the scalar potential can be substituted back in to find Laplace's Equation for irrotational flow: ∇ 2 ϕ = 0 {\displaystyle \nabla ^{2}\phi =0\,} Note that the Laplace equation is a well-studied linear partial differential equation. It has infinitely many solutions; however, most solutions can be discarded when considering physical systems, as boundary conditions completely determine the velocity potential . 
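The role of boundary conditions can be illustrated with a minimal numerical sketch (the grid, the harmonic test function x² − y², and the Jacobi iteration below are illustrative choices, not part of the original text): fixing φ on the boundary of a square and relaxing the interior toward the discrete Laplace equation recovers the potential everywhere.

```python
# Jacobi relaxation for Laplace's equation on a square grid.
# phi = x^2 - y^2 is harmonic, and the 5-point stencil is exact for
# quadratics, so the converged interior reproduces it exactly.
N = 21                      # grid points per side
h = 1.0 / (N - 1)

def exact(i, j):
    x, y = i * h, j * h
    return x * x - y * y

# Boundary values fixed to the exact potential, interior initialized to zero.
phi = [[exact(i, j) if i in (0, N - 1) or j in (0, N - 1) else 0.0
        for j in range(N)] for i in range(N)]

for _ in range(3000):
    new = [row[:] for row in phi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (phi[i + 1][j] + phi[i - 1][j]
                                + phi[i][j + 1] + phi[i][j - 1])
    phi = new

err = max(abs(phi[i][j] - exact(i, j))
          for i in range(N) for j in range(N))
print("max error:", err)
```

Only the boundary data were supplied; the interior is determined by them, as the text states.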
Examples of common boundary conditions include the velocity of the fluid, determined by v → = − ∇ ϕ {\displaystyle {\vec {v}}=-\nabla \phi } , being 0 on the boundaries of the system. There is considerable overlap with electromagnetism when solving this equation in general, as the Laplace equation also models the electrostatic potential in a vacuum. [ 1 ] There are many reasons to study irrotational flow, among them:
https://en.wikipedia.org/wiki/Laplace_equation_for_irrotational_flow
In linear algebra , the Laplace expansion , named after Pierre-Simon Laplace , also called cofactor expansion , is an expression of the determinant of an n × n - matrix B as a weighted sum of minors , which are the determinants of some ( n − 1) × ( n − 1) - submatrices of B . Specifically, for every i , the Laplace expansion along the i th row is the equality det ( B ) = ∑ j = 1 n ( − 1 ) i + j b i , j m i , j , {\displaystyle {\begin{aligned}\det(B)&=\sum _{j=1}^{n}(-1)^{i+j}b_{i,j}m_{i,j},\end{aligned}}} where b i , j {\displaystyle b_{i,j}} is the entry of the i th row and j th column of B , and m i , j {\displaystyle m_{i,j}} is the determinant of the submatrix obtained by removing the i th row and the j th column of B . Similarly, the Laplace expansion along the j th column is the equality det ( B ) = ∑ i = 1 n ( − 1 ) i + j b i , j m i , j . {\displaystyle {\begin{aligned}\det(B)&=\sum _{i=1}^{n}(-1)^{i+j}b_{i,j}m_{i,j}.\end{aligned}}} (Each identity implies the other, since the determinants of a matrix and its transpose are the same.) The coefficient ( − 1 ) i + j m i , j {\displaystyle (-1)^{i+j}m_{i,j}} of b i , j {\displaystyle b_{i,j}} in the above sum is called the cofactor of b i , j {\displaystyle b_{i,j}} in B . The Laplace expansion is often useful in proofs, as in, for example, allowing recursion on the size of matrices. It is also of didactic interest for its simplicity and as one of several ways to view and compute the determinant. For large matrices, it quickly becomes inefficient to compute when compared to Gaussian elimination . Consider the matrix The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. 
For instance, an expansion along the first row yields: Laplace expansion along the second column yields the same result: It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third column is twice the second column, and hence its determinant is zero. Suppose B {\displaystyle B} is an n × n matrix and i , j ∈ { 1 , 2 , … , n } . {\displaystyle i,j\in \{1,2,\dots ,n\}.} For clarity we also label the entries of B {\displaystyle B} that compose its i , j {\displaystyle i,j} minor matrix M i j {\displaystyle M_{ij}} as ( a s t ) {\displaystyle (a_{st})} for 1 ≤ s , t ≤ n − 1. {\displaystyle 1\leq s,t\leq n-1.} Consider the terms in the expansion of | B | {\displaystyle |B|} that have b i j {\displaystyle b_{ij}} as a factor. Each has the form for some permutation τ ∈ S n with τ ( i ) = j {\displaystyle \tau (i)=j} , and a unique and evidently related permutation σ ∈ S n − 1 {\displaystyle \sigma \in S_{n-1}} which selects the same minor entries as τ . Similarly each choice of σ determines a corresponding τ i.e. the correspondence σ ↔ τ {\displaystyle \sigma \leftrightarrow \tau } is a bijection between S n − 1 {\displaystyle S_{n-1}} and { τ ∈ S n : τ ( i ) = j } . {\displaystyle \{\tau \in S_{n}\colon \tau (i)=j\}.} Using Cauchy's two-line notation , the explicit relation between τ {\displaystyle \tau } and σ {\displaystyle \sigma } can be written as where ( ← ) j {\displaystyle (\leftarrow )_{j}} is a temporary shorthand notation for a cycle ( n , n − 1 , ⋯ , j + 1 , j ) {\displaystyle (n,n-1,\cdots ,j+1,j)} . This operation decrements all indices larger than j so that every index fits in the set {1,2,...,n-1} The permutation τ can be derived from σ as follows. Define σ ′ ∈ S n {\displaystyle \sigma '\in S_{n}} by σ ′ ( k ) = σ ( k ) {\displaystyle \sigma '(k)=\sigma (k)} for 1 ≤ k ≤ n − 1 {\displaystyle 1\leq k\leq n-1} and σ ′ ( n ) = n {\displaystyle \sigma '(n)=n} . 
Then σ ′ {\displaystyle \sigma '} is expressed as Now, the operation that applies ( ← ) i {\displaystyle (\leftarrow )_{i}} first and then σ ′ {\displaystyle \sigma '} is (note that applying A before B is equivalent to applying the inverse of A to the upper row of B in two-line notation) where ( ← ) i {\displaystyle (\leftarrow )_{i}} is a temporary shorthand notation for ( n , n − 1 , ⋯ , i + 1 , i ) {\displaystyle (n,n-1,\cdots ,i+1,i)} . The operation that applies τ {\displaystyle \tau } first and then ( ← ) j {\displaystyle (\leftarrow )_{j}} is The above two are equal; thus, where ( → ) j {\displaystyle (\rightarrow )_{j}} is the inverse of ( ← ) j {\displaystyle (\leftarrow )_{j}} , which is ( j , j + 1 , ⋯ , n ) {\displaystyle (j,j+1,\cdots ,n)} . Thus Since the two cycles can be written respectively as n − i {\displaystyle n-i} and n − j {\displaystyle n-j} transpositions , and since the map σ ↔ τ {\displaystyle \sigma \leftrightarrow \tau } is bijective, from which the result follows. Similarly, the result holds if the index of the outer summation is replaced with j {\displaystyle j} . [ 1 ] Laplace's cofactor expansion can be generalised as follows. Consider the matrix The determinant of this matrix can be computed by using Laplace's cofactor expansion along the first two rows as follows. First, note that there are 6 sets of two distinct numbers in {1, 2, 3, 4}; let S = { { 1 , 2 } , { 1 , 3 } , { 1 , 4 } , { 2 , 3 } , { 2 , 4 } , { 3 , 4 } } {\displaystyle S=\left\{\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\}\right\}} be this set. By defining the complementary cofactors to be and the sign of their permutation to be the determinant of A can be written out as where H ′ {\displaystyle H^{\prime }} is the complementary set to H {\displaystyle H} . 
In our explicit example this gives us As above, it is easy to verify that the result is correct: the matrix is singular because the sum of its first and third column is twice the second column, and hence its determinant is zero. Let B = [ b i j ] {\displaystyle B=[b_{ij}]} be an n × n matrix and S {\displaystyle S} the set of k -element subsets of {1, 2, ... , n } , H {\displaystyle H} an element in it. Then the determinant of B {\displaystyle B} can be expanded along the k rows identified by H {\displaystyle H} as follows: where ε H , L {\displaystyle \varepsilon ^{H,L}} is the sign of the permutation determined by H {\displaystyle H} and L {\displaystyle L} , equal to ( − 1 ) ( ∑ h ∈ H h ) + ( ∑ ℓ ∈ L ℓ ) {\displaystyle (-1)^{\left(\sum _{h\in H}h\right)+\left(\sum _{\ell \in L}\ell \right)}} , b H , L {\displaystyle b_{H,L}} the square minor of B {\displaystyle B} obtained by deleting from B {\displaystyle B} rows and columns with indices in H {\displaystyle H} and L {\displaystyle L} respectively, and c H , L {\displaystyle c_{H,L}} (called the complement of b H , L {\displaystyle b_{H,L}} ) defined to be b H ′ , L ′ {\displaystyle b_{H',L'}} , H ′ {\displaystyle H'} and L ′ {\displaystyle L'} being the complement of H {\displaystyle H} and L {\displaystyle L} respectively. This coincides with the theorem above when k = 1 {\displaystyle k=1} . The same thing holds for any fixed k columns. The Laplace expansion is computationally inefficient for high-dimension matrices, with a time complexity in big O notation of O ( n !) . Alternatively, using a decomposition into triangular matrices as in the LU decomposition can yield determinants with a time complexity of O ( n 3 ) . [ 2 ] The following Python code implements the Laplace expansion:
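The original listing did not survive extraction; the following is a minimal recursive reconstruction consistent with the row-expansion formula above. The test matrix is an illustrative singular 3 × 3 example with the property discussed earlier (first column plus third column equals twice the second), chosen here since the article's own matrix is not shown.

```python
def minor(B, i, j):
    """Submatrix of B with row i and column j removed."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(B) if k != i]

def det(B):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(B)
    if n == 1:
        return B[0][0]
    # (-1)**(0 + j) * b[0][j] * m[0][j], summed over the first row
    return sum((-1) ** j * B[0][j] * det(minor(B, 0, j)) for j in range(n))

# Singular example: column 1 + column 3 = 2 * column 2, so det = 0.
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # 0
```

As noted in the text, this recursion has O(n!) cost and is for exposition rather than computation.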
https://en.wikipedia.org/wiki/Laplace_expansion
In differential equations , the Laplace invariant of any of certain differential operators is a certain function of the coefficients and their derivatives . Consider a bivariate hyperbolic differential operator of the second order whose coefficients are smooth functions of two variables. Its Laplace invariants have the form Their importance is due to the classical theorem: Theorem : Two operators of the form are equivalent under gauge transformations if and only if their Laplace invariants coincide pairwise. Here the operators are called equivalent if there is a gauge transformation that takes one to the other: Laplace invariants can be regarded as factorization "remainders" for the initial operator A : If at least one of Laplace invariants is not equal to zero, i.e. then this representation is a first step of the Laplace–Darboux transformations used for solving non-factorizable bivariate linear partial differential equations (LPDEs). If both Laplace invariants are equal to zero, i.e. then the differential operator A is factorizable and corresponding linear partial differential equation of second order is solvable. Laplace invariants have been introduced for a bivariate linear partial differential operator (LPDO) of order 2 and of hyperbolic type. They are a particular case of generalized invariants which can be constructed for a bivariate LPDO of arbitrary order and arbitrary type; see Invariant factorization of LPDOs .
https://en.wikipedia.org/wiki/Laplace_invariant
In mathematics , the Laplace limit is the maximum value of the eccentricity for which a solution to Kepler's equation , in terms of a power series in the eccentricity, converges. It is approximately Kepler's equation M = E − ε sin E relates the mean anomaly M with the eccentric anomaly E for a body moving in an ellipse with eccentricity ε. This equation cannot be solved for E in terms of elementary functions , but the Lagrange reversion theorem gives the solution as a power series in ε: or in general [ 1 ] [ 2 ] Laplace realized that this series converges for small values of the eccentricity, but diverges for any value of M other than a multiple of π if the eccentricity exceeds a certain value that does not depend on M . The Laplace limit is this value. It is the radius of convergence of the power series. It is also the maximum of the function x cosh ⁡ ( x ) {\displaystyle {\frac {x}{\cosh(x)}}} . [ 3 ] It is the unique real solution of the transcendental equation [ 4 ] A closed-form expression in terms of the r-Lambert special function and an infinite series representation were given by István Mező. [ 5 ] Laplace calculated the value 0.66195 in 1827. The Italian astronomer Francesco Carlini found the limit 0.66 five years before Laplace. Cauchy in 1829 gave the precise value 0.66274. [ 6 ] 
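The characterization as the maximum of x/cosh(x) gives a simple way to compute the constant numerically: the maximum occurs where the derivative vanishes, i.e. where x·tanh(x) = 1. The sketch below (the bisection bracket is an assumption made for this illustration) solves that equation and evaluates x/cosh(x) there.

```python
import math

def laplace_limit():
    # Maximize x / cosh(x): the derivative vanishes where x * tanh(x) = 1.
    lo, hi = 1.0, 1.5          # bracket for the root of x*tanh(x) - 1
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid * math.tanh(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return x / math.cosh(x)

print(laplace_limit())   # ~ 0.6627434
```
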
https://en.wikipedia.org/wiki/Laplace_limit
The Laplace number ( La ), also known as the Suratman number ( Su ), is a dimensionless number used in the characterization of free surface fluid dynamics . It represents a ratio of surface tension to the momentum -transport (especially dissipation ) inside a fluid. It is named after Pierre-Simon Laplace and Indonesian physicist P. C. Suratman. [ 1 ] It is defined as follows: [ 2 ] where: Laplace number is related to Reynolds number (Re) and Weber number (We) in the following way: [ 2 ]
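The definition La = σρL/μ² and its stated relation to the Reynolds and Weber numbers, La = Re²/We, can be illustrated numerically (the fluid properties below are rough water-like values chosen only for the example).

```python
# Rough water-like values (SI units), chosen only to illustrate the identity.
sigma = 0.072   # surface tension, N/m
rho = 1000.0    # density, kg/m^3
L = 0.001       # characteristic length, m
mu = 1.0e-3     # dynamic viscosity, Pa*s
v = 0.5         # characteristic velocity, m/s

La = sigma * rho * L / mu**2      # Laplace (Suratman) number
Re = rho * v * L / mu             # Reynolds number
We = rho * v**2 * L / sigma       # Weber number

print(La, Re**2 / We)             # the two agree: La = Re^2 / We
```

The velocity v cancels in Re²/We, which is why La characterizes the fluid and geometry independently of the flow speed.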
https://en.wikipedia.org/wiki/Laplace_number
In mathematics , the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space . It is usually denoted by the symbols ∇ ⋅ ∇ {\displaystyle \nabla \cdot \nabla } , ∇ 2 {\displaystyle \nabla ^{2}} (where ∇ {\displaystyle \nabla } is the nabla operator ), or Δ {\displaystyle \Delta } . In a Cartesian coordinate system , the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable . In other coordinate systems , such as cylindrical and spherical coordinates , the Laplacian also has a useful form. Informally, the Laplacian Δ f ( p ) of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f ( p ) . The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics : the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that density distribution. Solutions of Laplace's equation Δ f = 0 are called harmonic functions and represent the possible gravitational potentials in regions of vacuum . The Laplacian occurs in many differential equations describing physical phenomena. Poisson's equation describes electric and gravitational potentials ; the diffusion equation describes heat and fluid flow ; the wave equation describes wave propagation ; and the Schrödinger equation describes the wave function in quantum mechanics . In image processing and computer vision , the Laplacian operator has been used for various tasks, such as blob and edge detection . The Laplacian is the simplest elliptic operator and is at the core of Hodge theory as well as the results of de Rham cohomology . 
The Laplace operator is a second-order differential operator in the n -dimensional Euclidean space , defined as the divergence ( ∇ ⋅ {\displaystyle \nabla \cdot } ) of the gradient ( ∇ f {\displaystyle \nabla f} ). Thus if f {\displaystyle f} is a twice-differentiable real-valued function , then the Laplacian of f {\displaystyle f} is the real-valued function defined by: where the latter notations derive from formally writing: ∇ = ( ∂ ∂ x 1 , … , ∂ ∂ x n ) . {\displaystyle \nabla =\left({\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\right).} Explicitly, the Laplacian of f is thus the sum of all the unmixed second partial derivatives in the Cartesian coordinates x i : As a second-order differential operator, the Laplace operator maps C k functions to C k −2 functions for k ≥ 2 . It is a linear operator Δ : C k ( R n ) → C k −2 ( R n ) , or more generally, an operator Δ : C k (Ω) → C k −2 (Ω) for any open set Ω ⊆ R n . Alternatively, the Laplace operator can be defined as: ∇ 2 f ( x → ) = lim R → 0 2 n R 2 ( f s h e l l R − f ( x → ) ) = lim R → 0 2 n A n − 1 R 1 + n ∫ s h e l l R f ( r → ) − f ( x → ) d r n − 1 {\displaystyle \nabla ^{2}f({\vec {x}})=\lim _{R\rightarrow 0}{\frac {2n}{R^{2}}}(f_{shell_{R}}-f({\vec {x}}))=\lim _{R\rightarrow 0}{\frac {2n}{A_{n-1}R^{1+n}}}\int _{shell_{R}}f({\vec {r}})-f({\vec {x}})dr^{n-1}} where n {\displaystyle n} is the dimension of the space, f s h e l l R {\displaystyle f_{shell_{R}}} is the average value of f {\displaystyle f} on the surface of an n-sphere of radius R {\displaystyle R} , ∫ s h e l l R f ( r → ) d r n − 1 {\displaystyle \int _{shell_{R}}f({\vec {r}})dr^{n-1}} is the surface integral over an n-sphere of radius R {\displaystyle R} , and A n − 1 {\displaystyle A_{n-1}} is the hypervolume of the boundary of a unit n-sphere . 
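The Cartesian definition as a sum of unmixed second partials can be sketched with a finite-difference check (the test function, point, and step size below are arbitrary illustrative choices): the standard 5-point stencil approximates Δf = ∂²f/∂x² + ∂²f/∂y² in two dimensions.

```python
def laplacian_fd(f, x, y, h=1e-3):
    """5-point finite-difference approximation to the 2-D Laplacian."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / (h * h)

def f(x, y):
    # f(x, y) = x^2 + y^4 has Laplacian 2 + 12 y^2.
    return x**2 + y**4

x0, y0 = 0.5, 1.5
print(laplacian_fd(f, x0, y0), 2 + 12 * y0**2)   # stencil vs analytic value
```
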
[ 1 ] There are two conflicting conventions as to how the Laplace operator is defined: Δ = ∇ 2 = ∑ j = 1 n ( ∂ ∂ x j ) 2 , {\displaystyle \Delta =\nabla ^{2}=\sum _{j=1}^{n}{\Big (}{\frac {\partial }{\partial x_{j}}}{\Big )}^{2},} (which is negative-definite in the sense that ∫ R n φ ( x ) ¯ Δ φ ( x ) d x = − ∫ R n | ∇ φ ( x ) | 2 d x < 0 {\displaystyle \int _{\mathbb {R} ^{n}}{\overline {\varphi (x)}}\Delta \varphi (x)\,dx=-\int _{\mathbb {R} ^{n}}|\nabla \varphi (x)|^{2}\,dx<0} for any smooth compactly supported function φ ∈ C c ∞ ( R n ) {\displaystyle \varphi \in C_{c}^{\infty }(\mathbb {R} ^{n})} which is not identically zero); Δ = − ∇ 2 = − ∑ j = 1 n ( ∂ ∂ x j ) 2 . {\displaystyle \Delta =-\nabla ^{2}=-\sum _{j=1}^{n}{\Big (}{\frac {\partial }{\partial x_{j}}}{\Big )}^{2}.} In the physical theory of diffusion , the Laplace operator arises naturally in the mathematical description of equilibrium . [ 2 ] Specifically, if u is the density at equilibrium of some quantity such as a chemical concentration, then the net flux of u through the boundary ∂ V (also called S ) of any smooth region V is zero, provided there is no source or sink within V : ∫ S ∇ u ⋅ n d S = 0 , {\displaystyle \int _{S}\nabla u\cdot \mathbf {n} \,dS=0,} where n is the outward unit normal to the boundary of V . By the divergence theorem , ∫ V div ⁡ ∇ u d V = ∫ S ∇ u ⋅ n d S = 0. {\displaystyle \int _{V}\operatorname {div} \nabla u\,dV=\int _{S}\nabla u\cdot \mathbf {n} \,dS=0.} Since this holds for all smooth regions V , one can show that it implies: div ⁡ ∇ u = Δ u = 0. {\displaystyle \operatorname {div} \nabla u=\Delta u=0.} The left-hand side of this equation is the Laplace operator, and the entire equation Δ u = 0 is known as Laplace's equation . Solutions of the Laplace equation, i.e. functions whose Laplacian is identically zero, thus represent possible equilibrium densities under diffusion. 
The Laplace operator itself has a physical interpretation for non-equilibrium diffusion as the extent to which a point represents a source or sink of chemical concentration, in a sense made precise by the diffusion equation . This interpretation of the Laplacian is also explained by the following fact about averages. Given a twice continuously differentiable function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } and a point p ∈ R n {\displaystyle p\in \mathbb {R} ^{n}} , the average value of f {\displaystyle f} over the ball with radius h {\displaystyle h} centered at p {\displaystyle p} is: [ 3 ] f ¯ B ( p , h ) = f ( p ) + Δ f ( p ) 2 ( n + 2 ) h 2 + o ( h 2 ) for h → 0 {\displaystyle {\overline {f}}_{B}(p,h)=f(p)+{\frac {\Delta f(p)}{2(n+2)}}h^{2}+o(h^{2})\quad {\text{for}}\;\;h\to 0} Similarly, the average value of f {\displaystyle f} over the sphere (the boundary of a ball) with radius h {\displaystyle h} centered at p {\displaystyle p} is: f ¯ S ( p , h ) = f ( p ) + Δ f ( p ) 2 n h 2 + o ( h 2 ) for h → 0. {\displaystyle {\overline {f}}_{S}(p,h)=f(p)+{\frac {\Delta f(p)}{2n}}h^{2}+o(h^{2})\quad {\text{for}}\;\;h\to 0.} If φ denotes the electrostatic potential associated to a charge distribution q , then the charge distribution itself is given by the negative of the Laplacian of φ : q = − ε 0 Δ φ , {\displaystyle q=-\varepsilon _{0}\Delta \varphi ,} where ε 0 is the electric constant . This is a consequence of Gauss's law . Indeed, if V is any smooth region with boundary ∂ V , then by Gauss's law the flux of the electrostatic field E across the boundary is proportional to the charge enclosed: ∫ ∂ V E ⋅ n d S = ∫ V div ⁡ E d V = 1 ε 0 ∫ V q d V . {\displaystyle \int _{\partial V}\mathbf {E} \cdot \mathbf {n} \,dS=\int _{V}\operatorname {div} \mathbf {E} \,dV={\frac {1}{\varepsilon _{0}}}\int _{V}q\,dV.} where the first equality is due to the divergence theorem . 
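The sphere-average expansion above can be verified numerically in two dimensions, where the sphere is a circle and n = 2. A small sketch in plain Python (the sampling scheme and test function are illustrative):

```python
import math

def circle_average(f, p, h, m=720):
    # Average of f over the circle of radius h centred at p,
    # approximated by m equally spaced sample points.
    return sum(f(p[0] + h * math.cos(2 * math.pi * k / m),
                 p[1] + h * math.sin(2 * math.pi * k / m))
               for k in range(m)) / m

# f(x, y) = x^2 + 2 y^2 has Laplacian 2 + 4 = 6.  In n = 2 dimensions
# the sphere average satisfies avg - f(p) ≈ (Δf / (2n)) h^2, so
# (2n / h^2) * (avg - f(p)) should recover Δf.
f = lambda x, y: x**2 + 2 * y**2
n, p, h = 2, (0.4, -0.7), 0.01
estimate = (2 * n / h**2) * (circle_average(f, p, h) - f(*p))
print(estimate)  # close to 6
```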
Since the electrostatic field is the (negative) gradient of the potential, this gives: − ∫ V div ⁡ ( grad ⁡ φ ) d V = 1 ε 0 ∫ V q d V . {\displaystyle -\int _{V}\operatorname {div} (\operatorname {grad} \varphi )\,dV={\frac {1}{\varepsilon _{0}}}\int _{V}q\,dV.} Since this holds for all regions V , we must have div ⁡ ( grad ⁡ φ ) = − 1 ε 0 q {\displaystyle \operatorname {div} (\operatorname {grad} \varphi )=-{\frac {1}{\varepsilon _{0}}}q} The same approach implies that the negative of the Laplacian of the gravitational potential is the mass distribution . Often the charge (or mass) distribution are given, and the associated potential is unknown. Finding the potential function subject to suitable boundary conditions is equivalent to solving Poisson's equation . Another motivation for the Laplacian appearing in physics is that solutions to Δ f = 0 in a region U are functions that make the Dirichlet energy functional stationary : E ( f ) = 1 2 ∫ U ‖ ∇ f ‖ 2 d x . {\displaystyle E(f)={\frac {1}{2}}\int _{U}\lVert \nabla f\rVert ^{2}\,dx.} To see this, suppose f : U → R is a function, and u : U → R is a function that vanishes on the boundary of U . Then: d d ε | ε = 0 E ( f + ε u ) = ∫ U ∇ f ⋅ ∇ u d x = − ∫ U u Δ f d x {\displaystyle \left.{\frac {d}{d\varepsilon }}\right|_{\varepsilon =0}E(f+\varepsilon u)=\int _{U}\nabla f\cdot \nabla u\,dx=-\int _{U}u\,\Delta f\,dx} where the last equality follows using Green's first identity . This calculation shows that if Δ f = 0 , then E is stationary around f . Conversely, if E is stationary around f , then Δ f = 0 by the fundamental lemma of calculus of variations . The Laplace operator in two dimensions is given by: In Cartesian coordinates , Δ f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}} where x and y are the standard Cartesian coordinates of the xy -plane. 
In polar coordinates , Δ f = 1 r ∂ ∂ r ( r ∂ f ∂ r ) + 1 r 2 ∂ 2 f ∂ θ 2 = ∂ 2 f ∂ r 2 + 1 r ∂ f ∂ r + 1 r 2 ∂ 2 f ∂ θ 2 , {\displaystyle {\begin{aligned}\Delta f&={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}}\\&={\frac {\partial ^{2}f}{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial f}{\partial r}}+{\frac {1}{r^{2}}}{\frac {\partial ^{2}f}{\partial \theta ^{2}}},\end{aligned}}} where r represents the radial distance and θ the angle. In three dimensions, it is common to work with the Laplacian in a variety of different coordinate systems. In Cartesian coordinates , Δ f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 + ∂ 2 f ∂ z 2 . {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}.} In cylindrical coordinates , Δ f = 1 ρ ∂ ∂ ρ ( ρ ∂ f ∂ ρ ) + 1 ρ 2 ∂ 2 f ∂ φ 2 + ∂ 2 f ∂ z 2 , {\displaystyle \Delta f={\frac {1}{\rho }}{\frac {\partial }{\partial \rho }}\left(\rho {\frac {\partial f}{\partial \rho }}\right)+{\frac {1}{\rho ^{2}}}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}},} where ρ {\displaystyle \rho } represents the radial distance, φ the azimuth angle and z the height. 
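The polar form can be sanity-checked with finite differences. A sketch in plain Python (function names, step size, and test functions are illustrative):

```python
import math

def polar_laplacian(f, r, th, h=1e-4):
    # Δf = f_rr + (1/r) f_r + (1/r^2) f_θθ via central differences.
    f_rr = (f(r + h, th) - 2 * f(r, th) + f(r - h, th)) / h**2
    f_r = (f(r + h, th) - f(r - h, th)) / (2 * h)
    f_tt = (f(r, th + h) - 2 * f(r, th) + f(r, th - h)) / h**2
    return f_rr + f_r / r + f_tt / r**2

# r^2 cos(2θ) is the harmonic polynomial x^2 - y^2 (Laplacian 0),
# while r^2 is x^2 + y^2, whose Laplacian is 4.
harmonic = lambda r, th: r**2 * math.cos(2 * th)
print(polar_laplacian(harmonic, 1.3, 0.7))            # close to 0
print(polar_laplacian(lambda r, th: r**2, 1.3, 0.7))  # close to 4
```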
In spherical coordinates : Δ f = 1 r 2 ∂ ∂ r ( r 2 ∂ f ∂ r ) + 1 r 2 sin ⁡ θ ∂ ∂ θ ( sin ⁡ θ ∂ f ∂ θ ) + 1 r 2 sin 2 ⁡ θ ∂ 2 f ∂ φ 2 , {\displaystyle \Delta f={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial f}{\partial r}}\right)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}},} or Δ f = 1 r ∂ 2 ∂ r 2 ( r f ) + 1 r 2 sin ⁡ θ ∂ ∂ θ ( sin ⁡ θ ∂ f ∂ θ ) + 1 r 2 sin 2 ⁡ θ ∂ 2 f ∂ φ 2 , {\displaystyle \Delta f={\frac {1}{r}}{\frac {\partial ^{2}}{\partial r^{2}}}(rf)+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta {\frac {\partial f}{\partial \theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}},} by expanding the first and second term, these expressions read Δ f = ∂ 2 f ∂ r 2 + 2 r ∂ f ∂ r + 1 r 2 sin ⁡ θ ( cos ⁡ θ ∂ f ∂ θ + sin ⁡ θ ∂ 2 f ∂ θ 2 ) + 1 r 2 sin 2 ⁡ θ ∂ 2 f ∂ φ 2 , {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial f}{\partial r}}+{\frac {1}{r^{2}\sin \theta }}\left(\cos \theta {\frac {\partial f}{\partial \theta }}+\sin \theta {\frac {\partial ^{2}f}{\partial \theta ^{2}}}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}f}{\partial \varphi ^{2}}},} where φ represents the azimuthal angle and θ the zenith angle or co-latitude . In particular, the above is equivalent to Δ f = ∂ 2 f ∂ r 2 + 2 r ∂ f ∂ r + 1 r 2 Δ S 2 f , {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial f}{\partial r}}+{\frac {1}{r^{2}}}\Delta _{S^{2}}f,} where Δ S 2 f {\displaystyle \Delta _{S^{2}}f} is the Laplace-Beltrami operator on the unit sphere. 
In general curvilinear coordinates ( ξ 1 , ξ 2 , ξ 3 ): Δ = ∇ ξ m ⋅ ∇ ξ n ∂ 2 ∂ ξ m ∂ ξ n + ∇ 2 ξ m ∂ ∂ ξ m = g m n ( ∂ 2 ∂ ξ m ∂ ξ n − Γ m n l ∂ ∂ ξ l ) , {\displaystyle \Delta =\nabla \xi ^{m}\cdot \nabla \xi ^{n}{\frac {\partial ^{2}}{\partial \xi ^{m}\,\partial \xi ^{n}}}+\nabla ^{2}\xi ^{m}{\frac {\partial }{\partial \xi ^{m}}}=g^{mn}\left({\frac {\partial ^{2}}{\partial \xi ^{m}\,\partial \xi ^{n}}}-\Gamma _{mn}^{l}{\frac {\partial }{\partial \xi ^{l}}}\right),} where summation over the repeated indices is implied , g mn is the inverse metric tensor and Γ l mn are the Christoffel symbols for the selected coordinates. In arbitrary curvilinear coordinates in N dimensions ( ξ 1 , ..., ξ N ), we can write the Laplacian in terms of the inverse metric tensor , g i j {\displaystyle g^{ij}} : Δ = 1 det g ∂ ∂ ξ i ( det g g i j ∂ ∂ ξ j ) , {\displaystyle \Delta ={\frac {1}{\sqrt {\det g}}}{\frac {\partial }{\partial \xi ^{i}}}\left({\sqrt {\det g}}\,g^{ij}{\frac {\partial }{\partial \xi ^{j}}}\right),} from the Voss–Weyl formula [ 4 ] for the divergence . In spherical coordinates in N dimensions , with the parametrization x = rθ ∈ R N with r representing a positive real radius and θ an element of the unit sphere S N −1 , Δ f = ∂ 2 f ∂ r 2 + N − 1 r ∂ f ∂ r + 1 r 2 Δ S N − 1 f {\displaystyle \Delta f={\frac {\partial ^{2}f}{\partial r^{2}}}+{\frac {N-1}{r}}{\frac {\partial f}{\partial r}}+{\frac {1}{r^{2}}}\Delta _{S^{N-1}}f} where Δ S N −1 is the Laplace–Beltrami operator on the ( N − 1) -sphere, known as the spherical Laplacian. The two radial derivative terms can be equivalently rewritten as: 1 r N − 1 ∂ ∂ r ( r N − 1 ∂ f ∂ r ) . 
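For a function of r alone, the N-dimensional radial form Δf = ∂²f/∂r² + ((N − 1)/r) ∂f/∂r can be compared against the plain Cartesian sum of second partials. A sketch in plain Python for N = 4 (the radial Gaussian test function and the step size are illustrative):

```python
import math

def cartesian_laplacian(f, p, h=1e-3):
    # Sum of central-difference unmixed second partials.
    total = 0.0
    for i in range(len(p)):
        plus, minus = list(p), list(p)
        plus[i] += h
        minus[i] -= h
        total += (f(plus) - 2.0 * f(p) + f(minus)) / h**2
    return total

# Radial Gaussian in N = 4 dimensions: f = exp(-r^2), so
# f_r = -2 r exp(-r^2) and f_rr = (4 r^2 - 2) exp(-r^2), giving
# Δf = f_rr + (N - 1)/r * f_r = (4 r^2 - 2 N) exp(-r^2).
N = 4
p = [0.5, -0.3, 0.8, 0.1]
r2 = sum(x * x for x in p)
f = lambda q: math.exp(-sum(x * x for x in q))
radial = (4 * r2 - 2 * N) * math.exp(-r2)
print(cartesian_laplacian(f, p), radial)  # the two agree
```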
{\displaystyle {\frac {1}{r^{N-1}}}{\frac {\partial }{\partial r}}\left(r^{N-1}{\frac {\partial f}{\partial r}}\right).} As a consequence, the spherical Laplacian of a function defined on S N −1 ⊂ R N can be computed as the ordinary Laplacian of the function extended to R N ∖{0} so that it is constant along rays, i.e., homogeneous of degree zero. The Laplacian is invariant under all Euclidean transformations : rotations and translations . In two dimensions, for example, this means that: Δ ( f ( x cos ⁡ θ − y sin ⁡ θ + a , x sin ⁡ θ + y cos ⁡ θ + b ) ) = ( Δ f ) ( x cos ⁡ θ − y sin ⁡ θ + a , x sin ⁡ θ + y cos ⁡ θ + b ) {\displaystyle \Delta (f(x\cos \theta -y\sin \theta +a,x\sin \theta +y\cos \theta +b))=(\Delta f)(x\cos \theta -y\sin \theta +a,x\sin \theta +y\cos \theta +b)} for all θ , a , and b . In arbitrary dimensions, Δ ( f ∘ ρ ) = ( Δ f ) ∘ ρ {\displaystyle \Delta (f\circ \rho )=(\Delta f)\circ \rho } whenever ρ is a rotation, and likewise: Δ ( f ∘ τ ) = ( Δ f ) ∘ τ {\displaystyle \Delta (f\circ \tau )=(\Delta f)\circ \tau } whenever τ is a translation. (More generally, this remains true when ρ is an orthogonal transformation such as a reflection .) In fact, the algebra of all scalar linear differential operators, with constant coefficients, that commute with all Euclidean transformations, is the polynomial algebra generated by the Laplace operator. The spectrum of the Laplace operator consists of all eigenvalues λ for which there is a corresponding eigenfunction f with: − Δ f = λ f . {\displaystyle -\Delta f=\lambda f.} This is known as the Helmholtz equation . If Ω is a bounded domain in R n , then the eigenfunctions of the Laplacian are an orthonormal basis for the Hilbert space L 2 (Ω) . This result essentially follows from the spectral theorem on compact self-adjoint operators , applied to the inverse of the Laplacian (which is compact, by the Poincaré inequality and the Rellich–Kondrachov theorem ). 
[ 5 ] It can also be shown that the eigenfunctions are infinitely differentiable functions. [ 6 ] More generally, these results hold for the Laplace–Beltrami operator on any compact Riemannian manifold with boundary, or indeed for the Dirichlet eigenvalue problem of any elliptic operator with smooth coefficients on a bounded domain. When Ω is the n -sphere , the eigenfunctions of the Laplacian are the spherical harmonics . The vector Laplace operator , also denoted by ∇ 2 {\displaystyle \nabla ^{2}} , is a differential operator defined over a vector field . [ 7 ] The vector Laplacian is similar to the scalar Laplacian; whereas the scalar Laplacian applies to a scalar field and returns a scalar quantity, the vector Laplacian applies to a vector field , returning a vector quantity. When computed in orthonormal Cartesian coordinates , the returned vector field is equal to the vector field of the scalar Laplacian applied to each vector component. The vector Laplacian of a vector field A {\displaystyle \mathbf {A} } is defined as ∇ 2 A = ∇ ( ∇ ⋅ A ) − ∇ × ( ∇ × A ) . {\displaystyle \nabla ^{2}\mathbf {A} =\nabla (\nabla \cdot \mathbf {A} )-\nabla \times (\nabla \times \mathbf {A} ).} This definition can be seen as the Helmholtz decomposition of the vector Laplacian. In Cartesian coordinates , this reduces to the much simpler expression ∇ 2 A = ( ∇ 2 A x , ∇ 2 A y , ∇ 2 A z ) , {\displaystyle \nabla ^{2}\mathbf {A} =(\nabla ^{2}A_{x},\nabla ^{2}A_{y},\nabla ^{2}A_{z}),} where A x {\displaystyle A_{x}} , A y {\displaystyle A_{y}} , and A z {\displaystyle A_{z}} are the components of the vector field A {\displaystyle \mathbf {A} } , and ∇ 2 {\displaystyle \nabla ^{2}} just on the left of each vector field component is the (scalar) Laplace operator. This can be seen to be a special case of Lagrange's formula; see Vector triple product . For expressions of the vector Laplacian in other coordinate systems see Del in cylindrical and spherical coordinates . 
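The definition ∇²A = ∇(∇·A) − ∇×(∇×A) can be confirmed numerically against the componentwise Cartesian Laplacian. A finite-difference sketch in plain Python (the test field, evaluation point, and helper names are illustrative):

```python
H = 1e-3  # finite-difference step (illustrative choice)

def partial(g, p, i, h=H):
    # Central first difference of scalar g w.r.t. coordinate i.
    pp, pm = list(p), list(p)
    pp[i] += h
    pm[i] -= h
    return (g(pp) - g(pm)) / (2 * h)

def scalar_laplacian(g, p, h=H):
    # Sum of central second differences (unmixed second partials).
    total = 0.0
    for i in range(3):
        pp, pm = list(p), list(p)
        pp[i] += h
        pm[i] -= h
        total += (g(pp) - 2 * g(p) + g(pm)) / h**2
    return total

def curl(F, p):
    d = lambda i, j: partial(lambda q: F(q)[j], p, i)  # ∂_i F_j
    return [d(1, 2) - d(2, 1), d(2, 0) - d(0, 2), d(0, 1) - d(1, 0)]

# Polynomial test field A = (x^2 y, y^2 z, z^2 x).
A = lambda q: [q[0]**2 * q[1], q[1]**2 * q[2], q[2]**2 * q[0]]
p = [0.7, -0.4, 1.1]

lhs = [scalar_laplacian(lambda q, j=j: A(q)[j], p) for j in range(3)]
div_A = lambda q: sum(partial(lambda r: A(r)[i], q, i) for i in range(3))
grad_div = [partial(div_A, p, i) for i in range(3)]
curl_curl = curl(lambda q: curl(A, q), p)
rhs = [g - c for g, c in zip(grad_div, curl_curl)]
print(lhs, rhs)  # both equal (2y, 2z, 2x) for this field
```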
The Laplacian of any tensor field T {\displaystyle \mathbf {T} } ("tensor" includes scalar and vector) is defined as the divergence of the gradient of the tensor: ∇ 2 T = ( ∇ ⋅ ∇ ) T . {\displaystyle \nabla ^{2}\mathbf {T} =(\nabla \cdot \nabla )\mathbf {T} .} For the special case where T {\displaystyle \mathbf {T} } is a scalar (a tensor of degree zero), the Laplacian takes on the familiar form. If T {\displaystyle \mathbf {T} } is a vector (a tensor of first degree), the gradient is a covariant derivative which results in a tensor of second degree, and the divergence of this is again a vector. The formula for the vector Laplacian above may be used to avoid tensor math and may be shown to be equivalent to the divergence of the Jacobian matrix shown below for the gradient of a vector: ∇ T = ( ∇ T x , ∇ T y , ∇ T z ) = [ T x x T x y T x z T y x T y y T y z T z x T z y T z z ] , where T u v ≡ ∂ T u ∂ v . {\displaystyle \nabla \mathbf {T} =(\nabla T_{x},\nabla T_{y},\nabla T_{z})={\begin{bmatrix}T_{xx}&T_{xy}&T_{xz}\\T_{yx}&T_{yy}&T_{yz}\\T_{zx}&T_{zy}&T_{zz}\end{bmatrix}},{\text{ where }}T_{uv}\equiv {\frac {\partial T_{u}}{\partial v}}.} And, in the same manner, a dot product , which evaluates to a vector, of a vector by the gradient of another vector (a tensor of 2nd degree) can be seen as a product of matrices: A ⋅ ∇ B = [ A x A y A z ] ∇ B = [ A ⋅ ∇ B x A ⋅ ∇ B y A ⋅ ∇ B z ] . {\displaystyle \mathbf {A} \cdot \nabla \mathbf {B} ={\begin{bmatrix}A_{x}&A_{y}&A_{z}\end{bmatrix}}\nabla \mathbf {B} ={\begin{bmatrix}\mathbf {A} \cdot \nabla B_{x}&\mathbf {A} \cdot \nabla B_{y}&\mathbf {A} \cdot \nabla B_{z}\end{bmatrix}}.} This identity is a coordinate dependent result, and is not general. 
An example of the usage of the vector Laplacian is the Navier-Stokes equations for a Newtonian incompressible flow : ρ ( ∂ v ∂ t + ( v ⋅ ∇ ) v ) = ρ f − ∇ p + μ ( ∇ 2 v ) , {\displaystyle \rho \left({\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {v} \right)=\rho \mathbf {f} -\nabla p+\mu \left(\nabla ^{2}\mathbf {v} \right),} where the term with the vector Laplacian of the velocity field μ ( ∇ 2 v ) {\displaystyle \mu \left(\nabla ^{2}\mathbf {v} \right)} represents the viscous stresses in the fluid. Another example is the wave equation for the electric field that can be derived from Maxwell's equations in the absence of charges and currents: ∇ 2 E − μ 0 ϵ 0 ∂ 2 E ∂ t 2 = 0. {\displaystyle \nabla ^{2}\mathbf {E} -\mu _{0}\epsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}=0.} This equation can also be written as: ◻ E = 0 , {\displaystyle \Box \,\mathbf {E} =0,} where ◻ ≡ 1 c 2 ∂ 2 ∂ t 2 − ∇ 2 , {\displaystyle \Box \equiv {\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2},} is the D'Alembertian , used in the Klein–Gordon equation . First of all, we say that a smooth function u : Ω ⊂ R N → R {\displaystyle u\colon \Omega \subset \mathbb {R} ^{N}\to \mathbb {R} } is superharmonic whenever − Δ u ≥ 0 {\displaystyle -\Delta u\geq 0} . Let u : Ω → R {\displaystyle u\colon \Omega \to \mathbb {R} } be a smooth function, and let K ⊂ Ω {\displaystyle K\subset \Omega } be a connected compact set. If u {\displaystyle u} is superharmonic, then, for every x ∈ K {\displaystyle x\in K} , we have u ( x ) ≥ inf Ω u + c ‖ u ‖ L 1 ( K ) , {\displaystyle u(x)\geq \inf _{\Omega }u+c\lVert u\rVert _{L^{1}(K)}\;,} for some constant c > 0 {\displaystyle c>0} depending on Ω {\displaystyle \Omega } and K {\displaystyle K} . [ 8 ] A version of the Laplacian can be defined wherever the Dirichlet energy functional makes sense, which is the theory of Dirichlet forms . 
For spaces with additional structure, one can give more explicit descriptions of the Laplacian, as follows. The Laplacian also can be generalized to an elliptic operator called the Laplace–Beltrami operator defined on a Riemannian manifold . The Laplace–Beltrami operator, when applied to a function, is the trace ( tr ) of the function's Hessian : Δ f = tr ⁡ ( H ( f ) ) {\displaystyle \Delta f=\operatorname {tr} {\big (}H(f){\big )}} where the trace is taken with respect to the inverse of the metric tensor . The Laplace–Beltrami operator also can be generalized to an operator (also called the Laplace–Beltrami operator) which operates on tensor fields , by a similar formula. Another generalization of the Laplace operator that is available on pseudo-Riemannian manifolds uses the exterior derivative , in terms of which the "geometer's Laplacian" is expressed as Δ f = δ d f . {\displaystyle \Delta f=\delta df.} Here δ is the codifferential , which can also be expressed in terms of the Hodge star and the exterior derivative. This operator differs in sign from the "analyst's Laplacian" defined above. More generally, the "Hodge" Laplacian is defined on differential forms α by Δ α = δ d α + d δ α . {\displaystyle \Delta \alpha =\delta d\alpha +d\delta \alpha .} This is known as the Laplace–de Rham operator , which is related to the Laplace–Beltrami operator by the Weitzenböck identity . The Laplacian can be generalized in certain ways to non-Euclidean spaces, where it may be elliptic , hyperbolic , or ultrahyperbolic . In Minkowski space the Laplace–Beltrami operator becomes the D'Alembert operator ◻ {\displaystyle \Box } or D'Alembertian: ◻ = 1 c 2 ∂ 2 ∂ t 2 − ∂ 2 ∂ x 2 − ∂ 2 ∂ y 2 − ∂ 2 ∂ z 2 . 
{\displaystyle \square ={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-{\frac {\partial ^{2}}{\partial x^{2}}}-{\frac {\partial ^{2}}{\partial y^{2}}}-{\frac {\partial ^{2}}{\partial z^{2}}}.} It is the generalization of the Laplace operator in the sense that it is the differential operator which is invariant under the isometry group of the underlying space and it reduces to the Laplace operator if restricted to time-independent functions. The overall sign of the metric here is chosen such that the spatial parts of the operator admit a negative sign, which is the usual convention in high-energy particle physics . The D'Alembert operator is also known as the wave operator because it is the differential operator appearing in the wave equations , and it is also part of the Klein–Gordon equation , which reduces to the wave equation in the massless case. The additional factor of c in the metric is needed in physics if space and time are measured in different units; a similar factor would be required if, for example, the x direction were measured in meters while the y direction were measured in centimeters. Indeed, theoretical physicists usually work in units such that c = 1 in order to simplify the equation. The d'Alembert operator generalizes to a hyperbolic operator on pseudo-Riemannian manifolds .
https://en.wikipedia.org/wiki/Laplace_operator
The Laplace pressure is the pressure difference between the inside and the outside of a curved surface that forms the boundary between two fluid regions. [ 1 ] The pressure difference is caused by the surface tension of the interface between liquid and gas, or between two immiscible liquids. The Laplace pressure is determined from the Young–Laplace equation given as [ 2 ] Δ P ≡ P inside − P outside = γ ( 1 R 1 + 1 R 2 ) , {\displaystyle \Delta P\equiv P_{\text{inside}}-P_{\text{outside}}=\gamma \left({\frac {1}{R_{1}}}+{\frac {1}{R_{2}}}\right),} where R 1 {\displaystyle R_{1}} and R 2 {\displaystyle R_{2}} are the principal radii of curvature and γ {\displaystyle \gamma } (also denoted as σ {\displaystyle \sigma } ) is the surface tension. Although signs for these values vary, sign convention usually dictates positive curvature when convex and negative when concave. The Laplace pressure is commonly used to determine the pressure difference in spherical shapes such as bubbles or droplets. In this case, R 1 {\displaystyle R_{1}} = R 2 {\displaystyle R_{2}} : Δ P = γ 2 R {\displaystyle \Delta P=\gamma {\frac {2}{R}}} For a gas bubble within a liquid, there is only one surface. For a gas bubble with a liquid wall, beyond which is again gas, there are two surfaces, each contributing to the total pressure difference. If the bubble is spherical and the outer radius differs from the inner radius by a small distance, R o = R i + d {\displaystyle R_{o}=R_{i}+d} , the difference in pressure between the outer and inner regions of gas is Δ P = Δ P i + Δ P o = 2 γ ( 1 R i + 1 R i + d ) = 4 γ R i ( 1 − 1 2 d R i + d ) ≈ 4 γ R i + O ( d ) . 
{\displaystyle \Delta P=\Delta P_{i}+\Delta P_{o}=2\gamma \left({\frac {1}{R_{i}}}+{\frac {1}{R_{i}+d}}\right)={\frac {4\gamma }{R_{i}}}\left(1-{\frac {1}{2}}{\frac {d}{R_{i}+d}}\right)\approx {\frac {4\gamma }{R_{i}}}+{\mathcal {O}}(d).} A common example of use is finding the pressure inside an air bubble in pure water, where γ {\displaystyle \gamma } = 72 mN/m at 25 °C (298 K). The extra pressure inside the bubble is given here for three bubble sizes: A 1 mm bubble has negligible extra pressure. Yet when the diameter is ~3 μm, the pressure inside the bubble is about one atmosphere greater than the pressure outside. When the bubble is only several hundred nanometers in diameter, the pressure inside can be several atmospheres. One should bear in mind that the surface tension in the numerator can be much smaller in the presence of surfactants or contaminants. The same calculation can be done for small oil droplets in water, where even in the presence of surfactants and a fairly low interfacial tension γ {\displaystyle \gamma } = 5–10 mN/m, the pressure inside 100 nm diameter droplets can reach several atmospheres. [ 3 ]
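The three quoted bubble sizes follow directly from ΔP = 2γ/R. A quick check in plain Python (constants as in the text; the helper name is illustrative):

```python
gamma = 0.072    # surface tension of pure water at 25 °C, N/m
atm = 101325.0   # one atmosphere, Pa

def laplace_pressure(diameter):
    # Excess pressure inside a spherical bubble with a single
    # surface: ΔP = 2γ/R with R = diameter / 2.
    return 2.0 * gamma / (diameter / 2.0)

for d in (1e-3, 3e-6, 300e-9):  # 1 mm, 3 μm, 300 nm
    print(d, laplace_pressure(d) / atm)
# 1 mm   -> ~0.003 atm (negligible)
# 3 μm   -> ~0.95 atm (about one extra atmosphere)
# 300 nm -> ~9.5 atm (several atmospheres)
```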
https://en.wikipedia.org/wiki/Laplace_pressure
In mathematics , Laplace's principle is a basic theorem in large deviations theory which is similar to Varadhan's lemma . It gives an asymptotic expression for the Lebesgue integral of exp(− θφ ( x )) over a fixed set A as θ becomes large. Such expressions can be used, for example, in statistical mechanics to determine the limiting behaviour of a system as the temperature tends to absolute zero . Let A be a Lebesgue-measurable subset of d - dimensional Euclidean space R d and let φ : R d → R be a measurable function with Then where ess inf denotes the essential infimum . Heuristically, this may be read as saying that for large θ , The Laplace principle can be applied to the family of probability measures P θ given by to give an asymptotic expression for the probability of some event A as θ becomes large. For example, if X is a standard normally distributed random variable on R , then for every measurable set A .
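The limit can be illustrated numerically. Taking φ(x) = x² and A = [1, 2] (an illustrative choice), the essential infimum of φ over A is 1, so (1/θ) log ∫_A e^(−θφ(x)) dx should approach −1 as θ grows. A sketch in plain Python using trapezoidal quadrature:

```python
import math

def scaled_log_integral(theta, a=1.0, b=2.0, n=20000):
    # (1/theta) * log of a trapezoidal approximation to
    # the integral of exp(-theta * x^2) over A = [a, b].
    h = (b - a) / n
    total = 0.5 * (math.exp(-theta * a * a) + math.exp(-theta * b * b))
    for k in range(1, n):
        x = a + k * h
        total += math.exp(-theta * x * x)
    return math.log(total * h) / theta

for theta in (10.0, 100.0, 500.0):
    print(theta, scaled_log_integral(theta))
# values tend toward -ess inf over A of phi = -1 as theta grows
```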
https://en.wikipedia.org/wiki/Laplace_principle_(large_deviations_theory)
In astronomy and orbital mechanics , the Laplace sphere concerns a specific kind of three-body problem involving orbits. The prototype is the Sun–Earth–Moon system: the question is whether the Sun could pull the Moon out of Earth orbit and into solar orbit. More generally, it is applied to any satellite of a body (often called the 'planet') that is, in turn, orbiting a much more massive body (often called the 'star'). Besides the Moon, the satellite is usually a small planetoid , an exoplanet , or a spacecraft orbiting the Earth. [ 1 ] The Laplace sphere is the region around a planet within which a satellite maintains a stable orbit around the planet rather than being pulled away toward the star, whose gravitational force is greater despite its larger distance. The 'sphere' is actually an ellipsoid , specifically a prolate spheroid with its long axis perpendicular to the plane of the star–planet orbit. Consequently, a satellite on an eccentric orbit is safer with its apsis pointing out of that plane than pointing within it. The derivation eliminates higher-order terms on the assumption that the star's mass is much larger than the planet's, and the planet's mass is much larger than the satellite's. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Laplace_sphere
In mathematics , the Laplace transform is a powerful integral transform used to switch a function from the time domain to the s-domain . The Laplace transform can be used in some cases to solve linear differential equations with given initial conditions . First consider the following property of the Laplace transform: One can prove by induction that Now we consider the following differential equation: with given initial conditions Using the linearity of the Laplace transform it is equivalent to rewrite the equation as obtaining Solving the equation for L { f ( t ) } {\displaystyle {\mathcal {L}}\{f(t)\}} and substituting f ( i ) ( 0 ) {\displaystyle f^{(i)}(0)} with c i {\displaystyle c_{i}} one obtains The solution for f ( t ) is obtained by applying the inverse Laplace transform to L { f ( t ) } . {\displaystyle {\mathcal {L}}\{f(t)\}.} Note that if the initial conditions are all zero, i.e. then the formula simplifies to We want to solve with initial conditions f (0) = 0 and f′ (0)=0. We note that and we get The equation is then equivalent to We deduce Now we apply the Laplace inverse transform to get
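The derivative property L{f′}(s) = s L{f}(s) − f(0) that underlies this method can be checked by numerical quadrature. A sketch in plain Python (the test function f(t) = e^(−2t) and the truncation point T are illustrative choices):

```python
import math

def laplace(f, s, T=40.0, n=100000):
    # Trapezoidal approximation of L{f}(s) = ∫_0^∞ e^(-s t) f(t) dt,
    # truncated at T (the tail is negligible here for s > 0).
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

f = lambda t: math.exp(-2.0 * t)          # L{f}(s) = 1/(s + 2)
df = lambda t: -2.0 * math.exp(-2.0 * t)  # f'(t)

s = 1.5
lhs = laplace(df, s)              # direct transform of f'
rhs = s * laplace(f, s) - f(0.0)  # derivative rule
print(lhs, rhs)  # both close to -2/(s + 2) ≈ -0.5714
```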
https://en.wikipedia.org/wiki/Laplace_transform_applied_to_differential_equations
In mathematics, the Laplace–Carson transform , named after Pierre Simon Laplace and John Renshaw Carson , is an integral transform with significant applications in the fields of physics and engineering, particularly in railway engineering . Let V ( j , t ) {\displaystyle V(j,t)} be a function and p {\displaystyle p} a complex variable. The Laplace–Carson transform is defined as: [ 1 ] The inverse Laplace–Carson transform is: where a 0 {\displaystyle a_{0}} is a real-valued constant and i ∞ {\displaystyle i\infty } refers to the imaginary axis; the integral is carried out along a straight line, parallel to the imaginary axis, lying to the right of all the singularities of the following expression:
https://en.wikipedia.org/wiki/Laplace–Carson_transform
In the mathematical field of graph theory , the Laplacian matrix , also called the graph Laplacian , admittance matrix , Kirchhoff matrix, or discrete Laplacian , is a matrix representation of a graph . Named after Pierre-Simon Laplace , the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method . The Laplacian matrix relates to many functional graph properties. Kirchhoff's theorem can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the Fiedler vector — the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian — as established by Cheeger's inequality . The spectral decomposition of the Laplacian matrix allows the construction of low-dimensional embeddings that appear in many machine learning applications and determines a spectral layout in graph drawing . Graph-based signal processing is based on the graph Fourier transform that extends the traditional discrete Fourier transform by replacing the standard basis of complex sinusoids with the eigenvectors of the Laplacian matrix of the graph corresponding to the signal. The Laplacian matrix is easiest to define for a simple graph , but in applications it more commonly arises for an edge-weighted graph , i.e., a graph with weights on its edges, which become the entries of the graph adjacency matrix . Spectral graph theory relates properties of a graph to a spectrum, i.e., eigenvalues and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. Imbalanced weights may undesirably affect the matrix spectrum, leading to the need for normalization — a column/row scaling of the matrix entries — resulting in normalized adjacency and Laplacian matrices. 
Given a simple graph G {\displaystyle G} with n {\displaystyle n} vertices v 1 , … , v n {\displaystyle v_{1},\ldots ,v_{n}} , its Laplacian matrix L n × n {\textstyle L_{n\times n}} is defined element-wise as [ 1 ] or equivalently by the matrix where D is the degree matrix , and A is the graph's adjacency matrix . Since G {\textstyle G} is a simple graph, A {\textstyle A} only contains 1s or 0s and its diagonal elements are all 0s. Here is a simple example of a labelled, undirected graph and its Laplacian matrix. We observe for the undirected graph that both the adjacency matrix and the Laplacian matrix are symmetric and that the row- and column-sums of the Laplacian matrix are all zeros (which directly implies that the Laplacian matrix is singular). For directed graphs , either the indegree or outdegree might be used, depending on the application, as in the following example: In the directed graph, the adjacency matrix and Laplacian matrix are asymmetric. In its Laplacian matrix, column-sums or row-sums are zero, depending on whether the indegree or outdegree has been used. The | v | × | e | {\textstyle |v|\times |e|} oriented incidence matrix B with element B ve for the vertex v and the edge e (connecting vertices v i {\textstyle v_{i}} and v j {\textstyle v_{j}} , with i ≠ j ) is defined by Even though the edges in this definition are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian | v | × | v | {\textstyle |v|\times |v|} matrix L defined as where B T {\textstyle B^{\textsf {T}}} is the matrix transpose of B . An alternative product B T B {\displaystyle B^{\textsf {T}}B} defines the so-called | e | × | e | {\textstyle |e|\times |e|} edge-based Laplacian, as opposed to the original commonly used vertex-based Laplacian matrix L . 
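The construction L = D − A, together with the zero row-sum property noted above, can be reproduced in a few lines. A sketch in plain Python (the 4-cycle graph is an illustrative choice, not the article's pictured example):

```python
# Undirected simple graph on vertices 0..3 with edges forming a
# 4-cycle {0-1, 1-2, 2-3, 3-0}; its adjacency matrix A is symmetric.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1

deg = [sum(row) for row in A]  # diagonal of the degree matrix D
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)]
     for i in range(n)]

print(L)                        # [[2, -1, 0, -1], ...]
print([sum(row) for row in L])  # every row sums to 0
```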
The Laplacian matrix of a directed graph is by definition generally non-symmetric, while, e.g., traditional spectral clustering is primarily developed for undirected graphs with symmetric adjacency and Laplacian matrices. A trivial approach to applying techniques requiring the symmetry is to turn the original directed graph into an undirected graph and build the Laplacian matrix for the latter. In matrix notation, the adjacency matrix of the undirected graph could, e.g., be defined as a Boolean sum of the adjacency matrix A of the original directed graph and its matrix transpose A^T, where the zero and one entries of A are treated as logical, rather than numerical, values, as in the following example: A vertex with a large degree, also called a heavy node , results in a large diagonal entry in the Laplacian matrix, dominating the matrix properties. Normalization aims to make the influence of such vertices more equal to that of other vertices, by dividing the entries of the Laplacian matrix by the vertex degrees. To avoid division by zero, isolated vertices with zero degrees are excluded from the process of normalization. The symmetrically normalized Laplacian matrix is defined as [ 1 ]

L^sym := (D^+)^{1/2} L (D^+)^{1/2},

where D^+ is the Moore–Penrose inverse of the degree matrix. The elements of L^sym are thus given by

L^sym_{i,j} := 1 if i = j and deg(v_i) ≠ 0; −1/sqrt(deg(v_i) deg(v_j)) if i ≠ j and v_i is adjacent to v_j; 0 otherwise.

The symmetrically normalized Laplacian matrix is symmetric if and only if the adjacency matrix is symmetric. For a non-symmetric adjacency matrix of a directed graph, either the indegree or the outdegree can be used for normalization: The left (random-walk) normalized Laplacian matrix is defined as

L^rw := D^+ L = I − D^+ A,

where D^+ is the Moore–Penrose inverse . 
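The Boolean symmetrization described above can be sketched as follows; the 3-vertex directed cycle is a made-up example:

```python
import numpy as np

# Hypothetical directed 3-cycle.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

# Boolean (logical) sum of A and its transpose: entries are OR-ed, not added.
A_sym = np.logical_or(A, A.T).astype(int)

# The Laplacian of the resulting undirected graph is symmetric.
L = np.diag(A_sym.sum(axis=1)) - A_sym
assert (L == L.T).all()
```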
The elements of L^rw are given by

L^rw_{i,j} := 1 if i = j and deg(v_i) ≠ 0; −1/deg(v_i) if i ≠ j and v_i is adjacent to v_j; 0 otherwise.

Similarly, the right normalized Laplacian matrix is defined as L D^+ = I − A D^+. The left or right normalized Laplacian matrix is not symmetric if the adjacency matrix is symmetric, except for the trivial case of all isolated vertices. For example, The example also demonstrates that if G has no isolated vertices, then D^+ A is right stochastic and hence is the matrix of a random walk , so that the left normalized Laplacian L^rw := D^+ L = I − D^+ A has each row summing to zero. Thus we sometimes alternatively call L^rw the random-walk normalized Laplacian. In the less commonly used right normalized Laplacian L D^+ = I − A D^+ each column sums to zero since A D^+ is left stochastic . For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization: The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic D_out^+ A, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic A D_in^+. Graphs with weighted edges, common in applications, are conveniently defined by their adjacency matrices, where the values of the entries are numeric and no longer limited to zeros and ones. In spectral clustering and graph-based signal processing , where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative, with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. 
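The three normalized Laplacians can be sketched together; the helper below (a hypothetical function name) uses the Moore–Penrose pseudoinverse of the degree matrix so that isolated vertices are simply left out of the normalization:

```python
import numpy as np

def normalized_laplacians(A):
    """Symmetric, left (random-walk) and right normalized Laplacians.

    A sketch assuming a non-negative adjacency matrix A; zero-degree
    vertices get zero rows/columns via the pseudoinverse of D.
    """
    d = A.sum(axis=1).astype(float)
    d_inv = np.where(d > 0, 1.0 / d, 0.0)      # diagonal of D^+
    D_plus = np.diag(d_inv)
    D_half = np.diag(np.sqrt(d_inv))           # (D^+)^{1/2}
    L = np.diag(d) - A
    return D_half @ L @ D_half, D_plus @ L, L @ D_plus

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])               # vertex 2 is isolated
L_sym, L_rw, L_right = normalized_laplacians(A)
assert np.allclose(L_rw.sum(axis=1), 0)       # rows of D^+ L sum to zero
assert np.allclose(L_right.sum(axis=0), 0)    # columns of L D^+ sum to zero
```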
Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization. The Laplacian matrix is defined by L = D − A, where D is the degree matrix and A is the adjacency matrix of the graph. For directed graphs , either the indegree or outdegree might be used, depending on the application, as in the following example: Graph self-loops, manifesting themselves by non-zero entries on the main diagonal of the adjacency matrix, are allowed but do not affect the graph Laplacian values. For graphs with weighted edges one can define a weighted incidence matrix B and use it to construct the corresponding symmetric Laplacian as L = B B^T. An alternative, cleaner approach, described here, is to separate the weights from the connectivity: continue using the incidence matrix as for regular graphs and introduce a matrix just holding the values of the weights. A spring system is an example of this model used in mechanics to describe a system of springs of given stiffnesses and unit length, where the values of the stiffnesses play the role of the weights of the graph edges. We thus reuse the definition of the weightless |v| × |e| incidence matrix B with element B_{ve} for the vertex v and the edge e (connecting vertices v_i and v_j, with i > j) defined by

B_{ve} := 1 if v = v_i; −1 if v = v_j; 0 otherwise.

We now also define a diagonal |e| × |e| matrix W containing the edge weights. Even though the edges in the definition of B are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian |v| × |v| matrix L defined as L = B W B^T, where B^T is the matrix transpose of B . 
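The incidence-based construction L = B W B^T coincides with D − A, as the following sketch on a made-up weighted 3-vertex path shows:

```python
import numpy as np

# Weightless incidence matrix of a hypothetical path v1 - v2 - v3:
# one column per edge, +1 at one endpoint and -1 at the other.
B = np.array([[ 1,  0],
              [-1,  1],
              [ 0, -1]], dtype=float)
W = np.diag([2.0, 5.0])          # edge weights (e.g. spring stiffnesses)

L = B @ W @ B.T                  # weighted Laplacian

# Same result as D - A for the weighted adjacency matrix:
A = np.array([[0, 2, 0],
              [2, 0, 5],
              [0, 5, 0]], dtype=float)
assert np.allclose(L, np.diag(A.sum(axis=1)) - A)
```

Flipping the sign of any column of B (i.e. reversing that edge's arbitrary orientation) leaves L unchanged.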
The construction is illustrated in the following example, where every edge e_i is assigned the weight value i, with i = 1, 2, 3, 4. Just like for simple graphs, the Laplacian matrix of a directed weighted graph is by definition generally non-symmetric. The symmetry can be enforced by turning the original directed graph into an undirected graph first, before constructing the Laplacian. The adjacency matrix of the undirected graph could, e.g., be defined as a sum of the adjacency matrix A of the original directed graph and its matrix transpose A^T, as in the following example: where the zero and one entries of A are treated as numerical, rather than logical as for simple graphs, values. This explains the difference in the results: for simple graphs, the symmetrized graph still needs to be simple, with its symmetrized adjacency matrix having only logical, not numerical, values, e.g., the logical sum is 1 ∨ 1 = 1, while the numeric sum is 1 + 1 = 2. Alternatively, the symmetric Laplacian matrix can be calculated from the two Laplacians using the indegree and outdegree , as in the following example: The sum of the out-degree Laplacian transposed and the in-degree Laplacian equals the symmetric Laplacian matrix. The goal of normalization is, like for simple graphs, to make the diagonal entries of the Laplacian matrix all equal to one, scaling the off-diagonal entries correspondingly. In a weighted graph , a vertex may have a large degree because of a small number of connected edges with large weights, just as well as because of a large number of connected edges with unit weights. Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the graph Laplacian values, but may need to be counted for calculation of the normalization factors. 
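The identity stated above, that the transposed out-degree Laplacian plus the in-degree Laplacian gives the Laplacian of the symmetrized graph, holds for any weighted adjacency matrix; a quick check on a made-up directed graph:

```python
import numpy as np

A = np.array([[0, 2, 0],
              [0, 0, 3],
              [1, 0, 0]], dtype=float)    # hypothetical weighted digraph

L_out = np.diag(A.sum(axis=1)) - A        # out-degree Laplacian
L_in  = np.diag(A.sum(axis=0)) - A        # in-degree Laplacian

A_sym = A + A.T                           # numerical (not logical) sum
L_sym = np.diag(A_sym.sum(axis=1)) - A_sym
assert np.allclose(L_out.T + L_in, L_sym)
```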
The symmetrically normalized Laplacian is defined as

L^sym := (D^+)^{1/2} L (D^+)^{1/2},

where L is the unnormalized Laplacian, A is the adjacency matrix, D is the degree matrix, and D^+ is the Moore–Penrose inverse . Since the degree matrix D is diagonal, its reciprocal square root (D^+)^{1/2} is just the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the diagonal entries of D . If all the edge weights are nonnegative then all the degree values are automatically also nonnegative, and so every degree value has a unique positive square root. To avoid division by zero, vertices with zero degrees are excluded from the process of normalization, as in the following example: The symmetrically normalized Laplacian is a symmetric matrix if and only if the adjacency matrix A is symmetric and the diagonal entries of D are nonnegative, in which case we can use the term the symmetric normalized Laplacian . The symmetric normalized Laplacian matrix can also be written as L^sym = S S^T using the weightless |v| × |e| incidence matrix B and the diagonal |e| × |e| matrix W containing the edge weights, defining the new |v| × |e| weighted incidence matrix S = (D^+)^{1/2} B W^{1/2} whose rows are indexed by the vertices and whose columns are indexed by the edges of G , such that each column corresponding to an edge e = {u, v} has an entry 1/sqrt(d_u) in the row corresponding to u , an entry −1/sqrt(d_v) in the row corresponding to v , and has 0 entries elsewhere. The random walk normalized Laplacian is defined as L^rw := D^+ L, where D is the degree matrix. 
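The factorization L^sym = S S^T with S = (D^+)^{1/2} B W^{1/2} can be confirmed numerically on a made-up weighted path graph:

```python
import numpy as np

B = np.array([[ 1,  0],
              [-1,  1],
              [ 0, -1]], dtype=float)   # weightless incidence matrix
W = np.diag([2.0, 5.0])                 # edge weights

L = B @ W @ B.T                         # unnormalized Laplacian
d = np.diag(L)                          # degrees (all positive here)
D_half = np.diag(d ** -0.5)             # (D^+)^{1/2}

S = D_half @ B @ np.sqrt(W)             # weighted incidence matrix
L_sym = D_half @ L @ D_half             # symmetrically normalized Laplacian
assert np.allclose(S @ S.T, L_sym)
```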
Since the degree matrix D is diagonal, its inverse D^+ is simply defined as a diagonal matrix having diagonal entries which are the reciprocals of the corresponding diagonal entries of D . For the isolated vertices (those with degree 0), a common choice is to set the corresponding element L^rw_{i,i} to 0. The matrix elements of L^rw are given by

L^rw_{i,j} := 1 if i = j and deg(v_i) ≠ 0; −a_{i,j}/deg(v_i) if i ≠ j and v_i is adjacent to v_j; 0 otherwise.

The name of the random-walk normalized Laplacian comes from the fact that this matrix is L^rw = I − P, where P = D^+ A is simply the transition matrix of a random walker on the graph, assuming non-negative weights. For example, let e_i denote the i-th standard basis vector. Then x = e_i P is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex i; i.e., x_j = P(v_i → v_j). More generally, if the vector x is a probability distribution of the location of a random walker on the vertices of the graph, then x′ = x P^t is the probability distribution of the walker after t steps. The random walk normalized Laplacian can also be called the left normalized Laplacian L^rw := D^+ L, since the normalization is performed by multiplying the Laplacian by the normalization matrix D^+ on the left. It has each row summing to zero, since P = D^+ A is right stochastic , assuming all the weights are non-negative. In the less commonly used right normalized Laplacian L D^+ = I − A D^+ each column sums to zero since A D^+ is left stochastic . 
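The random-walk interpretation can be sketched as follows; on a made-up regular graph, the distribution x P^t approaches the uniform stationary distribution:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # hypothetical triangle graph
P = np.diag(1.0 / A.sum(axis=1)) @ A     # transition matrix P = D^+ A
assert np.allclose(P.sum(axis=1), 1)     # right stochastic

x = np.array([1.0, 0.0, 0.0])            # walker starts at vertex 0
for _ in range(50):                      # x P^t with t = 50
    x = x @ P
# The triangle is regular, so the stationary distribution is uniform.
assert np.allclose(x, 1 / 3, atol=1e-6)
```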
For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization: The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic D_out^+ A, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic A D_in^+. Negative weights present several challenges for normalization. For an (undirected) graph G and its Laplacian matrix L with eigenvalues λ_0 ≤ λ_1 ≤ ⋯ ≤ λ_{n−1}: L is symmetric and positive semi-definite; its smallest eigenvalue is λ_0 = 0, with the all-ones vector as an eigenvector, and the multiplicity of the eigenvalue 0 equals the number of connected components of the graph. Because λ_i can be written as the inner product of the vector M v_i with itself, this shows that λ_i ≥ 0, and so the eigenvalues of L are all non-negative. Moreover,

L^rw = (D^+)^{1/2} L^sym D^{1/2},

i.e., L^rw is similar to the normalized Laplacian L^sym. For this reason, even if L^rw is in general not symmetric, it has real eigenvalues, exactly the same as the eigenvalues of the normalized symmetric Laplacian L^sym. The graph Laplacian matrix can be further viewed as a matrix form of the negative discrete Laplace operator on a graph, approximating the negative continuous Laplacian operator obtained by the finite difference method . (See Discrete Poisson equation ) [ 2 ] In this interpretation, every graph vertex is treated as a grid point; the local connectivity of the vertex determines the finite difference approximation stencil at this grid point, the grid size is always one for every edge, and there are no constraints on any grid points, which corresponds to the case of the homogeneous Neumann boundary condition , i.e., free boundary. 
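The similarity of L^rw and L^sym, and hence the realness and equality of their spectra, can be verified numerically on a made-up connected graph:

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)   # hypothetical graph, no isolated vertices
d = A.sum(axis=1)
L = np.diag(d) - A

L_sym = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)
L_rw  = np.diag(1.0 / d) @ L

# L_rw = D^{-1/2} L_sym D^{1/2}: similar matrices share their spectrum.
ev_sym = np.sort(np.linalg.eigvalsh(L_sym))
ev_rw  = np.sort(np.linalg.eigvals(L_rw).real)
assert np.allclose(ev_sym, ev_rw)
assert ev_sym.min() > -1e-12           # all eigenvalues are non-negative
```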
Such an interpretation allows one, e.g., to generalize the Laplacian matrix to the case of graphs with an infinite number of vertices and edges, leading to a Laplacian matrix of infinite size. The generalized Laplacian Q is defined as [ 3 ]

Q_{i,j} < 0 if i ≠ j and v_i is adjacent to v_j; Q_{i,j} = 0 if i ≠ j and v_i is not adjacent to v_j; any number otherwise.

Notice that the ordinary Laplacian is a generalized Laplacian. The Laplacian of a graph was first introduced to model electrical networks. In an alternating current (AC) electrical network, real-valued resistances are replaced by complex-valued impedances. The weight of edge ( i , j ) is, by convention, minus the reciprocal of the impedance directly between i and j . In models of such networks, the entries of the adjacency matrix are complex, but the Kirchhoff matrix remains symmetric, rather than being Hermitian . Such a matrix is usually called an " admittance matrix ", denoted Y , rather than a "Laplacian". This is one of the rare applications that give rise to complex symmetric matrices . There are other situations in which entries of the adjacency matrix are complex-valued, and the Laplacian does become a Hermitian matrix . The Magnetic Laplacian for a directed graph with real weights w_{ij} is constructed as the Hadamard product of the real symmetric matrix of the symmetrized Laplacian and the Hermitian phase matrix with complex entries, which encode the edge direction into a phase in the complex plane. In the context of quantum physics, the magnetic Laplacian can be interpreted as the operator that describes the phenomenology of a free charged particle on a graph subject to the action of a magnetic field, and the parameter q is called the electric charge. [ 4 ] In the following example q = 1/4: The deformed Laplacian is commonly defined as

Δ(s) := I − sA + s²(D − I),

where I is the identity matrix, A is the adjacency matrix, D is the degree matrix, and s is a (complex-valued) number. 
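A hedged sketch of the magnetic Laplacian under one common convention (symmetrized weights multiplied entrywise by a phase matrix; the exact convention is an assumption here). Whatever the sign choices, the construction yields a Hermitian matrix:

```python
import numpy as np

q = 0.25                                     # "electric charge" parameter
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)       # hypothetical directed 3-cycle

A_s = (A + A.T) / 2                          # symmetrized weights
phase = np.exp(2j * np.pi * q * (A - A.T))   # phase encodes edge direction
H = A_s * phase                              # Hadamard (entrywise) product
L_q = np.diag(A_s.sum(axis=1)) - H           # magnetic Laplacian

assert np.allclose(L_q, L_q.conj().T)        # Hermitian, so real spectrum
```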
[ 5 ] The standard Laplacian is just Δ(1) and Δ(−1) = D + A is the signless Laplacian. The signless Laplacian is defined as Q = D + A, where D is the degree matrix and A is the adjacency matrix. [ 6 ] Like the signed Laplacian L , the signless Laplacian Q is also positive semi-definite, as it can be factored as Q = R R^T, where R is the incidence matrix. Q has a 0-eigenvector if and only if it has a bipartite connected component (isolated vertices being bipartite connected components). This can be shown as

x^T Q x = x^T R R^T x = ‖R^T x‖².

This has a solution where x ≠ 0 if and only if the graph has a bipartite connected component. An analogue of the Laplacian matrix can be defined for directed multigraphs. [ 7 ] In this case the Laplacian matrix L is defined as L = D − A, where D is a diagonal matrix with D_{i,i} equal to the outdegree of vertex i and A is a matrix with A_{i,j} equal to the number of edges from i to j (including loops).
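The spectral bipartiteness test implied by the factorization Q = R R^T can be sketched as follows (`min_signless_eig` is a made-up helper name):

```python
import numpy as np

def min_signless_eig(A):
    """Smallest eigenvalue of the signless Laplacian Q = D + A."""
    Q = np.diag(A.sum(axis=1)) + A
    return np.linalg.eigvalsh(Q).min()

path = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], dtype=float)   # bipartite path graph
triangle = np.ones((3, 3)) - np.eye(3)      # odd cycle, not bipartite

assert abs(min_signless_eig(path)) < 1e-9   # 0-eigenvalue: bipartite component
assert min_signless_eig(triangle) > 0.5     # strictly positive: not bipartite
```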
https://en.wikipedia.org/wiki/Laplacian_matrix
In potential theory (a branch of mathematics ), the Laplacian of the indicator is obtained by letting the Laplace operator work on the indicator function of some domain D . It is a generalisation of the derivative (or "prime function") of the Dirac delta function to higher dimensions ; it is non-zero only on the surface of D . It can be viewed as a surface delta prime function , the derivative of a surface delta function (a generalization of the Dirac delta). The Laplacian of the indicator is also analogous to the second derivative of the Heaviside step function in one dimension. The Laplacian of the indicator can be thought of as having infinitely positive and negative values when evaluated very near the boundary of the domain D . Therefore, it is not strictly a function but a generalized function or measure . Similarly to the derivative of the Dirac delta function in one dimension, the Laplacian of the indicator only makes sense as a mathematical object when it appears under an integral sign; i.e. it is a distribution function. Just as in the formulation of distribution theory, it is in practice regarded as a limit of a sequence of smooth functions ; one may meaningfully take the Laplacian of a bump function , which is smooth by definition, and let the bump function approach the indicator in the limit . Paul Dirac introduced the Dirac δ -function , as it has become known, as early as 1930. [ 1 ] The one-dimensional Dirac δ -function is non-zero only at a single point. Likewise, the multidimensional generalisation, as it is usually made, is non-zero only at a single point. In Cartesian coordinates, the d -dimensional Dirac δ -function is a product of d one-dimensional δ -functions; one for each Cartesian coordinate (see e.g. generalizations of the Dirac delta function ). A generalisation of the Dirac delta is possible beyond a single point. The point zero, in one dimension, can be considered as the boundary of the positive halfline . 
The function 1_{x>0} equals 1 on the positive halfline and zero otherwise, and is also known as the Heaviside step function . Formally, the Dirac δ -function and its derivative can be viewed as the first and second derivative of the Heaviside step function, i.e. ∂_x 1_{x>0} and ∂_x² 1_{x>0}. The analogue of the step function in higher dimensions is the indicator function , which can be written as 1_{x∈D}, where D is some domain. The indicator function is also known as the characteristic function. In analogy with the one-dimensional case, the following higher-dimensional generalisations of the Dirac δ -function and its derivative have been proposed: [ 2 ]

δ(x) → −n_x ⋅ ∇_x 1_{x∈D},
δ′(x) → ∇_x² 1_{x∈D}.

Here n is the outward normal vector . Here the Dirac δ -function is generalised to a surface delta function on the boundary of some domain D in d ≥ 1 dimensions. This definition gives the usual one-dimensional case, when the domain is taken to be the positive halfline. It is zero except on the boundary of the domain D (where it is infinite), and it integrates to the total surface area enclosing D , as shown below . The one-dimensional Dirac delta prime function is generalised to a multidimensional surface delta prime function on the boundary of some domain D in d ≥ 1 dimensions. In one dimension and by taking D equal to the positive halfline, the usual one-dimensional δ' -function can be recovered. Both the normal derivative of the indicator and the Laplacian of the indicator are supported by surfaces rather than points . The generalisation is useful in e.g. quantum mechanics, as surface interactions can lead to boundary conditions in d > 1, while point interactions cannot. Naturally, point and surface interactions coincide for d = 1. Both surface and point interactions have a long history in quantum mechanics, and there exists a sizeable literature on so-called surface delta potentials or delta-sphere interactions. 
[ 3 ] Surface delta functions use the one-dimensional Dirac δ -function, but as a function of the radial coordinate r , e.g. δ( r − R ) where R is the radius of the sphere. Although seemingly ill-defined, derivatives of the indicator function can formally be defined using the theory of distributions or generalized functions : one can obtain a well-defined prescription by postulating that the Laplacian of the indicator, for example, is defined by two integrations by parts when it appears under an integral sign. Alternatively, the indicator (and its derivatives) can be approximated using a bump function (and its derivatives). The limit, where the (smooth) bump function approaches the indicator function, must then be put outside of the integral. This section will prove that the Laplacian of the indicator is a surface delta prime function . The surface delta function will be considered below. First, for a function f in the interval ( a , b ), recall the fundamental theorem of calculus

∫_a^b f′(x) dx = f(b) − f(a),

assuming that f′ is locally integrable. Now for a < b it follows, by proceeding heuristically, that

∫_{−∞}^{∞} f(x) ∂_x² 1_{a<x<b} dx = ∫_{−∞}^{∞} f″(x) 1_{a<x<b} dx = ∫_a^b f″(x) dx = f′(b) − f′(a).

Here 1_{a<x<b} is the indicator function of the domain a < x < b . The indicator equals one when the condition in its subscript is satisfied, and zero otherwise. In this calculation, two integrations by parts (combined with the fundamental theorem of calculus as shown above) show that the first equality holds; the boundary terms are zero when a and b are finite, or when f vanishes at infinity. The last equality shows a sum of outward normal derivatives, where the sum is over the boundary points a and b , and where the signs follow from the outward direction (i.e. positive for b and negative for a ). Although derivatives of the indicator do not formally exist, following the usual rules of partial integration provides the 'correct' result. 
When considering a finite d -dimensional domain D , the sum over outward normal derivatives is expected to become an integral , which can be confirmed as follows:

∫_{R^d} f(x) ∇_x² 1_{x∈D} d^d x = ∮_{∂D} n_β ⋅ lim_{x→β} ∇_x f(x) dβ,

where the limit is of x approaching surface β from inside domain D , n_β is the unit vector normal to surface β, and ∇_x is now the multidimensional gradient operator. As before, the first equality follows by two integrations by parts (in higher dimensions this proceeds by Green's second identity ), where the boundary terms disappear as long as the domain D is finite or if f vanishes at infinity; e.g. both 1_{x∈D} and ∇_x 1_{x∈D} are zero when evaluated at the 'boundary' of R^d when the domain D is finite. The third equality follows by the divergence theorem and shows, again, a sum (or, in this case, an integral) of outward normal derivatives over all boundary locations. The divergence theorem is valid for piecewise smooth domains D , and hence D needs to be piecewise smooth. Thus the surface delta prime function (a.k.a. Dirac δ' -function) exists on a piecewise smooth surface, and is equivalent to the Laplacian of the indicator function of the domain D encompassed by that piecewise smooth surface. Naturally, the difference between a point and a surface disappears in one dimension. In electrostatics, a surface dipole (or double layer potential ) can be modelled by the limiting distribution of the Laplacian of the indicator. The calculation above derives from research on path integrals in quantum physics. [ 2 ] This section will prove that the (inward) normal derivative of the indicator is a surface delta function . For a finite domain D or when f vanishes at infinity, it follows by the divergence theorem that

∫_{R^d} ∇_x ⋅ (f(x) ∇_x 1_{x∈D}) d^d x = 0.

By the product rule , it follows that

∫_{R^d} f(x) ∇_x² 1_{x∈D} d^d x + ∫_{R^d} ∇_x f(x) ⋅ ∇_x 1_{x∈D} d^d x = 0.

Following from the analysis of the section above , the first term equals an integral of outward normal derivatives over the boundary, and thus

∮_{∂D} n_β ⋅ lim_{x→β} ∇_x f(x) dβ = −∫_{R^d} ∇_x f(x) ⋅ ∇_x 1_{x∈D} d^d x.

The gradient of the indicator vanishes everywhere, except near the boundary of D , where it points in the normal direction. 
Therefore, only the component of ∇_x f(x) in the normal direction is relevant. Suppose that, near the boundary, ∇_x f(x) is equal to n_x g(x), where g is some other function. Then it follows that

∮_{∂D} g(β) dβ = −∫_{R^d} g(x) n_x ⋅ ∇_x 1_{x∈D} d^d x.

The outward normal n_x was originally only defined for x in the surface, but it can be defined to exist for all x ; for example by taking the outward normal of the boundary point nearest to x . The foregoing analysis shows that − n_x ⋅ ∇_x 1_{x∈D} can be regarded as the surface generalisation of the one-dimensional Dirac delta function . By setting the function g equal to one, it follows that the inward normal derivative of the indicator integrates to the surface area of D . In electrostatics, surface charge densities (or single boundary layers ) can be modelled using the surface delta function as above. The usual Dirac delta function can be used in some cases, e.g. when the surface is spherical. In general, the surface delta function discussed here may be used to represent the surface charge density on a surface of any shape. The calculation above derives from research on path integrals in quantum physics. [ 2 ] This section shows how derivatives of the indicator can be treated numerically under an integral sign. In principle, the indicator cannot be differentiated numerically, since its derivative is either zero or infinite. But, for practical purposes, the indicator can be approximated by a bump function , indicated by I_ε(x) and approaching the indicator for ε → 0. Several options are possible, but it is convenient to let the bump function be non-negative and approach the indicator from below , i.e.

0 ≤ I_ε(x) ≤ 1_{x∈D}.

This ensures that the family of bump functions is identically zero outside of D . This is convenient, since it is possible that the function f is only defined in the interior of D . 
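The bump-function prescription can be illustrated numerically in one dimension: moving both derivatives onto a test function f by two integrations by parts, and replacing the indicator of (a, b) with a smooth bump that approaches it from below, recovers f′(b) − f′(a). The tanh-based bump below is one arbitrary choice:

```python
import numpy as np

# Check  ∫ f(x) ∂²ₓ 1_{a<x<b} dx = f'(b) − f'(a)  for f(x) = exp(x),
# via  ∫ f''(x) I_eps(x) dx  with a smooth bump I_eps ≤ 1_{a<x<b}.
a, b, eps = 0.0, 1.0, 1e-4
x = np.linspace(a, b, 200001)

bump = np.tanh((x - a) / eps) * np.tanh((b - x) / eps)   # I_eps
f2 = np.exp(x)                        # f'' for f(x) = exp(x)

y = f2 * bump
integral = np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))     # trapezoidal rule
exact = np.exp(b) - np.exp(a)         # f'(b) − f'(a)
assert abs(integral - exact) < 1e-3   # agreement up to the bump width
```

Shrinking eps tightens the agreement, consistent with the limit ε → 0 being taken outside the integral.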
For f defined in D , we thus obtain the following: where the interior coordinate α approaches the boundary coordinate β from the interior of D , and where there is no requirement for f to exist outside of D . When f is defined on both sides of the boundary, and is furthermore differentiable across the boundary of D , then it is less crucial how the bump function approaches the indicator. If the test function f is possibly discontinuous across the boundary, then distribution theory for discontinuous functions may be used to make sense of surface distributions; see e.g. section V in [ 4 ]. In practice, for the surface delta function this usually means averaging the value of f on both sides of the boundary of D before integrating over the boundary. Likewise, for the surface delta prime function it usually means averaging the outward normal derivative of f on both sides of the boundary of the domain D before integrating over the boundary. In quantum mechanics , point interactions are well known and there is a large body of literature on the subject. A well-known example of a one-dimensional singular potential is the Schrödinger equation with a Dirac delta potential . [ 5 ] [ 6 ] The one-dimensional Dirac delta prime potential, on the other hand, has caused controversy. [ 7 ] [ 8 ] [ 9 ] The controversy was seemingly settled by an independent paper, [ 10 ] although even this paper attracted later criticism. [ 2 ] [ 11 ] A lot more attention has been focused on the one-dimensional Dirac delta prime potential recently. [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] A point on the one-dimensional line can be considered both as a point and as a surface, since a point marks the boundary between two regions. 
Two generalisations of the Dirac delta-function to higher dimensions have thus been made: the generalisation to a multidimensional point, [ 29 ] [ 30 ] as well as the generalisation to a multidimensional surface. [ 2 ] [ 31 ] [ 32 ] [ 33 ] [ 34 ] The former generalisations are known as point interactions, whereas the latter are known under different names, e.g. "delta-sphere interactions" and "surface delta interactions". The latter generalisations may use derivatives of the indicator, as explained here, or the one-dimensional Dirac δ -function as a function of the radial coordinate r . The Laplacian of the indicator has been used in fluid dynamics, e.g. to model the interfaces between different media. [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] [ 40 ] The divergence of the indicator and the Laplacian of the indicator (or of the characteristic function , as the indicator is also known) have been used as the sample information from which surfaces can be reconstructed. [ 41 ] [ 42 ]
https://en.wikipedia.org/wiki/Laplacian_of_the_indicator
Laplacian smoothing is an algorithm to smooth a polygonal mesh . [ 1 ] [ 2 ] For each vertex in a mesh, a new position is chosen based on local information (such as the position of neighbours) and the vertex is moved there. In the case that a mesh is topologically a rectangular grid (that is, each internal vertex is connected to four neighbours) then this operation produces the Laplacian of the mesh. More formally, the smoothing operation may be described per-vertex as:

x̄_i = (1/N) Σ_{j=1}^{N} x̄_j,

where N is the number of adjacent vertices to node i , x̄_j is the position of the j -th adjacent vertex and x̄_i is the new position for node i . [ 3 ]
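The per-vertex averaging rule above can be sketched in a few lines (`laplacian_smooth` is a hypothetical helper; boundary handling and weighting schemes are omitted):

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, iterations=1):
    """Jacobi-style Laplacian smoothing: each vertex moves to the
    average position of its neighbours, all vertices updated at once."""
    V = np.asarray(vertices, dtype=float)
    for _ in range(iterations):
        V = np.array([V[nbrs].mean(axis=0) if nbrs else V[i]
                      for i, nbrs in enumerate(neighbors)])
    return V

# Tiny 1-D "mesh": after one pass the middle vertex moves to the
# midpoint of its two neighbours.
V = laplacian_smooth([[0.0], [0.3], [1.0]], [[1], [0, 2], [1]])
assert V[1, 0] == 0.5
```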
https://en.wikipedia.org/wiki/Laplacian_smoothing
The south bank of the Humber Estuary in England is a relatively unpopulated area containing large-scale industrial development built from the 1950s onward, including national-scale petroleum and chemical plants as well as gigawatt-scale gas-fired power stations. Historically the south bank was undeveloped, and mostly unpopulated, excluding the medieval port of Grimsby and lesser havens at Barton upon Humber and Barrow upon Humber . Industrial activity increased from the 19th century onwards, primarily brick and tile works utilising the clay extracted from the banks of the Humber; this, together with chalk extraction at the edge of the Lincolnshire Wolds , formed the basis of a cement industry. Grimsby expanded during the industrial 19th century; Immingham Dock was established in 1911, and a large-scale cement works was established near South Ferriby in 1938. Most of the brick and tile works ceased operation around the 1950s. From the 1950s onwards a number of chemical plants were built between Immingham and Grimsby, and two major oil refineries were built south of Immingham Dock in the 1960s. Growth and development of the oil and chemical industries took place through the 20th century, with some contraction of chemical works occurring in the late 20th century. At the end of the 20th century and beginning of the 21st century a number of combined cycle gas turbine power stations were built (see also dash for gas ), some of which utilised 'waste' steam to provide nearby petroleum and chemical plants with heat energy. During the same time frame a large area of former clay workings from earlier brick and tile activity was converted into water parks in the Barton area. The port of Grimsby , [ map 1 ] was a significant local town and market in the medieval period, with fish being the predominant traded good. 
From around the 14th century the port's importance in international trade diminished, in part due to competition from Hull, Boston, and the Hanseatic League , whilst coastal trade and inland waterway trade became more important. In addition to fish, a trade in foodstuffs also took place, as well as coals from Newcastle and the export of peat dug in Yorkshire. [ 1 ] Grimsby's population declined from around 1,400 in 1377 to around 750 by 1600 and to around 400 by the early 1700s. In the late 1700s a new dock was built at Grimsby under the engineer John Rennie , opening in 1800. In the 1840s the Sheffield, Ashton-under-Lyne and Manchester Railway constructed a rail line to the town, and a new dock was constructed in the same period; the town redeveloped as a port, and its growth re-initiated. Several new docks were constructed between 1850 and 1900, with a third fish dock added in 1934. Rail connections linked the port to South Yorkshire, Lancashire and the Midlands; the net tonnage handled by the port increased from 163,000 in the 1850s to 3,777,000 by 1911. The port was also a major fishing centre, landing around 20% of the total UK catch (1934). [ 2 ] The town's population rose consistently from 1,500 in 1801, to 75,000 in 1901, and to 92,000 in 1931. [ 3 ] Neighbouring Cleethorpes also developed as a residential area for Grimsby as well as a seaside resort during the 19th century. [ 4 ] [ 5 ] In the 20th century, port-based industries formed the main economic activities, with fishing being particularly important and influencing other industries in the town, specifically food processing, in particular frozen foods. In the late 1960s around 3,500 were employed directly in the fishing industry; 10,000 were employed in food industries, of which 6,000 were in fish processing activities; 2,500 in shipbuilding and repair; other lesser employment activities included engineering and timber-related businesses. 
Most of Grimsby's industries were concentrated on the Dock's estate, and later Pyewipe, west of the main centre. [ 6 ] In 1911 Immingham Dock was opened, [ map 2 ] constructed for the Great Central Railway , primarily for the export of coal; the new dock was located at a point where the deep water channel of the Humber Estuary swung close to the south bank, with estuary side jetties that could handle ships of up to 30,000 deadweight tonnage . [ 7 ] In the interwar period industry was developed on the north bank of the Humber in out of town locations: petroleum refining at Salt End ( BP Saltend ); smelting and cement manufacture at Melton ( Capper Pass and the Humber Cement Works ); and aircraft at Brough ( Blackburn Aeroplane & Motor Company , later British Aerospace ). [ 8 ] During the 1970s and early 1980s the fishing industry of Grimsby declined (to less than 15% of 1970 levels by tonnage by 1983 [ 9 ] ) due to fuel costs ( 1973 fuel crisis ), decline in fish stocks, Icelandic exclusion zones (see cod war ), and new EEC fishing limits. Though the port's market share remained roughly constant at around 20%, [ 10 ] imported landings from Icelandic ships (as well as from ships of Norway, the Faroes, Denmark, Belgium and Holland) became important to the continuation of Grimsby's role as a 'fish port'. [ 11 ] [ 12 ] After the end of the Second World War the Humber bank area between Grimsby and Immingham began to be developed by capital intensive industries, mainly focused on petroleum and chemicals. In addition to access to a modern port (Immingham), the regional advantages were the availability of large areas of undeveloped flat land at low cost, with the Humber allowing discharge of effluent. The area was earmarked as being suitable for 'special' industries, such as those dealing with hazardous products or processes. Several large conglomerates or their subsidiaries acquired land banks in the area, and began developing industrial sites. 
[ 13 ] Grimsby Corporation acquired 694 acres (281 ha) of land between 1946 and 1953, improved the road and rail links, [ note 1 ] and sought industrial developers. [ 14 ] An additional 189 acres (76 ha) was acquired by the corporation in Great Coates in 1960 and developed into a light industrial estate. [ map 3 ] [ note 2 ] [ 15 ] Developers included British Titan Products (1949, titanium dioxide pigment), Fisons (1950, phosphate fertilizer), CIBA Laboratories (1951, pharmaceuticals), Laporte Industries (1953, titanium dioxide pigment), and Courtaulds (1957, viscose and acrylic fibres). [ 13 ] [ 16 ] By 1961 developments occupied around 1,100 acres (450 ha) and employed over 4,000 persons. [ 14 ] A limit to development was the fresh water supply available to industry. By the beginning of the 1960s Fisons, Laporte, CIBA, Titan and Courtaulds were consuming 10,000,000 imperial gallons (45,000,000 L; 12,000,000 US gal) per day, all of which was acquired from the chalk aquifer , [ 17 ] some from the companies' own boreholes; this, combined with Grimsby's water demand, gave a total requirement of around 30,000,000 imperial gallons (140,000,000 L; 36,000,000 US gal) per day, which was considered close to what the aquifer could sustainably supply. As a consequence, additional sources of supply were sought by the water board. [ 18 ] In the late 1960s two oil refineries ( Total -Fina and Continental Oil ) were established near Immingham , supplied from an estuary pier beyond Cleethorpes at Tetney . [ 19 ] Initially rail transport links were good, but road transport infrastructure was very poor, essentially rural lanes. [ 20 ] In the late 1960s the government identified the Humber region generally as suitable for large scale industrial development; subsequently development of the road networks on both banks was authorised (see M180 motorway , also M62 ), as well as the construction of the Humber Bridge . 
[ 19 ] A number of proposed or potential large scale developments in the latter part of the 20th century were not taken forward. The CEGB acquired a 360 acres (146 ha) site near Killingholme in 1960, and obtained consent for a 4 GW oil fired power station in 1972; the project was abandoned after the 1973 oil crisis . In 1985 the Killingholme site was listed as a possible NIREX disposal site for low level nuclear waste , and in 1986 the CEGB listed Killingholme as a potential site for a coal fired power station. [ 21 ] [ note 3 ] A plan to reclaim land from the Humber at Pyewipe west of Grimsby using colliery waste was supported by Great Grimsby borough council as a potential source of new development land; interest in a reclamation scheme dated from at least the mid 1970s, and a report in the 1980s found the scheme feasible but expensive. The scheme was not supported by Humberside County Council , which had sufficient development land elsewhere. [ 23 ] Dow Chemicals also acquired 490 acres (200 ha) of land in the 1970s. [ 24 ] By 1987 9,000 were employed in the South Humber bank area (excluding Grimsby-Cleethorpes and rural north Lincolnshire). [ 25 ] During the 1990s dash for gas several gas turbine power stations with heat recovery steam generators were built in the area, including several gigawatt class units: National Power and Powergen built adjacent 665 and 900 MW combined cycle gas turbine (CCGT) power stations near North Killingholme in the early 1990s; [ 26 ] [ 27 ] Humber Power Ltd. built a CCGT plant in two phases (1994–1999), with a final power output of 1.2 GW; [ 28 ] and ConocoPhillips built a combined heat and power plant using gas turbine/HRSG/auxiliary boilers in two phases (opened 2004, about 730 MW, and 2009, about 480 MW), used to supply heat (steam) to both the Lindsey and Humber refineries. 
[ 29 ] [ 30 ] The Humber Sea Terminal at North Killingholme Haven, [ map 4 ] is a modern RO-RO port terminal based on an estuary pier with 25 feet (7.5 m) minimum water depth; as of 2014 the terminal is operated by Simon Group Ltd, a subsidiary of C.RO Ports SA . [ 31 ] [ 32 ] [ 33 ] Six Ro-Ro terminals were developed in 2000 (1&2), 2003 (3&4) and 2007 (5&6). [ 34 ] [ 35 ] Despite these developments the general character of the north Lincolnshire area in 1990 was agricultural, much of it large scale arable farming on high grade land, [ 36 ] a pattern that is unchanged at the beginning of the 21st century. [ 37 ] A 13+9 MW combined heat and power gas and steam turbine plant (established 2003 [ 40 ] ) owned by Npower remains on site. [ 41 ] In 1982 Fisons sold its fertilizer business to Norsk Hydro . [ 47 ] In the late 1980s Norsk Hydro built an ammonium nitrate fertilizer plant at Immingham. [ 42 ] In 2000 the company announced it was to close the ammonium nitrate and nitric acid plant at Immingham, resulting in 150 redundancies and ending fertilizer manufacture at the site. [ 48 ] In 2004 Norsk Hydro's fertilizer business was demerged as Yara International . [ 49 ] As of 2014 Yara operates a dry ice plant at Immingham, [ 50 ] as well as a distribution centre for liquid fertilizer products. [ 51 ] In 1992 Ciba completed a £230 million expansion of the Grimsby plant, including two production units, an 8 MW gas fired CHP power plant, and an effluent treatment plant. [ 54 ] [ 55 ] In the mid 1990s Allied Colloids (Bradford) established a production facility between the Ciba and Courtaulds plants near Grimsby. Allied Colloids was acquired by Ciba Specialty Chemicals (a Ciba-Geigy group spin off, 1996) in 1998. [ 56 ] The Allied Colloids site at Grimsby was included in BASF's 2008 acquisitions. In 2010 BASF Performance Products plc was formed, incorporating former Ciba plants; the subsidiary was merged into BASF plc in 2013. 
[ 57 ] In the 1950s Laporte was seeking a site for expansion from its titanium dioxide plant in Kingsway, Luton. The company had acquired 40 acres (16 ha) of land near Grimsby in 1947 for £4,000, but the nearby land was acquired by BTP, so the land was sold and another site was sought. A 100 acres (40 ha) site near Stallingborough containing a former coastal gun battery was acquired; as a result the plant became known as the 'Battery works'. Construction (contracted to Taylor Woodrow ) began in 1950, with 2,500 piles driven to stabilise the ground. In addition to the rail connection an estuary pier was also constructed (reconstructed 1955). Simon Carves was contracted to build the 100 t per day pyrites fuelled sulphuric acid plant. Both the acid and pigment plants became operational in 1953, with a workforce of about 280. Initial planned capacity was 8,000 t per year in two streams; production capacity was increased eightfold over the next 15 years, including extensions to acid production, with a sulphur burning plant (Simon Carves) operational by 1958, and a third acid plant built in 1961. [ 58 ] A research laboratory was opened in 1960. Other production at the site included phthalic anhydride (1966), through a joint venture "Laporte-Synres" with Chemische Industrie Synres (Netherlands), and the synthetic clay laponite (1968). A plant producing titanium dioxide pigment by the chloride process was commissioned in 1970, and expansion begun in 1976. By 1977 employment was nearly 1,600. [ 58 ] In 1980/1, in part due to increased energy costs, Laporte announced it was to shut down its 40,000 t pa sulphate process titanium dioxide production with the loss of 1,000 jobs; this was later reduced to a halving of production. In 1983/4 Laporte sold its titanium dioxide business to SCM Corporation (USA), and the Laponite production facilities were subsequently transferred to Laporte in Widnes . 
Further expansion of the chloride process for titanium dioxide by SCM led to a production capacity of 78,000 t pa by 1986, whilst production capacity via the sulphate process was 31,000 t pa. [ 58 ] [ 59 ] In 1990 SCM announced it was to reduce production by 24,000 t from 110,000 t pa to comply with EEC environmental regulations. [ 60 ] SCM was acquired by Hanson plc (1986), which demerged Millennium Chemicals (1996), [ 61 ] itself acquired by Lyondell Chemical Company in 2004. [ 62 ] An expansion of titanium dioxide production from 1995 to 1999 increased capacity to 150,000 t pa. [ 63 ] In 2007 Millennium Inorganic Chemicals was acquired by the Saudi Arabian firm Cristal ( National Titanium Dioxide Company Limited ). [ 64 ] [ 65 ] In 2009 the plant employed 400 workers; production was halted temporarily after European demand dropped 35% due to a recession. [ 66 ] In 2019 Cristal was acquired by Tronox ; Cristal's North American TiO2 business was sold to the British chemicals firm Ineos as a condition of the acquisition required by the US Federal Trade Commission. [ 67 ] [ 68 ] There have been serious process safety incidents involving titanium tetrachloride at the plant. In 2010 a vessel containing titanium tetrachloride and hydrochloric acid ruptured, injuring three operators with inhalation and chemical burns from the toxic/corrosive substances; one of the operators subsequently died from his injuries. [ 69 ] [ 70 ] In 2012 the Health and Safety Executive stopped production for 3 months after a release of titanium tetrachloride in 2011. [ 71 ] A twin 6 MW gas turbine plus 3 MW steam turbine plant is operated by NPower Cogen (since 2004, formerly TXU Energy ) at the site. [ 72 ] Akzo Nobel acquired the plant in 1998, [ 77 ] forming the company Acordis after merging it with its own fibre business, which was divested to CVC Capital Partners in 1999. In 2004 production facilities for Tencel were sold to Lenzing . 
[ 77 ] As of 2013 the plant had a capacity of 40,000 t per year of Lyocell /Tencel. [ 78 ] The other production plant (as part of Acordis) entered administration in 2005, at which point employment had been reduced to 475; it was restarted as Fibres Worldwide with a workforce of 275, but entered administration again in 2006. The plant was acquired by Bluestar Group (China) in late 2006, with the product used as a carbon fibre precursor ( Polyacrylonitrile ). Production ended in 2013 due to loss of demand. [ 77 ] A 48 MW gas powered CH&P power station at the site was spun off as Humber Energy Ltd. in 2005 whilst the parent was in administration; the firm was acquired by GDF Suez subsidiary Cofely in 2013. [ 79 ] [ 80 ] In mid 2015 a 1,200,000 square feet (110,000 m 2 ) building space industrial estate was approved for the site. [ 81 ] [ 82 ] Revertex began production of Lithene (liquid polybutadienes ) in 1974 near Stallingborough. [ 86 ] In 1963 the Harlow Chemical Company (Harco) had been established as a joint venture between Revertex and Hoechst for chemical production. [ 87 ] In 1976 Harco began the construction of a 30,000 t pa resin emulsion plant at a greenfield site near Stallingborough; [ 88 ] [ 89 ] the plant began operations in 1978. [ 90 ] Additional dispersion production transferred from Harlow to Stallingborough in 1991. [ 91 ] Revertex was acquired by Yule Catto in 1981. [ 92 ] Doverstrand Ltd. (then a Reichhold Chemicals /Yule Catto joint venture) was renamed Synthomer Ltd. in 1995. [ 93 ] In 2001 Yule Catto took over Harco, acquiring the 50% shareholding of partner Clariant , [ 94 ] and merged the business into its Synthomer subsidiary in 2002, resulting in the merger of the adjacent Synthomer and Harco activities at Stallingborough. [ 95 ] [ map 11 ] Latex production ended in late 2011, and further adhesive chemical production facilities were established at the site c. 2012 . 
[ 91 ] [ 96 ] In 2007 construction of a hydrodesulfurization unit and steam methane reformer was begun. [ 99 ] In 2009 workers at the plant went on strike over the preferential employment of foreign workers, leading to a series of sympathy walkouts at other UK chemical, energy and petroleum plants (see 2009 Lindsey Oil Refinery strikes ). The strike delayed the installation of the desulphurisation unit by 6 months. [ 100 ] A fire and explosion occurred at the plant in 2010, [ 101 ] killing one worker. [ 102 ] The fire further delayed the desulphurisation unit, [ 103 ] which was officially inaugurated in 2011. [ 104 ] In 2010 Total announced it planned to sell the refinery, citing overcapacity; [ 105 ] by late 2011 the company had failed to sell the plant, and halted the sales process. [ 106 ] The Tetney monobuoy (operational 1971 [ 107 ] ), an SBM in the Humber Estuary, is used to discharge oil tankers, with the oil stored at the Tetney Oil Terminal , [ map 14 ] and transferred via pipeline. [ 108 ] A fire and explosion occurred at the plant in 2001. [ 109 ] The owner, Humber Power Limited , was a venture of Midland Power , ABB Energy Ventures, Tomen Group , British Energy and TotalFinaElf . Ownership was consolidated in TotalFinaElf, which sold 60% to GB Gas Holdings Ltd., a subsidiary of Centrica (2001). [ 111 ] In 2005 Centrica took 100% ownership of the plant. [ 110 ] In early 2014 Centrica began to seek buyers for a number of its gas power plants, including its South Humber and Killingholme plants; [ 112 ] in early 2015 it decided to retain the plant, but sought to reduce the output from 1,285 to 540 MW from April 2015. [ 113 ] In July 2015 Centrica announced it was to overhaul the gas turbines at a cost of £63 million, increasing total capacity by 14 MW. [ 114 ] In 2009 the plant was expanded, raising generating capacity from 730 to 1,180 MW, with one 285 MW GE 9FB gas turbine and a 200 MW Toshiba steam turbine driven via a HRSG. 
[ 30 ] Energy production at the plant is primarily determined by heat supply requirements. [ 30 ] In 2013 Vitol acquired the plant through the acquisition of Phillips 66 subsidiary Phillips 66 Power Operations Ltd. ; the plant was renamed Immingham CHP . [ 115 ] In 2000 NRG Energy acquired the plant for £410 million, [ 116 ] and in 2004 Centrica acquired the plant for £142 million after a fall in electricity prices. [ 117 ] [ 118 ] In early 2014 Centrica began to seek buyers for a number of its gas power plants, including its South Humber and Killingholme plants, [ 119 ] and in early 2015 began discussions on the closure of the plant, having received no acceptable bids. [ 120 ] Sometimes referred to as Killingholme A . [ 27 ] [ 121 ] In 1996 a water cooling system was fitted to the plant, designed to reduce plume formation. [ 122 ] In 2002 the plant was mothballed due to low electricity prices; it was restarted in 2005. [ 123 ] In June 2015 E.On announced it was to close the power station. [ 124 ] Sometimes referred to as Killingholme B . [ 27 ] [ 121 ] Barton upon Humber dates to the pre- Norman Conquest period, and was the location of a ferry crossing of the Humber from at least that period. [ 125 ] The town was once an important port, but declined after the establishment of Kingston upon Hull ( c. 1300 ). [ 126 ] The town remained an important port for north Lincolnshire, and in 1801 had a population of around 1,700, more than Grimsby. [ 127 ] Due to the presence of suitable soil, brick and tile making was carried out in the Barton area; in the 1840s one tilery had been established for over a hundred years. Chalk was also quarried in the area from at least 1790. Other industries in 1840 included whiting manufacture, rope making, tanning, plus trade in agricultural produce. [ 128 ] At Barton upon Humber clay had been extracted for tile making since at least the 18th century. 
[ 128 ] Several brick and tile manufacturers operated during the 19th century, with growth stimulated in part by the end of the Brick tax in 1850. By 1892 works included Ness End , West Field , Humber Brick and Tile , Barton , Morris's , Dinsdale-Ellis-Wilson , Garside's , Blyth's Ing , Burton's , Mackrill's (Briggs) , Pioneer , Hoe Hill and Spencer's . The works extended along most of the Humber bank from Barton Cliff, around 1 mile west of Barton Haven, to Barrow Haven . The works reduced in number during the first half of the 20th century. [ 129 ] [ 130 ] By the 1970s much of the foreshore clay had been extracted, and the majority of works were no longer active. [ 131 ] Several of the works had industrial railways, generally connecting the workings to the works; in some cases clay was exported directly, such as that supplied to G.T. Earle 's cement works in Wilmington, Kingston upon Hull from the Humber Brick & Tile works ( c. 1893–1900 ). [ 132 ] Many of the Barton brick and tile works closed in the 1950s. [ 133 ] [ 134 ] As of 2009 the Blyth's tile works at Hoe Hill is still operational, producing tiles using non-modern methods at a small scale. [ 129 ] The clay extraction, and brick and tile industry, extended further east along the Humber bank. There were further works at Barrow Haven and New Holland, including the Old Ferry Brick Works and Barrow Tileries (Barrow); the Atlas and New Holland Stock brick works east of Barrow Haven; the Quebec Brick & Tile works approximately 1.5 miles east of New Holland; and scattered works eastward along the bank as far as South Killingholme Haven , as well as brick works at South Ferriby and along the New River Ancholme near Ferriby Sluice . [ 135 ] A site at East Halton was used to supply G.T. Earle's cement works in Stoneferry , [ 136 ] whilst the same firm's Wilmington works was supplied with clay from pits near North Killingholme Haven (1909–13), and later from pits between Barton and Barrow on Humber (1913–69). 
[ 137 ] In the 1890s George Henry Skelsey used funds from a public listing of his company to build a cement plant, Port Adamant Works , at Barton, west of the Haven, replacing a site at Morley Street, Stoneferry , Hull, which he had acquired in 1885. Clay and chalk for the process were sourced on site, with chalk brought from the New Cliff chalk quarry , [ map 19 ] by a short narrow gauge rail line. Initially the plant had a capacity of around 330 t per week, using chamber kilns , supplemented around 1901 by shaft kilns which increased weekly capacity by around 320 t. In 1911 the company became part of British Portland Cement Manufacturers , and a rotary kiln was installed in 1912, replacing the earlier kilns. The plant closed in 1927 after the acquisition and establishment by the parent company of the large Humber Cement Works and Hope Cement Works . [ 138 ] [ 139 ] Adjacent to the west of the New Cliff quarry was Barton Cliff Quarry , [ map 20 ] (chalk), also connected by a short rail line to the Humber foreshore; the quarry closed in 1915. [ 139 ] To the south west was Leggott's Quarry , [ map 21 ] (also known as "Ferriby Quarry"), also connected by a short rail line to the foreshore. [ 140 ] The two quarries supplied chalk, including to G.T. Earle's Stoneferry and Wilmington plants respectively. [ 136 ] [ 137 ] In 1938 Eastwoods Ltd established a cement works near South Ferriby, west of Ferriby Sluice. [ map 22 ] The initial plant consisted of a single 200.1 by 8.2 feet (61 by 2.5 m) rotary kiln with an output of around 200 t per day by the wet process . Chalk was supplied from a quarry, Middlegate quarry , [ map 23 ] south east of South Ferriby, which was crushed at the quarry and transported to the cement plant by aerial ropeway , whilst clay was supplied from land adjacent to the west of the plant. In 1962 the plant became part of Rugby Portland Cement Co. Ltd . In 1967 a semi- dry process rotary kiln was installed and the first kiln ceased operating. 
In 1974 the excavations at the chalk quarry were extended below the chalk, through a relatively thin layer of red chalk and carrstone, to the underlying clay, which was also extracted for use in the process; a conveyor belt system was installed to transport the materials to the plant, and the clay extraction west of the plant then ceased. A second rotary kiln was added in 1978. Ownership passed to Rugby Group (1979), RMC (2000), and then to CEMEX in 2005. [ 141 ] A modern tile manufacturer, Goxhill Tileries (as of 2014 part of the Wienerberger group via Sandtoft ), is located east of New Holland and north of Goxhill (near the former Quebec brickyard). [ map 24 ] The company Sandtoft was established in 1904 as a brick maker, and started tile production at Goxhill in 1934. Concrete tile manufacturing capacity was expanded during the 20th century. [ 142 ] [ 143 ] A fertilizer works was established at Barton, near the river bank east of the Haven, in 1874 by "The Farmers Company". [ 144 ] In 1968 the owner A.C.C. ( Associated Chemical Companies ) established new chemically based fertilizer production at the site, [ map 25 ] including a 180 t per day nitric acid plant, a 317 t per day ammonium nitrate plant, plus a 475 t per day fertilizer plant. [ 145 ] In 1965 A.C.C. had become a full subsidiary of Albright and Wilson , including the Barton plant. [ 146 ] The fertilizer business of Albright and Wilson was acquired by ICI in 1983. [ 147 ] Loss of UK market share caused ICI to close the plant in the late 1980s, along with other fertilizer production facilities. [ 42 ] Subsequently, the site was sold to Glanford borough, and later redeveloped together with former brick yards as a park, Water's Edge . In 1992 Kimberly-Clark established a large nappy mill outside Barton upon Humber, [ map 26 ] built at a cost of about £100,000, for the manufacture of Huggies nappies. 
[ 148 ] The plant was closed in 2013, due to the company ceasing most of its production of nappies in the European market. [ 149 ] In August 2013 Wren Kitchens took over the 180 acres (73 ha) site and began conversion of the 750,000 square feet (70,000 m 2 ) factory space into head offices, plus manufacturing and warehousing. [ 150 ] In April 2020, Wren began an extension project to its facility at a cost of £130 million. There are also private wharfs at Barton-upon-Humber (Waterside), [ map 27 ] Barrow Haven, [ map 28 ] and New Holland. [ map 29 ] [ 151 ] At the Barton foreshore directly west of Barton Haven the brick works had been closed and demolished by 1955, and an extension of a fertilizer works, BritAg, was built on the site. After closure the site was acquired by Glanford Borough Council in c. 1990 from ICI for £335,000, [ note 4 ] indemnifying the company from any responsibility for cleaning up the site. Initially the council planned to reclaim and clean up the land, and establish an industrial estate on the site. The local authority failed to gain funding for the redevelopment and cleanup, and in 1996 Glanford's successor North Lincolnshire Council inherited the site and terminated the redevelopment plans due to their cost, instead undertaking to clean up the site and create a 'water park'. After remediation of the harmful chemical residues from the fertilizer operation the site was converted into an 86 acres (35 ha) country park, Water's Edge , [ map 30 ] incorporating the worked out clay pits as reed beds. [ 134 ] [ 152 ] [ 153 ] Tile and brickyards east of Barton Haven which were abandoned in the 1950s now form part of the 100 acres (40 ha) Far Ings National Nature Reserve , [ map 31 ] established in 1983 by the Lincolnshire Wildlife Trust. [ 133 ] [ 154 ] From 2013/4 Leggott's (or Ferriby) quarry has been reused as an airsoft recreation site. [ 155 ] [ 156 ] [ 157 ]
https://en.wikipedia.org/wiki/Laporte_"Battery_Works",_Stallingborough
In signal processing , a lapped transform is a type of linear discrete block transformation where the basis functions of the transformation overlap the block boundaries, yet the number of coefficients overall resulting from a series of overlapping block transforms remains the same as if a non-overlapping block transform had been used. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Lapped transforms substantially reduce the blocking artifacts that otherwise occur with block transform coding techniques, in particular those using the discrete cosine transform . The best known example is the modified discrete cosine transform used in the MP3 , Vorbis , AAC , and Opus audio codecs . [ 5 ] Although the best-known application of lapped transforms has been for audio coding, they have also been used for video and image coding and various other applications. They are used in video coding for coding I-frames in VC-1 and for image coding in the JPEG XR format. More recently, a form of lapped transform has also been used in the development of the Daala video coding format . [ 5 ]
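The critical-sampling property described above can be illustrated with a small NumPy sketch of the modified discrete cosine transform: each analysis window spans 2N samples and overlaps its neighbour by half, yet each hop of N samples yields only N coefficients, the same rate as a non-overlapping N-point block transform. (The function names `mdct` and `lapped_analysis` are illustrative, not taken from any codec implementation.)

```python
import numpy as np

def mdct(block, N):
    """MDCT of a 2N-sample block -> N coefficients.

    X_k = sum_{n=0}^{2N-1} x_n * cos((pi/N) * (n + 1/2 + N/2) * (k + 1/2))
    """
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * np.outer(k + 0.5, n + 0.5 + N / 2))
    return basis @ block

def lapped_analysis(signal, N):
    """Transform a signal using 50%-overlapping 2N-sample windows, hop N."""
    return [mdct(signal[s:s + 2 * N], N) for s in range(0, len(signal) - N, N)]

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
coeffs = lapped_analysis(x, 64)
# 7 hops of 64 samples -> 7 blocks of 64 coefficients each: the
# coefficient rate matches a non-overlapping 64-point block transform,
# even though every analysis window is 128 samples wide.
```

Perfect reconstruction then relies on time-domain alias cancellation during overlap-add synthesis, which is the step that suppresses the blocking artifacts of non-overlapping transforms.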
https://en.wikipedia.org/wiki/Lapped_transform
Laptop theft (or notebook theft ) is a significant threat to users of laptop computers. Many methods to protect the data and to prevent theft have been developed, including alarms, laptop locks, and visual deterrents such as stickers or labels. Victims of laptop theft can lose hardware, software, and essential data that has not been backed up . Thieves also may have access to sensitive data and personal information. Some systems authorize access based on credentials stored on the laptop, including MAC addresses , web cookies , cryptographic keys and stored passwords . According to the FBI , losses due to laptop theft totaled more than $3.5 million in 2005. The Computer Security Institute /FBI Computer Crime & Security Survey found the average theft of a laptop to cost a company $31,975. [ 1 ] In a study surveying 329 private and public organizations published by Intel in 2010, 7.1% of employee laptops were lost or stolen before the end of their useful lifespan. [ 2 ] Furthermore, it was determined that the average total negative economic impact of a stolen laptop was $49,256, primarily due to compromised data and efforts to retroactively protect organizations and people from the potential consequences of that compromised data. The total cost of lost laptops to all organizations involved in the study was estimated at $2.1 billion. [ 3 ] Of the $48B lost from the U.S. economy as a result of data breaches , 28% resulted from stolen laptops or other portable devices. [ 4 ] In the 2011 Bureau Brief prepared by the NSW Bureau of Crime Statistics and Research, it was reported that thefts of laptops had been increasing over the previous 10 years, attributed in part to an increase in ownership but also to laptops being an attractive proposition for thieves and opportunists. In 2001, 2,907 laptops were stolen from New South Wales dwellings, but by 2010 this had risen to 6,492, second only to cash among items taken by thieves. 
The Bureau reports that one in four break-ins in 2010 resulted in a laptop being stolen. This startling trend in burglaries lends itself to an increase in identity theft and fraud, due to the personal and financial information commonly found on laptops. These statistics do not take into account unreported losses, so the figures could arguably be much higher. [ 5 ] Businesses have much to lose if an unencrypted or poorly secured laptop is misappropriated, yet many do not adequately assess this risk and take appropriate action. Loss of sensitive company information is a significant risk to all businesses, and measures should be taken to adequately protect this data. A survey conducted in multiple countries suggested that employees are often careless or deliberately circumvent security procedures, which leads to the loss of the laptop. According to the survey, employees were most likely to lose a laptop while travelling, at hotels, airports, rental cars, and conference events. [ 6 ] Behling and Wood examined the issue of laptop security and theft. Their survey of employees in southern New England highlighted that not only were security measures fundamentally basic, but that training employees in security measures was limited and inadequate. They concluded that trends in laptop thefts needed to be monitored to assess what intervention measures were required. [ 7 ] Passwords alone are no longer adequate to protect laptops. There are many solutions that can improve the strength of a laptop's protection. Full disk encryption (FDE) is an increasingly popular and cost-effective approach. FDE can be implemented via a software-based approach, a hardware-based approach, or a combination of both. FDE provides protection before the operating system starts up with pre-boot authentication ; however, precautions still need to be taken against cold boot attacks . 
There are a number of tools available, both commercial and open source, that enable a user to circumvent passwords for Windows, Mac OS X, and Linux, which is why encryption is needed in addition to password protection. One example of an encryption tool is TrueCrypt , which allows users to create a virtual encrypted disk on their computer. [ 8 ] Passwords provide a basic security measure for files stored on a laptop, and when combined with disk encryption software they can reliably protect data against unauthorized access. Remote Laptop Security (RLS) is available to secure data even when the laptop is not in the owner's possession. With Remote Laptop Security, the owner of a laptop can deny access rights to the stolen laptop from any computer with Internet access. A number of computer security measures have emerged that aim at protecting data. The Kensington Security Slot , along with a locking cable, provides physical security against thefts of opportunity. This is a cord attached to something heavy that cannot be moved, which is then locked into the case of the laptop, though it is not 100% secure. [ 9 ] The Noble security lock slot is a different way to attach a security cable. [ 10 ] [ 11 ] Another possible approach to limiting the consequences of laptop theft is to issue thin client devices to field employees instead of conventional laptops, so that all data resides on the server and is therefore less liable to loss or compromise. If a thin client is lost or stolen, it can easily and inexpensively be replaced. However, a thin client depends on network access to the server, which is not available aboard airliners or in other locations without network access. This approach can be coupled with strong authentication such as single sign-on (SSO). In 2006 a laptop in the custody of a data analyst was stolen that contained personal and health data of about 26.5 million active duty troops and veterans. [ 12 ] The agency estimated that it would cost between $100 million and $500 million to prevent and cover possible losses from the data theft. 
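The combination of passwords and encryption described above hinges on deriving an encryption key from the user's passphrase, which is how pre-boot authentication schemes generally work. A minimal sketch using Python's standard library (the passphrase, salt handling, and iteration count are illustrative assumptions, not the parameters of any named FDE product):

```python
import hashlib
import hmac
import os

# Derive a 256-bit key from a passphrase with a salted, iterated KDF.
# The salt is random and stored alongside the encrypted volume.
password = b"correct horse battery staple"  # illustrative passphrase
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# The same passphrase and salt always reproduce the same key,
# so the disk key never needs to be stored in the clear ...
again = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
assert hmac.compare_digest(key, again)

# ... while a different salt yields an unrelated key, so identical
# passphrases on two machines do not produce identical disk keys.
other = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 600_000)
assert not hmac.compare_digest(key, other)
```

The deliberately slow iteration count is what makes offline guessing of the passphrase expensive for a thief who has imaged the stolen disk.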
[ 13 ] In 2007, the United States Department of Veterans Affairs agreed to pay $20 million to current and former military personnel to settle a class action lawsuit. [ 14 ] In 2007 the Financial Services Authority (FSA) fined the UK's largest building society, Nationwide , £980,000 for inadequate procedures after an employee's laptop was stolen during a domestic burglary. The laptop held details of 11 million customers' names and account numbers and, whilst the device was password protected, the information was unencrypted. The FSA noted that the systems and controls fell short, given that it took Nationwide three weeks to take any steps to investigate the content of the missing laptop. The substantial fine was intended to reinforce the FSA's commitment to reducing financial crime. [ 15 ] In 2010 the VA reported the theft of a laptop from an unidentified contractor; the computer contained personally identifiable information on 644 veterans, including data from some VA medical centers' records. After learning about the unencrypted laptop, the VA investigated how many VA contractors might not be complying with the encryption requirement and learned that 578 vendors had refused to sign new contract clauses that required them to encrypt veteran data on their computers, an apparent violation of the rules. LoJack for Laptops has compiled a list of the top ten places from which laptops are stolen: [ 16 ] To provide some context, the Ponemon Institute released a study indicating that over 600,000 laptops are lost or stolen at US airports every year, with 65–69% of them remaining unclaimed. [ 17 ]
https://en.wikipedia.org/wiki/Laptop_theft
LAR1 ('Lichen-Associated Rhizobiales 1') refers to a specific bacterial lineage in the order Hyphomicrobiales (formerly Rhizobiales) that has most frequently been found directly in association with lichens . [ 1 ] This lineage is currently known to associate with lichens that have a green-algal photosynthetic partner (as opposed to a cyanobacterial partner) and a fungal partner in the Lecanoromycetes (though other groups of fungi have not yet been examined). This lineage has been documented in association with all green-algal lichens specifically tested (all from North America), and was also found in a sequence library derived from Antarctic lichens. [ 2 ] The specific ecological niche occupied by this lineage indicates that it may rely on certain nutrients that are abundant in green-algal lichen thalli but are rarer in other environments. The LAR1 lineage is currently defined based on sequences of the 16S rRNA gene alone, since it remains uncultured in the laboratory. In spite of its resistance to being cultured, at least one potentially significant metabolic function can be inferred through circumstantial evidence: nitrogen fixation . Since nitrogen is required for growth by all biological systems, but is generally biologically inaccessible due to its high activation energy , many eukaryotes have established relationships with specialized bacteria that are capable of nitrogen fixation (converting dinitrogen gas into a molecular form which is easily assimilated). [ 3 ] Many lichens grow in extremely nutrient-poor environments and may rely on nitrogen-fixing bacteria to provide them with enough molecular nitrogen to survive. [ 4 ] It has been documented by numerous researchers that microbes associated with green-algal lichens have the potential to fix nitrogen in abundance. 
[ 5 ] [ 6 ] [ 7 ] [ 8 ] However, nearly all of these studies have relied solely on culture-based methods, which may provide an inaccurate picture of what the most abundant or important nitrogen-fixers are. Independent studies on lichens have used culture-free techniques to detect the presence of nifH , the primary gene involved in nitrogen fixation, and have uncovered sequences that share the same phylogenetic affinities as the LAR1 lineage. [ 1 ] [ 9 ] However, the diversity of bacteria found in environmental samples, the frequency with which horizontal gene transfer occurs in bacteria, and the lack of physiological studies make a definitive statement regarding the metabolic activity of this uncultured lineage impossible at this point.
https://en.wikipedia.org/wiki/Lar1
At the beginning of the 21st century, the chemical companies of large-scale chemical synthesis in Poland ( Polish : Wielka synteza chemiczna ) underwent a process of intensive reorganization. [ 1 ] Since the majority of entities in the sector of great chemical synthesis was owned by the State Treasury at that time, a reorganisation and privatisation strategy was developed for the sector. [ 2 ] It identified, among other things, weak connections between the particular production plants and the necessity to reorganise them. One effect of this strategy was the consolidation of the chemical and fertiliser industry through the establishment of Grupa Azoty. [ 3 ] [ 4 ] The results of the 'great chemical synthesis', i.e. the largest chemical plants in Poland, include: In addition:
https://en.wikipedia.org/wiki/Large-scale_chemical_synthesis_in_Poland
Large-signal modeling is a common analysis method used in electronic engineering to describe nonlinear devices in terms of the underlying nonlinear equations. In circuits containing nonlinear elements such as transistors , diodes , and vacuum tubes , under "large signal conditions", AC signals have high enough magnitude that nonlinear effects must be considered. [ 1 ] "Large signal" is the opposite of " small signal ", which means that the circuit can be reduced to a linearized equivalent circuit around its operating point with sufficient accuracy. A small-signal model takes a circuit and, based on an operating point (bias), linearizes all the components. Nothing changes because the assumption is that the signal is so small that the operating point (gain, capacitance, etc.) doesn't change. A large-signal model, on the other hand, takes into account that the large signal actually affects the operating point, that elements are non-linear, and that circuits can be limited by power supply values. A small-signal model ignores these simultaneous variations in the gain and supply values. In the domain of artificial (machine) intelligence, large signal models enable human-centric interactions and knowledge discovery over signal data, similar to how prompts allow users to query an LLM built on unstructured text from the web. Users can ask general questions about relationships between the focus dataset and results from a pre-compiled LSTM built on a signal dataset across a large range of domains. This is achieved by layering latent pattern detection and knowledge graph-based (KG-based) explainability into an LSTM inference pipeline. This electronics-related article is a stub . You can help Wikipedia by expanding it .
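To make the small-signal/large-signal distinction concrete, here is a minimal numerical sketch (not from the article) using the standard Shockley diode equation; the saturation current and bias values are hypothetical illustrative choices.

```python
import math

I_S = 1e-12    # saturation current (A), hypothetical value
V_T = 0.02585  # thermal voltage near 300 K (V)

def diode_current(v):
    """Full (large-signal) Shockley diode model."""
    return I_S * (math.exp(v / V_T) - 1.0)

# Operating (bias) point
V0 = 0.6
I0 = diode_current(V0)

# Small-signal model: linearize around the bias point.
# g_d = dI/dV at V0 is the small-signal conductance.
g_d = I_S * math.exp(V0 / V_T) / V_T

def small_signal_current(v):
    return I0 + g_d * (v - V0)

# For a tiny perturbation the linearized model tracks the full model...
err_small = abs(diode_current(V0 + 1e-3) - small_signal_current(V0 + 1e-3))

# ...but for a large signal the operating point shifts and the linear
# approximation breaks down badly.
err_large = abs(diode_current(V0 + 0.1) - small_signal_current(V0 + 0.1))
```

With a 1 mV excursion the relative error of the linearized model is well under a percent, while a 100 mV excursion leaves most of the true current unaccounted for, which is exactly why large-signal analysis must retain the nonlinearity.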
https://en.wikipedia.org/wiki/Large-signal_model
The Large Angle and Spectrometric Coronagraph ( LASCO ) on board the Solar and Heliospheric Observatory satellite (SOHO) consists of three solar coronagraphs with nested fields of view: [ 1 ] The first principal investigator was Dr. Guenter Brueckner . These coronagraphs monitor the solar corona by using an optical system to create, in effect, an artificial solar eclipse . The white-light coronagraphs C2 and C3 produce images of the corona over much of the visible spectrum, while the C1 interferometer produces images of the corona in a number of very narrow visible wavelength bands. LASCO C3, the clear coronagraph picture, has a shutter time of about 19 seconds; LASCO C2, the orange picture, about 26 seconds. The three LASCO cameras have a resolution of one megapixel . The base unit of LASCO's pictures is a block of 32x32 pixels. If even a single bit is missing, as can occur due to disturbances, the whole block is gated out. The LASCO instruments have limited technical capabilities compared to more recently developed instruments, as they were built in the late 1980s, when digital cameras were an emerging technology. There are two kinds of disturbances that repeatedly occur:
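The block-gating scheme described above can be sketched as follows. This is an illustrative NumPy reconstruction, not LASCO flight or ground software, and the function name is invented.

```python
import numpy as np

BLOCK = 32  # LASCO images are handled in 32x32-pixel blocks

def gate_corrupted_blocks(image, corrupted_mask):
    """Zero out every 32x32 block containing at least one corrupted pixel,
    mirroring LASCO's practice of discarding a whole block if even a
    single bit is missing. (Illustrative sketch only.)"""
    out = image.copy()
    h, w = image.shape
    for i in range(0, h, BLOCK):
        for j in range(0, w, BLOCK):
            if corrupted_mask[i:i + BLOCK, j:j + BLOCK].any():
                out[i:i + BLOCK, j:j + BLOCK] = 0
    return out

# Example: a one-megapixel (1024x1024) frame with a single disturbed pixel
img = np.ones((1024, 1024), dtype=np.float32)
bad = np.zeros(img.shape, dtype=bool)
bad[100, 200] = True                 # one bad bit...
clean = gate_corrupted_blocks(img, bad)
# ...gates out the entire containing 32x32 block
```

A single flipped bit thus costs 1024 pixels of image data, which is the trade-off of treating the 32x32 block as the atomic transmission unit.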
https://en.wikipedia.org/wiki/Large_Angle_and_Spectrometric_Coronagraph
Large Interferometer For Exoplanets ( LIFE ) is a project started in 2017 to develop the science, technology and a roadmap for a space mission to detect and characterize the atmospheres of dozens of warm, terrestrial extrasolar planets . The current plan is for a nulling interferometer operating in the mid-infrared . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The LIFE space observatory concept is different from previous space missions, which covered a similar wavelength regime in the mid-infrared (MIR). This includes recent missions such as James Webb Space Telescope , Spitzer Space Telescope , and older missions such as ISO , IRAS , and AKARI . When present in sufficient quantities in the atmosphere, chemicals that are indicators of life are known as atmospheric biomarkers. The LIFE Mission is designed to observe in the mid-infrared light, where many of these molecules show spectral features.
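The nulling-interferometer principle behind LIFE can be illustrated with a toy monochromatic two-telescope (Bracewell) calculation. The baseline, wavelength, and angles below are hypothetical, and this ignores many real-world effects (finite bandwidth, instrumental leakage, stellar angular size).

```python
import numpy as np

# Two-collector Bracewell nulling interferometer, simplified.
# A pi phase shift in one arm cancels light from an on-axis star,
# while a source at off-axis angle theta acquires a geometric delay
# B * sin(theta) and can survive the null.
lam = 1e-5   # 10 micron mid-infrared wavelength (m)
B = 100.0    # baseline between collectors (m), hypothetical

def transmission(theta):
    """Relative intensity after combining the two arms with a pi shift."""
    phase = 2 * np.pi * B * np.sin(theta) / lam
    return np.sin(phase / 2) ** 2

star = transmission(0.0)      # on-axis starlight is nulled
planet = transmission(5e-8)   # source ~10 mas off-axis is transmitted
```

The deep on-axis null combined with near-full off-axis transmission is what lets the faint thermal emission of a planet be separated from its host star.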
https://en.wikipedia.org/wiki/Large_Interferometer_For_Exoplanets
The Large Molecule Heimat is a dense gas cloud located in the molecular cloud Sagittarius B2 . [ 2 ] Many species of molecule, including aminoacetonitrile (a molecule related to glycine ), ethyl formate , [ 3 ] and butyronitrile , [ 3 ] have been detected in the Large Molecule Heimat. [ 2 ] [ 4 ] This nebula-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Large_Molecule_Heimat
The Large-Scale Concept Ontology for Multimedia project was a series of workshops held from April 2004 to September 2006 [ 1 ] for the purpose of defining a standard formal vocabulary for the annotation and retrieval of video. The project was sponsored by the Disruptive Technology Office and brought together representatives from a variety of research communities, such as multimedia learning, information retrieval, computational linguistics, library science, and knowledge representation, as well as "user" communities such as intelligence agencies and broadcasters, to work collaboratively towards defining a set of 1,000 concepts. [ 2 ] Individually, each concept was to meet the following criteria: [ 3 ] Jointly, these concepts were to meet the additional criterion of providing broad (domain independent) coverage. [ 3 ] High-level target areas for coverage included physical objects, including animate objects (such as people, mobs, and animals), and inanimate objects, ranging from large-scale (such as buildings and highways) to small-scale (such as telephones and appliances); actions and events; locations and settings; and graphics. The effort was led by Dr. Milind Naphade, who was the principal investigator along with researchers from Carnegie Mellon University , Columbia University , and IBM . [ 1 ] The project had two main "tracks": the development and deployment of keyframe annotation tools (performed by CMU and Columbia), and the development of the Large-Scale Concept Ontology for Multimedia concept hierarchy itself. The second track was executed in two phases: the first, the manual construction of an 884-concept hierarchy, was performed collaboratively among the research and user community representatives. The second phase, performed by knowledge representation experts at Cycorp, Inc. 
, involved the mapping of the concepts into the Cyc knowledge base and the use of the Cyc inference engine to semi-automatically refine, correct, and expand the concept hierarchy. The mapping/expansion phase of the project was motivated by a desire to increase breadth—the mapping had the effect of moving from 884 concepts to well past the initial goal of 1000—and to move Large-Scale Concept Ontology for Multimedia from a one-dimensional hierarchy of concepts, to a full-blown ontology of rich semantic connections. [ 3 ] The outputs of the effort included: [ 1 ] Several sets of concept detectors were developed and released for public use: Since its release, Large-Scale Concept Ontology for Multimedia has begun to be used successfully in visual recognition research: Apart from research done by project participants, it has been used by independent research in concept extraction from images, [ 4 ] [ 5 ] and has served as the basis for a video annotation tool. [ 6 ]
https://en.wikipedia.org/wiki/Large_Scale_Concept_Ontology_for_Multimedia
The Large Ultraviolet Optical Infrared Surveyor , commonly known as LUVOIR ( / l uː ˈ v w ɑːr / ), is a multi-wavelength space telescope concept being developed by NASA under the leadership of a Science and Technology Definition Team . It is one of four large astrophysics space mission concepts studied in preparation for the National Academy of Sciences 2020 Astronomy and Astrophysics Decadal Survey . [ 2 ] [ 3 ] While LUVOIR is a concept for a general-purpose observatory, it has the key science goal of characterizing a wide range of exoplanets , including those that might be habitable . An additional goal is to enable a broad range of astrophysics , from the reionization epoch, through galaxy formation and evolution, to star and planet formation . Powerful imaging and spectroscopy observations of Solar System bodies would also be possible. LUVOIR would be a Large Strategic Science Mission and was considered for a development start sometime in the 2020s. The LUVOIR Study Team, under Study Scientist Aki Roberge , has produced designs for two variants of LUVOIR: one with a 15.1 m diameter telescope mirror ( LUVOIR-A ) and one with an 8 m diameter mirror ( LUVOIR-B ). [ 4 ] LUVOIR would be able to observe ultraviolet , visible , and near-infrared wavelengths of light . The Final Report on the 5-year LUVOIR mission concept study was publicly released on 26 August 2019. [ 5 ] On 4 November 2021, the 2020 Astrophysics Decadal Survey recommended development of a "large (~6 m aperture) infrared/optical/ultraviolet (IR/O/UV) space telescope", with the science goals of searching for signatures of life on planets outside of the Solar System and enabling a wide range of transformative astrophysics. Such a mission draws upon both the LUVOIR and HabEx mission concepts. [ 6 ] [ 7 ] [ 8 ] In 2016, NASA began considering four different space telescope concepts for future Large Strategic Science Missions. 
[ 9 ] They are the Habitable Exoplanet Imaging Mission (HabEx), Large Ultraviolet Optical Infrared Surveyor (LUVOIR), Lynx X-ray Observatory (Lynx), and Origins Space Telescope (OST). In 2019, the four teams turned in their final reports to the National Academy of Sciences , whose independent Decadal survey committee advises NASA on which mission should take top priority. If funded, LUVOIR would launch in approximately 2039 using a heavy launch vehicle, and it would be placed in an orbit around the Sun–Earth Lagrange point 2 . [ 5 ] LUVOIR's main goals are to investigate exoplanets , cosmic origins , and the Solar System . [ 4 ] LUVOIR would be able to analyze the structure and composition of exoplanet atmospheres and surfaces. It could also detect biosignatures arising from life in the atmosphere of a distant exoplanet. [ 10 ] Atmospheric biosignatures of interest include CO 2 , CO , molecular oxygen ( O 2 ), ozone ( O 3 ), water ( H 2 O ), and methane ( CH 4 ). LUVOIR's multi-wavelength capability would also provide key information to help understand how a host star's UV radiation regulates the atmospheric photochemistry on habitable planets . LUVOIR would also observe large numbers of exoplanets spanning a wide range of characteristics (mass, host star type, age, etc.), with the goal of placing the Solar System in a broader context of planetary systems. Over its five-year primary mission, LUVOIR-A is expected to identify and study 54 potentially habitable exoplanets , while LUVOIR-B is expected to identify 28. [ 1 ] The scope of astrophysics investigations includes explorations of cosmic structure in the far reaches of space and time, formation and evolution of galaxies , and the birth of stars and planetary systems . In the area of Solar System studies, LUVOIR could provide up to about 25 km imaging resolution in visible light at Jupiter, permitting detailed monitoring of atmospheric dynamics in Jupiter , Saturn , Uranus , and Neptune over long timescales. 
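As a back-of-envelope consistency check on the quoted ~25 km resolution at Jupiter (my own estimate, not from the source), the diffraction limit of a 15.1 m aperture in visible light works out to roughly that value:

```python
# Diffraction-limited angular resolution of a circular aperture
# (Rayleigh criterion): theta ~ 1.22 * lambda / D
lam = 500e-9            # visible wavelength (m), assumed
D = 15.1                # LUVOIR-A primary mirror diameter (m)
theta = 1.22 * lam / D  # ~4e-8 radians

AU = 1.496e11                 # astronomical unit (m)
d_jupiter = 4.2 * AU          # approximate minimum Earth-Jupiter distance
resolution_km = theta * d_jupiter / 1e3   # linear resolution at Jupiter
```

With these assumed values the linear resolution at Jupiter comes out near 25 km, consistent with the figure quoted in the text.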
Sensitive, high resolution imaging and spectroscopy of Solar System comets , asteroids , moons , and Kuiper Belt objects that will not be visited by spacecraft in the foreseeable future could provide vital information on the processes that formed the Solar System ages ago. Furthermore, LUVOIR would have an important role to play by studying plumes from the ocean moons of the outer Solar System, in particular Europa and Enceladus , over long timescales. LUVOIR would be equipped with an internal coronagraph instrument, called ECLIPS for Extreme Coronagraph for LIving Planetary Systems, to enable direct observations of Earth-like exoplanets. An external starshade is also an option for the smaller LUVOIR design (LUVOIR-B). Other candidate science instruments studied are: High-Definition Imager (HDI), a wide-field near-UV, optical, and near-infrared camera ; LUMOS , a LUVOIR Ultraviolet Multi-Object Spectrograph ; and POLLUX, an ultraviolet spectropolarimeter . POLLUX (high-resolution UV spectropolarimeter ) is being studied by a European consortium, with leadership and support from the CNES , France. The observatory would observe wavelengths of light from the far-ultraviolet to the near-infrared . To enable the extreme wavefront stability needed for coronagraphic observations of Earth-like exoplanets, [ 11 ] the LUVOIR design incorporates three principles. First, vibrations and mechanical disturbances throughout the observatory are minimized. Second, the telescope and coronagraph both incorporate several layers of wavefront control through active optics. Third, the telescope is actively heated to a precise 270 K (−3 °C; 26 °F) to control thermal disturbances. The LUVOIR technology development plan is supported with funding from NASA's Astrophysics Strategic Mission Concept Studies program, the Goddard Space Flight Center , the Marshall Space Flight Center , the Jet Propulsion Laboratory and related programs at Northrop Grumman Aerospace Systems and Ball Aerospace . 
LUVOIR-A, previously known as the High Definition Space Telescope ( HDST ), was proposed by the Association of Universities for Research in Astronomy (AURA) on 6 July 2015. [ 12 ] It would be composed of 36 mirror segments with an aperture of 15.1 metres (50 ft) in diameter, offering images up to 24 times sharper than the Hubble Space Telescope . [ 13 ] LUVOIR-A would be large enough to find and study the dozens of Earthlike planets in our nearby neighborhood . It could resolve objects such as the nucleus of a small galaxy or a gas cloud on the way to collapsing into a star and planets . [ 12 ] The case for HDST was made in a report entitled "From Cosmic Birth to Living Earths", on the future of astronomy commissioned by AURA, which runs the Hubble and other observatories on behalf of NASA and the National Science Foundation . [ 14 ] Ideas for the original HDST proposal included an internal coronagraph , a disk that blocks light from the central star, making a dim planet more visible, and a starshade that would float kilometers out in front of it to perform the same function. [ 15 ] LUVOIR-A folds so it only needs an 8-metre wide payload fairing. [ 5 ] Initial cost estimates are approximately US$10 billion, [ 15 ] with lifetime cost estimates of $18 billion to $24 billion. [ 1 ] LUVOIR-B, previously known as the Advanced Technology Large-Aperture Space Telescope ( ATLAST ), [ 16 ] [ 17 ] [ 18 ] [ 19 ] is an 8 meter architecture initially developed by the Space Telescope Science Institute , [ 20 ] the science operations center for the Hubble Space Telescope (HST) and the James Webb Space Telescope (JWST). While smaller than LUVOIR-A, it is being designed to produce an angular resolution that is 5–10 times better than the JWST, and a sensitivity limit that is up to 2,000 times better than HST. 
[ 16 ] [ 17 ] [ 20 ] The LUVOIR Study Team expects that the telescope would be able to be serviced – similar to HST – either by an uncrewed spacecraft or by astronauts via Orion or Starship . Instruments such as cameras could potentially be replaced and returned to Earth for analysis of their components and future upgrades. [ 19 ] The original backronym used for the initial mission concept, "ATLAST", was a pun referring to the time taken to decide on a successor for HST. ATLAST itself had three different proposed architectures – an 8 metres (26 ft) monolithic mirror telescope, a 16.8 metres (55 ft) segmented mirror telescope, and a 9.2 metres (30 ft) segmented mirror telescope. The current LUVOIR-B architecture adopts JWST design heritage, essentially being an incrementally larger variant of the JWST, which has a 6.5 m segmented main mirror. Running on solar power , it would use an internal coronagraph or an external occulter which can characterize the atmosphere and surface of an Earth-sized exoplanet in the habitable zone of long-lived stars at distances up to 140 light-years (43 pc), including its rotation rate, climate, and habitability. The telescope would also allow researchers to glean information on the nature of the dominant surface features, changes in cloud cover and climate, and, potentially, seasonal variations in surface vegetation. [ 21 ] LUVOIR-B was designed to launch on a heavy-lift rocket with an industry-standard 5 metres (16 ft) diameter launch fairing. Lifetime cost estimates range from $12 billion to $18 billion. [ 1 ]
https://en.wikipedia.org/wiki/Large_Ultraviolet_Optical_Infrared_Surveyor
Large deformation diffeomorphic metric mapping ( LDDMM ) is a specific suite of algorithms used for diffeomorphic mapping and manipulating dense imagery based on diffeomorphic metric mapping within the academic discipline of computational anatomy , to be distinguished from its precursor based on diffeomorphic mapping . The distinction between the two is that diffeomorphic metric maps satisfy the property that the length associated with their flow away from the identity induces a metric on the group of diffeomorphisms , which in turn induces a metric on the orbit of shapes and forms within the field of computational anatomy. The study of shapes and forms with the metric of diffeomorphic metric mapping is called diffeomorphometry . A diffeomorphic mapping system is a system designed to map, manipulate, and transfer information which is stored in many types of spatially distributed medical imagery. Diffeomorphic mapping is the underlying technology for mapping and analyzing information measured in human anatomical coordinate systems via Medical imaging [ citation needed ] . Diffeomorphic mapping is a broad term that refers to a number of different algorithms, processes, and methods. It is attached to many operations and has many applications for analysis and visualization. Diffeomorphic mapping can be used to relate various sources of information which are indexed as a function of spatial position, with spatial position serving as the key index variable. Diffeomorphisms are, by construction, structure-preserving transformations that are differentiable with differentiable inverse, and therefore smooth, allowing for the calculation of metric-based quantities such as arc length and surface area. Spatial locations and extents in human anatomical coordinate systems can be recorded via a variety of Medical imaging modalities, generally termed multi-modal medical imagery, providing either scalar or vector quantities at each spatial location. 
Examples range from scalar T1- or T2-weighted magnetic resonance imagery , through 3x3 diffusion tensor matrices from diffusion MRI and diffusion-weighted imaging , to scalar densities associated with computed tomography (CT), and functional imagery such as temporal data from functional magnetic resonance imaging and scalar densities from Positron emission tomography (PET) . Computational anatomy is a subdiscipline within the broader field of neuroinformatics within bioinformatics and medical imaging . The first algorithms for dense image mapping via diffeomorphic metric mapping were Beg's LDDMM [ 1 ] [ 2 ] for volumes and Joshi's landmark matching for point sets with correspondence, [ 3 ] [ 4 ] with LDDMM algorithms now available for computing diffeomorphic metric maps between non-corresponding landmarks [ 5 ] and landmark matching intrinsic to spherical manifolds, [ 6 ] curves, [ 7 ] currents and surfaces, [ 8 ] [ 9 ] [ 10 ] tensors, [ 11 ] varifolds, [ 12 ] and time-series. [ 13 ] [ 14 ] [ 15 ] The term LDDMM was first established as part of the National Institutes of Health supported Biomedical Informatics Research Network . [ 16 ] In a more general sense, diffeomorphic mapping is any solution that registers or builds correspondences between dense coordinate systems in medical imaging by ensuring the solutions are diffeomorphic. There are now many codes organized around diffeomorphic registration, [ 17 ] including ANTS, [ 18 ] DARTEL, [ 19 ] DEMONS, [ 20 ] StationaryLDDMM, [ 21 ] and FastLDDMM, [ 22 ] [ 23 ] as examples of actively used computational codes for constructing correspondences between coordinate systems based on dense images. The distinction between diffeomorphic metric mapping forming the basis for LDDMM and the earliest methods of diffeomorphic mapping is the introduction of a Hamilton principle of least-action in which large deformations of shortest length, corresponding to geodesic flows, are selected. 
This important distinction arises from the original formulation of the Riemannian metric corresponding to the right-invariance. The lengths of these geodesics give the metric in the metric space structure of human anatomy. Non-geodesic formulations of diffeomorphic mapping in general do not correspond to any metric formulation. Diffeomorphic mapping of 3-dimensional information across coordinate systems is central to high-resolution Medical imaging and the area of Neuroinformatics within the newly emerging field of bioinformatics . Diffeomorphic mapping of 3-dimensional coordinate systems as measured via high resolution dense imagery has a long history, beginning with Computed Axial Tomography (CAT scanning) in the early 1980s by the University of Pennsylvania group led by Ruzena Bajcsy , [ 24 ] and subsequently the Ulf Grenander school at Brown University with the HAND experiments. [ 25 ] [ 26 ] In the 1990s there were several solutions for image registration which were associated with linearizations of small deformation and non-linear elasticity. [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] The central focus of the sub-field of Computational anatomy (CA) within medical imaging is mapping information across anatomical coordinate systems at the 1 millimeter morphome scale. In CA, the mapping of dense information measured within Magnetic resonance image (MRI) based coordinate systems, such as in the brain, has been solved via inexact matching of 3D MR images one onto the other. The earliest introduction of the use of diffeomorphic mapping via large deformation flows of diffeomorphisms for transformation of coordinate systems in image analysis and medical imaging was by Christensen, Rabbitt and Miller [ 17 ] [ 32 ] and Trouvé. [ 33 ] The introduction of flows, which are akin to the equations of motion used in fluid dynamics, exploits the notion that dense coordinates in image analysis follow the Lagrangian and Eulerian equations of motion. 
This model becomes more appropriate for cross-sectional studies in which brains or hearts are not necessarily deformations of one another. Methods based on linear or non-linear elasticity energetics, in which the energy grows with distance from the identity mapping of the template, are not appropriate for cross-sectional studies. Rather, in models based on Lagrangian and Eulerian flows of diffeomorphisms, the constraint is associated with topological properties, such as open sets being preserved, coordinates not crossing implying uniqueness and existence of the inverse mapping, and connected sets remaining connected. The use of diffeomorphic methods grew quickly to dominate the field of mapping methods after Christensen's original paper, with fast and symmetric methods becoming available. [ 19 ] [ 34 ] Such methods are powerful in that they introduce notions of regularity of the solutions so that they can be differentiated and local inverses can be calculated. The disadvantage of these methods is that there was no associated global least-action property which could score the flows of minimum energy. This contrasts with the geodesic motions which are central to the study of Rigid body kinematics and the many problems solved in Physics via Hamilton's principle of least action . In 1998, Dupuis, Grenander and Miller [ 35 ] established the conditions for guaranteeing the existence of solutions for dense image matching in the space of flows of diffeomorphisms. These conditions require an action penalizing kinetic energy measured via the Sobolev norm on spatial derivatives of the flow of vector fields. The large deformation diffeomorphic metric mapping (LDDMM) code that Faisal Beg derived and implemented for his PhD at Johns Hopkins University [ 36 ] was the earliest algorithmic code to solve for flows with fixed points satisfying the necessary conditions for the dense image matching problem subject to least-action. 
Computational anatomy now has many existing codes organized around diffeomorphic registration [ 17 ] including ANTS, [ 18 ] DARTEL, [ 19 ] DEMONS, [ 37 ] LDDMM, [ 2 ] StationaryLDDMM [ 21 ] as examples of actively used computational codes for constructing correspondences between coordinate systems based on dense images. These large deformation methods have been extended to landmarks without registration via measure matching, [ 38 ] curves, [ 39 ] surfaces, [ 40 ] dense vector [ 41 ] and tensor [ 42 ] imagery, and varifolds removing orientation. [ 43 ] Deformable shape in computational anatomy (CA) [ 44 ] [ 45 ] [ 46 ] [ 47 ] is studied via the use of diffeomorphic mapping for establishing correspondences between anatomical coordinates in Medical Imaging. In this setting, three dimensional medical images are modelled as a random deformation of some exemplar, termed the template I t e m p {\displaystyle I_{temp}} , with the set of observed images element in the random orbit model of CA for images I ∈ I ≐ { I = I temp ∘ φ , φ ∈ Diff V } {\displaystyle I\in {\mathcal {I}}\doteq \{I=I_{\text{temp}}\circ \varphi ,\varphi \in \operatorname {Diff} _{V}\}} . The template is mapped onto the target by defining a variational problem in which the template is transformed via the diffeomorphism used as a change of coordinate to minimize a squared-error matching condition between the transformed template and the target. The diffeomorphisms are generated via smooth flows φ t , t ∈ [ 0 , 1 ] {\displaystyle \varphi _{t},t\in [0,1]} , with φ ≐ φ 1 {\displaystyle \varphi \doteq \varphi _{1}} , satisfying the Lagrangian and Eulerian specification of the flow field associated to the ordinary differential equation, with v t , t ∈ [ 0 , 1 ] {\displaystyle v_{t},t\in [0,1]} the Eulerian vector fields determining the flow. 
The vector fields are guaranteed to be 1-time continuously differentiable v t ∈ C 1 {\displaystyle v_{t}\in C^{1}} by modelling them to be in a smooth Hilbert space v ∈ V {\displaystyle v\in V} supporting 1-continuous derivative. [ 48 ] The inverse φ t − 1 , t ∈ [ 0 , 1 ] {\displaystyle \varphi _{t}^{-1},t\in [0,1]} is defined by the Eulerian vector-field with flow given by To ensure smooth flows of diffeomorphisms with inverse, the vector fields with components in R 3 {\displaystyle {\mathbb {R} }^{3}} must be at least 1-time continuously differentiable in space [ 49 ] [ 50 ] which are modelled as elements of the Hilbert space ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} using the Sobolev embedding theorems so that each element v i ∈ H 0 3 , i = 1 , 2 , 3 , {\displaystyle v_{i}\in H_{0}^{3},i=1,2,3,} has 3-times square-integrable weak-derivatives. Thus ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} embeds smoothly in 1-time continuously differentiable functions. [ 37 ] [ 50 ] The diffeomorphism group are flows with vector fields absolutely integrable in Sobolev norm In CA the space of vector fields ( V , ‖ ⋅ ‖ V ) {\displaystyle (V,\|\cdot \|_{V})} are modelled as a reproducing Kernel Hilbert space (RKHS) defined by a 1-1, differential operator A : V → V ∗ {\displaystyle A:V\rightarrow V^{*}} determining the norm ‖ v ‖ V 2 ≐ ∫ R 3 A v ⋅ v d x , v ∈ V , {\displaystyle \|v\|_{V}^{2}\doteq \int _{R^{3}}Av\cdot v\,dx,\ v\in V\ ,} where the integral is calculated by integration by parts when A v {\displaystyle Av} is a generalized function in the dual space V ∗ {\displaystyle V^{*}} . The differential operator is selected so that the Green's kernel, the inverse of the operator, is continuously differentiable in each variable implying that the vector fields support 1-continuous derivative ; see [ 48 ] for the necessary conditions on the norm for existence of solutions. 
The original large deformation diffeomorphic metric mapping (LDDMM) algorithms of Beg, Miller, Trouve, and Younes [ 51 ] were derived by taking variations with respect to the vector-field parameterization of the group, since v = ϕ ˙ ∘ ϕ − 1 {\displaystyle v={\dot {\phi }}\circ \phi ^{-1}} lies in a vector space. Beg solved the dense image matching problem by minimizing the action integral of the kinetic energy of the diffeomorphic flow together with the endpoint matching term, according to min v : ϕ ˙ = v ∘ ϕ , ϕ 0 = i d C ( v ) ≐ 1 2 ∫ 0 1 ∫ R 3 A v t ⋅ v t d x d t + 1 2 ∫ R 3 | I ∘ ϕ 1 − 1 − J | 2 d x {\textstyle \min _{v:{\dot {\phi }}=v\circ \phi ,\phi _{0}=id}C(v)\doteq {\frac {1}{2}}\int _{0}^{1}\int _{R^{3}}Av_{t}\cdot v_{t}\,dx\,dt+{\frac {1}{2}}\int _{R^{3}}|I\circ \phi _{1}^{-1}-J|^{2}\,dx} The algorithm updates until convergence, ϕ t o l d ← ϕ t n e w {\displaystyle \phi _{t}^{old}\leftarrow \phi _{t}^{new}} each iteration, with ϕ t 1 ≐ ϕ 1 ∘ ϕ t − 1 {\displaystyle \phi _{t1}\doteq \phi _{1}\circ \phi _{t}^{-1}} : This implies that the fixed point at t = 0 {\displaystyle t=0} satisfies which in turn implies it satisfies the conservation equation given by the endpoint matching condition according to [ 52 ] [ 53 ] The landmark matching problem has a pointwise correspondence defining the endpoint condition, with geodesics given by the following minimum: Joshi originally defined the registered landmark matching problem.
[ 3 ] Update until convergence, ϕ t o l d ← ϕ t n e w {\displaystyle \phi _{t}^{old}\leftarrow \phi _{t}^{new}} each iteration, with ϕ t 1 ≐ ϕ 1 ∘ ϕ t − 1 {\displaystyle \phi _{t1}\doteq \phi _{1}\circ \phi _{t}^{-1}} : This implies that the fixed point satisfies with The calculus of variations was used in Beg [49] [ 53 ] to derive the iterative algorithm as a solution which, when it converges, satisfies the necessary maximizer conditions: the conditions for a first-order variation, requiring the variation of the endpoint with respect to a first-order variation of the vector field. The directional derivative calculates the Gateaux derivative, as calculated in Beg's original paper [49] and in [ 54 ] [ 55 ] . The first-order variation in the vector fields v + ϵ δ v {\displaystyle v+\epsilon \delta v} requires the variation of ϕ − 1 {\displaystyle \phi ^{-1}} , which generalizes the matrix perturbation of the inverse via ( ϕ + ϵ δ ϕ ∘ ϕ ) ∘ ( ϕ − 1 + ϵ δ ϕ − 1 ∘ ϕ − 1 ) = i d + o ( ϵ ) {\displaystyle (\phi +\epsilon \delta \phi \circ \phi )\circ (\phi ^{-1}+\epsilon \delta \phi ^{-1}\circ \phi ^{-1})=id+o(\epsilon )} , giving δ ϕ − 1 ∘ ϕ − 1 = − ( D ϕ 1 − 1 ) δ ϕ {\displaystyle \delta \phi ^{-1}\circ \phi ^{-1}=-(D\phi _{1}^{-1})\delta \phi } .
To express the variation in terms of δ v {\displaystyle \delta v} , use the solution to the Lie bracket d d t ( δ ϕ | ϕ ) = ( D v ) | ϕ δ ϕ | ϕ + δ v | ϕ {\displaystyle {\frac {d}{dt}}\left(\delta \phi _{|\phi }\right)=(Dv)_{|\phi }\delta \phi _{|\phi }+\delta v_{|\phi }} giving Taking the directional derivative of the image endpoint condition E ( ϕ ) = ∫ X | I ∘ ϕ − 1 − J | 2 d x {\displaystyle E(\phi )=\int _{X}|I\circ \phi ^{-1}-J|^{2}dx} gives Substituting ϕ t 1 ≐ ϕ 1 ∘ ϕ t − 1 {\displaystyle \phi _{t1}\doteq \phi _{1}\circ \phi _{t}^{-1}} gives the necessary condition for an optimum: Taking the variation in the vector fields v + ϵ δ v {\displaystyle v+\epsilon \delta v} of 1 2 ∑ i | ϕ 1 ( x i ) − y i | 2 {\displaystyle {\frac {1}{2}}\sum _{i}|\phi _{1}(x_{i})-y_{i}|^{2}} and using the chain rule for the perturbation δ ϕ ∘ ϕ {\displaystyle \delta \phi \circ \phi } gives the first variation LDDMM matching based on the principal eigenvector of the diffusion tensor matrix takes the image I ( x ) , x ∈ R 3 {\displaystyle I(x),x\in {\mathbb {R} }^{3}} as a unit vector field defined by the first eigenvector. [ 41 ] The group action becomes where ‖ ⋅ ‖ {\displaystyle \|\cdot \|} denotes the image squared-error norm.
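Beg's matching energy above can be evaluated on a toy one-dimensional discretization. The sketch below takes A as the identity and a stationary velocity field, computing φ₁⁻¹ by flowing the grid backward; the grid, time-stepping, and function names are illustrative assumptions, not the actual LDDMM implementation.

```python
import numpy as np

def lddmm_cost(v, I, J, L=1.0, nt=50):
    """Toy 1-D LDDMM-style cost: (1/2) int |v|^2 dx (A = identity,
    an assumption for illustration) plus the squared-error endpoint
    matching term, for a stationary velocity field on a periodic grid."""
    n = len(v)
    dx, dt = L/n, 1.0/nt
    x = np.arange(n)*dx
    # phi_1^{-1} is the time-1 flow of -v for a stationary field:
    # trace grid points backward along characteristics.
    y = x.copy()
    for _ in range(nt):
        y = y - dt*np.interp(y % L, x, v, period=L)
    deformed = np.interp(y % L, x, I, period=L)   # I o phi_1^{-1}
    kinetic = 0.5*np.sum(v*v)*dx
    match = 0.5*np.sum((deformed - J)**2)*dx
    return kinetic + match
```

With zero velocity and identical images the cost vanishes, and a gradient-based optimizer over v would trade kinetic energy against matching error, as in the variational problem above.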
LDDMM matching based on the entire tensor matrix [ 56 ] has group action φ ⋅ M = ( λ 1 e ^ 1 e ^ 1 T + λ 2 e ^ 2 e ^ 2 T + λ 3 e ^ 3 e ^ 3 T ) ∘ φ − 1 , {\displaystyle \varphi \cdot M=(\lambda _{1}{\hat {e}}_{1}{\hat {e}}_{1}^{T}+\lambda _{2}{\hat {e}}_{2}{\hat {e}}_{2}^{T}+\lambda _{3}{\hat {e}}_{3}{\hat {e}}_{3}^{T})\circ \varphi ^{-1},} transformed eigenvectors The variational problem matching onto vector image I ′ ( x ) , x ∈ R 3 {\displaystyle I^{\prime }(x),x\in {\mathbb {R} }^{3}} with endpoint becomes The variational problem matching onto: M ′ ( x ) , x ∈ R 3 {\displaystyle M^{\prime }(x),x\in {\mathbb {R} }^{3}} with endpoint with ‖ ⋅ ‖ F {\displaystyle \|\cdot \|_{F}} Frobenius norm, giving variational problem High angular resolution diffusion imaging (HARDI) addresses the well-known limitation of DTI, that is, DTI can only reveal one dominant fiber orientation at each location. HARDI measures diffusion along n {\displaystyle n} uniformly distributed directions on the sphere and can characterize more complex fiber geometries by reconstructing an orientation distribution function (ODF) that characterizes the angular profile of the diffusion probability density function of water molecules. The ODF is a function defined on a unit sphere, S 2 {\displaystyle {\mathbb {S} }^{2}} . [ 57 ] Denote the square-root ODF ( ODF {\displaystyle {\sqrt {\text{ODF}}}} ) as ψ ( s ) {\displaystyle \psi ({\bf {s}})} , where ψ ( s ) {\displaystyle \psi ({\bf {s}})} is non-negative to ensure uniqueness and ∫ s ∈ S 2 ψ 2 ( s ) d s = 1 {\displaystyle \int _{{\bf {s}}\in {\mathbb {S} }^{2}}\psi ^{2}({\bf {s}})d{\bf {s}}=1} . The metric defines the distance between two ODF {\displaystyle {\sqrt {\text{ODF}}}} functions ψ 1 , ψ 2 ∈ Ψ {\displaystyle \psi _{1},\psi _{2}\in \Psi } as where ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the normal dot product between points in the sphere under the L 2 {\displaystyle \mathrm {L} ^{2}} metric. 
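The square-root ODF metric above, the geodesic distance on the unit Hilbert sphere ρ(ψ₁, ψ₂) = arccos⟨ψ₁, ψ₂⟩, can be sketched on a discretized sphere. The simple midpoint quadrature and grid sizes below are illustrative assumptions.

```python
import numpy as np

def sphere_grid(n_theta=32, n_phi=64):
    """Quadrature nodes and area weights on the unit sphere
    (midpoint rule in (theta, phi); an illustrative choice)."""
    th = (np.arange(n_theta) + 0.5)*np.pi/n_theta
    ph = (np.arange(n_phi) + 0.5)*2*np.pi/n_phi
    T, P = np.meshgrid(th, ph, indexing='ij')
    w = np.sin(T)*(np.pi/n_theta)*(2*np.pi/n_phi)   # dA = sin(th) dth dph
    return T, P, w

def odf_distance(psi1, psi2, w):
    """Geodesic distance on the unit Hilbert sphere between two
    square-root ODFs: rho = arccos <psi1, psi2>."""
    inner = np.sum(w*psi1*psi2)
    return np.arccos(np.clip(inner, -1.0, 1.0))
```

The distance is zero for identical square-root ODFs, symmetric, and bounded by π, matching the spherical geometry of the ODF space.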
The template and target are denoted ψ t e m p ( s , x ) {\displaystyle \psi _{\mathrm {temp} }({\bf {s}},x)} , ψ t a r g ( s , x ) {\displaystyle \psi _{\mathrm {targ} }({\bf {s}},x)} , s ∈ S 2 {\displaystyle {\bf {s}}\in {{\mathbb {S} }^{2}}} , x ∈ X {\displaystyle x\in X} , indexed across the unit sphere and the image domain, with the target indexed similarly. Define the variational problem assuming that two ODF volumes can be generated from one to another via flows of diffeomorphisms ϕ t {\displaystyle \phi _{t}} , which are solutions of ordinary differential equations ϕ ˙ t = v t ( ϕ t ) , t ∈ [ 0 , 1 ] , ϕ 0 = i d {\displaystyle {\dot {\phi }}_{t}=v_{t}(\phi _{t}),t\in [0,1],\phi _{0}={id}} . The group action of the diffeomorphism on the template is given according to ϕ 1 ⋅ ψ ( x ) ≐ ( D ϕ 1 ) ψ ∘ ϕ 1 − 1 ( x ) , x ∈ X {\displaystyle \phi _{1}\cdot \psi (x)\doteq (D\phi _{1})\psi \circ \phi _{1}^{-1}(x),x\in X} , where ( D ϕ 1 ) {\displaystyle (D\phi _{1})} is the Jacobian of the affine-transformed ODF and is defined as The LDDMM variational problem is defined as Beg solved the early LDDMM algorithms by solving the variational matching problem, taking variations with respect to the vector fields. [ 58 ] Another solution by Vialard, [ 59 ] reparameterizes the optimization problem in terms of the state q t ≐ I ∘ ϕ t − 1 , q 0 = I {\displaystyle q_{t}\doteq I\circ \phi _{t}^{-1},q_{0}=I} , for image I ( x ) , x ∈ X = R 3 {\displaystyle I(x),x\in X=R^{3}} , with the dynamics equation controlling the state by the control given in terms of the advection equation according to q ˙ t = − ∇ q t ⋅ v t {\displaystyle {\dot {q}}_{t}=-\nabla q_{t}\cdot v_{t}} .
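The advection equation q̇ = −∇q·v controlling the state can be discretized with a one-step semi-Lagrangian update: trace the characteristic back by v·dt and interpolate. The one-dimensional periodic setting and first-order scheme below are illustrative assumptions.

```python
import numpy as np

def advect(q, v, dt, dx):
    """One semi-Lagrangian step of dq/dt = -(dq/dx) v on a periodic
    1-D grid: evaluate q at the departure points x - v*dt."""
    n = len(q)
    x = np.arange(n)*dx
    L = n*dx
    xb = (x - v*dt) % L                    # departure points
    return np.interp(xb, x, q, period=L)
```

Repeating this step from q₀ = I approximates the advected state q_t = I ∘ φ_t⁻¹ used in the state parameterization above.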
The endpoint matching term E ( q 1 ) ≐ 1 2 ‖ q 1 − J ‖ 2 {\displaystyle E(q_{1})\doteq {\frac {1}{2}}\|q_{1}-J\|^{2}} gives the variational problem: The Hamiltonian dynamics with advected state and control dynamics q t = I ∘ ϕ t − 1 {\displaystyle q_{t}=I\circ \phi _{t}^{-1}} , q ˙ = − ∇ q ⋅ v {\displaystyle {\dot {q}}=-\nabla q\cdot v} with extended Hamiltonian H ( q , p , v ) = ( p | − ∇ q ⋅ v ) − 1 2 ( A v | v ) {\displaystyle H(q,p,v)=(p|-\nabla q\cdot v)-{\frac {1}{2}}(Av|v)} gives the variational problem [ 53 ] The first variation gives the condition on the optimizing vector field A v = − p ∇ q {\displaystyle Av=-p\nabla q} , with the endpoint condition p 1 = − ∂ E ∂ q ( q 1 ) {\displaystyle p_{1}=-{\frac {\partial E}{\partial q}}(q_{1})} and dynamics on the Lagrange multipliers determined by the Gateaux derivative conditions ( − p ˙ − ∇ ⋅ ( p v ) | δ q ) = 0 {\displaystyle (-{\dot {p}}-\nabla \cdot (pv)|\delta q)=0} and the state ( δ p | q ˙ + ∇ q ⋅ v ) = 0 {\displaystyle (\delta p|{\dot {q}}+\nabla q\cdot v)=0} . Software suites containing a variety of diffeomorphic mapping algorithms include the following:
https://en.wikipedia.org/wiki/Large_deformation_diffeomorphic_metric_mapping
Large dense-core vesicles (LDCVs) are lipid vesicles in neurons and secretory cells which may be filled with neurotransmitters , such as catecholamines or neuropeptides . LDCVs release their content through SNARE -mediated exocytosis, similar to synaptic vesicles . [ 1 ] One key difference between synaptic vesicles and LDCVs is that the protein synaptophysin , which is present in the membrane of synaptic vesicles, is absent in LDCVs. [ 2 ] LDCVs have an electron-dense core, which appears as a black circle in micrographs obtained with transmission electron microscopy . [ 1 ]
https://en.wikipedia.org/wiki/Large_dense_core_vesicles
Large eddy simulation ( LES ) is a mathematical model for turbulence used in computational fluid dynamics . It was initially proposed in 1963 by Joseph Smagorinsky to simulate atmospheric air currents, [ 1 ] and first explored by Deardorff (1970). [ 2 ] LES is currently applied in a wide variety of engineering applications, including combustion , [ 3 ] acoustics, [ 4 ] and simulations of the atmospheric boundary layer. [ 5 ] The simulation of turbulent flows by numerically solving the Navier–Stokes equations requires resolving a very wide range of time and length scales, all of which affect the flow field. Such a resolution can be achieved with direct numerical simulation (DNS), but DNS is computationally expensive, and its cost prohibits simulation of practical engineering systems with complex geometry or flow configurations, such as turbulent jets, pumps, vehicles, and landing gear. The principal idea behind LES is to reduce the computational cost by ignoring the smallest length scales, which are the most computationally expensive to resolve, via low-pass filtering of the Navier–Stokes equations. Such a low-pass filtering, which can be viewed as a time- and spatial-averaging, effectively removes small-scale information from the numerical solution. This information is not irrelevant, however, and its effect on the flow field must be modelled, a task which is an active area of research for problems in which small-scales can play an important role, such as near-wall flows, [ 6 ] [ 7 ] reacting flows, [ 3 ] and multiphase flows. [ 8 ] An LES filter can be applied to a spatial and temporal field ϕ ( x , t ) {\displaystyle \phi ({\boldsymbol {x}},t)} and perform a spatial filtering operation, a temporal filtering operation, or both. The filtered field, denoted with a bar, is defined as: [ 9 ] [ 10 ] where G {\displaystyle G} is the filter convolution kernel. 
This can also be written as: The filter kernel G {\displaystyle G} has an associated cutoff length scale Δ {\displaystyle \Delta } and cutoff time scale τ c {\displaystyle \tau _{c}} . Scales smaller than these are eliminated from ϕ ¯ {\displaystyle {\overline {\phi }}} . Using the above filter definition, any field ϕ {\displaystyle \phi } may be split up into a filtered and sub-filtered (denoted with a prime) portion, as It is important to note that the large eddy simulation filtering operation does not satisfy the properties of a Reynolds operator . The governing equations of LES are obtained by filtering the partial differential equations governing the flow field ρ u ( x , t ) {\displaystyle \rho {\boldsymbol {u}}({\boldsymbol {x}},t)} . There are differences between the incompressible and compressible LES governing equations, which lead to the definition of a new filtering operation. For incompressible flow, the continuity equation and Navier–Stokes equations are filtered, yielding the filtered incompressible continuity equation, and the filtered Navier–Stokes equations, where p ¯ {\displaystyle {\bar {p}}} is the filtered pressure field and S ¯ i j {\displaystyle {\bar {S}}_{ij}} is the rate-of-strain tensor evaluated using the filtered velocity. The nonlinear filtered advection term u i u j ¯ {\displaystyle {\overline {u_{i}u_{j}}}} is the chief cause of difficulty in LES modeling. It requires knowledge of the unfiltered velocity field, which is unknown, so it must be modeled. The analysis that follows illustrates the difficulty caused by the nonlinearity, namely, that it causes interaction between large and small scales, preventing separation of scales. The filtered advection term can be split up, following Leonard (1975), [ 11 ] as: where τ i j {\displaystyle \tau _{ij}} is the residual stress tensor, so that the filtered Navier-Stokes equations become with the residual stress tensor τ i j {\displaystyle \tau _{ij}} grouping all unclosed terms. 
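The filtering operation can be illustrated with a discrete top-hat (box) kernel in one dimension; the periodic treatment and window width below are illustrative assumptions. Any field then splits into a filtered part and a sub-filter part, φ = φ̄ + φ′.

```python
import numpy as np

def box_filter(phi, width):
    """Top-hat LES filter on a periodic 1-D field: each point is
    replaced by its average over a window of `width` points
    (width odd, >= 3), the discrete analogue of a box kernel G."""
    half = width//2
    ext = np.concatenate([phi[-half:], phi, phi[:half]])  # periodic pad
    kernel = np.ones(width)/width
    return np.convolve(ext, kernel, mode='valid')
```

The periodic box filter leaves constants unchanged and preserves the mean, and the sub-filter part is recovered as `phi - box_filter(phi, width)`.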
Leonard decomposed this stress tensor as τ i j = L i j + C i j + R i j {\displaystyle \tau _{ij}=L_{ij}+C_{ij}+R_{ij}} and provided physical interpretations for each term. L i j = u ¯ i u ¯ j ¯ − u ¯ i u ¯ j {\displaystyle L_{ij}={\overline {{\bar {u}}_{i}{\bar {u}}_{j}}}-{\bar {u}}_{i}{\bar {u}}_{j}} , the Leonard tensor, represents interactions among large scales, R i j = u i ′ u j ′ ¯ {\displaystyle R_{ij}={\overline {u_{i}^{\prime }u_{j}^{\prime }}}} , the Reynolds stress-like term, represents interactions among the sub-filter scales (SFS), and C i j = u ¯ i u j ′ ¯ + u ¯ j u i ′ ¯ {\displaystyle C_{ij}={\overline {{\bar {u}}_{i}u_{j}^{\prime }}}+{\overline {{\bar {u}}_{j}u_{i}^{\prime }}}} , the Clark tensor, [ 12 ] represents cross-scale interactions between large and small scales. [ 11 ] Modeling the unclosed term τ i j {\displaystyle \tau _{ij}} is the task of sub-grid scale (SGS) models. This is made challenging by the fact that the subgrid stress tensor τ i j {\displaystyle \tau _{ij}} must account for interactions among all scales, including filtered scales with unfiltered scales. The filtered governing equation for a passive scalar ϕ {\displaystyle \phi } , such as mixture fraction or temperature, can be written as where J ϕ {\displaystyle J_{\phi }} is the diffusive flux of ϕ {\displaystyle \phi } , and q j {\displaystyle q_{j}} is the sub-filter flux for the scalar ϕ {\displaystyle \phi } . The filtered diffusive flux J ϕ ¯ {\displaystyle {\overline {J_{\phi }}}} is unclosed, unless a particular form is assumed for it, such as a gradient diffusion model J ϕ = D ϕ ∂ ϕ ∂ x i {\displaystyle J_{\phi }=D_{\phi }{\frac {\partial \phi }{\partial x_{i}}}} . q j {\displaystyle q_{j}} is defined analogously to τ i j {\displaystyle \tau _{ij}} , and can similarly be split up into contributions from interactions between various scales. This sub-filter flux also requires a sub-filter model. 
Using Einstein notation , the Navier–Stokes equations for an incompressible fluid in Cartesian coordinates are Filtering the momentum equation results in If we assume that filtering and differentiation commute, then This equation models the changes in time of the filtered variables u i ¯ {\displaystyle {\bar {u_{i}}}} . Since the unfiltered variables u i {\displaystyle u_{i}} are not known, it is impossible to directly calculate ∂ u i u j ∂ x j ¯ {\displaystyle {\overline {\frac {\partial u_{i}u_{j}}{\partial x_{j}}}}} . However, the quantity ∂ u i ¯ u j ¯ ∂ x j {\displaystyle {\frac {\partial {\bar {u_{i}}}{\bar {u_{j}}}}{\partial x_{j}}}} is known. A substitution is made: Let τ i j = u i u j ¯ − u ¯ i u ¯ j {\displaystyle \tau _{ij}={\overline {u_{i}u_{j}}}-{\bar {u}}_{i}{\bar {u}}_{j}} . The resulting set of equations are the LES equations: For the governing equations of compressible flow, each equation, starting with the conservation of mass, is filtered. This gives: which results in an additional sub-filter term. However, it is desirable to avoid having to model the sub-filter scales of the mass conservation equation. For this reason, Favre [ 13 ] proposed a density-weighted filtering operation, called Favre filtering, defined for an arbitrary quantity ϕ {\displaystyle \phi } as: which, in the limit of incompressibility, becomes the normal filtering operation. This makes the conservation of mass equation: This concept can then be extended to write the Favre-filtered momentum equation for compressible flow. Following Vreman: [ 14 ] where σ i j {\displaystyle \sigma _{ij}} is the shear stress tensor, given for a Newtonian fluid by: and the term ∂ ∂ x j ( σ ¯ i j − σ ~ i j ) {\displaystyle {\frac {\partial }{\partial x_{j}}}\left({\overline {\sigma }}_{ij}-{\tilde {\sigma }}_{ij}\right)} represents a sub-filter viscous contribution from evaluating the viscosity μ ( T ) {\displaystyle \mu (T)} using the Favre-filtered temperature T ~ {\displaystyle {\tilde {T}}} . 
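A discrete sketch of the residual stress τ = (uu)̄ − ū ū and of Favre filtering, assuming a simple periodic box filter as the bar operator (the filter choice, window width, and one-dimensional reduction are illustrative assumptions):

```python
import numpy as np

def boxf(f, width=5):
    """Periodic top-hat filter (odd width), used as the bar operator."""
    half = width//2
    ext = np.concatenate([f[-half:], f, f[:half]])
    return np.convolve(ext, np.ones(width)/width, mode='valid')

def residual_stress(u):
    """tau = bar(u u) - bar(u) bar(u): the unclosed SGS term (1-D)."""
    return boxf(u*u) - boxf(u)**2

def favre(phi, rho):
    """Density-weighted (Favre) filter: phi_tilde = bar(rho phi)/bar(rho)."""
    return boxf(rho*phi)/boxf(rho)
```

For a nonnegative averaging kernel, Jensen's inequality gives τ ≥ 0 pointwise, and for constant density the Favre filter reduces to the plain filter, as stated in the incompressible limit above.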
The subgrid stress tensor for the Favre-filtered momentum field is given by By analogy, the Leonard decomposition may also be written for the residual stress tensor for a filtered triple product ρ ϕ ψ ¯ {\displaystyle {\overline {\rho \phi \psi }}} . The triple product can be rewritten using the Favre filtering operator as ρ ¯ ϕ ψ ~ {\displaystyle {\overline {\rho }}{\widetilde {\phi \psi }}} , which is an unclosed term (it requires knowledge of the fields ϕ {\displaystyle \phi } and ψ {\displaystyle \psi } , when only the fields ϕ ~ {\displaystyle {\tilde {\phi }}} and ψ ~ {\displaystyle {\tilde {\psi }}} are known). It can be broken up in a manner analogous to u i u j ¯ {\displaystyle {\overline {u_{i}u_{j}}}} above, which results in a sub-filter stress tensor ρ ¯ ( ϕ ψ ~ − ϕ ~ ψ ~ ) {\displaystyle {\overline {\rho }}\left({\widetilde {\phi \psi }}-{\tilde {\phi }}{\tilde {\psi }}\right)} . This sub-filter term can be split up into contributions from three types of interactions: the Leonard tensor L i j {\displaystyle L_{ij}} , representing interactions among resolved scales; the Clark tensor C i j {\displaystyle C_{ij}} , representing interactions between resolved and unresolved scales; and the Reynolds tensor R i j {\displaystyle R_{ij}} , which represents interactions among unresolved scales. [ 15 ] In addition to the filtered mass and momentum equations, filtering the kinetic energy equation can provide additional insight. The kinetic energy field can be filtered to yield the total filtered kinetic energy: and the total filtered kinetic energy can be decomposed into two terms: the kinetic energy of the filtered velocity field E f {\displaystyle E_{f}} , and the residual kinetic energy k r {\displaystyle k_{r}} , such that E ¯ = E f + k r {\displaystyle {\overline {E}}=E_{f}+k_{r}} .
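The decomposition Ē = E_f + k_r can be checked discretely. With an averaging (nonnegative-kernel) filter such as the box filter below, the residual kinetic energy k_r is pointwise nonnegative by the Cauchy–Schwarz inequality; the filter choice and one-dimensional setting are illustrative assumptions.

```python
import numpy as np

def boxf(f, width=5):
    """Periodic top-hat filter (odd width), used as the bar operator."""
    half = width//2
    ext = np.concatenate([f[-half:], f, f[:half]])
    return np.convolve(ext, np.ones(width)/width, mode='valid')

def energy_split(u):
    """Return (E_f, k_r): kinetic energy of the filtered field and
    residual kinetic energy, so that E_bar = E_f + k_r by construction."""
    E_bar = 0.5*boxf(u*u)        # total filtered kinetic energy
    E_f = 0.5*boxf(u)**2         # energy of the filtered velocity
    return E_f, E_bar - E_f
```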
The conservation equation for E f {\displaystyle E_{f}} can be obtained by multiplying the filtered momentum transport equation by u i ¯ {\displaystyle {\overline {u_{i}}}} to yield: where ϵ f = 2 ν S i j ¯ S i j ¯ {\displaystyle \epsilon _{f}=2\nu {\bar {S_{ij}}}{\bar {S_{ij}}}} is the dissipation of kinetic energy of the filtered velocity field by viscous stress, and Π = − τ i j r S i j ¯ {\displaystyle \Pi =-\tau _{ij}^{r}{\bar {S_{ij}}}} represents the sub-filter scale (SFS) dissipation of kinetic energy. The terms on the left-hand side represent transport, and the terms on the right-hand side are sink terms that dissipate kinetic energy. [ 9 ] The Π {\displaystyle \Pi } SFS dissipation term is of particular interest, since it represents the transfer of energy from large resolved scales to small unresolved scales. On average, Π {\displaystyle \Pi } transfers energy from large to small scales. However, instantaneously Π {\displaystyle \Pi } can be positive or negative, meaning it can also act as a source term for E f {\displaystyle E_{f}} , the kinetic energy of the filtered velocity field. The transfer of energy from unresolved to resolved scales is called backscatter (and likewise the transfer of energy from resolved to unresolved scales is called forward-scatter ). [ 16 ] Large eddy simulation involves the solution to the discrete filtered governing equations using computational fluid dynamics . LES resolves scales from the domain size L {\displaystyle L} down to the filter size Δ {\displaystyle \Delta } , and as such a substantial portion of high wave number turbulent fluctuations must be resolved. This requires either high-order numerical schemes , or fine grid resolution if low-order numerical schemes are used. Chapter 13 of Pope [ 9 ] addresses the question of how fine a grid resolution Δ x {\displaystyle \Delta x} is needed to resolve a filtered velocity field u ¯ ( x ) {\displaystyle {\overline {u}}({\boldsymbol {x}})} . 
Ghosal [ 17 ] found that for low-order discretization schemes, such as those used in finite volume methods, the truncation error can be the same order as the subfilter scale contributions, unless the filter width Δ {\displaystyle \Delta } is considerably larger than the grid spacing Δ x {\displaystyle \Delta x} . While even-order schemes have truncation error, they are non-dissipative, [ 18 ] and because subfilter scale models are dissipative, even-order schemes will not affect the subfilter scale model contributions as strongly as dissipative schemes. The filtering operation in large eddy simulation can be implicit or explicit. Implicit filtering recognizes that the subfilter scale model will dissipate in the same manner as many numerical schemes. In this way, the grid, or the numerical discretization scheme, can be assumed to be the LES low-pass filter. While this takes full advantage of the grid resolution, and eliminates the computational cost of calculating a subfilter scale model term, it is difficult to determine the shape of the LES filter that the discretization implicitly applies, which is associated with some numerical issues. Additionally, truncation error can also become an issue. [ 19 ] In explicit filtering, an LES filter is applied to the discretized Navier–Stokes equations, providing a well-defined filter shape and reducing the truncation error. However, explicit filtering requires a finer grid than implicit filtering, and the computational cost increases with ( Δ x ) 4 {\displaystyle (\Delta x)^{4}} . Chapter 8 of Sagaut (2006) covers LES numerics in greater detail. [ 10 ] Inlet boundary conditions affect the accuracy of LES significantly, and the treatment of inlet conditions for LES is a complicated problem. Theoretically, a good boundary condition for LES should contain the following features: [ 20 ] (1) providing accurate information of flow characteristics, i.e.
velocity and turbulence; (2) satisfying the Navier-Stokes equations and other physics; (3) being easy to implement and adjust to different cases. Currently, methods of generating inlet conditions for LES are broadly divided into two categories classified by Tabor et al.: [ 21 ] The first method for generating turbulent inlets is to synthesize them according to particular cases, such as Fourier techniques, proper orthogonal decomposition (POD) and vortex methods. The synthesis techniques attempt to construct a turbulent field at the inlet that has suitable turbulence-like properties and makes it easy to specify parameters of the turbulence, such as the turbulent kinetic energy and turbulent dissipation rate. In addition, inlet conditions generated using random numbers are computationally inexpensive. However, the method has one serious drawback: the synthesized turbulence does not satisfy the physical structure of fluid flow governed by the Navier-Stokes equations. [ 20 ] The second method involves a separate precursor calculation to generate a turbulent database which can be introduced into the main computation at the inlets. The database (sometimes called a 'library') can be generated in a number of ways, such as cyclic domains, pre-prepared libraries, and internal mapping. However, generating turbulent inflow by precursor simulations requires substantial computational capacity. Researchers examining the application of various types of synthetic and precursor calculations have found that the more realistic the inlet turbulence, the more accurately LES predicts the results. [ 20 ] To discuss the modeling of unresolved scales, first the unresolved scales must be classified. They fall into two groups: resolved sub-filter scales (SFS), and sub-grid scales (SGS). The resolved sub-filter scales represent the scales with wave numbers larger than the cutoff wave number k c {\displaystyle k_{c}} , but whose effects are dampened by the filter.
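The Fourier-based synthesis methods described above can be sketched minimally: random phases are assigned to a prescribed power-law spectrum and transformed back to physical space. The spectrum slope and normalization below are illustrative; real inlet generators also impose spatial correlations, anisotropy, and divergence-free constraints that this toy omits.

```python
import numpy as np

def synthetic_turbulence(n, slope=-5.0/3.0, seed=0):
    """Synthesize a zero-mean periodic 1-D signal with a power-law
    energy spectrum and random phases (a Fourier-mode inlet
    generator in the spirit of the synthesis methods above)."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=1.0/n)           # integer mode numbers 0..n/2
    amp = np.zeros_like(k)
    amp[1:] = k[1:]**(slope/2.0)              # |u_hat| ~ sqrt(E(k))
    phase = rng.uniform(0, 2*np.pi, len(k))
    uhat = amp*np.exp(1j*phase)
    uhat[0] = 0.0                             # zero mean
    return np.fft.irfft(uhat, n)
```

Because the phases are random and uncorrelated, the result is "turbulence-like" in spectrum only, illustrating the drawback noted above: such fields do not satisfy the Navier-Stokes equations.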
Resolved sub-filter scales only exist when filters non-local in wave-space are used (such as a box or Gaussian filter). These resolved sub-filter scales must be modeled using filter reconstruction. Sub-grid scales are any scales that are smaller than the cutoff filter width Δ {\displaystyle \Delta } . The form of the SGS model depends on the filter implementation. As mentioned in the Numerical methods for LES section, if implicit LES is considered, no SGS model is implemented and the numerical effects of the discretization are assumed to mimic the physics of the unresolved turbulent motions. Without a universally valid description of turbulence, empirical information must be utilized when constructing and applying SGS models, supplemented with fundamental physical constraints such as Galilean invariance. [ 9 ] [ 22 ] Two classes of SGS models exist: functional models and structural models . Some models may be categorized as both. Functional models are simpler than structural models, focusing only on dissipating energy at a rate that is physically correct. These are based on an artificial eddy viscosity approach, where the effects of turbulence are lumped into a turbulent viscosity. The approach treats dissipation of kinetic energy at sub-grid scales as analogous to molecular diffusion. In this case, the deviatoric part of τ i j {\displaystyle \tau _{ij}} is modeled as: where ν t {\displaystyle \nu _{\mathrm {t} }} is the turbulent eddy viscosity and S ¯ i j = 1 2 ( ∂ u ¯ i ∂ x j + ∂ u ¯ j ∂ x i ) {\displaystyle {\bar {S}}_{ij}={\frac {1}{2}}\left({\frac {\partial {\bar {u}}_{i}}{\partial x_{j}}}+{\frac {\partial {\bar {u}}_{j}}{\partial x_{i}}}\right)} is the rate-of-strain tensor. Based on dimensional analysis, the eddy viscosity must have units of [ ν t ] = m 2 s {\displaystyle \left[\nu _{\mathrm {t} }\right]={\frac {\mathrm {m^{2}} }{\mathrm {s} }}} .
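The eddy-viscosity closure can be sketched in one dimension, where the strain rate reduces to dū/dx. The Smagorinsky-type choice of ν_t and the value of the constant below are illustrative assumptions.

```python
import numpy as np

def eddy_viscosity_stress(u_bar, dx, Cs=0.17):
    """1-D sketch of the eddy-viscosity closure: nu_t is modeled as
    (length scale)^2 * |rate of strain| (a Smagorinsky-type choice,
    illustrative), and the modeled deviatoric stress is tau = -2 nu_t S."""
    S = np.gradient(u_bar, dx)        # 1-D surrogate of S_ij
    nu_t = (Cs*dx)**2*np.abs(S)       # units m^2/s, as required
    return nu_t, -2.0*nu_t*S
```

Note that the modeled SFS dissipation −τS = 2 ν_t S² is nonnegative by construction, which is exactly why purely eddy-viscosity closures cannot represent backscatter.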
Most eddy viscosity SGS models model the eddy viscosity as the product of a characteristic length scale and a characteristic velocity scale. The first SGS model developed was the Smagorinsky–Lilly SGS model, which was developed by Smagorinsky [ 1 ] and used in the first LES simulation by Deardorff. [ 2 ] It models the eddy viscosity as: where Δ {\displaystyle \Delta } is the grid size and C {\displaystyle C} is a constant. This method assumes that the energy production and dissipation of the small scales are in equilibrium - that is, ϵ = Π {\displaystyle \epsilon =\Pi } . Germano et al. [ 23 ] identified a number of studies using the Smagorinsky model that each found different values for the Smagorinsky constant C {\displaystyle C} for different flow configurations. In an attempt to formulate a more universal approach to SGS models, Germano et al. proposed a dynamic Smagorinsky model, which utilized two filters: a grid LES filter, denoted f ¯ {\displaystyle {\overline {f}}} , and a test LES filter, denoted f ^ {\displaystyle {\hat {f}}} for any turbulent field f {\displaystyle f} . The test filter is larger in size than the grid filter and adds an additional smoothing of the turbulence field over the already smoothed fields represented by the LES. Applying the test filter to the LES equations (which are obtained by applying the "grid" filter to Navier-Stokes equations) results in a new set of equations that are identical in form but with the SGS stress τ i j = u i u j ¯ − u ¯ i u ¯ j {\displaystyle \tau _{ij}={\overline {u_{i}u_{j}}}-{\bar {u}}_{i}{\bar {u}}_{j}} replaced by T i j = u i u j ¯ ^ − u ¯ ^ i u ¯ ^ j {\displaystyle T_{ij}={\widehat {\overline {u_{i}u_{j}}}}-{\hat {\bar {u}}}_{i}{\hat {\bar {u}}}_{j}} . Germano et al. noted that even though neither τ i j {\displaystyle \tau _{ij}} nor T i j {\displaystyle T_{ij}} can be computed exactly because of the presence of unresolved scales, there is an exact relation connecting these two tensors. 
This relation, known as the Germano identity is L i j = T i j − τ ^ i j . {\displaystyle L_{ij}=T_{ij}-{\hat {\tau }}_{ij}.} Here L i j = u ¯ i u ¯ j ^ − u ¯ i ^ u ¯ j ^ {\displaystyle L_{ij}={\widehat {{\bar {u}}_{i}{\bar {u}}_{j}}}-{\widehat {{\bar {u}}_{i}}}{\widehat {{\bar {u}}_{j}}}} can be explicitly evaluated as it involves only the filtered velocities and the operation of test filtering. The significance of the identity is that if one assumes that turbulence is self similar so that the SGS stress at the grid and test levels have the same form τ i j − ( τ k k / 3 ) δ i j = − 2 C Δ 2 | S ¯ i j | S ¯ i j {\displaystyle \tau _{ij}-(\tau _{kk}/3)\delta _{ij}=-2C\Delta ^{2}|{\bar {S}}_{ij}|{\bar {S}}_{ij}} and T i j − ( T k k / 3 ) δ i j = − 2 C Δ ^ 2 | S ¯ ^ i j | S ¯ ^ i j {\displaystyle T_{ij}-(T_{kk}/3)\delta _{ij}=-2C{\hat {\Delta }}^{2}|{\hat {\bar {S}}}_{ij}|{\hat {\bar {S}}}_{ij}} , then the Germano identity provides an equation from which the Smagorinsky coefficient C {\displaystyle C} (which is no longer a 'constant') can potentially be determined. [Inherent in the procedure is the assumption that the coefficient C {\displaystyle C} is invariant of scale (see review [ 24 ] )]. In order to do this, two additional steps were introduced in the original formulation. First, one assumed that even though C {\displaystyle C} was in principle variable, the variation was sufficiently slow that it can be moved out of the filtering operation C ( . ) ^ = C ( . ) ^ {\displaystyle {\widehat {C(.)}}=C{\widehat {(.)}}} . Second, since C {\displaystyle C} was a scalar, the Germano identity was contracted with a second rank tensor (the rate of strain tensor was chosen) to convert it to a scalar equation from which C {\displaystyle C} could be determined. Lilly [ 25 ] found a less arbitrary and therefore more satisfactory approach for obtaining C from the tensor identity. 
He noted that the Germano identity required the satisfaction of nine equations at each point in space (of which only five are independent) for a single quantity C {\displaystyle C} . The problem of obtaining C {\displaystyle C} was therefore over-determined. He therefore proposed that C {\displaystyle C} be determined using a least-squares fit by minimizing the residuals. This results in where, for brevity, α i j = − 2 Δ ^ 2 | S ¯ ^ | S ¯ ^ i j {\displaystyle \alpha _{ij}=-2{\hat {\Delta }}^{2}|{\hat {\bar {S}}}|{\hat {\bar {S}}}_{ij}} and β i j = − 2 Δ 2 | S ¯ | S ¯ i j {\displaystyle \beta _{ij}=-2\Delta ^{2}|{\bar {S}}|{\bar {S}}_{ij}} . Initial attempts to implement the model in LES simulations proved unsuccessful. First, the computed coefficient was not at all "slowly varying" as assumed, and varied as much as any other turbulent field. Second, the computed C {\displaystyle C} could be positive as well as negative. The latter fact in itself should not be regarded as a shortcoming, as a priori tests using filtered DNS fields have shown that the local subgrid dissipation rate − τ i j S ¯ i j {\displaystyle -\tau _{ij}{\bar {S}}_{ij}} in a turbulent field is almost as likely to be negative as it is positive, even though the integral over the fluid domain is always positive, representing a net dissipation of energy in the large scales. A slight preponderance of positive values, as opposed to strict positivity of the eddy viscosity, results in the observed net dissipation. This so-called "backscatter" of energy from small to large scales indeed corresponds to negative C {\displaystyle C} values in the Smagorinsky model. Nevertheless, the Germano–Lilly formulation was found not to result in stable calculations.
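Lilly's least-squares solution can be sketched in one dimension: compute the resolved (Leonard-type) stress L from the Germano identity and the model difference M, then C = ΣLM / ΣMM over the domain. The box filters, filter widths, and one-dimensional reduction below are illustrative assumptions, not the full tensorial procedure.

```python
import numpy as np

def boxf(f, width):
    """Periodic top-hat filter (odd width)."""
    half = width//2
    ext = np.concatenate([f[-half:], f, f[:half]])
    return np.convolve(ext, np.ones(width)/width, mode='valid')

def dynamic_C(u, dx, grid_w=3, test_w=7):
    """Germano-Lilly dynamic coefficient (1-D sketch):
    least-squares C = sum(L*M)/sum(M*M) over the domain."""
    ub = boxf(u, grid_w)                       # grid-level field
    uh = boxf(ub, test_w)                      # test-filtered field
    Lt = boxf(ub*ub, test_w) - uh*uh           # resolved stress (Germano identity)
    S, Sh = np.gradient(ub, dx), np.gradient(uh, dx)
    beta = -2*(grid_w*dx)**2*np.abs(S)*S       # grid-level Smagorinsky term
    alpha = -2*(test_w*dx)**2*np.abs(Sh)*Sh    # test-level Smagorinsky term
    M = alpha - boxf(beta, test_w)
    return np.sum(Lt*M)/np.sum(M*M)
```

In a full LES the same contraction is done pointwise, which is exactly where the wildly varying and sign-changing coefficients described above appear.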
An ad hoc measure was adopted by averaging the numerator and denominator over homogeneous directions (where such directions exist in the flow). When the averaging involved a large enough statistical sample that the computed C {\displaystyle C} was positive (or at least only rarely negative), stable calculations were possible. Simply setting the negative values to zero (a procedure called "clipping"), with or without the averaging, also resulted in stable calculations. Meneveau proposed [ 26 ] an averaging over Lagrangian fluid trajectories with an exponentially decaying "memory". This can be applied to problems lacking homogeneous directions and can be stable if the effective time over which the averaging is done is long enough and yet not so long as to smooth out spatial inhomogeneities of interest. Lilly's modification of the Germano method followed by a statistical averaging or synthetic removal of negative-viscosity regions seems ad hoc, even if it could be made to "work". An alternative formulation of the least-squares minimization procedure, known as the "Dynamic Localization Model" (DLM), was suggested by Ghosal et al. [ 27 ] In this approach one first defines the quantity E i j = L i j − C α i j + C β i j ^ {\displaystyle E_{ij}=L_{ij}-C\alpha _{ij}+{\widehat {C\beta _{ij}}}} , that is, the Germano identity with the tensors τ i j {\displaystyle \tau _{ij}} and T i j {\displaystyle T_{ij}} replaced by the appropriate SGS model. This tensor then represents the amount by which the subgrid model fails to respect the Germano identity at each spatial location. In Lilly's approach, C {\displaystyle C} is then pulled out of the hat operator, making E i j {\displaystyle E_{ij}} an algebraic function of C {\displaystyle C} , which is then determined by requiring that E i j E i j {\displaystyle E_{ij}E_{ij}} , considered as a function of C, have the least possible value. However, since the C {\displaystyle C} thus obtained turns out to be just as variable as any other fluctuating quantity in turbulence, the original assumption of the constancy of C {\displaystyle C} cannot be justified a posteriori.
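The averaging-plus-clipping stabilization can be sketched in a few lines. This is an illustrative snippet under assumed conventions (homogeneous directions taken to be array axes 0 and 2, as in a channel flow), not code from the cited works.

```python
import numpy as np

def averaged_clipped_coefficient(L, M, axes=(0, 2)):
    """Average the Germano-identity numerator and denominator separately
    over the homogeneous directions, then clip negative C to zero.

    L, M: tensor fields of shape (nx, ny, nz, 3, 3); the result is a
    profile C(y) of shape (ny,).
    """
    num = np.einsum('...ij,...ij->...', L, M).mean(axis=axes)
    den = np.einsum('...ij,...ij->...', M, M).mean(axis=axes)
    return np.maximum(num / den, 0.0)  # "clipping" of negative values
```

Averaging before dividing (rather than averaging the pointwise C) is what enlarges the statistical sample and suppresses negative values.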
In the DLM approach one avoids this inconsistency by not invoking the step of removing C {\displaystyle C} from the test-filtering operation. Instead, one defines a global error over the entire flow domain by the quantity E [ C ] = ∫ E i j E i j d V {\displaystyle E[C]=\int E_{ij}E_{ij}\,dV} , where the integral ranges over the whole fluid volume. This global error E [ C ( x , y , z , t ) ] {\displaystyle E[C(x,y,z,t)]} is then a functional of the spatially varying function C ( x , y , z , t ) {\displaystyle C(x,y,z,t)} (here the time instant, t {\displaystyle t} , is fixed and therefore appears just as a parameter), which is determined so as to minimize this functional. The solution to this variational problem is that C {\displaystyle C} must satisfy a Fredholm integral equation of the second kind, C ( x ) = f ( x ) + ∫ K ( x , y ) C ( y ) d y {\displaystyle C({\boldsymbol {x}})=f({\boldsymbol {x}})+\int K({\boldsymbol {x}},{\boldsymbol {y}})C({\boldsymbol {y}})\,d{\boldsymbol {y}}} , where the functions K ( x , y ) {\displaystyle K({\boldsymbol {x}},{\boldsymbol {y}})} and f ( x ) {\displaystyle f({\boldsymbol {x}})} are defined in terms of the resolved fields L i j , α i j , β i j {\displaystyle L_{ij},\alpha _{ij},\beta _{ij}} and are therefore known at each time step, and the integral ranges over the whole fluid domain. The integral equation is solved numerically by an iteration procedure, and convergence was found to be generally rapid if used with a pre-conditioning scheme. Even though this variational approach removes an inherent inconsistency in Lilly's approach, the C ( x , y , z , t ) {\displaystyle C(x,y,z,t)} obtained from the integral equation still displayed the instability associated with negative viscosities. This can be resolved by insisting that E [ C ] {\displaystyle E[C]} be minimized subject to the constraint C ( x , y , z , t ) ≥ 0 {\displaystyle C(x,y,z,t)\geq 0} . This leads to a nonlinear equation for C {\displaystyle C} , C ( x ) = [ f ( x ) + ∫ K ( x , y ) C ( y ) d y ] + {\displaystyle C({\boldsymbol {x}})=\left[f({\boldsymbol {x}})+\int K({\boldsymbol {x}},{\boldsymbol {y}})C({\boldsymbol {y}})\,d{\boldsymbol {y}}\right]_{+}} . Here the suffix + indicates the "positive part of", that is, x + = ( x + | x | ) / 2 {\displaystyle x_{+}=(x+|x|)/2} . Even though this superficially looks like "clipping", it is not an ad hoc scheme but a bona fide solution of the constrained variational problem.
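The iterative solution of a Fredholm equation of the second kind can be sketched generically. In this snippet the kernel `K` and source `f` are random stand-ins, not the actual DLM fields built from L i j , α i j , β i j {\displaystyle L_{ij},\alpha _{ij},\beta _{ij}} ; the domain is discretized into n points with a single quadrature weight `w`, and a plain fixed-point iteration (no pre-conditioning) is used.

```python
import numpy as np

def solve_fredholm(K, f, w, tol=1e-12, max_iter=1000):
    """Fixed-point iteration for C = f + w * (K @ C), the discretized form
    of C(x) = f(x) + ∫ K(x,y) C(y) dy. Converges when the discretized
    integral operator w*K is a contraction."""
    C = f.copy()
    for _ in range(max_iter):
        C_new = f + w * (K @ C)
        if np.max(np.abs(C_new - C)) < tol:
            break
        C = C_new
    return C_new
```

For the constrained (non-negative) variant one would apply the positive-part operation to the right-hand side at each sweep; that step is omitted here for clarity.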
This DLM(+) model was found to be stable and yielded excellent results for forced and decaying isotropic turbulence, channel flows, and a variety of other more complex geometries. If a flow happens to have homogeneous directions (say the directions x and z), then one can introduce the ansatz C = C ( y , t ) {\displaystyle C=C(y,t)} . The variational approach then immediately yields Lilly's result with averaging over homogeneous directions, without any need for ad hoc modifications of a prior result. One shortcoming of the DLM(+) model was that it did not describe backscatter, which analyses of DNS data show to be real. Two approaches were developed to address this. In one approach, due to Carati et al., [ 28 ] a fluctuating force with amplitude determined by the fluctuation-dissipation theorem is added, in analogy to Landau's theory of fluctuating hydrodynamics. In the second approach, one notes that any "backscattered" energy appears in the resolved scales only at the expense of energy in the subgrid scales. The DLM can be modified in a simple way to take this physical fact into account so as to allow for backscatter while remaining inherently stable. This k-equation version of the DLM, DLM(k), replaces Δ | S ¯ | {\displaystyle \Delta |{\bar {S}}|} in the Smagorinsky eddy-viscosity model by k {\displaystyle {\sqrt {k}}} as an appropriate velocity scale. The procedure for determining C {\displaystyle C} remains identical to the "unconstrained" version except that the tensors become α i j = − 2 Δ ^ K S ¯ ^ i j {\displaystyle \alpha _{ij}=-2{\hat {\Delta }}{\sqrt {K}}{\hat {\bar {S}}}_{ij}} and β i j = − 2 Δ k S ¯ i j {\displaystyle \beta _{ij}=-2\Delta {\sqrt {k}}{\bar {S}}_{ij}} , where the sub-test-scale kinetic energy K is related to the subgrid-scale kinetic energy k by K = k + L i i / 2 {\displaystyle K=k+L_{ii}/2} (which follows by taking the trace of the Germano identity).
To determine k we now use a transport equation, where ν {\displaystyle \nu } is the kinematic viscosity and C ∗ , D {\displaystyle C_{*},D} are positive coefficients representing kinetic-energy dissipation and diffusion respectively. These can be determined following the dynamic procedure with constrained minimization as in DLM(+). This approach, though more expensive to implement than DLM(+), was found to be stable and resulted in good agreement with experimental data for a variety of flows tested. Furthermore, it is mathematically impossible for the DLM(k) to result in an unstable computation, as the sum of the large-scale and SGS energies is non-increasing by construction. Both of these approaches incorporating backscatter work well. They yield models that are slightly less dissipative, with somewhat improved performance over the DLM(+). The DLM(k) model additionally yields the subgrid kinetic energy, which may be a physical quantity of interest. These improvements are achieved at a somewhat increased cost in model implementation. The Dynamic Model originated at the 1990 Summer Program of the Center for Turbulence Research (CTR) at Stanford University . A series of "CTR-Tea" seminars celebrated the 30th anniversary of this important milestone in turbulence modeling.
https://en.wikipedia.org/wiki/Large_eddy_simulation
Large numbers , far beyond those encountered in everyday life—such as simple counting or financial transactions—play a crucial role in various domains. These expansive quantities appear prominently in mathematics , cosmology , cryptography , and statistical mechanics . While they often manifest as large positive integers , they can also take other forms in different contexts (such as p-adic numbers ). Googology delves into the naming conventions and properties of these immense numerical entities. [ 1 ] [ 2 ] Since the customary, traditional (non-technical) decimal format of large numbers can be lengthy, other systems have been devised that allow for shorter representation. For example, a billion is represented as 13 characters (1,000,000,000) in decimal format, but is only 3 characters (10 9 ) when expressed in exponential format . A trillion is 17 characters in decimal, but only 4 (10 12 ) in exponential. Values that vary dramatically can be represented and compared graphically via a logarithmic scale . A natural-language numbering system allows large numbers to be represented using names that more clearly distinguish numeric scale than a series of digits. For example, "billion" may be easier to comprehend for some readers than "1,000,000,000". But, as names, a numeric value can be lengthy. For example, "2,345,789" is "two million, three hundred forty-five thousand, seven hundred and eighty-nine". Standard notation is a variation of English's natural-language numbering in which the scale word is shortened into a suffix; for example, 2,343,678,900 ≈ 2.34 B (B = billion). Scientific notation was devised to represent the vast range of values encountered in scientific research in a format that is more compact than traditional formats yet allows for high precision when called for. A value is represented as a decimal fraction times a power of 10 . The factored form is intended to make reading comprehension easier than a lengthy series of zeros.
For example, 1.0 × 10 9 expresses one billion—1 followed by nine zeros. The reciprocal , one billionth, is 1.0 × 10 −9 . In E notation, the "×10^" is written as the letter "e"; for example, one billion is written 1e9. Examples of large numbers describing real-world things: In astronomy and cosmology large numbers for measures of length and time are encountered. For instance, according to the prevailing Big Bang model , the universe is approximately 13.8 billion years old (equivalent to 4.355 × 10 17 seconds). The observable universe spans 93 billion light years (approximately 8.8 × 10 26 meters) and hosts around 5 × 10 22 stars, organized into roughly 125 billion galaxies (as observed by the Hubble Space Telescope). As a rough estimate, there are about 10 80 atoms within the observable universe. [ 7 ] According to Don Page , physicist at the University of Alberta, Canada, the longest finite time that has so far been explicitly calculated by any physicist is roughly 10 10 10 10 10 1.1 {\displaystyle 10^{10^{10^{10^{10^{1.1}}}}}} years, which corresponds to the scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the estimated mass of the entire universe, observable or not, assuming a certain inflationary model with an inflaton whose mass is 10 −6 Planck masses . [ 8 ] [ 9 ] This time assumes a statistical model subject to Poincaré recurrence. A much simplified way of thinking about this time is in a model where the universe's history repeats itself arbitrarily many times due to properties of statistical mechanics ; this is the time scale when it will first be somewhat similar (for a reasonable choice of "similar") to its current state again. Combinatorial processes give rise to astonishingly large numbers. The factorial function, which quantifies permutations of a fixed set of objects, grows superexponentially as the number of objects increases. Stirling's formula provides a precise asymptotic expression for this rapid growth.
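The representation-length comparison above can be checked directly. This small illustrative snippet counts Unicode code points for the superscript forms (so "10⁹" is 3 characters) and confirms that the E-notation string parses back to the same value.

```python
billion = 10 ** 9

# Decimal format with digit grouping: "1,000,000,000" is 13 characters.
assert len(f"{billion:,}") == 13
# Exponential format "10⁹" is 3 characters (code points).
assert len("10⁹") == 3

# A trillion: 17 characters in decimal, 4 in exponential format.
assert len(f"{10 ** 12:,}") == 17
assert len("10¹²") == 4

# E notation: "1e9" denotes the same value as one billion.
assert float("1e9") == billion
```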
In statistical mechanics, combinatorial numbers reach such immense magnitudes that they are often expressed using logarithms . Gödel numbers , along with similar representations of bit-strings in algorithmic information theory , are vast—even for mathematical statements of moderate length. Remarkably, certain pathological numbers surpass even the Gödel numbers associated with typical mathematical propositions. Logician Harvey Friedman has made significant contributions to the study of very large numbers, including work related to Kruskal's tree theorem and the Robertson–Seymour theorem . To help viewers of Cosmos distinguish between "millions" and "billions", astronomer Carl Sagan stressed the "b". Sagan never did, however, say " billions and billions ". The public's association of the phrase and Sagan came from a Tonight Show skit. Parodying Sagan's effect, Johnny Carson quipped "billions and billions". [ 10 ] The phrase has, however, now become a humorous fictitious number—the Sagan . Cf. , Sagan Unit . A standardized way of writing very large numbers allows them to be easily sorted in increasing order, and one can get a good idea of how much larger a number is than another one. To compare numbers in scientific notation, say 5×10 4 and 2×10 5 , compare the exponents first, in this case 5 > 4, so 2×10 5 > 5×10 4 . If the exponents are equal, the mantissa (or coefficient) should be compared, thus 5×10 4 > 2×10 4 because 5 > 2. Tetration with base 10 gives the sequence 10 ↑ ↑ n = 10 → n → 2 = ( 10 ↑ ) n 1 {\displaystyle 10\uparrow \uparrow n=10\to n\to 2=(10\uparrow )^{n}1} , the power towers of numbers 10, where ( 10 ↑ ) n {\displaystyle (10\uparrow )^{n}} denotes a functional power of the function f ( n ) = 10 n {\displaystyle f(n)=10^{n}} (the function also expressed by the suffix "-plex" as in googolplex, see the googol family ). These are very round numbers, each representing an order of magnitude in a generalized sense. 
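The comparison rule for scientific notation described above (exponents first, then mantissas) amounts to a lexicographic comparison of (exponent, mantissa) pairs. A minimal sketch, assuming each number is given as a (mantissa, exponent) pair with the mantissa normalized to [1, 10):

```python
def sci_less(a, b):
    """Return True if a < b, where a and b are (mantissa, exponent) pairs
    with 1 <= mantissa < 10. The larger exponent wins; only on a tie are
    the mantissas compared."""
    (ma, ea), (mb, eb) = a, b
    return (ea, ma) < (eb, mb)
```

For example, `sci_less((5, 4), (2, 5))` is true, matching the text's 5×10⁴ < 2×10⁵.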
A crude way of specifying how large a number is, is specifying between which two numbers in this sequence it is. More precisely, numbers in between can be expressed in the form ( 10 ↑ ) n a {\displaystyle (10\uparrow )^{n}a} , i.e., with a power tower of 10s, and a number at the top, possibly in scientific notation, e.g. 10 10 10 10 10 4.829 = ( 10 ↑ ) 5 4.829 {\displaystyle 10^{10^{10^{10^{10^{4.829}}}}}=(10\uparrow )^{5}4.829} , a number between 10 ↑ ↑ 5 {\displaystyle 10\uparrow \uparrow 5} and 10 ↑ ↑ 6 {\displaystyle 10\uparrow \uparrow 6} (note that 10 ↑ ↑ n < ( 10 ↑ ) n a < 10 ↑ ↑ ( n + 1 ) {\displaystyle 10\uparrow \uparrow n<(10\uparrow )^{n}a<10\uparrow \uparrow (n+1)} if 1 < a < 10 {\displaystyle 1<a<10} ). (See also extension of tetration to real heights .) Thus googolplex is 10 10 100 = ( 10 ↑ ) 2 100 = ( 10 ↑ ) 3 2 {\displaystyle 10^{10^{100}}=(10\uparrow )^{2}100=(10\uparrow )^{3}2} . Thus the "order of magnitude" of a number (on a larger scale than usually meant) can be characterized by the number of times ( n ) one has to take the log 10 {\displaystyle \log _{10}} to get a number between 1 and 10. Thus, the number is between 10 ↑ ↑ n {\displaystyle 10\uparrow \uparrow n} and 10 ↑ ↑ ( n + 1 ) {\displaystyle 10\uparrow \uparrow (n+1)} . As explained, a more precise description of a number also specifies the value of this number between 1 and 10, or the previous number (taking the logarithm one time less) between 10 and 10 10 , or the next, between 0 and 1. Note that 10 ( 10 ↑ ) n x = ( 10 ↑ ) n + 1 x {\displaystyle 10^{(10\uparrow )^{n}x}=(10\uparrow )^{n+1}x} ; i.e., if a number x is too large for a representation ( 10 ↑ ) n x {\displaystyle (10\uparrow )^{n}x} the power tower can be made one higher, replacing x by log 10 x , or find x from the lower-tower representation of the log 10 of the whole number.
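The "take log₁₀ until the value lands in [1, 10)" procedure can be sketched directly. An illustrative snippet, assuming the input is a float with x ≥ 1 (so only modestly tall towers are representable):

```python
import math

def tower_form(x):
    """Write x as (10^)^n a with 1 <= a < 10 by counting how many times
    log10 must be taken before the value falls into [1, 10).
    Assumes x >= 1. Returns the pair (n, a)."""
    n = 0
    while x >= 10:
        x = math.log10(x)
        n += 1
    return n, x
```

For instance, a googol (10¹⁰⁰) needs two logarithms: log₁₀ gives 100, and log₁₀ again gives 2, so it is (10↑)²2, between 10↑↑2 and 10↑↑3.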
If the power tower would contain one or more numbers different from 10, the two approaches would lead to different results, corresponding to the fact that extending the power tower with a 10 at the bottom is then not the same as extending it with a 10 at the top (but, of course, similar remarks apply if the whole power tower consists of copies of the same number, different from 10). If the height of the tower is large, the various representations for large numbers can be applied to the height itself. If the height is given only approximately, giving a value at the top does not make sense, so the double-arrow notation (e.g. 10 ↑ ↑ ( 7.21 × 10 8 ) {\displaystyle 10\uparrow \uparrow (7.21\times 10^{8})} ) can be used. If the value after the double arrow is a very large number itself, the above can recursively be applied to that value. Examples: Similarly to the above, if the exponent of ( 10 ↑ ) {\displaystyle (10\uparrow )} is not exactly given then giving a value at the right does not make sense, and instead of using the power notation of ( 10 ↑ ) {\displaystyle (10\uparrow )} , it is possible to add 1 {\displaystyle 1} to the exponent of ( 10 ↑ ↑ ) {\displaystyle (10\uparrow \uparrow )} , to obtain e.g. ( 10 ↑ ↑ ) 3 ( 2.8 × 10 12 ) {\displaystyle (10\uparrow \uparrow )^{3}(2.8\times 10^{12})} . If the exponent of ( 10 ↑ ↑ ) {\displaystyle (10\uparrow \uparrow )} is large, the various representations for large numbers can be applied to this exponent itself. If this exponent is not exactly given then, again, giving a value at the right does not make sense, and instead of using the power notation of ( 10 ↑ ↑ ) {\displaystyle (10\uparrow \uparrow )} it is possible use the triple arrow operator, e.g. 10 ↑ ↑ ↑ ( 7.3 × 10 6 ) {\displaystyle 10\uparrow \uparrow \uparrow (7.3\times 10^{6})} . If the right-hand argument of the triple arrow operator is large the above applies to it, obtaining e.g. 
10 ↑ ↑ ↑ ( 10 ↑ ↑ ) 2 ( 10 ↑ ) 497 ( 9.73 × 10 32 ) {\displaystyle 10\uparrow \uparrow \uparrow (10\uparrow \uparrow )^{2}(10\uparrow )^{497}(9.73\times 10^{32})} (between 10 ↑ ↑ ↑ 10 ↑ ↑ ↑ 4 {\displaystyle 10\uparrow \uparrow \uparrow 10\uparrow \uparrow \uparrow 4} and 10 ↑ ↑ ↑ 10 ↑ ↑ ↑ 5 {\displaystyle 10\uparrow \uparrow \uparrow 10\uparrow \uparrow \uparrow 5} ). This can be done recursively, so it is possible to have a power of the triple-arrow operator. Then it is possible to proceed with operators with higher numbers of arrows, written ↑ n {\displaystyle \uparrow ^{n}} . Compare this notation with the hyper operator and the Conway chained arrow notation : an advantage of the first is that, when considered as a function of b , there is a natural notation for powers of this function (just like when writing out the n arrows): ( a ↑ n ) k b {\displaystyle (a\uparrow ^{n})^{k}b} . Only in special cases is the long nested chain notation reduced; for b = 1 {\displaystyle b=1} , for instance, a ↑ n 1 = a {\displaystyle a\uparrow ^{n}1=a} . Since the b can also be very large, in general a number can instead be written with a sequence of powers ( 10 ↑ n ) k n {\displaystyle (10\uparrow ^{n})^{k_{n}}} with decreasing values of n (with exactly given integer exponents k n {\displaystyle {k_{n}}} ), with at the end a number in ordinary scientific notation. Whenever a k n {\displaystyle {k_{n}}} is too large to be given exactly, the value of k n + 1 {\displaystyle {k_{n+1}}} is increased by 1 and everything to the right of ( 10 ↑ n + 1 ) k n + 1 {\displaystyle (10\uparrow ^{n+1})^{k_{n+1}}} is rewritten. For describing numbers approximately, deviations from the decreasing order of values of n are not needed.
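The arrow operators used throughout this section follow a simple recursion, which can be implemented for (very) small arguments. A sketch, feasible only when the result fits in memory:

```python
def up(a, n, b):
    """Knuth's up-arrow a ↑^n b: a single arrow is ordinary exponentiation,
    and for n > 1, a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)), with a ↑^n 1 = a."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up(a, n - 1, up(a, n, b - 1))
```

For example, 2↑↑4 = 2^2^2^2 = 65536, and 2↑↑↑3 = 2↑↑(2↑↑2) = 2↑↑4 = 65536; anything much larger overflows any computer, which is the point of the notation.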
For example, 10 ↑ ( 10 ↑ ↑ ) 5 a = ( 10 ↑ ↑ ) 6 a {\displaystyle 10\uparrow (10\uparrow \uparrow )^{5}a=(10\uparrow \uparrow )^{6}a} , and 10 ↑ ( 10 ↑ ↑ ↑ 3 ) = 10 ↑ ↑ ( 10 ↑ ↑ 10 + 1 ) ≈ 10 ↑ ↑ ↑ 3 {\displaystyle 10\uparrow (10\uparrow \uparrow \uparrow 3)=10\uparrow \uparrow (10\uparrow \uparrow 10+1)\approx 10\uparrow \uparrow \uparrow 3} . Thus is obtained the somewhat counterintuitive result that a number x can be so large that, in a way, x and 10 x are "almost equal" (for arithmetic of large numbers see also below). If the superscript of the upward arrow is large, the various representations for large numbers can be applied to this superscript itself. If this superscript is not exactly given then there is no point in raising the operator to a particular power or to adjust the value on which it act, instead it is possible to simply use a standard value at the right, say 10, and the expression reduces to 10 ↑ n 10 = ( 10 → 10 → n ) {\displaystyle 10\uparrow ^{n}10=(10\to 10\to n)} with an approximate n . For such numbers the advantage of using the upward arrow notation no longer applies, so the chain notation can be used instead. The above can be applied recursively for this n , so the notation ↑ n {\displaystyle \uparrow ^{n}} is obtained in the superscript of the first arrow, etc., or a nested chain notation, e.g.: If the number of levels gets too large to be convenient, a notation is used where this number of levels is written down as a number (like using the superscript of the arrow instead of writing many arrows). Introducing a function f ( n ) = 10 ↑ n 10 {\displaystyle f(n)=10\uparrow ^{n}10} = (10 → 10 → n ), these levels become functional powers of f , allowing us to write a number in the form f m ( n ) {\displaystyle f^{m}(n)} where m is given exactly and n is an integer which may or may not be given exactly (for example: f 2 ( 3 × 10 5 ) {\displaystyle f^{2}(3\times 10^{5})} ). If n is large, any of the above can be used for expressing it. 
The "roundest" of these numbers are those of the form f m (1) = (10→10→ m →2). For example, ( 10 → 10 → 3 → 2 ) = 10 ↑ 10 ↑ 10 10 10 10 {\displaystyle (10\to 10\to 3\to 2)=10\uparrow ^{10\uparrow ^{10^{10}}10}10} . Compare the definition of Graham's number: it uses numbers 3 instead of 10 and has 64 arrow levels and the number 4 at the top; thus G < 3 → 3 → 65 → 2 < ( 10 → 10 → 65 → 2 ) = f 65 ( 1 ) {\displaystyle G<3\rightarrow 3\rightarrow 65\rightarrow 2<(10\to 10\to 65\to 2)=f^{65}(1)} , but also G < f 64 ( 4 ) < f 65 ( 1 ) {\displaystyle G<f^{64}(4)<f^{65}(1)} . If m in f m ( n ) {\displaystyle f^{m}(n)} is too large to give exactly, it is possible to use a fixed n , e.g. n = 1, and apply the above recursively to m , i.e., the number of levels of upward arrows is itself represented in the superscripted upward-arrow notation, etc. Using the functional power notation of f this gives multiple levels of f . Introducing a function g ( n ) = f n ( 1 ) {\displaystyle g(n)=f^{n}(1)} , these levels become functional powers of g , allowing us to write a number in the form g m ( n ) {\displaystyle g^{m}(n)} where m is given exactly and n is an integer which may or may not be given exactly. For example, (10→10→ m →3) = g m (1). If n is large any of the above can be used for expressing it. Similarly a function h , etc. can be introduced. If many such functions are required, they can be numbered instead of using a new letter every time, e.g. as a subscript, such that there are numbers of the form f k m ( n ) {\displaystyle f_{k}^{m}(n)} where k and m are given exactly and n is an integer which may or may not be given exactly. Using k =1 for the f above, k =2 for g , etc., obtains (10→10→ n → k ) = f k ( n ) = f k − 1 n ( 1 ) {\displaystyle f_{k}(n)=f_{k-1}^{n}(1)} . If n is large any of the above can be used to express it.
Thus is obtained a nesting of forms f k m k {\displaystyle {f_{k}}^{m_{k}}} where going inward the k decreases, and with as inner argument a sequence of powers ( 10 ↑ n ) p n {\displaystyle (10\uparrow ^{n})^{p_{n}}} with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation. When k is too large to be given exactly, the number concerned can be expressed as f n ( 10 ) {\displaystyle {f_{n}}(10)} =(10→10→10→ n ) with an approximate n . Note that the process of going from the sequence 10 n {\displaystyle 10^{n}} =(10→ n ) to the sequence 10 ↑ n 10 {\displaystyle 10\uparrow ^{n}10} =(10→10→ n ) is very similar to going from the latter to the sequence f n ( 10 ) {\displaystyle {f_{n}}(10)} =(10→10→10→ n ): it is the general process of adding an element 10 to the chain in the chain notation; this process can be repeated again (see also the previous section). Numbering the subsequent versions of this function, a number can be described using functions f q k m q k {\displaystyle {f_{qk}}^{m_{qk}}} , nested in lexicographical order with q the most significant number, but with decreasing order for q and for k ; as inner argument this yields a sequence of powers ( 10 ↑ n ) p n {\displaystyle (10\uparrow ^{n})^{p_{n}}} with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation. For a number too large to write down in the Conway chained arrow notation, its size can be described by the length of that chain, for example only using elements 10 in the chain; in other words, one could specify its position in the sequence 10, 10→10, 10→10→10, … If even the position in the sequence is a large number, the same techniques can be applied again.
Numbers expressible in decimal notation: Numbers expressible in scientific notation: Numbers expressible in (10 ↑) n k notation: Bigger numbers: Some notations for extremely large numbers: These notations are essentially functions of integer variables, which increase very rapidly with those integers. Ever-faster-increasing functions can easily be constructed recursively by applying these functions with large integers as argument. A function with a vertical asymptote is not helpful in defining a very large number, although the function increases very rapidly: one has to define an argument very close to the asymptote, i.e. use a very small number, and constructing that is equivalent to constructing a very large number, e.g. the reciprocal. The following illustrates the effect of a base different from 10, base 100. It also illustrates representations of numbers and the arithmetic. 100 12 = 10 24 {\displaystyle 100^{12}=10^{24}} , with base 10 the exponent is doubled. 100 100 12 = 10 2 ∗ 10 24 {\displaystyle 100^{100^{12}}=10^{2*10^{24}}} , ditto. 100 100 100 12 ≈ 10 10 2 ∗ 10 24 + 0.30103 {\displaystyle 100^{100^{100^{12}}}\approx 10^{10^{2*10^{24}+0.30103}}} , the highest exponent is very little more than doubled (increased by log 10 2). For a number 10 n {\displaystyle 10^{n}} , one unit change in n changes the result by a factor 10. In a number like 10 6.2 × 10 3 {\displaystyle 10^{\,\!6.2\times 10^{3}}} , with the 6.2 the result of proper rounding using significant figures, the true value of the exponent may be 50 less or 50 more. Hence the result may be a factor 10 50 {\displaystyle 10^{50}} too large or too small. This seems like extremely poor accuracy, but for such a large number it may be considered fair (a large error in a large number may be "relatively small" and therefore acceptable). 
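The base-100 examples above can be spot-checked with exact integer arithmetic at the first level and logarithms at the second (a small illustrative snippet; the third level is far beyond direct computation):

```python
import math

# 100^12 = (10^2)^12 = 10^24: switching to base 10 doubles the exponent.
assert 100 ** 12 == 10 ** 24

# At the next level the exponent doubles again, since
# log10(100^(100^12)) = 100^12 * log10(100) = 2 * 10^24.
assert (100 ** 12) * math.log10(100) == 2.0 * 10 ** 24
```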
In the case of an approximation of an extremely large number, the relative error may be large, yet there may still be a sense in which one wants to consider the numbers as "close in magnitude". For example, consider 10 10 {\displaystyle 10^{10}} and 10 9 {\displaystyle 10^{9}} . The relative error is 1 − 10 9 / 10 10 = 0.9 {\displaystyle 1-10^{9}/10^{10}=0.9} , a large relative error. However, one can also consider the relative error in the logarithms; in this case, the logarithms (to base 10) are 10 and 9, so the relative error in the logarithms is only 10%. The point is that exponential functions magnify relative errors greatly – if a and b have a small relative error, 10 a {\displaystyle 10^{a}} and 10 b {\displaystyle 10^{b}} have a larger relative error, and 10 10 a {\displaystyle 10^{10^{a}}} and 10 10 b {\displaystyle 10^{10^{b}}} have an even larger relative error. The question then becomes: on which level of iterated logarithms to compare two numbers? There is a sense in which one may want to consider 10 10 10 {\displaystyle 10^{10^{10}}} and 10 10 9 {\displaystyle 10^{10^{9}}} to be "close in magnitude". The relative error between these two numbers is large, and the relative error between their logarithms is still large; however, the relative error in their second-iterated logarithms is small: those are 10 and 9, differing by only 10%. Such comparisons of iterated logarithms are common, e.g., in analytic number theory . One solution to the problem of comparing large numbers is to define classes of numbers, such as the system devised by Robert Munafo, [ 13 ] which is based on different "levels" of perception of an average person. Class 0 – numbers between zero and six – is defined to contain numbers that are easily subitized , that is, numbers that show up very frequently in daily life and are almost instantly comparable. Class 1 – numbers between six and 1,000,000=10 6 – is defined to contain numbers whose decimal expressions are easily subitized, that is, numbers that are easily comparable not by cardinality , but "at a glance" given the decimal expansion. Each class after these is defined in terms of iterating this base-10 exponentiation, to simulate the effect of another "iteration" of human indistinguishability.
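The worked comparison above (10¹⁰ vs. 10⁹) is small enough to verify numerically. An illustrative snippet:

```python
import math

def rel_err(a, b):
    """Relative error between two positive numbers."""
    return abs(a - b) / max(a, b)

# 10^10 vs 10^9: a 90% relative error between the numbers themselves,
# but only 10% between their base-10 logarithms (10 vs 9).
a, b = 10.0 ** 10, 10.0 ** 9
assert abs(rel_err(a, b) - 0.9) < 1e-12
assert abs(rel_err(math.log10(a), math.log10(b)) - 0.1) < 1e-12
```

The second-iterated-log comparison in the text works the same way, one level up: the second logarithms of 10^10^10 and 10^10^9 are again 10 and 9.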
For example, class 5 is defined to include numbers between 10 10 10 10 6 and 10 10 10 10 10 6 , which are numbers where X becomes humanly indistinguishable from X 2 [ 14 ] (taking iterated logarithms of such X yields indistinguishability firstly between log( X ) and 2log( X ), secondly between log(log( X )) and 1+log(log( X )), and finally an extremely long decimal expansion whose length can't be subitized). There are some general rules relating to the usual arithmetic operations performed on very large numbers: Hence: Given a strictly increasing integer sequence/function f 0 ( n ) {\displaystyle f_{0}(n)} ( n ≥1), it is possible to produce a faster-growing sequence f 1 ( n ) = f 0 n ( n ) {\displaystyle f_{1}(n)=f_{0}^{n}(n)} (where the superscript n denotes the n th functional power ). This can be repeated any number of times by letting f k ( n ) = f k − 1 n ( n ) {\displaystyle f_{k}(n)=f_{k-1}^{n}(n)} , each sequence growing much faster than the one before it. Thus it is possible to define f ω ( n ) = f n ( n ) {\displaystyle f_{\omega }(n)=f_{n}(n)} , which grows much faster than any f k {\displaystyle f_{k}} for finite k (here ω is the first infinite ordinal number , representing the limit of all finite numbers k). This is the basis for the fast-growing hierarchy of functions, in which the indexing subscript is extended to ever-larger ordinals. For example, starting with f 0 ( n ) = n + 1, one obtains f 1 ( n ) = f 0 n ( n ) = 2 n {\displaystyle f_{1}(n)=f_{0}^{n}(n)=2n} and f 2 ( n ) = f 1 n ( n ) = 2 n n {\displaystyle f_{2}(n)=f_{1}^{n}(n)=2^{n}n} . The busy beaver function Σ is an example of a function which grows faster than any computable function. Its value for even relatively small input is huge. The values of Σ( n ) for n = 1, 2, 3, 4, 5 are 1, 4, 6, 13, 4098 [ 15 ] (sequence A028444 in the OEIS ). Σ(6) is not known but is at least 10↑↑15. Although all the numbers discussed above are very large, they are all still finite . Certain fields of mathematics define infinite and transfinite numbers .
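The finite levels of the fast-growing hierarchy can be sketched directly from the definition (an illustrative snippet; only tiny arguments are feasible, since even f₃ explodes immediately):

```python
def fgh(k, n):
    """Fast-growing hierarchy at finite levels: f_0(n) = n + 1 and
    f_k(n) = f_{k-1}^n(n), the n-th functional power of the previous
    level applied to n. From this, f_1(n) = 2n and f_2(n) = 2^n * n."""
    if k == 0:
        return n + 1
    x = n
    for _ in range(n):      # apply f_{k-1} exactly n times
        x = fgh(k - 1, x)
    return x
```

For example, f₂(3) applies the doubling function f₁ three times to 3, giving 24; the busy beaver function mentioned above outgrows every level of this hierarchy with a computable index.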
For example, aleph-null is the cardinality of the infinite set of natural numbers , and aleph-one is the next greatest cardinal number. c {\displaystyle {\mathfrak {c}}} is the cardinality of the reals . The proposition that c = ℵ 1 {\displaystyle {\mathfrak {c}}=\aleph _{1}} is known as the continuum hypothesis .
https://en.wikipedia.org/wiki/Large_numbers
In Ramsey theory , a set S of natural numbers is considered to be a large set if and only if Van der Waerden's theorem can be generalized to assert the existence of arithmetic progressions with common difference in S . That is, S is large if and only if every finite partition of the natural numbers has a cell containing arbitrarily long arithmetic progressions having common differences in S . Necessary conditions for largeness include: Two sufficient conditions are: The first sufficient condition implies that if S is a thick set , then S is large. Other facts about large sets include: If S {\displaystyle S} is large, then for any m {\displaystyle m} , S ∩ { x : x ≡ 0 ( mod m ) } {\displaystyle S\cap \{x:x\equiv 0{\pmod {m}}\}} is large. A set is k -large , for a natural number k > 0, when it meets the conditions for largeness when the restatement of van der Waerden's theorem is concerned only with k -colorings. Every set is either large or k -large for some maximal k . This follows from two important, albeit trivially true, facts: It is unknown whether there are 2-large sets that are not also large sets. Brown, Graham, and Landman (1999) conjecture that no such set exists.
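The defining property can be explored on small finite instances by brute force. This is a toy illustration, not from the cited literature: given a coloring of {1..N} and a candidate set S, it searches for a monochromatic arithmetic progression whose common difference lies in S.

```python
def monochromatic_ap(coloring, S, length):
    """Search a finite coloring (dict n -> color on {1..N}) for an
    arithmetic progression of the given length that is monochromatic
    and has common difference in S. Returns the progression or None."""
    N = max(coloring)
    for d in sorted(S):
        for a in range(1, N + 1 - (length - 1) * d):
            terms = [a + i * d for i in range(length)]
            if len({coloring[t] for t in terms}) == 1:
                return terms
    return None
```

For instance, under the parity 2-coloring of {1..9}, difference 2 yields the monochromatic progression 1, 3, 5, while difference 1 yields none, illustrating why sets avoiding certain differences can fail to be large.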
https://en.wikipedia.org/wiki/Large_set_(Ramsey_theory)
Large woody debris ( LWD ) consists of the logs, sticks, branches, and other wood that fall into streams and rivers . This debris can influence the flow and the shape of the stream channel. Large woody debris, grains, and the shape of the bed of the stream are the three main providers of flow resistance, and are thus a major influence on the shape of the stream channel. [ 1 ] Some stream channels have less LWD than they would naturally because of removal by watershed managers for flood control and aesthetic reasons. [ 2 ] The study of woody debris is important for its forestry management implications. Plantation thinning can reduce the potential for recruitment of LWD into proximal streams. The presence of large woody debris is important in the formation of pools which serve as salmon habitat in the Pacific Northwest. [ 3 ] Entrainment of the large woody debris in a stream can also cause erosion and scouring around and under the LWD. The amount of scouring and erosion is determined by the ratio of the diameter of the piece to the depth of the stream, and by the embedding and orientation of the piece. [ citation needed ] Large woody debris slows the flow through a bend in the stream, while accelerating flow in the constricted area downstream of the obstruction. [ 4 ]
https://en.wikipedia.org/wiki/Large_woody_debris
The history of life on Earth seems to show a clear trend; for example, it seems intuitive that there is a trend towards increasing complexity in living organisms. More recently evolved organisms, such as mammals, appear to be much more complex than organisms such as bacteria, which have existed for a much longer period of time. However, there are theoretical and empirical problems with this claim. From a theoretical perspective, it appears that there is no reason to expect evolution to result in any largest-scale trends, although small-scale trends, limited in time and space, are expected (Gould, 1997). From an empirical perspective, it is difficult to measure complexity and, when it has been measured, the evidence does not support a largest-scale trend (McShea, 1996). Many of the founding figures of evolution supported the idea of evolutionary progress, which has since fallen from favour, though the work of Francisco J. Ayala and Michael Ruse suggests it is still influential. McShea (1998) discusses eight features of organisms that might indicate largest-scale trends in evolution: entropy, energy intensiveness, evolutionary versatility, developmental depth, structural depth, adaptedness, size, and complexity. He calls these "live hypotheses", meaning that trends in these features are currently being considered by evolutionary biologists. McShea observes that the most popular hypothesis, among scientists, is that there is a largest-scale trend towards increasing complexity. Evolutionary theorists agree that there are local trends in evolution, such as increasing brain size in hominids, but these directional changes do not persist indefinitely, and trends in opposite directions also occur (Gould, 1997). Evolution causes organisms to adapt to their local environment; when the environment changes, the direction of the trend may change.
The question of whether there is evolutionary progress is better formulated as the question of whether there are any largest-scale trends in evolution (McShea, 1998). That is, is there a consistent directional change throughout the history of life on Earth? Organisms adapt to their local environment. As long as the local environment is stable, we can expect to observe small-scale trends, as organisms become increasingly adapted to the local environment. Gould (1997) argues that there are no global (largest-scale) trends in evolution, because traits that are advantageous for some local environment are detrimental for some other local environment. Although it is difficult to measure complexity, it seems uncontroversial that mammals are more complex than bacteria. Gould (1997) agrees, but claims that this apparent largest-scale trend is a statistical artifact . Bacteria represent a minimum level of complexity for life on Earth today. Gould (1997) argues that there is no selective pressure for higher levels of complexity, but there is selective pressure against complexity below the level of bacteria. This minimum required level of complexity, combined with random mutation , implies that the average level of complexity of life must increase over time. Gould (1997) uses the analogy of a random walk that begins near a wall. Although the walk is random, the walker cannot pass through the wall, so we should expect the walker to move increasingly further from the wall as time passes. This does not imply that the walker is driven away from the wall. The wall is analogous to the complexity level of bacteria. We should expect evolution to wander increasingly further from this level of complexity, but it does not imply that evolution is driven towards increasing complexity. In response to Gould's (1997) critique, Turney (2000) presents a computational model in which there is a largest-scale trend towards increasing evolutionary versatility. This trend requires continual change. 
Although this model shows that largest-scale trends are compatible with evolutionary theory, the model has not yet been empirically confirmed. Evolutionary theory might not predict largest-scale trends, but there may be such trends nonetheless. McShea (1996) looks at the empirical evidence for a trend towards increasing complexity in Metazoan fossils. He concludes that the evidence is not decisive and further investigation is required.
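Gould's random-walk analogy can be made concrete with a short simulation. This is an illustrative sketch, not drawn from the sources cited (the function name and parameters are invented): an unbiased walk with a reflecting wall at zero drifts away from the wall on average even though no individual step is biased.

```python
import random

def mean_distance_from_wall(steps, trials=2000, seed=1):
    # Unbiased +1/-1 random walk on the non-negative integers with a
    # reflecting wall at 0, standing in for the minimum complexity of
    # bacteria; returns the mean final distance from the wall.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = 0
        for _ in range(steps):
            x = max(0, x + rng.choice((-1, 1)))  # cannot pass the wall
        total += x
    return total / trials

# The mean distance grows roughly like the square root of the number of
# steps, although each step is equally likely to move toward the wall
# as away from it.
print(mean_distance_from_wall(25), mean_distance_from_wall(400))
```

The average moves away from the wall without any "drive" towards complexity, which is Gould's point: the apparent trend is a by-product of the boundary, not of a directional force.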
https://en.wikipedia.org/wiki/Largest-scale_trends_in_evolution
There have been many extremely large explosions, accidental and intentional, caused by modern high explosives, boiling liquid expanding vapour explosions (BLEVEs), older explosives such as gunpowder, volatile petroleum-based fuels such as petrol, and other chemical reactions. This list contains the largest known examples, sorted by date. An unambiguous ranking in order of severity is not possible; a 1994 study of 130 large explosions by the historian Jay White suggested that they need to be ranked by an overall effect combining power, quantity, radius, loss of life and property destruction, but concluded that such rankings are difficult to assess. [ 1 ] The weight of an explosive does not correlate directly with the energy or destructive effect of an explosion, as these can depend upon many other factors, such as containment, proximity, purity, preheating, and external oxygenation (in the case of thermobaric weapons, gas leaks and BLEVEs). For this article, explosion means "the sudden conversion of potential energy (chemical or mechanical) into kinetic energy", [ 2 ] as defined by the US National Fire Protection Association, or the common dictionary meaning, "a violent and destructive shattering or blowing apart of something". [ 3 ] No distinction is made as to whether it is a deflagration with subsonic propagation or a detonation with supersonic propagation. The resulting explosions can still be ranked by their effects, however, using TNT equivalence. On 4 April 1585, during the Spanish siege of Antwerp, a fortified bridge named "Puente Farnesio" (after the commander of the Spanish forces, Alessandro Farnese) had been built by the Spanish on the River Scheldt. The Dutch launched four large hellburners (explosive fire ships filled with gunpowder and rocks) to destroy the bridge and thereby isolate the city from reinforcement. Three of the hellburners failed to reach the target, but one containing four tons of explosive [ 4 ] struck the bridge.
It did not explode immediately, which gave time for some Spaniards, believing the ship to be a conventional fire ship, to board it to attempt to extinguish it. There was then a devastating blast that killed 800 Spaniards on the bridge, [ 5 ] throwing bodies, rocks and pieces of metal a distance of several kilometres. A small tsunami arose in the river, the ground shook for kilometres around and a large, dark cloud covered the area. The blast was felt as far as 35 kilometres (22 mi) away in Ghent, where windows vibrated. About nine o'clock in the morning of 30 May 1626, an explosion of combustibles at the Wanggongchang Armory in Ming -era Beijing , China, destroyed almost everything within an area of two square kilometres (0.77 sq mi) surrounding the site. The estimated death toll was 20,000. About half of Beijing, from Xuanwumen Gate in the South to the modern West Chang'an Boulevard in the North, was affected. Guard units stationed as far away as Tongzhou , nearly 40 kilometres (25 mi) away, reported hearing the blast and feeling the earth tremble. [ 6 ] On 16 February 1646, 80 barrels (5.72 tons) of gunpowder were accidentally ignited by a stray spark during the Battle of Torrington in the English Civil War , destroying the church in which the magazine was located and killing several Royalist guards and a large number of Parliamentarian prisoners who were being kept there. The explosion effectively ended the battle, bringing victory to the Parliamentarians. It almost killed the Parliamentarian commander, Sir Thomas Fairfax . Great damage was caused. [ 7 ] [ 8 ] About 30 tonnes of gunpowder exploded on 12 October 1654, destroying much of the city of Delft in the Netherlands. More than a hundred people were killed and thousands were injured. [ 9 ] [ 10 ] On 22 July 1686, 80 tons of gunpowder exploded in the castle of Buda, killing 1,500 Ottoman defenders and destroying a large portion of the defences. 
According to contemporary accounts, the blast wave also pushed the Danube out of its riverbed, destroying boats and causing flooding on the left (Pest) bank. The cause of the explosion was most likely a shot fired by a famed Italian artillery officer and Franciscan friar, "Fiery" Gabriel, which penetrated the underground ammunition dump. On 26 September 1687, the Parthenon, until then intact, was partially ruined when an Ottoman ammunition bunker inside it was struck by a Venetian mortar round; 300 Turkish soldiers were killed in the explosion. On 18 August 1769, the Bastion of San Nazaro in Brescia, Italy was struck by lightning. The resulting fire ignited 90 tonnes of stored gunpowder, and the subsequent explosion destroyed one-sixth of the city and killed 3,000 people. On 12 January 1807, a ship carrying hundreds of barrels of black powder exploded in the city of Leiden in the Kingdom of Holland. The disaster killed 151 people and destroyed more than 200 buildings in the city. On 26 August 1810, in Almeida, Portugal, during the Peninsular War phase of the Napoleonic Wars, French Grande Armée forces commanded by Marshal André Masséna besieged the garrison, which was commanded by British Brigadier General William Cox. A shell made a chance hit on the medieval castle within the star fortress, which was being used as the powder magazine. It ignited 4,000 prepared charges, which in turn ignited 68 tonnes of black powder and 1,000,000 musket cartridges. The ensuing explosions killed 600 defenders and wounded 300. The medieval castle was destroyed and sections of the defences were damaged. Unable to reply to the French cannonade without gunpowder, Cox was forced to capitulate the next day with the survivors of the blast and 100 cannons. The French losses during the operation were 58 killed and 320 wounded. On 27 April 1813, the magazine of Fort York in York, Ontario (now Toronto) was fired by retreating British troops during an American invasion.
13.6 tonnes of gunpowder and thirty thousand cartridges exploded sending debris, cannonballs and musketballs over the American troops. Thirty-eight soldiers, including General Zebulon Pike , the American commander, were killed and 222 were wounded. On 27 July 1816, a fort built in the War of 1812 by the British Army at Prospect Bluff in Spanish West Florida , and occupied by about 330 Maroons , Seminole , and Choctaw , was attacked by Andrew Jackson 's navy as part of the First Seminole War . There was an exchange of cannon fire; the first red-hot cannonball fired by the navy entered the fort's powder magazine, which exploded. [ 11 ] The explosion, heard more than 100 miles (160 km) away, [ 12 ] destroyed the entire post which was supplied initially with "three thousand stand of arms, from five to six hundred barrels of powders and a great quantity of fixed ammunition, shot[s], shells". [ 13 ] About 270 men, women and children lay dead. [ 14 ] General Edmund P. Gaines later said that the "explosion was awful and the scene horrible beyond description". Reports mention no American military casualties. [ 11 ] On 30 December 1848, in Multan during the Second Anglo-Sikh war , a mortar shell hit 180 tonnes of gunpowder stored in a mosque, causing an explosion and many casualties. [ 15 ] The 6 October 1854 great fire of Newcastle and Gateshead , UK, caused the explosion of combustibles in a bond warehouse on the quayside, which rained masonry and flaming timbers across wide areas of both cities, and left a crater with a depth of 40 feet (12 m) and 50 feet (15 m) in diameter. The explosion was heard at locations as far as 40 miles (64 km) away. 53 people died, and 400 to 500 were injured. 
[ 16 ] On 6 November 1856, lightning struck 3,000 to 6,000 hundredweight (about 150–300 tonnes) of gunpowder stored by the Ottoman Empire in the bell tower of the Agios Ioannis church near the Palace of the Grand Master of the Knights of Rhodes in Rhodes, causing a blast that destroyed large parts of the city and killed 4,000 people. [ 17 ] [ 18 ] During the US Civil War, at 4:44 a.m. on 30 July 1864, the Union Army of the Potomac, besieging the Confederate Army of Northern Virginia at Petersburg, Virginia, detonated a mine containing 320 kegs of gunpowder, totalling 8,000 pounds (3,600 kg), under the Confederate entrenchments. The explosion killed 278 Confederate soldiers of the 18th and 22nd South Carolina regiments [ 19 ] and created a crater 170 feet (52 m) long, 100 to 120 feet (30 to 37 m) wide, and at least 30 feet (9 m) deep. After the explosion, attacking Union forces charged into the crater instead of around its rim. Trapped in the crater of their own making, the Union forces were easy targets for the Confederate soldiers once they recovered from the shock of the explosion. Union forces suffered 3,798 casualties (killed, wounded, or captured) against 1,491 total losses for the Confederates. The Union forces failed to break through the Confederate defences despite the success of the mine. The Battle of the Crater (as it was later named) was thus a victory for the Confederacy; however, the siege continued. In 1865, during the US Civil War, after the Union Army captured Fort Fisher, North Carolina, the accidental explosion of the fort magazine resulted in an estimated 200 deaths. On 25 May 1865, in Mobile, Alabama, in the United States, an ordnance depot (magazine) exploded, killing 300 people. This event occurred six weeks after the end of the American Civil War, during the occupation of the city by victorious Federal troops. On 10 October 1885 in New York City, the U.S.
Army Corps of Engineers detonated 300,000 pounds (150 t) of explosives on Flood Rock, annihilating the island, in order to clear the Hell Gate tidal strait for the benefit of East River shipping traffic. [ 21 ] The explosion sent a geyser of water 250 ft (76 m) in the air; [ 22 ] the blast was felt as far away as Princeton, New Jersey . [ 21 ] The explosion has been described as "the largest planned explosion before testing began for the atomic bomb". [ 22 ] Rubble from the detonation was used in 1890 to fill the gap between Great Mill Rock and Little Mill Rock, merging the two into a single island, Mill Rock . [ 21 ] On 3 November 1893, in Santander, Spain , the steamship Cabo Machichaco caught fire when it was docked. The ship was laden with 51 tons of dynamite and 12 tons of sulphuric acid from Galdácano, Basque Country , but authorities were unaware of this. Municipal firefighters and crew from other vessels boarded Cabo Machichaco to help fight the fire, while local dignitaries and a large crowd of people watched from the shore. At 4:45 pm an enormous explosion destroyed the ship and nearby buildings and generated a huge wave that washed over the seafront. Pieces of iron and débris were thrown as far as Peñacastillo, 8 km (5 mi) away, where a person was killed by the falling débris. 590 people were killed, and between 500 and 2,000 were injured. [ 23 ] [ 24 ] On 19 February 1896, an explosives train at Braamfontein station in Johannesburg , loaded with between 56 and 60 tons of blasting gelatine for the gold mines of the Witwatersrand and having been standing for three and a half days in searing heat, was struck by a shunting train. The load exploded, leaving a crater in the Braamfontein rail yard 60 metres (200 ft) long, 50 metres (160 ft) wide and 8 metres (26 ft) deep. The explosion was heard up to 200 kilometres (120 mi) away. 75 people were killed, and more than 200 injured. Surrounding suburbs were destroyed, and roughly 3,000 people lost their homes. 
Almost every window in Johannesburg was broken. [ 25 ] On 15 February 1898, more than 5 tons of gunpowder exploded aboard the USS Maine in Havana Harbour, Cuba, killing 266 on board. Spanish investigations found that the blast was likely started by spontaneous combustion of coal in the adjacent bunker or by accidental ignition of volatile gases. The 1898 US Navy investigation blamed an assumed mine, which caused public outrage in the United States and helped to precipitate the Spanish–American War. [ 26 ] On 15 October 1907, approximately 40,000 kegs of combustible powder exploded in Fontanet, Indiana, killing between 50 and 80 people and destroying the town. The sound of the explosion was heard over 200 miles (320 km) away, with damage occurring to buildings 25 miles (40 km) away. [ 27 ] On 9 March 1911, the village of Pleasant Prairie and the neighbouring town of Bristol, 4 miles (6.4 km) away, were levelled by the explosion of five magazines holding 300 tons of dynamite, 105,000 kegs of black blasting powder, and five rail wagons filled with dynamite housed at a 190-acre (77-hectare) DuPont blasting powder plant. A crater 100 ft (30 m) deep was left where the plant had stood. Several hundred people were injured. The plant was closed at the time, so deaths were few: three plant employees were killed (E. S. "Old Man" Thompson, Clarence Brady and Joseph Flynt), along with Alice Finch, who died of a heart attack after the blast rattled her home in Elgin, Illinois, forty miles (64 km) away. Most buildings in a 5-mile (8.0 km) radius were flattened or rendered uninhabitable. The explosion was felt within a radius of 130 miles (210 km) and was widely thought to be an earthquake. Residents in nearby Lake County, Illinois saw the fireball and, remembering the Peshtigo fire, fled their houses and jumped into Lake Michigan. Police in Chicago scoured the streets, looking for the site of a bombing.
Windows were shattered in Madison, Wisconsin , 85 miles (137 km) away, and the explosion was heard as far as 500 miles (800 km) away. A DuPont spokesman was reported as being perplexed by the coverage of the blast, quoted as saying "explosions occur every day in steel mills, flouring mills and grain elevators with hardly a line in the paper". [ 28 ] [ 29 ] [ 30 ] Alum Chine was a Welsh freighter (out of Cardiff ) carrying 343 tons of dynamite for use during construction of the Panama Canal . It was anchored off Hawkins Point , near the entrance to Baltimore Harbor in Baltimore, Maryland . The ship exploded on 7 March 1913, killing more than 30 people, injuring about 60, and destroying a tug and two barges. Most accounts describe two distinct explosions. [ 31 ] On 27 May 1915, the minelayer HMS Princess Irene suffered a blast. Wreckage was thrown up to 20 miles (30 km), a collier boat one-half mile (800 m) away had its crane blown off and a crew member killed by a fragment weighing 70 pounds (30 kg). A child ashore was killed by another fragment. A case of butter was found six miles (10 km) away. A total of 352 people were killed but one crew member survived, with severe burns. The ship had been loaded with 300 naval mines containing more than 150 tons of high explosive. An inquiry blamed faulty priming, possibly by untrained personnel. On 2 April 1916, an explosion blew through the gunpowder mill at Uplees , near Faversham , Kent, when 200 tons of TNT ignited. 105 people died in the explosion. The munitions factory was next to the Thames estuary , and the explosion was heard across the estuary as far away as Norwich , Great Yarmouth , and Southend-on-Sea , where domestic windows were blown out and two large plate-glass shop windows shattered. On 31 May 1916, three British Grand Fleet battlecruisers were destroyed by cordite deflagrations initiated by armour-piercing shells fired by the Imperial German Navy 's High Seas Fleet . 
At 16:02 HMS Indefatigable was cut in two by deflagration of the forward magazine and sank immediately with all but two of its crew of 1,019. German eyewitness reports and the testimony of modern divers suggest all its magazines exploded. The wreck is now a debris field. At 16:25 HMS Queen Mary was cut in two by detonation of the forward magazine and sank with all but 21 of its crew of 1,283. As the rear section capsized it also exploded. At 18:30 HMS Invincible was cut in two by detonation of the midships magazine and sank in 90 seconds. Six of its crew survived; 1,026 men died, including Rear Admiral Hood . An armoured cruiser, HMS Defence , was a fourth ship to suffer an explosive deflagration at Jutland with at least 893 men killed. The rear magazine was seen to detonate followed by more explosions as the cordite flash travelled along an ammunition passage beneath its broadside guns. Eyewitness reports suggest that HMS Black Prince may also have suffered an explosion as it was lost during the night action with 857 dead, all hands. British reports say it was seen to explode. German reports speak of the ship being overwhelmed at close range and sinking. Finally, during the confused night actions in the early hours of 1 June, the German pre-dreadnought SMS Pommern was hit by one, or possibly two, torpedoes from the British destroyer HMS Onslaught , which detonated one of Pommern 's 17-centimetre (6.7 in) gun magazines . The resulting explosion broke the ship in half and killed the entire crew of 839. On the morning of 1 July 1916, a series of 19 mines of varying sizes was blown to start the Battle of the Somme . The explosions constituted what was then the loudest human-made sound in history, and could be heard in London . The largest single charge was the Lochnagar mine south of La Boisselle with 60,000 lb (27 t) of ammonal explosive. The mine created a crater 300 ft (90 m) across and 90 ft (30 m) deep, with a rim 15 ft (5 m) high. 
The crater is known as Lochnagar Crater after the trench from where the main tunnel was started. On 30 July 1916, sabotage by German agents caused 1,000 short tons (910 t) of explosives bound for Europe, along with another 50 short tons (45 t) on Johnson Barge No. 17 , to explode in Jersey City, New Jersey , a major dock in New York Harbor . There were few deaths, but about 100 injuries. Damage included buildings on Ellis Island , parts of the Statue of Liberty , and much of Jersey City. [ citation needed ] On 19 January 1917, parts of Silvertown in East London were devastated by a TNT explosion at the Brunner-Mond munitions factory. The explosion killed 73 people and injured hundreds. The blast was felt across London and Essex and was heard more than 100 mi (160 km) away, with the resulting fires visible for 30 mi (50 km). On 10 February 1917, a chain reaction in an ammunition plant Explosivstoffwerk Thorn in Quickborn-Heide (northern Germany) killed at least 115 people (some sources say more than 200 people), mostly young female workers. [ 32 ] [ 33 ] Škoda Works in Bolevec, Pilsen (modern Plzeň ) was the biggest ammunition plant in Austria-Hungary . A series of explosions on 25 May 1917 killed 300 workers. [ 34 ] This event inspired Karel Čapek to write the novel Krakatit (1922). On 7 June 1917, a series of large British mines, containing a total of more than 455 tons of ammonal explosive, was detonated beneath German lines on the Messines - Wytschaete ridge. The explosions created 19 large craters, killed about 10,000 German soldiers, and were heard as far away as London and Dublin. Determining the power of explosions is difficult, but this was probably the largest planned explosion in history until the 1945 Trinity atomic weapon test, and the largest non-nuclear planned explosion until the 1947 British Heligoland detonation (below). The Messines mines detonation killed more people than any other non-nuclear deliberate explosion in history. 
On 6 December 1917, SS Imo and SS Mont-Blanc collided in the harbour of Halifax, Nova Scotia . Mont-Blanc carried 2,653 tonnes of various explosives, mostly picric acid . After the collision the ship caught fire, drifted into town, and exploded. The explosion killed 1,950 people and destroyed much of Halifax. An evaluation of the explosion's force puts it at 2.9 kilotons of TNT (12 TJ ). [ 35 ] Halifax historian Jay White in 1994 concluded: "Halifax Harbour remains unchallenged in overall magnitude as long as five criteria are considered together: number of casualties, force of blast, radius of devastation, quantity of explosive material, and total value of property destroyed." [ 36 ] On 1 July 1918, the National Shell Filling Factory No 6 ( Chilwell , near Nottingham , England) was partly destroyed when 8 tons of TNT exploded in the dry mix part of the factory. Approximately 140 workers – mainly young women, known as the 'Chilwell Canaries' because contact with picric acid turned their skin yellow – were killed, though the true number has never been established. An unknown number of people were injured, though estimates are about 250. Because of the sensitivity of the subject, reports of the explosion were censored until after the Armistice . The cause of the explosion was never officially established, though present-day authorities on explosives consider it was due to a combination of factors: an exceptionally hot day, high production demands and lax safety precautions. On 2 July 1918, a munitions factory near Syracuse, New York , exploded after a mixing motor in the main TNT building overheated. The fire rapidly spread through the wooden structure of the main factory. Approximately 1–3 tons of TNT were involved in the blast, which levelled the structure and killed 50 workers (conflicting reports mention 52 deaths). On 4 October 1918, an ammunition plant – operated by the T. A. 
Gillespie Company and located in New Jersey in the Morgan area of Sayreville in Middlesex County – exploded and caused a fire. The subsequent series of explosions continued for three days. The facility, said to be one of the largest in the world at the time, was destroyed, along with more than 300 buildings, forcing the reconstruction of South Amboy and Sayreville. More than 100 people died due to this accident. [ 37 ] During a three-day period, a total of 12,000,000 pounds (5,400 t) of explosives was destroyed. [ 38 ] On 21 September 1921, a BASF silo filled with 4,500 tonnes of fertilizer exploded, killing about 560, largely destroying Oppau , Germany, and causing damage more than 30 km (19 mi) away. On 1 March 1924, an explosion destroyed a building in Nixon, New Jersey , used for processing ammonium nitrate . The explosion caused fires in surrounding buildings in the Nixon Nitration Works that contained other highly flammable materials. The disaster killed 20 people and destroyed 40 buildings. On 17 July 1932, a train carrying 320 to 330 tons of dynamite from the De Beers factory at Somerset West to the Witwatersrand exploded and flattened the small town of Leeudoringstad in South Africa. Five people were killed and 11 injured in the sparsely-populated area. On 10 February 1933, a gas storage in Neunkirchen , Territory of the Saar Basin , detonated during maintenance work. The detonation could be heard at a distance of 124 miles (200 km). The death toll was 68, and 160 were injured. On 18 March 1937, a natural gas leak caused an explosion, destroying the London School of New London , Texas . The disaster killed more than 295 students and teachers, making it the deadliest school disaster in American history. Letters of sympathy were sent from around the world, including a telegram from Adolf Hitler . On 1 March 1939, Warehouse No. 
15 of the Imperial Japanese Army 's Kinya ammunition dump in Hirakata, Osaka Prefecture, Japan , suffered a catastrophic explosion, the sound of which could be heard throughout the Keihan area . Additional explosions followed during the next few days as the depot burned, for a total of 29 explosions by 3 March. Japanese officials reported that 94 people died, 604 were injured, and 821 houses were damaged, with 4,425 households in all suffering the effects of the explosions. [ 39 ] [ 40 ] On 13 September 1939, the French cruiser Pluton exploded and sank while offloading naval mines in Casablanca , in French Morocco . The explosion killed 186 men, destroyed three nearby armed trawlers, and damaged nine more. On 12 September 1940, nearly 300,000 pounds (140 t) of gunpowder exploded at the Hercules Company in the Kenvil area of Roxbury, New Jersey . At least 51 people were killed, more than 100 injured, and twenty buildings flattened. It remains unknown if this was an industrial accident, or sabotage by pro- IRA or pro- Nazi factions. On 6 April 1941, SS Clan Fraser was moored in Piraeus Harbour, Greece. Three German Luftwaffe bombs struck the ship, igniting 350 tonnes of TNT; a barge nearby carried an additional 100 tonnes which also detonated. Royal Navy warships HMS Ajax and HMS Calcutta attempted to tow the stricken vessel out of harbour and succeeded in getting beyond the breakwater, after the tow line had broken three times. It then exploded, levelling large areas of the port. This was witnessed by post-war author Roald Dahl , who was piloting a Hawker Hurricane fighter plane for the Royal Air Force . On 24 May 1941, HMS Hood sank in three minutes after the stern magazine detonated during the Battle of the Denmark Strait . The wreck has been located in three pieces, suggesting additional detonation of a forward magazine. There were only three survivors from the crew of 1,418. 
On 25 November 1941, HMS Barham was sunk by the German submarine U-331 ; 862 crew were lost. The main magazine's explosion was filmed by a Pathé News cameraman aboard nearby HMS Valiant . During World War II, German invading forces in Serbia used Smederevo Fortress for ammunition storage. On 5 June 1941 it exploded, [ 41 ] blasting through the entirety of Smederevo and reaching settlements as far as 10 km (6.2 mi) away. Much of the southern wall of the fortress was destroyed, the nearby railway station, packed with people, was blown away, and most of the buildings in the city were turned into debris. About 2,500 people died in the explosion, and half of the inhabitants were injured [ 42 ] (approximately 5,500). On Wednesday, 29 April 1942, an explosion destroyed the entire Produits Chimiques de Tessenderloo factory and much of the surrounding town of Tessenderlo in German-occupied Belgium . A nearby school was largely destroyed, with 60 schoolchildren losing their lives. The blast hurled steel beams as long as 15 metres into fields hundreds of metres away and left a crater 70 metres wide and 23 metres deep. The explosion occurred when factory workers tried to separate big chunks of newly arrived ammonium nitrate (200 t) using dynamite, after failing to do so using regular tools. In total, 189 people died and more than 900 were injured in the incident. [ 43 ] [ 44 ] On the night of 10 June 1942, the German submarine U-68 torpedoed the 8,600-ton British freighter Surrey in the Caribbean Sea . Five thousand tons of dynamite in the cargo detonated after the ship sank. The shock wave lifted U-68 out of the water as if it had suffered a torpedo hit, and both diesel engines and the gyrocompass were disabled. [ 45 ] On the night of 3 November 1942, torpedoes detonated the ammunition cargo of the 6,690-ton British freighter Hatimura . Both the freighter and attacking submarine U-132 were destroyed by the explosion. 
[ 46 ] On 28 March 1943, in the port of Naples , a fire began on Caterina Costa , an 8,060-ton motor ship carrying arms and supplies (1,000 tons of gas, 900 tons of explosives, tanks and others); the fire became uncontrollable, causing a devastating explosion. A large number of buildings around were destroyed or badly damaged. Some ships nearby caught fire and sank, and hot parts of the ship and tanks were thrown great distances. More than 600 people died and more than 3,000 were wounded. On 14 April 1944, SS Fort Stikine , carrying about 1,400 long tons (1,400 t) of explosives (among other goods), caught fire and exploded, killing about 800 people. Debris fell across the city landing miles away from the site of the explosion. The bales of cotton aboard the boat caught fire and fell from the sky causing fires in other parts of the city. The explosion was strong enough to be detected on seismographs in Simla, a city more than 1700 km from the site of the explosion. On 20 April 1944, the Dutch steam trawler ST Voorbode , loaded with 124,000 kilograms (124 t) of explosives, caught fire and exploded in Norway at the quay in the centre of Bergen . The air pressure from the explosion and the tsunami that resulted flattened whole neighbourhoods near the harbour. Fires broke out in the aftermath, leaving 5,000 people homeless. 160 people were killed, and 5,000 wounded. On 20 April 1944, the Liberty ship SS Paul Hamilton was attacked 30 miles (48 km) off Cape Bengut near Algiers by Luftwaffe bombers. The ship was destroyed within 30 seconds killing all 580 personnel aboard when the cargo of bombs and explosives detonated. On 21 May 1944, an ammunition handling accident in Hawaii's Pearl Harbor destroyed nine amphibious vessels : six LSTs and three LCTs . Four more LSTs, ten tugs, and a net tender were damaged. Eleven buildings were destroyed ashore and nine more damaged. Between 132 and nearly 400 military personnel were killed. 
On 4 July 1944, a barge loaded with ammunition exploded in the harbour of Aarhus, Denmark, killing 39 people and injuring another 250. On 17 July 1944, in Port Chicago, California , SS E. A. Bryan exploded while loading ammunition bound for the Pacific region, with an estimated 4,606 short tons (4,178 t) of high explosive (HE), incendiary bombs, depth charges, and other ammunition. Another 429 short tons (389 t) waiting on nearby rail cars also exploded. The total explosive content is described as between 1,600 [ 47 ] and 2,136 [ 48 ] tons of TNT. 320 were killed instantly, another 390 wounded. Most of the killed and wounded were African American enlisted men. After the explosion, 258 fellow sailors refused to load ordnance; 50 of these, called the "Port Chicago 50", were convicted of mutiny even though they were willing to obey any order that did not involve loading ordnance under unsafe conditions. [ 49 ] On 20 October 1944, a liquefied natural gas storage tank in Cleveland , Ohio, split and leaked its contents, which spread, caught fire, and exploded. A half hour later, another tank exploded as well. The explosions destroyed 1 square mile (2.6 km 2 ), killed 130, and left 600 homeless. On 10 November 1944, USS Mount Hood exploded in Seeadler Harbor at Manus Island in Australian New Guinea , with an estimated 3,800 tons of ordnance material on board. Mushrooming smoke rose to 7,000 feet (2,100 m), obscuring the surrounding area for a radius of approximately 500 yards (460 m). Mount Hood ' s former position was revealed by a trench in the ocean floor 1,000 feet (300 m) long, 200 feet (61 m) wide, and 30 to 40 feet (9.1 to 12.2 m) deep. The largest remaining piece of the hull was found in the trench and measured 16 by 10 feet (4.9 by 3.0 m). All 296 men aboard the ship were killed. USS Mindanao was 350 yards (320 m) away and suffered extensive damage, with 23 crew killed, and 174 injured. Several other nearby ships were also damaged or destroyed. 
Altogether, 372 were killed and 371 injured in the blast. On 27 November 1944, the RAF Ammunition Depot at Fauld, Staffordshire , became the site of the largest explosion in the UK, when 3,700 tonnes of bombs stored in underground bunkers covering 17,000 m 2 (180,000 sq ft) exploded en masse. The explosion was caused by bombs being taken out of store, primed for use, and then returned to store with the detonators still installed when unused. The crater was 40 [ 50 ] metres (130 ft) deep and covered 5 hectares. The death toll was approximately 78, including RAF personnel, six Italian prisoners of war, civilian employees, and local people. In the similar Port Chicago disaster (above), about half the weight of bombs was high explosive. If the same is true of the Fauld explosion, it would have been equivalent to about 2 kilotons of TNT. On 19 December 1944, the Japanese aircraft carrier Unryu exploded when torpedoes fired by the US submarine USS Redfish detonated the forward magazine. Only 145 men were rescued while 1,238 officers, crewmen and passengers lost their lives. [ 51 ] On 28 December 1944, while transporting ammunition to Mindoro , Philippines , the Liberty ship SS John Burke was hit by a Japanese kamikaze aircraft , and disintegrated in a tremendous explosion with the loss of all crew. [ 52 ] On 7 April 1945, after six hours of battle , Japanese battleship Yamato 's magazine exploded as it sank, resulting in a mushroom cloud that rose six kilometres (3.7 mi) above the wreck and could be seen from Kyushu , 160 kilometres (99 mi) away. 3,055 crewmen were killed. On 7 May 1945, 100 tons of TNT were stacked on a wooden tower and exploded to test the instrumentation prior to the test of the first atomic bomb. On 12 November 1945 in Japan, when Allied occupation troops were trying to dispose of 530 tons of ammunition, there was an explosion in a tunnel in Soeda, Fukuoka Prefecture , Kyushu Island .
According to a confirmed official report, 147 local residents were killed and 149 people injured. [ 53 ] On 16 April 1947, the ship SS Grandcamp , loaded with about 2,300 tons of ammonium nitrate , exploded in port at Texas City, Texas . 581 died and more than 5,000 were injured. This is generally considered the worst industrial accident in United States history. On 18 April 1947, British engineers attempted to destroy the abandoned German fortifications on the evacuated island of Heligoland in what became known as the "British Bang". The island had been fortified during the war with a submarine base and airfield. [ 54 ] [ 55 ] Roughly 4000 tons [ 56 ] [ 57 ] of surplus World War II ammunition were placed in various locations around the island and set off. A significant portion of the fortifications were destroyed, although some survived. According to Willmore, [ 57 ] the energy released was 1.3×10 13 J, or about 3.2 kilotons of TNT equivalent. The blast is listed in the Guinness Book of World Records under largest single explosive detonation , although Minor Scale in 1985 was larger (see below). On 28 July 1947, the Norwegian cargo ship Ocean Liberty exploded in the French port of Brest . The cargo consisted of 3,300 tonnes of ammonium nitrate in addition to paraffin and petrol. The explosion killed 22 people, hundreds were injured, 4,000–5,000 buildings were damaged. [ 58 ] On 18 August 1947, a naval ammunition warehouse containing mostly mines and torpedoes exploded in Cádiz , in southern Spain, for unknown reasons. The explosion of 200 tons of TNT destroyed a large portion of the city. Officially, the explosion killed 150 people; the real death toll is suspected to be greater. On 19 December 1947, the Liberty class cargo ship General Vatutin exploded in the Soviet port of Magadan at Nagayeva Bay on the Russian Far East . The ship transported 3,313 tonnes of ammonal and TNT for the mining industry. 
Another cargo ship Vyborg , carrying 193 tonnes of chemical substances including detonators and fuse cords, also detonated from the explosion. More than 90 people were killed, more than 500 were injured. The explosion caused a tsunami with broken ice, damaging and destroying many buildings. [ 59 ] In December 1947, a Swiss Army ammunition dump exploded at Mitholz , Switzerland. The explosion of 3,000 tonnes of ammunition killed nine people and destroyed every house in the village. [ 60 ] On 15 July 1949 in the German town of Prüm , an underground bunker inside the hill of Kalvarienberg and used previously by the German Army to store ammunition, but now filled with French Army munitions, caught fire. After a mostly successful evacuation, the 500 tonnes of ammunition in the bunker exploded and destroyed large parts of the town. 12 people died and 15 were injured severely. [ 61 ] The South Amboy powder pier explosion occurred on 19 May 1950. More than 420 tons of explosives in transit at the Raritan River Port in South Amboy, New Jersey detonated due to unknown causes, killing 31 people and injuring more than 350. On 7 August 1956, seven lorries from the Colombian National Army , carrying more than 40 tons of dynamite, exploded. The explosion killed more than 1,000 people, and left a crater 25 metres (82 ft) deep and 60 metres (200 ft) in diameter. [ 62 ] [ 63 ] On 29 September 1957, an explosion occurred within stainless steel containers located in a concrete canyon 8.2 m (27 feet) deep used to store high-level waste. The explosion completely destroyed one of the containers, out of 14 total containers ("cans") in the canyon. The explosion was caused because the cooling system in one of the tanks at Mayak, containing about 70–80 tons of liquid radioactive waste , failed and was not repaired. The temperature in it started to rise, resulting in evaporation and a chemical explosion of the dried waste, consisting mainly of ammonium nitrate and acetates. 
The explosion was estimated to have had a force of at least 70 tons of TNT . [ 64 ] On 5 April 1958, an underwater mountain at Ripple Rock , British Columbia , Canada was levelled by the explosion of 1,375 tonnes of Nitramex 2H , an ammonium nitrate-based explosive. This was one of the largest non-nuclear planned explosions on record, and the subject of the first Canadian Broadcasting Corporation live broadcast coast-to-coast. On 18 July 1963, a test blast of 50 tons of TNT in the Iron Range area of Queensland , Australia, tested the effects of nuclear weapons on tropical rainforest, military targets and ability of troops to transit through the resulting debris field. [ 65 ] On 17 September 1964, the offshore disposal of the ship Village , containing 7,348 short tons (6,666 t) of obsolete munitions, caused unexpected detonations five minutes after sinking off New Jersey . The detonations were detected on seismic instruments around the world; the incident encouraged intentional detonation of subsequent disposal operations to determine detectability of underwater nuclear testing. [ 66 ] A series of tests, Operation Sailor Hat, was performed off Kaho'olawe Island , Hawaii , in 1965, using conventional explosives to simulate the shock effects of nuclear blasts on naval vessels. Each test included the detonation of 500 short tons (450 t) of high explosives. On 14 July 1965, Coastal Mariner was loaded with 4,040 short tons (3,670 t) of obsolete munitions containing 512 short tons (464 t) of high explosives. The cargo was detonated at a depth of 1,000 feet (300 m) and created a 600-foot (200 m) water spout, but was not deep enough to be recorded on seismic instruments. On 16 September 1965, Santiago Iglesias was similarly detonated with 8,715 short tons (7,906 t) of obsolete munitions. [ 66 ] On 4 January 1966, an LPG spill occurred near Lyon , France, and resulted in a cloud of propane vapour which persisted until it was ignited by a car passing by. 
Several tanks erupted in a boiling liquid expanding vapour explosion , causing the deaths of 18 people, the injury of 81 and extensive damage to the site. On 21 October 1966, a mud flow protection dam near Alma-Ata , Kazakhstan was created by a series of four preliminary explosions of 1,800 tonnes total and a final explosion of 3,600 tonnes of ammonium nitrate-based explosive. On 14 April 1967, the dam was reinforced by an explosion of 3,900 tonnes of ammonium nitrate-based explosive. On 23 May 1966, Izaac Van Zandt was loaded with 8,000 short tons (7,300 t) of obsolete munitions containing 400 short tons (360 t) of high explosives. The cargo was detonated off Puget Sound at a depth of 4,000 feet (1,200 m). [ 66 ] On 28 July 1966, Horace Greeley was loaded with obsolete munitions and detonated off New Jersey at a depth of 4,000 feet (1,200 m). [ 66 ] On 3 July 1969, an N1 rocket in the USSR exploded upon impacting its launch pad at Baikonur Cosmodrome , after a turbopump exploded in one of the engines. The entire rocket contained about 680,000 kg (680 t) of paraffin and 1,780,000 kg (1,780 t) of liquid oxygen. [ 67 ] Using a standard energy release of 43 MJ/kg of paraffin gives about 29 TJ for the energy of the explosion (about 6.93 kt TNT equivalent ). Investigators later determined that as much as 85% of the fuel in the rocket did not detonate, meaning that the blast yield was likely no more than 1 kt TNT equivalent . [ 68 ] Comparing explosions of initially unmixed fuels is difficult (being part detonation and part deflagration ). On 9 March 1972, 2,000 tons (4 million pounds) of explosive were detonated inside three levels of tunnels in the Old Reliable Mine near Mammoth, Arizona . [ 69 ] The blast was an experimental attempt to break up the ore body so that metals (primarily copper) could be extracted using sulphuric acid in a heap-leach process. 
The benefits of increased production were short-lived while the costs of managing acid mine drainage due to the sulphide ore body being exposed to oxygen continue to the present. On 1 June 1974, a pipe failure at the Nypro chemical plant in Flixborough , England, caused a large release of flammable cyclohexane vapour, which ignited. The resulting fuel-air explosion destroyed the plant, killing 28 people and injuring 36 more. Beyond the plant, 1,821 houses and 167 shops and factories suffered damage to a greater or lesser degree. [ 70 ] Fires burned for 16 days. The explosion occurred during a weekend; otherwise, the casualties would have been much greater. This explosion caused a significant strengthening of safety regulations for chemical plants in the United Kingdom. On 11 November 1977, a freight train carrying 40 tons of dynamite in South Korea from Gwangju suddenly exploded at Iri station (present-day Iksan ), Jeollabuk-do province. The cause of the explosion was accidental ignition by a drunk guard. 59 people died, and 185 others were seriously wounded; altogether, more than 1,300 people were injured or killed. On 11 July 1978, an overloaded tanker lorry carrying 23 tons of liquefied propylene crashed and ruptured in Spain, emitting a white cloud of ground-hugging fumes which spread into a nearby campground and discothèque before reaching an ignition source and exploding. 217 people were killed and 200 more severely burned. In 1983 near Murdock, Illinois , at least two tank wagons of a burning derailed train exploded into BLEVEs; one of them was thrown nearly three-quarters of a mile (1.2 km). [ 71 ] On 27 May 1983, an explosion at an illegal fireworks factory near Benton, Tennessee , killed eleven people, injured one, and caused damage within a radius of several miles. The blast created a mushroom cloud 600 to 800 feet (180 to 240 m) tall and was heard as far as fifteen miles (24 km) away.
[ 72 ] On 7 January 1983, an explosion at the Texaco oil tank farm in Newark, New Jersey , was felt 100–130 miles from the epicentre, killing one person and injuring 22–24 people. Many very large detonations have been performed in order to simulate the effects of nuclear weapons on vehicles and other military material. The largest publicly known test was conducted by the United States Defense Nuclear Agency (now part of the Defense Threat Reduction Agency ) on 27 June 1985 at the White Sands Missile Range in New Mexico. This test, named Minor Scale, used 4,744 short tons (4,304 t) of ANFO , with a yield of about 4 kt (3,900 long tons; 4,400 short tons). [ 73 ] Misty Picture was another similar test a few years later, slightly smaller at 4,685 short tons or 4,250 t. On 4 May 1988, about 4,250 short tons (3,860 metric tons) of ammonium perchlorate (NH 4 ClO 4 ) caught fire and set off explosions near Henderson, Nevada . A 16-inch (41 cm) natural gas pipeline ruptured under the stored ammonium perchlorate and added fuel to the later, larger explosions. There were seven detonations in total, the largest being the last. Two people were killed and hundreds injured. The largest explosion was estimated to be equivalent to 0.25 kilotons of TNT (1.0 TJ). [ 74 ] [ 75 ] The accident was caught on video by a broadcast engineer servicing a transmitter on Black Mountain , between Henderson and Las Vegas . [ 76 ] The Arzamas explosion, also known as the Arzamas train disaster, occurred on 4 June 1988, when three goods wagons transporting hexogen to Kazakhstan exploded on a railway crossing in Arzamas , Gorky Oblast , USSR. The explosion of 118 tons of hexogen left a crater 26 metres (85 ft) deep, and caused major damage, killing 91 people and injuring 1,500. 151 buildings were destroyed. On 4 June 1989, a gas explosion destroyed two trains (37 cars and two locomotives) in the USSR. At least 575 people died and more than 800 were injured.
On 14 February 1996, a Chinese Long March 3B rocket veered severely off course immediately after clearing the launch tower at the Xichang Satellite Launch Center , then crashed into a nearby city and exploded on impact. The rocket did not have a flight termination system that would have allowed the vehicle to be destroyed mid-air. After the disaster, foreign media were kept in a bunker for five hours while, some alleged, the Chinese People's Liberation Army attempted to "clean up" the damage. Officials later blamed the failure on an "unexpected gust of wind", although video footage contradicts this. Xinhua News Agency initially reported 6 deaths and 57 injuries. [ 77 ] [ 78 ] On 13 May 2000, 177 tonnes of fireworks exploded in Enschede , in the Netherlands, killing 23 people and injuring 947. [ 79 ] The first explosion was on the order of 800 kg of TNT equivalent; the final explosion was in the range of 4,000–5,000 kg TNT. [ 80 ] On 21 September 2001, an explosion occurred at a fertilizer factory in Toulouse , France. The disaster caused 31 deaths, 2,500 serious injuries, and 8,000 minor injuries. The blast (estimated yield of 20–40 tons of TNT, comparable in scale to the military test Operation Blowdown ) was heard 80 km away (50 miles) and registered 3.4 on the Richter magnitude scale. It damaged about 30,000 buildings over about two-thirds of the city, for an estimated total cost of about €2 billion. [ 81 ] A train exploded in North Korea on 22 April 2004. According to officials, 54 people were killed and 1,249 were injured. [ 82 ] On 3 November 2004, about 284 tonnes of fireworks exploded in Kolding , in Denmark. One firefighter was killed, and a mass evacuation of 2,000 people saved many lives. The cost of the damage was estimated at €100 million. On 23 March 2005, there was a hydrocarbon leak due to incorrect operations during a refinery startup which caused a vapour cloud explosion when ignited by a running vehicle engine.
There were 15 deaths and more than 170 injured. On 11 December 2005, there was a series of major explosions at the 60,000,000 imp gal (270,000,000 L) capacity Buncefield oil depot near Hemel Hempstead , Hertfordshire , England. The explosions were heard more than 100 mi (160 km) away, as far as the Netherlands and France, and the resulting flames were visible for many miles around the depot. A smoke cloud covered Hemel Hempstead and nearby parts of west Hertfordshire and Buckinghamshire . There were no fatalities, but there were around 43 injuries (2 serious). The British Geological Survey estimated the equivalent yield of the explosion as 29.5 tonnes TNT. [ 83 ] On 30 January 2007, a Sea Launch Zenit-3SL space rocket exploded on takeoff . The explosion consumed the roughly 400,000 kg (400 t) of paraffin and liquid oxygen aboard. This rocket was launched from an uncrewed ship in the middle of the Pacific Ocean, so there were no casualties; the launch platform was damaged and the NSS-8 satellite was destroyed. On 22 March 2007, there was a series of explosions over 2.5 hours in an arms depot in the Mozambican capital of Maputo . The incident was blamed on high temperatures. Officials confirmed 93 human fatalities and more than 300 injuries. [ 84 ] [ 85 ] On 15 March 2008, at an ex-military ammunition depot in the village of Gërdec in the Vorë Municipality, Albania (14 kilometres from Tirana , the capital), US and Albanian munitions experts were preparing to destroy stockpiles of obsolete ammunition. The main explosion, involving more than 400 tons of propellant in containers, destroyed hundreds of houses within a few kilometres from the depot and broke windows in cars on the Tirana-Durrës highway. A large fire caused a series of smaller but powerful explosions that continued until 2 a.m. the next day. 
The explosions could be heard as far away as the Macedonian capital of Skopje , 170 km (110 mi) away. There were 26 killed, 318 houses were destroyed completely, 200 buildings were seriously damaged, and 188 buildings were less seriously damaged. [ 86 ] On the morning of 23 October 2009, there was a major explosion at the petrol tanks at the Caribbean Petroleum Corporation oil refinery and oil depot in Bayamón, Puerto Rico . [ 87 ] The explosion was seen and heard from 50 miles (80 km) away and left a smoke plume with tops as high as 30,000 feet (9 km). It registered as a magnitude 3.0 earthquake and blew glass out of windows around the city. The resulting fire was extinguished on 25 October. On 13 and 23 November 2009, 120 tons of Soviet-era artillery shells blew up in two separate sets of explosions at the 31st Arsenal of the Caspian Sea Flotilla 's ammunition depot near Ulyanovsk , killing ten people. [ 88 ] [ 89 ] At about 5:45 am local time on 11 July 2011, a fire at a munitions dump at Evangelos Florakis Naval Base near Zygi , Cyprus, caused the explosion of 98 cargo containers holding various types of munitions. The naval base was destroyed, as was Cyprus's biggest power plant , the "Vassilikos" plant, 500 m (1,600 ft) away. The explosion also caused 13 deaths and more than 60 injuries. Injuries were reported as far as 5 km (3.1 mi) away and damaged houses were reported as far as 10 km (6.2 mi) away. [ 90 ] [ 91 ] Seismometers in the Mediterranean region recorded the explosion as a M3.0 seismic event . [ 92 ] On 11 March 2011 in Japan, the Tōhoku earthquake caused natural gas containers in the Cosmo Oil Refinery of Ichihara , Chiba Prefecture , to catch fire, destroying storage tanks and injuring six people. [ 93 ] As it burned, several pressurized liquefied propane gas storage tanks exploded into fireballs. [ 94 ] The fire was extinguished by the Cosmo Oil Company on 21 March 2011. [ 93 ] On 17 April 2013, a fire culminating in an explosion shortly before 8 p.m.
CDT (00:50 UTC , 18 April) destroyed the West Fertilizer Company plant in West, Texas , United States, located 18 miles (29 km) north of Waco, Texas . [ 95 ] [ 96 ] The blast killed 15 people, injured more than 160, and destroyed over 150 buildings. The United States Geological Survey recorded the explosion as a 2.1-magnitude earthquake, the equivalent of 7.5–10 tons of TNT. [ 97 ] [ 98 ] [ 99 ] On 6 July 2013, a train of 73 tank wagons of light crude oil, after being left unattended for the night, ran away down a slight incline when the air brakes failed after the locomotive engines were shut down following a small fire. It derailed twelve kilometres away in Lac-Mégantic , Quebec , Canada, igniting the Bakken light crude oil from 44 DOT-111 oil tank wagons. Approximately 3–4 minutes after the initial blast, there was a second explosion from 12 oil tank wagons. A series of smaller blasts followed into the early morning hours, igniting the oil of a total of 73 oil tank wagons. The disaster is known to have killed 42 people; five more were missing and presumed dead. [ 100 ] On 12 August 2015, at 23:30, two explosions occurred in the Chinese port of Tianjin at a warehouse operated by Ruihai Logistics. The more powerful explosion was estimated at 336 tons TNT equivalent. [ 101 ] 173 people were killed, and 8 remain missing. [ 102 ] On 5 June 2016, a fire at the largest military armoury in the island nation of Sri Lanka caused a series of explosions that lasted for about 5 hours. One soldier was killed and several others were injured. [ 103 ] On 20 December 2016, a fireworks explosion occurred at the San Pablito Market in the city of Tultepec , north of Mexico City . At least 42 people were killed, and dozens injured. On 6 September 2017, an ammunition explosion occurred at an ammunition depot in Kalynivka , near Vinnytsia , Ukraine . On 14 January 2020, an ethylene oxide tank exploded at the IQOXE (Chemical Industries of Ethylene Oxide) plant in Tarragona (Spain).
On 4 August 2020, a warehouse containing 2,750 tonnes (3,030 short tons) of ammonium nitrate exploded following a fire in the Port of Beirut , Lebanon. The explosion generated a pressure wave felt more than 240 kilometres (150 mi) away. A study by researchers from the Blast and Impact Dynamics Research Group at the University of Sheffield estimated the energy of the Beirut explosion to be equivalent to 0.5–1.2 kt of TNT. [ 104 ] At least 218 people were killed, more than 7,000 injured, and about 300,000 made homeless. Much of central Beirut was devastated by the blast, with property damage estimated at US$10–15 billion. In September 2024, a Ukrainian drone attack on a Russian weapons depot in Toropets , Tver Oblast , caused an explosion large enough to be detected as an earthquake by monitoring stations. Local residents were told to evacuate and schools in the region were closed. [ 105 ] On 26 April 2025, a massive explosion and fire at the Shahid Rajaee port near Bandar Abbas in southern Iran killed at least 70 people and injured 1,200 others, according to state media reports. [ 106 ] The most powerful non-nuclear weapons ever designed are the United States' MOAB (standing for Massive Ordnance Air Blast , tested in 2003 and used on 13 April 2017 in Achin District, Afghanistan) and the Russian " Father of All Bombs " (tested in 2007). The MOAB contains 18,700 lb (8.5 t) of Composition H6 explosive, which is 1.35 times as powerful as TNT, giving the bomb an approximate yield of 11 t TNT. It would require about 250 MOAB blasts to equal the Halifax explosion (2.9 kt). Large conventional explosions have been conducted for nuclear testing purposes. Some of the larger ones are listed below.
[ 107 ] Other smaller tests include Air Vent I and Flat Top I-III series of 20 tons TNT at Nevada Test Site in 1963–64, Pre Mine Throw and Mine Throw in 1970–1974, Mixed Company 1 & 2 of 20 tons TNT, Middle Gust I-V series of 20 or 100 tons TNT in the early 1970s, Pre Dice Throw and Pre Dice Throw II in 1975, Pre-Direct Course in 1982, SHIST in 1994, and the series Dipole Might in the 1990s and 2000s. Divine Strake was a planned test of 700 tons ANFO at the Nevada Test Site in 2006, but was cancelled. These yields are approximated by the amount of the explosive material and its properties. They are rough estimates and are not authoritative.
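As a worked example of that arithmetic, the N1 estimate quoted earlier (680 t of paraffin at a nominal 43 MJ/kg, roughly 29 TJ or about 7 kt) can be sketched in Python; the conventional definition 1 kt TNT = 4.184 TJ is assumed:

```python
# TNT equivalence: by convention, 1 kiloton of TNT = 4.184e12 J.
TNT_J_PER_KT = 4.184e12

def tnt_equivalent_kt(mass_kg: float, energy_mj_per_kg: float) -> float:
    """TNT equivalent (kilotons) of releasing the material's full chemical energy."""
    return mass_kg * energy_mj_per_kg * 1e6 / TNT_J_PER_KT

# N1 rocket example from the list above: 680 t of paraffin at ~43 MJ/kg.
energy_tj = 680_000 * 43 * 1e6 / 1e12    # total chemical energy in terajoules
print(f"{energy_tj:.1f} TJ")                        # 29.2 TJ
print(f"{tnt_equivalent_kt(680_000, 43):.1f} kt")   # ~7.0 kt
```

The small difference from the 6.93 kt quoted in the text comes down to rounding conventions, which is precisely why such figures are rough estimates rather than authoritative values.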
https://en.wikipedia.org/wiki/Largest_artificial_non-nuclear_explosions
In physics , Larmor precession (named after Joseph Larmor ) is the precession of the magnetic moment of an object about an external magnetic field . The phenomenon is conceptually similar to the precession of a tilted classical gyroscope in an external torque-exerting gravitational field. Objects with a magnetic moment also have angular momentum and effective internal electric current proportional to their angular momentum; these include electrons , protons , other fermions , many atomic and nuclear systems, as well as classical macroscopic systems. The external magnetic field exerts a torque on the magnetic moment, τ → = μ → × B → = γ J → × B → , {\displaystyle {\vec {\tau }}={\vec {\mu }}\times {\vec {B}}=\gamma {\vec {J}}\times {\vec {B}},} where τ → {\displaystyle {\vec {\tau }}} is the torque, μ → {\displaystyle {\vec {\mu }}} is the magnetic dipole moment, J → {\displaystyle {\vec {J}}} is the angular momentum vector, B → {\displaystyle {\vec {B}}} is the external magnetic field, × {\displaystyle \times } symbolizes the cross product , and γ {\displaystyle \gamma } is the gyromagnetic ratio which gives the proportionality constant between the magnetic moment and the angular momentum. The angular momentum vector J → {\displaystyle {\vec {J}}} precesses about the external field axis with an angular frequency known as the Larmor frequency , ω = − γ B , {\displaystyle \omega =-\gamma B,} where ω {\displaystyle \omega } is the angular frequency , [ 1 ] B {\displaystyle B} is the magnitude of the applied magnetic field, and γ {\displaystyle \gamma } is the gyromagnetic ratio for a particle of charge − e {\displaystyle -e} , [ 2 ] equal to − e g 2 m {\displaystyle -{\frac {eg}{2m}}} , where m {\displaystyle m} is the mass of the precessing system, while g {\displaystyle g} is the g -factor of the system. The g -factor is the unit-less proportionality factor relating the system's angular momentum to the intrinsic magnetic moment; in classical physics it is 1 for any rigid object in which the charge and mass density are identically distributed.
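Since torque is the rate of change of angular momentum, the torque described above gives dJ→/dt = γ J→ × B→; integrating this numerically confirms that the precession period 2π/(|γ|B) does not depend on the angle between J→ and B→. The following is a sketch in arbitrary units, with illustrative values of γ and B (not physical constants):

```python
import math

# Integrate dJ/dt = gamma * (J x B) with an RK4 step and verify that J
# precesses about B with period 2*pi/(|gamma|*B), independent of the tilt
# angle between J and B.  gamma and B are illustrative, not physical.
gamma, B = -2.0, 1.5            # gyromagnetic ratio and field magnitude
Bvec = (0.0, 0.0, B)            # field along z

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def deriv(J):
    return tuple(gamma * c for c in cross(J, Bvec))

def rk4_step(J, dt):
    k1 = deriv(J)
    k2 = deriv(tuple(J[i] + 0.5*dt*k1[i] for i in range(3)))
    k3 = deriv(tuple(J[i] + 0.5*dt*k2[i] for i in range(3)))
    k4 = deriv(tuple(J[i] + dt*k3[i] for i in range(3)))
    return tuple(J[i] + dt/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(3))

def precession_period(tilt_rad, dt=1e-4):
    """Time for the transverse component of J to complete one full turn."""
    J = (math.sin(tilt_rad), 0.0, math.cos(tilt_rad))  # unit J, tilted from B
    t, prev = 0.0, J
    while True:
        nxt = rk4_step(prev, dt)
        t += dt
        # J_y starts at 0 going positive; the next upward zero crossing of
        # J_y marks one complete revolution about the field axis.
        if prev[1] < 0 <= nxt[1]:
            return t
        prev = nxt

expected = 2*math.pi / (abs(gamma)*B)   # Larmor period
for tilt in (0.2, 0.8, 1.4):            # three different tilt angles
    assert abs(precession_period(tilt) - expected) < 1e-3
print(f"period = {expected:.4f} for all tilt angles")
```

The assertions pass for every tilt angle tried, which is the numerical counterpart of the angle-independence statement below.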
The Larmor frequency is independent of the angle between J → {\displaystyle {\vec {J}}} and B → {\displaystyle {\vec {B}}} . In nuclear physics the g -factor of a given system includes the effect of the nucleon spins, their orbital angular momenta, and their couplings . Generally, the g -factors are very difficult to calculate for such many-body systems, but they have been measured to high precision for most nuclei. The Larmor frequency is important in NMR spectroscopy . The gyromagnetic ratios, which give the Larmor frequencies at a given magnetic field strength, have been measured and tabulated. [ 3 ] Crucially, the Larmor frequency is independent of the polar angle between the applied magnetic field and the magnetic moment direction. This is what makes it a key concept in fields such as nuclear magnetic resonance (NMR) and electron paramagnetic resonance (EPR), since the precession rate does not depend on the spatial orientation of the spins. The above equation is the one that is used in most applications. However, a full treatment must include the effects of Thomas precession , yielding the equation (in CGS units , which are used so that E has the same units as B ): where γ {\displaystyle \gamma } is the relativistic Lorentz factor (not to be confused with the gyromagnetic ratio above). Notably, for the electron g is very close to 2 ( 2.002... 
), so if one sets g = 2, one arrives at The spin precession of an electron in an external electromagnetic field is described by the Bargmann–Michel–Telegdi (BMT) equation (named after Valentine Bargmann , Louis Michel and Valentine Telegdi ) [ 4 ] where a τ {\displaystyle a^{\tau }} , e {\displaystyle e} , m {\displaystyle m} , and μ {\displaystyle \mu } are the polarization four-vector, charge, mass, and magnetic moment, u τ {\displaystyle u^{\tau }} is the four-velocity of the electron (in a system of units in which c = 1 {\displaystyle c=1} ), a τ a τ = − u τ u τ = − 1 {\displaystyle a^{\tau }a_{\tau }=-u^{\tau }u_{\tau }=-1} , u τ a τ = 0 {\displaystyle u^{\tau }a_{\tau }=0} , and F τ σ {\displaystyle F^{\tau \sigma }} is the electromagnetic field-strength tensor. Using equations of motion, one can rewrite the first term on the right side of the BMT equation as ( − u τ w λ + u λ w τ ) a λ {\displaystyle (-u^{\tau }w^{\lambda }+u^{\lambda }w^{\tau })a_{\lambda }} , where w τ = d u τ / d s {\displaystyle w^{\tau }=du^{\tau }/ds} is the four-acceleration. This term describes Fermi–Walker transport and leads to Thomas precession . The second term is associated with Larmor precession. When electromagnetic fields are uniform in space or when gradient forces like ∇ ( μ ⋅ B ) {\displaystyle \nabla ({\boldsymbol {\mu }}\cdot {\boldsymbol {B}})} can be neglected, the particle's translational motion is described by The BMT equation is then written as [ 5 ] A beam-optical version of the Thomas–BMT equation, from the quantum theory of charged-particle beam optics, is applicable in accelerator optics. [ 6 ] [ 7 ] A 1935 paper published by Lev Landau and Evgeny Lifshitz predicted the existence of ferromagnetic resonance of the Larmor precession, which was independently verified in experiments by J. H. E. Griffiths (UK) [ 8 ] and E. K. Zavoiskij (USSR) in 1946.
[ 9 ] [ 10 ] Larmor precession is important in nuclear magnetic resonance , magnetic resonance imaging , electron paramagnetic resonance , muon spin resonance , and neutron spin echo . It is also important for the alignment of cosmic dust grains, which is a cause of the polarization of starlight . To calculate the spin of a particle in a magnetic field, one must in general also take into account Thomas precession if the particle is moving. The spin angular momentum of an electron precesses counter-clockwise about the direction of the magnetic field. An electron has a negative charge, so the direction of its magnetic moment is opposite to that of its spin.
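The tabulated gyromagnetic ratios mentioned above can be reproduced directly from γ = ge/2m. The sketch below is a minimal numerical check, assuming standard CODATA-style constant values (they are not taken from this article):

```python
import math

e   = 1.602176634e-19    # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
m_p = 1.67262192369e-27  # proton mass, kg
g_e = 2.00231930436      # electron g-factor (magnitude)
g_p = 5.5856946893       # proton g-factor

def larmor_frequency_hz(g: float, m: float, B: float) -> float:
    """Precession frequency f = |gamma| B / (2 pi), with |gamma| = g e / (2 m)."""
    gamma = g * e / (2.0 * m)   # gyromagnetic ratio magnitude, rad s^-1 T^-1
    return gamma * B / (2.0 * math.pi)

# Frequencies in a 1 T field: the standard EPR and NMR reference values.
print(f"electron: {larmor_frequency_hz(g_e, m_e, 1.0)/1e9:.2f} GHz")  # 28.02 GHz
print(f"proton:   {larmor_frequency_hz(g_p, m_p, 1.0)/1e6:.2f} MHz")  # 42.58 MHz
```

These match the familiar rules of thumb used when reading gyromagnetic-ratio tables: about 28 GHz/T for electron paramagnetic resonance and about 42.58 MHz/T for proton NMR.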
https://en.wikipedia.org/wiki/Larmor_precession
The Larock indole synthesis is a heteroannulation reaction that uses palladium as a catalyst to synthesize indoles from an ortho-iodoaniline and a disubstituted alkyne . [ 1 ] It is also known as Larock heteroannulation . The reaction is extremely versatile and can be used to produce varying types of indoles. Larock indole synthesis was first proposed by Richard C. Larock in 1991 at Iowa State University . [ 2 ] The reaction usually occurs with an o -iodoaniline or its derivatives, 2–5 equivalents of an alkyne, palladium(II) (PdII), an excess of sodium or potassium carbonate base, PPh 3 , and 1 equivalent of LiCl or n-Bu 4 NCl. N-methyl, N-acetyl, and N-tosyl derivatives of ortho-iodoanilines have been shown to be the most successful anilines that can be used to produce good to excellent yields. [ 3 ] Either LiCl or n-Bu 4 NCl is used depending on the reaction conditions, but LiCl appears to be the more effective additive in Larock indole annulation. [ 3 ] The stoichiometry of LiCl is also considerably important, as more than 1 equivalent of LiCl will slow the rate of reaction and lower the overall yield. [ 1 ] Bases other than sodium or potassium carbonate have been used to produce a good overall yield of the annulation reaction. [ 3 ] For example, KOAc can be used with 1 equivalent of LiCl. However, the reaction using KOAc must be run at 120 °C to reach completion in a reasonable time. In contrast, K 2 CO 3 can be used at 100 °C. The Larock indole synthesis is a flexible reaction partly due to the variety of substituted alkynes that can be used in the annulation reaction. In particular, alkynes with substituents including alkyls, aryls, alkenyls, hydroxyls, and silyls have been successfully used. [ 3 ] However, bulkier tertiary alkyl or trimethylsilyl groups have been shown to provide a higher yield. [ 1 ] The annulation reaction will also proceed more efficiently when 2–5 equivalents of an alkyne are used.
Less than two equivalents appears to give suboptimal conditions for the reaction. 5 mol % of PPh3 was initially used in the reaction as a ligand. [ 1 ] However, later experiments showed that PPh3 does not significantly improve the overall yield and is not necessary. [ 3 ] The Larock indole synthesis proceeds via the following intermediate steps: [ 3 ] The carbopalladation step is regioselective when unsymmetrical alkynes are used. [ 1 ] [ 3 ] Although it was previously believed that the alkyne inserts with the less sterically demanding R-group adjacent to the arylpalladium, Larock et al. observed that the larger, more sterically demanding R-group ends up next to the arylpalladium. [ 1 ] They suggest that the regioselectivity of the alkyne insertion is driven by the steric strain in the developing carbon–carbon bond and by the orientation of the alkyne prior to its syn-insertion into the aryl–palladium bond. [ 3 ] The alkyne inserts so that its large substituent avoids steric strain with the short developing carbon–carbon bond by sitting next to the longer carbon–palladium bond. o-Bromoanilines and o-chloroanilines do not undergo the Larock indole synthesis under the standard conditions. However, researchers at Boehringer-Ingelheim were able to use both o-bromoanilines and o-chloroanilines to form indoles by using N-methyl-2-pyrrolidone (NMP) as the solvent with 1,1′-bis(di-tert-butylphosphino)ferrocene as the palladium ligand. [ 4 ] o-Bromoanilines and o-chloroanilines are more readily available and more cost-effective than o-iodoanilines. [ 2 ] Monguchi et al. also obtained 2- and 2,3-substituted indoles without using LiCl. [ 5 ] Their optimized reaction uses 10% Pd/C (3.0 mol %) with 1.1 equivalents of NaOAc in NMP at 110–130 °C. Monguchi et al.
state that their optimized LiCl-free conditions for the Larock indole synthesis are a milder, more environmentally benign, and more efficient strategy for producing indoles. Indoles are among the most prevalent heterocyclic structures found in biological processes, so the production of indole derivatives is important in a variety of fields. Nishikawa et al. prepared iso-tryptophan by Larock indole synthesis from a pre-synthesized α-C-glucosylpropargyl glycine and o-iodo-tosylanilide. [ 6 ] This reaction gave a product with the reverse of the normal Larock regioselectivity: the larger substituent was placed adjacent to the forming carbon–carbon bond rather than the carbon–palladium bond. The reason for the reversed regioselectivity that produced the iso-tryptophan is unknown. Optically active tryptophan that follows the normal regioselectivity of the Larock indole synthesis can also be prepared from o-iodoaniline and a propargyl-substituted bislactim ethyl ether. The propargyl-substituted bislactim ethyl ether is generated by treating the Schöllkopf chiral auxiliary bislactim ether with n-BuLi in THF and a 3-halo-1-(trimethylsilyl)-1-propyne, then isolating the trans-isomer of the propargyl-substituted bislactim. [ 7 ] Other relevant applications include the synthesis of the 5-HT1D receptor agonist MK-0462, an anti-migraine drug. [ 8 ]
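The normal regiochemical rule discussed above — with an unsymmetrical alkyne, the sterically larger substituent ends up at C2 of the indole, next to nitrogen — can be sketched as a toy predictor. The use of conformational A-values as the steric metric, and the specific values in the table, are illustrative assumptions (approximate literature numbers), not part of Larock's analysis; real cases such as the Nishikawa example above can reverse this outcome.

```python
# Toy model of Larock regioselectivity for an R1-C#C-R2 alkyne:
# the bulkier group is predicted to land at C2 of the indole.
# Steric ranking via approximate A-values (kcal/mol) is an assumption
# made for this illustration.

A_VALUES = {
    "H": 0.0, "Me": 1.7, "Et": 1.75, "iPr": 2.15,
    "tBu": 4.9, "Ph": 2.8, "TMS": 2.5,
}

def larock_regiochemistry(r1, r2):
    """Predict (C2_substituent, C3_substituent): the sterically
    larger group is placed at C2, adjacent to the indole nitrogen."""
    big, small = sorted((r1, r2), key=A_VALUES.get, reverse=True)
    return big, small

print(larock_regiochemistry("tBu", "Me"))  # → ('tBu', 'Me'): tert-butyl at C2
```

This mirrors the steric argument in the text: the large substituent avoids the short, developing carbon–carbon bond and instead sits along the longer carbon–palladium bond, which places it at C2 after reductive elimination.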
https://en.wikipedia.org/wiki/Larock_indole_synthesis