In materials science , a Lomer–Cottrell junction is a particular configuration of dislocations that forms when two perfect dislocations interact on intersecting slip planes in a crystalline material. [ 1 ]
The sessile or immobile nature of the Lomer–Cottrell dislocation forms a strong barrier to further dislocation motion. Trailing dislocations pile up behind this junction, leading to an increase in the stress required to sustain deformation. This mechanism is a key contributor to work hardening in ductile materials like aluminum and copper . [ 1 ]
When two perfect dislocations travelling on intersecting slip planes meet, each perfect dislocation can split into two Shockley partial dislocations : a leading partial and a trailing partial. When the two leading Shockley partials combine, they form a separate dislocation with a Burgers vector that does not lie in either slip plane. This is the Lomer–Cottrell dislocation. It is sessile, i.e. immobile in the slip plane, and acts as a barrier against other dislocations in the plane. The trailing dislocations pile up behind the Lomer–Cottrell dislocation, and an ever greater force is required to push additional dislocations into the pile-up.
For an FCC crystal with slip planes of the form {111}, consider the following reactions:
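One representative choice of Burgers vectors (other crystallographically equivalent index choices exist) is:

$$\tfrac{a}{2}[01\bar{1}] \rightarrow \tfrac{a}{6}[11\bar{2}] + \tfrac{a}{6}[\bar{1}2\bar{1}] \quad \text{on } (111),$$
$$\tfrac{a}{2}[101] \rightarrow \tfrac{a}{6}[112] + \tfrac{a}{6}[2\bar{1}1] \quad \text{on } (\bar{1}\bar{1}1),$$
$$\tfrac{a}{6}[\bar{1}2\bar{1}] + \tfrac{a}{6}[2\bar{1}1] \rightarrow \tfrac{a}{6}[110].$$

The last step, in which one partial from each plane combines at the line of intersection of the two planes, is energetically favorable by Frank's rule: $b^{2}$ falls from $a^{2}/6 + a^{2}/6 = a^{2}/3$ to $a^{2}/18$. The resulting $\tfrac{a}{6}\langle 110\rangle$ "stair-rod" Burgers vector lies in a {001}-type plane rather than in either {111} slip plane.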
The resulting dislocation lies along the line of intersection of the two {111} slip planes, and its Burgers vector does not lie in either of them; since {111} planes are the only active slip planes in FCC materials at room temperature, the junction cannot glide. This configuration contributes to the immobility of the Lomer–Cottrell junction.
| https://en.wikipedia.org/wiki/Lomer–Cottrell_junction |
A Lommel polynomial $R_{m,\nu}(z)$ is a polynomial in $1/z$ giving the recurrence relation
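In the standard form (as given, for example, in Watson's treatise on Bessel functions), the relation reads
$$J_{\nu+m}(z) = J_{\nu}(z)\,R_{m,\nu}(z) - J_{\nu-1}(z)\,R_{m-1,\nu+1}(z),$$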
where J ν ( z ) is a Bessel function of the first kind. [ 1 ]
They are given explicitly by
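A standard explicit expression is
$$R_{m,\nu}(z) = \sum_{n=0}^{\lfloor m/2\rfloor} \frac{(-1)^{n}\,(m-n)!\;\Gamma(\nu+m-n)}{n!\,(m-2n)!\;\Gamma(\nu+n)}\left(\frac{z}{2}\right)^{2n-m}.$$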
| https://en.wikipedia.org/wiki/Lommel_polynomial |
A Lomond still is a type of still that was sometimes used for whisky distillation, invented in 1955 by Alistair Cunningham of Hiram Walker . [ 1 ] It is used for batch distillation like a pot still , but has three perforated plates which can be cooled independently, controlling the reflux through the apparatus in a manner similar to Coffey stills . This allows the distiller to produce different kinds of whisky in the same still. Lomond stills, despite their name, have never been used at the Loch Lomond distillery , only at the Hiram Walker Glenburgie , Miltonduff , Inverleven and Scapa distilleries. (Loch Lomond uses a straight-necked pot still design.) For a time, the only remaining Lomond still was in the Scapa distillery, where it is used as a wash still, in combination with a traditional pot still. [ 2 ] In 2010, Bruichladdich distillery installed the original still salvaged from the demolished Inverleven distillery as a gin still. [ 3 ] In 2015 new Lomond stills were installed at InchDairnie distillery. [ 4 ] Loch Lomond Distillery has Lomond stills installed, though it is unknown how long they have been there. [ 5 ] | https://en.wikipedia.org/wiki/Lomond_still |
Loncastuximab tesirine , sold under the brand name Zynlonta , is a monoclonal antibody conjugate medication used to treat large B-cell lymphoma and high-grade B-cell lymphoma . [ 2 ] [ 3 ] It is an antibody-drug conjugate (ADC) composed of a humanized antibody targeting the protein CD19 . [ 2 ]
The most common side effects include increased levels of gamma-glutamyltransferase (GGT, a liver enzyme), neutropenia (low levels of neutrophils , a type of white blood cell), tiredness , anemia (low levels of red blood cells), thrombocytopenia (low levels of blood platelets), nausea (feeling sick), peripheral edema (swelling due to fluid retention, especially of the ankles and feet) and rash . [ 3 ]
Loncastuximab tesirine was approved for medical use in the United States in April 2021, [ 2 ] [ 4 ] [ 5 ] and in the European Union in December 2022. [ 3 ] The US Food and Drug Administration (FDA) considers it to be a first-in-class medication . [ 6 ]
Loncastuximab tesirine is indicated for the treatment of adults with relapsed or refractory large B-cell lymphoma and high-grade B-cell lymphoma. [ 2 ] [ 3 ]
The humanized monoclonal antibody is stochastically conjugated via a valine-alanine cleavable, maleimide linker to a cytotoxic (anticancer) pyrrolobenzodiazepine (PBD) dimer . [ medical citation needed ] The antibody binds to CD19, a protein which is highly expressed on the surface of B-cell hematological tumors [ 7 ] including certain forms of lymphomas and leukemias . [ medical citation needed ] After binding to the tumor cells, the antibody is internalized, the cytotoxic drug PBD is released and the cancer cells are killed. [ medical citation needed ] PBD dimers are generated from PBD monomers, a class of natural products produced by various actinomycetes . PBD dimers work by crosslinking specific sites of the DNA , blocking the cancer cells' division and thereby causing the cells to die. [ medical citation needed ] As a class of DNA-crosslinking agents they are significantly more potent than systemic chemotherapeutic drugs. [ 8 ]
The benefit and side effects of loncastuximab tesirine were evaluated in one clinical trial, ADCT-402-201 (LOTIS-2 / NCT03589469), that included 145 participants with relapsed or refractory diffuse large B-cell lymphoma after at least two prior treatments that did not work or were no longer working. [ 2 ] [ 5 ] Participants received loncastuximab tesirine 0.15 mg/kg every 3 weeks for 2 treatment cycles, then 0.075 mg/kg every 3 weeks for subsequent treatment cycles. [ 5 ] Loncastuximab tesirine treatment was continued until either disease worsened or participants experienced unacceptable side effects (toxicity). [ 5 ] The benefit of loncastuximab tesirine was evaluated by measuring how many participants had complete or partial tumor shrinkage (response) and by how long that response lasted. [ 5 ] Participants in the clinical trial were also evaluated for side effects for the purpose of this drug application. [ 5 ] Trials were conducted at 28 sites in the United States, the United Kingdom, Italy, and Switzerland. [ 5 ]
Loncastuximab tesirine was granted orphan drug designation by the FDA for the treatment of diffuse large B-cell lymphoma. [ 9 ] [ 6 ] Loncastuximab tesirine was approved under FDA's accelerated approval program. [ 5 ]
On 15 September 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Zynlonta, intended for the treatment of adults with diffuse large B-cell lymphoma (DLBCL) and high-grade B-cell lymphoma (HGBL). [ 10 ] The applicant for this medicinal product is ADC Therapeutics (NL) B.V. [ 10 ] Loncastuximab tesirine was approved for medical use in the European Union in December 2022. [ 3 ] [ 11 ]
Given its mechanism of action, loncastuximab tesirine may be appealing in patients ineligible for CAR-T cell therapy. [ 12 ] | https://en.wikipedia.org/wiki/Loncastuximab_tesirine |
The London Hydraulic Power Company was established in 1883 to install a hydraulic power network in London. This expanded to cover most of central London at its peak, before being replaced by electricity, with the final pump house closing in 1977.
The company was set up by an act of Parliament , the London Hydraulic Power Act 1884 ( 47 & 48 Vict. c. lxxii), sponsored by railway engineer Sir James Allport , [ 1 ] [ a ] to install a network of high- pressure cast iron water mains under London . It merged the Wharves and Warehouses Steam Power and Hydraulic Pressure Company , founded in 1871 by Edward B. Ellington , and the General Hydraulic Power Company , founded in 1882. The network gradually expanded to cover an area mostly north of the Thames from Hyde Park in the west to Docklands in the east. [ 3 ]
The system was used as a cleaner and more compact alternative to steam engines , to power workshop machinery, lifts , cranes , theatre machinery (including revolving stages at the London Palladium and the London Coliseum , safety curtains at the Theatre Royal, Drury Lane , the lifting mechanism for the cinema organ at the Leicester Square theatre and the complete Palm Court orchestra platform), [ 1 ] and the backup mechanism of Tower Bridge . [ 3 ] It was also used to supply fire hydrants , mostly those inside buildings. The water, pumped straight from the Thames, was heated in winter to prevent freezing. [ 1 ]
The pressure was maintained at a nominal 800 pounds per square inch (5.5 MPa; 55 bar) by five hydraulic power stations, originally driven by coal-fired steam engines . [ 1 ] These were at:
Short-term storage was provided by hydraulic accumulators , which were large vertical pistons loaded with heavy weights.
The mains crossed the River Thames via Vauxhall Bridge , Waterloo Bridge and Southwark Bridge and via the Rotherhithe Tunnel as well as the Tower Subway . [ 6 ]
The system pumped 6.5 million gallons of water each week in 1893; this grew to 32 million gallons in 1933.
From about 1904, business began to decline as electric power became more popular. The company began to replace its steam engines with electric motors from 1923. At its peak, the network consisted of 180 miles (290 km) of pipes, and the total power output was about 7,000 horsepower (5.2 MW).
The system finally closed in June 1977. The company, as a UK statutory authority , had the legal right to dig up the public highways to install and maintain its pipe network. This made it attractive to Mercury Communications (a subsidiary of Cable & Wireless ) who bought the company and used the pipes as telecommunications ducts . [ 3 ] [ 7 ] Wapping Hydraulic Power Station, the last of the five to close, later became an arts centre and restaurant. | https://en.wikipedia.org/wiki/London_Hydraulic_Power_Company |
London dispersion forces ( LDF , also known as dispersion forces , London forces , instantaneous dipole–induced dipole forces, fluctuating induced dipole bonds [ 1 ] or loosely as van der Waals forces ) are a type of intermolecular force acting between atoms and molecules that are normally electrically symmetric; that is, the electrons are symmetrically distributed with respect to the nucleus. [ 2 ] They are part of the van der Waals forces . The LDF is named after the German physicist Fritz London . They are the weakest of the intermolecular forces .
The electron distribution around an atom or molecule undergoes fluctuations in time. These fluctuations create instantaneous electric fields which are felt by other nearby atoms and molecules, which in turn adjust the spatial distribution of their own electrons. The net effect is that the fluctuations in electron positions in one atom induce a corresponding redistribution of electrons in other atoms, such that the electron motions become correlated. While the detailed theory requires a quantum-mechanical explanation (see quantum mechanical theory of dispersion forces ), the effect is frequently described as the formation of instantaneous dipoles that (when separated by vacuum ) attract each other. The magnitude of the London dispersion force is frequently described in terms of a single parameter called the Hamaker constant , typically symbolized $A$. For atoms that are located closer together than the wavelength of light , the interaction is essentially instantaneous and is described in terms of a "non-retarded" Hamaker constant. For entities that are farther apart, the finite time required for the fluctuation at one atom to be felt at a second atom ("retardation") requires use of a "retarded" Hamaker constant. [ 3 ] [ 4 ] [ 5 ]
While the London dispersion force between individual atoms and molecules is quite weak and decreases quickly with separation $R$ as $1/R^{6}$, in condensed matter (liquids and solids) the effect is cumulative over the volume of materials, [ 6 ] or within and between organic molecules, such that London dispersion forces can be quite strong in bulk solids and liquids and decay much more slowly with distance. For example, the total force per unit area between two bulk solids falls off as $1/R^{3}$, [ 7 ] where $R$ is the separation between them. The effects of London dispersion forces are most obvious in systems that are very non-polar (e.g., that lack ionic bonds ), such as hydrocarbons and highly symmetric molecules like bromine (Br 2 , a liquid at room temperature) or iodine (I 2 , a solid at room temperature). In hydrocarbons and waxes , the dispersion forces are sufficient to cause condensation from the gas phase into the liquid or solid phase. Sublimation heats of hydrocarbon crystals, for example, reflect the dispersion interaction. Liquefaction of oxygen and nitrogen gases into liquid phases is also dominated by attractive London dispersion forces.
When atoms/molecules are separated by a third medium (rather than vacuum), the situation becomes more complex. In aqueous solutions , the effects of dispersion forces between atoms or molecules are frequently less pronounced due to competition with polarizable solvent molecules. That is, the instantaneous fluctuations in one atom or molecule are felt both by the solvent (water) and by other molecules.
Larger and heavier atoms and molecules exhibit stronger dispersion forces than smaller and lighter ones. [ 8 ] This is due to the increased polarizability of molecules with larger, more dispersed electron clouds . The polarizability is a measure of how easily electrons can be redistributed; a large polarizability implies that the electrons are more easily redistributed. This trend is exemplified by the halogens (from smallest to largest: F 2 , Cl 2 , Br 2 , I 2 ). The same increase of dispersive attraction occurs within and between organic molecules in the order RF, RCl, RBr, RI (from smallest to largest) or with other more polarizable heteroatoms . [ 9 ] Fluorine and chlorine are gases at room temperature, bromine is a liquid, and iodine is a solid. The London forces are thought to arise from the motion of electrons.
The first explanation of the attraction between noble gas atoms was given by Fritz London in 1930. [ 10 ] [ 11 ] [ 12 ] He used a quantum-mechanical theory based on second-order perturbation theory . The perturbation arises from the Coulomb interaction between the electrons and nuclei of the two moieties (atoms or molecules). The second-order perturbation expression of the interaction energy contains a sum over states. The states appearing in this sum are simple products of the excited electronic states of the monomers . Thus, no intermolecular antisymmetrization of the electronic states is included, and the Pauli exclusion principle is only partially satisfied.
London wrote a Taylor series expansion of the perturbation $V$ in $1/R$, where $R$ is the distance between the nuclear centers of mass of the moieties.
This expansion is known as the multipole expansion because the terms in this series can be regarded as energies of two interacting multipoles, one on each monomer. Substitution of the multipole-expanded form of V into the second-order energy yields an expression that resembles an expression describing the interaction between instantaneous multipoles (see the qualitative description above). Additionally, an approximation named after Albrecht Unsöld must be introduced in order to obtain a description of London dispersion in terms of polarizability volumes , $\alpha'$, and ionization energies , $I$ (older term: ionization potentials ).
In this manner, the following approximation is obtained for the dispersion interaction $E_{AB}^{\rm disp}$ between two atoms $A$ and $B$. Here $\alpha'_{A}$ and $\alpha'_{B}$ are the polarizability volumes of the respective atoms, $I_{A}$ and $I_{B}$ are the first ionization energies of the atoms, and $R$ is the intermolecular distance.
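In these symbols, the standard London result reads
$$E_{AB}^{\rm disp} = -\frac{3}{2}\,\frac{I_{A}I_{B}}{I_{A}+I_{B}}\,\frac{\alpha'_{A}\,\alpha'_{B}}{R^{6}}.$$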
Note that this final London equation does not contain instantaneous dipoles (see molecular dipoles ). The "explanation" of the dispersion force as the interaction between two such dipoles was invented after London arrived at the proper quantum mechanical theory. The authoritative work [ 13 ] contains a criticism of the instantaneous dipole model [ 14 ] and a modern and thorough exposition of the theory of intermolecular forces.
The London theory has much similarity to the quantum mechanical theory of light dispersion , which is why London coined the phrase "dispersion effect". In physics, the term "dispersion" describes the variation of a quantity with frequency, which is the fluctuation of the electrons in the case of the London dispersion.
Dispersion forces are usually the dominant of the three van der Waals contributions (orientation, induction, dispersion) between atoms and molecules, with the exception of molecules that are small and highly polar, such as water. The following contribution of the dispersion to the total intermolecular interaction energy has been given: [ 15 ] | https://en.wikipedia.org/wiki/London_dispersion_force |
The London equations, developed by brothers Fritz and Heinz London in 1935, [ 1 ] are constitutive relations for a superconductor relating its superconducting current to electromagnetic fields in and around it. Whereas Ohm's law is the simplest constitutive relation for an ordinary conductor , the London equations are the simplest meaningful description of superconducting phenomena, and form the genesis of almost any modern introductory text on the subject. [ 2 ] [ 3 ] [ 4 ] A major triumph of the equations is their ability to explain the Meissner effect , [ 5 ] wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold.
There are two London equations when expressed in terms of measurable fields:
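$$\frac{\partial \mathbf{j}_{\rm s}}{\partial t} = \frac{n_{\rm s}e^{2}}{m}\mathbf{E}, \qquad \nabla\times\mathbf{j}_{\rm s} = -\frac{n_{\rm s}e^{2}}{m}\mathbf{B}.$$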
Here $\mathbf{j}_{\rm s}$ is the (superconducting) current density , E and B are respectively the electric and magnetic fields within the superconductor, $e$ is the charge of an electron or proton, $m$ is the electron mass, and $n_{\rm s}$ is a phenomenological constant loosely associated with a number density of superconducting carriers. [ 6 ]
The two equations can be combined into a single "London Equation" [ 6 ] [ 7 ] in terms of a specific vector potential $\mathbf{A}_{\rm s}$ which has been gauge fixed to the "London gauge", giving: [ 8 ]
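$$\mathbf{j}_{\rm s} = -\frac{n_{\rm s}e^{2}}{m}\mathbf{A}_{\rm s}.$$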
In the London gauge, the vector potential obeys the following requirements, ensuring that it can be interpreted as a current density: [ 9 ]
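The conditions usually quoted for the London gauge, consistent with the three requirements discussed in the next paragraph, are
$$\nabla\cdot\mathbf{A}_{\rm s}=0, \qquad \mathbf{A}_{\rm s}\to 0 \ \text{deep in the bulk of the superconductor}, \qquad \mathbf{A}_{\rm s}\cdot\hat{\mathbf{n}}=0 \ \text{on the surface},$$
where $\hat{\mathbf{n}}$ is the outward normal to the surface.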
The first requirement, also known as the Coulomb gauge condition, leads to the constant superconducting electron density, $\dot{\rho}_{\rm s} = 0$, as expected from the continuity equation. The second requirement is consistent with the fact that supercurrent flows near the surface. The third requirement ensures no accumulation of superconducting electrons on the surface. These requirements do away with all gauge freedom and uniquely determine the vector potential. One can also write the London equation in terms of an arbitrary gauge [ 10 ] $\mathbf{A}$ by simply defining $\mathbf{A}_{\rm s} = (\mathbf{A} + \nabla\phi)$, where $\phi$ is a scalar function and $\nabla\phi$ is the change in gauge which shifts the arbitrary gauge to the London gauge.
The vector potential expression holds for magnetic fields that vary slowly in space. [ 4 ]
If the second of London's equations is manipulated by applying Ampere's law , [ 11 ]
then it can be turned into the Helmholtz equation for magnetic field:
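$$\nabla^{2}\mathbf{B} = \frac{1}{\lambda_{\rm s}^{2}}\mathbf{B},$$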
where the inverse square root of the Laplacian eigenvalue:
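$$\lambda_{\rm s} = \sqrt{\frac{m}{\mu_{0}n_{\rm s}e^{2}}}$$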
is the characteristic length scale, $\lambda_{\rm s}$, over which external magnetic fields are exponentially suppressed; it is called the London penetration depth , and typical values are from 50 to 500 nm .
For example, consider a superconductor within free space where the magnetic field outside the superconductor is a constant value pointed parallel to the superconducting boundary plane in the z direction. If x measures the distance perpendicular to the boundary, then the solution inside the superconductor may be shown to be
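$$B_{z}(x) = B_{0}\,e^{-x/\lambda_{\rm s}}.$$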
From here the physical meaning of the London penetration depth can perhaps most easily be discerned.
While it is important to note that the above equations cannot be formally derived, [ 12 ] the Londons did follow a certain intuitive logic in the formulation of their theory. Substances across a stunningly wide range of composition behave roughly according to Ohm's law , which states that current is proportional to electric field. However, such a linear relationship is impossible in a superconductor for, almost by definition, the electrons in a superconductor flow with no resistance whatsoever. To this end, the London brothers imagined electrons as if they were free electrons under the influence of a uniform external electric field. According to the Lorentz force law
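$$\mathbf{F} = -e\mathbf{E},$$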
these electrons should encounter a uniform force, and thus they should in fact accelerate uniformly. Assume that the electrons in the superconductor are now driven by an electric field; then, according to the definition of current density $\mathbf{j}_{\rm s} = -n_{\rm s}e\,\mathbf{v}_{\rm s}$, we should have
$$\frac{\partial \mathbf{j}_{\rm s}}{\partial t} = -n_{\rm s}e\,\frac{\partial \mathbf{v}}{\partial t} = \frac{n_{\rm s}e^{2}}{m}\mathbf{E}$$
This is the first London equation. To obtain the second equation, take the curl of the first London equation and apply Faraday's law ,
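$$\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t},$$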
to obtain
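$$\frac{\partial}{\partial t}\left(\nabla\times\mathbf{j}_{\rm s} + \frac{n_{\rm s}e^{2}}{m}\mathbf{B}\right) = 0.$$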
As it currently stands, this equation permits both constant and exponentially decaying solutions. The Londons recognized from the Meissner effect that constant nonzero solutions were nonphysical, and thus postulated that not only was the time derivative of the above expression equal to zero, but also that the expression in the parentheses must be identically zero:
$$\nabla\times\mathbf{j}_{\rm s} + \frac{n_{\rm s}e^{2}}{m}\mathbf{B} = 0$$
This results in the second London equation and $\mathbf{j}_{\rm s} = -\frac{n_{\rm s}e^{2}}{m}\mathbf{A}_{\rm s}$ (up to a gauge transformation which is fixed by choosing the "London gauge"), since the magnetic field is defined through $\mathbf{B} = \nabla\times\mathbf{A}_{\rm s}$.
Additionally, according to Ampère's law $\nabla\times\mathbf{B} = \mu_{0}\mathbf{j}_{\rm s}$, one may derive that:
$$\nabla\times(\nabla\times\mathbf{B}) = \mu_{0}\,\nabla\times\mathbf{j}_{\rm s} = -\frac{\mu_{0}n_{\rm s}e^{2}}{m}\mathbf{B}.$$
On the other hand, since $\nabla\cdot\mathbf{B} = 0$, we have $\nabla\times(\nabla\times\mathbf{B}) = -\nabla^{2}\mathbf{B}$, which leads to the conclusion that the spatial distribution of the magnetic field obeys:
$$\nabla^{2}\mathbf{B} = \frac{1}{\lambda_{\rm s}^{2}}\mathbf{B}$$
with penetration depth $\lambda_{\rm s} = \sqrt{\frac{m}{\mu_{0}n_{\rm s}e^{2}}}$. In one dimension, such a Helmholtz equation has the solution form
$$B_{z}(x) = B_{0}\,e^{-x/\lambda_{\rm s}}.$$
Inside the superconductor ($x > 0$), the magnetic field decays exponentially, which explains the Meissner effect well. Given the magnetic field distribution, we can use Ampère's law $\nabla\times\mathbf{B} = \mu_{0}\mathbf{j}_{\rm s}$ again to see that the supercurrent $\mathbf{j}_{\rm s}$ also flows near the surface of the superconductor, as expected from the requirement for interpreting $\mathbf{j}_{\rm s}$ as a physical current.
While the above rationale holds for a superconductor, one may also argue in the same way for a perfect conductor. However, one important fact that distinguishes the superconductor from a perfect conductor is that a perfect conductor does not exhibit the Meissner effect for $T < T_{c}$. In fact, the postulate $\nabla\times\mathbf{j}_{\rm s} + \frac{n_{\rm s}e^{2}}{m}\mathbf{B} = 0$ does not hold for a perfect conductor. Instead, the time derivative must be kept and cannot simply be removed. This results in the fact that the time derivative of the $\mathbf{B}$ field (instead of the $\mathbf{B}$ field itself) obeys:
$$\nabla^{2}\,\frac{\partial\mathbf{B}}{\partial t} = \frac{1}{\lambda_{\rm s}^{2}}\,\frac{\partial\mathbf{B}}{\partial t}.$$
For $T < T_{c}$, deep inside a perfect conductor we have $\dot{\mathbf{B}} = 0$ rather than $\mathbf{B} = 0$ as in a superconductor. Consequently, whether the magnetic flux inside a perfect conductor vanishes depends on the initial condition (whether it was zero-field cooled or not).
It is also possible to justify the London equations by other means. [ 13 ] [ 14 ] Current density is defined according to the equation
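$$\mathbf{j}_{\rm s} = -n_{\rm s}e\,\mathbf{v}_{\rm s}.$$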
Taking this expression from a classical description to a quantum mechanical one, we must replace the values $\mathbf{j}_{\rm s}$ and $\mathbf{v}_{\rm s}$ by the expectation values of their operators. The velocity operator is defined by dividing the gauge-invariant, kinematic momentum operator by the particle mass m . [ 15 ] Note that we are using $-e$ as the electron charge.
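With this sign convention the gauge-invariant kinematic momentum is $\mathbf{p} + e\mathbf{A}$, so the velocity operator reads
$$\mathbf{v} = \frac{1}{m}\left(\mathbf{p} + e\mathbf{A}\right).$$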
We may then make this replacement in the equation above. However, an important assumption from the microscopic theory of superconductivity is that the superconducting state of a system is the ground state, and according to a theorem of Bloch's, [ 16 ] in such a state the canonical momentum p is zero. This leaves
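$$\mathbf{j}_{\rm s} = -\frac{n_{\rm s}e^{2}}{m}\mathbf{A},$$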
which is the London equation according to the second formulation above. | https://en.wikipedia.org/wiki/London_equations |
The London moment (after Fritz London ) is a quantum-mechanical phenomenon whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis. [ 1 ] The term may also refer to the magnetic moment of any rotation of any superconductor , caused by the electrons lagging behind the rotation of the object, although the field strength is independent of the charge carrier density in the superconductor.
A magnetometer determines the orientation of the generated field, which is interpolated to determine the axis of rotation. Gyroscopes of this type can be extremely accurate and stable. For example, those used in the Gravity Probe B experiment measured changes in gyroscope spin axis orientation to better than 0.5 milliarcseconds (1.4 × 10 −7 degrees) over a one-year period. [ 2 ] This is equivalent to an angular separation the width of a human hair viewed from 32 kilometers (20 miles) away. [ 3 ]
The GP-B gyro consists of a near-perfect spherical rotating mass made of fused quartz , which provides a dielectric support for a thin layer of niobium superconducting material. To eliminate friction found in conventional bearings, the rotor assembly is centered by the electric field from six electrodes. After the initial spin-up by a jet of helium which brings the rotor to 4,000 RPM , the polished gyroscope housing is evacuated to an ultra-high vacuum to further reduce drag on the rotor. Provided the suspension electronics remain powered, the extreme rotational symmetry , lack of friction, and low drag will allow the angular momentum of the rotor to keep it spinning for about 15,000 years. [ 4 ]
A sensitive DC SQUID magnetometer able to discriminate changes as small as one quantum , or about 2 × 10 −15 Wb , is used to monitor the gyroscope. A precession, or tilt, in the orientation of the rotor causes the London moment magnetic field to shift relative to the housing. The moving field passes through a superconducting pickup loop fixed to the housing, inducing a small electric current. The current produces a voltage across a shunt resistance , which is resolved to spherical coordinates by a microprocessor. The system is designed to minimize Lorentz torque on the rotor. [ 5 ]
The magnetic field strength associated with a rotating superconductor is given by:
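$$\mathbf{B} = \frac{2M}{Q}\,\boldsymbol{\omega} \qquad \text{(SI units, with } Q \text{ taken as the magnitude of the carrier charge)},$$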
where M and Q are the mass and the charge of the superconducting charge carriers respectively. [ 6 ] For the case of Cooper pairs of electrons, M = 2 m e and Q = 2 e . Despite the electrons existing in a strongly interacting environment, m e denotes here the mass of the bare electrons [ 7 ] (as in vacuum), and not e.g. the effective mass of conducting electrons of the normal phase.
Named for the physical scientist Fritz London , and moment as in magnetic moment . | https://en.wikipedia.org/wiki/London_moment |
In superconductors , the London penetration depth (usually denoted as $\lambda$ or $\lambda_{L}$) characterizes the distance to which a magnetic field penetrates into a superconductor and becomes equal to $e^{-1}$ times that of the magnetic field at the surface of the superconductor. [ 1 ] Typical values of $\lambda_{L}$ range from 50 to 500 nm. It was first derived by Geertruida de Haas-Lorentz in 1925, and later by Fritz and Heinz London in their London equations (1935). [ 2 ]
The London penetration depth results from considering the London equation and Ampère's circuital law . [ 1 ] If one considers a superconducting half-space , i.e. superconducting for x > 0, and a weak external magnetic field $B_{0}$ applied along the z direction in the empty space x < 0, then inside the superconductor the magnetic field is given by [ 1 ]
$$B(x) = B_{0}\exp\left(-\frac{x}{\lambda_{L}}\right),$$
so $\lambda_{L}$ can be seen as the distance over which the magnetic field becomes $e$ times weaker. The form of $\lambda_{L}$ is found by this method to be [ 1 ]
$$\lambda_{L} = \sqrt{\frac{m}{\mu_{0}nq^{2}}},$$
for charge carriers of mass $m$, number density $n$ and charge $q$.
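As a rough numerical illustration of this expression, the following Python sketch evaluates $\lambda_{L}$ for an assumed, order-of-magnitude carrier density; the value of n_s below is an assumption for illustration, not data for any particular material:

```python
import math

MU_0 = 4e-7 * math.pi   # vacuum permeability, H/m
M_E = 9.109e-31         # electron mass, kg
Q_E = 1.602e-19         # elementary charge, C

def london_penetration_depth(n, m=M_E, q=Q_E):
    """lambda_L = sqrt(m / (mu_0 * n * q**2)) for carriers of mass m, density n, charge q."""
    return math.sqrt(m / (MU_0 * n * q ** 2))

# Assumed carrier density of ~1e28 per cubic metre (illustrative only).
n_s = 1e28
print(f"lambda_L ~ {london_penetration_depth(n_s) * 1e9:.0f} nm")  # prints roughly 53 nm
```

With this input the result comes out near the lower end of the 50–500 nm range quoted above.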
The penetration depth is determined by the superfluid density, which is an important quantity that determines T c in high-temperature superconductors. If a superconductor has nodes in its energy gap , the penetration depth at 0 K depends on the magnetic field, because the superfluid density is changed by the magnetic field and vice versa. Accurate and precise measurements of the absolute value of the penetration depth at 0 K are therefore very important for understanding the mechanism of high-temperature superconductivity.
There are various experimental techniques to determine the London penetration depth, and in particular its temperature dependence. The London penetration depth can be measured by muon spin spectroscopy when the superconductor does not have an intrinsic magnetic constitution. The penetration depth is obtained directly from the depolarization rate of the muon spin, σ ( T ), which is proportional to λ −2 ( T ). The temperature dependence of σ ( T ) differs according to the kind of superconducting energy gap, so that it immediately indicates the shape of the energy gap and gives some clues about the origin of superconductivity. | https://en.wikipedia.org/wiki/London_penetration_depth |
The lone divider procedure is a procedure for proportional cake-cutting . It involves a heterogeneous and divisible resource, such as a birthday cake, and n partners with different preferences over different parts of the cake. It allows the n people to divide the cake among them such that each person receives a piece with a value of at least 1/ n of the total value according to his own subjective valuation.
The procedure was developed by Hugo Steinhaus for n = 3 people. [ 1 ] It was later extended by Harold W. Kuhn to n > 3, using the Frobenius–König theorem . [ 2 ] A description of the cases n = 3 and n = 4 appears in [ 3 ] : 31–35 , and the general case is described in [ 4 ] : 83–87 .
For convenience we normalize the valuations such that the value of the entire cake is n for all agents. The goal is to give each agent a piece with a value of at least 1.
Step 1 . One player chosen arbitrarily, called the divider , cuts the cake into n pieces whose value in his/her eyes is exactly 1.
Step 2 . Each of the other n − 1 partners evaluates the resulting n pieces and says which of these pieces he considers "acceptable", i.e., worth at least 1.
Now the game proceeds according to the replies of the players in step 3. We present first the case n = 3 and then the general case.
There are two cases.
There are several ways to describe the general case; the shorter description appears in [ 5 ] and is based on the concept of envy-free matching – a matching in which no unmatched agent is adjacent to a matched piece.
Step 3 . Construct a bipartite graph G = ( X + Y , E ) in which each vertex in X is an agent, each vertex in Y is a piece, and there is an edge between an agent x and a piece y iff x values y at least 1.
Step 4 . Find a maximum-cardinality envy-free matching in G . Note that the divider is adjacent to all n pieces, so | N G ( X )| = n ≥ | X | (where N G ( X ) is the set of neighbors of X in Y ). Hence, a non-empty envy-free matching exists.
Step 5 . Give each matched piece to its matched agent. Note that each matched agent has a value of at least 1, and thus goes home happily.
Step 6 . Recursively divide the remaining cake among the remaining agents. Note that each remaining agent values each piece given away at less than 1, so he values the remaining cake at more than the number of agents, so the precondition for recursion is satisfied.
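The following Python sketch illustrates one round of the general procedure (steps 3–5, plus identifying the leftover agents and pieces for the recursive step 6). The valuation matrix, the helper names, and the brute-force envy-free matching search are invented for illustration; the brute force only demonstrates the definition and is not the efficient matching algorithm referenced in the literature:

```python
from itertools import combinations, permutations

def max_envy_free_matching(acceptable):
    """Brute-force maximum-cardinality envy-free matching.

    acceptable[i] is the set of piece indices that agent i values at >= 1.
    A matching is envy-free if no unmatched agent finds any matched piece
    acceptable.  Exponential search -- fine only for small illustrative n.
    """
    n = len(acceptable)
    for k in range(n, 0, -1):                       # prefer larger matchings
        for agents in combinations(range(n), k):
            for pieces in permutations(range(n), k):
                match = dict(zip(agents, pieces))
                if any(p not in acceptable[a] for a, p in match.items()):
                    continue                        # a matched agent got an unacceptable piece
                taken = set(match.values())
                unmatched = set(range(n)) - set(agents)
                if any(acceptable[a] & taken for a in unmatched):
                    continue                        # an unmatched agent envies a matched piece
                return match
    return {}                                       # the empty matching is trivially envy-free

def lone_divider_round(values):
    """One round: values[i][j] is agent i's value for piece j, each row sums
    to n, and agent 0 is the divider (so values[0][j] == 1 for every j)."""
    n = len(values)
    acceptable = [{j for j in range(n) if values[i][j] >= 1} for i in range(n)]
    match = max_envy_free_matching(acceptable)
    leftover_agents = [i for i in range(n) if i not in match]
    leftover_pieces = [j for j in range(n) if j not in match.values()]
    return match, leftover_agents, leftover_pieces

# Hypothetical valuations for three agents; agent 0 is the divider.
values = [
    [1.0, 1.0, 1.0],
    [1.3, 1.0, 0.7],
    [0.4, 1.3, 1.3],
]
print(lone_divider_round(values))   # e.g. ({0: 0, 1: 1, 2: 2}, [], [])
```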
At each iteration, the algorithm asks the lone divider at most n mark queries, and each of the other agents at most n eval queries. There are at most n iterations. Therefore, the total number of queries in the Robertson-Webb query model is O( n 2 ) per agent, and O( n 3 ) overall. This is much more than required for last diminisher (O( n ) per agent) and for Even-Paz (O(log n ) per agent). | https://en.wikipedia.org/wiki/Lone_divider |
In chemistry, a lone pair refers to a pair of valence electrons that are not shared with another atom in a covalent bond [ 1 ] and is sometimes called an unshared pair or non-bonding pair . Lone pairs are found in the outermost electron shell of atoms. They can be identified by using a Lewis structure . Electron pairs are therefore considered lone pairs if two electrons are paired but are not used in chemical bonding . Thus, the number of electrons in lone pairs plus the number of electrons in bonds equals the number of valence electrons around an atom.
Lone pair is a concept used in valence shell electron pair repulsion theory (VSEPR theory) which explains the shapes of molecules . They are also referred to in the chemistry of Lewis acids and bases . However, not all non-bonding pairs of electrons are considered by chemists to be lone pairs. Examples are the transition metals where the non-bonding pairs do not influence molecular geometry and are said to be stereochemically inactive. In molecular orbital theory (fully delocalized canonical orbitals or localized in some form), the concept of a lone pair is less distinct, as the correspondence between an orbital and components of a Lewis structure is often not straightforward. Nevertheless, occupied non-bonding orbitals (or orbitals of mostly nonbonding character) are frequently identified as lone pairs.
A single lone pair can be found with atoms in the nitrogen group , such as nitrogen in ammonia . Two lone pairs can be found with atoms in the chalcogen group, such as oxygen in water. The halogens can carry three lone pairs, such as in hydrogen chloride .
In VSEPR theory the electron pairs on the oxygen atom in water form the vertices of a tetrahedron with the lone pairs on two of the four vertices. The H–O–H bond angle is 104.5°, less than the 109.5° predicted for a tetrahedral angle , and this can be explained by a repulsive interaction between the lone pairs. [ 2 ] [ 3 ] [ 4 ]
Various computational criteria for the presence of lone pairs have been proposed. While the electron density ρ( r ) itself generally does not provide useful guidance in this regard, the Laplacian of the electron density is revealing, and one criterion for the location of the lone pair is where L ( r ) = – ∇ 2 ρ( r ) is a local maximum. The minima of the electrostatic potential V ( r ) are another proposed criterion. Yet another considers the electron localization function (ELF). [ 5 ]
The pairs often exhibit a negative polar character with their high charge density and are located closer to the atomic nucleus on average compared to the bonding pair of electrons. The presence of a lone pair decreases the bond angle between the bonding pair of electrons, due to their high electric charge, which causes great repulsion between the electrons. They are also involved in the formation of a dative bond . For example, the creation of the hydronium (H 3 O + ) ion occurs when acids are dissolved in water and is due to the oxygen atom donating a lone pair to the hydrogen ion.
This can be seen more clearly by looking at two more common molecules . For example, in carbon dioxide (CO 2 ), which does not have a lone pair, the oxygen atoms are on opposite sides of the carbon atom ( linear molecular geometry ), whereas in water (H 2 O), which has two lone pairs, the angle between the hydrogen atoms is 104.5° ( bent molecular geometry ). This is caused by the repulsive force of the oxygen atom's two lone pairs pushing the hydrogen atoms further apart, until the forces of all electrons on the hydrogen atom are in equilibrium . This is an illustration of the VSEPR theory .
Lone pairs can contribute to a molecule's dipole moment . NH 3 has a dipole moment of 1.42 D. As the electronegativity of nitrogen (3.04) is greater than that of hydrogen (2.2) the result is that the N-H bonds are polar with a net negative charge on the nitrogen atom and a smaller net positive charge on the hydrogen atoms. There is also a dipole associated with the lone pair and this reinforces the contribution made by the polar covalent N-H bonds to ammonia's dipole moment . In contrast to NH 3 , NF 3 has a much lower dipole moment of 0.234 D. Fluorine is more electronegative than nitrogen and the polarity of the N-F bonds is opposite to that of the N-H bonds in ammonia, so that the dipole due to the lone pair opposes the N-F bond dipoles, resulting in a low molecular dipole moment. [ 6 ]
A lone pair can contribute to the existence of chirality in a molecule, when three other groups attached to an atom all differ. The effect is seen in certain amines , phosphines , [ 7 ] sulfonium and oxonium ions , sulfoxides , and even carbanions .
The resolution of enantiomers where the stereogenic center is an amine is usually precluded because the energy barrier for nitrogen inversion at the stereocenter is low, which allows the two stereoisomers to rapidly interconvert at room temperature. As a result, such chiral amines cannot be resolved, unless the amine's groups are constrained in a cyclic structure (such as in Tröger's base ).
A stereochemically active lone pair is also expected for divalent lead and tin ions due to their formal electronic configuration of n s 2 . In the solid state this results in the distorted metal coordination observed in the tetragonal litharge structure adopted by both PbO and SnO.
The formation of these heavy metal n s 2 lone pairs which was previously attributed to intra-atomic hybridization of the metal s and p states [ 8 ] has recently been shown to have a strong anion dependence. [ 9 ] This dependence on the electronic states of the anion can explain why some divalent lead and tin materials such as PbS and SnTe show no stereochemical evidence of the lone pair and adopt the symmetric rocksalt crystal structure. [ 10 ] [ 11 ]
In molecular systems the lone pair can also result in a distortion in the coordination of ligands around the metal ion. The lone-pair effect of lead can be observed in supramolecular complexes of lead(II) nitrate , and in 2007 a study linked the lone pair to lead poisoning . [ 12 ] Lead ions can replace the native metal ions in several key enzymes, such as zinc cations in the ALAD enzyme, which is also known as porphobilinogen synthase , and is important in the synthesis of heme , a key component of the oxygen-carrying molecule hemoglobin . This inhibition of heme synthesis appears to be the molecular basis of lead poisoning (also called "saturnism" or "plumbism"). [ 13 ] [ 14 ] [ 15 ]
Computational experiments reveal that although the coordination number does not change upon substitution in calcium-binding proteins, the introduction of lead distorts the way the ligands organize themselves to accommodate such an emerging lone pair: consequently, these proteins are perturbed. This lone-pair effect becomes dramatic for zinc-binding proteins, such as the above-mentioned porphobilinogen synthase, as the natural substrate cannot bind anymore – in those cases the protein is inhibited .
In Group 14 elements (the carbon group ), lone pairs can manifest themselves by shortening or lengthening single bond ( bond order 1) lengths, [ 16 ] as well as in the effective order of triple bonds. [ 17 ] [ 18 ] The familiar alkynes have a carbon–carbon triple bond ( bond order 3) and a linear geometry with 180° bond angles (figure A in reference [ 19 ] ). However, further down in the group ( silicon , germanium , and tin ), formal triple bonds have an effective bond order of 2 with one lone pair (figure B [ 19 ] ) and trans -bent geometries. In lead , the effective bond order is reduced even further to a single bond, with two lone pairs for each lead atom (figure C [ 19 ] ). In the organogermanium compound ( Scheme 1 in the reference), the effective bond order is also 1, with complexation of the acidic isonitrile (or isocyanide ) C–N groups, based on interaction with germanium's empty 4p orbital. [ 19 ] [ 20 ]
In elementary chemistry courses, the lone pairs of water are described as "rabbit ears": two equivalent electron pairs of approximately sp 3 hybridization, while the HOH bond angle is 104.5°, slightly smaller than the ideal tetrahedral angle of arccos(–1/3) ≈ 109.47°. The smaller bond angle is rationalized by VSEPR theory by ascribing a larger space requirement for the two identical lone pairs compared to the two bonding pairs. In more advanced courses, an alternative explanation for this phenomenon considers the greater stability of orbitals with excess s character using the theory of isovalent hybridization , in which bonds and lone pairs can be constructed with sp x hybrids wherein nonintegral values of x are allowed, so long as the total amount of s and p character is conserved (one s and three p orbitals in the case of second-row p-block elements).
To determine the hybridization of oxygen orbitals used to form the bonding pairs and lone pairs of water in this picture, we use the formula 1 + x cos θ = 0, which relates the bond angle θ to the hybridization index x . According to this formula, the O–H bonds are considered to be constructed from O bonding orbitals of ~sp 4.0 hybridization (~80% p character, ~20% s character), which leaves behind O lone-pair orbitals of ~sp 2.3 hybridization (~70% p character, ~30% s character). These deviations from idealized sp 3 hybridization (75% p character, 25% s character) for tetrahedral geometry are consistent with Bent's rule : lone pairs localize more electron density closer to the central atom compared to bonding pairs; hence, the use of orbitals with excess s character to form lone pairs (and, consequently, those with excess p character to form bonding pairs) is energetically favorable.
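A short Python check of these numbers, assuming only the Coulson relation quoted above and conservation of the single s orbital over the four hybrids (an illustrative sketch, not a quantum-chemical calculation):

```python
import math

def bond_hybridization_index(angle_deg):
    """Coulson relation 1 + x*cos(theta) = 0 for two equivalent sp^x bond hybrids."""
    return -1.0 / math.cos(math.radians(angle_deg))

theta = 104.5                                 # H-O-H angle in water, degrees
x_bond = bond_hybridization_index(theta)      # ~4.0, i.e. O-H bonds ~sp^4

# One s orbital is shared over four hybrids, so s character is conserved.
s_bond = 1.0 / (1.0 + x_bond)                 # s fraction in each O-H bond hybrid
s_lone = (1.0 - 2.0 * s_bond) / 2.0           # s fraction left for each lone pair
x_lone = (1.0 - s_lone) / s_lone              # p/s ratio, i.e. lone pairs ~sp^2.3

print(f"O-H bond hybrids:  sp^{x_bond:.1f} ({s_bond:.0%} s character)")
print(f"lone-pair hybrids: sp^{x_lone:.1f} ({s_lone:.0%} s character)")
```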
However, theoreticians often prefer an alternative description of water that separates the lone pairs of water according to symmetry with respect to the molecular plane. In this model, there are two energetically and geometrically distinct lone pairs of water possessing different symmetry: one (σ) in-plane and symmetric with respect to the molecular plane and the other (π) perpendicular and anti-symmetric with respect to the molecular plane. The σ-symmetry lone pair (σ(out)) is formed from a hybrid orbital that mixes 2s and 2p character, while the π-symmetry lone pair (p) is of exclusive 2p orbital parentage. The s character rich O σ(out) lone pair orbital (also notated n O (σ) ) is an ~sp 0.7 hybrid (~40% p character, 60% s character), while the p lone pair orbital (also notated n O (π) ) consists of 100% p character.
Both models are of value and represent the same total electron density, with the orbitals related by a unitary transformation . In this case, we can construct the two equivalent lone pair hybrid orbitals h and h ' by taking linear combinations h = c 1 σ(out) + c 2 p and h ' = c 1 σ(out) – c 2 p for an appropriate choice of coefficients c 1 and c 2 . For chemical and physical properties of water that depend on the overall electron distribution of the molecule, the use of h and h ' is just as valid as the use of σ(out) and p. In some cases, such a view is intuitively useful. For example, the stereoelectronic requirement for the anomeric effect can be rationalized using equivalent lone pairs, since it is the overall donation of electron density into the antibonding orbital that matters. An alternative treatment using σ/π separated lone pairs is also valid, but it requires striking a balance between maximizing n O (π) -σ* overlap (maximum at 90° dihedral angle) and n O (σ) -σ* overlap (maximum at 0° dihedral angle), a compromise that leads to the conclusion that a gauche conformation (60° dihedral angle) is most favorable, the same conclusion that the equivalent lone pairs model rationalizes in a much more straightforward manner. [ 21 ] Similarly, the hydrogen bonds of water form along the directions of the "rabbit ears" lone pairs, as a reflection of the increased availability of electrons in these regions. This view is supported computationally. [ 5 ] However, because only the symmetry-adapted canonical orbitals have physically meaningful energies, phenomena that have to do with the energies of individual orbitals, such as photochemical reactivity or photoelectron spectroscopy , are most readily explained using σ and π lone pairs that respect the molecular symmetry . [ 21 ] [ 22 ]
Because of the popularity of VSEPR theory , the treatment of the water lone pairs as equivalent is prevalent in introductory chemistry courses, and many practicing chemists continue to regard it as a useful model. A similar situation arises when describing the two lone pairs on the carbonyl oxygen atom of a ketone . [ 23 ] However, the question of whether it is conceptually useful to derive equivalent orbitals from symmetry-adapted ones, from the standpoint of bonding theory and pedagogy, is still a controversial one, with recent (2014 and 2015) articles opposing [ 24 ] and supporting [ 25 ] the practice. | https://en.wikipedia.org/wiki/Lone_pair |
The Long-Term Pavement Performance Program , known as LTPP , is a research project supported by the Federal Highway Administration (FHWA) to collect and analyze pavement data in the United States and Canada. The LTPP currently maintains the largest road performance database. [ 1 ] [ 2 ]
The LTPP was initiated by the Transportation Research Board (TRB) of the National Research Council (NRC) in the early 1980s. The FHWA, with the cooperation of the American Association of State Highway and Transportation Officials (AASHTO), sponsored the program. The program focused on examining the deterioration of the nation's highway and bridge infrastructure system. In the early 1980s, TRB and NRC suggested that a "Strategic Highway Research Program (SHRP)" should be started to concentrate on research and development activities that would contribute significantly to highway transportation improvement. In 1986, the detailed programs were published under the title "Strategic Highway Research Program—Research Plans".
The LTPP program collects data from in-service roads and analyzes it as planned by the SHRP. The LTPP aims to understand the possible reasons behind the poor or good performance of pavements. Hence, the effects of different parameters such as weather, maintenance actions, material and traffic on performance are studied. Data is collected in a timely manner and then analyzed to understand and predict the performance of roads. The LTPP program was transferred from SHRP to the FHWA in 1992 to continue the work.
LTPP data is collected by four regional contractors. New data is regularly uploaded to the online platform every six months. More than 2,500 pavement test sections are monitored in the LTPP program. These pavement sections include both asphalt and Portland cement concrete . Road sections are located across different states of the United States and provinces of Canada.
The LTPP holds an annual international data analysis contest in collaboration with the ASCE . Participants are required to use the LTPP data. [ 3 ] | https://en.wikipedia.org/wiki/Long-Term_Pavement_Performance |
Long-fiber-reinforced thermoplastics ( LFRTs ) are a type of easily mouldable thermoplastic used to create a variety of components, primarily in the automotive industry. LFRTs are one of the fastest growing categories in thermoplastic technologies. Leading this expansion is one of the oldest forms, glass mat thermoplastic (GMT), and two of the segment's newest: precompounded (pelletized) LFRTs, also known as LFTs, and inline compounded (ILC) or direct LFTs (D-LFTs). [ 1 ]
LFRTs differ from the composite structures used in the aerospace industry for components such as aircraft parts. The fibers in LFRTs are relatively short (6.35 mm/0.25 in. or greater) compared to the fibres contained in composite aircraft components. High performance composites usually contain fibers as long as the component itself (6 metres or longer). [ citation needed ]
Their structural properties and low cost per part have enabled LFRTs to replace metal parts in the automotive industry. In addition, some new organic fibers can even be recyclable. With the independence of choosing the reinforcement from a wide range of fibers and the matrix from a wide range of thermoplastics polymer in the LFRTs, its property can be changed according to customer needs. LFRTs have become an increasingly valuable and popular part of building envelope components such as windows and doors. [ citation needed ]
LFRT components or semi-finished products are made by compression or injection molding. Fibers are contained in the polymer matrix, often in the form of a granulate raw material. [ 2 ] Long-fiber-reinforced thermoplastic compound pellets are typically 10–12 mm in length, with the fiber oriented unidirectionally along the length of the pellet.
| https://en.wikipedia.org/wiki/Long-fiber-reinforced_thermoplastic |
Long-lived fission products (LLFPs) are radioactive materials with a long half-life (more than 200,000 years) produced by nuclear fission of uranium and plutonium . Because of their persistent radiotoxicity , it is necessary to isolate them from humans and the biosphere and to confine them in nuclear waste repositories for geological periods of time. The focus of this article is radioisotopes ( radionuclides ) generated by fission reactors .
Nuclear fission produces fission products , as well as actinides from nuclear fuel nuclei that capture neutrons but fail to fission, and activation products from neutron activation of reactor or environmental materials.
The high short-term radioactivity of spent nuclear fuel is primarily from fission products with short half-life .
The radioactivity in the fission product mixture is mostly due to short-lived isotopes such as 131 I and 140 Ba, after about four months 141 Ce, 95 Zr/ 95 Nb and 89 Sr constitute the largest contributors, while after about two or three years the largest share is taken by 144 Ce/ 144 Pr, 106 Ru/ 106 Rh and 147 Pm.
Note that in the case of a release of radioactivity from a power reactor or used fuel, only some elements are released. As a result, the isotopic signature of the radioactivity is very different from an open air nuclear detonation where all the fission products are dispersed.
After several years of cooling, most radioactivity is from the fission products caesium-137 and strontium-90 , which are each produced in about 6% of fissions, and have half-lives of about 30 years. Other fission products with similar half-lives have much lower fission product yields , lower decay energy , and several ( 151 Sm, 155 Eu, 113m Cd) are also quickly destroyed by neutron capture while still in the reactor, so are not responsible for more than a tiny fraction of the radiation production at any time. Therefore, in the period from several years to several hundred years after use, radioactivity of spent fuel can be modeled simply as exponential decay of the 137 Cs and 90 Sr. These are sometimes known as medium-lived fission products. [ 1 ] [ 2 ]
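A minimal Python sketch of this exponential-decay model (the half-lives used are approximate, commonly quoted values):

```python
HALF_LIFE_YEARS = {"Cs-137": 30.1, "Sr-90": 28.8}   # approximate half-lives

def remaining_fraction(isotope, years):
    """Fraction of the initial activity left after `years` of pure exponential decay."""
    return 0.5 ** (years / HALF_LIFE_YEARS[isotope])

for t in (30, 100, 300, 1000):
    line = ", ".join(f"{iso}: {remaining_fraction(iso, t):.2e}" for iso in HALF_LIFE_YEARS)
    print(f"after {t:>4} years: {line}")
```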
Krypton-85 , the 3rd most active MLFP, is a noble gas which is allowed to escape during current nuclear reprocessing ; however, its inertness means that it does not concentrate in the environment, but diffuses to a uniform low concentration in the atmosphere. Spent fuel in the U.S. and some other countries is not likely to be reprocessed until decades after use, and by that time most of the 85 Kr will have decayed.
No fission products have a half-life in the range of 100 a–210 ka, nor beyond 15.7 Ma. [ 7 ]
After 137 Cs and 90 Sr have decayed to low levels, the bulk of radioactivity from spent fuel come not from fission products but actinides , notably plutonium-239 (half-life 24 ka ), plutonium-240 (6.56 ka), americium-241 (432 years), americium-243 (7.37 ka), curium -245 (8.50 ka), and curium-246 (4.73 ka). These can be recovered by nuclear reprocessing (either before or after most 137 Cs and 90 Sr decay) and fissioned, offering the possibility of greatly reducing waste radioactivity in the time scale of about 10 3 to 10 5 years. 239 Pu is usable as fuel in existing thermal reactors , but some minor actinides like 241 Am, as well as the non- fissile and less- fertile isotope plutonium-242 , are better destroyed in fast reactors , accelerator-driven subcritical reactors , or fusion reactors . Americium-241 has some industrial applications and is used in smoke detectors and is thus often separated from waste as it fetches a price that makes such separation economic.
On scales greater than 10 5 years, fission products, chiefly 99 Tc , again represent a significant proportion of the remaining, though lower radioactivity, along with longer-lived actinides like neptunium-237 and plutonium-242 , if those have not been destroyed.
The most abundant long-lived fission products have total decay energy around 100–300 keV, only part of which appears in the beta particle; the rest is lost to an antineutrino that has no effect. In contrast, actinides undergo multiple alpha decays , each with decay energy around 4–5 MeV.
Only seven fission products have long half-lives, and these are much longer than 30 years, in the range of 200,000 to 16 million years. These are known as long-lived fission products (LLFP). Three have relatively high yields of about 6%, while the rest appear at much lower yields. (This list of seven excludes isotopes with very slow decay and half-lives longer than the age of the universe, which are effectively stable and already found in nature, as well as a few nuclides like technetium -98 and samarium -146 that are "shadowed" from beta decay and can only occur as direct fission products, not as beta decay products of more neutron-rich initial fission products. The shadowed fission products have yields on the order of one millionth as much as iodine-129.)
The first three have similar half-lives, between 200 thousand and 300 thousand years; the last four have longer half-lives, in the low millions of years.
In total, the other six LLFPs, in thermal reactor spent fuel, initially release only a bit more than 10% as much energy per unit time as Tc-99 for U-235 fission, or 25% as much for 65% U-235+35% Pu-239. About 1000 years after fuel use, radioactivity from the medium-lived fission products Cs-137 and Sr-90 drops below the level of radioactivity from Tc-99 or LLFPs in general. (Actinides, if not removed, will be emitting more radioactivity than either at this point.) By about 1 million years, Tc-99 radioactivity will have declined below that of Zr-93, though immobility of the latter means it is probably still a lesser hazard. By about 3 million years, Zr-93 decay energy will have declined below that of I-129.
Nuclear transmutation is under consideration as a disposal method, primarily for Tc-99 and I-129 as these both represent the greatest biohazards and have the greatest neutron capture cross sections , although transmutation is still slow compared to fission of actinides in a reactor. Transmutation has also been considered for Cs-135, but is almost certainly not worthwhile for the other LLFPs. Given that stable caesium-133 is also produced in nuclear fission and both it and its neutron activation product 134 Cs are neutron poisons , transmutation of 135 Cs might necessitate isotope separation . 99 Tc is particularly attractive for transmutation not only due to the undesirable properties of the product to be destroyed and the relatively high neutron absorption cross section but also because 100 Tc rapidly beta decays to stable 100 Ru . Ruthenium has no radioactive isotopes with half-lives much longer than a year, and the price of ruthenium is relatively high, making the destruction of 99 Tc potentially lucrative as a way of producing a precious metal from an undesirable feedstock. | https://en.wikipedia.org/wiki/Long-lived_fission_product
The long-range identification and tracking ( LRIT ) of ships was established as an international system on 19 May 2006 by the International Maritime Organization (IMO) as resolution MSC.202 (81). [ 1 ] This resolution amends Chapter V of the International Convention for the Safety of Life at Sea (SOLAS) , regulation 19-1 and binds all governments which have contracted to the IMO. [ 2 ]
The LRIT regulation will apply to the following ship types engaged on international voyages:
These ships must report their position to their flag administration at least four times a day. Most vessels set their existing satellite communications systems to automatically make these reports. Other contracting governments may request information about vessels in which they have a legitimate interest under the regulation.
The LRIT system consists of the already installed (generally) shipborne satellite communications equipment, communications service providers (CSPs), application service providers (ASPs), LRIT data centres, the LRIT data distribution plan and the International LRIT data exchange. Certain aspects of the performance of the LRIT system are reviewed or audited by the LRIT coordinator acting on behalf of the IMO and its contracting governments.
Some [ who? ] confuse the functions of LRIT with those of AIS ( Automatic Identification System ), a collision avoidance system also mandated by the IMO, which operates in the VHF radio band, with a range only slightly greater than line-of-sight. While AIS was originally designed for short-range operation as a collision avoidance and navigational aid, it has now been shown to be possible to receive AIS signals by satellite in many, but not all, parts of the world. This is becoming known as S-AIS and is completely different from LRIT. The only similarity is that AIS is also collected from space to determine the location of vessels, but it requires no action from the vessels themselves except that they must have their AIS system turned on. LRIT requires the active, willing participation of the vessel involved, which is, in and of itself, a very useful indication as to whether the vessel in question is a lawful actor. Thus the information collected from the two systems, S-AIS and LRIT, is mutually complementary, and S-AIS clearly does not make LRIT superfluous in any manner. Indeed, because of co-channel interference, satellites have difficulty detecting AIS from space near densely populated or congested sea areas. Fixes are under development by several organizations, but how effective they will be remains to be seen.
Following the EU Council Resolution of 2 October 2007, EU Member States (MS) decided to establish an EU LRIT Data Centre (EU LRIT DC). According to the Council Resolution, the Commission is in charge of managing the EU LRIT DC, in cooperation with Member States, through the European Maritime Safety Agency (EMSA). The Agency, in particular, is in charge of the technical development, operation and maintenance of the EU LRIT DC. The Resolution also “stresses that the objective of the EU LRIT DC should include maritime security, Search and Rescue (SAR), maritime safety and protection of the marine environment, taking into consideration respective developments within the IMO context.”
In January 2009, Canada became one of the first SOLAS contracting governments to implement a national data centre and comply with the LRIT regulation.
In January 2009, the United States became one of the first SOLAS contracting governments to implement a National Data Centre and comply with the LRIT regulation. Currently the US Authorized Application Service Provider (ASP) is Pole Star Space Applications Ltd.
LRIT was proposed by the United States Coast Guard (USCG) at the International Maritime Organization (IMO) in London during the aftermath of the September 11, 2001 attacks to track the approximately 50,000 large ships around the world.
In the United States, integration of LRIT information with data from other sensors enables the Coast Guard to correlate Long Range Identification and Tracking (LRIT) data with data from other sources, detect anomalies, and heighten overall Maritime Domain Awareness (MDA). The United States implementation of this regulation is consistent with the Coast Guard's strategic goals of maritime security and maritime safety, and the Department's strategic goals of awareness, prevention, protection, and response.
Every sovereign nation already has the right to request such information (and does so) for ships destined for their ports. The LRIT regulation and computer system will allow the USCG to receive information about all vessels within 1,000 nautical miles (1,900 km) of US territory, provided the vessel's flag administration has not excluded the US from receiving such information.
For a more detailed description of the United States implementation of the LRIT system, please refer to the NPRM published October 3, 2007, in the US Government Federal Register (72 FR 56600). [ 3 ]
The Marshall Islands, one of the largest ship registries in the world, established one of the first prototype Data Centres, using Pole Star Space Applications.
Several African states have formed an LRIT Cooperative Data Centre. South Africa's National Data Centre provides services to a number of African states, including Ghana and the Gambia.
Liberia, the second largest ship registry in the world, established an LRIT Data Centre in 2008. The recognised LRIT provider is Pole Star Space Applications.
In January 2009 Brazil implemented a National Data Centre and was one of the first SOLAS contracting governments to become compliant with the LRIT regulation.
On 11 August 2010, Brazil implemented the Regional LRIT Data Centre Brazil, providing services for Brazil and Uruguay. In 2014, RDC Brazil began providing services for Namibia.
The Venezuelan flag registry appointed Fulcrum Maritime Systems as the sole LRIT application service provider (ASP) and national data center (NDC) provider for all Venezuelan flagged vessels.
The Chilean flag registry appointed Collecte Localisation Satellites (CLS) as the sole LRIT application service provider (ASP). This Data Center provides services for all Chilean and Mexican flagged vessels.
The Republic of Ecuador entered the LRIT production environment on April 15, 2010. Ecuador owns a National LRIT Data Center (NDC) and recognizes its Maritime Authority as its Application Service Provider (ASP).
Honduras Flag appointed Pole Star Space Applications as ASP.
The Panama Flag Registry appointed the Pole Star Space Applications, Inc. / Absolute Maritime Tracking Services Inc. consortium as the sole LRIT Application Service Provider (ASP) and National Data Center (NDC) provider for all Panama flagged vessels. It is widely considered to be the world’s largest vessel monitoring centre, covering over 8,000 SOLAS ships and small craft. The National Data Centre is based in Panama City, and operates on a 24 x 7 x 365 basis.
Panama is the first flag administration to implement such a broad range of Maritime Domain Awareness (MDA) capabilities under a single LRIT service provision, which also includes advanced small-craft monitoring, vessel vetting and sanctions compliance, port-risk mitigation, and fleet-wide Ship Security Alert Service (SSAS) management - including testing, response escalation and notifications.
The Vanuatu flag registry appointed Collecte Localisation Satellites (CLS) as the sole LRIT application service provider (ASP) and national data center (RDC) provider for all Vanuatu flagged vessels.
Singapore established its LRIT National Data Centre with CLS as its LRIT Recognised ASP.
Vietnam entered the LRIT production environment on October 21, 2013. Vietnam owns a National LRIT Data Center (NDC) and recognizes VISHIPEL as its Application Service Provider (ASP). | https://en.wikipedia.org/wiki/Long-range_identification_and_tracking
Long-range optical wireless communication or free-space optical communication ( FSO ) is an optical communication technology that uses light propagating in free space to wirelessly transmit data for telecommunications or computer networking over long distances. "Free space" means air, outer space, vacuum, or something similar. This contrasts with using solids such as optical fiber cable .
The technology is useful where the physical connections are impractical due to high costs or other considerations.
Optical communications , in various forms, have been used for thousands of years. The ancient Greeks used a coded alphabetic system of signalling with torches developed by Cleoxenus, Democleitus and Polybius . [ 1 ] In the modern era, semaphores and wireless solar telegraphs called heliographs were developed, using coded signals to communicate with their recipients.
In 1880, Alexander Graham Bell and his assistant Charles Sumner Tainter created the photophone , at Bell's newly established Volta Laboratory in Washington, DC . Bell considered it his most important invention. The device allowed for the transmission of sound on a beam of light . On June 3, 1880, Bell conducted the world's first wireless telephone transmission between two buildings, some 213 meters (699 feet) apart. [ 2 ] [ 3 ]
Its first practical use came in military communication systems many decades later, first for optical telegraphy. German colonial troops used heliograph telegraphy transmitters during the Herero Wars starting in 1904, in German South-West Africa (today's Namibia ), as did British, French, US and Ottoman signal units.
During the trench warfare of World War I when wire communications were often cut, German signals used three types of optical Morse transmitters called Blinkgerät , the intermediate type for distances of up to 4 km (2.5 mi) at daylight and of up to 8 km (5.0 mi) at night, using red filters for undetected communications. Optical telephone communications were tested at the end of the war, but not introduced at troop level. In addition, special blinkgeräts were used for communication with airplanes, balloons, and tanks, with varying success. [ citation needed ]
A major technological step was to replace the Morse code by modulating optical waves in speech transmission. Carl Zeiss, Jena developed the Lichtsprechgerät 80/80 (literal translation: optical speaking device) that the German army used in their World War II anti-aircraft defense units, or in bunkers at the Atlantic Wall . [ 4 ]
The invention of lasers in the 1960s revolutionized free-space optics. [ citation needed ] Military organizations were particularly interested and boosted their development. In 1973, while prototyping the first laser printers at PARC , Gary Starkweather and others made a duplex 30 Mbit/s CAN optical link using astronomical telescopes and HeNe lasers to send data between offices; they chose the method due partly to less strict regulations (at the time) on free-space optical communication by the FCC . [ 5 ] [ non-primary source needed ] However, laser-based free-space optics lost market momentum when the installation of optical fiber networks for civilian uses was at its peak. [ citation needed ]
Many simple and inexpensive consumer remote controls use low-speed communication using infrared (IR) light. This is known as consumer IR technologies.
Free-space point-to-point optical links can be implemented using infrared laser light, although low-data-rate communication over short distances is possible using LEDs . Infrared Data Association (IrDA) technology is a very simple form of free-space optical communications. On the communications side the FSO technology is considered as a part of the optical wireless communications applications. Free-space optics can be used for communications between spacecraft . [ 6 ]
The reliability of FSO units has always been a problem for commercial telecommunications. Consistently, studies find too many dropped packets and signal errors over small ranges (400 to 500 meters (1,300 to 1,600 ft)). This is from both independent studies, such as in the Czech Republic, [ 7 ] as well as internal studies, such as one conducted by MRV FSO staff. [ 8 ]
Military-based studies consistently produce longer estimates for reliability, projecting that the maximum range for terrestrial links is of the order of 2 to 3 km (1.2 to 1.9 mi). [ 9 ] All studies agree that the stability and quality of the link is highly dependent on atmospheric factors such as rain, fog, dust and heat. Relays may be employed to extend the range of FSO communications. [ 10 ] [ 11 ]
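As a rough sketch of why atmospheric conditions dominate link quality, the following C program evaluates a simplified free-space optical link budget in which attenuation grows linearly in dB with distance (exponentially in linear power); the transmit power, fixed losses, receiver sensitivity and dB/km figures are illustrative assumptions, not values from the studies cited above.

```c
#include <stdio.h>

/* Received power (dBm) for a simple FSO link budget: transmit power minus
 * fixed system losses minus atmospheric attenuation proportional to range. */
static double received_power_dbm(double tx_dbm, double fixed_loss_db,
                                 double atten_db_per_km, double range_km)
{
    return tx_dbm - fixed_loss_db - atten_db_per_km * range_km;
}

int main(void)
{
    const double tx_dbm          = 13.0;  /* ~20 mW transmitter (assumed) */
    const double fixed_loss_db   = 20.0;  /* optics and pointing losses (assumed) */
    const double sensitivity_dbm = -36.0; /* receiver threshold (assumed) */

    /* Clear air vs. moderate fog, illustrative attenuation figures only. */
    const double clear_db_per_km = 1.0;
    const double fog_db_per_km   = 100.0;

    for (double d = 0.5; d <= 3.0; d += 0.5) {
        double clear = received_power_dbm(tx_dbm, fixed_loss_db, clear_db_per_km, d);
        double fog   = received_power_dbm(tx_dbm, fixed_loss_db, fog_db_per_km, d);
        printf("%.1f km: clear %+.1f dBm (%s), fog %+.1f dBm (%s)\n",
               d,
               clear, clear >= sensitivity_dbm ? "ok" : "lost",
               fog,   fog   >= sensitivity_dbm ? "ok" : "lost");
    }
    return 0;
}
```

With these placeholder numbers the clear-air link closes out to 3 km while the foggy link fails within the first kilometre, mirroring the qualitative picture above.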
TMEX USA ran two eight-mile links between Laredo, Texas and Nuevo Laredo, Mexico from 1998 [ 12 ] to 2002. The links operated at 155 Mbit/s and reliably carried phone calls and internet service. [ 13 ] [ dubious – discuss ] [ citation needed ]
The main reason terrestrial communications have been limited to non-commercial telecommunications functions is fog. Fog often prevents FSO laser links over 500 meters (1,600 ft) from achieving a year-round availability sufficient for commercial services. Several entities are continually attempting to overcome these key disadvantages to FSO communications and field a system with a better quality of service . DARPA has sponsored over US$130 million in research toward this effort, with the ORCA and ORCLE programs. [ 14 ] [ 15 ] [ 16 ]
Other non-government groups are fielding tests to evaluate different technologies that some claim have the ability to address key FSO adoption challenges. As of October 2014 [update] , none have fielded a working system that addresses the most common atmospheric events.
FSO research from 1998 to 2006 in the private sector totaled $407.1 million, divided primarily among four start-up companies. All four failed to deliver products that would meet telecommunications quality and distance standards: [ 17 ]
One private company published a paper on November 20, 2014, claiming they had achieved commercial reliability (99.999% availability) in extreme fog. There is no indication this product is currently commercially available. [ 25 ]
The massive advantages of laser communication in space have multiple space agencies racing to develop a stable space communication platform, with many significant demonstrations and achievements.
The first gigabit laser-based communication [ clarification needed ] was achieved by the European Space Agency and called the European Data Relay System (EDRS) on November 28, 2014. The system is operational and is being used on a daily basis.
In December 2023, the Australian National University (ANU) demonstrated its Quantum Optical Ground Station at its Mount Stromlo Observatory . QOGS uses adaptive optics and lasers as part of a telescope, to create a bi-directional communications system capable of supporting the NASA Artemis program to the Moon . [ 26 ]
A two-way distance record for communication was set by the Mercury laser altimeter instrument aboard the MESSENGER spacecraft. It was able to communicate across a distance of 24 million km (15 million mi), as the craft neared Earth on a fly-by in May 2005. The previous record had been set with a one-way detection of laser light from Earth by the Galileo probe, of 6 million km (3.7 million mi) in 1992.
In January 2013, NASA used lasers to beam an image of the Mona Lisa to the Lunar Reconnaissance Orbiter roughly 390,000 km (240,000 mi) away. To compensate for atmospheric interference, an error correction code algorithm similar to that used in CDs was implemented. [ 27 ]
In the early morning hours of October 18, 2013, NASA's Lunar Laser Communication Demonstration (LLCD) transmitted data from lunar orbit to Earth at a rate of 622 megabits per second (Mbit/s). [ 28 ] LLCD was flown aboard the Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft, whose primary science mission was to investigate the tenuous and exotic atmosphere that exists around the Moon.
Between April and July 2014 NASA's OPALS instrument successfully uploaded 175 megabytes in 3.5 seconds and downloaded 200–300 MB in 20 s. [ 29 ] Their system was also able to re-acquire tracking after the signal was lost due to cloud cover.
On December 7, 2021 NASA launched the Laser Communications Relay Demonstration (LCRD), which aims to relay data between spacecraft in geosynchronous orbit and ground stations. LCRD is NASA's first two-way, end-to-end optical relay. LCRD uses two ground stations , Optical Ground Station (OGS)-1 and -2, at Table Mountain Observatory in California, and Haleakalā , Hawaii . [ 30 ] One of LCRD's first operational users is the Integrated LCRD Low-Earth Orbit User Modem and Amplifier Terminal (ILLUMA-T), on the International Space Station. The terminal will receive high-resolution science data from experiments and instruments on board the space station and then transfer this data to LCRD, which will then transmit it to a ground station. After the data arrives on Earth, it will be delivered to mission operation centers and mission scientists. The ILLUMA-T payload was sent to the ISS in late 2023 on SpaceX CRS-29 , and achieved first light on December 5, 2023. [ 31 ] [ 32 ]
On April 28, 2023, NASA and its partners achieved 200 gigabit per second (Gbit/s) throughput on a space-to-ground optical link between a satellite in orbit and Earth. This was achieved by the TeraByte InfraRed Delivery (TBIRD) system, mounted on NASA's Pathfinder Technology Demonstrator 3 (PTD-3) satellite. [ 33 ]
Various satellite constellations that are intended to provide global broadband coverage, such as SpaceX Starlink , employ laser communication for inter-satellite links. This effectively creates a space-based optical mesh network between the satellites.
In 2001, Twibright Labs released RONJA Metropolis , an open-source DIY 10 Mbit/s full-duplex LED FSO system that can span 1.4 km (0.87 mi). [ 34 ] [ 35 ]
In 2004, a visible light communication consortium was formed in Japan . [ 36 ] This was based on work from researchers that used a white LED-based space lighting system for indoor local area network (LAN) communications. These systems present advantages over traditional UHF RF-based systems from improved isolation between systems, the size and cost of receivers/transmitters, RF licensing laws and by combining space lighting and communication into the same system. [ 37 ] In January 2009, a task force for visible light communication was formed by the Institute of Electrical and Electronics Engineers working group for wireless personal area network standards known as IEEE 802.15.7 . [ 38 ] A trial was announced in 2010, in St. Cloud, Minnesota . [ 39 ]
Amateur radio operators have achieved significantly farther distances using incoherent sources of light from high-intensity LEDs. One reported 278 km (173 mi) in 2007. [ 40 ] However, physical limitations of the equipment used limited bandwidths to about 4 kHz . The high sensitivities required of the detector to cover such distances made the internal capacitance of the photodiode used a dominant factor in the high-impedance amplifier which followed it, thus naturally forming a low-pass filter with a cut-off frequency in the 4 kHz range. Lasers can reach very high data rates which are comparable to fiber communications.
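The 4 kHz figure quoted above is consistent with the usual first-order RC cut-off formula f_c = 1/(2πRC); the resistance and capacitance in the sketch below are illustrative values chosen to land near that figure, not measurements from the cited experiments.

```c
#include <stdio.h>

int main(void)
{
    const double pi = 3.14159265358979323846;

    /* First-order low-pass cut-off: f_c = 1 / (2 * pi * R * C). */
    const double r_ohms   = 1.0e6;    /* high-impedance amplifier input (assumed) */
    const double c_farads = 40.0e-12; /* photodiode junction capacitance (assumed) */

    double f_c = 1.0 / (2.0 * pi * r_ohms * c_farads);
    printf("cut-off frequency: about %.0f Hz\n", f_c); /* prints roughly 4000 Hz */
    return 0;
}
```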
Projected data rates and future data rate claims vary. A low-cost white LED (GaN-phosphor) which could be used for space lighting can typically be modulated up to 20 MHz. [ 41 ] Data rates of over 100 Mbit/s can be achieved using efficient modulation schemes and Siemens claimed to have achieved over 500 Mbit/s in 2010. [ 42 ] Research published in 2009, used a similar system for traffic control of automated vehicles with LED traffic lights. [ 43 ]
In September 2013, pureLiFi, the Edinburgh start-up working on Li-Fi , also demonstrated high speed point-to-point connectivity using any off-the-shelf LED light bulb. In previous work, high bandwidth specialist LEDs have been used to achieve the high data rates. The new system, the Li-1st , maximizes the available optical bandwidth for any LED device, thereby reducing the cost and improving the performance of deploying indoor FSO systems. [ 44 ]
Typically, the best scenarios for using this technology are:
The light beam can be very narrow, which makes FSO hard to intercept, improving security. Encryption can secure the data traversing the link. FSO provides vastly improved electromagnetic interference (EMI) behavior compared to using microwaves .
For terrestrial applications, the principal limiting factors are:
These factors cause an attenuated receiver signal and lead to higher bit error ratio (BER). To overcome these issues, vendors found some solutions, like multi-beam or multi-path architectures, which use more than one sender and more than one receiver. Some state-of-the-art devices also have a larger fade margin (extra power, reserved for rain, smog, fog). To keep an eye-safe environment, good FSO systems have a limited laser power density and support laser classes 1 or 1M . Atmospheric and fog attenuation, which are exponential in nature, limit the practical range of FSO devices to several kilometers. However, free-space optics based on the 1550 nm wavelength have considerably lower optical loss than free-space optics using the 830 nm wavelength in dense fog conditions. FSO systems using the 1550 nm wavelength are capable of transmitting several times more power than systems using 850 nm and are safe for the human eye (class 1M). Additionally, some free-space optics, such as EC SYSTEM, [ 47 ] ensure higher connection reliability in bad weather conditions by constantly monitoring link quality to regulate laser diode transmission power with built-in automatic gain control. [ 47 ] | https://en.wikipedia.org/wiki/Long-range_optical_wireless_communication
In astronomy , long-slit spectroscopy involves observing a celestial object using a spectrograph in which the entrance aperture is an elongated, narrow slit. Light entering the slit is then dispersed using a prism , diffraction grating , or grism . The dispersed light is typically recorded on a charge-coupled device detector. [ 1 ]
This technique can be used to observe the rotation curve of a galaxy, as those stars moving towards the observer are blue-shifted , while stars moving away are red-shifted . [ 2 ]
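For non-relativistic speeds, the line-of-sight velocity follows from the Doppler relation v ≈ c·Δλ/λ₀. The short C sketch below applies it to two positions along the slit on opposite sides of a galaxy; the H-alpha rest wavelength is standard, but the observed wavelengths are illustrative assumptions.

```c
#include <stdio.h>

/* Non-relativistic Doppler velocity from a wavelength shift:
 * positive = receding (red-shifted), negative = approaching (blue-shifted). */
static double radial_velocity_km_s(double observed_nm, double rest_nm)
{
    const double c_km_s = 299792.458; /* speed of light */
    return c_km_s * (observed_nm - rest_nm) / rest_nm;
}

int main(void)
{
    const double h_alpha_nm = 656.28; /* rest wavelength of the H-alpha line */

    /* Two slit positions on opposite sides of a galaxy (assumed shifts). */
    printf("approaching side: %+.1f km/s\n", radial_velocity_km_s(655.84, h_alpha_nm));
    printf("receding side:    %+.1f km/s\n", radial_velocity_km_s(656.72, h_alpha_nm));
    return 0;
}
```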
Long-slit spectroscopy can also be used to observe the expansion of optically-thin nebulae. When the spectrographic slit extends over the diameter of a nebula, the lines of the velocity profile meet at the edges. In the middle of the nebula, the line splits in two, since one component is redshifted and one is blueshifted. The blueshifted component will appear brighter, as it is on the "near side" of the nebula and is therefore subject to a smaller degree of attenuation than the light coming from the far side of the nebula. The tapered edges of the velocity profile stem from the fact that the material at the edge of the nebula is moving perpendicular to the line of sight, and so its line-of-sight velocity will be zero relative to the rest of the nebula. [ 3 ]
Several effects can contribute to the transverse broadening of the velocity profile. Individual stars themselves rotate as they orbit, so the side approaching will be blueshifted and the side moving away will be redshifted. Stars also have random (as well as orbital ) motion around the galaxy, meaning any individual star may depart significantly from the rest relative to its neighbours in the rotation curve. In spiral galaxies this random motion is small compared to the low-eccentricity orbital motion, but this is not true for an elliptical galaxy . Molecular-scale Doppler broadening will also contribute.
Long-slit spectroscopy can ameliorate problems with contrast when observing structures near a very luminous source. The structure in question can be observed through a slit, thus occulting the luminous source and allowing a greater signal-to-noise ratio . An example of this application would be the observation of the kinematics of Herbig-Haro objects around their parent star. [ 4 ] | https://en.wikipedia.org/wiki/Long-slit_spectroscopy |
In GSM , a Regular Pulse Excitation-Long Term Prediction ( RPE-LTP ) scheme is employed in order to reduce the amount of data sent between the mobile station (MS) and base transceiver station (BTS).
In essence, when the voltage level of a particular speech sample is quantized, the mobile station's internal logic predicts the voltage level for the next sample. When the next sample is quantized, the packet sent by the MS to the BTS contains only the error (the signed difference between the actual and predicted level of the sample).
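The following toy C program is a minimal sketch of this predict-and-transmit-the-error idea, not the actual GSM 06.10 RPE-LTP codec: the "predictor" simply takes the previous reconstructed sample, and only the signed difference would be sent over the air.

```c
#include <stdio.h>

#define N 8

int main(void)
{
    /* Toy speech samples (assumed values). */
    const int samples[N] = { 0, 12, 25, 30, 28, 20, 9, -3 };

    int predicted = 0;   /* trivial predictor: previous reconstructed sample */
    int reconstructed[N];

    for (int i = 0; i < N; i++) {
        int error = samples[i] - predicted;   /* what would be transmitted */
        reconstructed[i] = predicted + error; /* receiver applies the same predictor */
        printf("sample %3d  predicted %3d  error %+4d\n",
               samples[i], predicted, error);
        predicted = reconstructed[i];
    }
    return 0;
}
```

Because successive speech samples vary slowly, the prediction error is typically small and needs fewer bits to encode than the raw sample values, which is the data reduction the scheme aims at.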
This mobile technology related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Long-term_prediction_(communications) |
Long-term support ( LTS ) is a product lifecycle management policy in which a stable release of computer software is maintained for a longer period of time than the standard edition. The term is typically reserved for open-source software , where it describes a software edition that is supported for months or years longer than the software's standard edition. This is often called an extended-support release .
Short-term support (STS) is a term that distinguishes the support policy for the software's standard edition. STS software has a comparatively short life cycle, and may be afforded new features that are omitted from the LTS edition to avoid potentially compromising the stability or compatibility of the LTS release. [ 1 ]
LTS applies the tenets of reliability engineering to the software development process and software release life cycle . Long-term support extends the period of software maintenance ; it also alters the type and frequency of software updates ( patches ) to reduce the risk , expense, and disruption of software deployment , while promoting the dependability of the software. It does not necessarily imply technical support .
At the beginning of a long-term support period, the software developers impose a feature freeze : They make patches to correct software bugs and vulnerabilities , but do not introduce new features that may cause regression . The software maintainer either distributes patches individually, or packages them in maintenance releases , point releases , or service packs . At the conclusion of the support period, the product either reaches end-of-life , or receives a reduced level of support for a period of time (e.g., high-priority security patches only). [ 2 ]
Before upgrading software, a decision-maker might consider the risk and cost of the upgrade. [ 3 ]
As software developers add new features and fix software bugs, they may introduce new bugs or break old functionality. [ 4 ] When such a flaw occurs in software, it is called a regression . [ 4 ] Two ways that a software publisher or maintainer can reduce the risk of regression are to release major updates less frequently, and to allow users to test an alternate, updated version of the software. [ 3 ] [ 5 ] LTS software applies these two risk-reduction strategies. The LTS edition of the software is published in parallel with the STS (short-term support) edition. Since major updates to the STS edition are published more frequently, it offers LTS users a preview of changes that might be incorporated into the LTS edition when those changes are judged to be of sufficient quality .
While using older versions of software may avoid the risks associated with upgrading, it may introduce the risk of losing support for the old software. [ 6 ] Long-term support addresses this by assuring users and administrators that the software will be maintained for a specific period of time, and that updates selected for publication will carry a significantly reduced risk of regression. [ 2 ] The maintainers of LTS software only publish updates that either have low IT risk or that reduce IT risk (such as security patches ). Patches for LTS software are published with the understanding that installing them is less risky than not installing them.
This table only lists software that has a specific LTS version in addition to a normal release cycle. Many projects, such as CentOS , provide a long period of support for every release.
| https://en.wikipedia.org/wiki/Long-term_support
NASA 's Long Duration Exposure Facility , or LDEF (pronounced "eldef"), was a cylindrical facility designed to provide long-term experimental data on the outer space environment and its effects on space systems, materials, operations and selected spores ' survival. [ 2 ] [ 3 ] It was placed in low Earth orbit by Space Shuttle Challenger in April 1984. The original plan called for the LDEF to be retrieved in March 1985, but after a series of delays it was eventually returned to Earth by Columbia in January 1990. [ 3 ]
It successfully carried science and technology experiments for about 5.7 years that have revealed a broad and detailed collection of space environmental data. LDEF's 69 months in space provided scientific data on the long-term effects of space exposure on materials, components and systems that has benefited NASA spacecraft designers to this day. [ 4 ]
Researchers identified the potential of the planned Space Shuttle to deliver a payload to space, leave it there for a long-term exposure to the harsh outer space environment, and retrieve it for analysis on a separate mission. The LDEF concept evolved from a spacecraft proposed by NASA's Langley Research Center in 1970 to study the meteoroid environment, the Meteoroid and Exposure Module (MEM). [ 2 ] The project was approved in 1974 and LDEF was built at NASA's Langley Research Center . [ 4 ]
LDEF was intended to be reused, and redeployed with new experiments, perhaps every 18 months, [ 5 ] but after the unintended extension of mission 1 the structure itself was treated as an experiment and intensively studied before being placed into storage.
The STS-41-C crew of Challenger deployed LDEF on April 7, 1984, into a nearly circular orbit at an altitude of 257 nautical miles (476 km). [ 6 ]
The LDEF structure was a 12-sided prism (to fit the shuttle orbiter payload bay), made entirely from stainless steel . There were 5 or 6 experiments on each of the 12 long sides and a few more on the ends. It was designed to fly with one end facing Earth and the other facing away from Earth. [ 7 ] Attitude control of LDEF was achieved with gravity-gradient stabilization and inertial distribution to maintain three-axis stability in orbit. Therefore, propulsion or other attitude control systems were not required, making LDEF free of acceleration forces and contaminants from jet firings. [ 4 ] There was also a magnetic/viscous damper to stop any initial oscillation after deployment. [ 7 ]
It had two grapple fixtures : an FRGF and an active (rigidize-sensing) grapple fixture used to send an electronic signal to initiate the 19 experiments that had electrical systems. [ 7 ] This activated the Experiment Initiate System (EIS), [ 8 ] : 1538 which sent 24 initiation signals to the 20 active experiments. There were six initiation indications visible to the deploying astronauts [ 9 ] : 109 next to the active grapple fixture. [ 9 ] : 111
Engineers originally intended that the first mission would last about one year, and that several long-duration exposure missions would use the same frame. In the event, the facility was used for a single 5.7-year mission.
The LDEF facility was designed to glean information vital to the development of Space Station Freedom (which evolved into the International Space Station ) and other spacecraft, especially the reactions of various space building materials to radiation, extreme temperature changes and collisions with space matter.
Some of the experiments had a cover that opened after deployment and was designed to close after about a year, [ 10 ] e.g. , Space Environment Effects (M0006). [ 11 ] Furthermore, interstellar gases would be trapped in an attempt to find clues about the formation of the Milky Way and the evolution of the heavier elements. [ 4 ]
There was no telemetry, but some active experiments recorded data on a magnetic tape recorder that was powered by a lithium sulfur dioxide battery, [ 10 ] e.g. , the Advanced Photovoltaic Experiment (S0014), which recorded data once a day, [ 12 ] the German Solar cell study (S1002), [ 12 ] : 91 and the Space Environment Effects on Fiber Optics Systems (M004). [ 11 ] : 182
Six of the seven active experiments that needed to record data used one or two Experiment Power and Data System (EPDS) modules. [ 8 ] : 1545 Each EPDS contained a processing and control module, a magnetic tape recorder and two LiSO 2 batteries. [ 8 ] : 1536 One experiment (S0069) used a 4-track magnetic tape module that was not part of an EPDS. [ 8 ] : 1540
Fifty-seven science and technology experiments—involving government and university investigators from the United States , Canada , Denmark , France , Germany , Ireland , the Netherlands , Switzerland , and the United Kingdom —flew on the LDEF mission. [ 4 ] [ 3 ] Some example investigations were the effects of exposure on:
and physics in low gravity – e.g. crystal growth. [ 13 ]
At least one of the on-board experiments, the Thermal Control Surfaces Experiment (TCSE), used the RCA 1802 microprocessor. [ 14 ]
In the German experiment EXOSTACK, 30% of Bacillus subtilis spores survived nearly 6 years of exposure to outer space when embedded in salt crystals, whereas 80% survived in the presence of glucose , which stabilizes the structure of the cellular macromolecules, especially during vacuum-induced dehydration. [ 15 ] [ 16 ]
If shielded against solar UV , spores of B. subtilis were capable of surviving in space for up to 6 years, especially if embedded in clay or meteorite powder (artificial meteorites). The data may support the likelihood of interplanetary transfer of microorganisms within meteorites, the so-called lithopanspermia hypothesis. [ 16 ]
The Space Exposed Experiment Developed for Students (SEEDS) allowed students the opportunity to grow tomato plants from control seeds and from seeds that had been exposed on LDEF, comparing and reporting the results. 12.5 million seeds were flown, and students from elementary to graduate school returned 8000 reports to NASA. The L.A. Times misreported that a DNA mutation from space exposure could yield a poisonous fruit. Whilst incorrect, the report served to raise awareness of the experiment and generate discussion. [ 17 ] Space seeds germinated sooner and grew faster than the control seeds. They were also more porous than terrestrial seeds. [ 18 ]
At LDEF's launch, retrieval was scheduled for March 19, 1985, eleven months after deployment. [ 4 ] Schedules slipped, postponing the retrieval mission first to 1986, then indefinitely due to the Challenger disaster . After 5.7 years its orbit had decayed to about 175 nautical miles (324 km) and it was likely to burn up on reentry in a little over a month. [ 6 ] [ 9 ] : 15
It was finally recovered by Columbia on mission STS-32 on January 12, 1990. [ 19 ] Columbia approached LDEF in such a way as to minimize possible contamination to LDEF from thruster exhaust. [ 20 ] While LDEF was still attached to the RMS arm, an extensive 4.5-hour survey photographed each individual experiment tray, as well as larger areas. [ 20 ] Nevertheless, shuttle operations did contaminate experiments when concerns for human comfort outweighed important LDEF mission goals. [ 21 ]
Columbia landed at Edwards Air Force Base on January 20, 1990. [ 4 ] With LDEF still in its bay, Columbia was ferried back on the Shuttle Carrier Aircraft to the Kennedy Space Center on January 26. Special efforts were taken to ensure protection against contamination of the payload bay during the ferry flight. [ 4 ]
Between January 30 and 31, LDEF was removed from Columbia 's payload bay in KSC's Orbiter Processing Facility , placed in a special payload canister, and transported to the Operations and Checkout Building. On February 1, 1990, LDEF was transported in the LDEF Assembly and Transportation System to the Spacecraft Assembly and Encapsulation Facility – 2, where the LDEF project team led deintegration activities. [ 20 ] | https://en.wikipedia.org/wiki/Long_Duration_Exposure_Facility |
The Long Harbour Nickel Processing Plant is a Canadian nickel concentrate processing facility located in Long Harbour , Newfoundland and Labrador .
Operated by Vale Limited , construction on the plant started in April 2009 and operations began in 2014. Construction costs were in excess of CAD $4.25 billion. [ 1 ] Construction involved over 3,200 workers generating approximately 3,000 person-years of employment. Operation of the plant will require approximately 475 workers. [ 2 ]
Production began in July 2014, as reported in November 2014. [ 3 ] Vale's nickel processing plant in Long Harbour received its first major shipment from its Labrador mine in Voisey's Bay in May 2015. As of that date, a small proportion of the plant's raw materials came from Voisey's Bay but the majority were imported from Indonesia. A spokesman for Vale said 100 per cent of the Long Harbour facility's production materials would come from Voisey's Bay by early 2016. [ 4 ]
Using the metal processing technology of hydrometallurgy , the plant is designed to produce 50,000 t (49,000 long tons; 55,000 short tons) per year of finished nickel product, together with associated cobalt and copper products. The hydrometallurgy technology was piloted at a demonstration plant that opened in Argentia , Newfoundland and Labrador in 2004. This demonstration plant operated until 2008 and was instrumental in helping Vale decide to use the hydrometallurgy process for the Long Harbour processing plant.
A processing plant on Newfoundland was one of the requirements established by the Government of Newfoundland and Labrador in order to exploit the nickel deposit at the Voisey's Bay Mine in Labrador . The bulk carrier MV Umiak I was one of several ice-strengthened bulk carriers built to transport nickel concentrate from Voisey's Bay to the Long Harbour Nickel Processing Plant.
The Long Harbour Nickel Processing Plant was built on a partially brownfield site near the port of Long Harbour. The facility consists of a wharf for offloading nickel ore concentrate from bulk carriers, crushing and grinding facilities, a main processing plant approximately 2 km (1.2 mi) south of the port, a pipeline to supply process water, an effluent discharge pipe and diffuser, and a residue pipeline to a nearby disposal area. The hydrometallurgical process in the plant will pressure-leach the nickel ore concentrate in acidic solutions to separate iron, sulfur and other impurities from nickel, copper and cobalt. [ 5 ]
On June 11, 2018, Premier Dwight Ball announced Vale is moving forward with its underground mine at Voisey's Bay. Ball stated that the move will extend the mine's operating life by at least 15 years. [ 6 ] First ore is expected by April 2021 with processing to take place in Long Harbour. | https://en.wikipedia.org/wiki/Long_Harbour_Nickel_Processing_Plant |
In superconductivity , a long Josephson junction (LJJ) is a Josephson junction which has one or more dimensions longer than the Josephson penetration depth λ J {\displaystyle \lambda _{J}} . This definition is not strict.
In terms of the underlying model, a short Josephson junction is characterized by the Josephson phase ϕ ( t ) {\displaystyle \phi (t)} , which is a function of time only, not of the coordinates, i.e., the Josephson junction is assumed to be point-like in space. In contrast, in a long Josephson junction the Josephson phase can be a function of one or two spatial coordinates, i.e., ϕ ( x , t ) {\displaystyle \phi (x,t)} or ϕ ( x , y , t ) {\displaystyle \phi (x,y,t)} .
The simplest and the most frequently used model which describes the dynamics of the Josephson phase ϕ {\displaystyle \phi } in LJJ is the so-called perturbed sine-Gordon equation . For the case of 1D LJJ it looks like:
{\displaystyle \lambda _{J}^{2}\phi _{xx}-\omega _{p}^{-2}\phi _{tt}-\sin \phi =\omega _{c}^{-1}\phi _{t}-{\frac {j}{j_{c}}},}
where subscripts x {\displaystyle x} and t {\displaystyle t} denote partial derivatives with respect to x {\displaystyle x} and t {\displaystyle t} , λ J {\displaystyle \lambda _{J}} is the Josephson penetration depth , ω p {\displaystyle \omega _{p}} is the Josephson plasma frequency , ω c {\displaystyle \omega _{c}} is the so-called characteristic frequency and j / j c {\displaystyle j/j_{c}} is the bias current density j {\displaystyle j} normalized to the critical current density j c {\displaystyle j_{c}} . In the above equation, the r.h.s. is considered as perturbation.
Usually for theoretical studies one uses the normalized sine-Gordon equation:
{\displaystyle \phi _{xx}-\phi _{tt}-\sin \phi =\alpha \phi _{t}-\gamma ,}
where spatial coordinate is normalized to the Josephson penetration depth λ J {\displaystyle \lambda _{J}} and time is normalized to the inverse plasma frequency ω p − 1 {\displaystyle \omega _{p}^{-1}} . The parameter α = 1 / β c {\displaystyle \alpha =1/{\sqrt {\beta _{c}}}} is the dimensionless damping parameter ( β c {\displaystyle \beta _{c}} is McCumber-Stewart parameter ), and, finally, γ = j / j c {\displaystyle \gamma =j/j_{c}} is a normalized bias current.
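For illustration, the normalized equation can be integrated numerically with a simple explicit finite-difference scheme, as in the C sketch below; the damping α, bias γ, grid spacing, time step and initial kink profile are all illustrative choices, not parameters from this article.

```c
#include <math.h>
#include <stdio.h>

#define NX 201
#define NT 2000

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double dx = 0.1, dt = 0.02;   /* grid steps; dt < dx for stability */
    const double alpha = 0.1;           /* damping (assumed) */
    const double gamma_bias = 0.02;     /* normalized bias current (assumed) */

    double prev[NX], cur[NX], next[NX];

    /* Initial condition: a static sine-Gordon kink (fluxon) in mid-junction. */
    for (int i = 0; i < NX; i++) {
        double x = (i - NX / 2) * dx;
        cur[i]  = 4.0 * atan(exp(x));
        prev[i] = cur[i];               /* zero initial velocity */
    }

    /* Explicit update of phi_tt = phi_xx - sin(phi) - alpha*phi_t + gamma. */
    for (int n = 0; n < NT; n++) {
        for (int i = 1; i < NX - 1; i++) {
            double lap = (cur[i + 1] - 2.0 * cur[i] + cur[i - 1]) / (dx * dx);
            double vel = (cur[i] - prev[i]) / dt;
            next[i] = 2.0 * cur[i] - prev[i]
                      + dt * dt * (lap - sin(cur[i]) - alpha * vel + gamma_bias);
        }
        next[0] = next[1];              /* zero-derivative (open) boundaries */
        next[NX - 1] = next[NX - 2];
        for (int i = 0; i < NX; i++) { prev[i] = cur[i]; cur[i] = next[i]; }
    }

    /* The bias current drives the kink along the junction; report where the
     * phase crosses pi, i.e. the approximate kink centre after NT steps. */
    for (int i = 1; i < NX; i++) {
        if (cur[i - 1] < pi && cur[i] >= pi) {
            printf("kink centre near x = %.2f\n", (i - NX / 2) * dx);
            break;
        }
    }
    return 0;
}
```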
The unperturbed sine-Gordon equation ( α = 0 {\displaystyle \alpha =0} , γ = 0 {\displaystyle \gamma =0} ) admits an exact soliton solution, the fluxon (sine-Gordon kink):
{\displaystyle \phi (x,t)=4\arctan \exp \left[\pm {\frac {x-ut}{\sqrt {1-u^{2}}}}\right].}
Here x {\displaystyle x} , t {\displaystyle t} and u = v / c 0 {\displaystyle u=v/c_{0}} are the normalized coordinate, normalized time and normalized velocity. The physical velocity v {\displaystyle v} is normalized to the so-called Swihart velocity c 0 = λ J ω p {\displaystyle c_{0}=\lambda _{J}\omega _{p}} , which represents a typical unit of velocity and is equal to the unit of space λ J {\displaystyle \lambda _{J}} divided by the unit of time ω p − 1 {\displaystyle \omega _{p}^{-1}} . [ 2 ] | https://en.wikipedia.org/wiki/Long_Josephson_junction
In phylogenetics , long branch attraction ( LBA ) is a form of systematic error whereby distantly related lineages are incorrectly inferred to be closely related. [ 1 ] LBA arises when the amount of molecular or morphological change accumulated within a lineage is sufficient to cause that lineage to appear similar (thus closely related) to another long-branched lineage, solely because they have both undergone a large amount of change, rather than because they are related by descent. Such bias is more common when the overall divergence of some taxa results in long branches within a phylogeny . Long branches are often attracted to the base of a phylogenetic tree , because the lineage included to represent an outgroup is often also long-branched. The frequency of true LBA is unclear and often debated, [ 1 ] [ 2 ] [ 3 ] and some authors view it as untestable and therefore irrelevant to empirical phylogenetic inference. [ 4 ] Although often viewed as a failing of parsimony-based methodology, LBA could in principle result from a variety of scenarios and be inferred under multiple analytical paradigms.
LBA was first recognized as problematic when analyzing discrete morphological character sets under parsimony criteria, however Maximum Likelihood analyses of DNA or protein sequences are also susceptible. A simple hypothetical example can be found in Felsenstein 1978 where it is demonstrated that for certain unknown "true" trees, some methods can show bias for grouping long branches, ultimately resulting in the inference of a false sister relationship. [ 5 ] Often this is because convergent evolution of one or more characters included in the analysis has occurred in multiple taxa. Although they were derived independently, these shared traits can be misinterpreted in the analysis as being shared due to common ancestry.
In phylogenetic and clustering analyses , LBA is a result of the way clustering algorithms work: terminals or taxa with many autapomorphies (character states unique to a single branch) may by chance exhibit the same states as those on another branch ( homoplasy ). A phylogenetic analysis will group these taxa together as a clade unless other synapomorphies outweigh the homoplastic features to group together true sister taxa.
These problems may be minimized by using methods that correct for multiple substitutions at the same site, by adding taxa related to those with the long branches that add additional true synapomorphies to the data, or by using alternative slower evolving traits (e.g. more conservative gene regions).
The result of LBA in evolutionary analyses is that rapidly evolving lineages may be inferred to be sister taxa, regardless of their true relationships. For example, in DNA sequence-based analyses, the problem arises when sequences from two (or more) lineages evolve rapidly. There are only four possible nucleotides and when DNA substitution rates are high, the probability that two lineages will evolve the same nucleotide at the same site increases. When this happens, a phylogenetic analysis may erroneously interpret this homoplasy as a synapomorphy (i.e., evolving once in the common ancestor of the two lineages).
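To put a number on this, the sketch below uses the symmetric four-state (Jukes–Cantor) substitution model, an assumption introduced here for illustration rather than something stated in the article: under that model the chance that two lineages starting from the same ancestral nucleotide independently end up sharing the same derived nucleotide grows with branch length, approaching 3/16.

```c
#include <math.h>
#include <stdio.h>

/* Under the symmetric 4-state (Jukes-Cantor) model, probability that a
 * lineage of length d (expected substitutions per site) ends in one specific
 * state different from its ancestral state. */
static double p_specific_change(double d)
{
    return 0.25 * (1.0 - exp(-4.0 * d / 3.0));
}

/* Probability that two independent lineages of lengths d1 and d2 both leave
 * the ancestral state and converge on the same derived nucleotide
 * (3 possible shared derived states). */
static double p_convergence(double d1, double d2)
{
    return 3.0 * p_specific_change(d1) * p_specific_change(d2);
}

int main(void)
{
    const double lengths[] = { 0.05, 0.2, 0.5, 1.0, 2.0, 5.0 };
    for (unsigned i = 0; i < sizeof lengths / sizeof lengths[0]; i++) {
        double d = lengths[i];
        printf("branch length %.2f: P(convergent shared state) = %.3f\n",
               d, p_convergence(d, d));
    }
    return 0;
}
```

Even moderately long branches therefore share derived states at an appreciable rate purely by chance, which is exactly the kind of signal a tree-building method can misread as synapomorphy.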
The opposite effect may also be observed, in that if two (or more) branches exhibit particularly slow evolution among a wider, fast evolving group, those branches may be misinterpreted as closely related. As such, "long branch attraction" can in some ways be better expressed as "branch length attraction". However, it is typically long branches that exhibit attraction.
The recognition of long-branch attraction implies that there is some other evidence that suggests that the phylogeny is incorrect. For example, two different sources of data (i.e. molecular and morphological) or even different methods or partition schemes might support different placement for the long-branched groups. [ 6 ] Hennig's Auxiliary Principle suggests that synapomorphies should be viewed as de facto evidence of grouping unless there is specific contrary evidence (Hennig, 1966; Schuh and Brower, 2009).
A simple and effective method for determining whether or not long branch attraction is affecting tree topology is the SAW method, named for Siddal and Whiting. If long branch attraction is suspected between a pair of taxa (A and B), simply remove taxon A ("saw" off the branch) and re-run the analysis. Then remove B and replace A, running the analysis again. If either of the taxa appears at a different branch point in the absence of the other, there is evidence of long branch attraction. Since long branches can't possibly attract one another when only one is in the analysis, consistent taxon placement between treatments would indicate long branch attraction is not a problem. [ 7 ]
Assume for simplicity that we are considering a single binary character (it can either be + or –) distributed on the unrooted "true tree" with branch lengths proportional to amount of character state change, shown in the figure. Because the evolutionary distance from B to D is small, we assume that in the vast majority of all cases, B and D will exhibit the same character state. Here, we will assume that they are both + (+ and – are assigned arbitrarily and swapping them is only a matter of definition). If this is the case, there are four remaining possibilities. A and C can both be +, in which case all taxa are the same and all the trees have the same length. A can be + and C can be –, in which case only one character is different, and we cannot learn anything, as all trees have the same length. Similarly, A can be – and C can be +. The only remaining possibility is that A and C are both –. In this case, however, we view either A and C, or B and D, as a group with respect to the other (one character state is ancestral, the other is derived, and the ancestral state does not define a group). As a consequence, when we have a "true tree" of this type, the more data we collect (i.e. the more characters we study), the more of them are homoplastic and support the wrong tree. [ 8 ] Of course, when dealing with empirical data in phylogenetic studies of actual organisms, we never know the topology of the true tree, and the more parsimonious (AC) or (BD) might well be the correct hypothesis. | https://en.wikipedia.org/wiki/Long_branch_attraction |
In theoretical computer science and coding theory , the long code is an error-correcting code that is locally decodable . Long codes have an extremely poor rate, but play a fundamental role in the theory of hardness of approximation .
Let f 1 , … , f 2 n : { 0 , 1 } k → { 0 , 1 } {\displaystyle f_{1},\dots ,f_{2^{n}}:\{0,1\}^{k}\to \{0,1\}} for k = log n {\displaystyle k=\log n} be the list of all functions from { 0 , 1 } k → { 0 , 1 } {\displaystyle \{0,1\}^{k}\to \{0,1\}} .
Then the long code encoding of a message x ∈ { 0 , 1 } k {\displaystyle x\in \{0,1\}^{k}} is the string f 1 ( x ) ∘ f 2 ( x ) ∘ ⋯ ∘ f 2 n ( x ) {\displaystyle f_{1}(x)\circ f_{2}(x)\circ \dots \circ f_{2^{n}}(x)} where ∘ {\displaystyle \circ } denotes concatenation of strings.
This string has length 2 n = 2 2 k {\displaystyle 2^{n}=2^{2^{k}}} .
The Walsh-Hadamard code is a subcode of the long code, and can be obtained by only using functions f i {\displaystyle f_{i}} that are linear functions when interpreted as functions F 2 k → F 2 {\displaystyle \mathbb {F} _{2}^{k}\to \mathbb {F} _{2}} on the finite field with two elements. Since there are only 2 k {\displaystyle 2^{k}} such functions, the block length of the Walsh-Hadamard code is 2 k {\displaystyle 2^{k}} .
An equivalent definition of the long code is as follows:
The Long code encoding of j ∈ [ n ] {\displaystyle j\in [n]} is defined to be the truth table of the Boolean dictatorship function on the j {\displaystyle j} th coordinate, i.e., the truth table of f : { 0 , 1 } n → { 0 , 1 } {\displaystyle f:\{0,1\}^{n}\to \{0,1\}} with f ( x 1 , … , x n ) = x j {\displaystyle f(x_{1},\dots ,x_{n})=x_{j}} . [ 1 ] Thus, the Long code encodes a ( log n ) {\displaystyle (\log n)} -bit string as a 2 n {\displaystyle 2^{n}} -bit string.
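The sketch below generates these truth tables for a toy n in C; the convention of enumerating assignments as the integers 0 to 2^n − 1 and reading bit j of the integer as x_j is an illustrative choice, not part of the definition.

```c
#include <stdio.h>

/* Long code encoding of j in [n]: the truth table of the dictatorship
 * function f(x_1,...,x_n) = x_j, listed over all 2^n assignments.
 * Assignments are enumerated as the integers 0 .. 2^n - 1, and bit j of
 * the integer is read as x_j (an illustrative convention). */
static void long_code_encode(unsigned j, unsigned n, char *out /* size 2^n + 1 */)
{
    unsigned long total = 1UL << n;
    for (unsigned long x = 0; x < total; x++)
        out[x] = ((x >> j) & 1UL) ? '1' : '0';
    out[total] = '\0';
}

int main(void)
{
    enum { N = 3 };                 /* toy size: codewords of length 2^3 = 8 */
    char word[(1u << N) + 1];

    for (unsigned j = 0; j < N; j++) {
        long_code_encode(j, N, word);
        printf("encoding of j = %u: %s\n", j, word);
    }
    return 0;
}
```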
The long code does not contain repetitions, in the sense that the function f i {\displaystyle f_{i}} computing the i {\displaystyle i} th bit of the output is different from any function f j {\displaystyle f_{j}} computing the j {\displaystyle j} th bit of the output for j ≠ i {\displaystyle j\neq i} .
Among all codes that do not contain repetitions, the long code has the longest possible output.
Moreover, it contains all non-repeating codes as a subcode. | https://en.wikipedia.org/wiki/Long_code_(mathematics) |
Long delayed echoes ( LDEs ) are radio echoes which return to the sender several seconds after a radio transmission has occurred. Delays of longer than 2.7 seconds are considered LDEs. [ 1 ] [ 2 ] LDEs are considered anomalous and have a number of proposed scientific origins.
These echoes were first observed in 1927 by civil engineer and amateur radio operator Jørgen Hals from his home near Oslo , Norway . [ 3 ] Hals had repeatedly observed an unexpected second radio echo with a significant time delay after the primary radio echo ended. Unable to account for this strange phenomenon, he wrote a letter to Norwegian physicist Carl Størmer , explaining the event:
At the end of the summer of 1927 I repeatedly heard signals from the Dutch short-wave transmitting station PCJJ at Eindhoven. At the same time as I heard these I also heard echoes. I heard the usual echo which goes round the Earth with an interval of about 1/7 of a second as well as a weaker echo about three seconds after the principal echo had gone. When the principal signal was especially strong, I suppose the amplitude for the last echo three seconds later, lay between 1/10 and 1/20 of the principal signal in strength. From where this echo comes I cannot say for the present, I can only confirm that I really heard it. [ 4 ]
Physicist Balthasar van der Pol [ 5 ] helped Hals and Stormer investigate the echoes, but due to the sporadic nature of the echo events and variations in time-delay, did not find a suitable explanation. [ 6 ]
Long delayed echoes have been heard sporadically from the first observations in 1927 and up to the present day.
Shlionskiy lists 15 possible natural explanations in two groups: reflections in outer space, and reflections within the Earth's magnetosphere. [ 7 ] [ 8 ] Vidmar and Crawford suggest five of them are the most likely. [ 9 ] Sverre Holm, professor of signal processing at the University of Oslo details those five; [ 10 ] in summary,
Radio waves of frequency less than about 7 MHz can become trapped in magnetic field-aligned ionization ducts with L values (distance from the center of the Earth to the field line at the magnetic equator) less than about 4. These waves after being trapped can propagate to the opposite hemisphere where they become reflected in the topside ionosphere. They can return along the duct, leave it, and propagate to the receiver. [ 12 ]
The signals from two separated transmitters T1 and T2, T2 transmitting CW or quasi-CW signals, interact nonlinearly in the ionosphere or magnetosphere. If the wave vector and frequency of the forced oscillation at the difference frequency of the two signals satisfies the dispersion relation for electrostatic waves, such waves would exist and begin to propagate. This wave could grow in amplitude due to wave-particle interaction. At a later time it could interact with the CW signal and propagate to T1. [ 16 ]
Some believe that the aurora activity that follows a solar storm is the source of LDEs.
Still others believe that LDEs are double EME (EMEME) reflections, i.e. the signal is reflected by the Moon and that reflected signal is reflected by the Earth back to the Moon and reflected again by the Moon back to the Earth.
When discussing the use of automated probes as a potential means of contact with extraterrestrial civilizations , American physicist Ronald Bracewell proposed that such probes might try to attract attention by sending back to us our own signals, citing the long delayed echoes as a possible case. [ 20 ] This concept was expanded upon by Duncan Lunan , [ 1 ] and also addressed by Holm. [ 10 ]
Volker Grassmann writing in VHF Communications noted the possibility of individuals hoaxing LDEs, saying, "Attempts at deception can in no case be ruled out, and it is to be feared that less serious radio amateurs contribute to deliberate falsification.... Short transmissions using different frequencies are a relatively simple procedure for excluding potential troublemakers." [ 6 ] To reduce the possibilities of errors or hoaxes a worldwide logging system has been developed. [ 21 ] | https://en.wikipedia.org/wiki/Long_delayed_echo |
In C and related programming languages , long double refers to a floating-point data type that is often more precise than double precision though the language standard only requires it to be at least as precise as double . As with C's other floating-point types, it may not necessarily map to an IEEE format .
The long double type was present in the original 1989 C standard, [ 1 ] but support was improved by the 1999 revision of the C standard, or C99 , which extended the standard library to include functions operating on long double such as sinl() and strtold() .
Long double constants are floating-point constants suffixed with "L" or "l" (lower-case L), e.g., 0.3333333333333333333333333333333333L or 3.1415926535897932384626433832795029L for quadruple precision . Without a suffix, the evaluation depends on FLT_EVAL_METHOD .
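A minimal sketch (illustrative only, not taken from the source) of how such suffixed constants are written and printed in C; the number of meaningful digits actually preserved depends on the platform's long double format:

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* The "L" suffix makes these constants long double;
       without it they would be ordinary double constants. */
    long double third = 0.3333333333333333333333333333333333L;
    long double pi    = 3.1415926535897932384626433832795029L;

    /* %Lg is the printf conversion for long double arguments. */
    printf("third = %.21Lg\n", third);
    printf("pi    = %.21Lg\n", pi);

    /* LDBL_DIG: decimal digits of precision the format guarantees. */
    printf("LDBL_DIG = %d\n", LDBL_DIG);
    return 0;
}
```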
On the x86 architecture, most C compilers implement long double as the 80-bit extended precision type supported by x86 hardware (generally stored as 12 or 16 bytes to maintain data structure alignment ), as specified in the C99 / C11 standards (Annex F, IEC 60559 floating-point arithmetic). An exception is Microsoft Visual C++ for x86, which makes long double a synonym for double . [ 2 ] The Intel C++ compiler on Microsoft Windows supports extended precision, but requires the /Qlong-double switch for long double to correspond to the hardware's extended precision format. [ 3 ]
Compilers may also use long double for the IEEE 754 quadruple-precision binary floating-point format (binary128). This is the case on HP-UX , [ 4 ] Solaris / SPARC , [ 5 ] MIPS with the 64-bit or n32 ABI , [ 6 ] 64-bit ARM (AArch64) [ 7 ] (on operating systems using the standard AAPCS calling conventions, such as Linux), and z/OS with FLOAT(IEEE) [ 8 ] [ 9 ] [ 10 ] . Most implementations are in software, but some processors have hardware support .
On some PowerPC systems, [ 11 ] long double is implemented as double-double arithmetic, where a long double value is regarded as the exact sum of two double-precision values, giving at least 106 bits of precision; with such a format, the long double type does not conform to the IEEE floating-point standard . Otherwise, long double is simply a synonym for double (double precision), e.g. on 32-bit ARM , [ 12 ] 64-bit ARM (AArch64) (on Windows [ 13 ] and macOS [ 14 ] ) and on 32-bit MIPS [ 15 ] (old ABI, a.k.a. o32).
With the GNU C Compiler , long double is 80-bit extended precision on x86 processors regardless of the physical storage used for the type (which can be either 96 or 128 bits). [ 16 ] On some other architectures, long double can be double-double (e.g. on PowerPC [ 17 ] [ 18 ] [ 19 ] ) or 128-bit quadruple precision (e.g. on SPARC [ 20 ] ). As of gcc 4.3, quadruple precision is also supported on x86, but as the nonstandard type __float128 rather than long double . [ 21 ]
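One hedged way to see which of the representations described above a given compiler actually uses is to print the <float.h> characteristics of long double: an LDBL_MANT_DIG of 53 corresponds to plain double, 64 to the x87 80-bit extended format, 106 to double-double, and 113 to IEEE binary128 (a sketch, not from the source):

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Significand width of long double:
       53 = same as double, 64 = x87 extended,
       106 = double-double, 113 = IEEE quadruple (binary128). */
    printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
    printf("LDBL_MAX_EXP  = %d\n", LDBL_MAX_EXP);

    /* Storage may exceed the bits actually used, e.g. the 80-bit
       x87 format is commonly padded to 12 or 16 bytes for alignment. */
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    return 0;
}
```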
Although the x86 architecture, and specifically the x87 floating-point instructions on x86, supports 80-bit extended-precision operations, it is possible to configure the processor to automatically round operations to double (or even single) precision. Conversely, in extended-precision mode, extended precision may be used for intermediate compiler-generated calculations even when the final results are stored at a lower precision (i.e. FLT_EVAL_METHOD == 2 ). With gcc on Linux , 80-bit extended precision is the default; on several BSD operating systems ( FreeBSD and OpenBSD ), double-precision mode is the default, and long double operations are effectively reduced to double precision. [ 22 ] ( NetBSD 7.0 and later, however, defaults to 80-bit extended precision [ 23 ] ). However, it is possible to override this within an individual program via the FLDCW "floating-point load control-word" instruction. [ 22 ] On x86_64, the BSDs default to 80-bit extended precision. Microsoft Windows with Visual C++ also sets the processor in double-precision mode by default, but this can again be overridden within an individual program (e.g. by the _controlfp_s function in Visual C++ [ 24 ] ). The Intel C++ Compiler for x86, on the other hand, enables extended-precision mode by default. [ 25 ] On IA-32 OS X, long double is 80-bit extended precision. [ 26 ]
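As a hedged, MSVC-specific sketch of the per-program override mentioned above (the _controlfp_s call and the _MCW_PC / _PC_64 macros come from Microsoft's <float.h> and are not part of standard C; other toolchains would set the x87 control word differently, e.g. via the FLDCW instruction):

```c
#include <float.h>   /* MSVC: _controlfp_s, _MCW_PC, _PC_64 */
#include <stdio.h>

int main(void)
{
    unsigned int control = 0;

    /* Ask the x87 unit for 64-bit (extended) significands; Visual C++
       normally leaves it in 53-bit (double) precision mode. */
    if (_controlfp_s(&control, _PC_64, _MCW_PC) == 0)
        printf("x87 control word now 0x%x\n", control);
    else
        printf("could not change precision control\n");
    return 0;
}
```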
In CORBA (as of specification version 3.0, which uses " ANSI/IEEE Standard 754-1985 " as its reference), "the long double data type represents an IEEE double-extended floating-point number, which has an exponent of at least 15 bits in length and a signed fraction of at least 64 bits". GIOP/IIOP CDR, whose floating-point types "exactly follow the IEEE standard formats for floating point numbers", marshals this as what appears to be IEEE 754-2008 binary128, a.k.a. quadruple precision, without using that name. | https://en.wikipedia.org/wiki/Long_double
The long hundred , also known as the great hundred or twelfty , [ 1 ] is the number 120 (in base-10 Hindu-Arabic numerals ) that was referred to as hund, hund-teontig, hundrað , hundrath , or hundred in Germanic languages prior to the 15th century, and is now known as one hundred ( and ) twenty , or six score . The number was translated into Latin in Germanic-speaking countries as centum ( Roman numeral C), but the qualifier long is now added because English now uses hundred exclusively to refer to the number of five score ( 100 ) instead.
The long hundred was 120, but the long thousand was reckoned decimally as 10 long hundreds ( 1200 ).
The hundred ( Latin : centena ) was an English unit of measurement used in the production, sale and taxation of various items in the medieval kingdom of England . The value was often different from 100 units, mostly because of the continued medieval use of the Germanic long hundred of 120 . The unit's use as a measure of weight is now described as a hundredweight , i.e. 112 pounds.
The Latin edition of the Assize of Weights and Measures , one of the statutes of uncertain date from around 1300, describes hundreds of (red) herring (a long hundred of 120 fish), beeswax , sugar , pepper , cumin , and alum (" 13 + 1 / 2 stone , each stone containing 8 pounds ", or 108 Tower lbs. ), coarse and woven linen , hemp canvas (a long hundred of 120 ells ), and iron or horseshoes and shillings (a short hundred of 100 pieces). [ 2 ] Later versions used the Troy or the avoirdupois pound in their reckonings instead and included hundreds of fresh herrings (a short hundred of 100 fish), cinnamon , nutmegs ( 13 + 1 / 2 stone of 8 lb), and garlic ("15 ropes of 15 heads", or 225 heads). [ 3 ]
The existence of a non-decimal base in the earliest traces of the Germanic languages is attested by the presence of glosses such as "tenty-wise" and "ten-count" to denote that certain numbers are to be understood as decimal . Such glosses would not be expected where decimal counting was usual. In the Gothic Bible , [ 4 ] some marginalia glosses a five hundred ( fimf hundram ) in the text as being understood taihuntewjam ("tenty-wise"). Similar words are known in most other Germanic languages. Old Norse counted large numbers in twelves of tens, with its words "one hundred and eighty" ( hundrað ok átta tigir ) meaning 200, "two hundred" ( tvau hundrað ) meaning 240 and "thousand" ( þúsund , Old English : þúsend ) meaning 1200. [ 5 ] The word to signify 100 (a "short hundred") was originally tíu tigir ( lit. ' ten tens ' ). The use of the long hundred in medieval England and Scotland is documented by Stevenson [ 6 ] and Goodare although the latter notes that it was sometimes avoided by using numbers such as "seven score". [ 7 ] The Assize of Weights and Measures , one of England 's statutes of uncertain date from c. 1300 , shows both the short and long hundred in competing use.
The hundred of kippers is formed by six score fish and the hundred of hemp canvas and linen cloth is formed by six score ells , but the hundred of pounds , to be used in measuring bulk goods, is five times twenty, and the hundred of fresh herring is five score fish. [ 8 ] Within the original Latin text, the numeral c. is used for a value of 120: Et quodlibet c. continet vi. xx. ("And each such 'hundred' contains six twenties.") [ 2 ] Once the short hundred began coming into use, Old Norse referred to the long hundred as hundrað tolf-roett ( lit. ' duodecimal hundred ' ), as opposed to the short hundrað ti-rætt ( lit. ' decimal hundred ' ) .
Measuring by long hundreds declined as Arabic numerals , which use a uniform base of 10, spread throughout Europe during and after the 14th century. In modern times, J. R. R. Tolkien 's use of the long hundred system within The Lord of the Rings helped popularize the word eleventy in modern English, primarily as a colloquial word for an indefinitely large number. | https://en.wikipedia.org/wiki/Long_hundred
Long interspersed nuclear elements ( LINEs ) [ 1 ] (also known as long interspersed nucleotide elements [ 2 ] or long interspersed elements [ 3 ] ) are a group of non-LTR ( long terminal repeat ) retrotransposons that are widespread in the genome of many eukaryotes . [ 4 ] [ 5 ] LINEs contain an internal Pol II promoter to initiate transcription into mRNA , and encode one or two proteins, ORF1 and ORF2. [ 6 ] The functional domains present within ORF1 vary greatly among LINEs, but often exhibit RNA/DNA binding activity. ORF2 is essential to successful retrotransposition, and encodes a protein with both reverse transcriptase and endonuclease activity. [ 7 ]
LINEs are the most abundant transposable element within the human genome , [ 8 ] with approximately 20.7% of the sequences identified as being derived from LINEs. The only active lineage of LINE found within humans belongs to the LINE-1 class, and is referred to as L1Hs. [ 9 ] The human genome contains an estimated 100,000 truncated and 4,000 full-length LINE-1 elements. [ 10 ] Due to the accumulation of random mutations, the sequence of many LINEs has degenerated to the extent that they are no longer transcribed or translated. Comparisons of LINE DNA sequences can be used to date transposon insertions in the genome.
The first description of an approximately 6.4 kb long LINE-derived sequence was published by J. Adams et al. in 1980. [ 11 ]
Based on structural features and the phylogeny of the essential protein ORF2p, LINEs can be separated into six main groups, referred to as R2, RanI, L1, RTE, I and Jockey. These groups can further be subdivided into at least 28 clades. [ 12 ]
In plant genomes, so far only LINEs of the L1 and RTE clade have been reported. [ 13 ] [ 14 ] [ 15 ] Whereas L1 elements diversify into several subclades, RTE-type LINEs are highly conserved, often constituting a single family. [ 16 ] [ 17 ]
In fungi, Tad, L1, CRE, Deceiver and Inkcap-like elements have been identified, [ 18 ] with Tad-like elements appearing exclusively in fungal genomes. [ 19 ]
All LINEs encode at least one protein, ORF2, which contains an RT and an endonuclease (EN) domain, either an N-terminal APE or a C-terminal RLE, or rarely both. A ribonuclease H domain is occasionally present. Except for the evolutionarily ancient R2 and RTE superfamilies, LINEs usually encode another protein named ORF1, which may contain a Gag-knuckle , an L1-like RRM ( InterPro : IPR035300 ), and/or an esterase. LINE elements are relatively rare compared to LTR retrotransposons in plants, fungi or insects, but are dominant in vertebrates and especially in mammals, where they represent around 20% of the genome. [ 12 ] : fig. 1
The LINE-1/L1 -element is one of the elements that are still active in the human genome today. It is found in all therian mammals [ 20 ] [ 21 ] except megabats . [ 22 ]
Remnants of L2 and L3 elements are found in the human genome. [ 23 ] It is estimated that L2 and L3 elements were active ~200-300 million years ago. Due to the age of L2 elements found within therian genomes, they lack flanking target site duplications. [ 24 ] The L2 (and L3) elements are in the same group as the CR1 clade, Jockey. [ 25 ]
In the first human genome draft the fraction of LINE elements of the human genome was given as 21% and their copy number as 850,000. Of these, L1 , L2 and L3 elements made up 516,000, 315,000 and 37,000 copies, respectively. The non-autonomous SINE elements which depend on L1 elements for their proliferation make up 13% of the human genome and have a copy number of around 1.5 million. [ 23 ] They probably originated from the RTE family of LINEs. [ 26 ] Recent estimates show the typical human genome contains on average 100 L1 elements with potential for mobilization, however there is a fair amount of variation and some individuals may contain a larger number of active L1 elements, making these individuals more prone to L1-induced mutagenesis. [ 27 ]
Increased L1 copy numbers have also been found in the brains of people with schizophrenia, indicating that LINE elements may play a role in some neuronal diseases. [ 28 ]
LINE elements propagate by a so-called target-primed reverse transcription (TPRT) mechanism, which was first described for the R2 element from the silkworm Bombyx mori .
ORF2 (and ORF1 when present) proteins primarily associate in cis with their encoding mRNA , forming a ribonucleoprotein (RNP) complex, likely composed of two ORF2s and an unknown number of ORF1 trimers. [ 29 ] The complex is transported back into the nucleus , where the ORF2 endonuclease domain opens the DNA (at TTAAAA hexanucleotide motifs in mammals [ 30 ] ). Thus, a 3'OH group is freed for the reverse transcriptase to prime reverse transcription of the LINE RNA transcript. Following reverse transcription, the target strand is cleaved and the newly created cDNA is integrated. [ 31 ]
New insertions create short target site duplications (TSDs), and the majority of new inserts are severely 5’-truncated (average insert size of 900 bp in humans) and often inverted (Szak et al., 2002). Because they lack their 5’UTR, most new inserts are non-functional.
It has been shown that host cells regulate L1 retrotransposition activity, for example through epigenetic silencing.
For example, the RNA interference (RNAi) mechanism of small interfering RNAs derived from L1 sequences can cause suppression of L1 retrotransposition. [ 32 ]
In plant genomes, epigenetic modification of LINEs can lead to expression changes of nearby genes and even to phenotypic changes: In the oil palm genome, methylation of a Karma-type LINE underlies the somaclonal, 'mantled' variant of this plant, responsible for drastic yield loss. [ 33 ]
Human APOBEC3C-mediated restriction of LINE-1 elements has been reported; it results from an interaction between A3C and the ORF1p that affects reverse transcriptase activity. [ 34 ]
A historic example of L1-conferred disease is Haemophilia A, which is caused by insertional mutagenesis . [ 35 ] There are nearly 100 examples of known diseases caused by retroelement insertions, including some types of cancer and neurological disorders. [ 36 ] Correlation between L1 mobilization and oncogenesis has been reported for epithelial cell cancer ( carcinoma ). [ 37 ] Hypomethylation of LINEs is associated with chromosomal instability and altered gene expression [ 38 ] and is found in various cancer cell types across various tissue types. [ 39 ] [ 38 ] Hypomethylation of a specific L1 located in the MET oncogene is associated with bladder cancer tumorigenesis. [ 40 ] Shift work sleep disorder [ 41 ] is associated with increased cancer risk because light exposure at night reduces melatonin , a hormone that has been shown to reduce L1-induced genome instability . [ 42 ] | https://en.wikipedia.org/wiki/Long_interspersed_nuclear_element
Long non-coding RNAs ( long ncRNAs , lncRNA ) are a type of RNA , generally defined as transcripts of more than 200 nucleotides that are not translated into protein. [ 2 ] This arbitrary limit distinguishes long ncRNAs from small non-coding RNAs , such as microRNAs (miRNAs), small interfering RNAs (siRNAs), Piwi-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs), and other short RNAs. [ 3 ] Given that some lncRNAs have been reported to have the potential to encode small proteins or micro-peptides, the latest definition of lncRNA is a class of transcripts of over 200 nucleotides that have no or limited coding capacity. [ 4 ] However, John S. Mattick and colleagues suggested changing the definition of long non-coding RNAs to transcripts of more than 500 nt, which are mostly generated by Pol II. [ 5 ] This means that the exact definition of lncRNA is still under discussion in the field. Long intervening/intergenic noncoding RNAs (lincRNAs) are sequences of transcripts that do not overlap protein-coding genes. [ 6 ]
Long non-coding RNAs include intergenic lincRNAs, intronic ncRNAs, and sense and antisense lncRNAs, each type showing different genomic positions in relation to genes and exons . [ 1 ] [ 3 ]
The definition of lncRNAs differs from that of other RNAs such as siRNAs, mRNAs, miRNAs, and snoRNAs because it is not connected to the function of the RNA. A lncRNA is any transcript that is not one of the other well-characterized RNAs and is longer than 200-500 nucleotides. Some scientists think that most lncRNAs do not have a biologically relevant function because they are transcripts of junk DNA . [ 7 ] [ 8 ]
Long non-coding transcripts are found in many species. Large-scale complementary DNA (cDNA) sequencing projects such as FANTOM reveal the complexity of these transcripts in humans. [ 9 ] The FANTOM3 project identified ~35,000 non-coding transcripts that bear many signatures of messenger RNAs , including 5' capping , splicing , and poly-adenylation , but have little or no open reading frame (ORF). [ 9 ] This number represents a conservative lower estimate, since it omitted many singleton transcripts and non- polyadenylated transcripts ( tiling array data shows more than 40% of transcripts are non-polyadenylated). [ 10 ] Identifying ncRNAs within these cDNA libraries is challenging since it can be difficult to distinguish protein-coding transcripts from non-coding transcripts. It has been suggested through multiple studies that testis , [ 11 ] and neural tissues express the greatest amount of long non-coding RNAs of any tissue type. [ 12 ] Using FANTOM5, 27,919 long ncRNAs have been identified in various human sources. [ 13 ]
Quantitatively, lncRNAs demonstrate ~10-fold lower abundance than mRNAs , [ 14 ] [ 15 ] which is explained by higher cell-to-cell variation of expression levels of lncRNA genes in the individual cells, when compared to protein-coding genes. [ 16 ] In general, the majority (~78%) of lncRNAs are characterized as tissue -specific, as opposed to only ~19% of mRNAs. [ 14 ] Only 3.6% of human lncRNA genes are expressed in various biological contexts and 34% of lncRNA genes are expressed at high level (top 25% of both lncRNAs and mRNAs) in at least one biological context. [ 17 ] In addition to higher tissue specificity, lncRNAs are characterized by higher developmental stage specificity, [ 18 ] and cell subtype specificity in tissues such as human neocortex [ 19 ] and other parts of the brain, regulating correct brain development and function. [ 20 ] In 2022, a comprehensive integration of lncRNAs from existing databases, revealed that there are 95,243 lncRNA genes and 323,950 transcripts in humans. [ 21 ]
In comparison to mammals, relatively few studies have focused on the prevalence of lncRNAs in plants . However, an extensive study considering 37 higher plant species and six algae identified ~200,000 non-coding transcripts using an in-silico approach, [ 22 ] which also established the associated Green Non-Coding Database ( GreeNC ), a repository of plant lncRNAs.
In 2005 the landscape of the mammalian genome was described as numerous 'foci' of transcription that are separated by long stretches of intergenic space. [ 9 ] While some long ncRNAs are located within the intergenic stretches, the majority are overlapping sense and antisense transcripts that often include protein-coding genes, [ 23 ] giving rise to a complex hierarchy of overlapping isoforms. [ 24 ] Genomic sequences within these transcriptional foci are often shared within a number of coding and non-coding transcripts in the sense and antisense directions. [ 25 ] For example, 3012 out of 8961 cDNAs previously annotated as truncated coding sequences within FANTOM2 were later designated as genuine ncRNA variants of protein-coding cDNAs. [ 9 ] While the abundance and conservation of these arrangements suggest they have biological relevance, the complexity of these foci frustrates easy evaluation.
The GENCODE consortium has collated and analysed a comprehensive set of human lncRNA annotations and their genomic organisation, modifications, cellular locations and tissue expression profiles. [ 12 ] Their analysis indicates human lncRNAs show a bias toward two- exon transcripts. [ 12 ]
There has been considerable debate about whether lncRNAs have been misannotated and do in fact encode proteins . Several lncRNAs have been found to in fact encode for peptides with biologically significant function. [ 37 ] [ 38 ] [ 39 ] Ribosome profiling studies have suggested that anywhere from 40% to 90% of annotated lncRNAs are in fact translated , [ 40 ] [ 41 ] although there is disagreement about the correct method for analyzing ribosome profiling data. [ 42 ] Additionally, it is thought that many of the peptides produced by lncRNAs may be highly unstable and without biological function. [ 41 ]
Unlike protein-coding genes, the sequences of long non-coding RNAs show a lower level of conservation. Initial studies into lncRNA conservation noted that as a class, they were enriched for conserved sequence elements, [ 43 ] depleted in substitution and insertion/deletion rates [ 44 ] and depleted in rare frequency variants, [ 45 ] indicative of purifying selection maintaining lncRNA function. However, further investigations into vertebrate lncRNAs revealed that while lncRNAs are conserved in sequence, they are not conserved in transcription . [ 46 ] [ 47 ] [ 11 ] In other words, even when the sequence of a human lncRNA is conserved in another vertebrate species, there is often no transcription of a lncRNA in the orthologous genomic region. Some argue that these observations suggest non-functionality of the majority of lncRNAs, [ 48 ] [ 49 ] [ 7 ] while others argue that they may be indicative of rapid species -specific adaptive selection. [ 50 ]
While the turnover of lncRNA transcription is much higher than initially expected, it is important to note that still, hundreds of lncRNAs are conserved at the sequence level. There have been several attempts to delineate the different categories of selection signatures seen amongst lncRNAs including: lncRNAs with strong sequence conservation across the entire length of the gene , lncRNAs in which only a portion of the transcript (e.g. 5′ end , splice sites ) is conserved, and lncRNAs that are transcribed from syntenic regions of the genome but have no recognizable sequence similarity. [ 51 ] [ 52 ] [ 53 ] Additionally, there have been attempts to identify conserved secondary structures in lncRNAs, though these studies have currently given way to conflicting results. [ 54 ] [ 55 ]
Some groups have claimed that the majority of long noncoding RNAs in mammals are likely to be functional, [ 56 ] [ 57 ] but other groups have claimed the opposite. [ 7 ] [ 8 ] This is an active area of research.
Some lncRNAs have been functionally annotated in LncRNAdb (a database of literature described lncRNAs), [ 58 ] [ 59 ] with the majority of these being described in humans . Over 2600 human lncRNAs with experimental evidences have been community-curated in LncRNAWiki (a wiki -based, publicly editable and open-content platform for community curation of human lncRNAs). [ 60 ] According to the curation of functional mechanisms of lncRNAs based on the literatures, lncRNAs are extensively reported to be involved in ceRNA regulation, transcriptional regulation , and epigenetic regulation. [ 60 ] A further large-scale sequencing study provides evidence that many transcripts thought to be lncRNAs may, in fact, be translated into proteins . [ 61 ]
In eukaryotes , RNA transcription is a tightly regulated process. Noncoding RNAs act upon different aspects of this process, targeting transcriptional modulators, RNA polymerase (RNAP) II and even the DNA duplex to regulate gene expression. [ 62 ]
NcRNAs modulate transcription by several mechanisms, including functioning themselves as co-regulators, modifying transcription factor activity, or regulating the association and activity of co-regulators. For example, the noncoding RNA Evf-2 functions as a co-activator for the homeobox transcription factor Dlx2 , which plays important roles in forebrain development and neurogenesis . [ 63 ] [ 64 ] Sonic hedgehog induces transcription of Evf-2 from an ultra-conserved element located between the Dlx5 and Dlx6 genes during forebrain development. [ 63 ] Evf-2 then recruits the Dlx2 transcription factor to the same ultra-conserved element whereby Dlx2 subsequently induces expression of Dlx5. The existence of other similar ultra- or highly conserved elements within the mammalian genome that are both transcribed and fulfill enhancer functions suggest Evf-2 may be illustrative of a generalised mechanism that regulates developmental genes with complex expression patterns during vertebrate growth. [ 65 ] [ 66 ] Indeed, the transcription and expression of similar non-coding ultraconserved elements was shown to be abnormal in human leukaemia and to contribute to apoptosis in colon cancer cells, suggesting their involvement in tumorigenesis in like fashion to protein-coding RNA. [ 67 ] [ 68 ] [ 69 ]
Local ncRNAs can also recruit transcriptional programmes to regulate adjacent protein-coding gene expression .
The RNA binding protein TLS binds and inhibits the CREB binding protein and p300 histone acetyltransferase activities on a repressed gene target, cyclin D1 . The recruitment of TLS to the promoter of cyclin D1 is directed by long ncRNAs expressed at low levels and tethered to 5' regulatory regions in response to DNA damage signals. [ 70 ] Moreover, these local ncRNAs act cooperatively as ligands to modulate the activities of TLS. In the broad sense, this mechanism allows the cell to harness RNA-binding proteins , which make up one of the largest classes within the mammalian proteome , and integrate their function in transcriptional programs. Nascent long ncRNAs have been shown to increase the activity of CREB binding protein, which in turn increases the transcription of that ncRNA. [ 71 ] A study found that a lncRNA in the antisense direction of the Apolipoprotein A1 (APOA1) regulates the transcription of APOA1 through epigenetic modifications. [ 72 ]
Recent evidence has raised the possibility that transcription of genes that escape from X-inactivation might be mediated by expression of long non-coding RNA within the escaping chromosomal domains. [ 73 ]
NcRNAs also target general transcription factors required for the RNAP II transcription of all genes. [ 62 ] These general factors include components of the initiation complex that assemble on promoters or involved in transcription elongation. A ncRNA transcribed from an upstream minor promoter of the dihydrofolate reductase (DHFR) gene forms a stable RNA-DNA triplex within the major promoter of DHFR to prevent the binding of the transcriptional co-factor TFIIB . [ 74 ] This novel mechanism of regulating gene expression may represent a widespread method of controlling promoter usage, as thousands of RNA-DNA triplexes exist in eukaryotic chromosome . [ 75 ] The U1 ncRNA can induce transcription by binding to and stimulating TFIIH to phosphorylate the C-terminal domain of RNAP II. [ 76 ] In contrast the ncRNA 7SK is able to repress transcription elongation by, in combination with HEXIM1 / 2 , forming an inactive complex that prevents PTEFb from phosphorylating the C-terminal domain of RNAP II, [ 76 ] [ 77 ] [ 78 ] repressing global elongation under stressful conditions. These examples, which bypass specific modes of regulation at individual promoters provide a means of quickly affecting global changes in gene expression .
The ability to quickly mediate global changes is also apparent in the rapid expression of non-coding repetitive sequences . The short interspersed nuclear ( SINE ) Alu elements in humans and analogous B1 and B2 elements in mice have succeeded in becoming the most abundant mobile elements within the genomes, comprising ~10% of the human and ~6% of the mouse genome , respectively. [ 79 ] [ 80 ] These elements are transcribed as ncRNAs by RNAP III in response to environmental stresses such as heat shock , [ 81 ] where they then bind to RNAP II with high affinity and prevent the formation of active pre-initiation complexes. [ 82 ] [ 83 ] [ 84 ] [ 85 ] This allows for the broad and rapid repression of gene expression in response to stress. [ 82 ] [ 85 ]
A dissection of the functional sequences within Alu RNA transcripts has drafted a modular structure analogous to the organization of domains in protein transcription factors. [ 86 ] The Alu RNA contains two 'arms', each of which may bind one RNAP II molecule, as well as two regulatory domains that are responsible for RNAP II transcriptional repression in vitro. [ 85 ] These two loosely structured domains may even be concatenated to other ncRNAs such as B1 elements to impart their repressive role. [ 85 ] The abundance and distribution of Alu elements and similar repetitive elements throughout the mammalian genome may be partly due to these functional domains being co-opted into other long ncRNAs during evolution, with the presence of functional repeat sequence domains being a common characteristic of several known long ncRNAs including Kcnq1ot1 , Xlsirt and Xist . [ 87 ] [ 88 ] [ 89 ] [ 90 ]
In addition to heat shock , the expression of SINE elements (including Alu, B1, and B2 RNAs) increases during cellular stress such as viral infection [ 91 ] in some cancer cells [ 92 ] where they may similarly regulate global changes to gene expression. The ability of Alu and B2 RNA to bind directly to RNAP II provides a broad mechanism to repress transcription. [ 83 ] [ 85 ] Nevertheless, there are specific exceptions to this global response where Alu or B2 RNAs are not found at activated promoters of genes undergoing induction, such as the heat shock genes. [ 85 ] This additional hierarchy of regulation that exempts individual genes from the generalised repression also involves a long ncRNA, heat shock RNA-1 (HSR-1). It was argued that HSR-1 is present in mammalian cells in an inactive state, but upon stress is activated to induce the expression of heat shock genes . [ 93 ] This activation involves a conformational alteration of HSR-1 in response to rising temperatures, permitting its interaction with the transcriptional activator HSF-1, which trimerizes and induces the expression of heat shock genes. [ 93 ] In the broad sense, these examples illustrate a regulatory circuit nested within ncRNAs whereby Alu or B2 RNAs repress general gene expression , while other ncRNAs activate the expression of specific genes .
Many of the ncRNAs that interact with general transcription factors or RNAP II itself (including 7SK , Alu and B1 and B2 RNAs) are transcribed by RNAP III , [ 94 ] uncoupling their expression from RNAP II, which they regulate. RNAP III also transcribes other ncRNAs, such as BC2, BC200 and some microRNAs and snoRNAs, in addition to housekeeping ncRNA genes such as tRNAs , 5S rRNAs and snRNAs . [ 94 ] The existence of an RNAP III-dependent ncRNA transcriptome that regulates its RNAP II-dependent counterpart is supported by the finding of a set of ncRNAs transcribed by RNAP III with sequence homology to protein-coding genes. This prompted the authors to posit a 'cogene/gene' functional regulatory network, [ 95 ] showing that one of these ncRNAs, 21A, regulates the expression of its antisense partner gene, CENP-F in trans.
In addition to regulating transcription, ncRNAs also control various aspects of post-transcriptional mRNA processing . Similar to small regulatory RNAs such as microRNAs and snoRNAs , these functions often involve complementary base pairing with the target mRNA. The formation of RNA duplexes between complementary ncRNA and mRNA may mask key elements within the mRNA required to bind trans-acting factors, potentially affecting any step in post-transcriptional gene expression including pre-mRNA processing and splicing , transport, translation, and degradation. [ 96 ]
The splicing of mRNA can induce its translation and functionally diversify the repertoire of proteins it encodes. The Zeb2 mRNA requires the retention of a 5'UTR intron that contains an internal ribosome entry site for efficient translation. [ 97 ] The retention of the intron depends on the expression of an antisense transcript that complements the intronic 5' splice site . [ 97 ] Therefore, the ectopic expression of the antisense transcript represses splicing and induces translation of the Zeb2 mRNA during mesenchymal development. Likewise, the expression of an overlapping antisense Rev-ErbAa2 transcript controls the alternative splicing of the thyroid hormone receptor ErbAa2 mRNA to form two antagonistic isoforms. [ 98 ]
NcRNA may also apply additional regulatory pressures during translation , a property particularly exploited in neurons where the dendritic or axonal translation of mRNA in response to synaptic activity contributes to changes in synaptic plasticity and the remodelling of neuronal networks. The RNAP III transcribed BC1 and BC200 ncRNAs, that previously derived from tRNAs , are expressed in the mouse and human central nervous system , respectively. [ 99 ] [ 100 ] BC1 expression is induced in response to synaptic activity and synaptogenesis and is specifically targeted to dendrites in neurons. [ 101 ] Sequence complementarity between BC1 and regions of various neuron-specific mRNAs also suggest a role for BC1 in targeted translational repression. [ 102 ] Indeed, it was recently shown that BC1 is associated with translational repression in dendrites to control the efficiency of dopamine D2 receptor-mediated transmission in the striatum [ 103 ] and BC1 RNA-deleted mice exhibit behavioural changes with reduced exploration and increased anxiety . [ 104 ]
In addition to masking key elements within single-stranded RNA , the formation of double-stranded RNA duplexes can also provide a substrate for the generation of endogenous siRNAs (endo-siRNAs) in Drosophila and mouse oocytes . [ 105 ] The annealing of complementary sequences, such as antisense or repetitive regions between transcripts , forms an RNA duplex that may be processed by Dicer-2 into endo-siRNAs. Also, long ncRNAs that form extended intramolecular hairpins may be processed into siRNAs, compellingly illustrated by the esi-1 and esi-2 transcripts. [ 106 ] Endo-siRNAs generated from these transcripts seem particularly useful in suppressing the spread of mobile transposon elements within the genome in the germline. However, the generation of endo-siRNAs from antisense transcripts or pseudogenes may also silence the expression of their functional counterparts via RISC effector complexes , acting as an important node that integrates various modes of long and short RNA regulation, as exemplified by the Xist and Tsix (see above). [ 107 ]
Epigenetic modifications, including histone and DNA methylation , histone acetylation and sumoylation , affect many aspects of chromosomal biology, primarily including regulation of large numbers of genes by remodeling broad chromatin domains. [ 108 ] [ 109 ] While it has been known for some time that RNA is an integral component of chromatin, [ 110 ] [ 111 ] it is only recently that we are beginning to appreciate the means by which RNA is involved in pathways of chromatin modification. [ 112 ] [ 113 ] [ 114 ] For example, Oplr16 epigenetically induces the activation of stem cell core factors by coordinating intrachromosomal looping and recruitment of DNA demethylase TET2 . [ 115 ]
In Drosophila , long ncRNAs induce the expression of the homeotic gene, Ubx , by recruiting and directing the chromatin modifying functions of the trithorax protein Ash1 to Hox regulatory elements . [ 114 ] Similar models have been proposed in mammals, where strong epigenetic mechanisms are thought to underlie the embryonic expression profiles of the Hox genes that persist throughout human development. [ 116 ] [ 113 ] Indeed, the human Hox genes are associated with hundreds of ncRNAs that are sequentially expressed along both the spatial and temporal axes of human development and define chromatin domains of differential histone methylation and RNA polymerase accessibility. [ 113 ] One ncRNA, termed HOTAIR , that originates from the HOXC locus represses transcription across 40 kb of the HOXD locus by altering chromatin trimethylation state. HOTAIR is thought to achieve this by directing the action of Polycomb chromatin remodeling complexes in trans to govern the cells' epigenetic state and subsequent gene expression . Components of the Polycomb complex, including Suz12 , EZH2 and EED, contain RNA binding domains that may potentially bind HOTAIR and probably other similar ncRNAs. [ 117 ] [ 118 ] [ 119 ] This example nicely illustrates a broader theme whereby ncRNAs recruit the function of a generic suite of chromatin modifying proteins to specific genomic loci , underscoring the complexity of recently published genomic maps. [ 109 ] Indeed, the prevalence of long ncRNAs associated with protein coding genes may contribute to localised patterns of chromatin modifications that regulate gene expression during development. For example, the majority of protein-coding genes have antisense partners, including many tumour suppressor genes that are frequently silenced by epigenetic mechanisms in cancer. [ 120 ] A recent study observed an inverse expression profile of the p15 gene and an antisense ncRNA in leukaemia. [ 120 ] A detailed analysis showed the p15 antisense ncRNA ( CDKN2BAS ) was able to induce changes to heterochromatin and DNA methylation status of p15 by an unknown mechanism, thereby regulating p15 expression. [ 120 ] Therefore, misexpression of the associated antisense ncRNAs may subsequently silence the tumour suppressor gene contributing towards cancer .
Many emergent themes of ncRNA-directed chromatin modification were first apparent within the phenomenon of imprinting , whereby only one allele of a gene is expressed from either the maternal or the paternal chromosome . In general, imprinted genes are clustered together on chromosomes, suggesting the imprinting mechanism acts upon local chromosome domains rather than individual genes. These clusters are also often associated with long ncRNAs whose expression is correlated with the repression of the linked protein-coding gene on the same allele. [ 121 ] Indeed, detailed analysis has revealed a crucial role for the ncRNAs Kcnqot1 and Igf2r /Air in directing imprinting. [ 122 ]
Almost all the genes at the Kcnq1 loci are maternally inherited, except the paternally expressed antisense ncRNA Kcnqot1. [ 123 ] Transgenic mice with truncated Kcnq1ot fail to silence the adjacent genes, suggesting that Kcnqot1 is crucial to the imprinting of genes on the paternal chromosome. [ 124 ] It appears that Kcnqot1 is able to direct the trimethylation of lysine 9 ( H3K9me3 ) and 27 of histone 3 ( H3K27me3 ) to an imprinting centre that overlaps the Kcnqot1 promoter and actually resides within a Kcnq1 sense exon. [ 125 ] Similar to HOTAIR (see above), Eed-Ezh2 Polycomb complexes are recruited to the Kcnq1 loci paternal chromosome, possibly by Kcnqot1, where they may mediate gene silencing through repressive histone methylation . [ 125 ] A differentially methylated imprinting centre also overlaps the promoter of a long antisense ncRNA Air that is responsible for the silencing of neighbouring genes at the Igf2r locus on the paternal chromosome. [ 126 ] [ 127 ] The presence of allele-specific histone methylation at the Igf2r locus suggests Air also mediates silencing via chromatin modification. [ 128 ]
The inactivation of a X-chromosome in female placental mammals is directed by one of the earliest and best characterized long ncRNAs, Xist . [ 129 ] The expression of Xist from the future inactive X-chromosome, and its subsequent coating of the inactive X-chromosome, occurs during early embryonic stem cell differentiation. Xist expression is followed by irreversible layers of chromatin modifications that include the loss of the histone (H3K9) acetylation and H3K4 methylation that are associated with active chromatin, and the induction of repressive chromatin modifications including H4 hypoacetylation, H3K27 trimethylation , [ 129 ] H3K9 hypermethylation and H4K20 monomethylation as well as H2AK119 monoubiquitylation. These modifications coincide with the transcriptional silencing of the X-linked genes. [ 130 ] Xist RNA also localises the histone variant macroH2A to the inactive X–chromosome. [ 131 ] There are additional ncRNAs that are also present at the Xist loci, including an antisense transcript Tsix , which is expressed from the future active chromosome and able to repress Xist expression by the generation of endogenous siRNA. [ 107 ] Together these ncRNAs ensure that only one X-chromosome is active in female mammals.
Telomeres form the terminal region of mammalian chromosomes and are essential for stability and aging and play central roles in diseases such as cancer . [ 132 ] Telomeres have been long considered transcriptionally inert DNA-protein complexes until it was shown in the late 2000s that telomeric repeats may be transcribed as telomeric RNAs (TelRNAs) [ 133 ] or telomeric repeat-containing RNAs . [ 134 ] These ncRNAs are heterogeneous in length, transcribed from several sub-telomeric loci and physically localise to telomeres. Their association with chromatin, which suggests an involvement in regulating telomere specific heterochromatin modifications, is repressed by SMG proteins that protect chromosome ends from telomere loss. [ 134 ] In addition, TelRNAs block telomerase activity in vitro and may therefore regulate telomerase activity. [ 133 ] Although early, these studies suggest an involvement for telomeric ncRNAs in various aspects of telomere biology.
Asynchronously replicating autosomal RNAs (ASARs) are very long (~200kb) non-coding RNAs that are non-spliced, non-polyadenylated, and are required for normal DNA replication timing and chromosome stability. [ 135 ] [ 136 ] [ 137 ] Deletion of any one of the genetic loci containing ASAR6, ASAR15, or ASAR6-141 results in the same phenotype of delayed replication timing and delayed mitotic condensation (DRT/DMC) of the entire chromosome. DRT/DMC results in chromosomal segregation errors that lead to increased frequency of secondary rearrangements and an unstable chromosome. Similar to Xist , ASARs show random monoallelic expression and exist in asynchronous DNA replication domains. Although the mechanism of ASAR function is still under investigation, it is hypothesized that they work via similar mechanisms as the Xist lncRNA, but on smaller autosomal domains resulting in allele specific changes in gene expression.
Incorrect repair of DNA double-strand breaks (DSBs) leading to chromosomal rearrangements is one of the primary causes of oncogenesis. A number of lncRNAs are crucial at the different stages of the main pathways of DSB repair in eukaryotic cells : nonhomologous end joining ( NHEJ ) and homology-directed repair ( HDR ). Gene mutations or variation in expression levels of such RNAs can lead to local DNA repair defects, increasing the chromosome aberration frequency. Moreover, it was demonstrated that some RNAs could stimulate long-range chromosomal rearrangements. [ 138 ]
The discovery that long ncRNAs function in various aspects of cell biology has led to research on their role in disease . Tens of thousands of lncRNAs are potentially associated with diseases based on the multi-omics evidence. [ 139 ] A handful of studies have implicated long ncRNAs in a variety of disease states and support an involvement and co-operation in neurological disease and cancer .
The first published report of an alteration in lncRNA abundance in aging and human neurological disease was provided by Lukiw et al. [ 140 ] in a study using short post-mortem interval tissues from patients with Alzheimer's disease and non-Alzheimer's dementia (NAD) ; this early work was based on the prior identification of a primate brain-specific cytoplasmic transcript of the Alu repeat family by Watson and Sutcliffe in 1987 known as BC200 (brain, cytoplasmic, 200 nucleotide). [ 141 ]
While many association studies have identified unusual expression of long ncRNAs in disease states, there is little understanding of their role in causing disease. Expression analyses that compare tumor cells and normal cells have revealed changes in the expression of ncRNAs in several forms of cancer . For example, in prostate tumours , PCGEM1 (one of two overexpressed ncRNAs) is correlated with increased proliferation and colony formation suggesting an involvement in regulating cell growth. [ 142 ] PRNCR1 was found to promote tumor growth in several malignancies like prostate cancer , breast cancer , non-small cell lung cancer , oral squamous cell carcinoma and colorectal cancer . [ 143 ] MALAT1 (also known as NEAT2) was originally identified as an abundantly expressed ncRNA that is upregulated during metastasis of early-stage non-small cell lung cancer and its overexpression is an early prognostic marker for poor patient survival rates. [ 142 ] LncRNAs such as HEAT2 or KCNQ1OT1 have been shown to be regulated in the blood of patients with cardiovascular diseases such as heart failure or coronary artery disease and, moreover, to predict cardiovascular disease events. [ 144 ] [ 145 ] More recently, the highly conserved mouse homologue of MALAT1 was found to be highly expressed in hepatocellular carcinoma . [ 146 ] Intronic antisense ncRNAs with expression correlated to the degree of tumor differentiation in prostate cancer samples have also been reported. [ 147 ] Despite a number of long ncRNAs having aberrant expression in cancer, their function and potential role in tumourigenesis is relatively unknown. For example, the ncRNAs HIS-1 and BIC have been implicated in cancer development and growth control, but their function in normal cells is unknown. [ 148 ] [ 149 ] In addition to cancer, ncRNAs also exhibit aberrant expression in other disease states. Overexpression of PRINS is associated with psoriasis susceptibility, with PRINS expression being elevated in the uninvolved epidermis of psoriatic patients compared with both psoriatic lesions and healthy epidermis. [ 150 ]
Genome-wide profiling revealed that many transcribed non-coding ultraconserved regions exhibit distinct profiles in various human cancer states. [ 68 ] An analysis of chronic lymphocytic leukaemia , colorectal carcinoma and hepatocellular carcinoma found that all three cancers exhibited aberrant expression profiles for ultraconserved ncRNAs relative to normal cells. Further analysis of one ultraconserved ncRNA suggested it behaved like an oncogene by mitigating apoptosis and subsequently expanding the number of malignant cells in colorectal cancers. [ 68 ] Many of these transcribed ultraconserved sites that exhibit distinct signatures in cancer are found at fragile sites and genomic regions associated with cancer. It seems likely that the aberrant expression of these ultraconserved ncRNAs within malignant processes results from important functions they fulfil in normal human development .
Recently, a number of association studies examining single nucleotide polymorphisms (SNPs) associated with disease states have been mapped to long ncRNAs. For example, SNPs that identified a susceptibility locus for myocardial infarction mapped to a long ncRNA, MIAT (myocardial infarction associated transcript). [ 151 ] Likewise, genome-wide association studies identified a region associated with coronary artery disease [ 152 ] that encompassed a long ncRNA, ANRIL . [ 153 ] ANRIL is expressed in tissues and cell types affected by atherosclerosis [ 154 ] [ 155 ] and its altered expression is associated with a high-risk haplotype for coronary artery disease. [ 155 ] [ 156 ] Lately there has been increasing evidence on the role of non-coding RNAs in the development and in the categorization of heart failure. [ 157 ]
The complexity of the transcriptome , and our evolving understanding of its structure may inform a reinterpretation of the functional basis for many natural polymorphisms associated with disease states. Many SNPs associated with certain disease conditions are found within non-coding regions and the complex networks of non-coding transcription within these regions make it particularly difficult to elucidate the functional effects of polymorphisms . For example, a SNP both within the truncated form of ZFAT and the promoter of an antisense transcript increases the expression of ZFAT not through increasing the mRNA stability, but rather by repressing the expression of the antisense transcript. [ 158 ]
The ability of long ncRNAs to regulate associated protein-coding genes may contribute to disease if misexpression of a long ncRNA deregulates a protein-coding gene with clinical significance. In a similar manner, an antisense long ncRNA that regulates the expression of the sense BACE1 gene, a crucial enzyme in Alzheimer's disease etiology, exhibits elevated expression in several regions of the brain in individuals with Alzheimer's disease. [ 159 ] Alteration of the expression of ncRNAs may also mediate changes at an epigenetic level to affect gene expression and contribute to disease aetiology. For example, the induction of an antisense transcript by a genetic mutation led to DNA methylation and silencing of sense genes, causing β-thalassemia in a patient. [ 160 ]
Alongside their role in mediating pathological processes, long noncoding RNAs play a role in the immune response to vaccination , as identified for both the influenza vaccine and the yellow fever vaccine . [ 161 ]
It took over two decades after the discovery of the first human long non-coding transcripts for the functional significance of lncRNA structures to be fully recognized. Early structural studies led to the proposal of several hypotheses for classifying lncRNA architectures. One hypothesis suggests that lncRNAs may feature a compact tertiary structure, similar to ribozymes like the ribosome or self-splicing introns. Another possibility is that lncRNAs could have structured protein-binding sites arranged in a decentralized scaffold, lacking a compact core. A third hypothesis posits that lncRNAs might exhibit a largely unstructured architecture, with loosely organized protein-binding domains interspersed with long regions of disordered single-stranded RNA. [ 162 ]
Studying the tertiary structure of lncRNAs by conventional methods such as X-ray crystallography, cryo-EM and nuclear magnetic resonance (NMR) is unfortunately still hampered by their size and conformational dynamics, and by the fact that for now we still know too little about their mechanism to reconstruct stable and functionally active lncRNA-ribonucleoprotein complexes. However, some pioneering studies showed that lncRNAs can already be studied by low-resolution single-particle and in-solution methods, such as atomic force microscopy (AFM) and small-angle X-ray scattering (SAXS), in some cases even in complexes with small molecule modulators. [ 163 ]
For instance, lncRNA MEG3 was shown to regulate transcription factor p53 thanks to its compact structured core. [ 164 ] Moreover, lncRNA Braveheart (Bvht) was shown to have a well-defined, albeit flexible, 3D structure that is remodeled upon binding CNBP (Cellular Nucleic-acid Binding Protein), which recognizes distal domains in the RNA. [ 165 ] Finally, Xist , a master regulator of X chromosome inactivation, was shown to specifically bind a small molecule compound, which alters the conformation of the Xist RepA motif and displaces two known interacting protein factors (PRC2 and SPEN) from the RNA. By such a mechanism of action, the compound abrogates the initiation of X-chromosome inactivation. [ 166 ] | https://en.wikipedia.org/wiki/Long_non-coding_RNA
A long reach excavator is a type of excavator where the arm has been extended to reach further than a normal excavator would. It is often used in demolition of buildings, but it can also be used in other applications.
The term long reach excavator was probably first coined by Richard Melhuish, the Chairman of Land & Water. During the 1970s, Land & Water operated the UK's first hire fleet of these new and innovative long reach hydraulic excavators. In fact they still operate the largest fleet of long-reaches in the UK. Land & Water's first long reach excavator was the Hymac 580 BT All Hydraulic 360 “Waterway” machine, designed for work on waterways. [ 1 ] These early machines from Hymac came to be widely preferred to the more traditional drag lines designs. [ 1 ]
Around the same time, Priestman (and later Ruston Bucyrus) VC (Variable Counterweight) excavators started to become more popular. However, the work VC machines could achieve was slightly constrained by design limitations compared to fully hydraulic "long reach" machines, especially with the arrival of more reliable machines from Japan built by manufacturers such as Hitachi and Komatsu . These Japanese-designed machines hardly ever leaked hydraulic fluid. [ 1 ]
Long reach machines are not suitable for the high side twisting forces that can be exerted by demolition attachments and many demolition machines are unstable at large radius. They are often assisted with electronic cut off devices that restrict the operating radius of the machine. Long reach machines are particularly useful in dredging operations. [ 1 ]
The high reach excavator is a development of the excavator with an especially long boom arm, that is primarily used for demolition . Instead of excavating ditches, the high reach excavator is designed to reach the upper stories of buildings that are being demolished and pull down the structure in a controlled fashion. It has largely replaced the wrecking ball as the primary tool for demolition.
Ultra high reach demolition excavators (UHD) are demolition excavators with several tens of meters of reach. [ 2 ] [ 3 ] Reaches of up to 48 metres (157 ft) are in operation as of 2016. As of 2017, there are UHD machines that can reach 67 metres (220 ft).
The long reach excavator imported to New Zealand for demolitions of tall buildings following the 2010 and 2011 earthquakes has been nicknamed Twinkle Toes . It is the largest excavator in the Southern Hemisphere. [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Long_reach_excavator |
A long terminal repeat ( LTR ) is a pair of identical sequences of DNA , several hundred base pairs long, which occur in eukaryotic genomes on either end of a series of genes or pseudogenes that form a retrotransposon or an endogenous retrovirus or a retroviral provirus . All retroviral genomes are flanked by LTRs, while there are some retrotransposons without LTRs. Typically, an element flanked by a pair of LTRs will encode a reverse transcriptase and an integrase , allowing the element to be copied and inserted at a different location of the genome. Copies of such an LTR-flanked element can often be found hundreds or thousands of times in a genome. LTR retrotransposons comprise about 8% of the human genome . [ 1 ]
The first LTR sequences were found by A.P. Czernilofsky and J. Shine in 1977 and 1980. [ 2 ] [ 3 ]
The LTR-flanked sequences are partially transcribed into an RNA intermediate, followed by reverse transcription into complementary DNA (cDNA) and ultimately dsDNA (double-stranded DNA) with full LTRs. The LTRs then mediate integration of the DNA via an LTR specific integrase into another region of the host chromosome .
Retroviruses such as human immunodeficiency virus ( HIV ) use this basic mechanism.
As 5' and 3' LTRs are identical upon insertion, the difference between paired LTRs can be used to estimate the age of ancient retroviral insertions. This method of dating is used by paleovirologists , though it fails to take into account confounding factors such as gene conversion and homologous recombination . [ 4 ]
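As an illustrative sketch of this dating method (the formula and the numbers below are generic assumptions, not taken from the cited sources): because the two LTRs are identical at the moment of insertion, the divergence d that has since accumulated between them, divided by twice the neutral substitution rate μ (each LTR copy mutates independently), gives an approximate insertion age,

T \approx \frac{d}{2\mu}

For instance, d = 0.01 (1% divergence) with an assumed μ = 2.5 × 10⁻⁹ substitutions per site per year would suggest an age of roughly 2 million years; the confounders noted above (gene conversion, homologous recombination) can bias such estimates.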
The HIV-1 LTR is 634 bp [ 5 ] in length and, like other retroviral LTRs, is segmented into the U3, R, and U5 regions. U3 and U5 have been further subdivided according to transcription factor sites and their impact on LTR activity and viral gene expression. The multi-step process of reverse transcription results in the placement of two identical LTRs, each consisting of a U3, R, and U5 region, at either end of the proviral DNA. The ends of the LTRs subsequently participate in integration of the provirus into the host genome . Once the provirus has been integrated, the LTR on the 5′ end serves as the promoter for the entire retroviral genome, while the LTR at the 3′ end provides for nascent viral RNA polyadenylation and, in HIV-1, HIV-2, and SIV, encodes the accessory protein, Nef . [ 6 ]
All of the required signals for gene expression are found in the LTRs: Enhancer, promoter (can have both transcriptional enhancers or regulatory elements), transcription initiation (such as capping), transcription terminator and polyadenylation signal. [ 7 ]
In HIV-1, the 5'UTR region has been characterized according to functional and structural differences into several sub-regions:
The transcript begins at the beginning of R, is capped, and proceeds through U5 and the rest of the provirus, usually terminating by the addition of a poly A tract just after the R sequence in the 3' LTR.
The finding that both HIV LTRs can function as transcriptional promoters is not surprising, since the two elements are apparently identical in nucleotide sequence. In the integrated provirus, however, the 3' LTR acts mainly in transcription termination and polyadenylation, and it has been suggested that the transcriptional activity of the 5' LTR is far greater than that of the 3' LTR, a situation very similar to that of other retroviruses. [ 7 ]
During transcription of the human immunodeficiency virus type 1 provirus, polyadenylation signals present in the 5' long terminal repeat (LTR) are disregarded while the identical polyadenylation signals present in the 3'LTR are utilized efficiently. It has been suggested that transcribed sequences present within the HIV-1 LTR U3 region act in cis to enhance polyadenylation within the 3' LTR. [ 13 ] | https://en.wikipedia.org/wiki/Long_terminal_repeat |
In engineering , a longeron or stringer is a load-bearing component of a framework.
The term is commonly used in connection with aircraft fuselages and automobile chassis . Longerons are used in conjunction with stringers to form structural frameworks. [ 1 ]
In an aircraft fuselage, stringers are attached to formers (also called frames) [ 3 ] and run in the longitudinal direction of the aircraft. They are primarily responsible for transferring the aerodynamic loads acting on the skin onto the frames and formers. In the wings or horizontal stabilizer, longerons run spanwise (from wing root to wing tip) and attach between the ribs . The primary function here also is to transfer the bending loads acting on the wings onto the ribs and spar.
The terms "longeron" and "stringer" are sometimes used interchangeably. Historically, though, there is a subtle difference between the two terms. If the longitudinal members in a fuselage are few in number (usually 4 to 8) and run all along the fuselage length, then they are called "longerons". The longeron system also requires that the fuselage frames be closely spaced (about every 4 to 6 in or 10 to 15 cm). If the longitudinal members are numerous (usually 50 to 100) and are placed just between two formers/frames, then they are called "stringers". In the stringer system the longitudinal members are smaller and the frames are spaced further apart (about 15 to 20 in or 38 to 51 cm). Generally, longerons are of larger cross-section when compared to stringers. On large modern aircraft the stringer system is more common because it is more weight-efficient, despite being more complex to construct and analyze. Some aircraft use a combination of both stringers and longerons. [ 4 ]
Longerons often carry larger loads than stringers and also help to transfer skin loads to internal structure. Longerons nearly always attach to frames or ribs . Stringers are usually not attached to anything but the skin , where they carry a portion of the fuselage bending moment through axial loading. [ 5 ] It is not uncommon to have a mixture of longerons and stringers in the same major structural component.
Stringers are also used in the construction of some launch vehicle propellant tanks. For example, the Falcon 9 launch vehicle uses stringers in the kerosene (RP-1) tanks, but not in the liquid oxygen tanks, on both the first and second stages. [ 6 ] | https://en.wikipedia.org/wiki/Longeron |
In combinatorial mathematics, probability , and computer science , in the longest alternating subsequence problem, one wants to find a subsequence of a given sequence in which the elements are in alternating order, and in which the sequence is as long as possible.
Formally, if x = {x_1, x_2, …, x_n} is a sequence of distinct real numbers, then the subsequence {x_{i_1}, x_{i_2}, …, x_{i_k}} is alternating [ 1 ] (or zigzag or down-up ) if its terms strictly decrease and increase in turn, starting with a decrease: x_{i_1} > x_{i_2} < x_{i_3} > x_{i_4} < ⋯ .
Similarly, x is reverse alternating (or up-down ) if its terms satisfy x_1 < x_2 > x_3 < x_4 > ⋯ .
Note that every sequence of length 1 is both alternating and reverse alternating.
Let as_n(x) denote the length (number of terms) of the longest alternating subsequence of x. Different permutations of the integers 1, 2, 3, 4, 5, for example, can have different values of as_5.
In a sequence of distinct elements, the subsequence of local extrema (elements larger than both adjacent elements, or smaller than both adjacent elements) forms a canonical longest alternating subsequence. [ 2 ] As a consequence, the longest alternating subsequence of a sequence of n elements can be found in time O(n). In sequences that allow repetitions, the same method can be applied after first replacing each run of repeated elements by a single copy of that element. [ citation needed ]
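The linear scan just described can be written compactly; the following Python sketch (function and variable names are illustrative, and distinct elements are assumed) keeps the first element and every turning point, sliding the last kept element forward while the scan continues in the same direction:

def longest_alternating_subsequence(x):
    # Keep the first element; afterwards, keep only turning points (local extrema).
    if not x:
        return []
    result = [x[0]]
    for value in x[1:]:
        if len(result) == 1:
            if value != result[-1]:
                result.append(value)
        elif result[-2] < result[-1] < value or result[-2] > result[-1] > value:
            # Same direction as before: the previous element was not an extremum,
            # so slide the last kept element forward to the current value.
            result[-1] = value
        else:
            # Direction changed: the previous element is a local extremum; keep it.
            result.append(value)
    return result

For distinct inputs this returns a zigzag subsequence of maximum length; whether it counts as "alternating" or "reverse alternating" in the sense above depends only on whether its first step happens to go down or up.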
If x is a random permutation of the integers 1, 2, …, n and A_n ≡ as_n(x), then the expected value and variance of A_n can be determined exactly; both grow linearly in n. [ 3 ] [ 4 ] [ 5 ]
Moreover, as n → ∞, the random variable A_n, appropriately centered and scaled, converges to a standard normal distribution.
The longest alternating subsequence problem has also been studied in the setting of online algorithms , in which the elements of x are presented in an online fashion, and a decision maker needs to decide whether to include or exclude each element at the time it is first presented, without any knowledge of the elements that will be presented in the future, and without the possibility of recalling preceding observations.
Given a sequence X_1, X_2, …, X_n of independent random variables with common continuous distribution F, it is possible to construct a selection procedure that maximizes the expected number of alternating selections.
This expected value can be tightly estimated, and it equals (2 − √2)n + O(1). [ 6 ]
As n → ∞, the optimal number of online alternating selections, appropriately centered and scaled, converges to a normal distribution. [ 7 ]
A longest common subsequence ( LCS ) is the longest subsequence common to all sequences in a set of sequences (often just two sequences). It differs from the longest common substring : unlike substrings, subsequences are not required to occupy consecutive positions within the original sequences. The problem of computing longest common subsequences is a classic computer science problem, the basis of data comparison programs such as the diff utility , and has applications in computational linguistics and bioinformatics . It is also widely used by revision control systems such as Git for reconciling multiple changes made to a revision-controlled collection of files.
For example, consider the sequences (ABCD) and (ACBAD). They have five length-2 common subsequences: (AB), (AC), (AD), (BD), and (CD); two length-3 common subsequences: (ABD) and (ACD); and no longer common subsequences. So (ABD) and (ACD) are their longest common subsequences.
For the general case of an arbitrary number of input sequences, the problem is NP-hard . [ 1 ] When the number of sequences is constant, the problem is solvable in polynomial time by dynamic programming .
Given N sequences of lengths n_1, …, n_N, a naive search would test each of the 2^{n_1} subsequences of the first sequence to determine whether they are also subsequences of the remaining sequences; each subsequence may be tested in time linear in the lengths of the remaining sequences, so the time for this algorithm would be O(2^{n_1} (n_2 + ⋯ + n_N)).
For the case of two sequences of n and m elements, the running time of the dynamic programming approach is O ( n × m ). [ 2 ] For an arbitrary number of input sequences, the dynamic programming approach gives a solution in time O(N · n_1 · n_2 ⋯ n_N).
There exist methods with lower complexity, [ 3 ] which often depend on the length of the LCS, the size of the alphabet, or both.
The LCS is not necessarily unique; in the worst case, the number of common subsequences is exponential in the lengths of the inputs, so any algorithm that outputs all of them must take at least exponential time. [ 4 ]
The LCS problem has an optimal substructure : the problem can be broken down into smaller, simpler subproblems, which can, in turn, be broken down into simpler subproblems, and so on, until, finally, the solution becomes trivial. LCS in particular has overlapping subproblems : the solutions to high-level subproblems often reuse solutions to lower level subproblems. Problems with these two properties are amenable to dynamic programming approaches, in which subproblem solutions are memoized , that is, the solutions of subproblems are saved for reuse.
The prefix S_n of S is defined as the first n characters of S . [ 5 ] For example, the prefixes of S = (AGCA) are S_1 = (A), S_2 = (AG), S_3 = (AGC), and S_4 = (AGCA).
Let LCS ( X , Y ) be a function that computes a longest subsequence common to X and Y . Such a function has two interesting properties.
LCS ( X ^ A , Y ^ A ) = LCS ( X , Y )^ A , for all strings X , Y and all symbols A , where ^ denotes string concatenation. This allows one to simplify the LCS computation for two sequences ending in the same symbol.
For example, LCS ("BANANA","ATANA") = LCS ("BANAN","ATAN")^"A". Continuing for the remaining common symbols, LCS ("BANANA","ATANA") = LCS ("BAN","AT")^"ANA".
If A and B are distinct symbols ( A ≠ B ), then LCS (X^A,Y^B) is one of the maximal-length strings in the set { LCS ( X ^ A , Y ), LCS ( X , Y ^ B ) }, for all strings X , Y .
For example, LCS ("ABCDEFG","BCDGK") is the longest string among LCS ("ABCDEFG","BCDG") and LCS ("ABCDEF","BCDGK"); if both happened to be of equal length, one of them could be chosen arbitrarily.
To realize the property, distinguish two cases:
Let two sequences be defined as follows: X = (x_1 x_2 ⋯ x_m) and Y = (y_1 y_2 ⋯ y_n). The prefixes of X are X_0, X_1, X_2, …, X_m; the prefixes of Y are Y_0, Y_1, Y_2, …, Y_n. Let LCS(X_i, Y_j) represent the set of longest common subsequences of the prefixes X_i and Y_j. This set of sequences is given by the following.
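The displayed recurrence does not survive in this copy; written out in its standard form (with ^ denoting appending a symbol, ε the empty sequence, and "longest" keeping whichever argument is longer, or both on a tie), it is:

LCS(X_i, Y_j) =
    { ε }                                             if i = 0 or j = 0
    { W ^ x_i : W ∈ LCS(X_{i−1}, Y_{j−1}) }           if i, j > 0 and x_i = y_j
    longest(LCS(X_i, Y_{j−1}), LCS(X_{i−1}, Y_j))     if i, j > 0 and x_i ≠ y_j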
To find the LCS of X_i and Y_j, compare x_i and y_j. If they are equal, then the sequence LCS(X_{i−1}, Y_{j−1}) is extended by that element, x_i. If they are not equal, then the longer of the two sequences, LCS(X_i, Y_{j−1}) and LCS(X_{i−1}, Y_j), is retained. (If they are the same length, but not identical, then both are retained.) The base case, when either X_i or Y_j is empty, is the empty string , ε.
The longest subsequence common to R = (GAC), and C = (AGCAT) will be found. Because the LCS function uses a "zeroth" element, it is convenient to define zero prefixes that are empty for these sequences: R 0 = ε; and C 0 = ε. All the prefixes are placed in a table with C in the first row (making it a c olumn header) and R in the first column (making it a r ow header).
This table is used to store the LCS sequence for each step of the calculation. The second column and second row have been filled in with ε, because when an empty sequence is compared with a non-empty sequence, the longest common subsequence is always an empty sequence.
LCS ( R 1 , C 1 ) is determined by comparing the first elements in each sequence. G and A are not the same, so this LCS gets (using the "second property") the longest of the two sequences, LCS ( R 1 , C 0 ) and LCS ( R 0 , C 1 ). According to the table, both of these are empty, so LCS ( R 1 , C 1 ) is also empty, as shown in the table below. The arrows indicate that the sequence comes from both the cell above, LCS ( R 0 , C 1 ) and the cell on the left, LCS ( R 1 , C 0 ).
LCS ( R 1 , C 2 ) is determined by comparing G and G. They match, so G is appended to the upper left sequence, LCS ( R 0 , C 1 ), which is (ε), giving (εG), which is (G).
For LCS ( R 1 , C 3 ), G and C do not match. The sequence above is empty; the one to the left contains one element, G. Selecting the longest of these, LCS ( R 1 , C 3 ) is (G). The arrow points to the left, since that is the longest of the two sequences.
LCS ( R 1 , C 4 ), likewise, is (G).
LCS ( R 1 , C 5 ), likewise, is (G).
For LCS ( R 2 , C 1 ), A is compared with A. The two elements match, so A is appended to ε, giving (A).
For LCS ( R 2 , C 2 ), A and G do not match, so the longest of LCS ( R 1 , C 2 ), which is (G), and LCS ( R 2 , C 1 ), which is (A), is used. In this case, they each contain one element, so this LCS is given two subsequences: (A) and (G).
For LCS ( R 2 , C 3 ), A does not match C. LCS ( R 2 , C 2 ) contains sequences (A) and (G); LCS( R 1 , C 3 ) is (G), which is already contained in LCS ( R 2 , C 2 ). The result is that LCS ( R 2 , C 3 ) also contains the two subsequences, (A) and (G).
For LCS ( R 2 , C 4 ), A matches A, which is appended to the upper left cell, giving (GA).
For LCS ( R 2 , C 5 ), A does not match T. Comparing the two sequences, (GA) and (G), the longest is (GA), so LCS ( R 2 , C 5 ) is (GA).
For LCS ( R 3 , C 1 ), C and A do not match, so LCS ( R 3 , C 1 ) gets the longest of the two sequences, (A).
For LCS ( R 3 , C 2 ), C and G do not match. Both LCS ( R 3 , C 1 ) and LCS ( R 2 , C 2 ) have one element. The result is that LCS ( R 3 , C 2 ) contains the two subsequences, (A) and (G).
For LCS ( R 3 , C 3 ), C and C match, so C is appended to LCS ( R 2 , C 2 ), which contains the two subsequences, (A) and (G), giving (AC) and (GC).
For LCS ( R 3 , C 4 ), C and A do not match. Combining LCS ( R 3 , C 3 ), which contains (AC) and (GC), and LCS ( R 2 , C 4 ), which contains (GA), gives a total of three sequences: (AC), (GC), and (GA).
Finally, for LCS ( R 3 , C 5 ), C and T do not match. The result is that LCS ( R 3 , C 5 ) also contains the three sequences, (AC), (GC), and (GA).
The final result is that the last cell contains all the longest subsequences common to (AGCAT) and (GAC); these are (AC), (GC), and (GA). The table also shows the longest common subsequences for every possible pair of prefixes. For example, for (AGC) and (GA), the longest common subsequences are (A) and (G).
Calculating the LCS of a row of the LCS table requires only the solutions to the current row and the previous row. Still, for long sequences, these sequences can get numerous and long, requiring a lot of storage space. Storage space can be saved by saving not the actual subsequences, but the length of the subsequence and the direction of the arrows, as in the table below.
The actual subsequences are deduced in a "traceback" procedure that follows the arrows backwards, starting from the last cell in the table. When the length decreases, the sequences must have had a common element. Several paths are possible when two arrows are shown in a cell. Below is the table for such an analysis, with numbers colored in cells where the length is about to decrease. The bold numbers trace out the sequence, (GA). [ 6 ]
For two strings X_{1..m} and Y_{1..n}, the length of the shortest common supersequence is related to the length of the LCS by [ 3 ] |SCS(X, Y)| = m + n − |LCS(X, Y)|.
The edit distance when only insertion and deletion are allowed (no substitution), or when the cost of a substitution is double the cost of an insertion or deletion, is: d(X, Y) = m + n − 2 · |LCS(X, Y)|.
The function below takes as input sequences X[1..m] and Y[1..n] , computes the length of the LCS between X[1..i] and Y[1..j] for all 1 ≤ i ≤ m and 1 ≤ j ≤ n , and stores it in C[i,j] . C[m,n] will contain the length of the LCS of X and Y . [ 7 ]
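The pseudocode itself does not survive in this copy; a Python sketch consistent with that description (indices shifted so that C[i][j] holds the length of an LCS of the first i characters of X and the first j characters of Y) might look like this:

def lcs_length(X, Y):
    m, n = len(X), len(Y)
    # C[i][j] = length of an LCS of X[:i] and Y[:j]; row 0 and column 0 stay 0.
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i][j - 1], C[i - 1][j])
    return C

For the example above, lcs_length("GAC", "AGCAT")[3][5] evaluates to 2, the length of the longest common subsequences (AC), (GC) and (GA).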
Alternatively, memoization could be used.
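A top-down memoized version (again only a sketch) computes the same values lazily, which can avoid filling parts of the table that are never needed:

from functools import lru_cache

def lcs_length_memo(X, Y):
    @lru_cache(maxsize=None)
    def rec(i, j):
        # Length of an LCS of X[:i] and Y[:j].
        # Note: recursion depth grows with len(X) + len(Y).
        if i == 0 or j == 0:
            return 0
        if X[i - 1] == Y[j - 1]:
            return rec(i - 1, j - 1) + 1
        return max(rec(i, j - 1), rec(i - 1, j))
    return rec(len(X), len(Y))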
The following function backtracks the choices taken when computing the C table. If the last characters in the prefixes are equal, they must be in an LCS. If not, check what gave the largest LCS of keeping x_i and y_j, and make the same choice. Just choose one if they were equally long. Call the function with i=m and j=n .
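A Python version of such a backtracking routine, matching that description (with C as produced by lcs_length above, called as backtrack(C, X, Y, len(X), len(Y))), could be:

def backtrack(C, X, Y, i, j):
    # Read out one LCS by retracing the choices made while filling C.
    if i == 0 or j == 0:
        return ""
    if X[i - 1] == Y[j - 1]:
        return backtrack(C, X, Y, i - 1, j - 1) + X[i - 1]
    if C[i][j - 1] > C[i - 1][j]:
        return backtrack(C, X, Y, i, j - 1)
    return backtrack(C, X, Y, i - 1, j)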
If choosing x_i and y_j would give an equally long result, read out both resulting subsequences. This is returned as a set by this function. Notice that this function is not polynomial, as it might branch in almost every step if the strings are similar.
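A set-valued variant along the same lines (a sketch; exponential in the worst case, as noted above) would be:

def backtrack_all(C, X, Y, i, j):
    # Read out the set of all LCSs of X[:i] and Y[:j].
    if i == 0 or j == 0:
        return {""}
    if X[i - 1] == Y[j - 1]:
        return {s + X[i - 1] for s in backtrack_all(C, X, Y, i - 1, j - 1)}
    result = set()
    if C[i][j - 1] >= C[i - 1][j]:
        result |= backtrack_all(C, X, Y, i, j - 1)
    if C[i - 1][j] >= C[i][j - 1]:
        result |= backtrack_all(C, X, Y, i - 1, j)
    return result

For the prefixes in the worked example, backtrack_all(C, "GAC", "AGCAT", 3, 5) yields the set {"AC", "GC", "GA"}.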
This function will backtrack through the C matrix, and print the diff between the two sequences. Notice that you will get a different answer if you exchange ≥ and < , with > and ≤ below.
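One way to write such a diff routine in Python (a sketch; the ≥ / < comparison mentioned above appears in the two elif guards, and swapping it merely selects a different but equally valid diff):

def print_diff(C, X, Y, i, j):
    # Print a diff of X[:i] against Y[:j]: "  " unchanged, "+ " added, "- " removed.
    if i > 0 and j > 0 and X[i - 1] == Y[j - 1]:
        print_diff(C, X, Y, i - 1, j - 1)
        print("  " + X[i - 1])
    elif j > 0 and (i == 0 or C[i][j - 1] >= C[i - 1][j]):
        print_diff(C, X, Y, i, j - 1)
        print("+ " + Y[j - 1])
    elif i > 0 and (j == 0 or C[i][j - 1] < C[i - 1][j]):
        print_diff(C, X, Y, i - 1, j)
        print("- " + X[i - 1])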
Let X be “ XMJYAUZ ” and Y be “ MZJAWXU ”. The longest common subsequence between X and Y is “ MJAU ”. The table C shown below, which is generated by the function LCSLength , shows the lengths of the longest common subsequences between prefixes of X and Y. The i th row and j th column shows the length of the LCS between X_{1..i} and Y_{1..j}.
The highlighted numbers show the path the function backtrack would follow from the bottom right to the top left corner, when reading out an LCS. If the current symbols in X and Y are equal, they are part of the LCS, and we go both up and left (shown in bold ). If not, we go up or left, depending on which cell has a higher number. This corresponds to either taking the LCS between X_{1..i−1} and Y_{1..j}, or X_{1..i} and Y_{1..j−1}.
Several optimizations can be made to the algorithm above to speed it up for real-world cases.
The C matrix in the naive algorithm grows quadratically with the lengths of the sequences. For two 100-item sequences, a 10,000-item matrix would be needed, and 10,000 comparisons would need to be done. In most real-world cases, especially source code diffs and patches, the beginnings and ends of files rarely change, and almost certainly not both at the same time. If only a few items have changed in the middle of the sequence, the beginning and end can be eliminated. This reduces not only the memory requirements for the matrix, but also the number of comparisons that must be done.
In the best-case scenario, a sequence with no changes, this optimization would eliminate the need for the C matrix. In the worst-case scenario, a change to the very first and last items in the sequence, only two additional comparisons are performed.
Most of the time taken by the naive algorithm is spent performing comparisons between items in the sequences. For textual sequences such as source code, lines rather than single characters are usually treated as the sequence elements. This can mean comparisons of relatively long strings for each step in the algorithm. Two optimizations can be made that can help to reduce the time these comparisons consume.
A hash function or checksum can be used to reduce the size of the strings in the sequences. That is, for source code where the average line is 60 or more characters long, the hash or checksum for that line might be only 8 to 40 characters long. Additionally, because hashes and checksums look effectively random, comparisons between them tend to short-circuit after only a few characters, whereas two distinct lines of source code often share a long common prefix, since lines are rarely changed at the beginning.
There are three primary drawbacks to this optimization. First, an amount of time needs to be spent beforehand to precompute the hashes for the two sequences. Second, additional memory needs to be allocated for the new hashed sequences. However, in comparison to the naive algorithm used here, both of these drawbacks are relatively minimal.
The third drawback is that of collisions . Since the checksum or hash is not guaranteed to be unique, there is a small chance that two different items could be reduced to the same hash. This is unlikely in source code, but it is possible. A cryptographic hash would therefore be far better suited for this optimization, as its entropy is going to be significantly greater than that of a simple checksum. However, the benefits may not be worth the setup and computational requirements of a cryptographic hash for small sequence lengths.
If only the length of the LCS is required, the matrix can be reduced to a 2 × min(n, m) matrix, or to a min(m, n) + 1 vector, as the dynamic programming approach requires only the current and previous columns of the matrix. Hirschberg's algorithm allows the construction of the optimal sequence itself in the same quadratic time and linear space bounds. [ 8 ]
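As a sketch of the length-only space reduction (two rows rather than the full table; Hirschberg's algorithm, which also recovers the sequence itself, is more involved and is not shown here):

def lcs_length_linear_space(X, Y):
    # Keep only the previous and current rows of the C table.
    if len(Y) > len(X):
        X, Y = Y, X          # iterate over the longer string, store rows sized by the shorter one
    prev = [0] * (len(Y) + 1)
    for i in range(1, len(X) + 1):
        curr = [0] * (len(Y) + 1)
        for j in range(1, len(Y) + 1):
            if X[i - 1] == Y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(curr[j - 1], prev[j])
        prev = curr
    return prev[len(Y)]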
Chowdhury and Ramachandran devised a quadratic-time linear-space algorithm [ 9 ] [ 10 ] for finding the LCS length along with an optimal sequence which runs faster than Hirschberg's algorithm in practice due to its superior cache performance. [ 9 ] The algorithm has an asymptotically optimal cache complexity under the Ideal cache model . [ 11 ] Interestingly, the algorithm itself is cache-oblivious [ 11 ] meaning that it does not make any choices based on the cache parameters (e.g., cache size and cache line size) of the machine.
Several algorithms exist that run faster than the presented dynamic programming approach. One of them is the Hunt–Szymanski algorithm , which typically runs in O((n + r) log n) time (for n > m), where r is the number of matches between the two sequences. [ 12 ] For problems with a bounded alphabet size, the Method of Four Russians can be used to reduce the running time of the dynamic programming algorithm by a logarithmic factor. [ 13 ]
Beginning with Chvátal & Sankoff (1975) , [ 14 ] a number of researchers have investigated the behavior of the longest common subsequence length when the two given strings are drawn randomly from the same alphabet. When the alphabet size is constant, the expected length of the LCS is proportional to the length of the two strings, and the constants of proportionality (depending on alphabet size) are known as the Chvátal–Sankoff constants . Their exact values are not known, but upper and lower bounds on their values have been proven, [ 15 ] and it is known that they grow inversely proportionally to the square root of the alphabet size. [ 16 ] Simplified mathematical models of the longest common subsequence problem have been shown to be controlled by the Tracy–Widom distribution . [ 17 ]
For decades, it had been considered folklore that the longest palindromic subsequence of a string could be computed by finding the longest common subsequence between the string and its reversal, using the classical dynamic programming approach introduced by Wagner and Fischer. However, a formal proof of the correctness of this method was only established in 2024 by Brodal, Fagerberg, and Moldrup Rysgaard. [ 18 ] | https://en.wikipedia.org/wiki/Longest_common_subsequence |
In computer science , the longest increasing subsequence problem aims to find a subsequence of a given sequence in which the subsequence's elements are sorted in an ascending order and in which the subsequence is as long as possible. This subsequence is not necessarily contiguous or unique. Longest increasing subsequences are studied in the context of various disciplines related to mathematics , including algorithmics , random matrix theory , representation theory , and physics . [ 1 ] [ 2 ] The longest increasing subsequence problem is solvable in time O(n log n), where n denotes the length of the input sequence. [ 3 ]
In the first 16 terms of the binary Van der Corput sequence , 0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15, one of the longest increasing subsequences is 0, 2, 6, 9, 11, 15.
This subsequence has length six; the input sequence has no seven-member increasing subsequences. The longest increasing subsequence in this example is not the only solution: for instance,
0, 4, 6, 9, 11, 15 and 0, 2, 6, 9, 13, 15 are other increasing subsequences of equal length in the same input sequence.
The longest increasing subsequence problem is closely related to the longest common subsequence problem , which has a quadratic time dynamic programming solution: the longest increasing subsequence of a sequence S is the longest common subsequence of S and T, where T is the result of sorting S. However, for the special case in which the input is a permutation of the integers 1, 2, …, n, this approach can be made much more efficient, leading to time bounds of the form O(n log log n). [ 4 ]
The largest clique in a permutation graph corresponds to the longest decreasing subsequence of the permutation that defines the graph (assuming the original non-permuted sequence is sorted from lowest value to highest). Similarly, the maximum independent set in a permutation graph corresponds to the longest non-decreasing subsequence. Therefore, longest increasing subsequence algorithms can be used to solve the clique problem efficiently in permutation graphs. [ 5 ]
In the Robinson–Schensted correspondence between permutations and Young tableaux , the length of the first row of the tableau corresponding to a permutation equals the length of the longest increasing subsequence of the permutation, and the length of the first column equals the length of the longest decreasing subsequence. [ 3 ]
The algorithm outlined below solves the longest increasing subsequence problem efficiently with arrays and binary searching .
It processes the sequence elements in order, maintaining the longest increasing subsequence found so far. Denote the sequence values as X[0], X[1], …, etc. Then, after processing X[i], the algorithm will have stored an integer L and values in two arrays: M[l] holds the index k of the smallest value X[k] (with k ≤ i) such that an increasing subsequence of length l ends at X[k], and P[k] holds the index of the predecessor of X[k] in the longest increasing subsequence ending at X[k].
Because the algorithm below uses zero-based numbering , for clarity M is padded with M[0], which goes unused so that M[l] corresponds to a subsequence of length l. A real implementation can skip M[0] and adjust the indices accordingly.
Note that, at any point in the algorithm, the sequence X[M[1]], X[M[2]], …, X[M[L]] is increasing. For, if there is an increasing subsequence of length l ≥ 2 ending at X[M[l]], then there is also a subsequence of length l − 1 ending at a smaller value: namely the one ending at X[P[M[l]]]. Thus, we may do binary searches in this sequence in logarithmic time.
The algorithm, then, proceeds as follows:
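The original step-by-step pseudocode is not reproduced in this copy; the following Python sketch follows the description above (including the unused padding entry M[0]; names match the arrays just described):

def longest_increasing_subsequence(X):
    n = len(X)
    P = [0] * n          # P[k]: index of the predecessor of X[k] in an LIS ending at X[k]
    M = [0] * (n + 1)    # M[l]: index k of the smallest X[k] ending an increasing subsequence of length l
    L = 0
    for i in range(n):
        # Binary search for the largest l in 1..L with X[M[l]] < X[i].
        lo, hi = 1, L
        while lo <= hi:
            mid = (lo + hi) // 2
            if X[M[mid]] < X[i]:
                lo = mid + 1
            else:
                hi = mid - 1
        new_l = lo                    # X[i] extends a subsequence of length new_l - 1
        P[i] = M[new_l - 1]
        M[new_l] = i
        L = max(L, new_l)
    # Follow the predecessor links backwards to reconstruct one LIS.
    lis = [0] * L
    k = M[L]
    for l in range(L - 1, -1, -1):
        lis[l] = X[k]
        k = P[k]
    return lis

For example, longest_increasing_subsequence([3, 1, 4, 1, 5, 9, 2, 6]) returns [1, 4, 5, 6], one longest increasing subsequence of that input.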
Because the algorithm performs a single binary search per sequence element, its total time can be expressed using Big O notation as O(n log n). Fredman (1975) discusses a variant of this algorithm, which he credits to Donald Knuth ; in the variant that he studies, the algorithm tests whether each value X[i] can be used to extend the current longest increasing sequence, in constant time, prior to doing the binary search. With this modification, the algorithm uses at most n log₂ n − n log₂ log₂ n + O(n) comparisons in the worst case, which is optimal for a comparison-based algorithm up to the constant factor in the O(n) term. [ 6 ]
Example run
According to the Erdős–Szekeres theorem , any sequence of n² + 1 distinct integers has an increasing or a decreasing subsequence of length n + 1. [ 7 ] [ 8 ] For inputs in which each permutation of the input is equally likely, the expected length of the longest increasing subsequence is approximately 2√n. [ 9 ] [ 2 ]
In the limit as n approaches infinity, the Baik–Deift–Johansson theorem says that the length of the longest increasing subsequence of a randomly permuted sequence of n items has a distribution approaching the Tracy–Widom distribution , the distribution of the largest eigenvalue of a random matrix in the Gaussian unitary ensemble . [ 10 ]
The longest increasing subsequence has also been studied in the setting of online algorithms , in which the elements of a sequence of independent random variables with continuous distribution F – or alternatively the elements of a random permutation – are presented one at a time to an algorithm that must decide whether to include or exclude each element, without knowledge of the later elements. In this variant of the problem, which allows for interesting applications in several contexts, it is possible to devise an optimal selection procedure that, given a random sample of size n as input, will generate an increasing sequence with maximal expected length approximately √(2n). [ 11 ] The length of the increasing subsequence selected by this optimal procedure has variance approximately equal to √(2n)/3, and its limiting distribution is asymptotically normal after the usual centering and scaling. [ 12 ] The same asymptotic results hold with more precise bounds for the corresponding problem in the setting of a Poisson arrival process. [ 13 ] A further refinement in the Poisson process setting is given through the proof of a central limit theorem for the optimal selection process
which holds, with a suitable normalization, in a more complete sense than one would expect. The proof yields not only the "correct" functional limit theorem
but also the (singular) covariance matrix of the three-dimensional process summarizing all interacting processes. [ 14 ] | https://en.wikipedia.org/wiki/Longest_increasing_subsequence |
In synthetic chemistry , the longest linear sequence , commonly abbreviated as LLS , is the largest number of reactions required to go from the starting materials to the products in a multistep sequence. [ 1 ]
This concept is very important when trying to optimize a synthetic plan. Since every reaction step can decrease the yield of the product, reducing the value of the LLS is a good way to increase the quantity of material obtained at the end: this can be done by devising quicker methods to couple the fragments or by introducing convergence . [ 2 ] However, improving sequences which are not the longest linear one will generally not produce an overall enhancement to the yield, since the benefits accrue to intermediates that were already present in excess, assuming that the yields of the individual steps are roughly equal.
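As a rough, purely hypothetical illustration (the 80% per-step yield is an assumption, not data from any real synthesis), the overall yield of a linear sequence falls off geometrically with its length:

def overall_yield(step_yield, n_steps):
    # Fraction of material surviving a linear sequence, assuming the same
    # (idealized) fractional yield at every step.
    return step_yield ** n_steps

overall_yield(0.80, 10)   # ≈ 0.107: a 10-step linear route keeps about 11% of the material
overall_yield(0.80, 5)    # ≈ 0.328: with an LLS of 5 steps, about 33% survives the longest branch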
This chemical reaction article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Longest_linear_sequence |
In computer science , the longest repeated substring problem is the problem of finding the longest substring of a string that occurs at least twice.
This problem can be solved in linear time and space Θ(n) by building a suffix tree for the string (with a special end-of-string symbol like '$' appended), and finding the deepest internal node in the tree with more than one child. Depth is measured by the number of characters traversed from the root. The string spelled by the edges from the root to such a node is a longest repeated substring. The problem of finding the longest substring with at least k occurrences can be solved by first preprocessing the tree to count the number of leaf descendants for each internal node, and then finding the deepest node with at least k leaf descendants. To avoid overlapping repeats, one can check that the list of suffix lengths has no consecutive elements with less than prefix-length difference.
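A linear-time suffix tree construction is too long to sketch here; the following simpler Python sketch instead sorts all suffixes and takes the longest common prefix of lexicographically adjacent ones (the longest repeated substring is always such a prefix), trading the optimal running time for brevity:

def longest_repeated_substring(s):
    # Sort the suffixes; any repeated substring is a common prefix of two suffixes,
    # and the longest one is a common prefix of two neighbours in sorted order.
    suffixes = sorted(s[i:] for i in range(len(s)))
    best = ""
    for a, b in zip(suffixes, suffixes[1:]):
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        if k > len(best):
            best = a[:k]
    return best

For instance, longest_repeated_substring("ATCGATCGA") returns "ATCGA", matching the example below.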
In the figure with the string "ATCGATCGA$", the longest substring that repeats at least twice is "ATCGA".
This algorithms or data structures -related article is a stub . You can help Wikipedia by expanding it .
This combinatorics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Longest_repeated_substring_problem |
Longevity may refer to especially long-lived members of a population, whereas life expectancy is defined statistically as the average number of years remaining at a given age. For example, a population's life expectancy at birth is the same as the average age at death for all people born in the same year (in the case of cohorts ).
Longevity studies may involve putative methods to extend life. Longevity has been a topic not only for the scientific community but also for writers of travel , science fiction , and utopian novels. The legendary fountain of youth appeared in the work of the Ancient Greek historian Herodotus .
There are difficulties in authenticating the longest human life span , owing to inaccurate or incomplete birth statistics. Fiction, legend, and folklore have proposed or claimed life spans in the past or future vastly longer than those verified by modern standards, and longevity narratives and unverified longevity claims frequently speak of their existence in the present.
A life annuity is a form of longevity insurance .
Various factors contribute to an individual's longevity. Significant factors in life expectancy include gender , genetics , access to health care , hygiene , diet and nutrition , exercise , lifestyle , and crime rates . Below is a list of life expectancies in different types of countries: [ 1 ]
Population longevities are increasing as life expectancies around the world grow: [ 2 ] [ 3 ]
The Gerontology Research Group validates current longevity records by modern standards, and maintains a list of supercentenarians ; many other unvalidated longevity claims exist. Record-holding individuals include: [ 4 ] [ 5 ] [ 6 ]
Evidence-based studies indicate that longevity is based on two major factors: genetics and lifestyle . [ 10 ]
Twin studies have estimated that approximately 20-30% of the variation in human lifespan can be related to genetics , with the rest due to individual behaviors and environmental factors which can be modified. [ 11 ] Although over 200 gene variants have been associated with longevity according to a US-Belgian-UK research database of human genetic variants [ 12 ] these explain only a small fraction of the heritability. [ 13 ]
Lymphoblastoid cell lines established from blood samples of centenarians have significantly higher activity of the DNA repair protein PARP ( Poly ADP ribose polymerase ) than cell lines from younger (20 to 70 year old) individuals. [ 14 ] The lymphocytic cells of centenarians have characteristics typical of cells from young people, both in their capability of priming the mechanism of repair after H 2 O 2 sublethal oxidative DNA damage and in their PARP gene expression. [ 15 ] These findings suggest that elevated PARP gene expression contributes to the longevity of centenarians, consistent with the DNA damage theory of aging . [ 16 ]
In July 2020, scientists used public biological data on 1.75 million people with known lifespans and identified 10 genomic loci which appear to intrinsically influence healthspan , lifespan, and longevity – half of which had not previously been reported at genome-wide significance, and most of which are associated with cardiovascular disease – and identified haem metabolism as a promising candidate for further research within the field. Their study suggests that high levels of iron in the blood likely reduce, and genes involved in metabolising iron likely increase, healthy years of life in humans. [ 18 ] [ 17 ]
Longevity is a highly plastic trait, and traits that influence its components respond to physical (static) environments and to wide-ranging life-style changes: physical exercise, dietary habits, living conditions, and pharmaceutical as well as nutritional interventions. [ 19 ] [ 20 ] [ 21 ] A 2012 study found that even modest amounts of leisure time physical exercise can extend life expectancy by as much as 4.5 years. [ 22 ]
As of 2021, there is no clinical evidence that any dietary practice contributes to human longevity. [ 23 ] Although health can be influenced by diet, including the type of foods consumed, the amount of calories ingested, and the duration and frequency of fasting periods, [ 24 ] there is no good clinical evidence that fasting promotes longevity in humans, as of 2021. [ 23 ] [ 25 ] [ 26 ]
Calorie restriction is a widely researched intervention to assess effects on aging, defined as a sustained reduction in dietary energy intake compared to the energy required for weight maintenance. [ 23 ] [ 25 ] To ensure metabolic homeostasis , the diet during calorie restriction must provide sufficient energy, micronutrients, and fiber. [ 25 ] Some studies on rhesus monkeys showed that restricting calorie intake resulted in lifespan extension, while other animal studies did not detect a significant change. [ 23 ] [ 27 ] According to preliminary research in humans, there is little evidence that calorie restriction affects lifespan. [ 23 ] [ 25 ] There is a link between diet and obesity and consequent obesity-associated morbidity .
Four well-studied biological pathways that are known to regulate aging, and whose modulation has been shown to influence longevity are Insulin/IGF-1 , mechanistic target of rapamycin ( mTOR ), AMP-activating protein kinase ( AMPK ), and Sirtuin pathways. [ 28 ] [ 29 ]
In preindustrial times, deaths at young and middle age were more common than they are today. This is not due to genetics, but because of environmental factors such as disease, accidents, and malnutrition, especially since the former were not generally treatable with pre-20th-century medicine. Deaths from childbirth were common for women, and many children did not live past infancy. In addition, most people who did attain old age were likely to die quickly from the above-mentioned untreatable health problems. Despite this, there are several examples of pre-20th-century individuals attaining lifespans of 85 years or greater, including John Adams , Cato the Elder , Thomas Hobbes , Christopher Polhem , and Michelangelo . This was also true for poorer people like peasants or laborers . [ citation needed ] Genealogists will almost certainly find ancestors living to their 70s, 80s and even 90s several hundred years ago.
For example, an 1871 census in the UK (the first of its kind, but personal data from other censuses dates back to 1841 and numerical data back to 1801) found the average male life expectancy as being 44, but if infant mortality is subtracted, males who lived to adulthood averaged 75 years. The present life expectancy in the UK is 77 years for males and 81 for females, while the United States averages 74 for males and 80 for females.
Studies have shown that black American males have the shortest lifespans of any group of people in the US, averaging only 69 years (Asian-American females average the longest). [ 30 ] This reflects overall poorer health and greater prevalence of heart disease, obesity, diabetes, and cancer among black American men.
Women normally outlive men. Theories for this include smaller bodies that place lesser strain on the heart (women have lower rates of cardiovascular disease ) and a reduced tendency to engage in physically dangerous activities. [ 31 ] Conversely, women are more likely to participate in health-promoting activities. [ 32 ] The X chromosome also contains more genes related to the immune system, and women tend to mount a stronger immune response to pathogens than men. [ 33 ] However, the idea that men have weaker immune systems due to the supposed immuno-suppressive actions of testosterone is unfounded. [ 34 ]
There is debate as to whether the pursuit of longevity is a worthwhile health care goal. Bioethicist Ezekiel Emanuel , who is also one of the architects of ObamaCare , has argued that the pursuit of longevity via the compression of morbidity explanation is a "fantasy" and that longevity past age 75 should not be considered an end in itself. [ 35 ] This has been challenged by neurosurgeon Miguel Faria , who states that life can be worthwhile in healthy old age, that the compression of morbidity is a real phenomenon, and that longevity should be pursued in association with quality of life. [ 36 ] Faria has discussed how longevity in association with leading healthy lifestyles can lead to the postponement of senescence as well as happiness and wisdom in old age. [ 37 ]
Most biological organisms have a naturally limited longevity due to aging , unlike a rare few that are considered biologically immortal .
Given that different species of animals and plants have different potentials for longevity, the disrepair accumulation theory of aging tries to explain how the potential for longevity of an organism is sometimes positively correlated to its structural complexity. It suggests that while biological complexity increases individual lifespan, it is counteracted in nature since the survivability of the overall species may be hindered when it results in a prolonged development process , which is an evolutionarily vulnerable state. [ 38 ]
According to the antagonistic pleiotropy hypothesis , one of the reasons biological immortality is so rare is that certain categories of gene expression that are beneficial in youth become deleterious at an older age.
Longevity myths are traditions about long-lived people (generally supercentenarians ), either as individuals or groups of people, and practices that have been believed to confer longevity, but for which scientific evidence does not support the ages claimed or the reasons for the claims. [ 39 ] [ 40 ] A comparison and contrast of "longevity in antiquity" (such as the Sumerian King List , the genealogies of Genesis , and the Persian Shahnameh ) with "longevity in historical times" (common-era cases through twentieth-century news reports) is elaborated in detail in Lucian Boia 's 2004 book Forever Young: A Cultural History of Longevity from Antiquity to the Present and other sources. [ 41 ]
After the death of Juan Ponce de León , Gonzalo Fernández de Oviedo y Valdés wrote in Historia General y Natural de las Indias (1535) that Ponce de León was looking for the waters of Bimini to cure his aging. [ 42 ] Traditions that have been believed to confer greater human longevity also include alchemy , [ 43 ] such as that attributed to Nicolas Flamel . In the modern era, the Okinawa diet has some reputation of linkage to exceptionally high ages. [ 44 ]
Longevity claims may be subcategorized into four groups: "In late life, very old people often tend to advance their ages at the rate of about 17 years per decade .... Several celebrated super-centenarians (over 110 years) are believed to have been double lives (father and son, relations with the same names or successive bearers of a title) .... A number of instances have been commercially sponsored, while a fourth category of recent claims are those made for political ends ...." [ 45 ] The estimate of 17 years per decade was corroborated by the 1901 and 1911 British censuses. [ 45 ] Time magazine considered that, by the Soviet Union, longevity had been elevated to a state-supported "Methuselah cult". [ 46 ]
Robert Ripley regularly reported supercentenarian claims in Ripley's Believe It or Not! , usually citing his own reputation as a fact-checker to claim reliability. [ 47 ]
Longevity in other animals can shed light on the determinants of life expectancy in humans, especially when found in related mammals . However, important contributions to longevity research have been made by research in other species, ranging from yeast to flies to worms . In fact, some closely related species of vertebrates can have dramatically different life expectancies, demonstrating that relatively small genetic changes can have a dramatic impact on aging. For instance, Pacific Ocean rockfishes have widely varying lifespans. The species Sebastes minor lives a mere 11 years while its cousin Sebastes aleutianus can live for more than 2 centuries. [ 48 ] Similarly, a chameleon , Furcifer labordi , is the current record holder for shortest lifespan among tetrapods , with only 4–5 months to live. [ 49 ] By contrast, some of its relatives, such as Furcifer pardalis , have been found to live up to 6 years. [ 50 ]
There are studies of aging-related characteristics of, and aging in, long-lived animals like various turtles [ 51 ] [ 52 ] and plants like Ginkgo biloba trees. [ 53 ] They have identified potentially causal protective traits and suggest many of the species show slow or negligible senescence (aging). [ 54 ] [ 51 ] [ 52 ] The jellyfish T. dohrnii is biologically immortal and has been studied by comparative genomics . [ 55 ] [ 56 ]
Honey bees ( Apis mellifera ) are eusocial insects that display dramatic caste-specific differences in longevity. Queen bees live for an average of 1-2 years, compared to workers who live on average 15-38 days in summer and 150-200 days in winter. [ 57 ] Worker honey bees with high amounts of flight experience exhibit increased DNA damage in flight muscle, as measured by elevated 8-Oxo-2'-deoxyguanosine , compared to bees with less flight experience. [ 58 ] This increased DNA damage is likely due to an imbalance of pro- and anti-oxidants during flight-associated oxidative stress . Flight induced oxidative DNA damage appears to hasten senescence and reduce longevity in A. mellifera . [ 58 ]
Gene editing via CRISPR - Cas9 and other methods have significantly altered lifespans in animals. [ 66 ] [ 67 ] [ 68 ]
Media related to Longevity at Wikimedia Commons | https://en.wikipedia.org/wiki/Longevity |
In the life extension movement , longevity escape velocity ( LEV ), actuarial escape velocity [ 2 ] or biological escape velocity [ 3 ] is a hypothetical situation in which one's remaining life expectancy (not life expectancy at birth ) is extended longer than the time that is passing. For example, in a given year in which longevity escape velocity would be maintained, medical advances would increase people's remaining life expectancy more than the year that just went by.
The term is meant as an analogy to the concept of escape velocity in physics, which is the minimum speed required for an object to indefinitely move away from a gravitational body despite the gravitational force pulling the object towards the body.
For many years in the past, life expectancy at each age has increased slightly every year as treatment strategies and technologies have improved. At present, more than one year of research is required for each additional year of expected life. Longevity escape velocity occurs when this ratio reverses, so that life expectancy increases faster than one year per one year of research, as long as that rate of advance is sustainable.
Mouse lifespan research has been the most contributive to conclusive evidence on the matter, since mice require only a few years before research results can be concluded. [ 4 ] [ 5 ]
The term "longevity escape velocity" was conceived of by futurist David Gobel , co-founder of the Methuselah Foundation , and coined by biogerontologist Aubrey de Grey in a 2004 paper, [ 4 ] but the concept has been present in the life extension community since at least the 1970s, such as in Robert Anton Wilson 's essay Next Stop, Immortality . [ 6 ] The concept is also part of the fictional history leading to multi-century youthful lifespans in the science fiction series The Mars Trilogy by Kim Stanley Robinson . More recent proponents include Gobel and technologist Ray Kurzweil , [ 7 ] who named one of his books, Fantastic Voyage: Live Long Enough to Live Forever , after the concept. Both claim that by putting further pressure on science and medicine to focus research on increasing the limits of aging , rather than continuing along at its current pace, more lives will be saved in the future, even if the benefit is not immediately apparent. [ 4 ]
The idea was even more popularized with the publishing of Aubrey de Grey and Michael Rae's book, Ending Aging , in 2007. de Grey has also popularized the word " Methuselarity " which describes the same concept. [ 8 ]
Ray Kurzweil predicts that longevity escape velocity will be reached before humanity realizes it. [ 9 ] [ 10 ] In 2018, he predicted that it would be reached in 10–12 years, meaning that the milestone would occur around 2028–2030. [ 11 ] In 2024, writing in The Economist , Kurzweil revised his prediction to 2029–2035 and explained how AI would help to simulate biological processes. [ 12 ] Aubrey de Grey has also similarly predicted that humanity has a 50 percent chance of reaching longevity escape velocity in the mid to late 2030s. [ 8 ] [ 13 ] | https://en.wikipedia.org/wiki/Longevity_escape_velocity |
Aspen is a common name for certain tree species in the Populus sect. Populus , of the Populus (poplar) genus . [ 1 ]
These species are called aspens:
Aspen trees are all native to cold regions with cool summers, in the north of the northern hemisphere , extending south at high-altitude areas such as mountains or high plains. They are all medium-sized deciduous trees reaching 15–30 m (50–100 ft) tall. In North America, the aspen is referred to as quaking aspen or trembling aspen because the leaves "quake" or tremble in the wind. This is due to their flattened petioles which reduce aerodynamic drag on the trunk and branches.
Aspens typically grow in environments that are otherwise dominated by coniferous tree species, and which are often lacking other large deciduous tree species. Aspens have evolved several adaptations that aid their survival in such environments. One is the flattened leaf petiole, which reduces aerodynamic drag during high winds and decreases the likelihood of trunk or branch damage. Dropping leaves in the winter (like most but not all other deciduous plants) also helps to prevent damage from heavy winter snow. Additionally, the bark is photosynthetic, meaning that growth is still possible after the leaves have been dropped. The bark also contains lenticels that serve as pores for gas exchange, whose respiratory function resembles that of the stomata on leaves.
Aspens are also aided by the rhizomatic nature of their root systems. Most aspens grow in large clonal colonies , derived from a single seedling, and spread by means of root suckers ; new stems in the colony may appear at up to 30–40 m (100–130 ft) from the parent tree. Each individual tree can live for 40–150 years above ground, but the root system of the colony is long-lived. In some cases, this is for thousands of years, sending up new trunks as the older trunks die off above ground. For this reason, it is considered to be an indicator of ancient woodlands. One such colony in Utah, given the nickname of " Pando ", has been estimated to be as old as 80,000 years; [ 3 ] if validated, this would make it possibly the oldest living colony of aspens. Some aspen colonies become very large with time, spreading about 1 m (3 ft) per year, eventually covering many hectares. They are able to survive forest fires , because the roots are below the heat of the fire, and new sprouts appear after the fire burns out. The high stem turnover rate combined with the clonal growth leads to proliferation in aspen colonies. The high stem turnover regime supports a diverse herbaceous understory. [ citation needed ]
Aspen seedlings do not thrive in the shade, and it is difficult for seedlings to establish in an already mature aspen stand. Fire indirectly benefits aspen trees, since it allows the saplings to flourish in open sunlight in the burned landscape, devoid of other competing tree species. Aspens have increased in popularity as a forestry cultivation species, mostly because of their fast growth rate and ability to regenerate from sprouts. This lowers the cost of reforestation after harvesting since no planting or sowing is required.
Recently, aspen populations have been declining in some areas ("Sudden Aspen Death"). This has been attributed to several different factors, such as climate change , which exacerbates drought and modifies precipitation patterns. Recruitment failure from herbivory or grazing prevents new trees from coming up after old trees die. Additionally, successional replacement by conifers due to fire suppression alters forest diversity and creates conditions where aspen may be at less of an advantage.
In contrast with many trees, aspen bark is base-rich , meaning aspens are important hosts for bryophytes [ 4 ] and act as food plants for the larvae of butterfly ( Lepidoptera ) species—see List of Lepidoptera that feed on poplars.
Young aspen bark is an important seasonal forage for the European hare and other animals in early spring. Aspen is also a preferred food of the European beaver . Elk , deer , and moose not only eat the leaves but also strip the bark with their front teeth.
Aspen wood is white and soft, but fairly strong, and has low flammability. It has a number of uses, notably for making matches and paper where its low flammability makes it safer to use than most other woods. [ citation needed ] Shredded aspen wood is used for packing and stuffing, sometimes called excelsior (wood wool) . Aspen flakes are the most common species of wood used to make oriented strand boards . [ 5 ] It is also a popular animal bedding, since it lacks the phenols associated with pine and juniper , which are thought to cause respiratory system ailments in some animals. Heat-treated aspen is a popular material for the interiors of saunas . While standing trees sometimes tend to rot from the heart outward, the dry timber weathers very well, becoming silvery-grey and resistant to rotting and warping, and has traditionally been used for rural construction in the northwestern regions of Russia (especially for roofing, in the form of thin slats). | https://en.wikipedia.org/wiki/Longevity_of_aspen_trees |
Lobsters are malacostracan decapod crustaceans of the family Nephropidae [ 1 ] or its synonym Homaridae . [ 2 ] They have long bodies with muscular tails and live in crevices or burrows on the sea floor. Three of their five pairs of legs have claws, including the first pair, which are usually much larger than the others. Highly prized as seafood , lobsters are economically important and are often one of the most profitable commodities in the coastal areas they populate. [ 3 ]
Commercially important species include two species of Homarus from the northern Atlantic Ocean and scampi (which look more like a shrimp , or a "mini lobster")—the Northern Hemisphere genus Nephrops and the Southern Hemisphere genus Metanephrops . [ citation needed ]
Although several other groups of crustaceans have the word "lobster" in their names, the unqualified term "lobster" generally refers to the clawed lobsters of the family Nephropidae. [ 4 ] Clawed lobsters are not closely related to spiny lobsters or slipper lobsters , which have no claws ( chelae ), or to squat lobsters . The most similar living relatives of clawed lobsters are the reef lobsters and the three families of freshwater crayfish .
Lobsters are invertebrates with a hard protective exoskeleton . [ 5 ] Like most arthropods , lobsters must shed their exoskeleton to grow, which leaves them vulnerable. During the shedding process, several species change color. Lobsters have eight walking legs; the front three pairs bear claws, the first of which are larger than the others. The front pincers are also biologically considered legs, which is why lobsters belong to the order Decapoda ("ten-footed"). [ 6 ] Although lobsters are largely bilaterally symmetrical like most other arthropods, some genera possess unequal, specialized claws. [ citation needed ]
Lobster anatomy includes two main body parts: the cephalothorax and the abdomen . The cephalothorax fuses the head and the thorax , both of which are covered by a chitinous carapace . The lobster's head bears antennae , antennules, mandibles , the first and second maxillae . The head also bears the (usually stalked) compound eyes . Because lobsters live in murky environments at the bottom of the ocean, they mostly use their antennae as sensors. The lobster eye has a reflective structure above a convex retina. In contrast, most complex eyes use refractive ray concentrators (lenses) and a concave retina. [ 7 ] The lobster's thorax is composed of maxillipeds , appendages that function primarily as mouthparts, and pereiopods , appendages that serve for walking and for gathering food. The abdomen includes pleopods (also known as swimmerets ), used for swimming, as well as the tail fan, composed of uropods and the telson .
Lobsters, like snails and spiders, have blue blood due to the presence of hemocyanin , which contains copper . [ 8 ] In contrast, vertebrates and many other animals have red blood from iron -rich hemoglobin . Lobsters possess a green hepatopancreas , called the tomalley by chefs, which functions as the animal's liver and pancreas . [ 9 ]
Lobsters of the family Nephropidae are similar in overall form to several other related groups. They differ from freshwater crayfish in lacking the joint between the last two segments of the thorax, [ 10 ] and they differ from the reef lobsters of the family Enoplometopidae in having full claws on the first three pairs of legs, rather than just one. [ 10 ] The distinctions from fossil families such as the Chilenophoberidae are based on the pattern of grooves on the carapace. [ 10 ]
Analysis of the neural gene complement revealed extraordinary development of the chemosensory machinery, including a profound diversification of ligand-gated ion channels and secretory molecules. [ 11 ]
Typically, lobsters are dark colored, either bluish-green or greenish-brown, to blend in with the ocean floor, but they can be found in many colors. [ 12 ] [ 13 ] Lobsters with atypical coloring are extremely rare, accounting for only a few of the millions caught every year, and due to their rarity, they usually are not eaten, instead being released back into the wild or donated to aquariums . Often, in cases of atypical coloring, there is a genetic factor, such as albinism or hermaphroditism . Special coloring does not appear to affect the lobster's taste once cooked; except for albinos, all lobsters possess astaxanthin, which is responsible for the bright red color lobsters turn after being cooked. [ 14 ]
Lobsters live up to an estimated 45 to 50 years in the wild, although determining age is difficult: [ 39 ] it is typically estimated from size and other variables. Newer techniques may lead to more accurate age estimates. [ 40 ] [ 41 ] [ 42 ]
Research suggests that lobsters may not slow down, weaken, or lose fertility with age and that older lobsters may be more fertile than younger lobsters. [ 43 ] This longevity may be due to telomerase , an enzyme that repairs long repetitive sections of DNA sequences at the ends of chromosomes, referred to as telomeres . Telomerase is expressed by most vertebrates during embryonic stages but is generally absent from adult stages of life. [ 44 ] However, unlike most vertebrates, lobsters express telomerase as adults through most tissue, which has been suggested to be related to their longevity. [ citation needed ] Telomerase is especially present in green spotted lobsters, whose markings are thought to be produced by the enzyme interacting with their shell pigmentation. [ 45 ] [ 46 ] [ 47 ] Lobster longevity is limited by their size. Moulting requires metabolic energy, and the larger the lobster, the more energy is needed; 10 to 15% of lobsters die of exhaustion during moulting, while in older lobsters, moulting ceases and the exoskeleton degrades or collapses entirely, leading to death. [ 48 ] [ 49 ]
Like many decapod crustaceans, lobsters grow throughout life and can add new muscle cells at each moult. [ 50 ] Lobster longevity allows them to reach impressive sizes. According to Guinness World Records , the largest lobster ever caught was in Nova Scotia , Canada, weighing 20.15 kilograms (44.4 lb). [ 51 ]
Lobsters live in all oceans, on rocky, sandy, or muddy bottoms from the shoreline to beyond the edge of the continental shelf , contingent largely on size and age. [ 52 ] Smaller, younger lobsters are typically found in crevices or in burrows under rocks and do not typically migrate. Larger, older lobsters are more likely to be found in deeper seas, migrating back to shallow waters seasonally. [ 52 ]
Lobsters are omnivores and typically eat live prey such as fish, mollusks, other crustaceans, worms, and some plant life. They scavenge if necessary and are known to resort to cannibalism in captivity. However, when lobster skin is found in lobster stomachs, this is not necessarily evidence of cannibalism because lobsters eat their shed skin after moulting. [ 53 ] While cannibalism was thought to be nonexistent among wild lobster populations, it was observed in 2012 by researchers studying wild lobsters in Maine. These first known instances of lobster cannibalism in the wild are theorized to be attributed to a local population explosion among lobsters caused by the disappearance of many of the Maine lobsters' natural predators. [ 54 ]
In general, lobsters are 25–50 cm (10–20 in) long and move by slowly walking on the sea floor. However, they swim backward quickly when they flee by curling and uncurling their abdomens . A speed of 5 m/s (11 mph) has been recorded. [ 55 ] This is known as the caridoid escape reaction .
Symbiotic animals of the genus Symbion , the only known member of the phylum Cycliophora , live exclusively on lobster gills and mouthparts. [ 56 ] Different species of Symbion have been found on the three commercially important lobsters of the North Atlantic Ocean: Nephrops norvegicus , Homarus gammarus , and Homarus americanus . [ 56 ]
Lobster is commonly served boiled or steamed in the shell. Diners crack the shell with lobster crackers and fish out the meat with lobster picks . The meat is often eaten with melted butter and lemon juice . Lobster is also used in soup, bisque , lobster rolls , cappon magro , and dishes such as lobster Newburg and lobster Thermidor .
Cooks boil or steam live lobsters. When a lobster is cooked, its shell's color changes from brown to orange because the heat from cooking breaks down a protein called crustacyanin , which suppresses the orange hue of the chemical astaxanthin , which is also found in the shell. [ 57 ]
According to the United States Food and Drug Administration (FDA), the mean level of mercury in American lobster between 2005 and 2007 was 0.107 ppm . [ 58 ] [ needs context ]
Humans are thought to have eaten lobster since early history. Large piles of lobster shells near areas populated by fishing communities attest to the crustacean's extreme popularity during this period [ which? ] . Evidence indicates that lobster was being consumed as a regular food product in fishing communities along the shores of Britain, [ 59 ] South Africa, [ 59 ] Australia, and Papua New Guinea years ago [ when? ] . Lobster became a significant source of nutrients among European coastal dwellers [ when? ] . Historians suggest lobster was an important secondary food source for most European coastal dwellers, and it was a primary food source for coastal communities in Britain during this time. [ 59 ] [ clarification needed ]
Lobster became a popular mid-range delicacy during the mid to late Roman period . The price of lobster could vary widely due to various factors, but evidence indicates that lobster was regularly transported inland over long distances to meet popular demand. A mosaic found in the ruins of Pompeii suggests that the spiny lobster was of considerable interest to the Roman population during the early imperial period. [ 60 ]
Lobster was a popular food among the Moche people of Peru between 50 CE and 800 CE. Besides its use as food, lobster shells were also used to create a light pink dye, ornaments, and tools. A mass-produced lobster-shaped effigy vessel dated to this period attests to lobster's popularity at this time, though the purpose of this vessel has not been identified. [ 61 ]
The Viking period saw an increase in lobster and other shellfish consumption among northern Europeans. This can be attributed to the overall increase in marine activity due to the development of better boats and the increasing cultural investment in building ships and training sailors. The consumption of marine life went up overall in this period, and the consumption of lobster went up in accordance with this general trend. [ 62 ]
Unlike fish, however, lobster had to be cooked within two days of leaving salt water, limiting the availability of lobster for inland dwellers. Thus lobster, more than fish, became a food primarily available to the relatively well-off, at least among non-coastal dwellers. [ 63 ]
Lobster is first mentioned in cookbooks during the medieval period. Le Viandier de Taillevent , a French recipe collection written around 1300, suggests that lobster (also called saltwater crayfish) be "Cooked in wine and water, or in the oven; eaten in vinegar." [ 64 ] Le Viandier de Taillevent is considered to be one of the first "haute cuisine" cookbooks, advising on how to cook meals that would have been quite elaborate for the period and making use of expensive and hard-to-obtain ingredients. Though the original edition, which includes the recipe for lobster, was published before the birth of French court cook Guillaume Tirel , Tirel later expanded and republished this recipe collection, suggesting that the recipes included in both editions were popular among the highest circles of French nobility, including King Philip VI. [ 65 ] The inclusion of a lobster recipe in this cookbook, especially one which does not make use of other more expensive ingredients, attests to the popularity of lobster among the wealthy.
The French household guidebook Le Ménagier de Paris , published in 1393, includes no fewer than five recipes involving lobster, which vary in elaboration. [ 66 ] A guidebook intended to provide advice for women running upper-class households, Le Ménagier de Paris is similar to its predecessor in that it indicates the popularity of lobster as a food among the upper classes. [ 67 ]
That lobster was first mentioned in cookbooks during the 1300s and only mentioned in two during this century should not be taken as an implication that lobster was not widely consumed before or during this time. Recipe collections were virtually non-existent before the 1300s, and only a handful exist from the medieval period.
During the early 1400s, lobster was still a popular dish among the upper classes. During this time, influential households used the variety and variation of species served at feasts to display wealth and prestige. Lobster was commonly found among these spreads, indicating that it continued to be held in high esteem among the wealthy. In one notable instance, the Bishop of Salisbury offered at least 42 kinds of crustaceans and fish at his feasts over nine months, including several varieties of lobster. However, lobster was not a food exclusively accessed by the wealthy. The general population living on the coasts made use of the various food sources provided by the ocean, and shellfish especially became a more popular source of nutrition. Among the general population, lobster was generally eaten boiled during the mid-15th century, but the influence of the cuisine of higher society can be seen in that it was now also regularly eaten cold with vinegar. The inland peasantry would still have generally been unfamiliar with lobster during this time. [ 68 ]
Lobster continued to be eaten as a delicacy and a general staple food among coastal communities until the late 17th century. During this time, the influence of the Church and the government regulating and sometimes banning meat consumption during certain periods continued to encourage the popularity of seafood, especially shellfish, as a meat alternative among all classes. Throughout this period, lobster was eaten fresh, pickled , and salted . From the late 17th century onward, developments in fishing, transportation, and cooking technology allowed lobster to more easily make its way inland, and the variety of dishes involving lobster and cooking techniques used with the ingredient expanded. [ 69 ] However, these developments coincided with a decrease in the lobster population, and lobster increasingly became a delicacy food, valued among the rich as a status symbol and less likely to be found in the diet of the general population. [ 70 ]
The American lobster was not originally popular among European colonists in North America. This was partially due to the European inlander's association of lobster with barely edible salted seafood and partially due to a cultural opinion that seafood was a lesser alternative to meat that did not provide the taste or nutrients desired. It was also due to the extreme abundance of lobster at the time of the colonists' arrival, which contributed to a general perception of lobster as an undesirable peasant food. [ 71 ] The American lobster did not achieve popularity until the mid-19th century when New Yorkers and Bostonians developed a taste for it, and commercial lobster fisheries only flourished after the development of the lobster smack , [ 72 ] a custom-made boat with open holding wells on the deck to keep the lobsters alive during transport. [ 73 ]
Before this time, lobster was considered a poverty food or as a food for indentured servants or lower members of society in Maine , Massachusetts , and the Canadian Maritimes . Some servants specified in employment agreements that they would not eat lobster more than twice per week; [ 74 ] however, there is limited evidence for this. [ 75 ] [ 76 ] Lobster was also commonly served in prisons, much to the displeasure of inmates. [ 77 ] American lobster was initially deemed worthy only of being used as fertilizer or fish bait, and until well into the 20th century, it was not viewed as more than a low-priced canned staple food. [ 78 ]
As a crustacean, lobster remains a taboo food in the dietary laws of Judaism and certain streams of Islam . [ note 1 ] [ 79 ]
Caught lobsters are graded as new-shell, hard-shell, or old-shell. Because lobsters that have recently shed their shells are the most delicate, an inverse relationship exists between the price of American lobster and its flavor. New-shell lobsters have paper-thin shells and a worse meat-to-shell ratio, but the meat is very sweet. However, the lobsters are so delicate that even transport to Boston almost kills them, making the market for new-shell lobsters strictly local to the fishing towns where they are offloaded. Hard-shell lobsters with firm shells but less sweet meat can survive shipping to Boston, New York, and even Los Angeles, so they command a higher price than new-shell lobsters. Meanwhile, old-shell lobsters, which have not shed since the previous season and have a coarser flavor, can be air-shipped anywhere in the world and arrive alive, making them the most expensive.
Several methods are used for killing lobsters. The most common way of killing lobsters is by placing them live in boiling water, sometimes after being placed in a freezer for a period. Boiling lobsters has been banned in several jurisdictions, including Switzerland, New Zealand, and parts of Italy. In Italy, offenders face fines of up to €495. [ 80 ] Boiling has been deemed to cause extreme suffering in lobsters, who continue to show intense brain activity for 30 to 150 seconds after immersion in boiling water. [ 81 ] Slowly raising the water temperature may also cause pain in crustaceans over a longer period of time. [ 81 ]
Another method is to split the lobster or sever the body in half lengthwise. To effectively kill the lobster quickly, the whole lobster must be split in two (not just its head, as is the practice in some restaurants). [ 81 ] Lobsters may also be killed or immobilized immediately before boiling by a stab into the brain ( pithing ), in the belief that this will stop suffering. However, a lobster's brain operates from not one but several ganglia , and disabling only the frontal ganglion does not usually result in death. [ 82 ] Lobsters can be killed by electrocution prior to cooking with a device called the CrustaStun . [ 83 ] Another method of rendering a lobster unconscious, chilling, has not been found to be effective. [ 81 ]
Since March 2018, lobsters in Switzerland need to be knocked out, or killed instantly, before they are boiled. They also receive other forms of protection while in transit. [ 84 ] [ 85 ] [ 86 ]
A 2021 London School of Economics report found strong evidence to suggest that lobsters can experience pain. [ 81 ] Dr Jonathan Birch, Principal Investigator on the project, said, "After reviewing over 300 scientific studies, we concluded that cephalopod molluscs and decapod crustaceans should be regarded as sentient, and should therefore be included within the scope of animal welfare law." [ 87 ]
Following the publication of the report, octopuses, crabs and lobsters are now protected under stronger animal welfare legislation in the UK (under the Animal Welfare (Sentience) Bill). [ 88 ]
Lobsters are caught using baited one-way traps with a color-coded marker buoy to mark cages. Lobster is fished in water between 2 and 900 metres (1 and 500 fathoms), although some lobsters live at 3,700 metres (2,000 fathoms). Cages are of plastic-coated galvanized steel or wood. A lobster fisher may tend to as many as 2,000 traps.
Around the year 2000, owing to overfishing and high demand, lobster aquaculture expanded. [ 89 ]
The fossil record of clawed lobsters extends back at least to the Valanginian age of the Cretaceous (140 million years ago). [ 90 ] This list contains all 54 extant species in the family Nephropidae : [ 91 ] | https://en.wikipedia.org/wiki/Longevity_of_lobsters |
The longfin snake-eel ( Pisodonophis cancrivorus ) is an eel in the family Ophichthidae (worm/snake eels). It was described by John Richardson in 1848. Its dorsal fin begins above the pectoral fin, and its snake -like body is cylindrical, compressed only at the extreme tip of the tail. It has a tubular front nostril and a rear nostril along the lower edge of the lip. Colors range from grey to black to brown. Large longfin snake-eels have wrinkled skin. [ 2 ]
Longfin snake-eels can survive in both marine and freshwater environments, and swim from the sea up rivers to spawn. Some are found on coral reefs, living from 1 to 20 meters below the surface. They are a tropical Indo-Pacific species, occurring from the Red Sea and East Africa to French Polynesia , the Ogasawara Islands and Australia . [ 2 ] [ 3 ] [ 4 ]
Members of this species are typically 50 cm long, but may grow as long as 108 cm including their tails. They are born as males, but some transition to become females at maturity. [ 5 ] [ 6 ]
Longfin snake-eels are often found in lagoons and estuaries, and sometimes enter freshwater . [ 7 ] Loose groups of the eels congregate in tidal channels with their heads peeking up from below the surface. [ 8 ] Anglers catch them with bag nets in estuaries and tidal areas. [ 9 ] | https://en.wikipedia.org/wiki/Longfin_snake-eel |
Longifolene is a common sesquiterpene . It is an oily liquid hydrocarbon found primarily in the high-boiling fraction of certain pine resins . The name is derived from that of a pine species from which the compound was isolated. [ 1 ] It is a tricyclic chiral molecule. The enantiomer commonly found in pines and other higher plants exhibits a positive optical rotation of +42.73°. The other enantiomer (optical rotation −42.73°) is found in small amounts in certain fungi and liverworts .
Turpentine obtained from Pinus longifolia (obsolete name for Pinus roxburghii Sarg.) contains as much as 20% of longifolene. [ 2 ]
Longifolene is also one of two most abundant aroma constituents of lapsang souchong tea, because the tea is smoked over pinewood fires. [ 3 ]
The biosynthesis of longifolene begins with farnesyl diphosphate ( 1 ) (also called farnesyl pyrophosphate ) by means of a cationic polycyclization cascade. Loss of the pyrophosphate group and cyclization by the distal alkene gives intermediate 3 , which by means of a 1,3-hydride shift gives intermediate 4 . After two additional cyclizations, intermediate 6 produces longifolene by a 1,2-alkyl migration .
The laboratory characterization and synthesis of longifolene has long attracted attention. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
It reacts with borane to give the derivative dilongifolylborane , which is a chiral hydroborating agent. [ 12 ] | https://en.wikipedia.org/wiki/Longifolene |
The longitude of the ascending node , also known as the right ascension of the ascending node , is one of the orbital elements used to specify the orbit of an object in space. Denoted with the symbol Ω , it is the angle from a specified reference direction, called the origin of longitude , to the direction of the ascending node (☊), as measured in a specified reference plane . [ 1 ] The ascending node is the point where the orbit of the object passes through the plane of reference, as seen in the adjacent image.
Commonly used reference planes and origins of longitude include:
In the case of a binary star known only from visual observations, it is not possible to tell which node is ascending and which is descending. In this case the orbital parameter which is recorded is simply labeled longitude of the node , ☊, and represents the longitude of whichever node has a longitude between 0 and 180 degrees. [ 5 ] , chap. 17; [ 4 ] , p. 72.
In astrodynamics , the longitude of the ascending node can be calculated from the specific relative angular momentum vector h as follows: the node vector is n = k × h , and Ω = arccos( n_x / | n |), taking Ω = 2π − arccos( n_x / | n |) when n_y < 0.
Here, n = ⟨ n x , n y , n z ⟩ is a vector pointing towards the ascending node . The reference plane is assumed to be the xy -plane, and the origin of longitude is taken to be the positive x -axis. k is the unit vector (0, 0, 1), which is the normal vector to the xy reference plane.
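As a concrete illustration, the following C sketch (illustrative only; the function name and the choice of returning radians are assumptions, not from the cited sources) forms n = k × h and resolves the quadrant of Ω from the sign of n_y, using the convention for non-inclined orbits described below:

#include <math.h>

/* Longitude of the ascending node, in radians in [0, 2*pi), from the
 * specific relative angular momentum vector h = (hx, hy, hz).
 * Assumes the xy-plane as the reference plane and the positive x-axis
 * as the origin of longitude. Illustrative sketch only. */
double ascending_node_longitude(double hx, double hy, double hz)
{
    (void)hz;                      /* hz does not affect the node direction */

    /* n = k x h with k = (0, 0, 1): n = (-hy, hx, 0) */
    double nx = -hy;
    double ny = hx;
    double n_mag = sqrt(nx * nx + ny * ny);

    if (n_mag == 0.0)              /* non-inclined orbit: node undefined;   */
        return 0.0;                /* by convention set to zero (see below) */

    double omega = acos(nx / n_mag);
    if (ny < 0.0)                  /* ascending node below the x-axis */
        omega = 2.0 * M_PI - omega;
    return omega;
}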
For non-inclined orbits (with inclination equal to zero), ☊ is undefined. For computation it is then, by convention, set equal to zero; that is, the ascending node is placed in the reference direction, which is equivalent to letting n point towards the positive x -axis. | https://en.wikipedia.org/wiki/Longitude_of_the_ascending_node |
In flight dynamics , longitudinal stability is the stability of an aircraft in the longitudinal, or pitching , plane. This characteristic is important in determining whether an aircraft pilot will be able to control the aircraft in the pitching plane without requiring excessive attention or excessive strength. [ 1 ]
The longitudinal stability of an aircraft, also called pitch stability, [ 2 ] refers to the aircraft's stability in its plane of symmetry [ 2 ] about the lateral axis (the axis along the wingspan ). [ 1 ] It is an important aspect of the handling qualities of the aircraft, and one of the main factors determining the ease with which the pilot is able to maintain level flight. [ 2 ]
Longitudinal static stability refers to the aircraft's initial tendency on pitching. Dynamic stability refers to whether oscillations tend to increase, decrease or stay constant. [ 3 ]
If an aircraft is longitudinally statically stable, a small increase in angle of attack will create a nose-down pitching moment on the aircraft, so that the angle of attack decreases. Similarly, a small decrease in angle of attack will create a nose-up pitching moment so that the angle of attack increases. [ 1 ] This means the aircraft will self-correct longitudinal (pitch) disturbances without pilot input.
If an aircraft is longitudinally statically unstable, a small increase in angle of attack will create a nose-up pitching moment on the aircraft, promoting a further increase in the angle of attack.
If the aircraft has zero longitudinal static stability it is said to be statically neutral, and the position of its center of gravity is called the neutral point . [ 4 ] : 27
The longitudinal static stability of an aircraft depends on the location of its center of gravity relative to the neutral point. As the center of gravity moves increasingly forward, the pitching moment arm is increased, increasing stability. [ 5 ] [ 4 ] The distance between the center of gravity and the neutral point is defined as "static margin". It is usually given as a percentage of the mean aerodynamic chord . [ 6 ] : 92 If the center of gravity is forward of the neutral point, the static margin is positive. [ 7 ] : 8 If the center of gravity is aft of the neutral point, the static margin is negative. The greater the static margin, the more stable the aircraft will be.
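As an illustrative worked example (the numbers are hypothetical, not taken from any aircraft's documentation): if the neutral point lies at 35% of the mean aerodynamic chord (MAC) and the center of gravity at 25% MAC, the static margin is (0.35 − 0.25) × 100% = 10% of MAC. The margin is positive, the center of gravity is ahead of the neutral point, and the aircraft is statically stable; moving the center of gravity aft of 35% MAC would make the margin negative and the aircraft unstable.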
Most conventional aircraft have positive longitudinal stability, providing the aircraft's center of gravity lies within the approved range. The operating handbook for every airplane specifies a range over which the center of gravity is permitted to move. [ 8 ] If the center of gravity is too far aft, the aircraft will be unstable. If it is too far forward, the aircraft will be excessively stable, which makes the aircraft "stiff" in pitch and hard for the pilot to bring the nose up for landing. Required control forces will be greater.
Some aircraft have low stability to reduce trim drag . This has the benefit of reducing fuel consumption. [ 5 ] Some aerobatic and fighter aircraft may have low or even negative stability to provide high manoeuvrability. Low or negative stability is called relaxed stability . [ 9 ] [ 10 ] [ 5 ] An aircraft with low or negative static stability will typically have fly-by-wire controls with computer augmentation to assist the pilot. [ 5 ] Otherwise, an aircraft with negative longitudinal stability will be more difficult to fly. It will be necessary for the pilot to devote more effort, make more frequent inputs to the elevator control, and make larger inputs, in an attempt to maintain the desired pitch attitude. [ 1 ]
For an aircraft to possess positive static stability, it is not necessary for its level to return to exactly what it was before the upset. It is sufficient that the speed and orientation do not continue to diverge but undergo at least a small change back towards the original speed and orientation. [ 11 ] : 477 [ 7 ] : 3
The deployment of flaps will increase longitudinal stability. [ 12 ]
Unlike motion about the other two axes, and in the other degrees of freedom of the aircraft (sideslip translation, rotation in roll, rotation in yaw), which are usually heavily coupled, motion in the longitudinal plane does not typically cause a roll or yaw. [ 2 ] [ 7 ] : 2
A larger horizontal stabilizer, and a greater moment arm of the horizontal stabilizer about the neutral point, will increase longitudinal stability. [ citation needed ]
For a tailless aircraft , the neutral point coincides with the aerodynamic center , and so for such aircraft to have longitudinal static stability, the center of gravity must lie ahead of the aerodynamic center. [ 13 ]
For missiles with symmetric airfoils, the neutral point and the center of pressure are coincident and the term neutral point is not used. [ citation needed ]
An unguided rocket must have a large positive static margin so the rocket shows minimum tendency to diverge from the direction of flight given to it at launch. In contrast, guided missiles usually have a negative static margin for increased maneuverability. [ citation needed ]
Longitudinal dynamic stability of a statically stable aircraft refers to whether the aircraft will continue to oscillate after a disturbance, or whether the oscillations are damped . A dynamically stable aircraft will experience oscillations reducing to nil. A dynamically neutral aircraft will continue to oscillate around its original level, and dynamically unstable aircraft will experience increasing oscillations and displacement from its original level. [ 3 ]
Dynamic stability is caused by damping. If damping is too great, the aircraft will be less responsive and less manoeuvrable. [ 3 ] [ 11 ] : 588
Decreasing phugoid (long-period) oscillations can be achieved by building a smaller stabilizer on a longer tail, and by shifting the center of gravity to the rear. [ citation needed ]
An aircraft that is not statically stable cannot be dynamically stable. [ 7 ] : 3
Near the cruise condition most of the lift force is generated by the wings, with ideally only a small amount generated by the fuselage and tail. We may analyse the longitudinal static stability by considering the aircraft in equilibrium under wing lift, tail force, and weight. The moment equilibrium condition is called trim , and we are generally interested in the longitudinal stability of the aircraft about this trim condition.
Equating forces in the vertical direction gives W = L_w + L_t , where W is the weight, L_w is the wing lift and L_t is the tail force.
For a thin airfoil at low angle of attack , the wing lift is proportional to the angle of attack:
where S_w is the wing area, C_L is the (wing) lift coefficient , and α is the angle of attack. The term α_0 is included to account for camber , which results in lift at zero angle of attack. Finally, q is the dynamic pressure , q = ½ ρ v² , where ρ is the air density and v is the speed. [ 8 ]
The force from the tail-plane is proportional to its angle of attack, including the effects of any elevator deflection and any adjustment the pilot has made to trim-out any stick force. In addition, the tail is located in the flow field of the main wing, and consequently experiences downwash , reducing its angle of attack.
In a statically stable aircraft of conventional (tail in rear) configuration, the tail-plane force may act upward or downward depending on the design and the flight conditions. [ 14 ] In a typical canard aircraft both fore and aft planes are lifting surfaces. The fundamental requirement for static stability is that the aft surface must have greater authority (leverage) in restoring a disturbance than the forward surface has in exacerbating it. This leverage is a product of moment arm from the center of gravity and surface area . Correctly balanced in this way, the partial derivative of pitching moment with respect to changes in angle of attack will be negative: a momentary pitch up to a larger angle of attack makes the resultant pitching moment tend to pitch the aircraft back down. (Here, pitch is used casually for the angle between the nose and the direction of the airflow; angle of attack.) This is the "stability derivative" d(M)/d(alpha), described below.
The tail force is, therefore:
where S_t is the tail area, C_l is the tail force coefficient, η is the elevator deflection, and ε is the downwash angle.
A canard aircraft may have its foreplane rigged at a high angle of incidence, which can be seen in a canard catapult glider from a toy store; the design puts the c.g. well forward, requiring nose-up lift.
Violations of the basic principle are exploited in some high performance "relaxed static stability" combat aircraft to enhance agility; artificial stability is supplied by active electronic means.
There are a few classical cases where this favorable response was not achieved, notably in T-tail configurations. A T-tail airplane has a higher horizontal tail that passes through the wake of the wing later (at a higher angle of attack) than a lower tail would, and at this point the wing has already stalled and has a much larger separated wake. Inside the separated wake, the tail sees little to no freestream and loses effectiveness. Elevator control power is also heavily reduced or even lost, and the pilot is unable to easily escape the stall. This phenomenon is known as ' deep stall '.
Taking moments about the center of gravity, the net nose-up moment is:
where x_g is the location of the center of gravity behind the aerodynamic center of the main wing, and l_t is the tail moment arm.
For trim, this moment must be zero. For a given maximum elevator deflection, there is a corresponding limit on center of gravity position at which the aircraft can be kept in equilibrium. When limited by control deflection this is known as a 'trim limit'. In principle trim limits could determine the permissible forwards and rearwards shift of the center of gravity, but usually it is only the forward cg limit which is determined by the available control, the aft limit is usually dictated by stability.
In a missile context 'trim limit' more usually refers to the maximum angle of attack, and hence lateral acceleration which can be generated.
The nature of stability may be examined by considering the increment in pitching moment with change in angle of attack at the trim condition. If this is nose up, the aircraft is longitudinally unstable; if nose down it is stable. Differentiating the moment equation with respect to α :
Note: ∂M/∂α is a stability derivative .
It is convenient to treat total lift as acting at a distance h ahead of the centre of gravity, so that the moment equation may be written:
Applying the increment in angle of attack:
Equating the two expressions for moment increment:
The total lift L is the sum of L_w and L_t , so the sum in the denominator can be simplified and written as the derivative of the total lift due to angle of attack, yielding:
Where c is the mean aerodynamic chord of the main wing. The term:
is known as the tail volume ratio. Its coefficient, the ratio of the two lift derivatives, has values in the range of 0.50 to 0.65 for typical configurations. [ 15 ] [ page needed ] Hence the expression for h may be written more compactly, though somewhat approximately, as:
h is known as the static margin. For stability it must be negative. (However, for consistency of language, the static margin is sometimes taken as − h , so that positive stability is associated with positive static margin.) [ 7 ] : 8 | https://en.wikipedia.org/wiki/Longitudinal_stability |
Lonigutamab is an investigational monoclonal antibody biosimilar designed to target the insulin-like growth factor 1 receptor (IGF-1R). It is being evaluated for its potential in treating thyroid eye disease (TED) and certain cancers, such as breast cancer, through clinical trials. Lonigutamab is expressed in Chinese hamster ovary (CHO) cells and is currently limited to laboratory research applications, not approved for human therapeutic or diagnostic use. [ 1 ] [ 2 ] [ 3 ]
Lonigutamab is under investigation for its therapeutic potential in two primary areas: thyroid eye disease (TED) and breast cancer. In TED, it is being studied for its ability to mitigate symptoms such as proptosis and inflammation, while in breast cancer, it is evaluated as part of an antibody-drug conjugate (ADC) to induce tumor regression in IGF-1R-overexpressing tumors.
A Phase 1/2, multicenter, multiple-dose clinical study is evaluating Lonigutamab’s efficacy and safety in subjects with TED, a condition characterized by proptosis (eye bulging) and inflammation. [ 4 ] The study includes patients aged 18 to 75 with a Clinical Activity Score (CAS) ≥4 and proptosis ≥3 mm above normal in the most severely affected eye. Key exclusion criteria include inflammatory bowel disease, hearing impairment, and corneal decompensation unresponsive to medical management. The primary outcome measure is the incidence and characterization of nonserious treatment-emergent adverse events (TEAEs) from Day 1 to Day 169. [ 2 ]
Lonigutamab ugodotin (W0101), an ADC targeting IGF-1R and delivering monomethyl auristatin E (MMAE) as a payload, has shown promise in preclinical models of breast cancer with IGF-1R overexpression. A 2020 study demonstrated its ability to induce tumor regression without affecting normal cells. [ 5 ] A Phase I clinical trial (NCT03316638) assessed its safety profile in advanced or metastatic tumors, including breast cancer. The study was terminated in 2024. [ 3 ]
Lonigutamab targets IGF-1R, a receptor implicated in cell growth and survival, which is overexpressed in certain cancers and inflammatory conditions like TED. As a monoclonal antibody, it binds to IGF-1R, potentially inhibiting signaling pathways that promote tumor growth or inflammation. In its ADC form (lonigutamab ugodotin), it delivers MMAE, a cytotoxic agent , to selectively kill cancer cells.
Lonigutamab is being developed by Acelyrin for TED applications. Its use in breast cancer is explored through collaborative research, with no regulatory approval granted as of May 2025. The drug remains a research tool, with Abbexa Ltd. explicitly stating it is not suitable for human therapeutic or diagnostic applications. Ongoing clinical trials (e.g., NCT05683496 for TED, NCT03316638 for cancer) are in early phases, and further data are needed to establish efficacy and safety. [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Lonigutamab |
The Look-Aside Interface is a computer interface that was specified by an interface interoperability agreement produced by the Network Processing Forum . It specifies the method to interface a Network Processing Element (of which an NPU is an example) to a Network Search Element (of which a CAM is an example). The interface is used by devices that off-load certain tasks from the network processor.
Numerous devices which implement the LA interface have been produced. Companies which have implemented these devices include Integrated Device Technology and Cypress Semiconductor .
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Look-Aside_Interface |
In mathematics , the look-and-say sequence is the sequence of integers beginning as follows: 1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, ... (sequence A005150 in the OEIS ).
To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example: 1 is read off as "one 1" or 11; 11 is read off as "two 1s" or 21; 21 is read off as "one 2, one 1" or 1211; 1211 is read off as "one 1, one 2, two 1s" or 111221.
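A minimal C sketch of this step (illustrative only; the function and buffer names are not from the cited sources) produces the next term by run-length encoding the digit string of the previous term:

#include <stdio.h>
#include <string.h>

/* Write the look-and-say successor of the digit string `prev` into `next`.
 * `next` must hold at least twice the length of `prev`, plus the NUL. */
static void look_and_say(const char *prev, char *next)
{
    size_t i = 0, out = 0, n = strlen(prev);
    while (i < n) {
        size_t run = 1;
        while (i + run < n && prev[i + run] == prev[i])
            run++;                                /* length of the digit run */
        out += sprintf(next + out, "%zu%c", run, prev[i]);
        i += run;
    }
    next[out] = '\0';
}

int main(void)
{
    char a[4096] = "1", b[4096];
    for (int i = 0; i < 8; i++) {                 /* print the first eight terms */
        printf("%s\n", a);
        look_and_say(a, b);
        strcpy(a, b);
    }
    return 0;
}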
The look-and-say sequence was analyzed by John Conway [ 1 ] after he was introduced to it by one of his students at a party. [ 2 ] [ 3 ]
The idea of the look-and-say sequence is similar to that of run-length encoding .
If started with any digit d from 0 to 9 then d will remain indefinitely as the last digit of the sequence. For any d other than 1, the sequence starts as follows: d , 1 d , 111 d , 311 d , 13211 d , 111312211 d , ...
Ilan Vardi has called this sequence, starting with d = 3, the Conway sequence (sequence A006715 in the OEIS ). (for d = 2, see OEIS : A006751 ) [ 4 ]
The sequence grows indefinitely. In fact, any variant defined by starting with a different integer seed number will (eventually) also grow indefinitely, except for the degenerate sequence: 22, 22, 22, 22, ... which remains the same size. [ 5 ]
No digits other than 1, 2, and 3 appear in the sequence, unless the seed number contains such a digit or a run of more than three of the same digit. [ 5 ]
Conway's cosmological theorem asserts that every sequence eventually splits ("decays") into a sequence of "atomic elements", which are finite subsequences that never again interact with their neighbors. There are 92 elements containing the digits 1, 2, and 3 only, which John Conway named after the 92 naturally-occurring chemical elements up to uranium , calling the sequence audioactive . There are also two " transuranic " elements (Np and Pu) for each digit other than 1, 2, and 3. [ 5 ] [ 6 ] Below is a table of all such elements:
The terms eventually grow in length by about 30% per generation. In particular, if L_n denotes the number of digits of the n -th member of the sequence, then the limit of the ratio L_{n+1}/L_n exists and is given by lim_{n→∞} L_{n+1}/L_n = λ,
where λ = 1.303577269034... (sequence A014715 in the OEIS ) is an algebraic number of degree 71. [ 5 ] This fact was proven by Conway, and the constant λ is known as Conway's constant . The same result also holds for every variant of the sequence starting with any seed other than 22.
Conway's constant is the unique positive real root of the following polynomial (sequence A137275 in the OEIS ): x^71 − x^69 − 2x^68 − x^67 + 2x^66 + 2x^65 + x^64 − x^63 − x^62 − x^61 − x^60 − x^59 + 2x^58 + 5x^57 + 3x^56 − 2x^55 − 10x^54 − 3x^53 − 2x^52 + 6x^51 + 6x^50 + x^49 + 9x^48 − 3x^47 − 7x^46 − 8x^45 − 8x^44 + 10x^43 + 6x^42 + 8x^41 − 5x^40 − 12x^39 + 7x^38 − 7x^37 + 7x^36 + x^35 − 3x^34 + 10x^33 + x^32 − 6x^31 − 2x^30 − 10x^29 − 3x^28 + 2x^27 + 9x^26 − 3x^25 + 14x^24 − 8x^23 − 7x^21 + 9x^20 + 3x^19 − 4x^18 − 10x^17 − 7x^16 + 12x^15 + 7x^14 + 2x^13 − 12x^12 − 4x^11 − 2x^10 + 5x^9 + x^7 − 7x^6 + 7x^5 − 4x^4 + 12x^3 − 6x^2 + 3x − 6
This polynomial was correctly given in Conway's original Eureka article, [ 1 ] but in the reprinted version in the book edited by Cover and Gopinath [ 1 ] the term x 35 {\displaystyle x^{35}} was incorrectly printed with a minus sign in front. [ 7 ]
The look-and-say sequence is also popularly known as the Morris Number Sequence , after cryptographer Robert Morris , and the puzzle "What is the next number in the sequence 1, 11, 21, 1211, 111221?" is sometimes referred to as the Cuckoo's Egg , from a description of Morris in Clifford Stoll 's book The Cuckoo's Egg . [ 8 ] [ 9 ]
There are many possible variations on the rule used to generate the look-and-say sequence. For example, to form the "pea pattern" one reads the previous term and counts all instances of each digit, listed in order of their first appearance, not just those occurring in a consecutive block. [ 10 ] [ 11 ] [ verification needed ] So beginning with the seed 1, the pea pattern proceeds 1, 11 ("one 1"), 21 ("two 1s"), 1211 ("one 2 and one 1"), 3112 ("three 1s and one 2"), 132112 ("one 3, two 1s and one 2"), 311322 ("three 1s, one 3 and two 2s"), etc. This version of the pea pattern eventually forms a cycle with the two "atomic" terms 23322114 and 32232114. Since the sequence is infinite, the length of each element in the sequence is bounded, and there are only finitely many words that are at most a predetermined length, it must eventually repeat, and as a consequence, pea pattern sequences are always eventually periodic . [ 10 ]
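A C sketch of one pea-pattern step in the same spirit (illustrative, not from the cited sources) counts every occurrence of each digit and reports the digits in order of first appearance; starting from 1, repeated application gives 1, 11, 21, 1211, 3112, 132112, ... as described above.

#include <stdio.h>

/* One step of the "pea pattern": for each digit in `prev`, count all of its
 * occurrences, and report digits in order of their first appearance. */
static void pea_step(const char *prev, char *next)
{
    int count[10] = {0};
    char order[10];
    int norder = 0, out = 0;

    for (const char *p = prev; *p; p++) {
        int d = *p - '0';
        if (count[d]++ == 0)
            order[norder++] = *p;          /* remember first-appearance order */
    }
    for (int i = 0; i < norder; i++)
        out += sprintf(next + out, "%d%c", count[order[i] - '0'], order[i]);
    next[out] = '\0';
}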
Other versions of the pea pattern are also possible; for example, instead of reading the digits as they first appear, one could read them in ascending order instead (sequence A005151 in the OEIS ). In this case, the term following 21 would be 1112 ("one 1, one 2") and the term following 3112 would be 211213 ("two 1s, one 2 and one 3"). This variation ultimately ends up repeating the number 21322314 ("two 1s, three 2s, two 3s and one 4").
These sequences differ in several notable ways from the look-and-say sequence. Notably, unlike the Conway sequences, a given term of the pea pattern does not uniquely define the preceding term. Moreover, for any seed the pea pattern produces terms of bounded length: This bound will not typically exceed 2 × Radix + 2 digits (22 digits for decimal : radix = 10 ) and may only exceed 3 × Radix digits (30 digits for decimal radix) in length for long, degenerate, initial seeds (sequence of "100 ones", etc.). For these extreme cases, individual elements of decimal sequences immediately settle into a permutation of the form a 0 b 1 c 2 d 3 e 4 f 5 g 6 h 7 i 8 j 9 where here the letters a – j are placeholders for digit counts from the preceding sequence element. | https://en.wikipedia.org/wiki/Look-and-say_sequence |
A lookout , [ 1 ] lookout rafter or roof outlooker [ 2 ] is a wooden joist that extends in cantilever out from the exterior wall (or wall plate) of a building, supporting the roof sheathing and providing a nailing surface for the fascia boards. When not exposed it serves to fasten the finish materials of the eaves . | https://en.wikipedia.org/wiki/Lookout_(architecture) |
In computer science , a lookup table ( LUT ) is an array that replaces runtime computation of a mathematical function with a simpler array indexing operation, in a process termed as direct addressing . The savings in processing time can be significant, because retrieving a value from memory is often faster than carrying out an "expensive" computation or input/output operation. [ 1 ] The tables may be precalculated and stored in static program storage, calculated (or "pre-fetched" ) as part of a program's initialization phase ( memoization ), or even stored in hardware in application-specific platforms. Lookup tables are also used extensively to validate input values by matching against a list of valid (or invalid) items in an array and, in some programming languages, may include pointer functions (or offsets to labels) to process the matching input. FPGAs also make extensive use of reconfigurable, hardware-implemented, lookup tables to provide programmable hardware functionality.
LUTs differ from hash tables in that, to retrieve a value v with key k , a hash table stores the value v in slot h ( k ), where h is a hash function, i.e. the key k is used to compute the slot; in a LUT, the value v is stored in slot k itself, making it directly addressable. [ 2 ] : 466
Before the advent of computers, lookup tables of values were used to speed up hand calculations of complex functions, such as in trigonometry , logarithms , and statistical density functions. [ 3 ]
In ancient (499 AD) India, Aryabhata created one of the first sine tables , which he encoded in a Sanskrit-letter-based number system. In 493 AD, Victorius of Aquitaine wrote a 98-column multiplication table which gave (in Roman numerals ) the product of every number from 2 to 50 times and the rows were "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144" [ 4 ] Modern school children are often taught to memorize " times tables " to avoid calculations of the most commonly used numbers (up to 9 x 9 or 12 x 12).
Early in the history of computers, input/output operations were particularly slow – even in comparison to processor speeds of the time. It made sense to reduce expensive read operations by a form of manual caching by creating either static lookup tables (embedded in the program) or dynamic prefetched arrays to contain only the most commonly occurring data items. Despite the introduction of systemwide caching that now automates this process, application level lookup tables can still improve performance for data items that rarely, if ever, change.
Lookup tables were one of the earliest functionalities implemented in computer spreadsheets , with the initial version of VisiCalc (1979) including a LOOKUP function among its original 20 functions. [ 5 ] This has been followed by subsequent spreadsheets, such as Microsoft Excel , and complemented by specialized VLOOKUP and HLOOKUP functions to simplify lookup in a vertical or horizontal table. In Microsoft Excel the XLOOKUP function has been rolled out starting 28 August 2019.
Although a LUT guarantees O(1) performance for a lookup operation, no two entries can share the same key k . When the universe U from which the keys are drawn is large, it may be impractical or impossible to store the table in memory. In that case, a hash table is the preferable alternative. [ 2 ] : 468
For a trivial hash function lookup, the unsigned raw data value is used directly as an index to a one-dimensional table to extract a result. For small ranges, this can be amongst the fastest lookups, even exceeding binary search speed with zero branches and executing in constant time . [ 6 ]
One discrete problem that is expensive to solve on many computers is that of counting the number of bits that are set to 1 in a (binary) number, sometimes called the population function . For example, the decimal number "37" is "00100101" in binary, so it contains three bits that are set to binary "1". [ 7 ] : 282
A simple example of C code, designed to count the 1 bits in an int , might look like this: [ 7 ] : 283
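The cited code is not reproduced in this text; a sketch along the same lines (checking one bit per iteration of the loop) would be:

/* Count the 1 bits in n by testing one bit per loop iteration.
 * Assumes a 32-bit unsigned int. Sketch only. */
int count_ones_naive(unsigned int n)
{
    int count = 0;
    for (int i = 0; i < 32; i++) {
        count += n & 1u;       /* add the lowest bit */
        n >>= 1;               /* move the next bit into position */
    }
    return count;
}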
The above implementation requires 32 operations for an evaluation of a 32-bit value, which can potentially take several clock cycles due to branching . It can be " unrolled " into a lookup table which in turn uses trivial hash function for better performance. [ 7 ] : 282-283
The bits_set array, with 256 entries, is constructed by giving the number of one bits set in each possible byte value (e.g. 0x00 = 0, 0x01 = 1, 0x02 = 1, and so on). Although a runtime algorithm can be used to generate the bits_set array, this is an inefficient use of clock cycles given the table's small size, so a precomputed table is normally used—although a compile-time script could be used to dynamically generate and append the table to the source file . The sum of ones in each byte of the integer can then be calculated through a trivial hash function lookup on each byte, effectively avoiding branches and giving a considerable improvement in performance. [ 7 ] : 284
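A sketch of the table-based version follows; for brevity it fills bits_set at startup instead of spelling out the 256 precomputed constants described above, while the lookup itself remains branch-free:

#include <stdio.h>

static unsigned char bits_set[256];

/* Build the table once: the bit count of i equals the bit count of i/2
 * plus the lowest bit of i. (A real program might precompute this table.) */
static void init_bits_set(void)
{
    for (int i = 1; i < 256; i++)
        bits_set[i] = bits_set[i / 2] + (i & 1);
}

/* Sum the table lookups for each of the four bytes of a 32-bit value. */
static int count_ones_lut(unsigned int n)
{
    return bits_set[ n        & 0xffu] +
           bits_set[(n >>  8) & 0xffu] +
           bits_set[(n >> 16) & 0xffu] +
           bits_set[(n >> 24) & 0xffu];
}

int main(void)
{
    init_bits_set();
    printf("%d\n", count_ones_lut(37u));   /* decimal 37 = 00100101, prints 3 */
    return 0;
}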
"Lookup tables (LUTs) are an excellent technique for optimizing the evaluation of functions that are expensive to compute and inexpensive to cache. ... For data requests that fall between the table's samples, an interpolation algorithm can generate reasonable approximations by averaging nearby samples." [ 8 ]
In data analysis applications, such as image processing , a lookup table (LUT) can be used to transform the input data into a more desirable output format. For example, a grayscale picture of the planet Saturn could be transformed into a color image to emphasize the differences in its rings.
In image processing, lookup tables are often called LUT s (or 3DLUT), and give an output value for each of a range of index values. One common LUT, called the colormap or palette , is used to determine the colors and intensity values with which a particular image will be displayed. In computed tomography , "windowing" refers to a related concept for determining how to display the intensity of measured radiation.
A classic example of reducing run-time computations using lookup tables is to obtain the result of a trigonometry calculation, such as the sine of a value. [ 9 ] Calculating trigonometric functions can substantially slow a computing application. The same application can finish much sooner when it first precalculates the sine of a number of values, for example for each whole number of degrees (The table can be defined as static variables at compile time, reducing repeated run time costs).
When the program requires the sine of a value, it can use the lookup table to retrieve the closest sine value from a memory address, and may also interpolate to the sine of the desired value, instead of calculating by mathematical formula. Lookup tables can thus be used by mathematics coprocessors in computer systems. An error in a lookup table was responsible for Intel's infamous floating-point divide bug .
Functions of a single variable (such as sine and cosine) may be implemented by a simple array. Functions involving two or more variables require multidimensional array indexing techniques. The latter case may thus employ a two-dimensional array of power[x][y] to replace a function to calculate x y for a limited range of x and y values. Functions that have more than one result may be implemented with lookup tables that are arrays of structures.
As mentioned, there are intermediate solutions that use tables in combination with a small amount of computation, often using interpolation . Pre-calculation combined with interpolation can produce higher accuracy for values that fall between two precomputed values. This technique requires slightly more time to be performed but can greatly enhance accuracy in applications that require it. Depending on the values being precomputed, precomputation with interpolation can also be used to shrink the lookup table size while maintaining accuracy.
While often effective, employing a lookup table may nevertheless result in a severe penalty if the computation that the LUT replaces is relatively simple. Memory retrieval time and the complexity of memory requirements can increase application operation time and system complexity relative to what would be required by straight formula computation. The possibility of polluting the cache may also become a problem. Table accesses for large tables will almost certainly cause a cache miss . This phenomenon is increasingly becoming an issue as processors outpace memory. A similar issue appears in rematerialization , a compiler optimization . In some environments, such as the Java programming language , table lookups can be even more expensive due to mandatory bounds-checking involving an additional comparison and branch for each lookup.
There are two fundamental limitations on when it is possible to construct a lookup table for a required operation. One is the amount of memory that is available: one cannot construct a lookup table larger than the space available for the table, although it is possible to construct disk-based lookup tables at the expense of lookup time. The other is the time required to compute the table values in the first instance; although this usually needs to be done only once, if it takes a prohibitively long time, it may make the use of a lookup table an inappropriate solution. As previously stated however, tables can be statically defined in many cases.
Most computers only perform basic arithmetic operations and cannot directly calculate the sine of a given value. Instead, they use the CORDIC algorithm or a complex formula such as the following Taylor series to compute the value of sine to a high degree of precision: [ 10 ] : 5
However, this can be expensive to compute, especially on slow processors, and there are many applications, particularly in traditional computer graphics , that need to compute many thousands of sine values every second. A common solution is to initially compute the sine of many evenly distributed values, and then to find the sine of x we choose the sine of the value closest to x through array indexing operation. This will be close to the correct value because sine is a continuous function with a bounded rate of change. [ 10 ] : 6 For example: [ 11 ] : 545–548
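The cited example is not reproduced in this text; a C sketch in the same spirit precomputes roughly two thousand evenly spaced samples over [−π, π] (consistent with the "over 16,000 bytes" of double-precision storage mentioned below) and answers queries with the nearest sample:

#include <math.h>

#define N_SAMPLES 1000                     /* 2*N_SAMPLES + 1 table entries */
static double sine_table[2 * N_SAMPLES + 1];

/* Precompute sin(x) at evenly spaced points covering [-pi, pi]. */
static void init_sine_table(void)
{
    for (int i = -N_SAMPLES; i <= N_SAMPLES; i++)
        sine_table[i + N_SAMPLES] = sin(M_PI * i / N_SAMPLES);
}

/* Nearest-sample lookup; x is assumed to lie in [-pi, pi]. */
static double lookup_sine(double x)
{
    int i = (int)lround(x / M_PI * N_SAMPLES);
    return sine_table[i + N_SAMPLES];
}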
Unfortunately, the table requires quite a bit of space: if IEEE double-precision floating-point numbers are used, over 16,000 bytes would be required. We can use fewer samples, but then our precision will significantly worsen. One good solution is linear interpolation , which draws a line between the two points in the table on either side of the value and locates the answer on that line. This is still quick to compute, and much more accurate for smooth functions such as the sine function. Here is an example using linear interpolation:
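In place of the cited listing, here is a hedged sketch of linear interpolation between adjacent entries, reusing the illustrative sine_table, TABLE_SIZE, and TWO_PI from the sketch above:

/* Return the sine of x (radians), linearly interpolated between table entries. */
double lerp_sin(double x)
{
    double turns = x / TWO_PI;
    turns -= floor(turns);                         /* reduce to [0, 1) of a turn */
    double pos = turns * TABLE_SIZE;               /* fractional table position */
    int i = (int)pos;
    double frac = pos - i;                         /* how far past entry i we are */
    double y0 = sine_table[i % TABLE_SIZE];
    double y1 = sine_table[(i + 1) % TABLE_SIZE];  /* next entry, wrapping around */
    return y0 + frac * (y1 - y0);                  /* point on the line joining them */
}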
Linear interpolation provides for an interpolated function that is continuous, but will not, in general, have continuous derivatives . For smoother interpolation of table lookup that is continuous and has continuous first derivative , one should use the cubic Hermite spline .
When using interpolation, the size of the lookup table can be reduced by using nonuniform sampling , which means that where the function is close to straight, we use few sample points, while where it changes value quickly we use more sample points to keep the approximation close to the real curve. For more information, see interpolation .
Storage caches (including disk caches for files, or processor caches for either code or data) also work like a lookup table. The table is built with very fast memory instead of being stored on slower external memory, and maintains two pieces of data for a sub-range of bits composing an external memory (or disk) address (notably the lowest bits of any possible external address):
A single (fast) lookup is performed to read the tag in the lookup table at the index specified by the lowest bits of the desired external storage address, and to determine if the memory address is hit by the cache. When a hit is found, no access to external memory is needed (except for write operations, where the cached value may need to be updated asynchronously to the slower memory after some time, or if the position in the cache must be replaced to cache another address).
In digital logic , a lookup table can be implemented with a multiplexer whose select lines are driven by the address signal and whose inputs are the values of the elements contained in the array. These values can either be hard-wired, as in an ASIC whose purpose is specific to a function, or provided by configurable storage such as D latches , ROM , EPROM , EEPROM , or RAM .
An n -bit LUT can encode any n -input Boolean function by storing the truth table of the function in the LUT. This is an efficient way of encoding Boolean logic functions, and LUTs with 4-6 bits of input are in fact the key component of modern field-programmable gate arrays (FPGAs) which provide reconfigurable hardware logic capabilities.
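For instance (an illustrative sketch, not drawn from any particular FPGA toolchain), a 2-input XOR can be encoded by storing its four-entry truth table and using the packed input bits as the address:

/* Truth table of XOR, indexed by the two input bits packed as (a, b). */
static const unsigned char xor_lut[4] = { 0, 1, 1, 0 };

unsigned char lut_xor(unsigned a, unsigned b)
{
    unsigned address = ((a & 1u) << 1) | (b & 1u);   /* pack inputs into a 2-bit address */
    return xor_lut[address];
}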
In data acquisition and control systems , lookup tables are commonly used to undertake the following operations:
In some systems, polynomials may also be defined in place of lookup tables for these calculations. | https://en.wikipedia.org/wiki/Lookup_table |
A loom is a device used to weave cloth and tapestry . The basic purpose of any loom is to hold the warp threads under tension to facilitate the interweaving of the weft threads. The precise shape of the loom and its mechanics may vary, but the basic function is the same.
The word "loom" derives from the Old English geloma , formed from ge- (perfective prefix) and loma , a root of unknown origin; the whole word geloma meant a utensil, tool, or machine of any kind. In 1404 "lome" was used to mean a machine to enable weaving thread into cloth. [ 1 ] [ 2 ] [ failed verification ] By 1838 "loom" had gained the additional meaning of a machine for interlacing thread. [ citation needed ]
Weaving is done on two sets of threads or yarns, which cross one another. The warp threads are the ones stretched on the loom (from the Proto-Indo-European * werp , "to bend" [ 3 ] ). Each thread of the weft (i.e. "that which is woven") is inserted so that it passes over and under the warp threads.
The ends of the warp threads are usually fastened to beams. One end is fastened to one beam, the other end to a second beam, so that the warp threads all lie parallel and are all the same length. The beams are held apart to keep the warp threads taut.
The textile is woven starting at one end of the warp threads, and progressing towards the other end. The beam on the finished-fabric end is called the cloth beam . The other beam is called the warp beam .
Beams may be used as rollers to allow the weaver to weave a piece of cloth longer than the loom. As the cloth is woven, the warp threads are gradually unrolled from the warp beam, and the woven portion of the cloth is rolled up onto the cloth beam (which is also called the takeup roll ). The portion of the fabric that has already been formed but not yet rolled up on the takeup roll is called the fell .
Not all looms have two beams. For instance, warp-weighted looms have only one beam; the warp yarns hang from this beam. The bottom ends of the warp yarns are tied to dangling loom weights.
A loom has to perform three principal motions : shedding, picking, and battening.
There are also usually two secondary motions , because the newly constructed fabric must be wound onto cloth beam. This process is called taking up. At the same time, the warp yarns must be let off or released from the warp beam, unwinding from it. To become fully automatic, a loom needs a tertiary motion , the filling stop motion. This will brake the loom if the weft thread breaks. [ 4 ] An automatic loom requires 0.125 hp to 0.5 hp to operate (100W to 400W).
A loom, then, usually needs two beams, and some way to hold them apart. It generally has additional components to make shedding, picking, and battening faster and easier. There are also often components to help take up the fell.
The nature of the loom frame and the shedding, picking, and battening devices vary. Looms come in a wide variety of types, many of them specialized for specific types of weaving. They are also specialized for the lifestyle of the weaver. For instance, nomadic weavers tend to use lighter, more portable looms, while weavers living in cramped city dwellings are more likely to use a tall upright loom, or a loom that folds into a narrow space when not in use.
It is possible to weave by manually threading the weft over and under the warp threads, but this is slow. Some tapestry techniques use manual shedding. Pin looms and peg looms also generally have no shedding devices. Pile carpets generally do not use shedding for the pile, because each pile thread is individually knotted onto the warps, but there may be shedding for the weft holding the carpet together.
Usually weaving uses shedding devices. These devices pull some of the warp threads to each side, so that a shed is formed between them, and the weft is passed through the shed. There are a variety of methods for forming the shed. At least two sheds must be formed, the shed and the countershed. Two sheds is enough for tabby weave ; more complex weaves, such as twill weaves , satin weaves , diaper weaves , and figured (picture-forming) weaves, require more sheds.
Heddle-rods and shedding-sticks are not the fastest way to weave, but they are very simple to make, needing only sticks and yarn. They are often used on vertical [ 5 ] and backstrap looms. [ 6 ] They allow the creation of elaborate supplementary-weft brocades . [ 6 ] They are also used on modern tapestry looms; the frequent changing of weft colour in tapestry makes weaving tapestry slow, so using faster, more complex shedding systems isn't worthwhile. The same is true of looms for handmade knotted-pile carpet ; hand-knotting each pile thread to the warp takes far more time than weaving a couple of weft threads to hold the pile in place.
At its simplest, a heddle-bar is simply a stick placed across the warp and tied to individual warp threads. It is not tied to all of the warp threads; for a plain tabby weave , it is tied to every other thread. The little loops of string used to tie the warps to the heddle bar are called heddles or leashes . When the heddle-bar is pulled perpendicular to the warp, it pulls the warp threads it is tied to out of position, creating a shed.
A warp-weighted loom (see diagram) typically uses a heddle-bar, or several. It has two upright posts (C); they support a horizontal beam (D), which is cylindrical so that the finished cloth can be rolled around it, allowing the loom to be used to weave a piece of cloth taller than the loom, and preserving an ergonomic working height. The warp threads (F, and A and B) hang from the beam and rest against the shed rod (E). The heddle-bar (G) is tied to some of the warp threads (A, but not B), using loops of string called leashes (H). So when the heddle rod is pulled out and placed in the forked sticks protruding from the posts (not lettered, no technical term given in citation), the shed (1) is replaced by the counter-shed (2). By passing the weft through the shed and the counter-shed, alternately, cloth is woven. [ 7 ]
Several heddle-bars can be used side-by-side; three or more can be used to weave twill weaves , for instance.
There are also other ways to create counter-sheds. A shed-rod is simpler and easier to set up than a heddle-bar, and can make a counter-shed. A shed-rod (shedding stick, shed roll) is simply a stick woven through the warp threads. When pulled perpendicular to the threads (or rotated to stand on edge, for wide, flat shedding rods), it creates a counter shed. The combination of a heddle-bar and a shedding-stick can create the shed and countershed needed for a plain tabby weave, as in the video.
There are also slitted heddle-rods, which are sawn partway through, with evenly-placed slits. Each warp thread goes in a slit. The odd-numbered slits are at 90 degrees to the even slits. The rod is rotated back and forth to create the shed and countershed, [ 8 ] so it is often large-diameter. [ 9 ]
Tablet weaving uses cards punched with holes. The warp threads pass through the holes, and the cards are twisted and shifted to create varied sheds. This shedding technique is used for narrow work . It is also used to finish edges, weaving decorative selvage bands instead of hemming.
There are heddles made of flip-flopping rotating hooks, which raise and lower the warp, creating sheds . The hooks, when vertical, have the weft threads looped around them horizontally. If the hooks are flopped over to one side or the other, the loop of weft twists, raising one or the other side of the loop, which creates the shed and countershed. [ 10 ]
Rigid heddles are generally used on single-shaft looms. Odd warp threads go through the slots, and even ones through the circular holes, or vice versa. The shed is formed by lifting the heddle, and the countershed by depressing it. The warp threads in the slots stay where they are, and the ones in the circular holes are pulled back and forth. A single rigid heddle can hold all the warp threads, though sometimes multiple rigid heddles are used.
Treadles may be used to drive the rigid heddle up and down.
Rigid heddles (above) are called "rigid" to distinguish them from string and wire heddles. Rigid heddles are one-piece, but non-rigid ones are multi-piece. Each warp thread has its own heald (also, confusingly, called a heddle). The heald has an eyelet at each end (for the staves, also called shafts) and one in the middle, called the mail (for the warp thread). A row of these healds is slid onto two staves, the upper and lower staves; the staves together, or the staves together with the healds, may be called a heald frame , which is, confusingly, also called a shaft and a harness. [ 11 ] Replaceable, interchangeable healds can be smaller, allowing finer weaves.
Unlike a rigid heddle, a flexible heddle cannot push the warp thread. This means that two heald frames are needed even for a plain tabby weave . Twill weaves require three or more heald frames (depending on the type of twill), and more complex figured weaves require still more frames.
The different heald frames must be controlled by some mechanism, and the mechanism must be able to pull them in both directions. They are mostly controlled by treadles; creating the shed with the feet leaves the hands free to ply the shuttle. However in some tabletop looms, heald frames are also controlled by levers. [ 12 ] [ better source needed ]
In treadle looms, the weaver controls the shedding with their feet, by treading on treadles . Different treadles and combinations of treadles produce different sheds. The weaver must remember the sequence of treadling needed to produce the pattern.
The precise mechanism by which the treadles control the heddles varies. Rigid-heddle treadle looms do exist, but the heddles are usually flexible. Sometimes, the treadles are tied directly to the staves (with a Y-shaped bridle so they stay level). Alternately, they may be tied to a stick called a lamm , which in turn is tied to the stave, to make the motion more controlled and regular. The lamm may pivot or slide.
Counterbalance looms are the most common type of treadle loom globally, as they are simple and give a smooth, quiet, quick motion. [ 13 ] The heald frames are joined together in pairs, by a cord running over heddle pulleys or a heddle roller. When one heald frame rises, the other falls. It takes a pair of treadles to control a pair of frames. Counterbalance looms are usually used with two or four frames, though some have as many as ten. [ 13 ]
In theory, each pair of heald frames has to have an equal number of warps pulled by each frame, so the patterns that can be made on them are limited. [ 14 ] In practice, fairly unbalanced tie-ups just make the shed a bit smaller, and because the shed on a counterbalance loom is adjustable in size and quite large to start with (compared to other types of loom), it is entirely possible to weave good cloth on a counterbalance loom with unbalanced heald frames, [ 15 ] [ 13 ] unless the loom is extremely shallow (that is, the length of warp being pulled on is short, less than 1 meter or 3 feet), which exacerbates the slightly uneven tension. [ 13 ] Limited patterns are not, of course, a disadvantage when weaving plainer patterns, such as tabbies and twills.
Jack looms (also called single-tieup-looms and rising-shed looms [ 16 ] ) have their treadles connected to jacks, levers that push or pull the heald frames up; the harnesses are weighted to fall back into place by gravity. Several frames can be connected to a single treadle. Frames can also be raised by more than one treadle. This allows treadles to control arbitrary combinations of frames, which vastly increases the number of different sheds that can be created from the same number of frames. Any number of treadles can also be engaged at once, meaning that the number of different sheds that can be selected is two to the power of the number of treadles. Eight is a large but reasonable number of treadles, giving a maximum of 2^8 = 256 sheds (some of which will probably not have enough threads on one side to be useful). [ citation needed ] Having more possible sheds allows more complex patterns, [ 14 ] [ 16 ] such as diaper weaves . [ citation needed ]
Jack looms are easy to make and to tie up (if not quite as easy as counterbalance looms). The gravity return makes jack looms heavy to operate. The shed of a jack loom is smaller for a given length of warp being pulled aside by the heddles (loom depth). The warp threads being pulled up by the jacks are also tauter than the other warp threads (unlike a counterbalance loom, where the threads are pulled an equal amount in opposite directions). Uneven tension makes weaving evenly harder. It also lowers the maximum tension at which one can practically weave. [ 14 ] [ 16 ] If the threads are rough, closely-spaced, very long or numerous, it can be hard to open the sheds on the jack loom. [ 16 ] Jack looms without castles (the superstructure above the weft) have to lift the heald frames from below, and are noisier due to the impact of wood on wood; elastomer pads can reduce the noise. [ 13 ]
In countermarch looms , the treadles are tied to lamms, [ 17 ] [ 14 ] which may pivot at one end or slide up and down. [ 18 ] Half of the lamms in turn connect to jacks, which also pivot, and push or pull the staves up or down. [ 17 ] Some countermarches have two horizontal jacks per shaft, others a single vertical jack. [ 13 ] Each treadle is tied to all of the heald frames, moving some of them up and the rest of them down. [ 13 ] This combines the complex combinatorial treadling of a jack loom with the large shed, the balanced, even tension, and the quiet, light operation of a counterbalance loom. Unfortunately, countermarch looms are more complex, harder to build, slower to tie up, [ 17 ] [ 14 ] [ 13 ] and more prone to malfunction. [ 17 ] [ 19 ]
A drawloom is for weaving figured cloth. In a drawloom, a "figure harness" is used to control each warp thread separately, [ 20 ] allowing very complex patterns. A drawloom requires two operators, the weaver, and an assistant called a "drawboy" to manage the figure harness.
The earliest confirmed drawloom fabrics come from the State of Chu and date c. 400 BC. [ 21 ] Some scholars speculate an independent invention in ancient Syria , since drawloom fabrics found in Dura-Europas are thought to date before 256 AD. [ 21 ] [ 22 ] The draw loom was invented in China during the Han dynasty ( State of Liu ?); [ contradictory ] [ 23 ] foot-powered multi-harness looms and jacquard looms were used for silk weaving and embroidery, both of which were cottage industries with imperial workshops. [ 24 ] The drawloom enhanced and sped up the production of silk and played a significant role in Chinese silk weaving. The loom was introduced to Persia, India, and Europe. [ 23 ]
A dobby head is a device that replaces the drawboy, the weaver's helper who used to control the warp threads by pulling on draw threads. "Dobby" is a corruption of "draw boy". Mechanical dobbies pull on the draw threads using pegs in bars to lift a set of levers. The placement of the pegs determines which levers are lifted. The sequence of bars (they are strung together) effectively remembers the sequence for the weaver. Computer-controlled dobbies use solenoids instead of pegs.
The Jacquard loom is a mechanical loom, invented by Joseph Marie Jacquard in 1801, which simplifies the process of manufacturing figured textiles with complex patterns such as brocade , damask , and matelasse . [ 25 ] [ 26 ] The loom is controlled by punched cards, each row of holes corresponding to one row of the design. Multiple rows of holes are punched on each card and the many cards that compose the design of the textile are strung together in order. It is based on earlier inventions by the Frenchmen Basile Bouchon (1725), Jean Baptiste Falcon (1728), and Jacques Vaucanson (1740). [ 27 ] To call it a loom is a misnomer. A Jacquard head could be attached to a power loom or a handloom, the head controlling which warp thread was raised during shedding. Multiple shuttles could be used to control the colour of the weft during picking. The Jacquard loom is the predecessor to the computer punched card readers of the 19th and 20th centuries. [ 28 ]
The weft may be passed across the shed as a ball of yarn, but usually this is too bulky and unergonomic. Shuttles are designed to be slim, so they pass through the shed; to carry a lot of yarn, so the weaver does not need to refill them too often; and to be an ergonomic size and shape for the particular weaver, loom, and yarn. They may also be designed for low friction.
At their simplest, these are just sticks wrapped with yarn. They may be specially shaped, as with the bobbins and bones used in tapestry-making (bobbins are used on vertical warps, and bones on horizontal ones). [ 29 ] [ 30 ]
Boat shuttles may be closed (central hollow with a solid bottom) or open (central hole goes right through). The yarn may be side-feed or end-feed. [ 34 ] [ 35 ] They are commonly made for 10-cm (4-inch) and 15-cm (6-inch) bobbin lengths. [ 36 ]
Hand weavers who threw a shuttle could only weave a cloth as wide as their armspan . If cloth needed to be wider, two people would do the task (often this would be an adult with a child). John Kay (1704–1779) patented the flying shuttle in 1733. The weaver held a picking stick that was attached by cords to a device at both ends of the shed. With a flick of the wrist, one cord was pulled and the shuttle was propelled through the shed to the other end with considerable force, speed and efficiency. A flick in the opposite direction and the shuttle was propelled back. A single weaver had control of this motion but the flying shuttle could weave much wider fabric than an arm's length at much greater speeds than had been achieved with the hand thrown shuttle.
The flying shuttle was one of the key developments in weaving that helped fuel the Industrial Revolution . The whole picking motion no longer relied on manual skill and it was just a matter of time before it could be powered by something other than a human.
Different types of power looms are most often defined by the way that the weft, or pick, is inserted into the warp. Many advances in weft insertion have been made in order to make manufactured cloth more cost effective. Weft insertion rate is a limiting factor in production speed. As of 2010 [update] , industrial looms can weave at 2,000 weft insertions per minute. [ 37 ]
There are five main types of weft insertion and they are as follows:
The newest weft thread must be beaten against the fell. Battening can be done with a long stick placed in the shed parallel to the weft (a sword batten), a shorter stick threaded between the warp threads perpendicular to warp and weft (a pin batten), a comb, or a reed (a comb with both ends closed, so that it has to be sleyed, that is have the warp threads threaded through it, when the loom is warped). For rigid-heddle looms, the heddle may be used as a reed.
Patented in 1802, dandy looms automatically rolled up the finished cloth, keeping the fell always the same length. They significantly speeded up hand weaving (still a major part of the textile industry in the 1800s). Similar mechanisms were used in power looms.
The temples act to keep the cloth from shrinking sideways as it is woven. Some warp-weighted looms had temples made of loom weights , suspended by strings so that they pulled the cloth breadthwise. [ 7 ] Other looms may have temples tied to the frame, or temples that are hooks with an adjustable shaft between them. Power looms may use temple cylinders. Pins can leave a series of holes in the selvages (these may be from stenter pins used in post-processing).
Loom frames can be roughly divided, by the orientation of the warp threads, into horizontal looms and vertical looms. There are many finer divisions. Most handloom frame designs can be constructed fairly simply. [ 39 ]
The back-strap loom (also known as belt loom) [ 40 ] is a simple loom with ancient roots, still used in many cultures around the world (as in the weaving of Andean textiles , and in Central, East and South Asia). [ 41 ] It consists of two sticks or bars between which the warps are stretched. One bar is attached to a fixed object and the other to the weaver, usually by means of a strap around the weaver's back. [ 42 ] The weaver leans back and uses their body weight to tension the loom.
Both simple and complex textiles can be woven on backstrap looms. They produce narrowcloth : width is limited to the weaver's armspan. They can readily produce warp-faced textiles, often decorated with intricate pick-up patterns woven in complementary and supplementary warp techniques, and brocading. Balanced weaves are also possible on the backstrap loom.
The warp-weighted loom is a vertical loom that may have originated in the Neolithic period. Its defining characteristic is hanging weights (loom weights) which keep bundles of the warp threads taut. Frequently, extra warp thread is wound around the weights. When a weaver has woven far enough down, the completed section (fell) can be rolled around the top beam, and additional lengths of warp threads can be unwound from the weights to continue. This frees the weaver from vertical size constraint. Horizontally, breadth is limited by armspan; making broadwoven cloth requires two weavers, standing side by side at the loom.
Simple weaves, and complex weaves that need more than two different sheds, can both be woven on a warp-weighted loom. They can also be used to produce tapestries.
In pegged looms, the beams can be simply held apart by hooking them behind pegs driven into the ground, with wedges or lashings used to adjust the tension. Pegged looms may, however, also have horizontal sidepieces holding the beams apart.
Such looms are easy to set up and dismantle, and are easy to transport, so they are popular with nomadic weavers. They are generally only used for comparatively small woven articles. [ 45 ] Urbanites are unlikely to use horizontal floor looms as they take up a lot of floor space, and full-time professional weavers are unlikely to use them as they are unergonomic. Their cheapness and portability is less valuable to urban professional weavers. [ 46 ]
In a treadle loom, the shedding is controlled by the feet, which tread on the treadles .
The earliest evidence of a horizontal loom is found on a pottery dish in ancient Egypt , dated to 4400 BC. It was a frame loom, equipped with treadles to lift the warp threads, leaving the weaver's hands free to pass and beat the weft thread. [ 47 ]
A pit loom has a pit for the treadles, reducing the stress transmitted through the much shorter frame. [ 48 ]
In a wooden vertical-shaft loom, the heddles are fixed in place in the shaft. The warp threads pass alternately through a heddle, and through a space between the heddles (the shed ), so that raising the shaft raises half the threads (those passing through the heddles), and lowering the shaft lowers the same threads — the threads passing through the spaces between the heddles remain in place.
A treadle loom for figured weaving may have a large number of harnesses or a control head. It can, for instance, have a Jacquard machine attached to it [ 49 ] (see Loom#Shedding methods) .
Tapestry can have extremely complex wefts, as different strands of wefts of different colours are used to form the pattern. Speed is lower, and shedding and picking devices may be simpler. Looms used for weaving traditional tapestry are described not as "vertical-warp" and "horizontal-warp", but as "high-warp" or "low-warp" (the French terms haute-lisse and basse-lisse are also used in English). [ 50 ]
Inkle looms are narrow looms used for narrow work . They are used to make narrow warp-faced strips such as ribbons, bands, or tape. They are often quite small; some are used on a tabletop, while others are backstrap looms with a rigid heddle , and very portable.
There exist very small hand-held looms known as darning looms. They are made to fit under the fabric being mended, and are often held in place by an elastic band on one side of the cloth and a groove around the loom's darning-egg portion on the other. They may have heddles made of flip-flopping rotating hooks (see Loom#Rotating-hook heddles ) . [ 51 ] Other devices sold as darning looms are just a darning egg and a separate comb-like piece with teeth to hook the warp over; these are used for repairing knitted garments and are like a linear knitting spool . [ 52 ] Darning looms were sold during World War Two clothing rationing in the United Kingdom [ 53 ] and Canada, [ 54 ] and some are homemade. [ 55 ] [ 56 ]
Circular looms are used to create seamless tubes of fabric for products such as hosiery, sacks, clothing, fabric hoses (such as fire hoses) and the like. Tablet weaving can be used to weave tubes, including tubes that split and join.
Small jigs also used for circular knitting are also sometimes called circular looms, [ 57 ] but they are used for knitting, not weaving.
A power loom is a loom powered by a source of energy other than the weaver's muscles. When power looms were developed, other looms came to be referred to as handlooms . Most cloth is now woven on power looms, but some is still woven on handlooms. [ 48 ]
The development of power looms was gradual. The capabilities of power looms gradually expanded, but handlooms remained the most cost-effective way to make some types of textiles for most of the 1800s. Many improvements in loom mechanisms were first applied to hand looms (like the dandy loom ), and only later integrated into power looms.
Edmund Cartwright built and patented a power loom in 1785, and it was this that was adopted by the nascent cotton industry in England. The silk loom made by Jacques Vaucanson in 1745 operated on the same principles but was not developed further. The invention of the flying shuttle by John Kay allowed a hand weaver to weave broadwoven cloth without an assistant, and was also critical to the development of a commercially successful power loom. [ 58 ] Cartwright's loom was impractical but the ideas behind it were developed by numerous inventors in the Manchester area of England. By 1818, there were 32 factories containing 5,732 looms in the region. [ 59 ]
The Horrocks loom was viable, but it was the Roberts Loom in 1830 that marked the turning point. [ 60 ] [ clarification needed ] Incremental changes to the three motions continued to be made. The problems of sizing, stop-motions, consistent take-up, and a temple to maintain the width remained. In 1841, Kenworthy and Bullough produced the Lancashire Loom [ 61 ] which was self-acting or semi-automatic. This enabled a youngster to run six looms at the same time. Thus, for simple calicos, the power loom became more economical to run than the handloom – with complex patterning that used a dobby or Jacquard head, jobs were still put out to handloom weavers until the 1870s. Incremental changes were made such as the Dickinson Loom , culminating in the fully automatic Northrop Loom , developed by the Keighley -born inventor Northrop, who was working for the Draper Corporation in Hopedale . This loom recharged the shuttle when the pirn was empty. The Draper E and X models became the leading products from 1909. They were challenged by synthetic fibres such as rayon . [ 62 ]
By 1942, faster, more efficient, and shuttleless Sulzer and rapier looms had been introduced. [ 63 ]
The loom is a symbol of cosmic creation and the structure upon which individual destiny is woven. This symbolism is encapsulated in the classical myth of Arachne who was changed into a spider by the goddess Athena , who was jealous of her skill at the godlike craft of weaving. [ 64 ] In Maya civilization the goddess Ixchel taught the first woman how to weave at the beginning of time. [ 65 ] | https://en.wikipedia.org/wiki/Loom |
In lunar photography, the Looney 11 rule (also known as the Looney f /11 rule ) is a method of estimating correct exposures without a light meter . For daylight photography, there is a similar rule called the Sunny 16 rule . The basic rule is: "For astronomical photos of the Moon 's surface, set aperture to f /11 and shutter speed to the [reciprocal of the] ISO film speed [or ISO setting]." [ 1 ]
As with other light readings, shutter speed can be changed as long as the f-number is altered to compensate, e.g. 1/250 second at f /8 gives equivalent exposure to 1/125 second at f /11 . Generally, the adjustment is done such that for each step in aperture increase (i.e., decreasing the f-number), the exposure time has to be halved (or equivalently, the shutter speed doubled), and vice versa. This follows the more general rule derived from the mathematical relationship between aperture and exposure time—within reasonable ranges, exposure is proportional to the square of the aperture ratio and proportional to exposure time; thus, to maintain a constant level of exposure, a change in aperture by a factor c requires a change in exposure time by a factor 1/ c 2 and vice versa. Steps in aperture correspond to a factor close to the square root of two, thus the above rule.
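As a rough sketch of the two rules above (the function names and the representation of shutter time in seconds are illustrative assumptions, not part of the rule itself):

/* Looney 11 base exposure: at f/11, shutter time is the reciprocal of the ISO. */
double looney11_shutter_seconds(double iso)
{
    return 1.0 / iso;
}

/* Keep the same exposure at a different f-number: since exposure varies as
   time / (f-number squared), time must scale with the square of the ratio. */
double equivalent_shutter_seconds(double base_time, double from_f, double to_f)
{
    double ratio = to_f / from_f;
    return base_time * ratio * ratio;
}

For example, at ISO 100 this gives 1/100 second at f/11, and about twice that (roughly 1/50 second) at f/16, one stop smaller.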
The intensity of visible sunlight striking the surface of the Moon is essentially the same as at the surface of the Earth. [ 2 ] The albedo of the Moon's surface material is lower (darker) than that of the Earth's surface, [ 3 ] and the Looney 11 rule increases exposure by one stop versus the Sunny 16 rule. | https://en.wikipedia.org/wiki/Looney_11_rule |
Loop-mediated isothermal amplification ( LAMP ) is a single-tube technique for the amplification of DNA [ 2 ] for diagnostic purposes and a low-cost alternative to detect certain diseases. [ 3 ] LAMP is an isothermal nucleic acid amplification technique. In contrast to the polymerase chain reaction (PCR) technology, in which the reaction is carried out with a series of alternating temperature steps or cycles, isothermal amplification is carried out at a constant temperature, and does not require a thermal cycler . LAMP was invented in 1998 by Eiken Chemical Company in Tokyo. [ 1 ] Reverse transcription loop-mediated isothermal amplification (RT-LAMP) combines LAMP with a reverse transcription step to allow the detection of RNA.
In LAMP, the target sequence is amplified at a constant temperature of 60–65 °C (140–149 °F) using either two or three sets of primers and a polymerase like Bst Klenow fragment with high strand displacement activity in addition to a replication activity. Typically, four different primers are used to amplify six distinct regions on the target gene, which increases specificity. An additional pair of "loop primers" can further accelerate the reaction. [ 4 ] The amount of DNA produced in LAMP is considerably higher than PCR -based amplification. [ 1 ] Primer design could be performed using several programs, such as PrimerExplorer , MorphoCatcher , [ 5 ] and NEB LAMP Primer Design Tool . For the screening of conservative and species-specific nucleotide polymorphisms, in most diagnostics applications a combination of PrimerExplorer and MorphoCatcher is very useful, because it allows for the localization of species-specific nucleotides at 3'-ends of primers to enhance the specificity of reactions.
The amplification product can be detected via photometry , measuring the turbidity caused by magnesium pyrophosphate precipitate in solution as a byproduct of amplification. [ 7 ] This allows easy visualization by the naked eye or via simple photometric detection approaches for small volumes. The reaction can be followed in real-time either by measuring the turbidity [ 8 ] or by fluorescence using intercalating dyes such as SYTO 9. [ 9 ]
Dyes, such as SYBR green , can be used to create a visible color change that can be seen with the naked eye without the need for expensive equipment, or for a response that can more accurately be measured by instrumentation. Dye molecules intercalate or directly label the DNA, and in turn can be correlated with the number of copies initially present. Hence, LAMP can also be quantitative. In-tube detection of LAMP DNA amplification is possible using manganese loaded calcein which starts fluorescing upon complexation of manganese by pyrophosphate during in vitro DNA synthesis. [ 10 ] Another method for visual detection of the LAMP amplicons by the unaided eye was based on their ability to hybridize with complementary gold nanoparticle-bound (AuNP) single-stranded DNA (ssDNA) and thus prevent the normal red to purple-blue color change that would otherwise occur during salt-induced aggregation of the gold particles. So, a LAMP method combined with amplicon detection by AuNP can have advantages over other methods in terms of reduced assay time, amplicon confirmation by hybridization and use of simpler equipment (i.e., no need for a thermocycler, electrophoresis equipment or a UV trans-illuminator). [ 11 ] [ 12 ]
pH-dependent dye indicators such as Phenol Red induce a color change from pink to yellow when the pH value of the reaction decreases upon DNA amplification. [ 13 ] Due to its pronounced color change, this is the most commonly used readout for RT-LAMP assays. [ 13 ] However, the pH-change dependent readout requires a weakly buffered reaction solution, which poses a great challenge when using crude sample inputs with variable pH. [ 13 ] A second colorimetric assay utilizes metal ion indicators such as hydroxynaphthol blue (HNB), which changes color from purple to blue upon a drop in free Mg 2+ ions, which form a Mg- pyrophosphate precipitate upon DNA amplification. [ 13 ]
LAMP is a relatively new DNA amplification technique, which due to its simplicity, ruggedness, and low cost could provide major advantages.
LAMP has the potential to be used as a simple screening assay in the field or at the point of care by clinicians. [ 14 ] Because LAMP is isothermal, which eradicates the need for expensive thermocyclers used in conventional PCR, it may be a particularly useful method for infectious disease diagnosis in low and middle income countries. [ 15 ] LAMP is widely being studied for detecting infectious diseases such as filariasis, [ 16 ] Zika Virus, [ 17 ] tuberculosis, [ 18 ] malaria, [ 19 ] [ 20 ] [ 21 ] sleeping sickness, [ 22 ] and SARS-CoV-2 . [ 23 ] [ 24 ] In developing regions, it has yet to be extensively validated for other common pathogens. [ 14 ]
LAMP has been observed to be less sensitive (more resistant) than PCR to inhibitors in complex samples such as blood, likely due to use of a different DNA polymerase (typically Bst – Bacillus stearothermophilus – DNA polymerase rather than Taq polymerase as in PCR). Several reports describe successful detection of pathogens from minimally processed samples such as heat-treated blood, [ 25 ] [ 26 ] or in presence of clinical sample matrices. [ 27 ] This feature of LAMP may be useful in low-resource or field settings where a conventional DNA or RNA extraction prior to diagnostic testing may be impractical.
LAMP has also been used to help identify body fluids. Because of its simplicity, researchers are able to test one or more samples with little hands-on time, which helps cut down the time needed to get results. Researchers have also been able to add reagents that make identification even simpler, including a metal-indicator dye and phenol red, allowing the results to be analyzed with a smartphone or the naked eye, respectively. [ 28 ] [ 29 ] [ 30 ]
LAMP is less versatile than PCR, the most well-established nucleic acid amplification technique. LAMP is useful primarily as a diagnostic or detection technique, but is not useful for cloning or many other molecular biology applications enabled by PCR. Because LAMP uses 4 (or 6) primers targeting 6 (or 8) regions within a fairly small segment of the genome, and because primer design is subject to numerous constraints, it is difficult to design primer sets for LAMP "by eye". Free, open-source [ 31 ] or commercial software packages are generally used to assist with LAMP primer design, although the primer design constraints mean there is less freedom to choose the target site than with PCR.
In a diagnostic application, this must be balanced against the need to choose an appropriate target (e.g., a conserved site in a highly variable viral genome, or a target that is specific for a particular strain of pathogen). Multiple degenerated sequences may be required to cover the different variant strains of the same species. A consequence of having such a cocktail of primers can be non-specific amplification in the late amplification. [ citation needed ]
Multiplexing approaches for LAMP are less developed than for PCR. The larger number of primers per target in LAMP increases the likelihood of primer-primer interactions for multiplexed target sets. The product of LAMP is a series of concatemers of the target region, giving rise to a characteristic "ladder" or banding pattern on a gel, rather than a single band as with PCR. Although this is not a problem when detecting single targets with LAMP, "traditional" (endpoint) multiplex PCR applications wherein identity of a target is confirmed by size of a band on a gel are not feasible with LAMP. Multiplexing in LAMP has been achieved by choosing a target region with a restriction site, and digesting prior to running on a gel, such that each product gives rise to a distinct size of fragment, [ 32 ] although this approach adds complexity to the experimental design and protocol.
The use of a strand-displacing DNA polymerase in LAMP also precludes the use of hydrolysis probes, e.g. TaqMan probes, which rely upon the 5'-3' exonuclease activity of Taq polymerase. An alternative real-time multiplexing approach based on fluorescence quenchers has been reported. [ 33 ]
SYBR green dye may be added to view LAMP in real-time. However, in the late amplification, primer-dimer amplification may contribute to a false positive signal. The use of inorganic pyrophosphatase in a SYBR reaction mix allows the use of melt analysis to distinguish correct amplification [ 34 ]
Although different mitigation strategies have been proposed for false-positive results in assays based on this method, nonspecific amplification due to various factors including the absence of temperature gating mechanisms is one of the major limitations of Loop-mediated isothermal amplification. [ 35 ] [ 36 ]
Lastly, because LAMP requires maintained, elevated incubation temperatures (60–65 °C), some sort of heating mechanism, thermostat, and/or insulator is required (though not necessarily a thermal cycler). This requirement makes LAMP less well suited for in-the-field, point-of-care diagnostics, which would ideally function at ambient temperature. [ citation needed ]
RNase hybridization-assisted amplification (RHAM) integrates LAMP with RNase HII -mediated fluorescent reporting. This method employs a conventional LAMP primer set to exponentially amplify the target sequence, followed by the hybridization of a ribonucleotide-containing fluorescent probe to the amplification product. RNase HII then cleaves the probe, releasing a fluorescent signal that can be detected. [ 37 ] [ 38 ] | https://en.wikipedia.org/wiki/Loop-mediated_isothermal_amplification |
In mathematics , a loop in a topological space X is a continuous function f from the unit interval I = [0,1] to X such that f (0) = f (1) . In other words, it is a path whose initial point is equal to its terminal point. [ 1 ]
A loop may also be seen as a continuous map f from the pointed unit circle S 1 into X , because S 1 may be regarded as a quotient of I under the identification of 0 with 1.
The set of all loops in X forms a space called the loop space of X . [ 1 ]
This topology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Loop_(topology) |
Loop entropy is the entropy lost upon bringing together two residues of a polymer within a prescribed distance. For a single loop, the entropy varies logarithmically with the number of residues N {\displaystyle N} in the loop Δ S = α k B ln ⁡ N {\displaystyle \Delta S=\alpha k_{B}\ln N}
where k B {\displaystyle k_{B}} is the Boltzmann constant and α {\displaystyle \alpha } is a coefficient that depends on the properties of the polymer. This entropy formula corresponds to a power-law distribution P ∼ N − α {\displaystyle P\sim N^{-\alpha }} for the probability of the residues contacting.
The loop entropy may also vary with the position of the contacting residues. Residues near the ends of the polymer are more likely to contact (quantitatively, have a lower α {\displaystyle \alpha } ) than those in the middle (i.e., far from the ends), primarily due to excluded volume effects .
The loop entropy formula becomes more complicated with multiple loops, but may be determined for a Gaussian polymer using a matrix method developed by Wang and Uhlenbeck. Let there be M {\displaystyle M} contacts among the residues, which define M {\displaystyle M} loops of the polymer. The Wang-Uhlenbeck matrix W {\displaystyle \mathbf {W} } is an M × M {\displaystyle M\times M} symmetric, real matrix whose elements W i j {\displaystyle W_{ij}} equal the number of common residues between loops i {\displaystyle i} and j {\displaystyle j} . The entropy of making the specified contacts equals Δ S = α k B ln ⁡ det W {\displaystyle \Delta S=\alpha k_{B}\ln \det \mathbf {W} }
As an example, consider the entropy lost upon making the contacts between residues 26 and 84 and residues 58 and 110 in a polymer (cf. ribonuclease A ). The first and second loops have lengths 58 (=84-26) and 52 (=110-58), respectively, and they have 26 (=84-58) residues in common. The corresponding Wang-Uhlenbeck matrix is W = ( 58 26 26 52 ) {\displaystyle \mathbf {W} ={\begin{pmatrix}58&26\\26&52\end{pmatrix}}}
whose determinant is 2340. Taking the logarithm and multiplying by the constants α k B {\displaystyle \alpha k_{B}} gives the entropy.
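The arithmetic of this example can be checked with a short program; this sketch leaves α {\displaystyle \alpha } and k B {\displaystyle k_{B}} symbolic (a prefactor of one) and simply reports the determinant and its logarithm, so it illustrates the worked example rather than a general implementation of the Wang–Uhlenbeck method:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Loop lengths on the diagonal, shared residues off the diagonal. */
    double W[2][2] = { { 58.0, 26.0 },
                       { 26.0, 52.0 } };
    double det = W[0][0] * W[1][1] - W[0][1] * W[1][0];
    printf("det W = %.0f\n", det);                     /* prints 2340 */
    printf("entropy / (alpha * k_B) = %f\n", log(det));
    return 0;
}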
This thermodynamics -related article is a stub . You can help Wikipedia by expanding it .
This article about polymer science is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Loop_entropy |
In quantum field theory and statistical mechanics , loop integrals are the integrals which appear when evaluating the Feynman diagrams with one or more loops by integrating over the internal momenta. [ 1 ] These integrals are used to determine counterterms, which in turn allow evaluation of the beta function , which encodes the dependence of coupling g {\displaystyle g} for an interaction on an energy scale μ {\displaystyle \mu } .
A generic one-loop integral, for example those appearing in one-loop renormalization of QED or QCD may be written as a linear combination of terms in the form
where the q i {\displaystyle q_{i}} are 4-momenta which are linear combinations of the external momenta, and the m i {\displaystyle m_{i}} are masses of interacting particles. This expression uses Euclidean signature. In Lorentzian signature the denominator would instead be a product of expressions of the form ( k + q ) 2 − m 2 + i ϵ {\displaystyle (k+q)^{2}-m^{2}+i\epsilon } .
Using Feynman parametrization , this can be rewritten as a linear combination of integrals of the form
where the 4-vector l {\displaystyle l} and Δ {\displaystyle \Delta } are functions of the q i , m i {\displaystyle q_{i},m_{i}} and the Feynman parameters. This integral is also integrated over the domain of the Feynman parameters. The integral is an isotropic tensor and so can be written as an isotropic tensor without l {\displaystyle l} dependence (but possibly dependent on the dimension d {\displaystyle d} ), multiplied by the integral
Note that if n {\displaystyle n} were odd, then the integral vanishes, so we can define n = 2 a {\displaystyle n=2a} .
In Wilsonian renormalization , the integral is made finite by specifying a cutoff scale Λ > 0 {\displaystyle \Lambda >0} . The integral to be evaluated is then
where ∫ Λ {\displaystyle \int ^{\Lambda }} is shorthand for integration over the domain { l ∈ R d : | l | < Λ } {\displaystyle \{l\in \mathbb {R} ^{d}:|l|<\Lambda \}} . The expression is finite, but in general as Λ → ∞ {\displaystyle \Lambda \rightarrow \infty } , the expression diverges.
The integral without a momentum cutoff may be evaluated as
where B {\displaystyle B} is the Beta function . For calculations in the renormalization of QED or QCD, a {\displaystyle a} takes values 0 , 1 {\displaystyle 0,1} and 2 {\displaystyle 2} .
For loop integrals in QFT, B {\displaystyle B} actually has a pole for relevant values of a , b {\displaystyle a,b} and d {\displaystyle d} . For example in scalar ϕ 4 {\displaystyle \phi ^{4}} theory in 4 dimensions, the loop integral in the calculation of one-loop renormalization of the interaction vertex has ( a , b , d ) = ( 0 , 2 , 4 ) {\displaystyle (a,b,d)=(0,2,4)} . We use the 'trick' of dimensional regularization , analytically continuing d {\displaystyle d} to d = 4 − ϵ {\displaystyle d=4-\epsilon } with ϵ {\displaystyle \epsilon } a small parameter.
For calculation of counterterms, the loop integral should be expressed as a Laurent series in ϵ {\displaystyle \epsilon } . To do this, it is necessary to use the Laurent expansion of the Gamma function , Γ ( ϵ ) = 1 ϵ − γ + O ( ϵ ) {\displaystyle \Gamma (\epsilon )={\frac {1}{\epsilon }}-\gamma +{\mathcal {O}}(\epsilon )}
where γ {\displaystyle \gamma } is the Euler–Mascheroni constant. In practice the loop integral generally diverges as ϵ → 0 {\displaystyle \epsilon \rightarrow 0} .
For full evaluation of the Feynman diagram, there may be algebraic factors which must be evaluated. For example in QED, the tensor indices of the integral may be contracted with Gamma matrices , and identities involving these are needed to evaluate the integral. In QCD, there may be additional Lie algebra factors, such as the quadratic Casimir of the adjoint representation as well as of any representations that matter (scalar or spinor fields) in the theory transform under.
The starting point is the action for ϕ 4 {\displaystyle \phi ^{4}} theory in R d {\displaystyle \mathbb {R} ^{d}} :
Where ( ∂ ϕ 0 ) 2 = ∇ ϕ 0 ⋅ ∇ ϕ 0 = ∑ i = 1 d ∂ i ϕ 0 ∂ i ϕ 0 {\displaystyle (\partial \phi _{0})^{2}=\nabla \phi _{0}\cdot \nabla \phi _{0}=\sum _{i=1}^{d}\partial _{i}\phi _{0}\partial _{i}\phi _{0}} . The domain is purposefully left ambiguous, as it varies depending on regularisation scheme.
The Euclidean signature propagator in momentum space is 1 k 2 + m 0 2 {\displaystyle {\frac {1}{k^{2}+m_{0}^{2}}}}
The one-loop contribution to the two-point correlator ⟨ ϕ ( x ) ϕ ( y ) ⟩ {\displaystyle \langle \phi (x)\phi (y)\rangle } (or rather, to the momentum space two-point correlator or Fourier transform of the two-point correlator) comes from a single Feynman diagram and is
This is an example of a loop integral.
If d ≥ 2 {\displaystyle d\geq 2} and the domain of integration is R d {\displaystyle \mathbb {R} ^{d}} , this integral diverges. This is typical of the puzzle of divergences which plagued quantum field theory historically. To obtain finite results, we choose a regularization scheme. For illustration, we give two schemes.
Cutoff regularization : fix Λ > 0 {\displaystyle \Lambda >0} . The regularized loop integral is the integral over the domain k = | k | < Λ , {\displaystyle k=|\mathbf {k} |<\Lambda ,} and it is typical to denote this integral by
This integral is finite and in this case can be evaluated.
Dimensional regularization : we integrate over all of R d {\displaystyle \mathbb {R} ^{d}} , but instead of considering d {\displaystyle d} to be a positive integer, we analytically continue d {\displaystyle d} to d = n − ϵ {\displaystyle d=n-\epsilon } , where ϵ {\displaystyle \epsilon } is small. By the computation above, we showed that the integral can be written in terms of expressions which have a well-defined analytic continuation from integers n {\displaystyle n} to functions on C {\displaystyle \mathbb {C} } : specifically the gamma function has an analytic continuation and taking powers, x d {\displaystyle x^{d}} , is an operation which can be analytically continued. | https://en.wikipedia.org/wiki/Loop_integral |
Loop modeling is a problem in protein structure prediction requiring the prediction of the conformations of loop regions in proteins with or without the use of a structural template. Computer programs that solve these problems have been used to research a broad range of scientific topics from ADP to breast cancer . [ 1 ] [ 2 ] Because protein function is determined by its shape and the physiochemical properties of its exposed surface, it is important to create an accurate model for protein/ligand interaction studies. [ 3 ] The problem arises often in homology modeling , where the tertiary structure of an amino acid sequence is predicted based on a sequence alignment to a template , or a second sequence whose structure is known. Because loops have highly variable sequences even within a given structural motif or protein fold , they often correspond to unaligned regions in sequence alignments; they also tend to be located at the solvent -exposed surface of globular proteins and thus are more conformationally flexible. Consequently, they often cannot be modeled using standard homology modeling techniques. More constrained versions of loop modeling are also used in the data fitting stages of solving a protein structure by X-ray crystallography , because loops can correspond to regions of low electron density and are therefore difficult to resolve.
Regions of a structural model that are predicted by non-template-based loop modeling tend to be much less accurate than regions that are predicted using template-based techniques. The extent of the inaccuracy increases with the number of amino acids in the loop. The side-chain dihedral angles of the loop amino acids are often approximated from a rotamer library, but can worsen the inaccuracy of side chain packing in the overall model. Andrej Sali 's homology modeling suite MODELLER includes a facility explicitly designed for loop modeling by a satisfaction of spatial restraints method. All methods require an upload of the PDB file and some require the specification of the loop location.
In general, the most accurate predictions are for loops of fewer than 8 amino acids. Extremely short loops of three residues can be determined from geometry alone, provided that the bond lengths and bond angles are specified. Slightly longer loops are often determined from a "spare parts" approach, in which loops of similar length are taken from known crystal structures and adapted to the geometry of the flanking segments. In some methods, the bond lengths and angles of the loop region are allowed to vary, in order to obtain a better fit; in other cases, the constraints of the flanking segments may be varied to find more "protein-like" loop conformations. The accuracy of such short loops may be almost as accurate as that of the homology model upon which it is based. It should also be considered that the loops in proteins may not be well-structured and therefore have no one conformation that could be predicted; NMR experiments indicate that solvent-exposed loops are "floppy" and adopt many conformations, while the loop conformations seen by X-ray crystallography may merely reflect crystal packing interactions, or the stabilizing influence of crystallization co-solvents.
As mentioned above homology-based methods use a database to align the target protein gap with a known template protein. A database of known structures is searched for a loop that fits the gap of interest by similarity of sequence and stems (the edges of the gap created by the unknown loop structure). The success of this method largely depends on the quality of that alignment. Since the loop is the least conserved portion of a protein’s structure, the homology-based method cannot always find a known template that aligns with the target sequence. Fortunately, the template databases are always adding new templates so the problem of not being able to find an alignment is becoming less of an issue. Some programs that use this method are SuperLooper and FREAD.
Otherwise known as an ab initio method, non-template based approaches use a statistical model to fill in the gaps created by the unknown loop structure. Some of these programs include MODELLER, Loopy, and RAPPER; but each of these programs approaches the problem in a different manner. For example, Loopy uses samples of torsion angle pairs to generate the initial loop structure then it revises this structure to maintain a realistic shape and closure, while RAPPER builds from one end of the gap to the other by extending the stem with different sampled angles until the gap is closed. [ 4 ] Yet another method is the “divide and conquer” approach. This involves subdividing the loop into 2 segments and then repeatedly dividing and transforming each segment until the loop is small enough to be solved. [ 5 ] Even with all these methods non-template based approaches are most accurate up to 12 residues (amino acids within the loop).
There are three problems that arise when using a non-template based technique. First, there are constraints that limit the possibilities for local region modeling. One such constraint is that loop termini are required to end at the correct anchor position. Also, the backbone dihedral angles must stay within the allowed regions of the Ramachandran space. Second, a modeling program has to use a set procedure. Some programs use the “spare parts” approach as mentioned above. Other programs use a de novo approach that samples sterically feasible loop conformations and selects the best one. Third, determining the best model means that a scoring method must be created to compare the various conformations. [ 6 ] | https://en.wikipedia.org/wiki/Loop_modeling
Loop perforation is an approximate computing technique that allows some iterations of a loop to be skipped at regular intervals. [ 1 ] [ 2 ] [ 3 ]
It relies on one parameter : the perforation rate . The perforation rate can be interpreted as the number of iterations to skip each time, or the number of iterations to perform before skipping one.
Variants of loop perforation include those that skip iterations deterministically at regular intervals, those that skip iterations at the beginning or the end of the loop, and those that skip a random sample of iterations. The compiler may select the perforation variant at compile time, or include instrumentation that allows the runtime system to adaptively adjust the perforation strategy and perforation rate to satisfy the end-to-end accuracy goal.
Loop perforation techniques were first developed by MIT senior researchers Martin C. Rinard and Stelios Sidiroglou.
The example that follows illustrates loop perforation applied to C-like source code.
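A minimal illustrative sketch of a loop before and after perforation, with a perforation rate of 2 (the array, the summation body, and the rescaling step are arbitrary choices for illustration):

#include <stdio.h>

#define N 8
#define PERFORATION_RATE 2   /* execute one of every PERFORATION_RATE iterations */

int main(void)
{
    double data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    double exact = 0.0, approx = 0.0;

    /* Original loop: visits every element. */
    for (int i = 0; i < N; i++)
        exact += data[i];

    /* Perforated loop: skips every second iteration, trading accuracy
     * for roughly half the work. */
    for (int i = 0; i < N; i += PERFORATION_RATE)
        approx += data[i];

    /* For a reduction such as a sum, one simple correction is to rescale
     * by the fraction of iterations that were actually executed. */
    approx *= PERFORATION_RATE;

    printf("exact = %f, perforated estimate = %f\n", exact, approx);
    return 0;
}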
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Loop_perforation |
In topology , a branch of mathematics , the loop space Ω X of a pointed topological space X is the space of (based) loops in X , i.e. continuous pointed maps from the pointed circle S 1 to X , equipped with the compact-open topology . Two loops can be multiplied by concatenation . With this operation, the loop space is an A ∞ -space . That is, the multiplication is homotopy-coherently associative .
The set of path components of Ω X , i.e. the set of based-homotopy equivalence classes of based loops in X , is a group , the fundamental group π 1 ( X ).
The iterated loop spaces of X are formed by applying Ω a number of times.
There is an analogous construction for topological spaces without basepoint. The free loop space of a topological space X is the space of maps from the circle S 1 to X with the compact-open topology. The free loop space of X is often denoted by L X {\displaystyle {\mathcal {L}}X} .
As a functor , the free loop space construction is right adjoint to cartesian product with the circle, while the loop space construction is right adjoint to the reduced suspension . This adjunction accounts for much of the importance of loop spaces in stable homotopy theory . (A related phenomenon in computer science is currying , where the cartesian product is adjoint to the hom functor .) Informally this is referred to as Eckmann–Hilton duality .
The loop space is dual to the suspension of the same space; this duality is sometimes called Eckmann–Hilton duality . The basic observation is that [ Σ A , B ] ≊ [ A , Ω B ] {\displaystyle [\Sigma A,B]\approxeq [A,\Omega B]}
where [ A , B ] {\displaystyle [A,B]} is the set of homotopy classes of maps A → B {\displaystyle A\rightarrow B} ,
and Σ A {\displaystyle \Sigma A} is the suspension of A, and ≊ {\displaystyle \approxeq } denotes the natural homeomorphism . This homeomorphism is essentially that of currying , modulo the quotients needed to convert the products to reduced products.
In general, [ A , B ] {\displaystyle [A,B]} does not have a group structure for arbitrary spaces A {\displaystyle A} and B {\displaystyle B} . However, it can be shown that [ Σ Z , X ] {\displaystyle [\Sigma Z,X]} and [ Z , Ω X ] {\displaystyle [Z,\Omega X]} do have natural group structures when Z {\displaystyle Z} and X {\displaystyle X} are pointed , and the aforementioned isomorphism is of those groups. [ 1 ] Thus, setting Z = S k − 1 {\displaystyle Z=S^{k-1}} (the k − 1 {\displaystyle k-1} sphere) gives the relationship π k ( X ) ≅ π k − 1 ( Ω X ) {\displaystyle \pi _{k}(X)\cong \pi _{k-1}(\Omega X)} .
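Iterating this relationship identifies every homotopy group with a set of path components of an iterated loop space; the chain below is a standard consequence written out here for illustration:

\[
\pi_{k}(X)\;\cong\;\pi_{k-1}(\Omega X)\;\cong\;\cdots\;\cong\;\pi_{1}(\Omega^{k-1}X)\;\cong\;\pi_{0}(\Omega^{k}X).
\]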
This follows since the homotopy group is defined as π k ( X ) = [ S k , X ] {\displaystyle \pi _{k}(X)=[S^{k},X]} and the spheres can be obtained via suspensions of each-other, i.e. S k = Σ S k − 1 {\displaystyle S^{k}=\Sigma S^{k-1}} . [ 2 ] | https://en.wikipedia.org/wiki/Loop_space |
In computer graphics , the Loop method for subdivision surfaces is an approximating subdivision scheme developed by Charles Loop in 1987 for triangular meshes . Prior methods, namely Catmull-Clark and Doo-Sabin (1978), focused on quad meshes .
Loop subdivision surfaces are defined recursively, dividing each triangle into four smaller ones. The method is based on a quartic box spline . It generates C 2 continuous limit surfaces everywhere except at extraordinary vertices, where they are C 1 continuous.
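Written out, the refinement rules are simple weighted averages. The C sketch below is illustrative only: it computes the two standard masks usually given for Loop's scheme (a new edge vertex weighted 3/8, 3/8, 1/8, 1/8, and the valence-dependent weight β used to reposition existing vertices) on single coordinates, rather than operating on a real mesh data structure.

#include <math.h>
#include <stdio.h>

/* Valence-dependent weight from Loop's original scheme for repositioning an
 * existing ("even") vertex of valence n.  The new position is
 * (1 - n*beta) * v + beta * (sum of the n neighbouring vertices). */
static double loop_beta(int n)
{
    double pi = acos(-1.0);
    double t = 3.0 / 8.0 + 0.25 * cos(2.0 * pi / n);
    return (5.0 / 8.0 - t * t) / n;
}

/* New ("odd") vertex inserted on an interior edge: 3/8 of each edge endpoint
 * plus 1/8 of each vertex opposite the edge in the two adjacent triangles.
 * Shown for a single coordinate; apply the same weights to x, y and z. */
static double edge_vertex(double a, double b, double opp1, double opp2)
{
    return 0.375 * (a + b) + 0.125 * (opp1 + opp2);
}

int main(void)
{
    /* In the regular interior case (valence 6) beta works out to 1/16. */
    printf("beta(6) = %g\n", loop_beta(6));
    printf("beta(5) = %g\n", loop_beta(5));
    printf("edge vertex (one coordinate): %g\n",
           edge_vertex(0.0, 1.0, 0.5, 0.5));
    return 0;
}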
Geologists have applied Loop subdivision surfaces to model erosion on mountain faces , specifically in the Appalachians . [ citation needed ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Loop_subdivision_surface |
In mathematics, in the topology of 3-manifolds , the loop theorem is a generalization of Dehn's lemma . The loop theorem was first proven by Christos Papakyriakopoulos in 1956, along with Dehn's lemma and the Sphere theorem .
A simple and useful version of the loop theorem states that if for some 3-dimensional manifold M with boundary ∂M there is a map f : ( D 2 , ∂ D 2 ) → ( M , ∂ M ) {\displaystyle f\colon (D^{2},\partial D^{2})\to (M,\partial M)}
with f | ∂ D 2 {\displaystyle f|\partial D^{2}} not nullhomotopic in ∂ M {\displaystyle \partial M} , then there is an embedding with the same property.
The following version of the loop theorem, due to John Stallings , is given in the standard 3-manifold treatises (such as Hempel or Jaco):
Let M {\displaystyle M} be a 3-manifold and let S {\displaystyle S} be a connected surface in ∂ M {\displaystyle \partial M} . Let N ⊂ π 1 ( S ) {\displaystyle N\subset \pi _{1}(S)} be a normal subgroup such that k e r ( π 1 ( S ) → π 1 ( M ) ) − N ≠ ∅ {\displaystyle \mathop {\mathrm {ker} } (\pi _{1}(S)\to \pi _{1}(M))-N\neq \emptyset } .
Let f : D 2 → M {\displaystyle f\colon D^{2}\to M} be a continuous map such that f ( ∂ D 2 ) ⊂ S {\displaystyle f(\partial D^{2})\subset S} and [ f | ∂ D 2 ] ∉ N . {\displaystyle [f|\partial D^{2}]\notin N.} Then there exists an embedding g : D 2 → M {\displaystyle g\colon D^{2}\to M} such that g ( ∂ D 2 ) ⊂ S {\displaystyle g(\partial D^{2})\subset S} and [ g | ∂ D 2 ] ∉ N . {\displaystyle [g|\partial D^{2}]\notin N.}
Furthermore if one starts with a map f in general position, then for any neighborhood U of the singularity set of f , we can find such a g with image lying inside the union of image of f and U.
Stallings' proof utilizes an adaptation, due to Whitehead and Shapiro, of Papakyriakopoulos' "tower construction". The "tower" refers to a special sequence of coverings designed to simplify lifts of the given map. The same tower construction was used by Papakyriakopoulos to prove the sphere theorem (3-manifolds) , which states that a nontrivial map of a sphere into a 3-manifold implies the existence of a nontrivial embedding of a sphere. There is also a version of Dehn's lemma for minimal discs due to Meeks and S.-T. Yau, which also crucially relies on the tower construction.
A proof of the first version of the loop theorem that does not use the tower construction also exists. This was essentially done 30 years ago by Friedhelm Waldhausen as part of his solution to the word problem for Haken manifolds ; although he recognized this gave a proof of the loop theorem, he did not write up a detailed proof. The essential ingredient of this proof is the concept of Haken hierarchy . Proofs were later written up by Klaus Johannson , Marc Lackenby , and Iain Aitchison with Hyam Rubinstein .
One easy corollary of the loop theorem is the following: Let M {\displaystyle M} be a compact orientable irreducible 3-manifold. Then ∂ M {\displaystyle \partial M} is incompressible if and only if π 1 ( F ) → π 1 ( M ) {\displaystyle \pi _{1}(F)\to \pi _{1}(M)} is injective for each component F {\displaystyle F} of ∂ M {\displaystyle \partial M} . | https://en.wikipedia.org/wiki/Loop_theorem
Lophophorine , also known as N-methylanhalonine , is a bioactive alkaloid made by various cacti in the genus Lophophora . [ 1 ] It has been found to lack hallucinogenic effects in humans. [ 2 ]
This organic chemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lophophorine |
Loran-C is a hyperbolic radio navigation system that allows a receiver to determine its position by listening to low frequency radio signals that are transmitted by fixed land-based radio beacons . Loran-C combined two different techniques to provide a signal that was both long-range and highly accurate, features that had been incompatible. Its disadvantage was the expense of the equipment needed to interpret the signals, which meant that Loran-C was used primarily by militaries after it was introduced in 1957.
By the 1970s, the cost, weight and size of electronics needed to implement Loran-C had been dramatically reduced because of the introduction of solid-state electronics and, from the mid-1970s, early microcontrollers to process the signal. Low-cost and easy-to-use Loran-C units became common from the late 1970s, especially in the early 1980s, and the earlier LORAN [ a ] system was discontinued in favor of installing more Loran-C stations around the world. Loran-C became one of the most common and widely-used navigation systems for large areas of North America, Europe, Japan and the entire Atlantic and Pacific areas. The Soviet Union operated a nearly identical system, CHAYKA .
The introduction of civilian satellite navigation in the 1990s led to a rapid drop-off in Loran-C use. Discussions about the future of Loran-C began in the 1990s; several turn-off dates were announced and then cancelled. In 2010, the US and Canadian systems were shut down, along with Loran-C/CHAYKA stations that were shared with Russia. [ 2 ] [ 3 ] Several other chains remained active; some were upgraded for continued use. At the end of 2015, navigation chains in most of Europe were turned off. [ 4 ]
In December 2015 there was also renewed discussion of funding an eLoran system, [ 5 ] and NIST offered to fund development of a microchip-sized eLoran receiver for distribution of timing signals. [ 6 ] The National Timing Resilience and Security Act of 2017 proposed resurrecting Loran as a backup for the United States in case of a GPS outage caused by space weather or attack. [ 7 ] [ 8 ]
The original LORAN was proposed in 1940 by Alfred Lee Loomis at a meeting of the U.S. Army's Microwave Committee. The Army Air Corps were interested in the concept for aircraft navigation, and after some discussion they returned a requirement for a system offering accuracy of about 1 mile (1.6 km) at a range of 200 miles (320 km), and a maximum range as great as 500 miles (800 km) for high-flying aircraft. The Microwave Committee, by this time organized into what would become the MIT Radiation Laboratory , took up development as Project 3 . During the initial meetings, a member of the UK liaison team, Taffy Bowen , mentioned that he was aware the British were also working on a similar concept, but had no information on its performance. [ 9 ]
The development team, led by Loomis, made rapid progress on the transmitter design and tested several systems during 1940 before settling on a 3 MHz design. Extensive signal-strength measurements were made by mounting a conventional radio receiver in a station wagon and driving around the eastern states. [ 10 ] However, the custom receiver design and its associated cathode-ray tube displays proved to be a bigger problem. In spite of several efforts to design around the problem, instability in the display prevented accurate measurements as the output shifted back and forth on the face of the oscilloscope. [ 11 ]
By this time the team had become much more familiar with the British Gee system, and were aware of their related work on "strobes", a time base generator that produced well-positioned "pips" on the display that could be used for accurate measurement. This meant that inaccuracy of the positioning on the display had no effect: any inaccuracy in the position of the signal was also in the strobe, so the two remained aligned. The Project 3 team met with the Gee team in 1941, and immediately adopted this solution. This meeting also revealed that Project 3 and Gee called for almost identical systems, with similar performance, range and accuracy, but Gee had already completed basic development and was entering into initial production, making Project 3 superfluous. [ 12 ]
In response, the Project 3 team told the Army Air Force to adopt Gee, and instead, at the behest of the British team, realigned their efforts to provide long-range navigation on the oceans where Gee was not useful. This led to United States Navy interest, and a series of experiments quickly demonstrated that systems using the basic Gee concept, but operating at a lower frequency around 2 MHz would offer reasonable accuracy on the order of a few miles over distances on the order of 1,250 miles (2,010 km), at least at night when signals of this frequency range were able to skip off the ionosphere . [ 12 ] Rapid development followed, and a system covering the western Atlantic was operational in 1943. Additional stations followed, first covering the European side of the Atlantic, and then a large expansion in the Pacific. By the end of the war, there were 72 operational LORAN stations and as many as 75,000 receivers.
In 1958 the operation of the LORAN system was handed over to the United States Coast Guard , which renamed the system "Loran-A", the lower-case name being introduced at that time. [ 13 ]
There are two ways to implement the timing measurements needed for a hyperbolic navigation system: pulse-timing systems like Gee and LORAN, and phase-timing systems like the Decca Navigator System . [ 14 ]
The former requires sharp pulses of signal, and its accuracy is generally limited by how rapidly the pulses can be turned on and off, which is a function of the carrier frequency . There is an ambiguity in the signal: the same measurement can be valid at two locations relative to the broadcasters, but in normal operation these two locations are hundreds of kilometres apart, so one possibility can be eliminated. [ 14 ]
The second system uses constant signals ("continuous wave") and takes measurements by comparing the phase of two signals. This system is easy to use even at very low frequencies. However, its signal is ambiguous over the distance of a wavelength, meaning there are hundreds of locations that will return the same signal. Decca referred to these ambiguous locations as cells . This demands that some other navigation method be used in conjunction to pick which cell the receiver is within; the phase measurements then place the receiver accurately within the cell. [ 14 ]
Numerous efforts were made to provide some sort of secondary low-accuracy system that could be used with a phase-comparison system like Decca in order to resolve the ambiguity. Among the many methods was a directional broadcast system known as POPI , and a variety of systems combining pulse-timing for low-accuracy navigation and then using phase-comparison for fine adjustment. Decca themselves had set aside one frequency, "9f", for testing this combined-signal concept, but did not have the chance to do so until much later. Similar concepts were also used in the experimental Navarho system in the United States. [ 15 ]
It was known from the start of the LORAN project that the same CRT displays that showed the LORAN pulses could, when suitably magnified, also show the individual waves of the intermediate frequency . This meant that pulse-matching could be used to get a rough fix, and then the operator could gain additional timing accuracy by lining up the individual waves within the pulse, like Decca. This could either be used to greatly increase the accuracy of LORAN, or, alternatively, to offer similar accuracy using much lower carrier frequencies, and thus greatly extend the effective range. This would require the transmitter stations to be synchronized both in time and phase, but much of this problem had already been solved by Decca engineers. [ 14 ]
The long-range option was of considerable interest to the Coast Guard, who set up an experimental system known as LF LORAN in 1945. This operated at much lower frequencies than the original LORAN, at 180 kHz, and required very long balloon-borne antennas. Testing was carried out throughout the year, including several long-distance flights as far as Brazil . The experimental system was then sent to Canada where it was used during Operation Muskox in the Arctic. Accuracy was found to be 150 feet (46 m) at 750 miles (1,210 km), a significant advance over LORAN. With the ending of Muskox, it was decided to keep the system running under what became known as "Operation Musk Calf", run by a group consisting of the United States Air Force , Royal Canadian Air Force , Royal Canadian Navy and the UK Royal Corps of Signals . The system ran until September 1947. [ 16 ]
This led to another major test series, this time by the newly-formed United States Air Force, known as Operation Beetle. Beetle was located in the far north, on the Canada-Alaska border, and used new guyed 625-foot (191 m) steel towers, replacing the earlier system's balloon-lofted cable antennas. The system became operational in 1948 and ran for two years until February 1950. Unfortunately, the stations proved poorly sited, as the radio propagation range over the permafrost was much shorter than expected and synchronization of the signals between the stations using ground waves proved impossible. The tests also showed that the system was extremely difficult to use in practice; it was easy for the operator to select the wrong sections of the waveforms on their display, leading to significant real-world inaccuracy. [ 16 ]
In 1946 the Rome Air Development Center sent out contracts for longer-ranged and more-accurate navigation systems that would be used for long-range bombing navigation. As the United States Army Air Forces were moving towards smaller crews, only three in the Boeing B-47 Stratojet for instance, a high degree of automation was desired. Two contracts were accepted; Sperry Gyroscope proposed the CYCLAN system (CYCLe matching LorAN) which was broadly similar to LF LORAN but with additional automation, and Sylvania proposed Whyn using continuous wave navigation like Decca, but with additional coding using frequency modulation . In spite of great efforts, Whyn could never be made to work, and was abandoned. [ 17 ]
CYCLAN operated by sending the same LF LORAN-like signals on two frequencies, LF LORAN's 180 kHz and again on 200 kHz. The associated equipment would look for a rising amplitude that indicated the start of the signal pulse, and then use sampling gates to extract the carrier phase. Using two receivers solved the problem of mis-aligning the pulses, because the phases would only align properly between the two copies of the signal when the same pulses were being compared. None of this was trivial; using the era's tube-based electronics, the experimental CYCLAN system filled much of a semi-trailer . [ 18 ]
CYCLAN proved highly successful, so much so that it became increasingly clear that the problems that led the engineers to use two frequencies were simply not as bad as expected. It appeared that a system using a single frequency would work just as well, given the right electronics. This was especially good news, as the 200 kHz frequency was interfering with existing broadcasts, and had to be moved to 160 kHz during testing. [ 19 ]
Through this period the issue of radio spectrum use was becoming a major concern, and had led to international efforts to decide on a frequency band suitable for long-range navigation. This process eventually settled on the band from 90 to 100 kHz. CYCLAN appeared to suggest that accuracy at even lower frequencies was not a problem, and the only real concern was the expense of the equipment involved. [ 19 ]
The success of the CYCLAN system led to a further contract with Sperry in 1952 for a new system with the twin goals of working in the 100 kHz range while being equally accurate, less complex and less expensive. These goals would normally be contradictory, but the CYCLAN system gave all involved the confidence that these could be met. The resulting system was known as Cytac. [ 20 ]
To solve the complexity problem, a new circuit was developed to properly time the sampling of the signal. This consisted of a circuit to extract the envelope of the pulse, another to extract the derivative of the envelope, and finally another that subtracted the derivative from the envelope. The result of this final operation would become negative during a very specific and stable part of the rising edge of the pulse, and this zero-crossing was used to trigger a very short-time sampling gate. This system replaced the complex system of clocks used in CYCLAN. By simply measuring the time between the zero-crossings of the master and secondary, pulse-timing was extracted. [ 21 ]
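The stability of this trigger point comes from the fact that scaling the received signal scales the envelope and its derivative equally, so the instant at which one overtakes the other does not depend on signal strength. The short C sketch below illustrates this numerically; the pulse envelope t²·exp(−2t/65) (t in microseconds), the 30 μs weighting constant, and the sign convention are illustrative assumptions, not the actual Cytac circuit values.

#include <math.h>
#include <stdio.h>

/* Illustrative pulse envelope shaped like a standard Loran-C pulse:
 * it rises from zero and peaks near t = 65 microseconds.  'amp' models
 * received signal strength, which varies with range and conditions. */
static double envelope(double amp, double t)
{
    return amp * t * t * exp(-2.0 * t / 65.0);
}

/* Central-difference estimate of the envelope's time derivative. */
static double envelope_deriv(double amp, double t)
{
    const double h = 1e-3;
    return (envelope(amp, t + h) - envelope(amp, t - h)) / (2.0 * h);
}

/* Time on the rising edge at which the weighted derivative stops exceeding
 * the envelope, i.e. where (tau * derivative - envelope) changes sign. */
static double trigger_time(double amp, double tau)
{
    for (double t = 1.0; t < 65.0; t += 0.01)
        if (tau * envelope_deriv(amp, t) - envelope(amp, t) < 0.0)
            return t;
    return -1.0;
}

int main(void)
{
    /* A weak and a strong signal trigger at the same point on the pulse,
     * which is why the comparison yields an amplitude-independent gate. */
    printf("weak signal trigger:   %.2f us\n", trigger_time(1.0, 30.0));
    printf("strong signal trigger: %.2f us\n", trigger_time(50.0, 30.0));
    return 0;
}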
The output of the envelope sampler was also sent to a phase-shifter that adjusted the output of a local clock that locked to the master carrier using a phase-locked loop . This retained the phase of the master signal long enough for the secondary signal to arrive. Gating on the secondary signal was then compared to this master signal in a phase detector , and a varying voltage was produced depending on the difference in phase. This voltage represented the fine-positioning measurement. [ 21 ]
The system was generally successful during testing through 1953, but there were concerns raised about the signal power at long range and the possibility of jamming. This led to further modifications of the basic signal. The first was to broadcast a series of pulses instead of just one, broadcasting more energy during a given time and improving the ability of the receivers to tune in a useful signal. They also added a fixed 45° phase shift to every pulse, so simple continuous-wave jamming signals could be identified and rejected. [ 22 ]
The Cytac system underwent an enormous series of tests across the United States and offshore. Given the potential accuracy of the system, even minor changes to the groundwave synchronization were found to cause errors that could be eliminated — issues such as the number of rivers the signal crossed caused predictable delays that could be measured and then factored into navigation solutions. This led to a series of correction contours that could be added to the received signal to adjust for these concerns, and these were printed on the Cytac charts. Using prominent features on dams as target points, a series of tests demonstrated that the uncorrected signals provided accuracy on the order of 100 yards, while adding the correction contour adjustments reduced this to the order of ten yards. [ 23 ]
It was at this moment that the United States Air Force, having taken over these efforts while moving from the United States Army Air Forces , dropped their interest in the project. Although the reasons are not well recorded, it appears the idea of a fully automated bombing system using radio aids was no longer considered possible. [ 20 ] The AAF had been involved in missions covering about 1000 km (the distance from London to Berlin) and the Cytac system would work well at these ranges, but as the mission changed to trans-polar missions of 5,000 km or more, even Cytac did not offer the range and accuracy needed. They turned their attention to the use of inertial platforms and Doppler radar systems, cancelling work on Cytac as well as a competing system known as Navarho. [ 24 ]
Around this period the United States Navy began work on a similar system using combined pulse and phase comparison, but based on the existing LORAN frequency of 200 kHz. By this time the United States Navy had handed operational control of the LORAN system to the Coast Guard, and it was assumed the same arrangement would be true for any new system as well. Thus, the United States Coast Guard was given the choice of naming the systems, and decided to rename the existing system Loran-A, and the new system Loran-B. [ 1 ]
With Cytac fully developed and its test system on the east coast of the United States mothballed, the United States Navy also decided to re-commission Cytac for tests in the long-range role. An extensive series of tests across the Atlantic was carried out by the USCGC Androscoggin starting in April 1956. Meanwhile, Loran-B proved to have serious problems keeping its transmitters in phase, and that work was abandoned. [ b ] Minor changes were made to the Cytac system to further simplify it, including a reduction in the pulse-chain spacing from 1200 to 1000 μs, a change of the pulse rate to 20 pps to match the existing Loran-A system, and a change of the phase-shifting between pulses to an alternating 0, 180-degree shift instead of 45 degrees at every pulse within the chain. [ 25 ]
The result was Loran-C. Testing of the new system was intensive, and over-water flights around Bermuda demonstrated that 50% of fixes lay within a circle of 260 ft (79 m) radius, [ 26 ] a dramatic improvement over the original Loran-A, meeting the accuracy of the Gee system, but at much greater range. The first chain was set up using the original experimental Cytac system, and a second one in the Mediterranean in 1957. Further chains covering the North Atlantic and large areas of the Pacific followed. At the time global charts were printed with shaded sections representing the area where a 3-mile (4.8 km) accurate fix could be obtained under most operational conditions. Loran-C operated in the 90 to 110 kHz frequency range.
Loran-C had originally been designed to be highly automated, allowing the system to be operated more rapidly than the original LORAN's multi-minute measurement. It was also operated in "chains" of linked stations, allowing a fix to be made by simultaneously comparing two secondaries to a single master. The downside of this approach was that the required electronic equipment, built using 1950s-era tube technology, was very large. Looking for companies with knowledge of seaborne, multi-channel phase-comparison electronics led, ironically, to Decca, who built the AN/SPN-31, the first widely used Loran-C receiver. The AN/SPN-31 weighed over 100 pounds (45 kg) and had 52 controls. [ 27 ]
Airborne units followed, and an adapted AN/SPN-31 was tested in an Avro Vulcan in 1963. By the mid-1960s, units with some transistorization were becoming more common, and a chain was set up in Vietnam to support the United States' war efforts there. A number of commercial airline operators experimented with the system as well, using it for navigation on the great circle route between North America and Europe. However, inertial platforms ultimately became more common in this role. [ 27 ]
In 1969, Decca sued the United States Navy for patent infringement, producing ample documentation of their work on the basic concept as early as 1944, along with the "missing" 9f frequency [ c ] at 98 kHz that had been set aside for experiments using this system. Decca won the initial suit, but the judgement was overturned on appeal when the Navy claimed "wartime expediency". [ 28 ]
When Loran-C became widespread, the United States Air Force once again became interested in using it as a guidance system. They proposed a new system layered on top of Loran-C to provide even higher accuracy, using the Loran-C fix as the coarse guidance signal in much the same way that Loran-C extracted coarse location from pulse timing to remove the ambiguity in the fine measurement. To provide an extra-fine guidance signal, Loran-D interleaved another train of eight pulses immediately after the signals from one of the existing Loran-C stations, folding the two signals together. This technique became known as "Supernumerary Interpulse Modulation" (SIM). These were broadcast from low-power portable transmitters, offering relatively short-range service of high accuracy. [ 29 ]
Loran-D was used only experimentally during war-games in the 1960s from a transmitter set in the UK. The system was also used in a limited fashion during the Vietnam War , combined with the Pave Spot laser designator system, a combination known as Pave Nail. Using mobile transmitters, the AN/ARN-92 LORAN navigation receiver could achieve accuracy on the order of 60 feet (18 m), which the Spot laser improved to about 20 feet (6.1 m). [ 29 ] The SIM concept later became a system for sending additional data. [ 30 ] [ 31 ]
At about the same time, Motorola proposed a new system using pseudo-random pulse-chains. This mechanism ensures that no two chains within a given period (on the order of many seconds) will have the same pattern, making it easy to determine if the signal is a groundwave from a recent transmission or a multi-hop signal from a previous one. The system, Multi-User Tactical Navigation Systems (MUTNS) was used briefly but it was found that Loran-D met the same requirements but had the added advantage of being a standard Loran-C signal as well. Although MUTNS was unrelated to the Loran systems, it was sometimes referred to as Loran-F . [ 32 ]
In spite of its many advantages, the high cost of implementing a Loran-C receiver made it uneconomical for many users. Additionally, as military users upgraded from Loran-A to Loran-C, large numbers of surplus Loran-A receivers were dumped on the market. This made Loran-A popular in spite of being less accurate and fairly difficult to operate. By the early 1970s the introduction of integrated circuits combining a complete radio receiver began to greatly reduce the complexity of Loran-A measurements, and fully automated units the size of a stereo receiver became common. For those users requiring higher accuracy, Decca had considerable success with their Decca Navigator system, and produced units that combined both receivers, using Loran to eliminate the ambiguities in Decca.
The same rapid development of microelectronics that made Loran-A so easy to operate worked equally well on the Loran-C signals, and the obvious desire to have a long-range system that could also provide enough accuracy for lake and harbour navigation led to the "opening" of the Loran-C system to public use in 1974. Civilian receivers quickly followed, and dual-system A/C receivers were also common for a time. The switch from A to C was extremely rapid, due largely to rapidly falling prices which led to many users' first receiver being Loran-C. By the late 1970s the Coast Guard decided to turn off Loran-A, in favour of adding additional Loran-C stations to cover gaps in its coverage. The original Loran-A network was shut down in 1979 and 1980, with a few units used in the Pacific for some time. Given the widespread availability of Loran-A charts, many Loran-C receivers included a system for converting coordinates between A and C units.
One of the reasons for Loran-C's opening to the public was that the move from Loran to newer forms of navigation, including inertial navigation systems , Transit and OMEGA , meant that the security of Loran was no longer as stringent a concern as it had been when Loran was a primary form of navigation. As these newer systems gave way to GPS through the 1980s and 90s, this process repeated itself, but this time the military was able to separate GPS's signals in such a way that it could provide both secure military and insecure civilian signals at the same time. GPS was more difficult to receive and decode, but by the 1990s the required electronics were already as small and inexpensive as Loran-C's, leading to rapid adoption that has become largely universal.
Although Loran-C was largely redundant by 2000, it has not universally disappeared as of 2014 due to a number of concerns. One is that the GPS system can be jammed through a variety of means. Although the same is true of Loran-C, the transmitters are close at hand and can be adjusted if necessary. More importantly, there are effects that might cause the GPS system to become unusable over wide areas, notably space weather events and potential EMP events. Loran, located entirely under the atmosphere, offers more resilience to these issues. There has been considerable debate about the relative merits of keeping the Loran-C system operational as a result of such considerations.
In November 2009, the United States Coast Guard announced that Loran-C was not needed by the U.S. for maritime navigation. This decision left the fate of LORAN and eLoran in the United States to the Secretary of the Department of Homeland Security . [ 33 ] Per a subsequent announcement, the US Coast Guard, in accordance with the DHS Appropriations Act, terminated the transmission of all U.S. Loran-C signals on 8 February 2010. [ 2 ] On 1 August 2010 the U.S. transmission of the Russian American signal was terminated, [ 2 ] and on 3 August 2010 all Canadian signals were shut down by the USCG and the CCG. [ 2 ] [ 3 ]
The European Union decided that the potential security advantages of Loran were worth not only keeping the system operational, but also upgrading it and adding new stations. [ citation needed ] This is part of the wider Eurofix system, which combines GPS, Galileo and nine Loran stations into a single integrated system.
In 2014, Norway and France both announced that all of their remaining transmitters, which make up a significant part of the Eurofix system, would be shut down on 31 December 2015. [ 34 ] The two remaining transmitters in Europe ( Anthorn , UK and Sylt , Germany) would no longer be able to sustain a positioning and navigation Loran service, with the result that the UK announced its trial eLoran service would be discontinued from the same date. [ 35 ]
In conventional navigation, measuring one's location, or taking a fix , is accomplished by taking two measurements against well known locations. In optical systems this is typically accomplished by measuring the angle to two landmarks, and then drawing lines on a nautical chart at those angles, producing an intersection that reveals the ship's location. Radio methods can also use the same concept with the aid of a radio direction finder , but due to the nature of radio propagation, such instruments are subject to significant errors, especially at night. More accurate radio navigation can be made using pulse timing or phase comparison techniques, which rely on the time-of-flight of the signals. In comparison to angle measurements, these remain fairly steady over time, and most of the effects that change these values are fixed objects like rivers and lakes that can be accounted for on charts.
Timing systems can reveal the absolute distance to an object, as is the case in radar . The problem in the navigational case is that the receiver has to know when the original signal was sent. In theory, one could synchronize an accurate clock to the signal before leaving port, and then use that to compare the timing of the signal during the voyage. However, in the 1940s no suitable system was available that could hold an accurate signal over the time span of an operational mission.
Instead, radio navigation systems adopted the multilateration concept, which is based on the difference in times (or phase) instead of the absolute time. The basic idea is that it is relatively easy to synchronize two ground stations, using a signal shared over a phone line for instance, so one can be sure that the signals received were sent at exactly the same time. They will not be received at exactly the same time, however, as the receiver will receive the signal from the closer station first. Timing the difference between two signals can be accomplished easily, originally by physically measuring the separation of the pulses on a cathode-ray tube, or with simple electronics in the case of phase comparison.
The difference in signal timing does not reveal the location by itself. Instead, it determines a series of locations where that timing is possible. For instance, if the two stations are 300 km apart and the receiver measures no difference in the two signals, that implies that the receiver is somewhere along a line equidistant between the two. If the signal from one station is received exactly 100 μs after the other, then the receiver is 30 kilometres (19 mi) closer to one station than the other. Plotting all the locations where one station is 30 km closer than the other produces a curved line. Taking a fix is accomplished by making two such measurements with different pairs of stations, and then looking up both curves on a navigational chart. The curves are known as lines of position or LOP. [ 36 ]
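The arithmetic behind that example is just a distance difference divided by the speed of light. The following C sketch uses made-up planar coordinates purely for illustration (real receivers work on the ellipsoid and apply propagation corrections): it computes the time difference a receiver would observe between two stations 300 km apart, and converts a measured 100 μs difference back into the roughly 30 km range difference that defines one hyperbolic line of position.

#include <math.h>
#include <stdio.h>

#define C_KM_PER_US 0.299792458   /* speed of light in km per microsecond */

/* Planar distance, adequate for a small illustrative example. */
static double dist(double x1, double y1, double x2, double y2)
{
    return hypot(x2 - x1, y2 - y1);
}

int main(void)
{
    /* Two stations 300 km apart on the x axis, as in the example above,
       and an arbitrary receiver position. */
    double ax = 0.0,   ay = 0.0;    /* station A */
    double bx = 300.0, by = 0.0;    /* station B */
    double rx = 120.0, ry = 80.0;   /* receiver  */

    double da = dist(rx, ry, ax, ay);
    double db = dist(rx, ry, bx, by);
    double td_us = (da - db) / C_KM_PER_US;   /* observed time difference */

    printf("range to A = %.1f km, range to B = %.1f km\n", da, db);
    printf("time difference = %.1f microseconds\n", td_us);

    /* Going the other way: a fixed measured difference corresponds to a
       fixed range difference, which defines one hyperbolic line of position. */
    printf("100 us corresponds to %.1f km of range difference\n",
           100.0 * C_KM_PER_US);
    return 0;
}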
In practice, radio navigation systems normally use a chain of three or four stations, all synchronized to a master signal that is broadcast from one of the stations. The others, the secondaries , are positioned so their LOPs cross at acute angles, which increases the accuracy of the fix. So for instance, a given chain might have four stations with the master in the center, allowing a receiver to pick the signals from two secondaries that are currently as close to right angles as possible given their current location. Modern systems, which know the locations of all the broadcasters, can automate which stations to pick.
In the case of LORAN, one station remains constant in each application of the principle, the primary , being paired up separately with two other secondary stations. Given two secondary stations, the time difference (TD) between the primary and first secondary identifies one curve, and the time difference between the primary and second secondary identifies another curve, the intersections of which will determine a geographic point in relation to the position of the three stations. These curves are referred to as TD lines . [ 37 ]
In practice, LORAN is implemented in integrated regional arrays , or chains , consisting of one primary station and at least two (but often more) secondary stations, with a uniform group repetition interval (GRI) defined in microseconds . The GRI is the amount of time between the start of one transmission of the primary's pulse group and the start of the next.
The secondary stations receive this pulse signal from the primary, then wait a preset number of milliseconds , known as the secondary coding delay , to transmit a response signal. In a given chain, each secondary's coding delay is different, allowing for separate identification of each secondary's signal. (In practice, however, modern LORAN receivers do not rely on this for secondary identification.) [ citation needed ]
Every LORAN chain in the world uses a unique Group Repetition Interval, the number of which, when multiplied by ten, gives how many microseconds pass between successive pulse groups from a given station in the chain. In practice, the delays in many, but not all, chains are multiples of 100 microseconds. LORAN chains are often referred to by this designation, e.g. , GRI 9960 (an interval of 99,600 microseconds), the designation for the LORAN chain serving the Northeastern United States . [ citation needed ]
Due to the nature of hyperbolic curves, a particular combination of a primary and two secondary stations can possibly result in a "grid" where the grid lines intersect at shallow angles. For ideal positional accuracy, it is desirable to operate on a navigational grid where the grid lines are closer to right angles ( orthogonal ) to each other. As the receiver travels through a chain, a certain selection of secondaries whose TD lines initially formed a near-orthogonal grid can become a grid that is significantly skewed. As a result, the selection of one or both secondaries should be changed so that the TD lines of the new combination are closer to right angles. [ 38 ] In practice, nearly all chains provide at least three, and as many as five, secondaries. [ 39 ]
Where available, common marine nautical charts include visible representations of TD lines at regular intervals over water areas. The TD lines representing a given primary-secondary pairing are printed in a distinct color and are annotated with the specific time difference indicated by each line. The key relating each line of position to its station pairing and color can be found at the bottom of the chart. The colors used on official charts for stations and for the timed lines of position do not follow any specific International Hydrographic Organization (IHO) convention; local chart producers may apply their own color standards. Always consult the chart notes, the issuing administration's Chart 1 symbol reference, and the information given on the chart for the most accurate information regarding surveys, datum, and reliability.
There are three major factors when considering signal delay and propagation in relation to LORAN-C: the Primary Phase Factor (PF), which accounts for the signal travelling through the atmosphere rather than free space; the Secondary Phase Factor (SF), which accounts for propagation over an all-seawater path; and Additional Secondary Factors (ASF), which account for the extra delay when the path passes over or near land.
The chart notes should indicate whether ASF corrections have been made (Canadian Hydrographic Service (CHS) charts, for example, include them). Otherwise, the appropriate correction factors must be obtained before use.
Due to interference and propagation issues suffered from land features and artificial structures such as tall buildings, the accuracy of the LORAN signal can be degraded considerably in inland areas (see Limitations ). As a result, nautical charts will not show TD lines in those areas, to prevent reliance on LORAN-C for navigation.
Traditional LORAN receivers display the time difference between each pairing of the primary and one of the two selected secondary stations, which is then used to find the appropriate TD line on the chart. Modern LORAN receivers display latitude and longitude coordinates instead of time differences, and, with the advent of time difference comparison and electronics, provide improved accuracy and better position fixing, allowing the observer to plot their position on a nautical chart more easily. When using such coordinates, the datum used by the receiver (usually WGS84 ) must match that of the chart, or manual conversion calculations must be performed before the coordinates can be used.
Each LORAN station is equipped with a suite of specialized equipment to generate the precisely timed signals used to modulate / drive the transmitting equipment. Up to three commercial cesium atomic clocks are used to generate 5 MHz and pulse per second (or 1 Hz) signals that are used by timing equipment to generate the various GRI-dependent drive signals for the transmitting equipment.
While each U.S.-operated LORAN station is supposed to be synchronized to within 100 ns of Coordinated Universal Time (UTC), the actual accuracy achieved as of 1994 was within 500 ns. [ 40 ]
LORAN-C transmitters operate at peak powers of 100–4,000 kilowatts, comparable to longwave broadcasting stations. Most use 190–220 metre tall mast radiators, insulated from ground. The masts are inductively lengthened and fed by a loading coil (see: electrical length ). A well-known example of a station using such an antenna is Rantum . Free-standing tower radiators in this height range are also used [ clarification needed ] . Carolina Beach uses a free-standing antenna tower. Some LORAN-C transmitters with output powers of 1,000 kW and higher used extremely tall 412-metre mast radiators (see below). Other high power LORAN-C stations, like George , used four T-antennas mounted on four guyed masts arranged in a square.
All LORAN-C antennas are designed to radiate an omnidirectional pattern. Unlike longwave broadcasting stations, LORAN-C stations cannot use backup antennas because the exact position of the antenna is a part of the navigation calculation. The slightly different physical location of a backup antenna would produce Lines of Position different from those of the primary antenna.
LORAN suffers from electronic effects of weather and the ionospheric effects of sunrise and sunset. The most accurate signal is the groundwave that follows the Earth's surface, ideally over seawater. At night the indirect skywave , bent back to the surface by the ionosphere , is a problem as multiple signals may arrive via different paths ( multipath interference ). The ionosphere's reaction to sunrise and sunset accounts for the particular disturbance during those periods. Geomagnetic storms have serious effects, as with any radio based system.
LORAN uses ground-based transmitters that only cover certain regions. Coverage is quite good in North America, Europe, and the Pacific Rim.
The absolute accuracy of LORAN-C varies from 0.10 to 0.25 nmi (185 to 463 m). Repeatable accuracy is much greater, typically from 60 to 300 ft (18 to 91 m). [ 41 ]
LORAN Data Channel (LDC) is a project underway between the FAA and United States Coast Guard to send low bit rate data using the LORAN system. Messages to be sent include station identification, absolute time, and position correction messages. In 2001, data similar to Wide Area Augmentation System (WAAS) GPS correction messages were sent as part of a test of the Alaskan LORAN chain. As of November 2005, test messages using LDC were being broadcast from several U.S. LORAN stations. [ 42 ]
In recent years, LORAN-C has been used in Europe to send differential GPS and other messages, employing a similar method of transmission known as EUROFIX. [ 43 ]
A system called SPS (Saudi Positioning System), similar to EUROFIX, is in use in Saudi Arabia. [ 44 ] GPS differential corrections and GPS integrity information are added to the LORAN signal. A combined GPS/LORAN receiver is used, and if a GPS fix is not available it automatically switches over to LORAN.
As LORAN systems are maintained and operated by governments, their continued existence is subject to public policy. With the evolution of other electronic navigation systems, such as satellite navigation systems, funding for existing systems is not always assured.
Critics, who have called for the elimination of the system, state that the LORAN system has too few users, lacks cost-effectiveness, and that Global Navigation Satellite System (GNSS) signals are superior to LORAN. [ citation needed ] Supporters of continued and improved LORAN operation note that LORAN uses a strong signal, which is difficult to jam, and that LORAN is an independent, dissimilar, and complementary system to other forms of electronic navigation, which helps ensure availability of navigation signals. [ 45 ] [ 46 ]
On 26 February 2009, the U.S. Office of Management and Budget released the first blueprint for the Fiscal Year 2010 budget . [ 47 ] This document identified the LORAN-C system as "outdated" and supported its termination at an estimated savings of $36 million in 2010 and $190 million over five years.
On 21 April 2009 the U.S. Senate Committee on Commerce, Science and Transportation and the Committee on Homeland Security and Governmental Affairs released inputs to the FY 2010 Concurrent Budget Resolution with backing for the continued support for the LORAN system, acknowledging the investment already made in infrastructure upgrades and recognizing the studies performed and multi-departmental conclusion that eLoran is the best backup to GPS.
Senator Jay Rockefeller , Chairman of the Committee on Commerce, Science and Transportation, wrote that the committee recognized the priority in "Maintaining LORAN-C while transitioning to eLORAN" as means of enhancing the national security, marine safety and environmental protection missions of the Coast Guard.
Senator Collins, the ranking member on the Committee on Homeland Security and Governmental Affairs wrote that the President's budget overview proposal to terminate the LORAN-C system is inconsistent with the recent investments, recognized studies and the mission of the U.S. Coast Guard. The committee also recognizes the $160 million investment already made toward upgrading the LORAN-C system to support the full deployment of eLoran.
Further, the Committees also recognize the many studies which evaluated GPS backup systems and concluded both the need to back up GPS and identified eLoran as the best and most viable backup. "This proposal is inconsistent with the recently released (January 2009) Federal Radionavigation Plan (FRP), which was jointly prepared by DHS and the Departments of Defense (DOD) and Transportation (DOT). The FRP proposed the eLoran program to serve as a Position, Navigation and Timing (PNT) backup to GPS (Global Positioning System)."
On 7 May 2009, President Barack Obama proposed cutting funding (approx. $35 million/year) for LORAN, citing its redundancy alongside GPS. [ 48 ] In regard to the pending Congressional bill, H.R. 2892, it was subsequently announced that "[t]he Administration supports the Committee's aim to achieve an orderly termination through a phased decommissioning beginning in January 2010, and the requirement that certifications be provided to document that the LORAN-C termination will not impair maritime safety or the development of possible GPS backup capabilities or needs." [ 49 ]
Also on 7 May 2009, the U.S. Government Accountability Office (GAO), the investigative arm of Congress, released a report citing the very real potential for the GPS system to degrade or fail in light of program delays which had resulted in scheduled GPS satellite launches slipping by up to three years. [ 50 ]
On 12 May 2009 the March 2007 Independent Assessment Team (IAT) report on LORAN was released to the public. In its report the IAT stated that it "unanimously recommends that the U.S. government complete the eLoran upgrade and commit to eLoran as the national backup to GPS for 20 years." The release of the report followed an extensive Freedom of Information Act (FOIA) battle waged by industry representatives against the federal government. Originally completed 20 March 2007 and presented to the co-sponsoring Department of Transportation and Department of Homeland Security (DHS) Executive Committees, the report carefully considered existing navigation systems, including GPS. The unanimous recommendation for keeping the LORAN system and upgrading to eLoran was based on the team's conclusion that LORAN is operational, deployed and sufficiently accurate to supplement GPS. The team also concluded that the cost to decommission the LORAN system would exceed the cost of deploying eLoran, thus negating any stated savings as offered by the Obama administration and revealing the vulnerability of the U.S. to GPS disruption. [ 51 ]
In November 2009, the U.S. Coast Guard announced that the LORAN-C stations under its control would be closed down for budgetary reasons after 4 January 2010 provided the Secretary of the Department of Homeland Security certified that LORAN is not needed as a backup for GPS. [ 52 ]
On 7 January 2010, Homeland Security published a notice of the permanent discontinuation of LORAN-C operation. Effective 2000 UTC 8 February 2010, the United States Coast Guard terminated all operation and broadcast of LORAN-C signals in the United States. The United States Coast Guard transmission of the Russian American CHAYKA signal was terminated on 1 August 2010. The transmission of Canadian LORAN-C signals was terminated on 3 August 2010. [ 53 ]
On 31 May 2007, the UK Department for Transport (DfT), via the general lighthouse authorities , awarded a 15-year contract to provide a state-of-the-art Enhanced LORAN service to improve the safety of mariners in the UK and Western Europe. The service contract was to operate in two phases, with development work and further focus for European agreement on eLoran service provision from 2007 through 2010, and full operation of the eLoran service from 2010 through 2022. The first eLoran transmitter was situated at Anthorn Radio Station Cumbria, UK, and was operated by Babcock International (previously Babcock Communications). [ 54 ]
The UK government granted approval for seven differential eLoran ship-positioning technology stations to be built along the south and east coasts of the UK to help counter the threat of jamming of global positioning systems. They were set to reach initial operational capability by summer 2014. [ 55 ] The general lighthouse authorities of the UK and Ireland announced 31 October 2014 the initial operational capability of UK maritime eLoran. Seven differential reference stations provided additional position, navigation, and timing (PNT) information via low-frequency pulses to ships fitted with eLoran receivers. The service was to help ensure they could navigate safely in the event of GPS failure in one of the busiest shipping regions in the world, with expected annual traffic of 200,000 vessels by 2020. [ 56 ]
Despite these plans, in light of the decision by France and Norway to cease Loran transmissions on 31 December 2015, the UK announced at the start of that month that its eLoran service would be discontinued on the same day. [ 57 ] However to allow for further research and PNT development, the eLoran timing signal is still active from the government facility in Anthorn . [ 58 ]
A list of LORAN-C transmitters. Stations with an antenna tower taller than 300 metres (984 feet) are shown in bold.
North Central U.S. (GRI 8290)Great Lakes (GRI 8970)
Great Lakes (GRI 8970) South Central U.S. (GRI 9610)
Canadian East Coast (GRI 5930) Newfoundland East Coast (GRI 7270)
Canadian East Coast (GRI 5930) Northeast U.S. (GRI 9960)
Southeast U.S. (GRI 7980) Northeast US (GRI 9960)
Newfoundland East Coast (GRI 7270)
Great Lakes (GRI 8970) Northeast US (GRI 9960)
Eiði (GRI 9007)
shut down
U.S. West Coast (GRI 9940)
Canadian East Coast (GRI 5930) Newfoundland East Coast (GRI 7270)
Canadian West Coast (GRI 5990) U.S. West Coast (GRI 9940)
North West Pacific (GRI 8930) East Asia (GRI 9930)
North Central U.S. (GRI 8290) South Central U.S. (GRI 9610)
Southeast U.S. (GRI 7980) South Central U.S. (GRI 9610)
North Central U.S. (GRI 8290)
Southeast U.S. (GRI 7980)
South Central U.S. (GRI 9610)
Southeast U.S. (GRI 7980) Great Lakes (GRI 8970)
U.S. West Coast (GRI 9940)
North West Pacific (GRI 8930)
demolished
Canadian East Coast (GRI 5930) Northeast U.S. (GRI 9960)
North Pacific (GRI 9990)
North West Pacific (GRI 8930) East Asia (GRI 9930)
Canadian West Coast (GRI 5990)
Southeast U.S. (GRI 7980) South Central U.S. (GRI 9610)
North Pacific (GRI 9990)
South Central U.S. (GRI 9610) U.S. West Coast (GRI 9940)
Great Lakes (GRI 8970) Northeast U.S. (GRI 9960)
Canadian West Coast (GRI 5990) Gulf of Alaska (GRI 7960)
Gulf of Alaska (GRI 7960)
Eastern Russia Chayka (GRI 7950) North West Pacific (GRI 8930)
Canadian West Coast (GRI 5990) North Central U.S. (GRI 8290)
| https://en.wikipedia.org/wiki/Loran-C
lorcon ( acronym for Loss Of Radio CONnectivity ) is an open source network tool. It is a library for injecting 802.11 (WLAN) frames, capable of injecting via multiple driver frameworks, without the need to change the application code. Lorcon is built by patching the third-party MadWifi driver for cards based on the Qualcomm Atheros wireless chipset. [ 1 ] [ 2 ] [ 3 ]
The project is maintained by Joshua Wright and Michael Kershaw ("dragorn").
This Unix -related article is a stub . You can help Wikipedia by expanding it .
This network -related software article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lorcon |
Lordosis behavior ( / l ɔːr ˈ d oʊ s ɪ s / [ 1 ] ), also known as mammalian lordosis (Greek lordōsis, from lordos "bent backward" [ 1 ] ) or presenting , is the naturally occurring body posture for sexual receptivity to copulation present in females of most mammals including rodents , elephants , and cats . The primary characteristics of the behavior are a lowering of the forelimbs but with the rear limbs extended and hips raised, ventral arching of the spine and a raising, or sideward displacement, of the tail . During lordosis, the spine curves dorsoventrally so that its apex points towards the abdomen.
Lordosis is a reflex action that causes many non-primate female mammals to adopt a body position that is often crucial to reproductive behavior. The posture moves the pelvic tilt in an anterior direction, with the posterior pelvis rising up, the bottom angling backward and the front angling downward. Lordosis aids in copulation as it elevates the hips, thereby facilitating penetration by the penis . It is commonly seen in female mammals during estrus (being "in heat"). Lordosis occurs during copulation itself and in some species, like the cat, during pre- copulatory behavior. [ 2 ]
The lordosis reflex arc is hardwired in the spinal cord, at the level of the lumbar and sacral vertebrae (L1, L2, L5, L6 and S1). [ 3 ] In the brain, several regions modulate the lordosis reflex. The vestibular nuclei and the cerebellum , via the vestibular tract, send information which makes it possible to coordinate the lordosis reflex with postural balance . More importantly, the ventromedial hypothalamus sends projections that inhibit the reflex at the spinal level, so it is not activated at all times. [ 4 ] Sex hormones control reproduction and coordinate sexual activity with the physiological state. Schematically, at the breeding season , and when an ovum is available, hormones (especially estrogen ) simultaneously induce ovulation and estrus (heat). Under the action of estrogen in the hypothalamus, the lordosis reflex is uninhibited. [ 5 ] The female is ready for copulation and fertilization .
When a male mammal mounts the female, tactile stimuli on the flanks, the perineum and the rump of the female are transmitted via the sensory nerves in the spinal cord . In the spinal cord and lower brainstem , they are integrated with the information coming from the brain, and then, in general, a nerve impulse is transmitted to the muscles via the motor nerves . The contraction of the longissimus and transverso-spinalis muscles causes the ventral arching of the vertebral column. [ 3 ]
Sexual behaviour is optimized for reproduction, and the hypothalamus is the key brain area which regulates and coordinates the physiological and behavioural aspects of reproduction. [ 6 ] Most of the time, the ventromedial nucleus of the hypothalamus (VMN) inhibits lordosis. But when environmental conditions are favorable and the female is in estrus, the estrogen hormone , estradiol , induces sexual receptivity by the neurons in the ventromedial nucleus , [ 7 ] the periaqueductal gray , and other areas of the brain . The ventromedial hypothalamus sends impulses down axons synapsing with neurons in the periaqueductal gray. These convey an impulse to neurons in the medullary reticular formation which project down the reticulospinal tract and synapse with the neurobiological circuits of the lordosis reflex in the spinal cord (L1–L6). These neurobiological processes induced by estradiol enable the tactile stimuli to trigger lordosis.
The mechanisms of regulation of this estrogen-dependent lordosis reflex have been identified through different types of experiments . When the VMN is lesioned lordosis is abolished; this suggests the importance of this cerebral structure in the regulation of lordosis. Concerning hormones, displays of lordosis can be affected by ovariectomy, injections of estradiol benzoate and progesterone, [ 8 ] or exposure to stress during puberty. [ 9 ] [ 10 ] Specifically, stress can suppress the hypothalamic-pituitary-gonadal (HPG) axis and therefore decrease concentrations of gonadal hormones. Consequently, these reductions in exposure to gonadal hormones around puberty can result in decreases in sexual behavior in adulthood, including displays of lordosis. [ 9 ]
While lordosis behavior has not been observed in humans, positions similar to lordosis can be seen in those being mounted from behind , with the autonomous lordosis reflex replaced by a conscious decision to expose the vulva for penetration. [ 11 ]
A 2017 study using 3D models and eye-tracking technology showed that a slight thrusting out of a woman's hips influences how attractive others perceive her to be and captures the gaze of both men and women. [ 12 ] The authors argue "while reflexive lordosis posture is not exhibited by human females and receptivity is not passive or obligatory for them, a manifestation of lumbar curvature might serve as a vestigial remnant of proceptivity-/receptivity-communicative signal between men and women". [ 13 ] Previously, anthropologist Helen Fisher also speculated that when a human female wears high-heeled footwear the buttocks thrust out and the back arches into a pose that simulates lordosis behavior, which is why high heels are considered "sexy". [ 14 ] Recent evidence has also supported the perception of sexual receptivity in women when arching the back in supine and quadruped poses. [ 15 ] [ 16 ] Researchers have found that women perceive other women exhibiting this posture as a potential threat to their romantic relationship. [ 16 ] [ 17 ] | https://en.wikipedia.org/wiki/Lordosis_behavior
Lore Harp McGovern ( née Lange-Hegermann , March 3, 1944) is an American entrepreneur and philanthropist based in California. She co-founded Vector Graphic , one of the earliest personal computing companies, in 1976. She served as the company's CEO and president, took it public, and saw an annual revenue of $36 million before she left the company in 1984.
Harp McGovern was born in Poland under German occupation and moved to the United States following high school. She has founded or run companies in diverse fields including health care, educational publishing and high-tech, and is an investor in numerous start-up companies in Silicon Valley .
Lore Lange-Hegermann was born in German-occupied Poland [ 1 ] on March 3, 1944. She grew up in Bottrop , West Germany , in a partially bombed-out building with her parents and grandparents. Her grandfather was businessman and Weimar Republic politician Hermann Lange-Hegermann [ de ] . [ 2 ] She attended a Catholic boarding school, learning English, French and Latin, and graduated from a German high school. [ 1 ]
At 19, she travelled to the United States and lived with a family in Santa Cruz, California , as part of an exchange . [ 2 ] She hitchhiked to Mexico with a friend and decided to stay in the United States against her parents' wishes. Her visa ran out and she worked as a babysitter and took odd jobs. [ 1 ] She married Bob Harp, moved to Pasadena , and had two daughters. She attended California State University, Los Angeles , earning a bachelor's degree in anthropology. [ 3 ] She later earned her MBA from Pepperdine University in 1979. [ 4 ]
After attending law school for a year, [ 4 ] Harp met Carole Ely, a fellow housewife, mother and neighbor who had formerly worked as a bond trader. Both women were bored with being housewives and in 1976 they decided to start a business. After rejecting the idea of starting a travel agency, they settled on microelectronics. They based their business around an 8K RAM board for the S-100 bus of an Altair 8800 that Lore's husband had devised the previous year. They incorporated the firm Vector Graphic and registered the business in August 1976. They started the company with $6,000 in capital and Lore served as the CEO and president. [ 3 ] Early on, she had a meeting with an AMD representative to purchase memory chips, and after finding the price too high, she arranged a deal with Fairchild Semiconductor . They began selling the RAM boards, cash on delivery, via mail orders and advertised the product in magazines. [ 1 ] The company was run out of a spare bedroom in her Westlake Village home. [ 3 ]
By 1977, Vector Graphic had designed the 1702 PROM board and the Vector 1, a full microcomputer using the Z80 microprocessor . [ 1 ] In the company's first year, it had $1 million in sales. [ 5 ] The company's success was due to the relationships that Harp and Ely forged with vendors, in addition to a focus on advertising, packaging, and better user manuals. They marketed their desktop computers to mid-size businesses, carving out a niche in the industry that other companies were not filling. [ 5 ] Theirs was one of the earliest computer companies to consider aesthetics in design, producing computers with rounded edges available in multiple colors and coordinating the color of their capacitors with their memory boards. [ 6 ] By 1981, the company's revenue had reached $36.2 million. [ 3 ] Harp took the company public, [ 4 ] and fought with the underwriters over her decision to grant stock to all of the company's employees. Following the success, Harp was featured on the front cover of several magazines, including Inc. in 1981. She gained a reputation for her tenacity and was called the "ice maiden". [ 3 ] By 1982, the company had $36 million in annual sales. [ 7 ]
Harp divorced her husband in 1982 and married Patrick J. McGovern the same year. She relinquished her role as president and CEO of Vector Graphic. The company rapidly declined, experiencing setbacks due to poor managerial decisions, mistimed advertising, and the entrance of IBM to the market. She returned to her leadership role in 1983, [ 4 ] though she was unable to salvage the business and stepped down from the position in 1984. [ 3 ]
Harp McGovern went on to serve as president of the feminine hygiene business Aplex Corporation, which designed a handled, disposable paper funnel device enabling women to urinate while standing. [ 7 ] [ 3 ] She was president and CEO of Good Morning Teacher!, an educational publishing company. [ 8 ]
Harp McGovern and her husband Patrick co-founded the McGovern Institute for Brain Research at MIT in 2000. Their donation of approximately $350 million was among the largest gifts to a university at the time. [ 9 ]
Harp McGovern was named Entrepreneur of the Year in 1983 by Women Business Owners of New York. [ 10 ] The Commonwealth Club of San Francisco awarded her the Distinguished Immigrant Award. [ 11 ] She has also served as the Chair Emerita of the Board of Associates of the Whitehead Institute for Biomedical Research . [ 12 ] | https://en.wikipedia.org/wiki/Lore_Harp_McGovern |
In relativistic physics , Lorentz symmetry or Lorentz invariance , named after the Dutch physicist Hendrik Lorentz , is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame . It has also been described as "the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space". [ 1 ]
Lorentz covariance , a related concept, is a property of the underlying spacetime manifold. Lorentz covariance has two distinct, but closely related meanings: a physical quantity is said to be Lorentz covariant if it transforms under a given representation of the Lorentz group (scalars, four-vectors , four-tensors and spinors are examples; a quantity that is unchanged under the transformation, such as a scalar, is said to be Lorentz invariant); and an equation is said to be Lorentz covariant if it can be written in terms of Lorentz covariant quantities, which implies that it holds in all inertial frames.
On manifolds , the words covariant and contravariant refer to how objects transform under general coordinate transformations. Both covariant and contravariant four-vectors can be Lorentz covariant quantities.
Local Lorentz covariance , which follows from general relativity , refers to Lorentz covariance applying only locally in an infinitesimal region of spacetime at every point. There is a generalization of this concept to cover Poincaré covariance and Poincaré invariance.
In general, the (transformational) nature of a Lorentz tensor [ clarification needed ] can be identified by its tensor order , which is the number of free indices it has. No indices implies it is a scalar, one implies that it is a vector, etc. Some tensors with a physical interpretation are listed below.
The sign convention of the Minkowski metric η = diag (1, −1, −1, −1) is used throughout the article.
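A minimal numerical sketch, assuming Python with NumPy and a boost along the x-axis chosen purely for illustration, shows what this convention means in practice: a Lorentz transformation Λ preserves the Minkowski metric, Λᵀ η Λ = η, so the spacetime interval of any four-vector is the same in every inertial frame.

```python
import numpy as np

def boost_x(beta):
    """Lorentz boost along x with velocity beta = v/c (|beta| < 1)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric signature (+, -, -, -)
L = boost_x(0.6)

# Defining property of a Lorentz transformation: it preserves the metric.
assert np.allclose(L.T @ eta @ L, eta)

# A contravariant four-vector transforms as x' = L x; its interval is invariant.
x = np.array([2.0, 1.0, 0.5, -0.3])      # (ct, x, y, z), illustrative values
x_prime = L @ x
print(x @ eta @ x, x_prime @ eta @ x_prime)   # equal up to rounding
```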
In standard field theory, there are very strict and severe constraints on marginal and relevant Lorentz violating operators within both QED and the Standard Model . Irrelevant Lorentz violating operators may be suppressed by a high cutoff scale, but they typically induce marginal and relevant Lorentz violating operators via radiative corrections, so strict constraints apply to irrelevant Lorentz violating operators as well.
Since some approaches to quantum gravity lead to violations of Lorentz invariance, [ 2 ] these studies are part of phenomenological quantum gravity . Lorentz violations are allowed in string theory , supersymmetry and Hořava–Lifshitz gravity . [ 3 ]
Lorentz violating models typically fall into four classes: [ citation needed ]
Models belonging to the first two classes can be consistent with experiment if Lorentz breaking happens at Planck scale or beyond it, or even before it in suitable preonic models, [ 6 ] and if Lorentz symmetry violation is governed by a suitable energy-dependent parameter. One then has a class of models which deviate from Poincaré symmetry near the Planck scale but still flows towards an exact Poincaré group at very large length scales. This is also true for the third class, which is furthermore protected from radiative corrections as one still has an exact (quantum) symmetry.
Even though there is no evidence of the violation of Lorentz invariance, several experimental searches for such violations have been performed during recent years. A detailed summary of the results of these searches is given in the Data Tables for Lorentz and CPT Violation. [ 7 ]
Lorentz invariance is also violated in QFT assuming non-zero temperature. [ 8 ] [ 9 ] [ 10 ]
There is also growing evidence of Lorentz violation in Weyl semimetals and Dirac semimetals . [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Lorentz_covariance |
The Lorentz factor or Lorentz term (also known as the gamma factor [ 1 ] ) is a dimensionless quantity expressing how much the measurements of time, length, and other physical properties change for an object while it moves. The expression appears in several equations in special relativity , and it arises in derivations of the Lorentz transformations . The name originates from its earlier appearance in Lorentzian electrodynamics – named after the Dutch physicist Hendrik Lorentz . [ 2 ]
It is generally denoted γ (the Greek lowercase letter gamma ). Sometimes (especially in discussion of superluminal motion ) the factor is written as Γ (Greek uppercase-gamma) rather than γ .
The Lorentz factor γ is defined as [ 3 ] γ = 1 1 − v 2 c 2 = 1 1 − β 2 = d t d τ , {\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}={\frac {1}{\sqrt {1-\beta ^{2}}}}={\frac {dt}{d\tau }},} where: v is the relative velocity between inertial reference frames, c is the speed of light in vacuum , β = v / c is the ratio of the velocity to the speed of light, t is coordinate time , and τ is the proper time for an observer (time measured in the observer's own rest frame).
This is the most frequently used form in practice, though not the only one (see below for alternative forms).
To complement the definition, some authors define the reciprocal [ 4 ] α = 1 γ = 1 − v 2 c 2 = 1 − β 2 ; {\displaystyle \alpha ={\frac {1}{\gamma }}={\sqrt {1-{\frac {v^{2}}{c^{2}}}}}\ ={\sqrt {1-{\beta }^{2}}};} see velocity addition formula .
Following is a list of formulae from Special relativity which use γ as a shorthand: [ 3 ] [ 5 ]
Corollaries of the above transformations are the results:
Applying conservation of momentum and energy leads to these results:
In the table below, the left-hand column shows speeds as different fractions of the speed of light (i.e. in units of c ). The middle column shows the corresponding Lorentz factor, and the right-hand column shows its reciprocal. Values in bold are exact.
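A short sketch, assuming Python and a handful of illustrative speeds, reproduces the kind of values such a table contains by evaluating the defining formula directly:

```python
import math

def lorentz_factor(beta):
    """gamma = 1 / sqrt(1 - beta^2), with beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

for beta in (0.0, 0.1, 0.5, 0.866, 0.99, 0.999):
    gamma = lorentz_factor(beta)
    print(f"beta = {beta:<6}  gamma = {gamma:8.4f}  1/gamma = {1/gamma:.4f}")
```

At β = 0.866 (approximately √3/2) the factor comes out very close to 2, a commonly quoted benchmark value.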
There are other ways to write the factor. Above, velocity v was used, but related variables such as momentum and rapidity may also be convenient.
Solving the previous relativistic momentum equation for γ leads to γ = 1 + ( p m 0 c ) 2 . {\displaystyle \gamma ={\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}\,.} This form is rarely used, although it does appear in the Maxwell–Jüttner distribution . [ 6 ]
Applying the definition of rapidity as the hyperbolic angle φ {\displaystyle \varphi } : [ 7 ] tanh φ = β {\displaystyle \tanh \varphi =\beta } also leads to γ (by use of hyperbolic identities ): γ = cosh φ = 1 1 − tanh 2 φ = 1 1 − β 2 . {\displaystyle \gamma =\cosh \varphi ={\frac {1}{\sqrt {1-\tanh ^{2}\varphi }}}={\frac {1}{\sqrt {1-\beta ^{2}}}}.}
Using the property of Lorentz transformation , it can be shown that rapidity is additive, a useful property that velocity does not have. Thus the rapidity parameter forms a one-parameter group , a foundation for physical models.
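A small check, assuming Python's standard math module and two arbitrary example speeds, illustrates this additivity: adding rapidities reproduces the relativistic velocity-addition formula, and the hyperbolic cosine of the total rapidity gives the corresponding Lorentz factor.

```python
import math

beta1, beta2 = 0.6, 0.7          # illustrative speeds as fractions of c

# Rapidity is additive: phi = artanh(beta), phi_total = phi1 + phi2.
phi_total = math.atanh(beta1) + math.atanh(beta2)

# The combined velocity follows the relativistic velocity-addition formula.
beta_total = (beta1 + beta2) / (1.0 + beta1 * beta2)

assert math.isclose(math.tanh(phi_total), beta_total)
print(math.cosh(phi_total), 1.0 / math.sqrt(1.0 - beta_total**2))  # both equal gamma
```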
The Bunney identity represents the Lorentz factor in terms of an infinite series of Bessel functions : [ 8 ] ∑ m = 1 ∞ ( J m − 1 2 ( m β ) + J m + 1 2 ( m β ) ) = 1 1 − β 2 . {\displaystyle \sum _{m=1}^{\infty }\left(J_{m-1}^{2}(m\beta )+J_{m+1}^{2}(m\beta )\right)={\frac {1}{\sqrt {1-\beta ^{2}}}}.}
The Lorentz factor has the Maclaurin series : γ = 1 1 − β 2 = ∑ n = 0 ∞ β 2 n ∏ k = 1 n ( 2 k − 1 2 k ) = 1 + 1 2 β 2 + 3 8 β 4 + 5 16 β 6 + 35 128 β 8 + 63 256 β 10 + ⋯ , {\displaystyle {\begin{aligned}\gamma &={\dfrac {1}{\sqrt {1-\beta ^{2}}}}\\[1ex]&=\sum _{n=0}^{\infty }\beta ^{2n}\prod _{k=1}^{n}\left({\dfrac {2k-1}{2k}}\right)\\[1ex]&=1+{\tfrac {1}{2}}\beta ^{2}+{\tfrac {3}{8}}\beta ^{4}+{\tfrac {5}{16}}\beta ^{6}+{\tfrac {35}{128}}\beta ^{8}+{\tfrac {63}{256}}\beta ^{10}+\cdots ,\end{aligned}}} which is a special case of a binomial series .
The approximation γ ≈ 1 + 1 2 β 2 {\textstyle \gamma \approx 1+{\frac {1}{2}}\beta ^{2}} may be used to calculate relativistic effects at low speeds. It holds to within 1% error for v < 0.4 c ( v < 120,000 km/s), and to within 0.1% error for v < 0.22 c ( v < 66,000 km/s).
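The quoted tolerances can be checked directly. The following sketch, assuming Python, evaluates the relative error of the low-speed approximation at the two boundary speeds mentioned above.

```python
import math

def gamma_exact(beta):
    return 1.0 / math.sqrt(1.0 - beta**2)

def gamma_approx(beta):
    # First two terms of the Maclaurin series.
    return 1.0 + 0.5 * beta**2

for beta in (0.22, 0.4):
    exact, approx = gamma_exact(beta), gamma_approx(beta)
    rel_err = abs(exact - approx) / exact
    print(f"beta = {beta}: relative error = {rel_err:.4%}")
```

The printed errors land near the 0.1% and 1% bounds quoted in the text.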
The truncated versions of this series also allow physicists to prove that special relativity reduces to Newtonian mechanics at low speeds. For example, in special relativity, the following two equations hold:
p = γ m v , E = γ m c 2 . {\displaystyle {\begin{aligned}\mathbf {p} &=\gamma m\mathbf {v} ,\\E&=\gamma mc^{2}.\end{aligned}}}
For γ ≈ 1 {\displaystyle \gamma \approx 1} and γ ≈ 1 + 1 2 β 2 {\textstyle \gamma \approx 1+{\frac {1}{2}}\beta ^{2}} , respectively, these reduce to their Newtonian equivalents:
p = m v , E = m c 2 + 1 2 m v 2 . {\displaystyle {\begin{aligned}\mathbf {p} &=m\mathbf {v} ,\\E&=mc^{2}+{\tfrac {1}{2}}mv^{2}.\end{aligned}}}
The Lorentz factor equation can also be inverted to yield β = 1 − 1 γ 2 . {\displaystyle \beta ={\sqrt {1-{\frac {1}{\gamma ^{2}}}}}.} This has an asymptotic form β = 1 − 1 2 γ − 2 − 1 8 γ − 4 − 1 16 γ − 6 − 5 128 γ − 8 + ⋯ . {\displaystyle \beta =1-{\tfrac {1}{2}}\gamma ^{-2}-{\tfrac {1}{8}}\gamma ^{-4}-{\tfrac {1}{16}}\gamma ^{-6}-{\tfrac {5}{128}}\gamma ^{-8}+\cdots \,.}
The first two terms are occasionally used to quickly calculate velocities from large γ values. The approximation β ≈ 1 − 1 2 γ − 2 {\textstyle \beta \approx 1-{\frac {1}{2}}\gamma ^{-2}} holds to within 1% tolerance for γ > 2 , and to within 0.1% tolerance for γ > 3.5 .
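A similar sketch, assuming Python and a few sample values of γ, compares the exact inversion with the two-term asymptotic approximation.

```python
import math

def beta_exact(gamma):
    return math.sqrt(1.0 - 1.0 / gamma**2)

def beta_approx(gamma):
    # First two terms of the asymptotic expansion, useful for large gamma.
    return 1.0 - 0.5 / gamma**2

for gamma in (2.0, 3.5, 10.0, 100.0):
    print(f"gamma = {gamma:>6}: beta = {beta_exact(gamma):.8f}, approx = {beta_approx(gamma):.8f}")
```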
The standard model of long-duration gamma-ray bursts (GRBs) holds that these explosions are ultra-relativistic (initial γ greater than approximately 100), which is invoked to explain the so-called "compactness" problem: absent this ultra-relativistic expansion, the ejecta would be optically thick to pair production at typical peak spectral energies of a few 100 keV, whereas the prompt emission is observed to be non-thermal. [ 9 ]
Muons , a type of subatomic particle , travel at speeds such that they have a relatively high Lorentz factor and therefore experience extreme time dilation . Since a muon has a mean lifetime of just 2.2 μs , muons generated from cosmic-ray collisions 10 km (6.2 mi) high in Earth's atmosphere should be nondetectable on the ground due to their decay rate. However, roughly 10% of muons from these collisions are still detectable on the surface, thereby demonstrating the effects of time dilation on their decay rate. [ 10 ]
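A rough calculation illustrates the effect. In the sketch below, assuming Python, the lifetime and altitude are taken from the text, while the Lorentz factors looped over are assumed illustrative values, since cosmic-ray muons arrive with a broad range of energies.

```python
import math

c = 299_792_458.0        # speed of light, m/s
tau_rest = 2.2e-6        # mean muon lifetime at rest (from the text), s
altitude = 10_000.0      # production altitude (from the text), m
t_lab = altitude / c     # lab-frame travel time at ~c

# Without time dilation essentially no muons would survive the trip.
print(f"no dilation: surviving fraction ~ {math.exp(-t_lab / tau_rest):.1e}")

# With time dilation the lab-frame mean lifetime is gamma * tau_rest.
for gamma in (5, 10, 20, 50):          # assumed illustrative Lorentz factors
    frac = math.exp(-t_lab / (gamma * tau_rest))
    print(f"gamma = {gamma:>3}: surviving fraction ~ {frac:.1%}")
```
| https://en.wikipedia.org/wiki/Lorentz_factor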
In physics , specifically in electromagnetism , the Lorentz force law is the combination of electric and magnetic force on a point charge due to electromagnetic fields . The Lorentz force , on the other hand, is a physical effect that occurs in the vicinity of electrically neutral, current-carrying conductors causing moving electrical charges to experience a magnetic force .
The Lorentz force law states that a particle of charge q moving with a velocity v in an electric field E and a magnetic field B experiences a force (in SI units [ nb 1 ] [ nb 2 ] ) of F = q ( E + v × B ) . {\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right).} It says that the electromagnetic force on a charge q is a combination of (1) a force in the direction of the electric field E (proportional to the magnitude of the field and the quantity of charge), and (2) a force at right angles to both the magnetic field B and the velocity v of the charge (proportional to the magnitude of the field, the charge, and the velocity).
Variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called Laplace force ), the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction ), and the force on a moving charged particle. [ 1 ]
Historians suggest that the law is implicit in a paper by James Clerk Maxwell , published in 1865. [ 2 ] Hendrik Lorentz arrived at a complete derivation in 1895, [ 3 ] identifying the contribution of the electric force a few years after Oliver Heaviside correctly identified the contribution of the magnetic force. [ 4 ]
In many textbook treatments of classical electromagnetism, the Lorentz force law is used as the definition of the electric and magnetic fields E and B . [ 5 ] [ 6 ] [ 7 ] To be specific, the Lorentz force is understood to be the following empirical statement:
The electromagnetic force F on a test charge at a given point and time is a certain function of its charge q and velocity v , which can be parameterized by exactly two vectors E and B , in the functional form: F = q ( E + v × B ) {\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )}
This is valid, even for particles approaching the speed of light (that is, magnitude of v , | v | ≈ c ). [ 8 ] So the two vector fields E and B are thereby defined throughout space and time, and these are called the "electric field" and "magnetic field". The fields are defined everywhere in space and time with respect to what force a test charge would receive regardless of whether a charge is present to experience the force.
Coulomb's law is only valid for point charges at rest. In fact, the electromagnetic force between two point charges depends not only on the distance but also on the relative velocity . For small relative velocities and very small accelerations, instead of the Coulomb force, the Weber force can be applied. The sum of the Weber forces of all charge carriers in a closed DC loop on a single test charge produces – regardless of the shape of the current loop – the Lorentz force.
The interpretation of magnetism by means of a modified Coulomb law was first proposed by Carl Friedrich Gauss . In 1835, Gauss assumed that each segment of a DC loop contains an equal number of negative and positive point charges that move at different speeds. [ 9 ] If Coulomb's law were completely correct, no force should act between any two short segments of such current loops. However, around 1825, André-Marie Ampère demonstrated experimentally that this is not the case. Ampère also formulated a force law . Based on this law, Gauss concluded that the electromagnetic force between two point charges depends not only on the distance but also on the relative velocity.
The Weber force is a central force and complies with Newton's third law . This demonstrates not only the conservation of momentum but also that the conservation of energy and the conservation of angular momentum apply. Weber electrodynamics is only a quasistatic approximation , i.e. it should not be used for higher velocities and accelerations. However, the Weber force illustrates that the Lorentz force can be traced back to central forces between numerous point-like charge carriers.
The force F acting on a particle of electric charge q with instantaneous velocity v , due to an external electric field E and magnetic field B , is given by ( SI definition of quantities [ nb 1 ] ): [ 10 ]
F = q ( E + v × B ) {\displaystyle \mathbf {F} =q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)}
where × is the vector cross product (all boldface quantities are vectors). In terms of Cartesian components, we have: F x = q ( E x + v y B z − v z B y ) , F y = q ( E y + v z B x − v x B z ) , F z = q ( E z + v x B y − v y B x ) . {\displaystyle {\begin{aligned}F_{x}&=q\left(E_{x}+v_{y}B_{z}-v_{z}B_{y}\right),\\[0.5ex]F_{y}&=q\left(E_{y}+v_{z}B_{x}-v_{x}B_{z}\right),\\[0.5ex]F_{z}&=q\left(E_{z}+v_{x}B_{y}-v_{y}B_{x}\right).\end{aligned}}}
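As a minimal numeric sketch, assuming Python with NumPy and arbitrary illustrative field and velocity values, the component formulas above amount to a cross product plus the electric term:

```python
import numpy as np

q = 1.602e-19                      # charge of a proton, C
E = np.array([1.0e3, 0.0, 0.0])    # electric field, V/m (illustrative values)
B = np.array([0.0, 0.0, 0.1])      # magnetic field, T
v = np.array([2.0e5, 1.0e5, 0.0])  # particle velocity, m/s

# F = q (E + v x B), evaluated component-wise exactly as in the text.
F = q * (E + np.cross(v, B))
print(F)   # newtons; F_z is zero here because E_z = 0 and (v x B)_z = 0
```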
In general, the electric and magnetic fields are functions of the position and time. Therefore, explicitly, the Lorentz force can be written as: F ( r ( t ) , r ˙ ( t ) , t , q ) = q [ E ( r , t ) + r ˙ ( t ) × B ( r , t ) ] {\displaystyle \mathbf {F} \left(\mathbf {r} (t),{\dot {\mathbf {r} }}(t),t,q\right)=q\left[\mathbf {E} (\mathbf {r} ,t)+{\dot {\mathbf {r} }}(t)\times \mathbf {B} (\mathbf {r} ,t)\right]} in which r is the position vector of the charged particle, t is time, and the overdot is a time derivative.
A positively charged particle will be accelerated in the same linear orientation as the E field, but will curve perpendicularly to both the instantaneous velocity vector v and the B field according to the right-hand rule (in detail, if the fingers of the right hand are extended to point in the direction of v and are then curled to point in the direction of B , then the extended thumb will point in the direction of F ).
The term q E is called the electric force , while the term q ( v × B ) is called the magnetic force . [ 11 ] According to some definitions, the term "Lorentz force" refers specifically to the formula for the magnetic force, [ 12 ] with the total electromagnetic force (including the electric force) given some other (nonstandard) name. This article will not follow this nomenclature: in what follows, the term Lorentz force will refer to the expression for the total force.
The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field. In that context, it is also called the Laplace force .
The Lorentz force is a force exerted by the electromagnetic field on the charged particle, that is, it is the rate at which linear momentum is transferred from the electromagnetic field to the particle. Associated with it is the power which is the rate at which energy is transferred from the electromagnetic field to the particle. That power is v ⋅ F = q v ⋅ E . {\displaystyle \mathbf {v} \cdot \mathbf {F} =q\,\mathbf {v} \cdot \mathbf {E} .} Notice that the magnetic field does not contribute to the power because the magnetic force is always perpendicular to the velocity of the particle.
For a continuous charge distribution in motion, the Lorentz force equation becomes: d F = d q ( E + v × B ) {\displaystyle \mathrm {d} \mathbf {F} =\mathrm {d} q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)} where d F {\displaystyle \mathrm {d} \mathbf {F} } is the force on a small piece of the charge distribution with charge d q {\displaystyle \mathrm {d} q} . If both sides of this equation are divided by the volume of this small piece of the charge distribution d V {\displaystyle \mathrm {d} V} , the result is: f = ρ ( E + v × B ) {\displaystyle \mathbf {f} =\rho \left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)} where f {\displaystyle \mathbf {f} } is the force density (force per unit volume) and ρ {\displaystyle \rho } is the charge density (charge per unit volume). Next, the current density corresponding to the motion of the charge continuum is [ 13 ] J = ρ v {\displaystyle \mathbf {J} =\rho \mathbf {v} } so the continuous analogue to the equation is [ 14 ]
f = ρ E + J × B {\displaystyle \mathbf {f} =\rho \mathbf {E} +\mathbf {J} \times \mathbf {B} }
The total force is the volume integral over the charge distribution: F = ∫ ( ρ E + J × B ) d V . {\displaystyle \mathbf {F} =\int \left(\rho \mathbf {E} +\mathbf {J} \times \mathbf {B} \right)\mathrm {d} V.}
By eliminating ρ {\displaystyle \rho } and J {\displaystyle \mathbf {J} } , using Maxwell's equations , and manipulating using the theorems of vector calculus , this form of the equation can be used to derive the Maxwell stress tensor σ {\displaystyle {\boldsymbol {\sigma }}} , in turn this can be combined with the Poynting vector S {\displaystyle \mathbf {S} } to obtain the electromagnetic stress–energy tensor T used in general relativity . [ 15 ]
In terms of σ {\displaystyle {\boldsymbol {\sigma }}} and S {\displaystyle \mathbf {S} } , another way to write the Lorentz force (per unit volume) is f = ∇ ⋅ σ − 1 c 2 ∂ S ∂ t {\displaystyle \mathbf {f} =\nabla \cdot {\boldsymbol {\sigma }}-{\dfrac {1}{c^{2}}}{\dfrac {\partial \mathbf {S} }{\partial t}}} where ∇ ⋅ {\displaystyle \nabla \cdot } denotes the divergence of the tensor field and c {\displaystyle c} is the speed of light . Rather than the amount of charge and its velocity in electric and magnetic fields, this equation relates the energy flux (flow of energy per unit time per unit distance) in the fields to the force exerted on a charge distribution. See Covariant formulation of classical electromagnetism for more details.
The density of power associated with the Lorentz force in a material medium is J ⋅ E . {\displaystyle \mathbf {J} \cdot \mathbf {E} .}
If we separate the total charge and total current into their free and bound parts, we get that the density of the Lorentz force is f = ( ρ f − ∇ ⋅ P ) E + ( J f + ∇ × M + ∂ P ∂ t ) × B . {\displaystyle \mathbf {f} =\left(\rho _{f}-\nabla \cdot \mathbf {P} \right)\mathbf {E} +\left(\mathbf {J} _{f}+\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}\right)\times \mathbf {B} .}
where: ρ f {\displaystyle \rho _{f}} is the density of free charge; P {\displaystyle \mathbf {P} } is the polarization density ; J f {\displaystyle \mathbf {J} _{f}} is the density of free current; and M {\displaystyle \mathbf {M} } is the magnetization density. In this way, the Lorentz force can explain the torque applied to a permanent magnet by the magnetic field. The density of the associated power is ( J f + ∇ × M + ∂ P ∂ t ) ⋅ E . {\displaystyle \left(\mathbf {J} _{f}+\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}\right)\cdot \mathbf {E} .}
The above-mentioned formulae use the conventions for the definition of the electric and magnetic field used with the SI , which is the most common. However, other conventions with the same physics (i.e. forces on e.g. an electron) are possible and used. In the conventions used with the older CGS-Gaussian units , which are somewhat more common among some theoretical physicists as well as condensed matter experimentalists, one has instead F = q G ( E G + v c × B G ) , {\displaystyle \mathbf {F} =q_{\mathrm {G} }\left(\mathbf {E} _{\mathrm {G} }+{\frac {\mathbf {v} }{c}}\times \mathbf {B} _{\mathrm {G} }\right),} where c is the speed of light . Although this equation looks slightly different, it is equivalent, since one has the following relations: [ nb 1 ] q G = q S I 4 π ε 0 , E G = 4 π ε 0 E S I , B G = 4 π / μ 0 B S I , c = 1 ε 0 μ 0 . {\displaystyle q_{\mathrm {G} }={\frac {q_{\mathrm {SI} }}{\sqrt {4\pi \varepsilon _{0}}}},\quad \mathbf {E} _{\mathrm {G} }={\sqrt {4\pi \varepsilon _{0}}}\,\mathbf {E} _{\mathrm {SI} },\quad \mathbf {B} _{\mathrm {G} }={\sqrt {4\pi /\mu _{0}}}\,{\mathbf {B} _{\mathrm {SI} }},\quad c={\frac {1}{\sqrt {\varepsilon _{0}\mu _{0}}}}.} where ε 0 is the vacuum permittivity and μ 0 the vacuum permeability . In practice, the subscripts "G" and "SI" are omitted, and the used convention (and unit) must be determined from context.
Early attempts to quantitatively describe the electromagnetic force were made in the mid-18th century. It was proposed that the force on magnetic poles, by Johann Tobias Mayer and others in 1760, [ 16 ] and electrically charged objects, by Henry Cavendish in 1762, [ 17 ] obeyed an inverse-square law . However, in both cases the experimental proof was neither complete nor conclusive. It was not until 1784 when Charles-Augustin de Coulomb , using a torsion balance , was able to definitively show through experiment that this was true. [ 18 ] Soon after the discovery in 1820 by Hans Christian Ørsted that a magnetic needle is acted on by a voltaic current, André-Marie Ampère that same year was able to devise through experimentation the formula for the angular dependence of the force between two current elements. [ 19 ] [ 20 ] In all these descriptions, the force was always described in terms of the properties of the matter involved and the distances between two masses or charges rather than in terms of electric and magnetic fields. [ 21 ]
The modern concept of electric and magnetic fields first arose in the theories of Michael Faraday , particularly his idea of lines of force , later to be given full mathematical description by Lord Kelvin and James Clerk Maxwell . [ 22 ] From a modern perspective it is possible to identify in Maxwell's 1865 formulation of his field equations a form of the Lorentz force equation in relation to electric currents, [ 2 ] although in the time of Maxwell it was not evident how his equations related to the forces on moving charged objects. J. J. Thomson was the first to attempt to derive from Maxwell's field equations the electromagnetic forces on a moving charged object in terms of the object's properties and external fields. Interested in determining the electromagnetic behavior of the charged particles in cathode rays , Thomson published a paper in 1881 wherein he gave the force on the particles due to an external magnetic field as [ 4 ] [ 23 ] F = q 2 v × B . {\displaystyle \mathbf {F} ={\frac {q}{2}}\mathbf {v} \times \mathbf {B} .} Thomson derived the correct basic form of the formula, but, because of some miscalculations and an incomplete description of the displacement current , included an incorrect scale-factor of a half in front of the formula. Oliver Heaviside invented the modern vector notation and applied it to Maxwell's field equations; he also (in 1885 and 1889) had fixed the mistakes of Thomson's derivation and arrived at the correct form of the magnetic force on a moving charged object. [ 4 ] [ 24 ] [ 25 ] Finally, in 1895, [ 3 ] [ 26 ] Hendrik Lorentz derived the modern form of the formula for the electromagnetic force which includes the contributions to the total force from both the electric and the magnetic fields. Lorentz began by abandoning the Maxwellian descriptions of the ether and conduction. Instead, Lorentz made a distinction between matter and the luminiferous aether and sought to apply the Maxwell equations at a microscopic scale. Using Heaviside's version of the Maxwell equations for a stationary ether and applying Lagrangian mechanics (see below), Lorentz arrived at the correct and complete form of the force law that now bears his name. [ 27 ] [ 28 ]
In many cases of practical interest, the motion in a magnetic field of an electrically charged particle (such as an electron or ion in a plasma ) can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation.
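A short simulation sketch, assuming Python with NumPy and using the Boris scheme (a standard particle-pushing method chosen here for illustration, not prescribed by the text), shows this decomposition in uniform crossed fields: the particle gyrates rapidly while its guiding center drifts at the E × B velocity.

```python
import numpy as np

q, m = 1.0, 1.0                    # charge and mass in arbitrary units
E = np.array([0.0, 1.0, 0.0])      # uniform electric field
B = np.array([0.0, 0.0, 2.0])      # uniform magnetic field
dt, steps = 0.01, 20000

x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
positions = np.empty((steps, 3))

for n in range(steps):
    # Boris scheme: half electric kick, magnetic rotation, half electric kick.
    v_minus = v + (q * E / m) * (dt / 2)
    t = (q * B / m) * (dt / 2)
    s = 2 * t / (1 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v = v_plus + (q * E / m) * (dt / 2)
    x = x + v * dt
    positions[n] = x

# Averaged over many gyrations, the motion approaches the E x B drift.
v_drift = np.cross(E, B) / (B @ B)
print("numerical mean velocity:", positions[-1] / (steps * dt))
print("E x B drift velocity:   ", v_drift)
```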
While the modern Maxwell's equations describe how electrically charged particles and currents or moving charged particles give rise to electric and magnetic fields, the Lorentz force law completes that picture by describing the force acting on a moving point charge q in the presence of electromagnetic fields. [ 10 ] [ 29 ] The Lorentz force law describes the effect of E and B upon a point charge, but such electromagnetic forces are not the entire picture. Charged particles are possibly coupled to other forces, notably gravity and nuclear forces. Thus, Maxwell's equations do not stand separate from other physical laws, but are coupled to them via the charge and current densities. The response of a point charge to the Lorentz law is one aspect; the generation of E and B by currents and charges is another.
In real materials the Lorentz force is inadequate to describe the collective behavior of charged particles, both in principle and as a matter of computation. The charged particles in a material medium not only respond to the E and B fields but also generate these fields. Complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations . For example, see magnetohydrodynamics , fluid dynamics , electrohydrodynamics , superconductivity , stellar evolution . An entire physical apparatus for dealing with these matters has developed. See for example, Green–Kubo relations and Green's function (many-body theory) .
When a wire carrying an electric current is placed in an external magnetic field, each of the moving charges, which comprise the current, experiences the Lorentz force, and together they can create a macroscopic force on the wire (sometimes called the Laplace force ). By combining the Lorentz force law above with the definition of electric current, the following equation results, in the case of a straight stationary wire in a homogeneous field: [ 30 ] F = I ℓ × B , {\displaystyle \mathbf {F} =I{\boldsymbol {\ell }}\times \mathbf {B} ,} where ℓ is a vector whose magnitude is the length of the wire, and whose direction is along the wire, aligned with the direction of the conventional current I .
If the wire is not straight, the force on it can be computed by applying this formula to each infinitesimal segment of wire d ℓ {\displaystyle \mathrm {d} {\boldsymbol {\ell }}} , then adding up all these forces by integration . This results in the same formal expression, but ℓ should now be understood as the vector connecting the end points of the curved wire with direction from starting to end point of conventional current. Usually, there will also be a net torque .
If, in addition, the magnetic field is inhomogeneous, the net force on a stationary rigid wire carrying a steady current I is given by integration along the wire, [ 31 ] F = I ∫ ( d ℓ × B ) . {\displaystyle \mathbf {F} =I\int (\mathrm {d} {\boldsymbol {\ell }}\times \mathbf {B} ).} One application of this is Ampère's force law , which describes how two current-carrying wires can attract or repel each other, since each experiences a Lorentz force from the other's generated magnetic field.
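A minimal numerical sketch, assuming Python with NumPy and an illustrative semicircular wire in a uniform field, approximates the line integral by summing segment contributions; for a uniform field the result reduces, as stated earlier, to I ℓ × B with ℓ the straight vector joining the endpoints.

```python
import numpy as np

I = 2.0                                  # current, A
B = np.array([0.0, 0.0, 0.5])            # uniform magnetic field, T
R = 0.1                                  # wire radius, m

# Semicircular wire in the x-y plane from (R, 0, 0) to (-R, 0, 0).
theta = np.linspace(0.0, np.pi, 2001)
points = np.column_stack([R * np.cos(theta), R * np.sin(theta), np.zeros_like(theta)])
dl = np.diff(points, axis=0)             # finite segments approximating d-ell

# F = I * sum(dl x B): numerical version of the line integral in the text.
F = I * np.cross(dl, B).sum(axis=0)

# For a uniform field this equals I * (end - start) x B.
ell = points[-1] - points[0]
print(F, I * np.cross(ell, B))
```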
Another application is an induction motor . The stator winding AC current generates a moving magnetic field which induces a current in the rotor. The subsequent Lorentz force F {\displaystyle \mathbf {F} } acting on the rotor creates a torque, making the motor spin. Hence, though the Lorentz force law does not apply when the magnetic field B {\displaystyle \mathbf {B} } is generated by the current I {\displaystyle I} , it does apply when the current I {\displaystyle I} is induced by the movement of magnetic field B {\displaystyle \mathbf {B} } .
The magnetic force ( q v × B ) component of the Lorentz force is responsible for motional electromotive force (or motional EMF ), the phenomenon underlying many electrical generators. When a conductor is moved through a magnetic field, the magnetic field exerts opposite forces on electrons and nuclei in the wire, and this creates the EMF. The term "motional EMF" is applied to this phenomenon, since the EMF is due to the motion of the wire. [ 32 ]
In other electrical generators, the magnets move, while the conductors do not. In this case, the EMF is due to the electric force ( q E ) term in the Lorentz Force equation. The electric field in question is created by the changing magnetic field, resulting in an induced EMF called the transformer EMF , as described by the Maxwell–Faraday equation (one of the four modern Maxwell's equations ). [ 33 ]
Both of these EMFs, despite their apparently distinct origins, are described by the same equation, namely, the EMF is the rate of change of magnetic flux through the wire. (This is Faraday's law of induction, see below .) Einstein's special theory of relativity was partially motivated by the desire to better understand this link between the two effects. [ 34 ] In fact, the electric and magnetic fields are different facets of the same electromagnetic field, and in moving from one inertial frame to another, the solenoidal vector field portion of the E -field can change in whole or in part to a B -field or vice versa . [ 35 ]
Given a loop of wire in a magnetic field , Faraday's law of induction states the induced electromotive force (EMF) in the wire is: E = − d Φ B d t {\displaystyle {\mathcal {E}}=-{\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}} where Φ B = ∫ Σ ( t ) B ( r , t ) ⋅ d A , {\displaystyle \Phi _{B}=\int _{\Sigma (t)}\mathbf {B} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} ,} is the magnetic flux through the loop, B is the magnetic field, Σ( t ) is a surface bounded by the closed contour ∂Σ( t ) , at time t , d A is an infinitesimal vector area element of Σ( t ) (magnitude is the area of an infinitesimal patch of surface, direction is orthogonal to that surface patch).
The sign of the EMF is determined by Lenz's law . Note that this is valid for not only a stationary wire – but also for a moving wire.
From Faraday's law of induction (which is valid for a moving wire, for instance in a motor) and the Maxwell equations , the Lorentz force can be deduced. The reverse is also true: the Lorentz force and the Maxwell equations can be used to derive Faraday's law .
Let ∂Σ( t ) be the moving wire, moving together without rotation and with constant velocity v and Σ( t ) be the internal surface of the wire. The EMF around the closed path ∂Σ( t ) is given by: [ 36 ] E = ∮ ∂ Σ ( t ) F q ⋅ d ℓ {\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma (t)}{\frac {\mathbf {F} }{q}}\cdot \mathrm {d} {\boldsymbol {\ell }}} where E ′ ( r , t ) = F / q ( r , t ) {\displaystyle \mathbf {E} '(\mathbf {r} ,t)=\mathbf {F} /q(\mathbf {r} ,t)} is the electric field and d ℓ is an infinitesimal vector element of the contour ∂Σ( t ) . [ 37 ] [ nb 3 ] Equating both integrals leads to the field theory form of Faraday's law, given by: [ 38 ] E = ∮ ∂ Σ ( t ) E ′ ( r , t ) ⋅ d ℓ = − d d t ∫ Σ ( t ) B ( r , t ) ⋅ d A . {\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-{\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma (t)}\mathbf {B} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} .} This result can be compared with the version of Faraday's law of induction that appears in the modern Maxwell's equations, called the (integral form of) Maxwell–Faraday equation : [ 39 ] ∮ ∂ Σ ( t ) E ( r , t ) ⋅ d ℓ = − ∫ Σ ( t ) ∂ B ( r , t ) ∂ t ⋅ d A . {\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} (\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-\int _{\Sigma (t)}{\frac {\partial \mathbf {B} (\mathbf {r} ,t)}{\partial t}}\cdot \mathrm {d} \mathbf {A} .}
The two equations are equivalent if the wire is not moving. In case the circuit is moving with a velocity v {\displaystyle \mathbf {v} } in some direction, then, using the Leibniz integral rule and that div B = 0 , gives ∮ ∂ Σ ( t ) E ′ ( r , t ) ⋅ d ℓ = − ∫ Σ ( t ) ∂ B ( r , t ) ∂ t ⋅ d A + ∮ ∂ Σ ( t ) ( v × B ( r , t ) ) ⋅ d ℓ . {\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-\int _{\Sigma (t)}{\frac {\partial \mathbf {B} (\mathbf {r} ,t)}{\partial t}}\cdot \mathrm {d} \mathbf {A} +\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {B} (\mathbf {r} ,t)\right)\cdot \mathrm {d} {\boldsymbol {\ell }}.} Substituting the Maxwell-Faraday equation then gives ∮ ∂ Σ ( t ) E ′ ( r , t ) ⋅ d ℓ = ∮ ∂ Σ ( t ) E ( r , t ) ⋅ d ℓ + ∮ ∂ Σ ( t ) ( v × B ( r , t ) ) d ℓ {\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=\oint _{\partial \Sigma (t)}\mathbf {E} (\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}+\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {B} (\mathbf {r} ,t)\right)\mathrm {d} {\boldsymbol {\ell }}} since this is valid for any wire position it implies that F = q E ( r , t ) + q v × B ( r , t ) . {\displaystyle \mathbf {F} =q\,\mathbf {E} (\mathbf {r} ,\,t)+q\,\mathbf {v} \times \mathbf {B} (\mathbf {r} ,\,t).}
Faraday's law of induction holds whether the loop of wire is rigid and stationary, or in motion or in process of deformation, and it holds whether the magnetic field is constant in time or changing. However, there are cases where Faraday's law is either inadequate or difficult to use, and application of the underlying Lorentz force law is necessary. See inapplicability of Faraday's law .
If the magnetic field is fixed in time and the conducting loop moves through the field, the magnetic flux Φ B linking the loop can change in several ways. For example, if the B -field varies with position, and the loop moves to a location with different B-field, Φ B will change. Alternatively, if the loop changes orientation with respect to the B-field, the B ⋅ d A differential element will change because of the different angle between B and d A , also changing Φ B . As a third example, if a portion of the circuit is swept through a uniform, time-independent B -field, and another portion of the circuit is held stationary, the flux linking the entire closed circuit can change due to the shift in relative position of the circuit's component parts with time (surface ∂Σ( t ) time-dependent). In all three cases, Faraday's law of induction then predicts the EMF generated by the change in Φ B .
Note that the Maxwell–Faraday equation implies that the electric field E is non-conservative when the magnetic field B varies in time: it is not expressible as the gradient of a scalar field , and it is not subject to the gradient theorem , since its curl is not zero. [ 36 ] [ 40 ]
The E and B fields can be replaced by the magnetic vector potential A and ( scalar ) electrostatic potential ϕ by E = − ∇ ϕ − ∂ A ∂ t B = ∇ × A {\displaystyle {\begin{aligned}\mathbf {E} &=-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}\\[1ex]\mathbf {B} &=\nabla \times \mathbf {A} \end{aligned}}} where ∇ is the gradient, ∇⋅ is the divergence, and ∇× is the curl .
The force becomes F = q [ − ∇ ϕ − ∂ A ∂ t + v × ( ∇ × A ) ] . {\displaystyle \mathbf {F} =q\left[-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\mathbf {v} \times (\nabla \times \mathbf {A} )\right].}
Using an identity for the triple product this can be rewritten as F = q [ − ∇ ϕ − ∂ A ∂ t + ∇ ( v ⋅ A ) − ( v ⋅ ∇ ) A ] . {\displaystyle \mathbf {F} =q\left[-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\nabla \left(\mathbf {v} \cdot \mathbf {A} \right)-\left(\mathbf {v} \cdot \nabla \right)\mathbf {A} \right].}
(Notice that the coordinates and the velocity components should be treated as independent variables, so the del operator acts only on A {\displaystyle \mathbf {A} } , not on v {\displaystyle \mathbf {v} } ; thus, there is no need of using Feynman's subscript notation in the equation above.) Using the chain rule, the convective derivative of A {\displaystyle \mathbf {A} } is: [ 41 ] d A d t = ∂ A ∂ t + ( v ⋅ ∇ ) A {\displaystyle {\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}={\frac {\partial \mathbf {A} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {A} } so that the above expression becomes: F = q [ − ∇ ( ϕ − v ⋅ A ) − d A d t ] . {\displaystyle \mathbf {F} =q\left[-\nabla (\phi -\mathbf {v} \cdot \mathbf {A} )-{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\right].}
With v = ẋ and d d t [ ∂ ∂ x ˙ ( ϕ − x ˙ ⋅ A ) ] = − d A d t , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {\partial }{\partial {\dot {\mathbf {x} }}}}\left(\phi -{\dot {\mathbf {x} }}\cdot \mathbf {A} \right)\right]=-{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}},} we can put the equation into the convenient Euler–Lagrange form [ 42 ]
F = q [ − ∇ x ( ϕ − x ˙ ⋅ A ) + d d t ∇ x ˙ ( ϕ − x ˙ ⋅ A ) ] {\displaystyle \mathbf {F} =q\left[-\nabla _{\mathbf {x} }(\phi -{\dot {\mathbf {x} }}\cdot \mathbf {A} )+{\frac {\mathrm {d} }{\mathrm {d} t}}\nabla _{\dot {\mathbf {x} }}(\phi -{\dot {\mathbf {x} }}\cdot \mathbf {A} )\right]}
where ∇ x = x ^ ∂ ∂ x + y ^ ∂ ∂ y + z ^ ∂ ∂ z {\displaystyle \nabla _{\mathbf {x} }={\hat {x}}{\dfrac {\partial }{\partial x}}+{\hat {y}}{\dfrac {\partial }{\partial y}}+{\hat {z}}{\dfrac {\partial }{\partial z}}} and ∇ x ˙ = x ^ ∂ ∂ x ˙ + y ^ ∂ ∂ y ˙ + z ^ ∂ ∂ z ˙ . {\displaystyle \nabla _{\dot {\mathbf {x} }}={\hat {x}}{\dfrac {\partial }{\partial {\dot {x}}}}+{\hat {y}}{\dfrac {\partial }{\partial {\dot {y}}}}+{\hat {z}}{\dfrac {\partial }{\partial {\dot {z}}}}.}
The Lagrangian for a charged particle of mass m and charge q in an electromagnetic field equivalently describes the dynamics of the particle in terms of its energy , rather than the force exerted on it. The classical expression is given by: [ 42 ] L = m 2 r ˙ ⋅ r ˙ + q A ⋅ r ˙ − q ϕ {\displaystyle L={\frac {m}{2}}\mathbf {\dot {r}} \cdot \mathbf {\dot {r}} +q\mathbf {A} \cdot \mathbf {\dot {r}} -q\phi } where A and ϕ are the potential fields as above. The quantity V = q ( ϕ − A ⋅ r ˙ ) {\displaystyle V=q(\phi -\mathbf {A} \cdot \mathbf {\dot {r}} )} can be identified as a generalized, velocity-dependent potential energy and, accordingly, F {\displaystyle \mathbf {F} } as a non-conservative force . [ 43 ] Using the Lagrangian, the equation for the Lorentz force given above can be obtained again.
For an A field, a particle moving with velocity v = ṙ has potential momentum q A ( r , t ) {\displaystyle q\mathbf {A} (\mathbf {r} ,t)} , so its potential energy is q A ( r , t ) ⋅ r ˙ {\displaystyle q\mathbf {A} (\mathbf {r} ,t)\cdot \mathbf {\dot {r}} } . For a ϕ field, the particle's potential energy is q ϕ ( r , t ) {\displaystyle q\phi (\mathbf {r} ,t)} .
The total potential energy is then: V = q ϕ − q A ⋅ r ˙ {\displaystyle V=q\phi -q\mathbf {A} \cdot \mathbf {\dot {r}} } and the kinetic energy is: T = m 2 r ˙ ⋅ r ˙ {\displaystyle T={\frac {m}{2}}\mathbf {\dot {r}} \cdot \mathbf {\dot {r}} } hence the Lagrangian: L = T − V = m 2 r ˙ ⋅ r ˙ + q A ⋅ r ˙ − q ϕ = m 2 ( x ˙ 2 + y ˙ 2 + z ˙ 2 ) + q ( x ˙ A x + y ˙ A y + z ˙ A z ) − q ϕ {\displaystyle {\begin{aligned}L&=T-V\\[1ex]&={\frac {m}{2}}\mathbf {\dot {r}} \cdot \mathbf {\dot {r}} +q\mathbf {A} \cdot \mathbf {\dot {r}} -q\phi \\[1ex]&={\frac {m}{2}}\left({\dot {x}}^{2}+{\dot {y}}^{2}+{\dot {z}}^{2}\right)+q\left({\dot {x}}A_{x}+{\dot {y}}A_{y}+{\dot {z}}A_{z}\right)-q\phi \end{aligned}}}
Lagrange's equations are d d t ∂ L ∂ x ˙ = ∂ L ∂ x {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {x}}}}={\frac {\partial L}{\partial x}}} (same for y and z ). So calculating the partial derivatives: d d t ∂ L ∂ x ˙ = m x ¨ + q d A x d t = m x ¨ + q [ ∂ A x ∂ t + ∂ A x ∂ x d x d t + ∂ A x ∂ y d y d t + ∂ A x ∂ z d z d t ] = m x ¨ + q [ ∂ A x ∂ t + ∂ A x ∂ x x ˙ + ∂ A x ∂ y y ˙ + ∂ A x ∂ z z ˙ ] {\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {x}}}}&=m{\ddot {x}}+q{\frac {\mathrm {d} A_{x}}{\mathrm {d} t}}\\&=m{\ddot {x}}+q\left[{\frac {\partial A_{x}}{\partial t}}+{\frac {\partial A_{x}}{\partial x}}{\frac {dx}{dt}}+{\frac {\partial A_{x}}{\partial y}}{\frac {dy}{dt}}+{\frac {\partial A_{x}}{\partial z}}{\frac {dz}{dt}}\right]\\[1ex]&=m{\ddot {x}}+q\left[{\frac {\partial A_{x}}{\partial t}}+{\frac {\partial A_{x}}{\partial x}}{\dot {x}}+{\frac {\partial A_{x}}{\partial y}}{\dot {y}}+{\frac {\partial A_{x}}{\partial z}}{\dot {z}}\right]\\\end{aligned}}} ∂ L ∂ x = − q ∂ ϕ ∂ x + q ( ∂ A x ∂ x x ˙ + ∂ A y ∂ x y ˙ + ∂ A z ∂ x z ˙ ) {\displaystyle {\frac {\partial L}{\partial x}}=-q{\frac {\partial \phi }{\partial x}}+q\left({\frac {\partial A_{x}}{\partial x}}{\dot {x}}+{\frac {\partial A_{y}}{\partial x}}{\dot {y}}+{\frac {\partial A_{z}}{\partial x}}{\dot {z}}\right)} equating and simplifying: m x ¨ + q ( ∂ A x ∂ t + ∂ A x ∂ x x ˙ + ∂ A x ∂ y y ˙ + ∂ A x ∂ z z ˙ ) = − q ∂ ϕ ∂ x + q ( ∂ A x ∂ x x ˙ + ∂ A y ∂ x y ˙ + ∂ A z ∂ x z ˙ ) {\displaystyle m{\ddot {x}}+q\left({\frac {\partial A_{x}}{\partial t}}+{\frac {\partial A_{x}}{\partial x}}{\dot {x}}+{\frac {\partial A_{x}}{\partial y}}{\dot {y}}+{\frac {\partial A_{x}}{\partial z}}{\dot {z}}\right)=-q{\frac {\partial \phi }{\partial x}}+q\left({\frac {\partial A_{x}}{\partial x}}{\dot {x}}+{\frac {\partial A_{y}}{\partial x}}{\dot {y}}+{\frac {\partial A_{z}}{\partial x}}{\dot {z}}\right)} F x = − q ( ∂ ϕ ∂ x + ∂ A x ∂ t ) + q [ y ˙ ( ∂ A y ∂ x − ∂ A x ∂ y ) + z ˙ ( ∂ A z ∂ x − ∂ A x ∂ z ) ] = q E x + q [ y ˙ ( ∇ × A ) z − z ˙ ( ∇ × A ) y ] = q E x + q [ r ˙ × ( ∇ × A ) ] x = q E x + q ( r ˙ × B ) x {\displaystyle {\begin{aligned}F_{x}&=-q\left({\frac {\partial \phi }{\partial x}}+{\frac {\partial A_{x}}{\partial t}}\right)+q\left[{\dot {y}}\left({\frac {\partial A_{y}}{\partial x}}-{\frac {\partial A_{x}}{\partial y}}\right)+{\dot {z}}\left({\frac {\partial A_{z}}{\partial x}}-{\frac {\partial A_{x}}{\partial z}}\right)\right]\\[1ex]&=qE_{x}+q[{\dot {y}}(\nabla \times \mathbf {A} )_{z}-{\dot {z}}(\nabla \times \mathbf {A} )_{y}]\\[1ex]&=qE_{x}+q[\mathbf {\dot {r}} \times (\nabla \times \mathbf {A} )]_{x}\\[1ex]&=qE_{x}+q(\mathbf {\dot {r}} \times \mathbf {B} )_{x}\end{aligned}}} and similarly for the y and z directions. Hence the force equation is: F = q ( E + r ˙ × B ) {\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {\dot {r}} \times \mathbf {B} )}
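This derivation can be checked symbolically for a concrete case. The following sketch, assuming Python with SymPy and choosing, purely for illustration, the potentials φ = −E₀x and A = (−B₀y/2, B₀x/2, 0), which give uniform fields E = (E₀, 0, 0) and B = (0, 0, B₀), confirms that the Euler–Lagrange equations of the Lagrangian above reduce to m r̈ = q(E + ṙ × B):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, q, E0, B0 = sp.symbols('t m q E_0 B_0', positive=True)
x, y, z = (sp.Function(n)(t) for n in ('x', 'y', 'z'))
xd, yd, zd = (f.diff(t) for f in (x, y, z))

# Potentials for uniform fields E = (E0, 0, 0) and B = (0, 0, B0) (assumed for illustration):
phi = -E0 * x                                  # scalar potential
A = sp.Matrix([-B0 * y / 2, B0 * x / 2, 0])    # symmetric-gauge vector potential

# Lagrangian L = T - V = m/2 v.v + q A.v - q phi, as in the text.
L = m / 2 * (xd**2 + yd**2 + zd**2) + q * (xd * A[0] + yd * A[1] + zd * A[2]) - q * phi

for eq in euler_equations(L, [x, y, z], t):
    sp.pprint(sp.simplify(eq))
# The x-equation simplifies to m*x'' = q*(E0 + B0*y'), i.e. the x-component of q(E + v x B).
```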
The relativistic Lagrangian is L = − m c 2 1 − ( r ˙ c ) 2 + q A ( r ) ⋅ r ˙ − q ϕ ( r ) {\displaystyle L=-mc^{2}{\sqrt {1-\left({\frac {\dot {\mathbf {r} }}{c}}\right)^{2}}}+q\mathbf {A} (\mathbf {r} )\cdot {\dot {\mathbf {r} }}-q\phi (\mathbf {r} )}
The action is the relativistic arclength of the path of the particle in spacetime , minus the potential energy contribution, plus an extra contribution which quantum mechanically is an extra phase a charged particle gets when it is moving along a vector potential.
The equations of motion derived by extremizing the action (see matrix calculus for the notation): d P d t = ∂ L ∂ r = q ∂ A ∂ r ⋅ r ˙ − q ∂ ϕ ∂ r {\displaystyle {\frac {\mathrm {d} \mathbf {P} }{\mathrm {d} t}}={\frac {\partial L}{\partial \mathbf {r} }}=q{\partial \mathbf {A} \over \partial \mathbf {r} }\cdot {\dot {\mathbf {r} }}-q{\partial \phi \over \partial \mathbf {r} }} P − q A = m r ˙ 1 − ( r ˙ c ) 2 {\displaystyle \mathbf {P} -q\mathbf {A} ={\frac {m{\dot {\mathbf {r} }}}{\sqrt {1-\left({\frac {\dot {\mathbf {r} }}{c}}\right)^{2}}}}} are the same as Hamilton's equations of motion : d r d t = ∂ ∂ p ( ( P − q A ) 2 + ( m c 2 ) 2 + q ϕ ) {\displaystyle {\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}={\frac {\partial }{\partial \mathbf {p} }}\left({\sqrt {(\mathbf {P} -q\mathbf {A} )^{2}+(mc^{2})^{2}}}+q\phi \right)} d p d t = − ∂ ∂ r ( ( P − q A ) 2 + ( m c 2 ) 2 + q ϕ ) {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}=-{\frac {\partial }{\partial \mathbf {r} }}\left({\sqrt {(\mathbf {P} -q\mathbf {A} )^{2}+(mc^{2})^{2}}}+q\phi \right)} both are equivalent to the noncanonical form: d d t m r ˙ 1 − ( r ˙ c ) 2 = q ( E + r ˙ × B ) . {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{m{\dot {\mathbf {r} }} \over {\sqrt {1-\left({\frac {\dot {\mathbf {r} }}{c}}\right)^{2}}}}=q\left(\mathbf {E} +{\dot {\mathbf {r} }}\times \mathbf {B} \right).} This formula is the Lorentz force, representing the rate at which the EM field adds relativistic momentum to the particle.
Using the metric signature (1, −1, −1, −1) , the Lorentz force for a charge q can be written in covariant form : [ 44 ]
d p α d τ = q F α β U β {\displaystyle {\frac {\mathrm {d} p^{\alpha }}{\mathrm {d} \tau }}=qF^{\alpha \beta }U_{\beta }}
where p α is the four-momentum , defined as p α = ( p 0 , p 1 , p 2 , p 3 ) = ( γ m c , p x , p y , p z ) , {\displaystyle p^{\alpha }=\left(p_{0},p_{1},p_{2},p_{3}\right)=\left(\gamma mc,p_{x},p_{y},p_{z}\right),} τ the proper time of the particle, F αβ the contravariant electromagnetic tensor F α β = ( 0 − E x / c − E y / c − E z / c E x / c 0 − B z B y E y / c B z 0 − B x E z / c − B y B x 0 ) {\displaystyle F^{\alpha \beta }={\begin{pmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{pmatrix}}} and U is the covariant 4-velocity of the particle, defined as: U β = ( U 0 , U 1 , U 2 , U 3 ) = γ ( c , − v x , − v y , − v z ) , {\displaystyle U_{\beta }=\left(U_{0},U_{1},U_{2},U_{3}\right)=\gamma \left(c,-v_{x},-v_{y},-v_{z}\right),} in which γ ( v ) = 1 1 − v 2 c 2 = 1 1 − v x 2 + v y 2 + v z 2 c 2 {\displaystyle \gamma (v)={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}={\frac {1}{\sqrt {1-{\frac {v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}{c^{2}}}}}}} is the Lorentz factor .
The fields are transformed to a frame moving with constant relative velocity by: F ′ μ ν = Λ μ α Λ ν β F α β , {\displaystyle F'^{\mu \nu }={\Lambda ^{\mu }}_{\alpha }{\Lambda ^{\nu }}_{\beta }F^{\alpha \beta }\,,} where Λ μ α is the Lorentz transformation tensor.
The α = 1 component ( x -component) of the force is d p 1 d τ = q U β F 1 β = q ( U 0 F 10 + U 1 F 11 + U 2 F 12 + U 3 F 13 ) . {\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=qU_{\beta }F^{1\beta }=q\left(U_{0}F^{10}+U_{1}F^{11}+U_{2}F^{12}+U_{3}F^{13}\right).}
Substituting the components of the covariant electromagnetic tensor F yields d p 1 d τ = q [ U 0 ( E x c ) + U 2 ( − B z ) + U 3 ( B y ) ] . {\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=q\left[U_{0}\left({\frac {E_{x}}{c}}\right)+U_{2}(-B_{z})+U_{3}(B_{y})\right].}
Using the components of covariant four-velocity yields d p 1 d τ = q γ [ c ( E x c ) + ( − v y ) ( − B z ) + ( − v z ) ( B y ) ] = q γ ( E x + v y B z − v z B y ) = q γ [ E x + ( v × B ) x ] . {\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=q\gamma \left[c\left({\frac {E_{x}}{c}}\right)+(-v_{y})(-B_{z})+(-v_{z})(B_{y})\right]=q\gamma \left(E_{x}+v_{y}B_{z}-v_{z}B_{y}\right)=q\gamma \left[E_{x}+\left(\mathbf {v} \times \mathbf {B} \right)_{x}\right]\,.}
The calculation for α = 2, 3 (force components in the y and z directions) yields similar results, so collecting the three equations into one: d p d τ = q γ ( E + v × B ) , {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} \tau }}=q\gamma \left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right),} and since differentials in coordinate time dt and proper time dτ are related by the Lorentz factor, d t = γ ( v ) d τ , {\displaystyle dt=\gamma (v)\,d\tau ,} so we arrive at d p d t = q ( E + v × B ) . {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}=q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right).}
This is precisely the Lorentz force law, however, it is important to note that p is the relativistic expression, p = γ ( v ) m 0 v . {\displaystyle \mathbf {p} =\gamma (v)m_{0}\mathbf {v} \,.}
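As a minimal numerical cross-check, a short Python/NumPy sketch (with arbitrary illustrative values for the charge, fields and velocity, not taken from the text) evaluates q F^{αβ} U_β and compares it with γ q ( E + v × B ):

import numpy as np

c = 299_792_458.0
q = 1.6e-19                                   # C, test charge
E = np.array([1.0e3, -2.0e3, 0.5e3])          # V/m, arbitrary
B = np.array([0.01, 0.02, -0.03])             # T, arbitrary
v = np.array([1.0e7, -2.0e7, 0.5e7])          # m/s, arbitrary with |v| < c
gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)

# Contravariant field tensor F^{alpha beta} as written above
F = np.array([
    [0.0,     -E[0]/c, -E[1]/c, -E[2]/c],
    [E[0]/c,   0.0,    -B[2],    B[1]],
    [E[1]/c,   B[2],    0.0,    -B[0]],
    [E[2]/c,  -B[1],    B[0],    0.0],
])
U_cov = gamma * np.array([c, -v[0], -v[1], -v[2]])   # covariant 4-velocity U_beta

dp_dtau = q * F @ U_cov                               # d p^alpha / d tau
assert np.allclose(dp_dtau[1:], gamma * q * (E + np.cross(v, B)))   # spatial part
assert np.isclose(dp_dtau[0], gamma * q * np.dot(E, v) / c)         # rate of work / c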
The electric and magnetic fields are dependent on the velocity of an observer , so the relativistic form of the Lorentz force law can best be exhibited starting from a coordinate-independent expression for the electromagnetic field F {\displaystyle {\mathcal {F}}} , and an arbitrary time-direction, γ 0 {\displaystyle \gamma _{0}} . This can be achieved through spacetime algebra (or the geometric algebra of spacetime), a type of Clifford algebra defined on a pseudo-Euclidean space , [ 45 ] as E = ( F ⋅ γ 0 ) γ 0 {\displaystyle \mathbf {E} =\left({\mathcal {F}}\cdot \gamma _{0}\right)\gamma _{0}} and i B = ( F ∧ γ 0 ) γ 0 {\displaystyle i\mathbf {B} =\left({\mathcal {F}}\wedge \gamma _{0}\right)\gamma _{0}} Here F {\displaystyle {\mathcal {F}}} is a spacetime bivector (an oriented plane segment, just like a vector is an oriented line segment ), which has six degrees of freedom corresponding to boosts (rotations in spacetime planes) and rotations (rotations in space-space planes). The dot product with the vector γ 0 {\displaystyle \gamma _{0}} pulls a vector (in the space algebra) from the translational part, while the wedge-product creates a trivector (in the space algebra) which is dual to a vector, namely the usual magnetic field vector. The relativistic velocity is given by the (time-like) changes in a time-position vector v = x ˙ {\displaystyle v={\dot {x}}} , where v 2 = 1 , {\displaystyle v^{2}=1,} (which shows our choice for the metric) and the velocity is v = c v ∧ γ 0 / ( v ⋅ γ 0 ) . {\displaystyle \mathbf {v} =cv\wedge \gamma _{0}/(v\cdot \gamma _{0}).}
The proper form of the Lorentz force law ('invariant' is an inadequate term because no transformation has been defined) is simply
F = q F ⋅ v {\displaystyle F=q{\mathcal {F}}\cdot v}
Note that the order is important because between a bivector and a vector the dot product is anti-symmetric. Upon a spacetime split, one can obtain the velocity and the fields as above, yielding the usual expression.
In the general theory of relativity the equation of motion for a particle with mass m {\displaystyle m} and charge e {\displaystyle e} , moving in a space with metric tensor g a b {\displaystyle g_{ab}} and electromagnetic field F a b {\displaystyle F_{ab}} , is given as
m d u c d s − m 1 2 g a b , c u a u b = e F c b u b , {\displaystyle m{\frac {du_{c}}{ds}}-m{\frac {1}{2}}g_{ab,c}u^{a}u^{b}=eF_{cb}u^{b},}
where u a = d x a / d s {\displaystyle u^{a}=dx^{a}/ds} ( d x a {\displaystyle dx^{a}} is taken along the trajectory), g a b , c = ∂ g a b / ∂ x c {\displaystyle g_{ab,c}=\partial g_{ab}/\partial x^{c}} , and d s 2 = g a b d x a d x b {\displaystyle ds^{2}=g_{ab}dx^{a}dx^{b}} .
The equation can also be written as
m d u c d s − m Γ a b c u a u b = e F c b u b , {\displaystyle m{\frac {du_{c}}{ds}}-m\Gamma _{abc}u^{a}u^{b}=eF_{cb}u^{b},}
where Γ a b c {\displaystyle \Gamma _{abc}} is the Christoffel symbol (of the torsion-free metric connection in general relativity), or as
m D u c d s = e F c b u b , {\displaystyle m{\frac {Du_{c}}{ds}}=eF_{cb}u^{b},}
where D {\displaystyle D} is the covariant differential in general relativity.
The Lorentz force occurs in many devices, including:
In its manifestation as the Laplace force on an electric current in a conductor, this force occurs in many devices, including: | https://en.wikipedia.org/wiki/Lorentz_force |
Lorentz force velocimetry [ 1 ] (LFV) is a noncontact electromagnetic flow measurement technique. LFV is particularly suited for the measurement of velocities in liquid metals like steel or aluminium and is currently under development for metallurgical applications. The measurement of flow velocities in hot and aggressive liquids such as liquid aluminium and molten glass constitutes one of the grand challenges of industrial fluid mechanics. Apart from liquids, LFV can also be used to measure the velocity of solid materials as well as for detection of micro-defects in their structures.
A Lorentz force velocimetry system is called a Lorentz force flowmeter (LFF). An LFF measures the integrated or bulk Lorentz force resulting from the interaction between a liquid metal in motion and an applied magnetic field. In this case the characteristic length of the magnetic field is of the same order of magnitude as the dimensions of the channel. Note that when localized magnetic fields are used, it is possible to perform local velocity measurements, and the term Lorentz force velocimeter is then used.
The use of magnetic fields in flow measurement dates back to the 19th century, when in 1832 Michael Faraday attempted to determine the velocity of the River Thames . Faraday applied a method in which a flow (the river flow) is exposed to a magnetic field (the Earth's magnetic field) and the induced voltage is measured using two electrodes across the same flow. This method is the basis of one of the most successful commercial applications in flow metering, known as the inductive flowmeter. The theory of such devices was developed and comprehensively summarized by Prof. J. A. Shercliff [ 2 ] in the early 1950s. While inductive flowmeters are widely used for flow measurement in fluids at room temperature such as beverages, chemicals and waste water, they are not suited for flow measurement in hot or aggressive media, or for local measurements where surrounding obstacles limit access to the channel or pipe. Since they require electrodes to be inserted into the fluid, their use is limited to applications at temperatures far below the melting points of practically relevant metals.
Lorentz force velocimetry was invented by A. Shercliff. However, it did not find practical application in those early years; only recent technical advances in the manufacture of strong rare-earth and non-rare-earth permanent magnets, in accurate force measurement techniques, and in multiphysical process simulation software for magnetohydrodynamic (MHD) problems have made it possible to turn this principle into a feasible working flow measurement technique. LFV is currently being developed for applications in metallurgy [ 3 ] as well as in other areas. [ 4 ]
Based on the theory introduced by Shercliff, there have been several attempts to develop flow measurement methods which do not require any mechanical contact with the fluid. [ 5 ] [ 6 ] Among them is the eddy current flowmeter, which measures flow-induced changes in the electric impedance of coils interacting with the flow. More recently, a non-contact method was proposed in which a magnetic field is applied to the flow and the velocity is determined from measurements of flow-induced deformations of the applied magnetic field. [ 7 ] [ 8 ]
The principle of Lorentz force velocimetry is based on measurements of the Lorentz force that occurs due to the flow of a conductive fluid under the influence of a variable magnetic field . According to Faraday's law , when a metal or conductive fluid moves through a magnetic field, eddy currents are generated by the electromotive force in zones of maximal magnetic field gradient (in the present case in the inlet and outlet zones). The eddy currents in turn create an induced magnetic field according to Ampère's law . The interaction between the eddy currents and the total magnetic field gives rise to a Lorentz force that brakes the flow. By virtue of Newton's third law "actio=reactio", a force of the same magnitude but opposite direction acts upon its source, the permanent magnet. Direct measurement of this reaction force allows the fluid's velocity to be determined, since the force is proportional to the flow rate. The Lorentz force used in LFV has nothing to do with magnetic attraction or repulsion. It is only due to the eddy currents, whose strength depends on the electrical conductivity, the relative velocity between the liquid and the permanent magnet, and the magnitude of the magnetic field.
So, when a liquid metal moves across magnetic field lines, the interaction of the magnetic field (which is either produced by a current-carrying coil or by a permanent magnet) with the induced eddy currents leads to a Lorentz force (with density f → = j → × B → {\displaystyle {\vec {f}}={\vec {j}}\times {\vec {B}}} ) which brakes the flow. The Lorentz force density is roughly f ∼ σ v B 2 {\displaystyle f\sim \sigma vB^{2}}
where σ {\displaystyle \sigma } is the electrical conductivity of the fluid, v {\displaystyle v} its velocity, and B {\displaystyle B} the magnitude of the magnetic field. This fact is well known and has found a variety of applications. This force is proportional to the velocity and conductivity of the fluid, and its measurement is the key idea of LFV. With the recent advent of powerful rare earth permanent magnets (like NdFeB , SmCo and other kinds of magnets) and tools for designing sophisticated permanent-magnet systems, the practical realization of this principle has now become possible.
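The linear dependence on the conductivity can be illustrated with a minimal order-of-magnitude sketch in Python; the velocity, flux density and conductivities below are assumed, typical-looking values rather than data from a specific installation.

sigma_metal = 1.0e6     # S/m, typical of a molten metal
sigma_glass = 1.0       # S/m, typical of a glass melt or electrolyte
v = 0.1                 # m/s, assumed flow velocity
B = 0.1                 # T, assumed flux density inside the fluid

for name, sigma in [("liquid metal", sigma_metal), ("glass melt", sigma_glass)]:
    f = sigma * v * B**2                     # braking Lorentz force density, N/m^3
    print(f"{name:12s}: f ~ {f:.1e} N/m^3")
# The force sensed by the magnet scales in the same way, so lowering the
# conductivity by a factor of 10^6 lowers the measurable signal by the same factor.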
The primary magnetic field B → ( r → ) {\displaystyle {\vec {B}}\left({\vec {r}}\right)} can be produced by a permanent magnet or a primary current J → ( r → ) {\displaystyle {\vec {J}}\left({\vec {r}}\right)} (see Fig. 1). The motion of the fluid under the action of the primary field induces eddy currents which are sketched in figure 3. They will be denoted by j → ( r → ) {\displaystyle {\vec {j}}\left({\vec {r}}\right)} and are called secondary currents. The interaction of the secondary current with the primary magnetic field is responsible for the Lorentz force within the fluid
which brakes the flow.
The secondary currents create a magnetic field b → ( r → ) {\displaystyle {\vec {b}}\left({\vec {r}}\right)} , the secondary magnetic field. The interaction of the primary electric current with the secondary magnetic field gives rise to the Lorentz force on the magnet system
The reciprocity principle for the Lorentz force velocimetry states that the electromagnetic forces on the fluid and on the magnet system have the same magnitude and act in opposite direction, namely
The general scaling law that relates the measured force to the unknown velocity can be derived with reference to the simplified situation shown in Fig. 2. Here a small permanent magnet with dipole moment m {\displaystyle m} is located at a distance L {\displaystyle L} above a semi-infinite fluid moving with uniform velocity v {\displaystyle v} parallel to its free surface.
The analysis that leads to the scaling relation can be made quantitative by assuming that the magnet is a point dipole with dipole moment m → = m e ^ z {\displaystyle {\vec {m}}=m{\hat {e}}_{z}} whose magnetic field is given by
where R → = r → − L e ^ z {\displaystyle {\vec {R}}={\vec {r}}-L{\hat {e}}_{z}} and R =∣ R → ∣ {\displaystyle R=\mid {\vec {R}}\mid } . Assuming a velocity field v → = v e ^ x {\displaystyle {\vec {v}}=v{\hat {e}}_{x}} for z < 0 {\displaystyle z<0} , the eddy currents can be computed from Ohm's law for a moving electrically conducting fluid
subject to the boundary conditions J z = 0 {\displaystyle J_{z}=0} at z = 0 {\displaystyle z=0} and J z → 0 {\displaystyle J_{z}\to 0} as z → − ∞ {\displaystyle z\to -\infty } . First, the scalar electric potential is obtained as
from which the electric current density is readily calculated; the eddy currents are indeed horizontal. Once they are known, the Biot–Savart law can be used to compute the secondary magnetic field b → ( r → ) {\displaystyle {\vec {b}}\left({\vec {r}}\right)} . Finally, the force is given by
where the gradient of b → {\displaystyle {\vec {b}}} has to be evaluated at the location of the dipole. For the problem at hand all these steps can be carried out analytically without any approximation leading to the result
This provides us with the estimate
Lorentz force flowmeters are usually classified into several main conceptual setups. Some of them are designed as static flowmeters, where the magnet system is at rest and the force acting on it is measured. Alternatively, they can be designed as rotary flowmeters, where the magnets are arranged on a rotating wheel and the spinning velocity is a measure of the flow velocity. Obviously, the force acting on a Lorentz force flowmeter depends both on the velocity distribution and on the shape of the magnet system. A further classification depends on the direction of the applied magnetic field relative to the direction of the flow. In Figure 3 one can distinguish diagrams of the longitudinal and the transverse Lorentz force flowmeters.
It is important to mention that even though the figures sketch only a coil or a magnet, the principle holds for both.
Rotary LFF consists of a freely rotating permanent magnet [ 9 ] (or an array of magnets mounted on a flywheel as shown in figure 4), which is magnetized perpendicularly to the axle it is mounted on. When such a system is placed close to a duct carrying an electrically conducting fluid flow, it rotates so that the driving torque due to the eddy currents induced by the flow is balanced by the braking torque induced by the rotation itself. The equilibrium rotation rate varies directly with the flow velocity and inversely with the distance between the magnet and the duct. In this case it is possible to measure either the torque on the magnet system or the angular velocity at which the wheel spins.
LFV is intended to be extended to all fluid or solid materials, provided that they are electrical conductors. As shown before, the Lorentz force generated by the flow depends linearly on the conductivity of the fluid. Typically, the electrical conductivity of molten metals is of the order of 10 6 S / m {\displaystyle 10^{6}~S/m} , so the Lorentz force is in the range of some mN . However, equally important liquids such as glass melts and electrolytic solutions have a conductivity of ∼ 1 S / m {\displaystyle \sim ~1~S/m} , giving rise to a Lorentz force of the order of micronewtons or even smaller.
Among the different possibilities for measuring the effect on the magnet system, those based on measuring the deflection of a parallel spring under the applied force have been applied successfully. [ 10 ] At first a strain gauge was used; later the deflection of a quartz spring was recorded with an interferometer, in which case the deformation is detected to within 0.1 nm.
Recent advances in LFV have made it possible to meter the flow velocity of media with very low electrical conductivity; in particular, varying the setup parameters and using state-of-the-art force measurement devices enable the measurement of flow velocities in electrolyte solutions with a conductivity 10 6 times smaller than that of liquid metals. There is a variety of industrial and scientific applications where noncontact flow measurement through opaque walls or in opaque liquids is desirable. Such applications include flow metering of chemicals, food, beverages, blood, aqueous solutions in the pharmaceutical industry, molten salts in solar thermal power plants, [ 11 ] and high temperature reactors [ 12 ] as well as glass melts for high-precision optics. [ 13 ]
A noncontact flowmeter is a device that is neither in mechanical contact with the liquid nor with the wall of the pipe in which the liquid flows. Noncontact flowmeters are equally useful when walls are contaminated, as in the processing of radioactive materials, when pipes are strongly vibrating, or in cases where portable flowmeters are to be developed. If the liquid and the wall of the pipe are transparent and the liquid contains tracer particles, optical measurement techniques [ 14 ] [ 15 ] are an effective tool for performing noncontact measurements. However, if either the wall or the liquid is opaque, as is often the case in food production, chemical engineering, glass making, and metallurgy, very few possibilities for noncontact flow measurement exist.
The force measurement system is an important part of Lorentz force velocimetry; a high-resolution force measurement system makes measurements at even lower conductivities possible, and it has been continually developed. At first, pendulum-like setups were used (Figure 5). One of the experimental facilities consists of two high power (410 mT) magnets made of NdFeB suspended by thin wires on both sides of the channel, thereby creating a magnetic field perpendicular to the fluid flow; here the deflection is measured by an interferometer system. [ 16 ] [ 17 ] The second setup consists of a state-of-the-art weighing balance system (Figure 6) from which optimized magnets based on a Halbach array are hung. While the total mass of both magnet systems is the same (1 kg), this system produces a system response three times higher due to the arrangement of the individual elements in the array and its interaction with the predefined fluid profile. The use of very sensitive force measuring devices is desirable, since the flow velocity has to be inferred from the very small detected Lorentz force. The ratio of this force to the unavoidable dead weight F G {\displaystyle F_{G}} of the magnet ( F G = m ⋅ g {\displaystyle F_{G}=m\cdot g} ) is around F / F G = 10 − 7 {\displaystyle F/F_{G}=10^{-7}} . Later, a method of differential force measurement was developed: two balances are used, one carrying the magnet and the other a dummy of the same weight, so that the influence of the environment is reduced. Recently, it has been reported that flow measurements by this method are possible for saltwater flows whose electrical conductivity is as small as 0.06 S/m (the range of electrical conductivity of regular tap water). [ 18 ]
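The quoted ratio follows from a one-line calculation; a short Python sketch, assuming the 1 kg magnet system mentioned above, makes the required force resolution explicit.

m = 1.0                  # kg, magnet system mass as quoted above
g = 9.81                 # m/s^2
F_G = m * g              # dead weight of the magnet, about 9.81 N
F = 1.0e-7 * F_G         # Lorentz force at the quoted ratio F/F_G = 1e-7, about 1 microN
print(F_G, F)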
Lorentz force sigmometry (LOFOS) [ 19 ] is a contactless method for measuring the thermophysical properties of materials, whether fluid or solid. Precise measurements of the electrical conductivity, density, viscosity, thermal conductivity and surface tension of molten metals are of great importance in industrial applications. One of the major problems in the experimental measurement of thermophysical properties at high temperature (>1000 K) in the liquid state is the risk of chemical reactions between the hot fluid and the electrical probes.
The basic equation for calculating the electrical conductivity is derived from the equation that links the mass flow rate m ˙ {\displaystyle {\dot {m}}} and Lorentz force F {\displaystyle F} generated by magnetic field in flow:
where Σ = σ ρ {\displaystyle \Sigma ={\frac {\sigma }{\rho }}} is the specific electrical conductivity, equal to the ratio of the electrical conductivity σ {\displaystyle \sigma } to the mass density ρ {\displaystyle \rho } of the fluid. K {\displaystyle K} is a calibration factor that depends on the geometry of the LOFOS system.
From the equation above, the cumulative mass during the operating time is determined as
where F ~ {\displaystyle {\tilde {F}}} is the integral of the Lorentz force over the process time. From this equation, and using the definition of the specific electrical conductivity, one can derive the final equation for computing the electrical conductivity of the fluid, in the form
Time-of-flight Lorentz force velocimetry [ 20 ] [ 21 ] is intended for contactless determination of the flow rate in conductive fluids. It can be used successfully even when material properties such as electrical conductivity or density are not precisely known under the specific outer conditions, which makes time-of-flight LFV especially important for industrial applications. In time-of-flight LFV (Fig. 9), two identical measurement systems are mounted on the channel one after the other. The measurement is based on the cross-correlation function of the signals registered by the two magnetic measurement systems. Each system consists of a permanent magnet and a force sensor, so the induction of the Lorentz force and the measurement of the reaction force occur simultaneously. A cross-correlation function is useful only if the two signals differ qualitatively, and to create this difference turbulent fluctuations are used. Before reaching the measurement zone of the channel, the liquid passes an artificial vortex generator that induces strong disturbances in it. When such a fluctuation (vortex) reaches the magnetic field of the first measurement system, a peak appears in its force-time characteristic while the second system still measures the undisturbed flow. From the time between the peaks and the distance between the measurement systems, the observer can estimate the mean velocity and hence the flow rate of the liquid by the equation:
where D {\displaystyle D} is the distance between the magnet systems, τ {\displaystyle \tau } is the time delay between the recorded peaks, and k {\displaystyle k} is obtained experimentally for every specific liquid, as shown in figure 9.
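A minimal Python/NumPy sketch of the time-of-flight idea is given below; the sampling rate, the synthetic force signals, the distance D and the calibration factor k are all invented for illustration, and the delay is recovered from the peak of the cross-correlation function.

import numpy as np

fs = 1000.0                                  # Hz, assumed sampling rate
t = np.arange(0.0, 5.0, 1.0 / fs)
D = 0.2                                      # m, assumed distance between the two magnet systems
v_true = 0.5                                 # m/s, assumed mean flow velocity
delay_true = D / v_true                      # s, true transit time of a vortex

rng = np.random.default_rng(0)
turbulence = rng.normal(size=t.size)                              # vortex-induced fluctuations
s1 = np.convolve(turbulence, np.ones(50) / 50, mode="same")       # upstream force signal
s2 = np.roll(s1, int(round(delay_true * fs)))                     # downstream, delayed copy

corr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
tau = (np.argmax(corr) - (s1.size - 1)) / fs                      # recovered delay
k = 1.0                                                           # liquid-specific calibration factor
print(k * D / tau)                                                # close to 0.5 m/s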
A different, albeit physically closely related challenge is the detection of deeply lying flaws and inhomogeneities in electrically conducting solid materials.
In the traditional version of eddy current testing an alternating (AC) magnetic field is used to induce eddy currents inside the material to be investigated. If the material contains a crack or flaw which makes the spatial distribution of the electrical conductivity nonuniform, the path of the eddy currents is perturbed and the impedance of the coil which generates the AC magnetic field is modified. By measuring the impedance of this coil, a crack can hence be detected. Since the eddy currents are generated by an AC magnetic field, their penetration into the subsurface region of the material is limited by the skin effect . The applicability of the traditional version of eddy current testing is therefore limited to the analysis of the immediate vicinity of the surface of a material, usually of the order of one millimeter. Attempts to overcome this fundamental limitation using low frequency coils and superconducting magnetic field sensors have not led to widespread applications.
A recent technique, referred to as Lorentz force eddy current testing (LET), [ 22 ] [ 23 ] exploits the advantages of applying DC magnetic fields and relative motion providing deep and relatively fast testing of electrically conducting materials. In principle, LET represents a modification of the traditional eddy current testing from which it differs in two aspects, namely (i) how eddy currents are induced and (ii) how their perturbation is detected. In LET eddy currents are generated by providing the relative motion between the conductor under test and a permanent magnet (see figure 10). If the magnet is passing by a defect, the Lorentz force acting on it shows a distortion whose detection is the key for the LET working principle. If the object is free of defects, the resulting Lorentz force remains constant.
The advantages of LFV are
The limitations of the LFV are | https://en.wikipedia.org/wiki/Lorentz_force_velocimetry |
The Lorentz oscillator model (classical electron oscillator or CEO model) describes the optical response of bound charges . The model is named after the Dutch physicist Hendrik Antoon Lorentz . It is a classical , phenomenological model for materials with characteristic resonance frequencies (or other characteristic energy scales) for optical absorption, e.g. ionic and molecular vibrations , interband transitions (semiconductors), phonons , and collective excitations. [ 1 ] [ 2 ]
The model is derived by modeling an electron orbiting a massive, stationary nucleus as a spring-mass-damper system . [ 2 ] [ 3 ] [ 4 ] The electron is modeled to be connected to the nucleus via a hypothetical spring and its motion is damped via a hypothetical damper. The damping force ensures that the oscillator's response is finite at its resonance frequency. For a time-harmonic driving force which originates from the electric field, Newton's second law can be applied to the electron to obtain the motion of the electron and expressions for the dipole moment , polarization , susceptibility , and dielectric function . [ 4 ]
Equation of motion for electron oscillator: F net = F damping + F spring + F driving = m d 2 r d t 2 − m τ d r d t − k r − e E ( t ) = m d 2 r d t 2 d 2 r d t 2 + 1 τ d r d t + ω 0 2 r = − e m E ( t ) {\displaystyle {\begin{aligned}\mathbf {F} _{\text{net}}=\mathbf {F} _{\text{damping}}+\mathbf {F} _{\text{spring}}+\mathbf {F} _{\text{driving}}&=m{\frac {\mathrm {d} ^{2}\mathbf {r} }{\mathrm {d} t^{2}}}\\[1ex]{\frac {-m}{\tau }}{\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}-k\mathbf {r} -{e}\mathbf {E} (t)&=m{\frac {\mathrm {d} ^{2}\mathbf {r} }{\mathrm {d} t^{2}}}\\[1ex]{\frac {\mathrm {d} ^{2}\mathbf {r} }{\mathrm {d} t^{2}}}+{\frac {1}{\tau }}{\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}+\omega _{0}^{2}\mathbf {r} \;&=\;{\frac {-e}{m}}\mathbf {E} (t)\end{aligned}}}
where
For time-harmonic fields: E ( t ) = E 0 e − i ω t {\displaystyle \mathbf {E} (t)=\mathbf {E} _{0}e^{-i\omega t}} r ( t ) = r 0 e − i ω t {\displaystyle \mathbf {r} (t)=\mathbf {r} _{0}e^{-i\omega t}}
The stationary solution of this equation of motion is: r ( ω ) = − e m ω 0 2 − ω 2 − i ω / τ E ( ω ) {\displaystyle \mathbf {r} (\omega )={\frac {\frac {-e}{m}}{\omega _{0}^{2}-\omega ^{2}-i\omega /\tau }}\mathbf {E} (\omega )}
The fact that the above solution is complex means there is a time delay (phase shift) between the driving electric field and the response of the electron's motion. [ 4 ]
The displacement, r {\displaystyle \mathbf {r} } , induces a dipole moment, p {\displaystyle \mathbf {p} } , given by p ( ω ) = − e r ( ω ) = α ^ ( ω ) E ( ω ) . {\displaystyle \mathbf {p} (\omega )=-e\mathbf {r} (\omega )={\hat {\alpha }}(\omega )\mathbf {E} (\omega ).}
α ^ ( ω ) {\displaystyle {\hat {\alpha }}(\omega )} is the polarizability of single oscillator, given by α ^ ( ω ) = e 2 m 1 ( ω 0 2 − ω 2 ) − i ω / τ . {\displaystyle {\hat {\alpha }}(\omega )={\frac {e^{2}}{m}}{\frac {1}{(\omega _{0}^{2}-\omega ^{2})-i\omega /\tau }}.}
Three distinct scattering regimes can be identified, corresponding to the dominant term in the denominator of the dipole moment: [ 5 ]
The polarization P {\displaystyle \mathbf {P} } is the dipole moment per unit volume. For macroscopic material properties, N is the density of charges (electrons) per unit volume. Considering that each electron acts with the same dipole moment, the polarization is P = N p = N α ^ ( ω ) E ( ω ) . {\displaystyle \mathbf {P} =N\mathbf {p} =N{\hat {\alpha }}(\omega )\mathbf {E} (\omega ).}
The electric displacement D {\displaystyle \mathbf {D} } is related to the polarization density P {\displaystyle \mathbf {P} } by D = ε ^ E = E + 4 π P = ( 1 + 4 π N α ^ ) E {\displaystyle \mathbf {D} ={\hat {\varepsilon }}\mathbf {E} =\mathbf {E} +4\pi \mathbf {P} =(1+4\pi N{\hat {\alpha }})\mathbf {E} }
The complex dielectric function is given by the following (in Gaussian units ): ε ^ ( ω ) = 1 + 4 π N e 2 m 1 ( ω 0 2 − ω 2 ) − i ω / τ {\displaystyle {\hat {\varepsilon }}(\omega )=1+{\frac {4\pi Ne^{2}}{m}}{\frac {1}{(\omega _{0}^{2}-\omega ^{2})-i\omega /\tau }}} where 4 π N e 2 / m = ω p 2 {\displaystyle 4\pi Ne^{2}/m=\omega _{p}^{2}} and ω p {\displaystyle \omega _{p}} is the so-called plasma frequency .
In practice, the model is commonly modified to account for multiple absorption mechanisms present in a medium. The modified version is given by [ 7 ] ε ^ ( ω ) = ε ∞ + ∑ j χ j L ( ω ; ω 0 , j ) {\displaystyle {\hat {\varepsilon }}(\omega )=\varepsilon _{\infty }+\sum _{j}\chi _{j}^{L}(\omega ;\omega _{0,j})} where χ j L ( ω ; ω 0 , j ) = s j ω 0 , j 2 − ω 2 − i Γ j ω {\displaystyle \chi _{j}^{L}(\omega ;\omega _{0,j})={\frac {s_{j}}{\omega _{0,j}^{2}-\omega ^{2}-i\Gamma _{j}\omega }}} and
Separating the real and imaginary components, ε ^ ( ω ) = ε 1 ( ω ) + i ε 2 ( ω ) = [ ε ∞ + ∑ j s j ( ω 0 , j 2 − ω 2 ) ( ω 0 , j 2 − ω 2 ) 2 + ( Γ j ω ) 2 ] + i [ ∑ j s j ( Γ j ω ) ( ω 0 , j 2 − ω 2 ) 2 + ( Γ j ω ) 2 ] {\displaystyle {\hat {\varepsilon }}(\omega )=\varepsilon _{1}(\omega )+i\varepsilon _{2}(\omega )=\left[\varepsilon _{\infty }+\sum _{j}{\frac {s_{j}(\omega _{0,j}^{2}-\omega ^{2})}{\left(\omega _{0,j}^{2}-\omega ^{2}\right)^{2}+\left(\Gamma _{j}\omega \right)^{2}}}\right]+i\left[\sum _{j}{\frac {s_{j}(\Gamma _{j}\omega )}{\left(\omega _{0,j}^{2}-\omega ^{2}\right)^{2}+\left(\Gamma _{j}\omega \right)^{2}}}\right]}
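A short Python/NumPy sketch evaluates this multi-oscillator dielectric function; the background permittivity, oscillator strengths, resonance frequencies and damping rates below are arbitrary illustrative numbers, not values from the text.

import numpy as np

eps_inf = 2.0
oscillators = [                 # (s_j, omega_0j, Gamma_j), assumed values
    (1.0e31, 2.0e15, 1.0e14),
    (5.0e30, 4.0e15, 2.0e14),
]

omega = np.linspace(1.0e14, 6.0e15, 2000)
eps = np.full(omega.shape, eps_inf, dtype=complex)
for s, w0, G in oscillators:
    eps += s / (w0**2 - omega**2 - 1j * G * omega)

eps1, eps2 = eps.real, eps.imag  # the real and imaginary parts separated above
print(omega[np.argmax(eps2)])    # the absorption peak sits near the strongest resonance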
The complex optical conductivity in general is related to the complex dielectric function (in Gaussian units ) as σ ^ ( ω ) = ω 4 π i ( ε ^ ( ω ) − 1 ) {\displaystyle {\hat {\sigma }}(\omega )={\frac {\omega }{4\pi i}}\left({\hat {\varepsilon }}(\omega )-1\right)}
Substituting the formula of ε ^ ( ω ) {\displaystyle {\hat {\varepsilon }}(\omega )} in the equation above we obtain σ ^ ( ω ) = N e 2 m ω ω / τ + i ( ω 0 2 − ω 2 ) {\displaystyle {\hat {\sigma }}(\omega )={\frac {Ne^{2}}{m}}{\frac {\omega }{\omega /\tau +i\left(\omega _{0}^{2}-\omega ^{2}\right)}}}
Separating the real and imaginary components, σ ^ ( ω ) = σ 1 ( ω ) + i σ 2 ( ω ) = N e 2 m ω 2 τ ( ω 0 2 − ω 2 ) 2 + ω 2 / τ 2 − i N e 2 m ( ω 0 2 − ω 2 ) ω ( ω 0 2 − ω 2 ) 2 + ω 2 / τ 2 {\displaystyle {\hat {\sigma }}(\omega )=\sigma _{1}(\omega )+i\sigma _{2}(\omega )={\frac {Ne^{2}}{m}}{\frac {\frac {\omega ^{2}}{\tau }}{\left(\omega _{0}^{2}-\omega ^{2}\right)^{2}+\omega ^{2}/\tau ^{2}}}-i{\frac {Ne^{2}}{m}}{\frac {\left(\omega _{0}^{2}-\omega ^{2}\right)\omega }{\left(\omega _{0}^{2}-\omega ^{2}\right)^{2}+\omega ^{2}/\tau ^{2}}}} | https://en.wikipedia.org/wiki/Lorentz_oscillator_model |
In a relativistic theory of physics , a Lorentz scalar is a scalar expression whose value is invariant under any Lorentz transformation . A Lorentz scalar may be generated from, e.g., the scalar product of vectors, or by contracting tensors. While the components of the contracted quantities may change under Lorentz transformations, the Lorentz scalars remain unchanged.
A simple Lorentz scalar in Minkowski spacetime is the spacetime distance ("length" of their difference) of two fixed events in spacetime. While the "position"-4-vectors of the events change between different inertial frames, their spacetime distance remains invariant under the corresponding Lorentz transformation. Other examples of Lorentz scalars are the "length" of 4-velocities (see below), or the Ricci curvature in a point in spacetime from general relativity , which is a contraction of the Riemann curvature tensor there.
In special relativity the location of a particle in 4-dimensional spacetime is given by x μ = ( c t , x ) {\displaystyle x^{\mu }=(ct,\mathbf {x} )} where x = v t {\displaystyle \mathbf {x} =\mathbf {v} t} is the position in 3-dimensional space of the particle, v {\displaystyle \mathbf {v} } is the velocity in 3-dimensional space and c {\displaystyle c} is the speed of light .
The "length" of the vector is a Lorentz scalar and is given by x μ x μ = η μ ν x μ x ν = ( c t ) 2 − x ⋅ x = d e f ( c τ ) 2 {\displaystyle x_{\mu }x^{\mu }=\eta _{\mu \nu }x^{\mu }x^{\nu }=(ct)^{2}-\mathbf {x} \cdot \mathbf {x} \ {\stackrel {\mathrm {def} }{=}}\ (c\tau )^{2}} where τ {\displaystyle \tau } is the proper time as measured by a clock in the rest frame of the particle and the Minkowski metric is given by η μ ν = η μ ν = ( 1 0 0 0 0 − 1 0 0 0 0 − 1 0 0 0 0 − 1 ) . {\displaystyle \eta ^{\mu \nu }=\eta _{\mu \nu }={\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}}.} This is a time-like metric.
Often the alternate signature of the Minkowski metric is used in which the signs of the ones are reversed. η μ ν = η μ ν = ( − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) . {\displaystyle \eta ^{\mu \nu }=\eta _{\mu \nu }={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}.} This is a space-like metric.
In the Minkowski metric the space-like interval s {\displaystyle s} is defined as x μ x μ = η μ ν x μ x ν = x ⋅ x − ( c t ) 2 = d e f s 2 . {\displaystyle x_{\mu }x^{\mu }=\eta _{\mu \nu }x^{\mu }x^{\nu }=\mathbf {x} \cdot \mathbf {x} -(ct)^{2}\ {\stackrel {\mathrm {def} }{=}}\ s^{2}.}
We use the space-like Minkowski metric in the rest of this article.
The velocity in spacetime is defined as v μ = d e f d x μ d τ = ( c d t d τ , d t d τ d x d t ) = ( γ c , γ v ) = γ ( c , v ) {\displaystyle v^{\mu }\ {\stackrel {\mathrm {def} }{=}}\ {dx^{\mu } \over d\tau }=\left(c{dt \over d\tau },{dt \over d\tau }{d\mathbf {x} \over dt}\right)=\left(\gamma c,\gamma {\mathbf {v} }\right)=\gamma \left(c,{\mathbf {v} }\right)} where γ = d e f 1 1 − v ⋅ v c 2 . {\displaystyle \gamma \ {\stackrel {\mathrm {def} }{=}}\ {1 \over {\sqrt {1-{{\mathbf {v} \cdot \mathbf {v} } \over c^{2}}}}}.}
The magnitude of the 4-velocity is a Lorentz scalar, v μ v μ = − c 2 . {\displaystyle v_{\mu }v^{\mu }=-c^{2}\,.}
Hence, c {\displaystyle c} is a Lorentz scalar.
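This invariance is easy to verify numerically; a short Python/NumPy sketch with an arbitrary 3-velocity and the space-like metric diag(−1, 1, 1, 1) used in this section checks that the 4-velocity norm equals −c².

import numpy as np

c = 299_792_458.0
v = np.array([0.3, -0.4, 0.1]) * c            # arbitrary 3-velocity with |v| < c
gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # space-like metric used here
u = gamma * np.array([c, *v])                 # contravariant 4-velocity
assert np.isclose(u @ eta @ u, -c**2)         # u_mu u^mu = -c^2, a Lorentz scalar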
The 4-acceleration is given by a μ = d e f d v μ d τ . {\displaystyle a^{\mu }\ {\stackrel {\mathrm {def} }{=}}\ {dv^{\mu } \over d\tau }.}
The 4-acceleration is always perpendicular to the 4-velocity 0 = 1 2 d d τ ( v μ v μ ) = d v μ d τ v μ = a μ v μ . {\displaystyle 0={1 \over 2}{d \over d\tau }\left(v_{\mu }v^{\mu }\right)={dv_{\mu } \over d\tau }v^{\mu }=a_{\mu }v^{\mu }.}
Therefore, we can regard acceleration in spacetime as simply a rotation of the 4-velocity. The inner product of the acceleration and the velocity is a Lorentz scalar and is zero. This rotation is simply an expression of energy conservation: d E d τ = F ⋅ v {\displaystyle {dE \over d\tau }=\mathbf {F} \cdot \mathbf {v} } where E {\displaystyle E} is the energy of a particle and F {\displaystyle \mathbf {F} } is the 3-force on the particle.
The 4-momentum of a particle is p μ = m v μ = ( γ m c , γ m v ) = ( γ m c , p ) = ( E c , p ) {\displaystyle p^{\mu }=mv^{\mu }=\left(\gamma mc,\gamma m\mathbf {v} \right)=\left(\gamma mc,\mathbf {p} \right)=\left({\frac {E}{c}},\mathbf {p} \right)} where m {\displaystyle m} is the particle rest mass, p {\displaystyle \mathbf {p} } is the momentum in 3-space, and E = γ m c 2 {\displaystyle E=\gamma mc^{2}} is the energy of the particle.
Consider a second particle with 4-velocity u {\displaystyle u} and a 3-velocity u 2 {\displaystyle \mathbf {u} _{2}} . In the rest frame of the second particle the inner product of u {\displaystyle u} with p {\displaystyle p} is proportional to the energy of the first particle p μ u μ = − E 1 {\displaystyle p_{\mu }u^{\mu }=-E_{1}} where the subscript 1 indicates the first particle.
Since the relationship is true in the rest frame of the second particle, it is true in any reference frame. E 1 {\displaystyle E_{1}} , the energy of the first particle in the frame of the second particle, is a Lorentz scalar. Therefore, E 1 = γ 1 γ 2 m 1 c 2 − γ 2 p 1 ⋅ u 2 {\displaystyle E_{1}=\gamma _{1}\gamma _{2}m_{1}c^{2}-\gamma _{2}\mathbf {p} _{1}\cdot \mathbf {u} _{2}} in any inertial reference frame, where E 1 {\displaystyle E_{1}} is still the energy of the first particle in the frame of the second particle.
In the rest frame of the particle the inner product of the momentum is p μ p μ = − ( m c ) 2 . {\displaystyle p_{\mu }p^{\mu }=-(mc)^{2}\,.}
Therefore, the rest mass ( m ) is a Lorentz scalar. The relationship remains true independent of the frame in which the inner product is calculated. In many cases the rest mass is written as m 0 {\displaystyle m_{0}} to avoid confusion with the relativistic mass, which is γ m 0 {\displaystyle \gamma m_{0}} .
Note that ( p μ u μ c ) 2 + p μ p μ = E 1 2 c 2 − ( m c ) 2 = ( γ 1 2 − 1 ) ( m c ) 2 = γ 1 2 v 1 ⋅ v 1 m 2 = p 1 ⋅ p 1 . {\displaystyle \left({\frac {p_{\mu }u^{\mu }}{c}}\right)^{2}+p_{\mu }p^{\mu }={E_{1}^{2} \over c^{2}}-(mc)^{2}=\left(\gamma _{1}^{2}-1\right)(mc)^{2}=\gamma _{1}^{2}{\mathbf {v} _{1}\cdot \mathbf {v} _{1}}m^{2}=\mathbf {p} _{1}\cdot \mathbf {p} _{1}.}
The square of the magnitude of the 3-momentum of the particle as measured in the frame of the second particle is a Lorentz scalar.
The 3-speed, in the frame of the second particle, can be constructed from two Lorentz scalars v 1 2 = v 1 ⋅ v 1 = p 1 ⋅ p 1 E 1 2 c 4 . {\displaystyle v_{1}^{2}=\mathbf {v} _{1}\cdot \mathbf {v} _{1}={\frac {\mathbf {p} _{1}\cdot \mathbf {p} _{1}}{E_{1}^{2}}}c^{4}.}
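The following Python/NumPy sketch, with arbitrary test masses and velocities and units in which c = 1, evaluates these scalars: the energy of the first particle in the rest frame of the second and the 3-speed constructed from it.

import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # space-like metric, c = 1

def four_velocity(v3):
    g = 1.0 / np.sqrt(1.0 - np.dot(v3, v3))
    return g * np.array([1.0, *v3])

m1 = 2.0
v1 = np.array([0.6, 0.0, 0.1])                # arbitrary 3-velocity of particle 1
v2 = np.array([-0.2, 0.3, 0.0])               # arbitrary 3-velocity of particle 2
u1, u2 = four_velocity(v1), four_velocity(v2)
p1 = m1 * u1                                   # 4-momentum of particle 1

E1 = -(p1 @ eta @ u2)                          # energy of particle 1 in the frame of particle 2
p_sq = (p1 @ eta @ u2)**2 + p1 @ eta @ p1      # squared 3-momentum of 1 in the frame of 2
v1_in_2 = np.sqrt(p_sq) / E1                   # 3-speed of 1 in the frame of 2 (c = 1)

assert p1 @ eta @ p1 < 0 and 0.0 < v1_in_2 < 1.0
print(E1, v1_in_2)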
Scalars may also be constructed from the tensors and vectors, from the contraction of tensors (such as F μ ν F μ ν {\displaystyle F_{\mu \nu }F^{\mu \nu }} ), or combinations of contractions of tensors and vectors (such as g μ ν x μ x ν {\displaystyle g_{\mu \nu }x^{\mu }x^{\nu }} ). | https://en.wikipedia.org/wiki/Lorentz_scalar |
In physics , the Lorentz transformations are a six-parameter family of linear transformations from a coordinate frame in spacetime to another frame that moves at a constant velocity relative to the former. The respective inverse transformation is then parameterized by the negative of this velocity. The transformations are named after the Dutch physicist Hendrik Lorentz .
The most common form of the transformation, parametrized by the real constant v , {\displaystyle v,} representing a velocity confined to the x -direction, is expressed as [ 1 ] [ 2 ] t ′ = γ ( t − v x c 2 ) x ′ = γ ( x − v t ) y ′ = y z ′ = z {\displaystyle {\begin{aligned}t'&=\gamma \left(t-{\frac {vx}{c^{2}}}\right)\\x'&=\gamma \left(x-vt\right)\\y'&=y\\z'&=z\end{aligned}}} where ( t , x , y , z ) and ( t′ , x′ , y′ , z′ ) are the coordinates of an event in two frames with the spatial origins coinciding at t = t′ = 0 , where the primed frame is seen from the unprimed frame as moving with speed v along the x -axis, where c is the speed of light , and γ = 1 1 − v 2 / c 2 {\displaystyle \gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}} is the Lorentz factor . When speed v is much smaller than c , the Lorentz factor is negligibly different from 1, but as v approaches c , γ {\displaystyle \gamma } grows without bound. The value of v must be smaller than c for the transformation to make sense.
Expressing the speed as a fraction of the speed of light, β = v / c , {\textstyle \beta =v/c,} an equivalent form of the transformation is [ 3 ] c t ′ = γ ( c t − β x ) x ′ = γ ( x − β c t ) y ′ = y z ′ = z . {\displaystyle {\begin{aligned}ct'&=\gamma \left(ct-\beta x\right)\\x'&=\gamma \left(x-\beta ct\right)\\y'&=y\\z'&=z.\end{aligned}}}
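A minimal Python sketch of this standard-configuration boost, with arbitrary event coordinates and speed, also checks that the quantity (ct)² − x² − y² − z² is unchanged.

import math

def boost_x(ct, x, y, z, beta):
    # boost along x with speed beta = v/c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (ct - beta * x), gamma * (x - beta * ct), y, z

ct, x, y, z = 5.0, 1.0, -2.0, 0.5      # arbitrary event coordinates (units of length)
beta = 0.8

ctp, xp, yp, zp = boost_x(ct, x, y, z, beta)
interval   = ct**2  - x**2  - y**2  - z**2
interval_p = ctp**2 - xp**2 - yp**2 - zp**2
assert math.isclose(interval, interval_p)      # spacetime interval is invariant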
Frames of reference can be divided into two groups: inertial (relative motion with constant velocity) and non-inertial (accelerating, moving in curved paths, rotational motion with constant angular velocity , etc.). The term "Lorentz transformations" only refers to transformations between inertial frames, usually in the context of special relativity.
In each reference frame , an observer can use a local coordinate system (usually Cartesian coordinates in this context) to measure lengths, and a clock to measure time intervals. An event is something that happens at a point in space at an instant of time, or more formally a point in spacetime . The transformations connect the space and time coordinates of an event as measured by an observer in each frame. [ nb 1 ]
They supersede the Galilean transformation of Newtonian physics , which assumes an absolute space and time (see Galilean relativity ). The Galilean transformation is a good approximation only at relative speeds much less than the speed of light. Lorentz transformations have a number of unintuitive features that do not appear in Galilean transformations. For example, they reflect the fact that observers moving at different velocities may measure different distances , elapsed times , and even different orderings of events , but always such that the speed of light is the same in all inertial reference frames. The invariance of light speed is one of the postulates of special relativity .
Historically, the transformations were the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame , and to understand the symmetries of the laws of electromagnetism . The transformations later became a cornerstone for special relativity .
The Lorentz transformation is a linear transformation . It may include a rotation of space; a rotation-free Lorentz transformation is called a Lorentz boost . In Minkowski space —the mathematical model of spacetime in special relativity—the Lorentz transformations preserve the spacetime interval between any two events. They describe only the transformations in which the spacetime event at the origin is left fixed. They can be considered as a hyperbolic rotation of Minkowski space. The more general set of transformations that also includes translations is known as the Poincaré group .
Many physicists—including Woldemar Voigt , George FitzGerald , Joseph Larmor , and Hendrik Lorentz [ 4 ] himself—had been discussing the physics implied by these equations since 1887. [ 5 ] Early in 1889, Oliver Heaviside had shown from Maxwell's equations that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the luminiferous aether . FitzGerald then conjectured that Heaviside's distortion result might be applied to a theory of intermolecular forces. Some months later, FitzGerald published the conjecture that bodies in motion are being contracted, in order to explain the baffling outcome of the 1887 aether-wind experiment of Michelson and Morley . In 1892, Lorentz independently presented the same idea in a more detailed manner, which was subsequently called FitzGerald–Lorentz contraction hypothesis . [ 6 ] Their explanation was widely known before 1905. [ 7 ]
Lorentz (1892–1904) and Larmor (1897–1900), who believed the luminiferous aether hypothesis, also looked for the transformation under which Maxwell's equations are invariant when transformed from the aether to a moving frame. They extended the FitzGerald–Lorentz contraction hypothesis and found out that the time coordinate has to be modified as well (" local time "). Henri Poincaré gave a physical interpretation to local time (to first order in v / c , the relative velocity of the two reference frames normalized to the speed of light) as the consequence of clock synchronization, under the assumption that the speed of light is constant in moving frames. [ 8 ] Larmor is credited to have been the first to understand the crucial time dilation property inherent in his equations. [ 9 ]
In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group ,
and he named it after Lorentz. [ 10 ] Later in the same year Albert Einstein published what is now called special relativity , by deriving the Lorentz transformation under the assumptions of the principle of relativity and the constancy of the speed of light in any inertial reference frame , and by abandoning the mechanistic aether as unnecessary. [ 11 ]
An event is something that happens at a certain point in spacetime, or more generally, the point in spacetime itself. In any inertial frame an event is specified by a time coordinate ct and a set of Cartesian coordinates x , y , z to specify position in space in that frame. Subscripts label individual events.
From Einstein's second postulate of relativity (invariance of c ) it follows that:
in all inertial frames for events connected by light signals . The quantity on the left is called the spacetime interval between events a 1 = ( t 1 , x 1 , y 1 , z 1 ) and a 2 = ( t 2 , x 2 , y 2 , z 2 ) . The interval between any two events, not necessarily separated by light signals, is in fact invariant, i.e., independent of the state of relative motion of observers in different inertial frames, as is shown using homogeneity and isotropy of space . The transformation sought after thus must possess the property that:
where ( t , x , y , z ) are the spacetime coordinates used to define events in one frame, and ( t′ , x′ , y′ , z′ ) are the coordinates in another frame. First one observes that ( D2 ) is satisfied if an arbitrary 4 -tuple b of numbers are added to events a 1 and a 2 . Such transformations are called spacetime translations and are not dealt with further here. Then one observes that a linear solution preserving the origin of the simpler problem solves the general problem too:
(a solution satisfying the first formula automatically satisfies the second one as well; see polarization identity ). Finding the solution to the simpler problem is just a matter of look-up in the theory of classical groups that preserve bilinear forms of various signature. [ nb 2 ] The first equation in ( D3 ) can be written more compactly as:
where (·, ·) refers to the bilinear form of signature (1, 3) on R 4 exposed by the right hand side formula in ( D3 ). The alternative notation defined on the right is referred to as the relativistic dot product . Spacetime mathematically viewed as R 4 endowed with this bilinear form is known as Minkowski space M . The Lorentz transformation is thus an element of the group O(1, 3) , the Lorentz group or, for those that prefer the other metric signature , O(3, 1) (also called the Lorentz group). [ nb 3 ] One has:
which is precisely preservation of the bilinear form ( D3 ) which implies (by linearity of Λ and bilinearity of the form) that ( D2 ) is satisfied. The elements of the Lorentz group are rotations and boosts and mixes thereof. If the spacetime translations are included, then one obtains the inhomogeneous Lorentz group or the Poincaré group .
The relations between the primed and unprimed spacetime coordinates are the Lorentz transformations , each coordinate in one frame is a linear function of all the coordinates in the other frame, and the inverse functions are the inverse transformation. Depending on how the frames move relative to each other, and how they are oriented in space relative to each other, other parameters that describe direction, speed, and orientation enter the transformation equations.
Transformations describing relative motion with constant (uniform) velocity and without rotation of the space coordinate axes are called Lorentz boosts or simply boosts , and the relative velocity between the frames is the parameter of the transformation. The other basic type of Lorentz transformation is rotation in the spatial coordinates only, these like boosts are inertial transformations since there is no relative motion, the frames are simply tilted (and not continuously rotating), and in this case quantities defining the rotation are the parameters of the transformation (e.g., axis–angle representation , or Euler angles , etc.). A combination of a rotation and boost is a homogeneous transformation , which transforms the origin back to the origin.
The full Lorentz group O(3, 1) also contains special transformations that are neither rotations nor boosts, but rather reflections in a plane through the origin. Two of these can be singled out; spatial inversion in which the spatial coordinates of all events are reversed in sign and temporal inversion in which the time coordinate for each event gets its sign reversed.
Boosts should not be conflated with mere displacements in spacetime; in this case, the coordinate systems are simply shifted and there is no relative motion. However, these also count as symmetries forced by special relativity since they leave the spacetime interval invariant. A combination of a rotation with a boost, followed by a shift in spacetime, is an inhomogeneous Lorentz transformation , an element of the Poincaré group, which is also called the inhomogeneous Lorentz group.
A "stationary" observer in frame F defines events with coordinates t , x , y , z . Another frame F′ moves with velocity v relative to F , and an observer in this "moving" frame F′ defines events using the coordinates t′ , x′ , y′ , z′ .
The coordinate axes in each frame are parallel (the x and x′ axes are parallel, the y and y′ axes are parallel, and the z and z′ axes are parallel), remain mutually perpendicular, and relative motion is along the coincident xx′ axes. At t = t′ = 0 , the origins of both coordinate systems are the same, ( x , y , z ) = ( x′ , y′ , z′ ) = (0, 0, 0) . In other words, the times and positions are coincident at this event. If all these hold, then the coordinate systems are said to be in standard configuration , or synchronized .
If an observer in F records an event t , x , y , z , then an observer in F′ records the same event with coordinates [ 13 ]
t ′ = γ ( t − v x c 2 ) x ′ = γ ( x − v t ) y ′ = y z ′ = z {\displaystyle {\begin{aligned}t'&=\gamma \left(t-{\frac {vx}{c^{2}}}\right)\\x'&=\gamma \left(x-vt\right)\\y'&=y\\z'&=z\end{aligned}}}
where v is the relative velocity between frames in the x -direction, c is the speed of light , and γ = 1 1 − v 2 c 2 {\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}} (lowercase gamma ) is the Lorentz factor .
Here, v is the parameter of the transformation, for a given boost it is a constant number, but can take a continuous range of values. In the setup used here, positive relative velocity v > 0 is motion along the positive directions of the xx′ axes, zero relative velocity v = 0 is no relative motion, while negative relative velocity v < 0 is relative motion along the negative directions of the xx′ axes. The magnitude of relative velocity v cannot equal or exceed c , so only subluminal speeds − c < v < c are allowed. The corresponding range of γ is 1 ≤ γ < ∞ .
The transformations are not defined if v is outside these limits. At the speed of light ( v = c ) γ is infinite, and faster than light ( v > c ) γ is a complex number , each of which make the transformations unphysical. The space and time coordinates are measurable quantities and numerically must be real numbers.
As an active transformation , an observer in F′ notices the coordinates of the event to be "boosted" in the negative directions of the xx′ axes, because of the − v in the transformations. This has the equivalent effect of the coordinate system F′ boosted in the positive directions of the xx′ axes, while the event does not change and is simply represented in another coordinate system, a passive transformation .
The inverse relations ( t , x , y , z in terms of t′ , x′ , y′ , z′ ) can be found by algebraically solving the original set of equations. A more efficient way is to use physical principles. Here F′ is the "stationary" frame while F is the "moving" frame. According to the principle of relativity, there is no privileged frame of reference, so the transformations from F′ to F must take exactly the same form as the transformations from F to F′ . The only difference is F moves with velocity − v relative to F′ (i.e., the relative velocity has the same magnitude but is oppositely directed). Thus if an observer in F′ notes an event t′ , x′ , y′ , z′ , then an observer in F notes the same event with coordinates
t = γ ( t ′ + v x ′ c 2 ) x = γ ( x ′ + v t ′ ) y = y ′ z = z ′ , {\displaystyle {\begin{aligned}t&=\gamma \left(t'+{\frac {vx'}{c^{2}}}\right)\\x&=\gamma \left(x'+vt'\right)\\y&=y'\\z&=z',\end{aligned}}}
and the value of γ remains unchanged. This "trick" of simply reversing the direction of relative velocity while preserving its magnitude, and exchanging primed and unprimed variables, always applies to finding the inverse transformation of every boost in any direction. [ 14 ] [ 15 ]
Sometimes it is more convenient to use β = v / c (lowercase beta ) instead of v , so that c t ′ = γ ( c t − β x ) , x ′ = γ ( x − β c t ) , {\displaystyle {\begin{aligned}ct'&=\gamma \left(ct-\beta x\right)\,,\\x'&=\gamma \left(x-\beta ct\right)\,,\\\end{aligned}}} which shows much more clearly the symmetry in the transformation. From the allowed ranges of v and the definition of β , it follows −1 < β < 1 . The use of β and γ is standard throughout the literature.
When the boost velocity v {\displaystyle {\boldsymbol {v}}} is in an arbitrary vector direction with the boost vector β = v / c {\displaystyle {\boldsymbol {\beta }}={\boldsymbol {v}}/c} , then the transformation from an unprimed spacetime coordinate system to a primed coordinate system is given by [ 16 ] [ 17 ]
[ c t ′ x ′ y ′ z ′ ] = [ γ − γ β x − γ β y − γ β z − γ β x 1 + γ 2 1 + γ β x 2 γ 2 1 + γ β x β y γ 2 1 + γ β x β z − γ β y γ 2 1 + γ β x β y 1 + γ 2 1 + γ β y 2 γ 2 1 + γ β y β z − γ β z γ 2 1 + γ β x β z γ 2 1 + γ β y β z 1 + γ 2 1 + γ β z 2 ] [ c t x y z ] , {\displaystyle {\begin{bmatrix}ct'\\x'\\y'\\z'\end{bmatrix}}={\begin{bmatrix}\gamma &-\gamma \beta _{x}&-\gamma \beta _{y}&-\gamma \beta _{z}\\-\gamma \beta _{x}&1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}^{2}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{y}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{z}\\-\gamma \beta _{y}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{y}&1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}^{2}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}\beta _{z}\\-\gamma \beta _{z}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{x}\beta _{z}&{\frac {\gamma ^{2}}{1+\gamma }}\beta _{y}\beta _{z}&1+{\frac {\gamma ^{2}}{1+\gamma }}\beta _{z}^{2}\\\end{bmatrix}}{\begin{bmatrix}ct\\x\\y\\z\end{bmatrix}},}
where the Lorentz factor is γ = 1 / 1 − β 2 {\displaystyle \gamma =1/{\sqrt {1-{\boldsymbol {\beta }}^{2}}}} . The determinant of the transformation matrix is +1 and its trace is 2 ( 1 + γ ) {\displaystyle 2(1+\gamma )} . The inverse of the transformation is given by reversing the sign of β {\displaystyle {\boldsymbol {\beta }}} . The quantity c 2 t 2 − x 2 − y 2 − z 2 {\displaystyle c^{2}t^{2}-x^{2}-y^{2}-z^{2}} is invariant under the transformation: namely c 2 t ′ 2 − x ′ 2 − y ′ 2 − z ′ 2 = c 2 t 2 − x 2 − y 2 − z 2 {\displaystyle c^{2}t'^{2}-x'^{2}-y'^{2}-z'^{2}=c^{2}t^{2}-x^{2}-y^{2}-z^{2}} .
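The stated properties of this matrix (unit determinant, trace 2(1 + γ), interval preservation, and inversion by reversing β) can be checked with a short Python/NumPy sketch for an arbitrary boost vector.

import numpy as np

beta = np.array([0.3, -0.2, 0.4])                 # arbitrary boost vector, |beta| < 1
gamma = 1.0 / np.sqrt(1.0 - beta @ beta)

L = np.empty((4, 4))
L[0, 0] = gamma
L[0, 1:] = -gamma * beta
L[1:, 0] = -gamma * beta
L[1:, 1:] = np.eye(3) + (gamma**2 / (1.0 + gamma)) * np.outer(beta, beta)

assert np.isclose(np.linalg.det(L), 1.0)
assert np.isclose(np.trace(L), 2.0 * (1.0 + gamma))

eta = np.diag([1.0, -1.0, -1.0, -1.0])
assert np.allclose(L.T @ eta @ L, eta)            # the spacetime interval is preserved

L_rev = L.copy()                                   # same boost with beta -> -beta
L_rev[0, 1:] = gamma * beta
L_rev[1:, 0] = gamma * beta
assert np.allclose(L_rev @ L, np.eye(4))           # reversing beta inverts the boost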
The Lorentz transformations can also be derived in a way that resembles circular rotations in 3-dimensional space using the hyperbolic functions . For the boost in the x direction, the results are
c t ′ = c t cosh ζ − x sinh ζ x ′ = x cosh ζ − c t sinh ζ y ′ = y z ′ = z {\displaystyle {\begin{aligned}ct'&=ct\cosh \zeta -x\sinh \zeta \\x'&=x\cosh \zeta -ct\sinh \zeta \\y'&=y\\z'&=z\end{aligned}}}
where ζ (lowercase zeta ) is a parameter called rapidity (many other symbols are used, including θ , ϕ , φ , η , ψ , ξ ). Given the strong resemblance to rotations of spatial coordinates in 3-dimensional space in the Cartesian xy , yz , and zx planes, a Lorentz boost can be thought of as a hyperbolic rotation of spacetime coordinates in the xt, yt, and zt Cartesian-time planes of 4-dimensional Minkowski space . The parameter ζ is the hyperbolic angle of rotation, analogous to the ordinary angle for circular rotations. This transformation can be illustrated with a Minkowski diagram .
The hyperbolic functions arise from the difference between the squares of the time and spatial coordinates in the spacetime interval, rather than a sum. The geometric significance of the hyperbolic functions can be visualized by taking x = 0 or ct = 0 in the transformations. Squaring and subtracting the results, one can derive hyperbolic curves of constant coordinate values but varying ζ , which parametrizes the curves according to the identity cosh 2 ζ − sinh 2 ζ = 1 . {\displaystyle \cosh ^{2}\zeta -\sinh ^{2}\zeta =1\,.}
Conversely the ct and x axes can be constructed for varying coordinates but constant ζ . The definition tanh ζ = sinh ζ cosh ζ , {\displaystyle \tanh \zeta ={\frac {\sinh \zeta }{\cosh \zeta }}\,,} provides the link between a constant value of rapidity, and the slope of the ct axis in spacetime. A consequence of these two hyperbolic formulae is an identity that matches the Lorentz factor cosh ζ = 1 1 − tanh 2 ζ . {\displaystyle \cosh \zeta ={\frac {1}{\sqrt {1-\tanh ^{2}\zeta }}}\,.}
Comparing the Lorentz transformations in terms of the relative velocity and rapidity, or using the above formulae, the connections between β , γ , and ζ are β = tanh ζ , γ = cosh ζ , β γ = sinh ζ . {\displaystyle {\begin{aligned}\beta &=\tanh \zeta \,,\\\gamma &=\cosh \zeta \,,\\\beta \gamma &=\sinh \zeta \,.\end{aligned}}}
Taking the inverse hyperbolic tangent gives the rapidity ζ = tanh − 1 β . {\displaystyle \zeta =\tanh ^{-1}\beta \,.}
Since −1 < β < 1 , it follows −∞ < ζ < ∞ . From the relation between ζ and β , positive rapidity ζ > 0 is motion along the positive directions of the xx′ axes, zero rapidity ζ = 0 is no relative motion, while negative rapidity ζ < 0 is relative motion along the negative directions of the xx′ axes.
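The relations between β , γ and ζ can be checked numerically; the sketch below assumes NumPy, c = 1 and an illustrative speed, and also confirms that the hyperbolic form of the x -boost agrees with the standard form.

```python
# Sketch relating beta, gamma and rapidity, assuming NumPy and c = 1.
import numpy as np

beta = 0.6                                   # illustrative relative speed / c
zeta = np.arctanh(beta)                      # rapidity
gamma = 1.0 / np.sqrt(1.0 - beta**2)

print(np.isclose(np.cosh(zeta), gamma))          # gamma = cosh(zeta)
print(np.isclose(np.sinh(zeta), beta * gamma))   # beta*gamma = sinh(zeta)

# The hyperbolic form of the x-boost agrees with the standard form:
ct, x = 1.0, 0.25                            # an arbitrary event
ct_p = ct * np.cosh(zeta) - x * np.sinh(zeta)
x_p  = x * np.cosh(zeta) - ct * np.sinh(zeta)
print(np.isclose(ct_p, gamma * (ct - beta * x)))
print(np.isclose(x_p,  gamma * (x - beta * ct)))
```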
The inverse transformations are obtained by exchanging primed and unprimed quantities to switch the coordinate frames, and negating rapidity ζ → − ζ since this is equivalent to negating the relative velocity. Therefore,
c t = c t ′ cosh ζ + x ′ sinh ζ x = x ′ cosh ζ + c t ′ sinh ζ y = y ′ z = z ′ {\displaystyle {\begin{aligned}ct&=ct'\cosh \zeta +x'\sinh \zeta \\x&=x'\cosh \zeta +ct'\sinh \zeta \\y&=y'\\z&=z'\end{aligned}}}
The inverse transformations can be similarly visualized by considering the cases when x′ = 0 and ct′ = 0 .
So far the Lorentz transformations have been applied to one event . If there are two events, there is a spatial separation and time interval between them. It follows from the linearity of the Lorentz transformations that two values of space and time coordinates can be chosen, the Lorentz transformations can be applied to each, then subtracted to get the Lorentz transformations of the differences;
Δ t ′ = γ ( Δ t − v Δ x c 2 ) , Δ x ′ = γ ( Δ x − v Δ t ) , {\displaystyle {\begin{aligned}\Delta t'&=\gamma \left(\Delta t-{\frac {v\,\Delta x}{c^{2}}}\right)\,,\\\Delta x'&=\gamma \left(\Delta x-v\,\Delta t\right)\,,\end{aligned}}} with inverse relations Δ t = γ ( Δ t ′ + v Δ x ′ c 2 ) , Δ x = γ ( Δ x ′ + v Δ t ′ ) . {\displaystyle {\begin{aligned}\Delta t&=\gamma \left(\Delta t'+{\frac {v\,\Delta x'}{c^{2}}}\right)\,,\\\Delta x&=\gamma \left(\Delta x'+v\,\Delta t'\right)\,.\end{aligned}}}
where Δ (uppercase delta ) indicates a difference of quantities; e.g., Δ x = x 2 − x 1 for two values of x coordinates, and so on.
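A short numeric illustration of the difference transformations, assuming NumPy, c = 1 and an illustrative relative speed, reproduces time dilation and the relativity of simultaneity.

```python
# Worked numeric example of the difference transformations, assuming NumPy, c = 1,
# and an illustrative relative speed v = 0.8.
import numpy as np

v = 0.8
gamma = 1.0 / np.sqrt(1.0 - v**2)

# Two events at the same place in F (dx = 0): the moving observer measures
# a longer time interval (time dilation).
dt, dx = 1.0, 0.0
print(gamma * (dt - v * dx), gamma)      # dt' equals gamma * dt

# Two simultaneous events in F (dt = 0) at different places: they are not
# simultaneous in F' (relativity of simultaneity).
dt, dx = 0.0, 1.0
print(gamma * (dt - v * dx))             # nonzero dt'
```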
These transformations on differences rather than spatial points or instants of time are useful for a number of reasons:
A critical requirement of the Lorentz transformations is the invariance of the speed of light, a fact used in their derivation, and contained in the transformations themselves. If in F the equation for a pulse of light along the x direction is x = ct , then in F′ the Lorentz transformations give x′ = ct′ , and vice versa, for any − c < v < c .
For relative speeds much less than the speed of light, the Lorentz transformations reduce to the Galilean transformation : [ 18 ] [ 19 ] t ′ ≈ t x ′ ≈ x − v t {\displaystyle {\begin{aligned}t'&\approx t\\x'&\approx x-vt\end{aligned}}} in accordance with the correspondence principle . It is sometimes said that nonrelativistic physics is a physics of "instantaneous action at a distance". [ 20 ]
Three counterintuitive, but correct, predictions of the transformations are:
The use of vectors allows positions and velocities to be expressed in arbitrary directions compactly. A single boost in any direction depends on the full relative velocity vector v with a magnitude | v | = v that cannot equal or exceed c , so that 0 ≤ v < c .
Only time and the coordinates parallel to the direction of relative motion change, while those coordinates perpendicular do not. With this in mind, split the spatial position vector r as measured in F , and r′ as measured in F′ , each into components perpendicular (⊥) and parallel ( ‖ ) to v , r = r ⊥ + r ‖ , r ′ = r ⊥ ′ + r ‖ ′ , {\displaystyle \mathbf {r} =\mathbf {r} _{\perp }+\mathbf {r} _{\|}\,,\quad \mathbf {r} '=\mathbf {r} _{\perp }'+\mathbf {r} _{\|}'\,,} then the transformations are t ′ = γ ( t − r ∥ ⋅ v c 2 ) r ‖ ′ = γ ( r ‖ − v t ) r ⊥ ′ = r ⊥ {\displaystyle {\begin{aligned}t'&=\gamma \left(t-{\frac {\mathbf {r} _{\parallel }\cdot \mathbf {v} }{c^{2}}}\right)\\\mathbf {r} _{\|}'&=\gamma (\mathbf {r} _{\|}-\mathbf {v} t)\\\mathbf {r} _{\perp }'&=\mathbf {r} _{\perp }\end{aligned}}} where · is the dot product . The Lorentz factor γ retains its definition for a boost in any direction, since it depends only on the magnitude of the relative velocity. The definition β = v / c with magnitude 0 ≤ β < 1 is also used by some authors.
Introducing a unit vector n = v / v = β / β in the direction of relative motion, the relative velocity is v = v n with magnitude v and direction n , and vector projection and rejection give respectively r ∥ = ( r ⋅ n ) n , r ⊥ = r − ( r ⋅ n ) n {\displaystyle \mathbf {r} _{\parallel }=(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} \,,\quad \mathbf {r} _{\perp }=\mathbf {r} -(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} }
Accumulating the results gives the full transformations,
t ′ = γ ( t − v n ⋅ r c 2 ) , r ′ = r + ( γ − 1 ) ( r ⋅ n ) n − γ t v n . {\displaystyle {\begin{aligned}t'&=\gamma \left(t-{\frac {v\mathbf {n} \cdot \mathbf {r} }{c^{2}}}\right)\,,\\\mathbf {r} '&=\mathbf {r} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} -\gamma tv\mathbf {n} \,.\end{aligned}}}
The projection and rejection also applies to r′ . For the inverse transformations, exchange r and r′ to switch observed coordinates, and negate the relative velocity v → − v (or simply the unit vector n → − n since the magnitude v is always positive) to obtain
t = γ ( t ′ + r ′ ⋅ v n c 2 ) , r = r ′ + ( γ − 1 ) ( r ′ ⋅ n ) n + γ t ′ v n , {\displaystyle {\begin{aligned}t&=\gamma \left(t'+{\frac {\mathbf {r} '\cdot v\mathbf {n} }{c^{2}}}\right)\,,\\\mathbf {r} &=\mathbf {r} '+(\gamma -1)(\mathbf {r} '\cdot \mathbf {n} )\mathbf {n} +\gamma t'v\mathbf {n} \,,\end{aligned}}}
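The vector form of the boost and its inverse can be exercised directly; the following sketch assumes NumPy, c = 1 and illustrative values for the event and the relative velocity.

```python
# A sketch of the boost in vector form, assuming NumPy and c = 1 (and 0 < |v| < 1).
import numpy as np

def boost(t, r, v):
    """Transform an event (t, r) to the frame moving with velocity v."""
    v = np.asarray(v, float); r = np.asarray(r, float)
    speed = np.linalg.norm(v)
    n = v / speed                                # unit vector along the motion
    gamma = 1.0 / np.sqrt(1.0 - speed**2)
    t_p = gamma * (t - speed * (n @ r))
    r_p = r + (gamma - 1.0) * (r @ n) * n - gamma * t * speed * n
    return t_p, r_p

t, r = 1.5, np.array([0.2, -0.7, 1.1])           # illustrative event
v = np.array([0.3, 0.1, -0.5])                   # illustrative relative velocity

t_p, r_p = boost(t, r, v)
t_back, r_back = boost(t_p, r_p, -v)             # inverse boost: reverse v
print(np.isclose(t_back, t), np.allclose(r_back, r))
print(np.isclose(t_p**2 - r_p @ r_p, t**2 - r @ r))   # invariant interval
```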
The unit vector has the advantage of simplifying equations for a single boost, allows either v or β to be reinstated when convenient, and the rapidity parametrization is immediately obtained by replacing β and βγ . It is not convenient for multiple boosts.
The vectorial relation between relative velocity and rapidity is [ 21 ] β = β n = n tanh ζ , {\displaystyle {\boldsymbol {\beta }}=\beta \mathbf {n} =\mathbf {n} \tanh \zeta \,,} and the "rapidity vector" can be defined as ζ = ζ n = n tanh − 1 β , {\displaystyle {\boldsymbol {\zeta }}=\zeta \mathbf {n} =\mathbf {n} \tanh ^{-1}\beta \,,} each of which serves as a useful abbreviation in some contexts. The magnitude of ζ is the absolute value of the rapidity scalar confined to 0 ≤ ζ < ∞ , which agrees with the range 0 ≤ β < 1 .
Defining the coordinate velocities and Lorentz factor by
taking the differentials in the coordinates and time of the vector transformations, then dividing equations, leads to
The velocities u and u′ are the velocity of some massive object. They can also be for a third inertial frame (say F′′ ), in which case they must be constant . Denote either entity by X . Then X moves with velocity u relative to F , or equivalently with velocity u′ relative to F′ , in turn F′ moves with velocity v relative to F . The inverse transformations can be obtained in a similar way, or as with position coordinates exchange u and u′ , and change v to − v .
The transformation of velocity is useful in stellar aberration , the Fizeau experiment , and the relativistic Doppler effect .
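Since the explicit velocity-transformation formulae are not reproduced above, the sketch below states the standard composition rule for the parallel and perpendicular velocity components and checks it, assuming NumPy and c = 1, against a direct boost of a straight worldline; all numerical values are illustrative.

```python
# A sketch of the relativistic velocity transformation, assuming NumPy and c = 1.
# The standard composition formula is checked against a direct boost of a worldline.
import numpy as np

def boost(t, r, v):
    v = np.asarray(v, float); r = np.asarray(r, float)
    speed = np.linalg.norm(v); n = v / speed
    gamma = 1.0 / np.sqrt(1.0 - speed**2)
    return (gamma * (t - speed * (n @ r)),
            r + (gamma - 1.0) * (r @ n) * n - gamma * t * speed * n)

def velocity_in_primed_frame(u, v):
    """Velocity u (measured in F) as seen from F', which moves with v relative to F."""
    u = np.asarray(u, float); v = np.asarray(v, float)
    gamma_v = 1.0 / np.sqrt(1.0 - v @ v)
    n = v / np.linalg.norm(v)
    u_par = (u @ n) * n                      # component parallel to v
    u_perp = u - u_par                       # component perpendicular to v
    return (u_par - v + u_perp / gamma_v) / (1.0 - u @ v)

u = np.array([0.2, 0.5, -0.1])               # illustrative particle velocity in F
v = np.array([0.4, -0.2, 0.3])               # illustrative velocity of F' relative to F

# Boost two events on the particle's straight worldline and take differences.
t0, t1 = 0.0, 1.0
tp0, rp0 = boost(t0, u * t0, v)
tp1, rp1 = boost(t1, u * t1, v)
u_prime_direct = (rp1 - rp0) / (tp1 - tp0)

print(np.allclose(u_prime_direct, velocity_in_primed_frame(u, v)))
```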
The Lorentz transformations of acceleration can be similarly obtained by taking differentials in the velocity vectors, and dividing these by the time differential.
In general, given four quantities A and Z = ( Z x , Z y , Z z ) and their Lorentz-boosted counterparts A′ and Z′ = ( Z′ x , Z′ y , Z′ z ) , a relation of the form A 2 − Z ⋅ Z = A ′ 2 − Z ′ ⋅ Z ′ {\displaystyle A^{2}-\mathbf {Z} \cdot \mathbf {Z} ={A'}^{2}-\mathbf {Z} '\cdot \mathbf {Z} '} implies the quantities transform under Lorentz transformations similar to the transformation of spacetime coordinates; A ′ = γ ( A − v n ⋅ Z c ) , Z ′ = Z + ( γ − 1 ) ( Z ⋅ n ) n − γ A v n c . {\displaystyle {\begin{aligned}A'&=\gamma \left(A-{\frac {v\mathbf {n} \cdot \mathbf {Z} }{c}}\right)\,,\\\mathbf {Z} '&=\mathbf {Z} +(\gamma -1)(\mathbf {Z} \cdot \mathbf {n} )\mathbf {n} -{\frac {\gamma Av\mathbf {n} }{c}}\,.\end{aligned}}}
The decomposition of Z (and Z′ ) into components perpendicular and parallel to v is exactly the same as for the position vector, as is the process of obtaining the inverse transformations (exchange ( A , Z ) and ( A′ , Z′ ) to switch observed quantities, and reverse the direction of relative motion by the substitution n ↦ − n ).
The quantities ( A , Z ) collectively make up a four-vector , where A is the "timelike component", and Z the "spacelike component". Examples of A and Z are the following:
For a given object (e.g., particle, fluid, field, material), if A or Z correspond to properties specific to the object like its charge density , mass density , spin , etc., its properties can be fixed in the rest frame of that object. Then the Lorentz transformations give the corresponding properties in a frame moving relative to the object with constant velocity. This breaks some notions taken for granted in non-relativistic physics. For example, the energy E of an object is a scalar in non-relativistic mechanics, but not in relativistic mechanics because energy changes under Lorentz transformations; its value is different for various inertial frames. In the rest frame of an object, it has a rest energy and zero momentum. In a boosted frame its energy is different and it appears to have a momentum. Similarly, in non-relativistic quantum mechanics the spin of a particle is a constant vector, but in relativistic quantum mechanics spin s depends on relative motion. In the rest frame of the particle, the spin pseudovector can be fixed to be its ordinary non-relativistic spin with a zero timelike quantity s t , however a boosted observer will perceive a nonzero timelike component and an altered spin. [ 22 ]
Not all quantities are invariant in the form as shown above, for example orbital angular momentum L does not have a timelike quantity, and neither does the electric field E nor the magnetic field B . The definition of angular momentum is L = r × p , and in a boosted frame the altered angular momentum is L′ = r′ × p′ . Applying this definition using the transformations of coordinates and momentum leads to the transformation of angular momentum. It turns out L transforms with another vector quantity N = ( E / c 2 ) r − t p related to boosts, see relativistic angular momentum for details. For the case of the E and B fields, the transformations cannot be obtained as directly using vector algebra. The Lorentz force is the definition of these fields, and in F it is F = q ( E + v × B ) while in F′ it is F′ = q ( E′ + v′ × B′ ) . A method of deriving the EM field transformations in an efficient way which also illustrates the unity of the electromagnetic field uses tensor algebra, given below .
Throughout, italic non-bold capital letters are 4 × 4 matrices, while non-italic bold letters are 3 × 3 matrices.
Writing the coordinates in column vectors and the Minkowski metric η as a square matrix X ′ = [ c t ′ x ′ y ′ z ′ ] , η = [ − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] , X = [ c t x y z ] {\displaystyle X'={\begin{bmatrix}c\,t'\\x'\\y'\\z'\end{bmatrix}}\,,\quad \eta ={\begin{bmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}\,,\quad X={\begin{bmatrix}c\,t\\x\\y\\z\end{bmatrix}}} the spacetime interval takes the form (superscript T denotes transpose ) X ⋅ X = X T η X = X ′ T η X ′ {\displaystyle X\cdot X=X^{\mathrm {T} }\eta X={X'}^{\mathrm {T} }\eta {X'}} and is invariant under a Lorentz transformation X ′ = Λ X {\displaystyle X'=\Lambda X} where Λ is a square matrix which can depend on parameters.
The set of all Lorentz transformations Λ {\displaystyle \Lambda } in this article is denoted L {\displaystyle {\mathcal {L}}} . This set together with matrix multiplication forms a group , in this context known as the Lorentz group . Also, the above expression X · X is a quadratic form of signature (3,1) on spacetime, and the group of transformations which leaves this quadratic form invariant is the indefinite orthogonal group O(3,1), a Lie group . In other words, the Lorentz group is O(3,1). As presented in this article, any Lie groups mentioned are matrix Lie groups . In this context the operation of composition amounts to matrix multiplication .
From the invariance of the spacetime interval it follows η = Λ T η Λ {\displaystyle \eta =\Lambda ^{\mathrm {T} }\eta \Lambda } and this matrix equation contains the general conditions on the Lorentz transformation to ensure invariance of the spacetime interval. Taking the determinant of the equation using the product rule [ nb 4 ] gives immediately [ det ( Λ ) ] 2 = 1 ⇒ det ( Λ ) = ± 1 {\displaystyle \left[\det(\Lambda )\right]^{2}=1\quad \Rightarrow \quad \det(\Lambda )=\pm 1}
Writing the Minkowski metric as a block matrix, and the Lorentz transformation in the most general form, η = [ − 1 0 0 I ] , Λ = [ Γ − a T − b M ] , {\displaystyle \eta ={\begin{bmatrix}-1&0\\0&\mathbf {I} \end{bmatrix}}\,,\quad \Lambda ={\begin{bmatrix}\Gamma &-\mathbf {a} ^{\mathrm {T} }\\-\mathbf {b} &\mathbf {M} \end{bmatrix}}\,,} carrying out the block matrix multiplications obtains general conditions on Γ, a , b , M to ensure relativistic invariance. Not much information can be directly extracted from all the conditions, however one of the results Γ 2 = 1 + b T b {\displaystyle \Gamma ^{2}=1+\mathbf {b} ^{\mathrm {T} }\mathbf {b} } is useful; b T b ≥ 0 always so it follows that Γ 2 ≥ 1 ⇒ Γ ≤ − 1 , Γ ≥ 1 {\displaystyle \Gamma ^{2}\geq 1\quad \Rightarrow \quad \Gamma \leq -1\,,\quad \Gamma \geq 1}
The negative inequality may be unexpected, because Γ multiplies the time coordinate and this has an effect on time symmetry . If the positive inequality holds, then Γ is the Lorentz factor.
The determinant and inequality provide four ways to classify Lorentz transformations (herein LTs for brevity). Any particular LT has only one determinant sign and only one inequality. There are four sets which include every possible pair given by the intersections ("n"-shaped symbol meaning "and") of these classifying sets.
where "+" and "−" indicate the determinant sign, while "↑" for ≥ and "↓" for ≤ denote the inequalities.
The full Lorentz group splits into the union ("u"-shaped symbol meaning "or") of four disjoint sets L = L + ↑ ∪ L − ↑ ∪ L + ↓ ∪ L − ↓ {\displaystyle {\mathcal {L}}={\mathcal {L}}_{+}^{\uparrow }\cup {\mathcal {L}}_{-}^{\uparrow }\cup {\mathcal {L}}_{+}^{\downarrow }\cup {\mathcal {L}}_{-}^{\downarrow }}
A subgroup of a group must be closed under the same operation of the group (here matrix multiplication). In other words, for two Lorentz transformations Λ and L from a particular subgroup, the composite Lorentz transformations Λ L and L Λ must be in the same subgroup as Λ and L . This is not always the case: the composition of two antichronous Lorentz transformations is orthochronous, and the composition of two improper Lorentz transformations is proper. In other words, while the sets L + ↑ {\displaystyle {\mathcal {L}}_{+}^{\uparrow }} , L + {\displaystyle {\mathcal {L}}_{+}} , L ↑ {\displaystyle {\mathcal {L}}^{\uparrow }} , and L 0 = L + ↑ ∪ L − ↓ {\displaystyle {\mathcal {L}}_{0}={\mathcal {L}}_{+}^{\uparrow }\cup {\mathcal {L}}_{-}^{\downarrow }} all form subgroups, the sets containing improper and/or antichronous transformations without enough proper orthochronous transformations (e.g. L + ↓ {\displaystyle {\mathcal {L}}_{+}^{\downarrow }} , L − ↓ {\displaystyle {\mathcal {L}}_{-}^{\downarrow }} , L − ↑ {\displaystyle {\mathcal {L}}_{-}^{\uparrow }} ) do not form subgroups.
If a Lorentz covariant 4-vector is measured in one inertial frame with result X {\displaystyle X} , and the same measurement made in another inertial frame (with the same orientation and origin) gives result X ′ {\displaystyle X'} , the two results will be related by X ′ = B ( v ) X {\displaystyle X'=B(\mathbf {v} )X} where the boost matrix B ( v ) {\displaystyle B(\mathbf {v} )} represents the rotation-free Lorentz transformation between the unprimed and primed frames and v {\displaystyle \mathbf {v} } is the velocity of the primed frame as seen from the unprimed frame. The matrix is given by [ 23 ] B ( v ) = [ γ − γ v x / c − γ v y / c − γ v z / c − γ v x / c 1 + ( γ − 1 ) v x 2 v 2 ( γ − 1 ) v x v y v 2 ( γ − 1 ) v x v z v 2 − γ v y / c ( γ − 1 ) v y v x v 2 1 + ( γ − 1 ) v y 2 v 2 ( γ − 1 ) v y v z v 2 − γ v z / c ( γ − 1 ) v z v x v 2 ( γ − 1 ) v z v y v 2 1 + ( γ − 1 ) v z 2 v 2 ] = [ γ − γ β → T − γ β → I + ( γ − 1 ) β → β → T β 2 ] , {\displaystyle B(\mathbf {v} )={\begin{bmatrix}\gamma &-\gamma v_{x}/c&-\gamma v_{y}/c&-\gamma v_{z}/c\\-\gamma v_{x}/c&1+(\gamma -1){\dfrac {v_{x}^{2}}{v^{2}}}&(\gamma -1){\dfrac {v_{x}v_{y}}{v^{2}}}&(\gamma -1){\dfrac {v_{x}v_{z}}{v^{2}}}\\-\gamma v_{y}/c&(\gamma -1){\dfrac {v_{y}v_{x}}{v^{2}}}&1+(\gamma -1){\dfrac {v_{y}^{2}}{v^{2}}}&(\gamma -1){\dfrac {v_{y}v_{z}}{v^{2}}}\\-\gamma v_{z}/c&(\gamma -1){\dfrac {v_{z}v_{x}}{v^{2}}}&(\gamma -1){\dfrac {v_{z}v_{y}}{v^{2}}}&1+(\gamma -1){\dfrac {v_{z}^{2}}{v^{2}}}\end{bmatrix}}={\begin{bmatrix}\gamma &-\gamma {\vec {\beta }}^{T}\\-\gamma {\vec {\beta }}&I+(\gamma -1){\dfrac {{\vec {\beta }}{\vec {\beta }}^{T}}{\beta ^{2}}}\end{bmatrix}},}
where v = v x 2 + v y 2 + v z 2 {\textstyle v={\sqrt {v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}}} is the magnitude of the velocity and γ = 1 1 − v 2 c 2 {\textstyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}} is the Lorentz factor. This formula represents a passive transformation, as it describes how the coordinates of the measured quantity change from the unprimed frame to the primed frame. The active transformation is given by B ( − v ) {\displaystyle B(-\mathbf {v} )} .
If a frame F′ is boosted with velocity u relative to frame F , and another frame F′′ is boosted with velocity v relative to F′ , the separate boosts are X ″ = B ( v ) X ′ , X ′ = B ( u ) X {\displaystyle X''=B(\mathbf {v} )X'\,,\quad X'=B(\mathbf {u} )X} and the composition of the two boosts connects the coordinates in F′′ and F , X ″ = B ( v ) B ( u ) X . {\displaystyle X''=B(\mathbf {v} )B(\mathbf {u} )X\,.} Successive transformations act on the left. If u and v are collinear (parallel or antiparallel along the same line of relative motion), the boost matrices commute : B ( v ) B ( u ) = B ( u ) B ( v ) . This composite transformation happens to be another boost, B ( w ) , where w is collinear with u and v .
If u and v are not collinear but in different directions, the situation is considerably more complicated. Lorentz boosts along different directions do not commute: B ( v ) B ( u ) and B ( u ) B ( v ) are not equal. Although each of these compositions is not a single boost, each composition is still a Lorentz transformation as it preserves the spacetime interval. It turns out the composition of any two Lorentz boosts is equivalent to a boost followed or preceded by a rotation on the spatial coordinates, in the form of R ( ρ ) B ( w ) or B ( w ) R ( ρ ) . The w and w are composite velocities , while ρ and ρ are rotation parameters (e.g. axis-angle variables, Euler angles , etc.). The rotation in block matrix form is simply R ( ρ ) = [ 1 0 0 R ( ρ ) ] , {\displaystyle \quad R({\boldsymbol {\rho }})={\begin{bmatrix}1&0\\0&\mathbf {R} ({\boldsymbol {\rho }})\end{bmatrix}}\,,} where R ( ρ ) is a 3 × 3 rotation matrix , which rotates any 3-dimensional vector in one sense (active transformation), or equivalently the coordinate frame in the opposite sense (passive transformation). It is not simple to connect w and ρ (or w and ρ ) to the original boost parameters u and v . In a composition of boosts, the R matrix is named the Wigner rotation , and gives rise to the Thomas precession . These articles give the explicit formulae for the composite transformation matrices, including expressions for w , ρ , w , ρ .
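These composition properties are easy to verify numerically. The sketch below, assuming NumPy and c = 1 with illustrative speeds, shows that collinear boosts commute and compose to a single boost, whereas boosts along different directions do not commute and yield a non-symmetric matrix, signalling the presence of a Wigner rotation (a pure boost matrix is symmetric).

```python
# Illustrative check of boost composition, assuming NumPy and c = 1.
import numpy as np

def boost_matrix(beta):
    beta = np.asarray(beta, float)
    gamma = 1.0 / np.sqrt(1.0 - beta @ beta)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta
    L[1:, 1:] += (gamma**2 / (1.0 + gamma)) * np.outer(beta, beta)
    return L

n = np.array([1.0, 0.0, 0.0])
Bu, Bv = boost_matrix(0.5 * n), boost_matrix(0.3 * n)      # collinear boosts
print(np.allclose(Bu @ Bv, Bv @ Bu))                       # True: they commute
w = (0.5 + 0.3) / (1.0 + 0.5 * 0.3)                        # composite speed
print(np.allclose(Bu @ Bv, boost_matrix(w * n)))           # True: a single boost

Bx, By = boost_matrix([0.5, 0.0, 0.0]), boost_matrix([0.0, 0.3, 0.0])
M = By @ Bx
print(np.allclose(Bx @ By, M))       # False: non-collinear boosts do not commute
print(np.allclose(M, M.T))           # False: not a pure boost (Wigner rotation)
```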
In this article the axis-angle representation is used for ρ . The rotation is about an axis in the direction of a unit vector e , through angle θ (positive anticlockwise, negative clockwise, according to the right-hand rule ). The "axis-angle vector" θ = θ e {\displaystyle {\boldsymbol {\theta }}=\theta \mathbf {e} } will serve as a useful abbreviation.
Spatial rotations alone are also Lorentz transformations since they leave the spacetime interval invariant. Like boosts, successive rotations about different axes do not commute. Unlike boosts, the composition of any two rotations is equivalent to a single rotation. Some other similarities and differences between the boost and rotation matrices include:
The most general proper Lorentz transformation Λ( v , θ ) includes a boost and rotation together, and is a nonsymmetric matrix. As special cases, Λ( 0 , θ ) = R ( θ ) and Λ( v , 0 ) = B ( v ) . An explicit form of the general Lorentz transformation is cumbersome to write down and will not be given here. Nevertheless, closed form expressions for the transformation matrices will be given below using group theoretical arguments. It will be easier to use the rapidity parametrization for boosts, in which case one writes Λ( ζ , θ ) and B ( ζ ) .
The set of transformations { B ( ζ ) , R ( θ ) , Λ ( ζ , θ ) } {\displaystyle \{B({\boldsymbol {\zeta }}),R({\boldsymbol {\theta }}),\Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})\}} with matrix multiplication as the operation of composition forms a group, called the "restricted Lorentz group", and is the special indefinite orthogonal group SO + (3,1). (The plus sign indicates that it preserves the orientation of the temporal dimension).
For simplicity, look at the infinitesimal Lorentz boost in the x direction (examining a boost in any other direction, or rotation about any axis, follows an identical procedure). The infinitesimal boost is a small boost away from the identity, obtained by the Taylor expansion of the boost matrix to first order about ζ = 0 , B x = I + ζ ∂ B x ∂ ζ | ζ = 0 + ⋯ {\displaystyle B_{x}=I+\zeta \left.{\frac {\partial B_{x}}{\partial \zeta }}\right|_{\zeta =0}+\cdots } where the higher order terms not shown are negligible because ζ is small, and B x is simply the boost matrix in the x direction. The derivative of the matrix is the matrix of derivatives (of the entries, with respect to the same variable), and it is understood the derivatives are found first then evaluated at ζ = 0 , ∂ B x ∂ ζ | ζ = 0 = − K x . {\displaystyle \left.{\frac {\partial B_{x}}{\partial \zeta }}\right|_{\zeta =0}=-K_{x}\,.}
For now, K x is defined by this result (its significance will be explained shortly). In the limit of an infinite number of infinitely small steps, the finite boost transformation in the form of a matrix exponential is obtained B x = lim N → ∞ ( I − ζ N K x ) N = e − ζ K x {\displaystyle B_{x}=\lim _{N\to \infty }\left(I-{\frac {\zeta }{N}}K_{x}\right)^{N}=e^{-\zeta K_{x}}} where the limit definition of the exponential has been used (see also characterizations of the exponential function ). More generally [ nb 5 ]
B ( ζ ) = e − ζ ⋅ K , R ( θ ) = e θ ⋅ J . {\displaystyle B({\boldsymbol {\zeta }})=e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} }\,,\quad R({\boldsymbol {\theta }})=e^{{\boldsymbol {\theta }}\cdot \mathbf {J} }\,.}
The axis-angle vector θ and rapidity vector ζ are altogether six continuous variables which make up the group parameters (in this particular representation), and the generators of the group are K = ( K x , K y , K z ) and J = ( J x , J y , J z ) , each vectors of matrices with the explicit forms [ nb 6 ]
K x = [ 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 ] , K y = [ 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 ] , K z = [ 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 ] J x = [ 0 0 0 0 0 0 0 0 0 0 0 − 1 0 0 1 0 ] , J y = [ 0 0 0 0 0 0 0 1 0 0 0 0 0 − 1 0 0 ] , J z = [ 0 0 0 0 0 0 − 1 0 0 1 0 0 0 0 0 0 ] {\displaystyle {\begin{alignedat}{3}K_{x}&={\begin{bmatrix}0&1&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\\\end{bmatrix}}\,,\quad &K_{y}&={\begin{bmatrix}0&0&1&0\\0&0&0&0\\1&0&0&0\\0&0&0&0\end{bmatrix}}\,,\quad &K_{z}&={\begin{bmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{bmatrix}}\\[10mu]J_{x}&={\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&0&-1\\0&0&1&0\\\end{bmatrix}}\,,\quad &J_{y}&={\begin{bmatrix}0&0&0&0\\0&0&0&1\\0&0&0&0\\0&-1&0&0\end{bmatrix}}\,,\quad &J_{z}&={\begin{bmatrix}0&0&0&0\\0&0&-1&0\\0&1&0&0\\0&0&0&0\end{bmatrix}}\end{alignedat}}}
These are all defined in an analogous way to K x above, although the minus signs in the boost generators are conventional. Physically, the generators of the Lorentz group correspond to important symmetries in spacetime: J are the rotation generators which correspond to angular momentum , and K are the boost generators which correspond to the motion of the system in spacetime. The derivative of any smooth curve C ( t ) with C (0) = I in the group depending on some group parameter t with respect to that group parameter, evaluated at t = 0 , serves as a definition of a corresponding group generator G , and this reflects an infinitesimal transformation away from the identity. The smooth curve can always be taken as an exponential as the exponential will always map G smoothly back into the group via t → exp( tG ) for all t ; this curve will yield G again when differentiated at t = 0 .
Expanding the exponentials in their Taylor series obtains B ( ζ ) = I − sinh ζ ( n ⋅ K ) + ( cosh ζ − 1 ) ( n ⋅ K ) 2 {\displaystyle B({\boldsymbol {\zeta }})=I-\sinh \zeta (\mathbf {n} \cdot \mathbf {K} )+(\cosh \zeta -1)(\mathbf {n} \cdot \mathbf {K} )^{2}} R ( θ ) = I + sin θ ( e ⋅ J ) + ( 1 − cos θ ) ( e ⋅ J ) 2 . {\displaystyle R({\boldsymbol {\theta }})=I+\sin \theta (\mathbf {e} \cdot \mathbf {J} )+(1-\cos \theta )(\mathbf {e} \cdot \mathbf {J} )^{2}\,.} which compactly reproduce the boost and rotation matrices as given in the previous section.
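The exponential formulae can be checked against these closed forms; the sketch below assumes NumPy and SciPy (for the matrix exponential) and uses illustrative values for the rapidity, angle and directions.

```python
# Sketch: the exponential of the generators reproduces the closed-form boost and
# rotation matrices. Assumes NumPy and SciPy; parameters are illustrative.
import numpy as np
from scipy.linalg import expm

Kx = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], float)
Ky = np.array([[0,0,1,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]], float)
Kz = np.array([[0,0,0,1],[0,0,0,0],[0,0,0,0],[1,0,0,0]], float)
Jx = np.array([[0,0,0,0],[0,0,0,0],[0,0,0,-1],[0,0,1,0]], float)
Jy = np.array([[0,0,0,0],[0,0,0,1],[0,0,0,0],[0,-1,0,0]], float)
Jz = np.array([[0,0,0,0],[0,0,-1,0],[0,1,0,0],[0,0,0,0]], float)

zeta, n = 0.7, np.array([0.6, 0.0, 0.8])       # rapidity and unit boost direction
theta, e = 0.4, np.array([0.0, 1.0, 0.0])      # angle and unit rotation axis

nK = n[0]*Kx + n[1]*Ky + n[2]*Kz
eJ = e[0]*Jx + e[1]*Jy + e[2]*Jz

B_closed = np.eye(4) - np.sinh(zeta)*nK + (np.cosh(zeta) - 1.0)*(nK @ nK)
R_closed = np.eye(4) + np.sin(theta)*eJ + (1.0 - np.cos(theta))*(eJ @ eJ)

print(np.allclose(expm(-zeta * nK), B_closed))   # boost from the exponential map
print(np.allclose(expm(theta * eJ), R_closed))   # rotation from the exponential map
```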
It has been stated that the general proper Lorentz transformation is a product of a boost and rotation. At the infinitesimal level the product Λ = ( I − ζ ⋅ K + ⋯ ) ( I + θ ⋅ J + ⋯ ) = ( I + θ ⋅ J + ⋯ ) ( I − ζ ⋅ K + ⋯ ) = I − ζ ⋅ K + θ ⋅ J + ⋯ {\displaystyle {\begin{aligned}\Lambda &=(I-{\boldsymbol {\zeta }}\cdot \mathbf {K} +\cdots )(I+{\boldsymbol {\theta }}\cdot \mathbf {J} +\cdots )\\&=(I+{\boldsymbol {\theta }}\cdot \mathbf {J} +\cdots )(I-{\boldsymbol {\zeta }}\cdot \mathbf {K} +\cdots )\\&=I-{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} +\cdots \end{aligned}}} is commutative because only linear terms are required (products like ( θ · J )( ζ · K ) and ( ζ · K )( θ · J ) count as higher order terms and are negligible). Taking the limit as before leads to the finite transformation in the form of an exponential Λ ( ζ , θ ) = e − ζ ⋅ K + θ ⋅ J . {\displaystyle \Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})=e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} }.}
The converse is also true, but the decomposition of a finite general Lorentz transformation into such factors is nontrivial. In particular, e − ζ ⋅ K + θ ⋅ J ≠ e − ζ ⋅ K e θ ⋅ J , {\displaystyle e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} }\neq e^{-{\boldsymbol {\zeta }}\cdot \mathbf {K} }e^{{\boldsymbol {\theta }}\cdot \mathbf {J} },} because the generators do not commute. For a description of how to find the factors of a general Lorentz transformation in terms of a boost and a rotation in principle (this usually does not yield an intelligible expression in terms of generators J and K ), see Wigner rotation . If, on the other hand, the decomposition is given in terms of the generators, and one wants to find the product in terms of the generators, then the Baker–Campbell–Hausdorff formula applies.
Lorentz generators can be added together, or multiplied by real numbers, to obtain more Lorentz generators. In other words, the set of all Lorentz generators V = { ζ ⋅ K + θ ⋅ J } {\displaystyle V=\{{\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} \}} together with the operations of ordinary matrix addition and multiplication of a matrix by a number , forms a vector space over the real numbers. [ nb 7 ] The generators J x , J y , J z , K x , K y , K z form a basis set of V , and the components of the axis-angle and rapidity vectors, θ x , θ y , θ z , ζ x , ζ y , ζ z , are the coordinates of a Lorentz generator with respect to this basis. [ nb 8 ]
Three of the commutation relations of the Lorentz generators are [ J x , J y ] = J z , [ K x , K y ] = − J z , [ J x , K y ] = K z , {\displaystyle [J_{x},J_{y}]=J_{z}\,,\quad [K_{x},K_{y}]=-J_{z}\,,\quad [J_{x},K_{y}]=K_{z}\,,} where the bracket [ A , B ] = AB − BA is known as the commutator , and the other relations can be found by taking cyclic permutations of x , y , z components (i.e. change x to y , y to z , and z to x , repeat).
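A direct numerical check of these commutators, assuming NumPy and the explicit generator matrices written out above:

```python
# Numerical check of the quoted commutation relations, assuming NumPy.
import numpy as np

Kx = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], float)
Ky = np.array([[0,0,1,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]], float)
Kz = np.array([[0,0,0,1],[0,0,0,0],[0,0,0,0],[1,0,0,0]], float)
Jx = np.array([[0,0,0,0],[0,0,0,0],[0,0,0,-1],[0,0,1,0]], float)
Jy = np.array([[0,0,0,0],[0,0,0,1],[0,0,0,0],[0,-1,0,0]], float)
Jz = np.array([[0,0,0,0],[0,0,-1,0],[0,1,0,0],[0,0,0,0]], float)

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(Jx, Jy), Jz))     # [Jx, Jy] =  Jz
print(np.allclose(comm(Kx, Ky), -Jz))    # [Kx, Ky] = -Jz
print(np.allclose(comm(Jx, Ky), Kz))     # [Jx, Ky] =  Kz
```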
These commutation relations, and the vector space of generators, fulfill the definition of the Lie algebra s o ( 3 , 1 ) {\displaystyle {\mathfrak {so}}(3,1)} . In summary, a Lie algebra is defined as a vector space V over a field of numbers, and with a binary operation [ , ] (called a Lie bracket in this context) on the elements of the vector space, satisfying the axioms of bilinearity , alternativity , and the Jacobi identity . Here the operation [ , ] is the commutator which satisfies all of these axioms, the vector space is the set of Lorentz generators V as given previously, and the field is the set of real numbers.
Linking terminology used in mathematics and physics: A group generator is any element of the Lie algebra. A group parameter is a component of a coordinate vector representing an arbitrary element of the Lie algebra with respect to some basis. A basis, then, is a set of generators being a basis of the Lie algebra in the usual vector space sense.
The exponential map from the Lie algebra to the Lie group, exp : s o ( 3 , 1 ) → S O ( 3 , 1 ) , {\displaystyle \exp \,:\,{\mathfrak {so}}(3,1)\to \mathrm {SO} (3,1),} provides a one-to-one correspondence between small enough neighborhoods of the origin of the Lie algebra and neighborhoods of the identity element of the Lie group. In the case of the Lorentz group, the exponential map is just the matrix exponential . Globally, the exponential map is not one-to-one, but in the case of the Lorentz group, it is surjective (onto). Hence any group element in the connected component of the identity can be expressed as an exponential of an element of the Lie algebra.
Lorentz transformations also include parity inversion P = [ 1 0 0 − I ] {\displaystyle P={\begin{bmatrix}1&0\\0&-\mathbf {I} \end{bmatrix}}} which negates all the spatial coordinates only, and time reversal T = [ − 1 0 0 I ] {\displaystyle T={\begin{bmatrix}-1&0\\0&\mathbf {I} \end{bmatrix}}} which negates the time coordinate only, because these transformations leave the spacetime interval invariant. Here I is the 3 × 3 identity matrix . These are both symmetric, they are their own inverses (see involution (mathematics) ), and each have determinant −1. This latter property makes them improper transformations.
If Λ is a proper orthochronous Lorentz transformation, then T Λ is improper antichronous, P Λ is improper orthochronous, and TP Λ = PT Λ is proper antichronous.
Two other spacetime symmetries have not been accounted for. In order for the spacetime interval to be invariant, it can be shown [ 24 ] that it is necessary and sufficient for the coordinate transformation to be of the form X ′ = Λ X + C {\displaystyle X'=\Lambda X+C} where C is a constant column containing translations in time and space. If C ≠ 0, this is an inhomogeneous Lorentz transformation or Poincaré transformation . [ 25 ] [ 26 ] If C = 0, this is a homogeneous Lorentz transformation . Poincaré transformations are not dealt with further in this article.
Writing the general matrix transformation of coordinates as the matrix equation [ x ′ 0 x ′ 1 x ′ 2 x ′ 3 ] = [ Λ 0 0 Λ 0 1 Λ 0 2 Λ 0 3 Λ 1 0 Λ 1 1 Λ 1 2 Λ 1 3 Λ 2 0 Λ 2 1 Λ 2 2 Λ 2 3 Λ 3 0 Λ 3 1 Λ 3 2 Λ 3 3 ] [ x 0 x 1 x 2 x 3 ] {\displaystyle {\begin{bmatrix}{x'}^{0}\\{x'}^{1}\\{x'}^{2}\\{x'}^{3}\end{bmatrix}}={\begin{bmatrix}{\Lambda ^{0}}_{0}&{\Lambda ^{0}}_{1}&{\Lambda ^{0}}_{2}&{\Lambda ^{0}}_{3}\\{\Lambda ^{1}}_{0}&{\Lambda ^{1}}_{1}&{\Lambda ^{1}}_{2}&{\Lambda ^{1}}_{3}\\{\Lambda ^{2}}_{0}&{\Lambda ^{2}}_{1}&{\Lambda ^{2}}_{2}&{\Lambda ^{2}}_{3}\\{\Lambda ^{3}}_{0}&{\Lambda ^{3}}_{1}&{\Lambda ^{3}}_{2}&{\Lambda ^{3}}_{3}\end{bmatrix}}{\begin{bmatrix}x^{0}\\x^{1}\\x^{2}\\x^{3}\end{bmatrix}}} allows the transformation of other physical quantities that cannot be expressed as four-vectors; e.g., tensors or spinors of any order in 4-dimensional spacetime, to be defined. In the corresponding tensor index notation , the above matrix expression is x ′ ν = Λ ν μ x μ , {\displaystyle {x'}^{\nu }={\Lambda ^{\nu }}_{\mu }x^{\mu },}
where lower and upper indices label covariant and contravariant components respectively, [ 27 ] and the summation convention is applied. It is a standard convention to use Greek indices that take the value 0 for time components, and 1, 2, 3 for space components, while Latin indices simply take the values 1, 2, 3, for spatial components (the opposite for Landau and Lifshitz). Note that the first index (reading left to right) corresponds in the matrix notation to a row index . The second index corresponds to the column index.
The transformation matrix is universal for all four-vectors , not just 4-dimensional spacetime coordinates. If A is any four-vector, then in tensor index notation A ′ ν = Λ ν μ A μ . {\displaystyle {A'}^{\nu }={\Lambda ^{\nu }}_{\mu }A^{\mu }\,.}
Alternatively, one writes A ν ′ = Λ ν ′ μ A μ . {\displaystyle A^{\nu '}={\Lambda ^{\nu '}}_{\mu }A^{\mu }\,.} in which the primed indices denote the indices of A in the primed frame. For a general n -component object one may write X ′ α = Π ( Λ ) α β X β , {\displaystyle {X'}^{\alpha }={\Pi (\Lambda )^{\alpha }}_{\beta }X^{\beta }\,,} where Π is the appropriate representation of the Lorentz group , an n × n matrix for every Λ . In this case, the indices should not be thought of as spacetime indices (sometimes called Lorentz indices), and they run from 1 to n . E.g., if X is a bispinor , then the indices are called Dirac indices .
There are also vector quantities with covariant indices. They are generally obtained from their corresponding objects with contravariant indices by the operation of lowering an index ; e.g., x ν = η μ ν x μ , {\displaystyle x_{\nu }=\eta _{\mu \nu }x^{\mu },} where η is the metric tensor . (The linked article also provides more information about what the operation of raising and lowering indices really is mathematically.) The inverse of this transformation is given by x μ = η μ ν x ν , {\displaystyle x^{\mu }=\eta ^{\mu \nu }x_{\nu },} where, when viewed as matrices, η μν is the inverse of η μν . As it happens, η μν = η μν . This is referred to as raising an index . To transform a covariant vector A μ , first raise its index, then transform it according to the same rule as for contravariant 4 -vectors, then finally lower the index; A ′ ν = η ρ ν Λ ρ σ η μ σ A μ . {\displaystyle {A'}_{\nu }=\eta _{\rho \nu }{\Lambda ^{\rho }}_{\sigma }\eta ^{\mu \sigma }A_{\mu }.}
But η ρ ν Λ ρ σ η μ σ = ( Λ − 1 ) μ ν , {\displaystyle \eta _{\rho \nu }{\Lambda ^{\rho }}_{\sigma }\eta ^{\mu \sigma }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },}
That is, it is the ( μ , ν ) -component of the inverse Lorentz transformation. One defines (as a matter of notation), Λ ν μ ≡ ( Λ − 1 ) μ ν , {\displaystyle {\Lambda _{\nu }}^{\mu }\equiv {\left(\Lambda ^{-1}\right)^{\mu }}_{\nu },} and may in this notation write A ′ ν = Λ ν μ A μ . {\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }.}
Now for a subtlety. The implied summation on the right hand side of A ′ ν = Λ ν μ A μ = ( Λ − 1 ) μ ν A μ {\displaystyle {A'}_{\nu }={\Lambda _{\nu }}^{\mu }A_{\mu }={\left(\Lambda ^{-1}\right)^{\mu }}_{\nu }A_{\mu }} is running over a row index of the matrix representing Λ −1 . Thus, in terms of matrices, this transformation should be thought of as the inverse transpose of Λ acting on the column vector A μ . That is, in pure matrix notation, A ′ = ( Λ − 1 ) T A . {\displaystyle A'=\left(\Lambda ^{-1}\right)^{\mathrm {T} }A.}
This means exactly that covariant vectors (thought of as column matrices) transform according to the dual representation of the standard representation of the Lorentz group. This notion generalizes to general representations, simply replace Λ with Π(Λ) .
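The inverse-transpose rule can be illustrated numerically; the sketch below assumes NumPy, an x -boost with an illustrative speed, and the metric signature (−, +, +, +) used in this section.

```python
# Sketch of the covariant transformation rule, assuming NumPy, c = 1, an
# illustrative x-boost, and metric signature (-, +, +, +).
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.array([[gamma, -gamma*beta, 0, 0],
                [-gamma*beta, gamma, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]], float)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

A_upper = np.array([1.0, 2.0, -0.5, 0.3])    # contravariant components A^mu

# Lower the index, transform with the inverse transpose of Lam, and compare
# with lowering the index of the transformed contravariant vector.
A_lower = eta @ A_upper
A_lower_prime = np.linalg.inv(Lam).T @ A_lower
print(np.allclose(A_lower_prime, eta @ (Lam @ A_upper)))
```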
If A and B are linear operators on vector spaces U and V , then a linear operator A ⊗ B may be defined on the tensor product of U and V , denoted U ⊗ V according to [ 28 ]
( A ⊗ B ) ( u ⊗ v ) = A u ⊗ B v , u ∈ U , v ∈ V , u ⊗ v ∈ U ⊗ V . {\displaystyle (A\otimes B)(u\otimes v)=Au\otimes Bv,\qquad u\in U,v\in V,u\otimes v\in U\otimes V.} (T1)
From this it is immediately clear that if u and v are four-vectors in V , then u ⊗ v ∈ T 2 V ≡ V ⊗ V transforms as
u ⊗ v → Λ u ⊗ Λ v = Λ μ ν u ν ⊗ Λ ρ σ v σ = Λ μ ν Λ ρ σ u ν ⊗ v σ ≡ Λ μ ν Λ ρ σ w ν σ . {\displaystyle u\otimes v\rightarrow \Lambda u\otimes \Lambda v={\Lambda ^{\mu }}_{\nu }u^{\nu }\otimes {\Lambda ^{\rho }}_{\sigma }v^{\sigma }={\Lambda ^{\mu }}_{\nu }{\Lambda ^{\rho }}_{\sigma }u^{\nu }\otimes v^{\sigma }\equiv {\Lambda ^{\mu }}_{\nu }{\Lambda ^{\rho }}_{\sigma }w^{\nu \sigma }.} (T2)
The second step uses the bilinearity of the tensor product and the last step defines a 2-tensor on component form, or rather, it just renames the tensor u ⊗ v .
These observations generalize in an obvious way to more factors, and using the fact that a general tensor on a vector space V can be written as a sum of a coefficient (component!) times tensor products of basis vectors and basis covectors, one arrives at the transformation law for any tensor quantity T . It is given by [ 29 ]
T θ ′ ι ′ ⋯ κ ′ α ′ β ′ ⋯ ζ ′ = Λ α ′ μ Λ β ′ ν ⋯ Λ ζ ′ ρ Λ θ ′ σ Λ ι ′ υ ⋯ Λ κ ′ ζ T σ υ ⋯ ζ μ ν ⋯ ρ , {\displaystyle T_{\theta '\iota '\cdots \kappa '}^{\alpha '\beta '\cdots \zeta '}={\Lambda ^{\alpha '}}_{\mu }{\Lambda ^{\beta '}}_{\nu }\cdots {\Lambda ^{\zeta '}}_{\rho }{\Lambda _{\theta '}}^{\sigma }{\Lambda _{\iota '}}^{\upsilon }\cdots {\Lambda _{\kappa '}}^{\zeta }T_{\sigma \upsilon \cdots \zeta }^{\mu \nu \cdots \rho },} (T3)
where Λ χ′ ψ is defined above. This form can generally be reduced to the form for general n -component objects given above with a single matrix ( Π(Λ) ) operating on column vectors. This latter form is sometimes preferred; e.g., for the electromagnetic field tensor.
Lorentz transformations can also be used to illustrate that the magnetic field B and electric field E are simply different aspects of the same force — the electromagnetic force , as a consequence of relative motion between electric charges and observers. [ 30 ] The fact that the electromagnetic field shows relativistic effects becomes clear by carrying out a simple thought experiment. [ 31 ]
The electric and magnetic fields transform differently from space and time, but exactly the same way as relativistic angular momentum and the boost vector.
The electromagnetic field strength tensor is given by F μ ν = [ 0 − 1 c E x − 1 c E y − 1 c E z 1 c E x 0 − B z B y 1 c E y B z 0 − B x 1 c E z − B y B x 0 ] (SI units, signature ( + , − , − , − ) ) . {\displaystyle F^{\mu \nu }={\begin{bmatrix}0&-{\frac {1}{c}}E_{x}&-{\frac {1}{c}}E_{y}&-{\frac {1}{c}}E_{z}\\{\frac {1}{c}}E_{x}&0&-B_{z}&B_{y}\\{\frac {1}{c}}E_{y}&B_{z}&0&-B_{x}\\{\frac {1}{c}}E_{z}&-B_{y}&B_{x}&0\end{bmatrix}}{\text{(SI units, signature }}(+,-,-,-){\text{)}}.} in SI units . In relativity, the Gaussian system of units is often preferred over SI units, even in texts whose main choice of units is SI units, because in it the electric field E and the magnetic induction B have the same units making the appearance of the electromagnetic field tensor more natural. [ 32 ] Consider a Lorentz boost in the x -direction. It is given by [ 33 ] Λ μ ν = [ γ − γ β 0 0 − γ β γ 0 0 0 0 1 0 0 0 0 1 ] , F μ ν = [ 0 E x E y E z − E x 0 B z − B y − E y − B z 0 B x − E z B y − B x 0 ] (Gaussian units, signature ( − , + , + , + ) ) , {\displaystyle {\Lambda ^{\mu }}_{\nu }={\begin{bmatrix}\gamma &-\gamma \beta &0&0\\-\gamma \beta &\gamma &0&0\\0&0&1&0\\0&0&0&1\\\end{bmatrix}},\qquad F^{\mu \nu }={\begin{bmatrix}0&E_{x}&E_{y}&E_{z}\\-E_{x}&0&B_{z}&-B_{y}\\-E_{y}&-B_{z}&0&B_{x}\\-E_{z}&B_{y}&-B_{x}&0\end{bmatrix}}{\text{(Gaussian units, signature }}(-,+,+,+){\text{)}},} where the field tensor is displayed side by side for easiest possible reference in the manipulations below.
The general transformation law (T3) becomes F μ ′ ν ′ = Λ μ ′ μ Λ ν ′ ν F μ ν . {\displaystyle F^{\mu '\nu '}={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }.}
For the magnetic field one obtains B x ′ = F 2 ′ 3 ′ = Λ 2 μ Λ 3 ν F μ ν = Λ 2 2 Λ 3 3 F 23 = 1 × 1 × B x = B x , B y ′ = F 3 ′ 1 ′ = Λ 3 μ Λ 1 ν F μ ν = Λ 3 3 Λ 1 ν F 3 ν = Λ 3 3 Λ 1 0 F 30 + Λ 3 3 Λ 1 1 F 31 = 1 × ( − β γ ) ( − E z ) + 1 × γ B y = γ B y + β γ E z = γ ( B − β × E ) y B z ′ = F 1 ′ 2 ′ = Λ 1 μ Λ 2 ν F μ ν = Λ 1 μ Λ 2 2 F μ 2 = Λ 1 0 Λ 2 2 F 02 + Λ 1 1 Λ 2 2 F 12 = ( − γ β ) × 1 × E y + γ × 1 × B z = γ B z − β γ E y = γ ( B − β × E ) z {\displaystyle {\begin{aligned}B_{x'}&=F^{2'3'}={\Lambda ^{2}}_{\mu }{\Lambda ^{3}}_{\nu }F^{\mu \nu }={\Lambda ^{2}}_{2}{\Lambda ^{3}}_{3}F^{23}=1\times 1\times B_{x}\\&=B_{x},\\B_{y'}&=F^{3'1'}={\Lambda ^{3}}_{\mu }{\Lambda ^{1}}_{\nu }F^{\mu \nu }={\Lambda ^{3}}_{3}{\Lambda ^{1}}_{\nu }F^{3\nu }={\Lambda ^{3}}_{3}{\Lambda ^{1}}_{0}F^{30}+{\Lambda ^{3}}_{3}{\Lambda ^{1}}_{1}F^{31}\\&=1\times (-\beta \gamma )(-E_{z})+1\times \gamma B_{y}=\gamma B_{y}+\beta \gamma E_{z}\\&=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{y}\\B_{z'}&=F^{1'2'}={\Lambda ^{1}}_{\mu }{\Lambda ^{2}}_{\nu }F^{\mu \nu }={\Lambda ^{1}}_{\mu }{\Lambda ^{2}}_{2}F^{\mu 2}={\Lambda ^{1}}_{0}{\Lambda ^{2}}_{2}F^{02}+{\Lambda ^{1}}_{1}{\Lambda ^{2}}_{2}F^{12}\\&=(-\gamma \beta )\times 1\times E_{y}+\gamma \times 1\times B_{z}=\gamma B_{z}-\beta \gamma E_{y}\\&=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{z}\end{aligned}}}
For the electric field results E x ′ = F 0 ′ 1 ′ = Λ 0 μ Λ 1 ν F μ ν = Λ 0 1 Λ 1 0 F 10 + Λ 0 0 Λ 1 1 F 01 = ( − γ β ) ( − γ β ) ( − E x ) + γ γ E x = − γ 2 β 2 ( E x ) + γ 2 E x = E x ( 1 − β 2 ) γ 2 = E x , E y ′ = F 0 ′ 2 ′ = Λ 0 μ Λ 2 ν F μ ν = Λ 0 μ Λ 2 2 F μ 2 = Λ 0 0 Λ 2 2 F 02 + Λ 0 1 Λ 2 2 F 12 = γ × 1 × E y + ( − β γ ) × 1 × B z = γ E y − β γ B z = γ ( E + β × B ) y E z ′ = F 0 ′ 3 ′ = Λ 0 μ Λ 3 ν F μ ν = Λ 0 μ Λ 3 3 F μ 3 = Λ 0 0 Λ 3 3 F 03 + Λ 0 1 Λ 3 3 F 13 = γ × 1 × E z − β γ × 1 × ( − B y ) = γ E z + β γ B y = γ ( E + β × B ) z . {\displaystyle {\begin{aligned}E_{x'}&=F^{0'1'}={\Lambda ^{0}}_{\mu }{\Lambda ^{1}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{1}{\Lambda ^{1}}_{0}F^{10}+{\Lambda ^{0}}_{0}{\Lambda ^{1}}_{1}F^{01}\\&=(-\gamma \beta )(-\gamma \beta )(-E_{x})+\gamma \gamma E_{x}=-\gamma ^{2}\beta ^{2}(E_{x})+\gamma ^{2}E_{x}=E_{x}(1-\beta ^{2})\gamma ^{2}\\&=E_{x},\\E_{y'}&=F^{0'2'}={\Lambda ^{0}}_{\mu }{\Lambda ^{2}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{\mu }{\Lambda ^{2}}_{2}F^{\mu 2}={\Lambda ^{0}}_{0}{\Lambda ^{2}}_{2}F^{02}+{\Lambda ^{0}}_{1}{\Lambda ^{2}}_{2}F^{12}\\&=\gamma \times 1\times E_{y}+(-\beta \gamma )\times 1\times B_{z}=\gamma E_{y}-\beta \gamma B_{z}\\&=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{y}\\E_{z'}&=F^{0'3'}={\Lambda ^{0}}_{\mu }{\Lambda ^{3}}_{\nu }F^{\mu \nu }={\Lambda ^{0}}_{\mu }{\Lambda ^{3}}_{3}F^{\mu 3}={\Lambda ^{0}}_{0}{\Lambda ^{3}}_{3}F^{03}+{\Lambda ^{0}}_{1}{\Lambda ^{3}}_{3}F^{13}\\&=\gamma \times 1\times E_{z}-\beta \gamma \times 1\times (-B_{y})=\gamma E_{z}+\beta \gamma B_{y}\\&=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{z}.\end{aligned}}}
Here, β = ( β , 0, 0) is used. These results can be summarized by E ∥ ′ = E ∥ B ∥ ′ = B ∥ E ⊥ ′ = γ ( E ⊥ + β × B ⊥ ) = γ ( E + β × B ) ⊥ , B ⊥ ′ = γ ( B ⊥ − β × E ⊥ ) = γ ( B − β × E ) ⊥ , {\displaystyle {\begin{aligned}\mathbf {E} _{\parallel '}&=\mathbf {E} _{\parallel }\\\mathbf {B} _{\parallel '}&=\mathbf {B} _{\parallel }\\\mathbf {E} _{\bot '}&=\gamma \left(\mathbf {E} _{\bot }+{\boldsymbol {\beta }}\times \mathbf {B} _{\bot }\right)=\gamma \left(\mathbf {E} +{\boldsymbol {\beta }}\times \mathbf {B} \right)_{\bot },\\\mathbf {B} _{\bot '}&=\gamma \left(\mathbf {B} _{\bot }-{\boldsymbol {\beta }}\times \mathbf {E} _{\bot }\right)=\gamma \left(\mathbf {B} -{\boldsymbol {\beta }}\times \mathbf {E} \right)_{\bot },\end{aligned}}} and are independent of the metric signature. For SI units, substitute E → E ⁄ c . Misner, Thorne & Wheeler (1973) refer to this last form as the 3 + 1 view as opposed to the geometric view represented by the tensor expression F μ ′ ν ′ = Λ μ ′ μ Λ ν ′ ν F μ ν , {\displaystyle F^{\mu '\nu '}={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu },} and make a strong point of the ease with which results that are difficult to achieve using the 3 + 1 view can be obtained and understood. Only objects that have well defined Lorentz transformation properties (in fact under any smooth coordinate transformation) are geometric objects. In the geometric view, the electromagnetic field is a six-dimensional geometric object in spacetime as opposed to two interdependent, but separate, 3-vector fields in space and time . The fields E (alone) and B (alone) do not have well defined Lorentz transformation properties. The mathematical underpinnings are equations (T1) and (T2) that immediately yield (T3) . One should note that the primed and unprimed tensors refer to the same event in spacetime . Thus the complete equation with spacetime dependence is F μ ′ ν ′ ( x ′ ) = Λ μ ′ μ Λ ν ′ ν F μ ν ( Λ − 1 x ′ ) = Λ μ ′ μ Λ ν ′ ν F μ ν ( x ) . {\displaystyle F^{\mu '\nu '}\left(x'\right)={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }\left(\Lambda ^{-1}x'\right)={\Lambda ^{\mu '}}_{\mu }{\Lambda ^{\nu '}}_{\nu }F^{\mu \nu }(x).}
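The tensor transformation law and the 3 + 1 formulae above can be compared directly; the following sketch assumes NumPy, Gaussian units with the signature (−, +, +, +), an illustrative boost along x , and arbitrary field values.

```python
# Numerical check of the field transformations above, assuming NumPy, Gaussian
# units, and an illustrative boost beta along x; E and B are arbitrary values.
import numpy as np

beta = 0.5
gamma = 1.0 / np.sqrt(1.0 - beta**2)
b = np.array([beta, 0.0, 0.0])

Lam = np.array([[gamma, -gamma*beta, 0, 0],
                [-gamma*beta, gamma, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]], float)

E = np.array([1.0, -2.0, 0.5])
B = np.array([0.3, 1.5, -0.7])

# F^{mu nu} in Gaussian units, signature (-, +, +, +), as displayed above.
F = np.array([[0,     E[0],  E[1],  E[2]],
              [-E[0], 0,     B[2], -B[1]],
              [-E[1], -B[2], 0,     B[0]],
              [-E[2], B[1], -B[0],  0]], float)

Fp = Lam @ F @ Lam.T          # F'^{mu'nu'} = Lam^{mu'}_mu Lam^{nu'}_nu F^{mu nu}

Ep = np.array([Fp[0, 1], Fp[0, 2], Fp[0, 3]])     # E' read off the boosted tensor
Bp = np.array([Fp[2, 3], Fp[3, 1], Fp[1, 2]])     # B' read off the boosted tensor

E_expect = np.array([E[0],
                     gamma * (E + np.cross(b, B))[1],
                     gamma * (E + np.cross(b, B))[2]])
B_expect = np.array([B[0],
                     gamma * (B - np.cross(b, E))[1],
                     gamma * (B - np.cross(b, E))[2]])
print(np.allclose(Ep, E_expect))
print(np.allclose(Bp, B_expect))
```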
Length contraction has an effect on charge density ρ and current density J , and time dilation has an effect on the rate of flow of charge (current), so charge and current distributions must transform in a related way under a boost. It turns out they transform exactly like the space-time and energy-momentum four-vectors, j ′ = j − γ ρ v n + ( γ − 1 ) ( j ⋅ n ) n ρ ′ = γ ( ρ − j ⋅ v n c 2 ) , {\displaystyle {\begin{aligned}\mathbf {j} '&=\mathbf {j} -\gamma \rho v\mathbf {n} +\left(\gamma -1\right)(\mathbf {j} \cdot \mathbf {n} )\mathbf {n} \\\rho '&=\gamma \left(\rho -\mathbf {j} \cdot {\frac {v\mathbf {n} }{c^{2}}}\right),\end{aligned}}}
or, in the simpler geometric view, j μ ′ = Λ μ ′ μ j μ . {\displaystyle j^{\mu '}={\Lambda ^{\mu '}}_{\mu }j^{\mu }.}
Charge density transforms as the time component of a four-vector. It is a rotational scalar. The current density is a 3-vector.
The Maxwell equations are invariant under Lorentz transformations.
Equation (T1) holds unmodified for any representation of the Lorentz group, including the bispinor representation. In (T2) one simply replaces all occurrences of Λ by the bispinor representation Π(Λ) ,
u ⊗ v → Π ( Λ ) u ⊗ Π ( Λ ) v = Π ( Λ ) α β u β ⊗ Π ( Λ ) ρ σ v σ = Π ( Λ ) α β Π ( Λ ) ρ σ u β ⊗ v σ ≡ Π ( Λ ) α β Π ( Λ ) ρ σ w β σ {\displaystyle {\begin{aligned}u\otimes v\rightarrow \Pi (\Lambda )u\otimes \Pi (\Lambda )v&={\Pi (\Lambda )^{\alpha }}_{\beta }u^{\beta }\otimes {\Pi (\Lambda )^{\rho }}_{\sigma }v^{\sigma }\\&={\Pi (\Lambda )^{\alpha }}_{\beta }{\Pi (\Lambda )^{\rho }}_{\sigma }u^{\beta }\otimes v^{\sigma }\\&\equiv {\Pi (\Lambda )^{\alpha }}_{\beta }{\Pi (\Lambda )^{\rho }}_{\sigma }w^{\beta \sigma }\end{aligned}}} (T4)
The above equation could, for instance, be the transformation of a state in Fock space describing two free electrons.
A general noninteracting multi-particle state (Fock space state) in quantum field theory transforms according to the rule [ 34 ]
where W (Λ, p ) is Wigner's little group [ 35 ] and D ( j ) is the (2 j + 1) -dimensional representation of SO(3) . | https://en.wikipedia.org/wiki/Lorentz_transformation
Lorenz ( Lorencino ) Bruno Puntel ( / ˈ p ʊ n t ə l / ; German: [ˈpʊntəl] ; [ 1 ] Brazilian Portuguese: [pũˈtɛw] ; 22 September 1935 – 16 July 2024) was a Brazilian-born German philosopher who established the school of structural-systematic philosophy . [ 2 ] [ 3 ] Professor emeritus at the University of Munich , Puntel was named one of the great contemporary philosophers, articulating his ideas by drawing on a wide range of traditions. [ 4 ] [ 5 ] [ 6 ] [ 7 ]
Puntel studied philosophy , theology , philology and psychology in Munich , Innsbruck , Vienna , Paris , and Rome . He earned a doctorate in philosophy from the University of Munich (1968) and in Catholic theology (1969) from the University of Innsbruck . He became a professor at the Institute of Philosophy at the University of Munich in 1975. He was a student of Karl Rahner and studied with Martin Heidegger , whose philosophy concerned him throughout his life. [ 8 ]
Puntel died in Augsburg , Bavaria on 16 July 2024, at the age of 88. [ 9 ]
Puntel's thought attempts to reconstruct the systematics of philosophy from a distinctive viewpoint, which involves the elaboration of a theoretical language that abandons the idea of a language of predicates. Puntel drew on sources ranging from G. W. Leibniz and German idealism to Heidegger's phenomenology and analytic philosophy . [ 10 ]
From 1983, Puntel was a visiting professor at Pittsburgh, Harvard and Princeton. He retired in 2001 and, in 2016, received an honorary doctorate from the Munich School of Philosophy . [ 11 ]
He also received the Findlay Book Prize in 2011. [ 12 ] | https://en.wikipedia.org/wiki/Lorenz_Bruno_Puntel |
In electromagnetism , the Lorenz gauge condition or Lorenz gauge (after Ludvig Lorenz ) is a partial gauge fixing of the electromagnetic vector potential by requiring ∂ μ A μ = 0. {\displaystyle \partial _{\mu }A^{\mu }=0.} The name is frequently confused with Hendrik Lorentz , who has given his name to many concepts in this field. [ 1 ] The condition is Lorentz invariant . The Lorenz gauge condition does not completely determine the gauge: one can still make a gauge transformation A μ ↦ A μ + ∂ μ f , {\displaystyle A^{\mu }\mapsto A^{\mu }+\partial ^{\mu }f,} where ∂ μ {\displaystyle \partial ^{\mu }} is the four-gradient and f {\displaystyle f} is any harmonic scalar function: that is, a scalar function obeying ∂ μ ∂ μ f = 0 , {\displaystyle \partial _{\mu }\partial ^{\mu }f=0,} the equation of a massless scalar field .
The Lorenz gauge condition is used to eliminate the redundant spin-0 component in Maxwell's equations when these are used to describe a massless spin-1 quantum field. It is also used for massive spin-1 fields where the concept of gauge transformations does not apply at all.
In electromagnetism , the Lorenz condition is generally used in calculations of time-dependent electromagnetic fields through retarded potentials . [ 2 ] The condition is ∂ μ A μ ≡ A μ , μ = 0 , {\displaystyle \partial _{\mu }A^{\mu }\equiv A^{\mu }{}_{,\mu }=0,} where A μ {\displaystyle A^{\mu }} is the four-potential , the comma denotes a partial differentiation and the repeated index indicates that the Einstein summation convention is being used. The condition has the advantage of being Lorentz invariant . It still leaves substantial gauge degrees of freedom.
In ordinary vector notation and SI units, the condition is ∇ ⋅ A + 1 c 2 ∂ φ ∂ t = 0 , {\displaystyle \nabla \cdot {\mathbf {A} }+{\frac {1}{c^{2}}}{\frac {\partial \varphi }{\partial t}}=0,} where A {\displaystyle \mathbf {A} } is the magnetic vector potential and φ {\displaystyle \varphi } is the electric potential ; [ 3 ] [ 4 ] see also gauge fixing .
In Gaussian units the condition is [ 5 ] [ 6 ] ∇ ⋅ A + 1 c ∂ φ ∂ t = 0. {\displaystyle \nabla \cdot {\mathbf {A} }+{\frac {1}{c}}{\frac {\partial \varphi }{\partial t}}=0.}
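As an illustrative symbolic check (assuming SymPy), the sketch below evaluates the condition in vector form for a hypothetical plane-wave choice of potentials and shows that a gauge function obeying the wave equation leaves the condition unchanged; the specific potentials and gauge function are illustrative choices only.

```python
# Symbolic check of the Lorenz condition and its residual gauge freedom,
# assuming SymPy; the potentials and gauge function below are illustrative.
import sympy as sp

t, x, y, z, c, k = sp.symbols('t x y z c k', real=True, positive=True)

# Illustrative plane-wave potentials that satisfy the Lorenz condition:
A = sp.Matrix([sp.cos(k*(x - c*t)), 0, 0])
phi = c * sp.cos(k*(x - c*t))

def lorenz(A, phi):
    """div(A) + (1/c^2) d(phi)/dt, the left-hand side of the condition."""
    div_A = sum(sp.diff(A[i], var) for i, var in enumerate((x, y, z)))
    return sp.simplify(div_A + sp.diff(phi, t) / c**2)

print(lorenz(A, phi))            # 0

# A gauge function obeying the wave equation (box f = 0) leaves the condition
# unchanged under A -> A + grad f, phi -> phi - df/dt (the vector form of the
# residual gauge freedom, up to the sign convention for f).
f = sp.sin(k*(z + c*t))
grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])
print(lorenz(A + grad_f, phi - sp.diff(f, t)))   # still 0
```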
A quick justification of the Lorenz gauge can be found using Maxwell's equations and the relation between the magnetic vector potential and the magnetic field: ∇ × E = − ∂ B ∂ t = − ∂ ( ∇ × A ) ∂ t {\displaystyle \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}=-{\frac {\partial (\nabla \times \mathbf {A} )}{\partial t}}}
Therefore, ∇ × ( E + ∂ A ∂ t ) = 0. {\displaystyle \nabla \times \left(\mathbf {E} +{\frac {\partial \mathbf {A} }{\partial t}}\right)=0.}
Since the curl is zero, that means there is a scalar function φ {\displaystyle \varphi } such that − ∇ φ = E + ∂ A ∂ t . {\displaystyle -\nabla \varphi =\mathbf {E} +{\frac {\partial \mathbf {A} }{\partial t}}.}
This gives a well known equation for the electric field: E = − ∇ φ − ∂ A ∂ t . {\displaystyle \mathbf {E} =-\nabla \varphi -{\frac {\partial \mathbf {A} }{\partial t}}.}
This result can be plugged into the Ampère–Maxwell equation , ∇ × B = μ 0 J + 1 c 2 ∂ E ∂ t ∇ × ( ∇ × A ) = μ 0 J + 1 c 2 ∂ ∂ t ( − ∇ φ − ∂ A ∂ t ) ⇒ ∇ ( ∇ ⋅ A ) − ∇ 2 A = μ 0 J − 1 c 2 ∂ ( ∇ φ ) ∂ t − 1 c 2 ∂ 2 A ∂ t 2 . {\displaystyle {\begin{aligned}\nabla \times \mathbf {B} &=\mu _{0}\mathbf {J} +{\frac {1}{c^{2}}}{\frac {\partial \mathbf {E} }{\partial t}}\\\nabla \times \left(\nabla \times \mathbf {A} \right)&=\mu _{0}\mathbf {J} +{\frac {1}{c^{2}}}{\frac {\partial }{\partial t}}\left(-\nabla \varphi -{\frac {\partial \mathbf {A} }{\partial t}}\right)\\\Rightarrow \nabla \left(\nabla \cdot \mathbf {A} \right)-\nabla ^{2}\mathbf {A} &=\mu _{0}\mathbf {J} -{\frac {1}{c^{2}}}{\frac {\partial (\nabla \varphi )}{\partial t}}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {A} }{\partial t^{2}}}.\\\end{aligned}}}
This leaves ∇ ( ∇ ⋅ A + 1 c 2 ∂ φ ∂ t ) = μ 0 J − 1 c 2 ∂ 2 A ∂ t 2 + ∇ 2 A . {\displaystyle \nabla \left(\nabla \cdot \mathbf {A} +{\frac {1}{c^{2}}}{\frac {\partial \varphi }{\partial t}}\right)=\mu _{0}\mathbf {J} -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {A} }{\partial t^{2}}}+\nabla ^{2}\mathbf {A} .}
To have Lorentz invariance, the time derivatives and spatial derivatives must be treated equally (i.e. of the same order). Therefore, it is convenient to choose the Lorenz gauge condition, which makes the left hand side zero and gives the result ◻ A = [ 1 c 2 ∂ 2 ∂ t 2 − ∇ 2 ] A = μ 0 J . {\displaystyle \Box \mathbf {A} =\left[{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}\right]\mathbf {A} =\mu _{0}\mathbf {J} .}
A similar procedure with a focus on the electric scalar potential and making the same gauge choice will yield ◻ φ = [ 1 c 2 ∂ 2 ∂ t 2 − ∇ 2 ] φ = 1 ε 0 ρ . {\displaystyle \Box \varphi =\left[{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}\right]\varphi ={\frac {1}{\varepsilon _{0}}}\rho .}
These are simpler and more symmetric forms of the inhomogeneous Maxwell's equations .
Here c = 1 ε 0 μ 0 {\displaystyle c={\frac {1}{\sqrt {\varepsilon _{0}\mu _{0}}}}} is the vacuum velocity of light, and ◻ {\displaystyle \Box } is the d'Alembertian operator with the (+ − − −) metric signature. These equations are not only valid under vacuum conditions, but also in polarized media, [ 7 ] if ρ {\displaystyle \rho } and J → {\displaystyle {\vec {J}}} are source density and circulation density, respectively, of the electromagnetic induction fields E → {\displaystyle {\vec {E}}} and B → {\displaystyle {\vec {B}}} calculated as usual from φ {\displaystyle \varphi } and A → {\displaystyle {\vec {A}}} by the equations E = − ∇ φ − ∂ A ∂ t B = ∇ × A {\displaystyle {\begin{aligned}\mathbf {E} &=-\nabla \varphi -{\frac {\partial \mathbf {A} }{\partial t}}\\\mathbf {B} &=\nabla \times \mathbf {A} \end{aligned}}}
The explicit solutions for φ {\displaystyle \varphi } and A {\displaystyle \mathbf {A} } – unique, if all quantities vanish sufficiently fast at infinity – are known as retarded potentials .
When originally published in 1867, Lorenz's work was not received well by James Clerk Maxwell . Maxwell had eliminated the Coulomb electrostatic force from his derivation of the electromagnetic wave equation since he was working in what would nowadays be termed the Coulomb gauge . The Lorenz gauge hence contradicted Maxwell's original derivation of the EM wave equation by introducing a retardation effect to the Coulomb force and bringing it inside the EM wave equation alongside the time varying electric field , which was introduced in Lorenz's paper "On the identity of the vibrations of light with electrical currents". Lorenz's work was the first use of symmetry to simplify Maxwell's equations after Maxwell himself published his 1865 paper. In 1888, retarded potentials came into general use after Heinrich Rudolf Hertz 's experiments on electromagnetic waves . In 1895, a further boost to the theory of retarded potentials came after J. J. Thomson 's interpretation of data for electrons (after which investigation into electrical phenomena changed from time-dependent electric charge and electric current distributions over to moving point charges ). [ 2 ]
Lorenz derived the condition from postulated integral expressions for the potentials (nowadays known as retarded potentials); Lorentz (and before him Emil Wiechert) imposed it to fix the gauge (e.g., in his 1904 Encyclopedia article on electron theory). [ citation needed ]
Lorvotuzumab mertansine (IMGN901) is an antibody-drug conjugate . It comprises the CD56-binding antibody, lorvotuzumab (huN901), with a maytansinoid cell-killing agent, DM1, attached using a disulfide linker, SPP. (When DM1 is attached to an antibody with the SPP linker, it is mertansine ; when it is attached with the thioether linker, SMCC, it is emtansine.) [ citation needed ]
Lorvotuzumab mertansine is an experimental agent created for the treatment of CD56 positive cancers (e.g. small-cell lung cancer, ovarian cancer ). [ 1 ] [ 2 ]
It has been granted Orphan drug status for Merkel cell carcinoma . [ 3 ]
It has reported encouraging Phase II results for small-cell lung cancer . [ 4 ]
This monoclonal antibody –related article is a stub . You can help Wikipedia by expanding it .
This antineoplastic or immunomodulatory drug article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lorvotuzumab_mertansine |
A losing stream , disappearing stream , influent stream or sinking river is a stream or river that loses water as it flows downstream. The water infiltrates into the ground recharging the local groundwater , because the water table is below the bottom of the stream channel. This is the opposite of a more common gaining stream (or effluent stream ) which increases in water volume farther downstream as it gains water from the local aquifer . [ 1 ]
Losing streams are common in arid areas, where the climate causes large amounts of water to evaporate from the river, generally toward its mouth. [ 2 ] Losing streams are also common in regions of karst topography , where the streamwater may be completely captured by a cavern system, becoming a subterranean river .
There are many natural examples of subterranean rivers including: | https://en.wikipedia.org/wiki/Losing_stream |
In cognitive science and behavioral economics , loss aversion refers to a cognitive bias in which the same situation is perceived as worse if it is framed as a loss, rather than a gain. [ 1 ] [ 2 ] It should not be confused with risk aversion , which describes the rational behavior of valuing an uncertain outcome at less than its expected value .
When defined in terms of the pseudo-utility function used in cumulative prospect theory (CPT), the loss (left-hand) branch of the function is much steeper than the gain branch, so a loss is more "painful" than the satisfaction from a comparable gain. [ 3 ] Empirically, losses tend to be treated as if they were roughly twice as large as an equivalent gain. [ 4 ] Loss aversion was first proposed by Amos Tversky and Daniel Kahneman as an important component of prospect theory. [ 5 ] [ 6 ]
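As a rough numerical sketch (not taken from the cited sources), the CPT-style value function is often written as a two-branch power function; the parameter values used below (alpha = beta = 0.88, lambda ≈ 2.25) are commonly quoted estimates and should be read as assumptions rather than definitive constants.

```python
# Sketch of a prospect-theory-style value function with loss aversion.
# Parameter values are illustrative estimates, not definitive constants.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha              # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)       # losses weighted roughly twice as heavily

if __name__ == "__main__":
    print(value(100))    # value of a $100 gain  (~57.5)
    print(value(-100))   # value of a $100 loss  (~-129.5)
```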
In 1979, Daniel Kahneman and his associate Amos Tversky originally coined the term "loss aversion" in their initial proposal of prospect theory as an alternative descriptive model of decision making under risk. [ 5 ] "The response to losses is stronger than the response to corresponding gains" is Kahneman's definition of loss aversion.
After the first 1979 proposal in the prospect theory framework paper, Tversky and Kahneman used loss aversion for a paper in 1991 about a consumer choice theory that incorporates reference dependence , loss aversion, and diminishing sensitivity. [ 7 ] Compared to the original paper above that discusses loss aversion in risky choices, Tversky and Kahneman (1991) discuss loss aversion in riskless choices, for instance, not wanting to trade or even sell something that is already in our possession. Here, "losses loom larger than gains" correspondingly reflects how outcomes below the reference level (e.g. what we do not own) loom larger than those above the reference level (e.g. what we own), showing people's tendency to value losses more than gains relative to a reference point. Additionally, the paper supported loss aversion with the endowment effect theory and status quo bias theory. Loss aversion was popular in explaining many phenomena in traditional choice theory. In 1980, loss aversion was used in Thaler (1980) regarding endowment effect. [ 8 ] Loss aversion was also used to support the status quo bias in 1988, [ 9 ] and the equity premium puzzle in 1995. [ 10 ] In the 2000s, behavioural finance was an area with frequent application of this theory, [ 11 ] [ 12 ] including on asset prices and individual stock returns. [ 13 ] [ 14 ]
In marketing , the use of trial periods and rebates tries to take advantage of the buyer's tendency to value the good more after the buyer incorporates it in the status quo. In past behavioral economics studies, participants took part only up to the point at which the threat of loss equaled any incurred gains. Methods established by Botond Kőszegi and Matthew Rabin in experimental economics illustrate the role of expectation, wherein an individual's belief about an outcome can create an instance of loss aversion, whether or not a tangible change of state has occurred. [ 15 ]
Whether a transaction is framed as a loss or as a gain is important to this calculation. The same change in price framed differently, for example as a $5 discount or as a $5 surcharge avoided, has a significant effect on consumer behavior. [ 16 ] Although traditional economists consider this " endowment effect ", and all other effects of loss aversion, to be completely irrational , it is important to the fields of marketing and behavioral finance . Users in behavioral and experimental economics studies decided to cease participation in iterative money-making games when the threat of loss was close to the expenditure of effort, even when the user stood to further their gains. Loss aversion coupled with myopia has been shown to explain macroeconomic phenomena, such as the equity premium puzzle . [ 17 ] Loss aversion to kinship is an explanation for aversion to inheritance tax . [ 18 ]
Loss aversion is part of prospect theory, a cornerstone in behavioral economics. The theory explored numerous behavioral biases leading to sub-optimal decision making. [ 5 ] Kahneman and Tversky found that people are biased in their estimation of the probability of events occurring: they tend to over-weight small probabilities and under-weight moderate and high probabilities. [ 4 ] [ 5 ] [ 19 ]
One example asks which option is more attractive: option A ($1,500 with a probability of 33%, $1,400 with a probability of 66%, and $0 with a probability of 1%) or option B (a guaranteed $920). Prospect theory and loss aversion suggest that most people would choose option B, preferring the guaranteed $920 because option A carries a chance of winning $0, even though it is only 1%. This demonstrates that people think in terms of expected utility relative to a reference point (i.e. current wealth) as opposed to absolute payoffs. [ 5 ] [ 19 ] [ 20 ] When choices are framed as risky (i.e. risk losing 1 out of 10 lives vs the opportunity to save 9 out of 10 lives), individuals tend to be loss-averse as they weigh losses more heavily than comparable gains. [ 5 ]
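A quick arithmetic check of this example, using only the numbers quoted above, shows that option A actually has the higher expected monetary value; the common preference for option B is therefore a signature of reference-dependent valuation rather than expected-value maximization.

```python
# Expected monetary value of the two options from the example above.
option_a = [(1500, 0.33), (1400, 0.66), (0, 0.01)]
ev_a = sum(payoff * p for payoff, p in option_a)   # 0.33*1500 + 0.66*1400 ≈ 1419
ev_b = 920                                         # the guaranteed amount

print(ev_a, ev_b)   # option A has the higher expected value,
                    # yet loss-averse choosers tend to prefer the sure 920
```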
Loss aversion was first proposed as an explanation for the endowment effect—the fact that people place a higher value on a good that they own than on an identical good that they do not own—by Kahneman, Knetsch, and Thaler (1990). [ 21 ] Loss aversion and the endowment effect lead to a violation of the Coase theorem —that "the allocation of resources will be independent of the assignment of property rights when costless trades are possible". [ 22 ]
In several studies, the authors demonstrated that the endowment effect could be explained by loss aversion but not by five alternative explanations, namely transaction costs, misunderstandings, habitual bargaining behaviors , income effects, and trophy effects. In each experiment, half of the subjects were randomly assigned a good and asked for the minimum amount they would be willing to sell it for while the other half of the subjects were given nothing and asked for the maximum amount they would be willing to spend to buy the good. Since the value of the good is fixed and individual valuation of the good varies from this fixed value only due to sampling variation, the supply and demand curves should be perfect mirrors of each other and thus half the goods should be traded. The authors also ruled out the explanation that lack of experience with trading would lead to the endowment effect by conducting repeated markets. [ 23 ]
The first two alternative explanations, that under-trading was due to transaction costs or misunderstanding, were tested by comparing goods markets to induced-value markets under the same rules. If it was possible to trade to the optimal level in induced-value markets, under the same rules, there should be no difference in goods markets. The results showed drastic differences between induced-value markets and goods markets. The median prices of buyers and sellers in induced-value markets matched almost every time, leading to near perfect market efficiency, but goods markets sellers had much higher selling prices than buyers' buying prices. This effect was consistent over trials, indicating that this was not due to inexperience with the procedure or the market. Since the transaction cost that could have been due to the procedure was equal in the induced-value and goods markets, transaction costs were eliminated as an explanation for the endowment effect. [ 23 ]
The third alternative explanation was that people have habitual bargaining behaviors, such as overstating their minimum selling price or understating their maximum bargaining price, that may spill over from strategic interactions where these behaviors are useful to the laboratory setting where they are sub-optimal. An experiment was conducted to address this by having the clearing prices selected at random. Buyers who indicated a willingness-to-pay (WTP) higher than the randomly drawn price got the good, and vice versa for those who indicated a lower WTP. Likewise, sellers who indicated a lower willingness-to-accept than the randomly drawn price sold the good and vice versa. This incentive compatible value elicitation method did not eliminate the endowment effect but did rule out habitual bargaining behavior as an alternative explanation. [ 23 ]
Income effects were ruled out by giving one third of the participants mugs, one third chocolates, and one third neither mug nor chocolate. They were then given the option of trading the mug for the chocolate or vice versa and those with neither were asked to merely choose between mug and chocolate. Thus, wealth effects were controlled for those groups who received mugs and chocolate. The results showed that 86% of those starting with mugs chose mugs, 10% of those starting with chocolates chose mugs, and 56% of those with nothing chose mugs. This ruled out income effects as an explanation for the endowment effect. Also, since all participants in the group had the same good, it could not be considered a "trophy", eliminating the final alternative explanation. [ 23 ] Thus, the five alternative explanations were eliminated, the first two through induced-value market vs. consumption goods market, the third with incentive compatible value elicitation procedure, and the fourth and fifth through a choice between endowed or alternative good. [ 24 ]
Multiple studies have questioned the existence of loss aversion. In several studies examining the effect of losses in decision-making, no loss aversion was found under risk and uncertainty. [ 25 ] There are several explanations for these findings: one is that loss aversion does not exist for small payoff magnitudes (called magnitude-dependent loss aversion by Mukherjee et al. (2017)), [ 26 ] which seems to hold true for time as well. [ 27 ] The other is that the generality of the loss aversion pattern is lower than previously thought. David Gal (2006) argued that many of the phenomena commonly attributed to loss aversion, including the status quo bias, the endowment effect, and the preference for safe over risky options, are more parsimoniously explained by psychological inertia than by a loss/gain asymmetry. Gal and Rucker (2018) made similar arguments. [ 28 ] [ 29 ] Mrkva, Johnson, Gächter, and Herrmann (2019) cast doubt on these critiques, replicating loss aversion in five unique samples while also showing how the magnitude of loss aversion varies in theoretically predictable ways. [ 30 ]
Loss aversion may be more salient when people compete. Gill and Prowse (2012) provide experimental evidence that people are loss averse around reference points given by their expectations in a competitive environment with real effort. [ 31 ] Losses may also have an effect on attention but not on the weighting of outcomes; losses lead to more autonomic arousal than gains even in the absence of loss aversion. [ 32 ] This latter effect is sometimes known as Loss Attention. [ 33 ]
Loss attention refers to the tendency of individuals to allocate more attention to a task or situation when it involves losses than when it does not involve losses. What distinguishes loss attention from loss aversion is that it does not imply that losses are given more subjective weight (or utility ) than gains. Moreover, under loss aversion, losses have a biasing effect, whereas under loss attention they can have a debiasing effect. Loss attention was proposed as a distinct regularity from loss aversion by Eldad Yechiam and Guy Hochman. [ 34 ] [ 35 ]
Specifically, the effect of losses is assumed to be on general attention rather than plain visual or auditory attention. The loss attention account assumes that losses in a given task mainly increase the general attentional resource pool available for that task. The increase in attention is assumed to have an inverse-U shape effect on performance (following the so called Yerkes-Dodson law ). [ 34 ] The inverse U-shaped effect implies that the effect of losses on performance is most apparent in settings where task attention is low to begin with, for example in a monotonous vigilance task or when a concurrent task is more appealing. Indeed, it was found that the positive effect of losses on performance in a given task was more pronounced in a task performed concurrently with another task which was primary in its importance. [ 36 ]
Loss attention is consistent with several empirical findings in economics, finance, marketing, and decision making. Some of these effects have been previously attributed to loss aversion, but can be explained by a mere attention asymmetry between gains and losses. An example is the performance advantage attributed to golf rounds where a player is under par (or at a disadvantage) compared to other rounds where a player is at an advantage. [ 37 ] Clearly, the difference could be attributed to increased attention in the former type of rounds. 2010s studies suggested that loss aversion mostly occurs for very large losses, [ 34 ] although the exact boundaries of the effect are unclear. On the other hand, loss attention was found even for small payoffs, such as $1. This suggests that loss attention may be more robust than loss aversion. Still, one might argue that loss aversion is more parsimonious than loss attention. [ 35 ]
Two types of explanations have been proposed for loss aversion. First, loss aversion may arise because downside risks are more threatening to survival than upside opportunities. Humans are theorized to be hardwired for loss aversion due to asymmetric evolutionary pressure on losses and gains: "for an organism operating close to the edge of survival, the loss of a day's food could cause death, whereas the gain of an extra day's food would not cause an extra day of life (unless the food could be easily and effectively stored)". [ 43 ] This explanation has been proposed by Kahneman himself: "Organisms that treat threats as more urgent than opportunities have a better chance to survive and reproduce." [ 19 ]
It has also been proposed that loss aversion may be a useful feature of cognition by keeping aspirations around the level of achievement within our reach. From this perspective, loss aversion prevents us from setting aspirations that are too high and unrealistic. If we set aspirations too high, loss aversion increases the subjective pain of failing to reach them. Loss aversion complements the existence of anticipatory utility, which encourages us not to set aspirations that are too low. [ 44 ]
In 2005, experiments were conducted on the ability of capuchin monkeys to use money. After several months of training, the monkeys began showing behavior considered to reflect understanding of the concept of a medium of exchange. They exhibited the same propensity to avoid perceived losses demonstrated by human subjects and investors. [ 45 ] Chen, Lakshminarayanan and Santos (2006) also conducted experiments on capuchin monkeys to determine whether behavioral biases extend across species. In one of their experiments, subjects were presented with two choices that both delivered an identical payoff of one apple piece in exchange for their coins. Experimenter 1 displayed one apple piece and gave that exact amount. Experimenter 2 displayed two apple pieces initially but always removed one piece before delivering the remaining apple piece to the subject. Therefore, identical payoffs were yielded regardless of which experimenter the subject traded with. It was found that subjects strongly preferred the experimenter who initially displayed only one apple piece, even though both experimenters yielded the same outcome of one apple piece. This study suggests that capuchins weighted losses more heavily than equivalent gains. [ 46 ]
Expectation-based loss aversion is a phenomenon in behavioral economics. When the expectations of an individual fail to match reality, they lose an amount of utility from the lack of experiencing fulfillment of these expectations. Analytical framework by Botond Kőszegi and Matthew Rabin provides a methodology through which such behavior can be classified and even predicted. [ 47 ] An individual's most recent expectations influences loss aversion in outcomes outside the status quo; a shopper intending to buy a pair of shoes on sale experiences loss aversion when the pair they had intended to buy is no longer available. [ 48 ]
Subsequent research performed by Johannes Abeler, Armin Falk , Lorenz Goette, and David Huffman in conjunction with the Institute of Labor Economics used the framework of Kőszegi and Rabin to prove that people experience expectation-based loss aversion at multiple thresholds. [ 49 ] The study showed that people's reference points cause a tendency to avoid letting expectations go unmet. Participants were asked to take part in an iterative money-making task in which they would receive either an accumulated sum for each round of "work", or a predetermined amount of money. With a 50% chance of receiving the "fair" compensation, participants were more likely to quit the experiment as this amount approached the fixed payment. They chose to stop when the values were equal because, no matter which random result they received, their expectations would be matched. Participants were reluctant to work for more than the fixed payment as there was an equal chance their expected compensation would not be met. [ 50 ]
Loss aversion experimentation has most recently been applied within an educational setting in an effort to improve achievement within the U.S. In this latest experiment, Fryer et al. posit that framing merit pay in terms of a loss makes it most effective. This study was performed in the city of Chicago Heights within nine K-8 urban schools, which included 3,200 students. 150 out of 160 eligible teachers participated and were assigned to one of four treatment groups or a control group. Teachers in the incentive groups received rewards based on their students' end-of-year performance on the ThinkLink Predictive Assessment, and K-2 students took the Iowa Test of Basic Skills (ITBS). The control group followed the traditional merit pay process of receiving "bonus pay" at the end of the year based on student performance on standardized exams. The experimental groups received a lump sum at the beginning of the year that would have to be paid back if their students did not meet performance targets. The bonus was equivalent to approximately 8% of the average teacher salary in Chicago Heights, approximately $8,000. According to the authors, 'this suggests that there may be significant potential for exploiting loss aversion in the pursuit of both optimal public policy and the pursuit of profits'. [ 51 ] Thomas Amadio, superintendent of Chicago Heights Elementary School District 170, where the experiment was conducted, stated that "the study shows the value of merit pay as an encouragement for better teacher performance". [ 52 ]
In earlier studies, both bidirectional mesolimbic responses (activation for gains and deactivation for losses, or vice versa) and gain- or loss-specific responses have been seen. While reward anticipation is associated with ventral striatum activation, [ 53 ] [ 54 ] negative outcome anticipation engages the amygdala . However, only some studies have shown involvement of the amygdala during negative outcome anticipation, [ 55 ] [ 56 ] which has led to some inconsistencies. These inconsistencies may have been due to methodological issues, including the use of different tasks and stimuli, ranges of potential gains or losses sampled from payoff matrices rather than parametric designs, and the fact that most of the data are reported at the group level, ignoring variability among individuals. Rather than focusing on subjects in groups, later studies focus more on individual differences in the neural bases by jointly looking at behavioural analyses and neuroimaging. [ 57 ]
Neuroimaging studies on loss aversion involves measuring brain activity with functional magnetic resonance imaging (fMRI) to investigate whether individual variability in loss aversion were reflected in differences in brain activity through bidirectional or gain or loss specific responses, as well as multivariate source-based morphometry (SBM) to investigate a structural network of loss aversion and univariate voxel-based morphometry (VBM) to identify specific functional regions within this network. [ 58 ]
Brain activity in a right ventral striatum cluster increases particularly when anticipating gains. This involves the ventral caudate nucleus , pallidum , putamen , bilateral orbitofrontal cortex , superior frontal and middle gyri , posterior cingulate cortex , dorsal anterior cingulate cortex , and parts of the dorsomedial thalamus connecting to temporal and prefrontal cortex . There is a significant correlation between degree of loss aversion and strength of activity in both the frontomedial cortex and the ventral striatum. This is shown by the slope of brain activity deactivation for increasing losses being significantly greater than the slope of activation for increasing gains in the appetitive system involving the ventral striatum in the network of reward-based behavioural learning. On the other hand, when anticipating loss, the central and basal nuclei of amygdala, right posterior insula extending into the supramarginal gyrus mediate the output to other structures involved in the expression of fear and anxiety, such as the right parietal operculum and supramarginal gyrus. Consistent with gain anticipation, the slope of the activation for increasing losses was significantly greater than the slope of the deactivation for increasing gains.
Multiple neural mechanisms are recruited while making choices, showing functional and structural individual variability. Biased anticipation of negative outcomes leading to loss aversion involves specific somatosensory and limbic structures. fMRI tests measuring neural responses in striatal, limbic and somatosensory brain regions help track individual differences in loss aversion. Its limbic component involved the amygdala (which is associated with negative emotion and plays a role in the expression of fear) and the putamen in the right hemisphere . The somatosensory component included the middle cingulate cortex , as well as the posterior insula and rolandic operculum bilaterally. The latter cluster partially overlaps with the right hemispheric one displaying the loss-oriented bidirectional response previously described, but, unlike that region, it mostly involved the posterior insula bilaterally. All these structures play a critical role in detecting threats and preparing the organism for appropriate action, with the connections between amygdala nuclei and the striatum controlling the avoidance of aversive events. There are functional differences between the right and left amygdala. Overall, the role of the amygdala in loss anticipation suggests that loss aversion may reflect a Pavlovian conditioned approach-avoidance response. Hence, there is a direct link between individual differences in the structural properties of this network and the actual consequences of its associated behavioral defense responses. The neural activity involved in the processing of aversive experience and stimuli is not just the result of a temporary fearful overreaction prompted by choice-related information, but rather a stable component of one's own preference function, reflecting a specific pattern of neural activity encoded in the functional and structural construction of a limbic-somatosensory neural system anticipating a heightened aversive state of the brain. Even when no choice is required, individual differences in the intrinsic responsiveness of this interoceptive system reflect the impact of anticipated negative effects on evaluative processes, leading to a preference for avoiding losses rather than acquiring greater but riskier gains. [ 59 ]
Individual differences in loss aversion are related to variables such as age, [ 60 ] gender, and genetic factors, [ 61 ] all of which affect thalamic norepinephrine transmission, as well as neural structure and activities. Outcome anticipation and ensuing loss aversion involve multiple neural systems, showing functional and structural individual variability directly related to the actual outcomes of choices. In one study, adolescents and adults were found to be similarly loss-averse at the behavioural level, but they demonstrated different underlying neural responses to the process of rejecting gambles. Although adolescents rejected the same proportion of trials as adults, adolescents displayed greater caudate and frontal pole activation than adults to achieve this. These findings suggest a difference in neural development during the avoidance of risk. It is possible that adding affectively arousing factors (e.g. peer influences) may overwhelm the reward-sensitive regions of the adolescent decision-making system, leading to risk-seeking behaviour. On the other hand, although men and women did not differ in their behavioural task performance, men showed greater neural activation than women in various areas during the task. Loss of striatal dopamine neurons is associated with reduced risk-taking behaviour. Acute administration of D2 dopamine agonists may cause an increase in risky choices in humans. This suggests that dopamine acting on the striatum and possibly other mesolimbic structures can modulate loss aversion by reducing loss prediction signalling. [ 62 ]
In genetics , loss of heterozygosity ( LOH ) is a type of genetic abnormality in diploid organisms in which one copy of an entire gene and its surrounding chromosomal region are lost. [ 1 ] Since diploid cells have two copies of their genes, one from each parent, a single copy of the lost gene still remains when this happens, but any heterozygosity (slight differences between the versions of the gene inherited from each parent) is no longer present.
The loss of heterozygosity is a common occurrence in cancer development. For it to matter, a heterozygous state must originally be present, in which one copy of a tumor suppressor gene in the region of interest is already non-functional. Many people remain healthy in this state, because there still is one functional gene left on the other chromosome of the chromosome pair . The remaining copy of the tumor suppressor gene can then be inactivated by a point mutation or lost via other mechanisms, such as a loss of heterozygosity event, leaving no tumor suppressor gene to protect the body. Loss of heterozygosity does not imply a homozygous state (which would require the presence of two identical alleles in the cell). The exact targets for LOH are not characterised for all chromosomal losses in cancer, but some are very well mapped. Some examples are 17p13 loss in multiple cancer types, where a copy of the TP53 gene gets inactivated; 13q14 loss in retinoblastoma, with RB1 gene deletion; or 11p13 loss in Wilms' tumor, where the WT1 gene is lost. [ 2 ] Other commonly lost chromosomal loci are still being investigated in terms of potential tumor suppressors located in those regions.
Copy-neutral LOH is thus called because no net change in the copy number occurs in the affected individual. Possible causes for copy-neutral LOH include acquired uniparental disomy (UPD) and gene conversion. In UPD, a person receives two copies of a chromosome, or part of a chromosome, from one parent and no copies from the other parent due to errors in meiosis I or meiosis II. This acquired homozygosity could lead to development of cancer if the individual inherited a non-functional allele of a tumor suppressor gene.
In tumor cells copy-neutral LOH can be biologically equivalent to the second hit in the Knudson hypothesis. [ 3 ] Acquired UPD is quite common in both hematologic and solid tumors, and is reported to constitute 20 to 80% of the LOH seen in human tumors. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Determination of virtual karyotypes using SNP-based arrays can provide genome-wide copy number and LOH status, including detection of copy-neutral LOH. Copy-neutral LOH cannot be detected by arrayCGH, FISH, or conventional cytogenetics. SNP-based arrays are preferred for virtual karyotyping of tumors and can be performed on fresh or paraffin-embedded tissues.
The classical example of such a loss of protecting genes is hereditary retinoblastoma , in which one parent's contribution of the tumor suppressor Rb1 is flawed. Although most cells will have a functional second copy, chance loss of heterozygosity events in individual cells almost invariably lead to the development of this retinal cancer in the young child.
The genes BRCA1 and BRCA2 show loss of heterozygosity in samplings of tumors from patients who have germline mutations . [ citation needed ] BRCA1/2 are genes that produce proteins which regulate the DNA repair pathway by binding to Rad51 . [ citation needed ]
In breast , ovarian , pancreatic , and prostate cancers, a core enzyme employed in homologous recombination repair (HRR) of DNA damage is often defective due to LOH, that is genetic defects in both copies (in the diploid human genome) of the gene encoding an enzyme necessary for HRR. [ 8 ] Such LOH in these different cancers was found for DNA repair genes BRCA1 , BRCA2 , BARD1 , PALB2 , FANCC , RAD51C and RAD51D . [ 8 ] Reduced ability to accurately repair DNA damages by homologous recombination may lead to compensating inaccurate repair, increased mutation and progression to cancer.
Loss of heterozygosity can be identified in cancers by noting the presence of heterozygosity at a genetic locus in an organism's germline DNA , and the absence of heterozygosity at that locus in the cancer cells. This is often done using polymorphic markers, such as microsatellites or single-nucleotide polymorphisms , for which the two parents contributed different alleles . Genome-wide LOH status of fresh or paraffin embedded tissue samples can be assessed by virtual karyotyping using SNP arrays.
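As a toy illustration of this marker-based comparison (not a clinical pipeline; the marker names are only examples, and the allele counts and the 0.9 threshold are invented), one can flag loci that are heterozygous in the germline but show essentially a single allele in the tumor:

```python
# Toy LOH call at polymorphic markers: a locus heterozygous in the germline
# but effectively single-allele in the tumor is flagged.  All counts and the
# 0.9 threshold are invented for illustration.

germline = {"D17S796": ("A", "B"), "D13S153": ("A", "B"), "rs1042522": ("A", "A")}
tumor_allele_counts = {
    "D17S796": {"A": 48, "B": 2},    # near-complete loss of allele B
    "D13S153": {"A": 25, "B": 22},   # both alleles retained
    "rs1042522": {"A": 50, "B": 0},  # uninformative: germline is homozygous
}

for locus, (a1, a2) in germline.items():
    if a1 == a2:
        continue                     # only heterozygous loci are informative
    counts = tumor_allele_counts[locus]
    major_fraction = max(counts.values()) / sum(counts.values())
    if major_fraction > 0.9:
        print(f"{locus}: loss of heterozygosity in the tumor sample")
```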
It has been proposed that LOH may limit the longevity of asexual organisms. [ 9 ] [ 10 ] The minor allele in heterozygous areas of the genome is likely to have mild fitness consequences compared to de-novo mutations because selection has had time to remove deleterious alleles. When allelic gene conversion removes the major allele at these sites, organisms are likely to experience a mild decline in fitness. Because LOH is much more common than de-novo mutation, and because the fitness consequences are closer to neutrality, this process should drive Muller's ratchet more quickly than de-novo mutations. While this process has received little experimental investigation, the major signature of asexuality in metazoan genomes appears to be genome-wide LOH, a sort of anti-Meselson effect .
Loss of load in an electrical grid is a term used to describe the situation when the available generation capacity is less than the system load . [ 1 ] Multiple probabilistic reliability indices for the generation systems are using loss of load in their definitions, with the more popular [ 2 ] being Loss of Load Probability ( LOLP ) that characterizes a probability of a loss of load occurring within a year. [ 1 ] Loss of load events are calculated before the mitigating actions (purchasing electricity from other systems, load shedding ) are taken, so a loss of load does not necessarily cause a blackout .
Multiple reliability indices for the electrical generation are based on the loss of load being observed/calculated over a long interval (one or multiple years) in relatively small increments (an hour or a day). The total number of increments inside the long interval is designated as {\displaystyle N} (e.g., for a yearlong interval {\displaystyle N=365} if the increment is a day, {\displaystyle N=8760} if the increment is an hour): [ 3 ]
A typically accepted design goal for {\displaystyle LOLE} is 0.1 day per year [ 10 ] (" one-day-in-ten-years criterion " [ 10 ] a.k.a. "1 in 10" [ 11 ] ), corresponding to {\displaystyle {LOLP}={\frac {1}{10\cdot 365}}\approx 0.000274} . In the US, the threshold is set by the regional entities , like Northeast Power Coordinating Council : [ 11 ]
resources will be planned in such a manner that ... the probability of disconnecting non-interruptible customers will be no more than once in ten years
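As a minimal sketch of how such an index can be computed from a daily adequacy assessment (the per-day probabilities below are invented purely for illustration), LOLE is the expected number of loss-of-load days per year and LOLP the corresponding probability per increment:

```python
# Toy loss-of-load calculation.  Each entry is the estimated probability that
# demand exceeds available generation on that day; all values are invented.

daily_loss_prob = [0.0001] * 300 + [0.0005] * 65   # 365 daily estimates

lole_days_per_year = sum(daily_loss_prob)          # expected loss-of-load days per year
lolp = lole_days_per_year / len(daily_loss_prob)   # probability per daily increment

print(lole_days_per_year)   # 0.0625 days/year, under the 0.1 day/year design goal
print(lolp)
```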
This electricity-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Loss_of_load |
Loss on ignition (LOI) is a test used in inorganic analytical chemistry and soil science , particularly in the analysis of minerals and the chemical makeup of soil. It consists of strongly heating ( "igniting" ) a sample of the material at a specified temperature, allowing volatile substances to escape, until its mass ceases to change. This may be done in air or in some other reactive or inert atmosphere. The simple test typically consists of placing a few grams of the material in a tared, pre-ignited crucible and determining its mass, placing it in a temperature-controlled furnace for a set time, cooling it in a controlled (e.g., water-free, CO2-free) atmosphere, and re-determining the mass. The process may be repeated to show that the mass change is complete. A variant of the test in which mass change is continually monitored as the temperature changes is called thermogravimetry .
The loss on ignition is reported as part of an elemental or oxide analysis of a mineral. The volatile materials lost usually consist of 'combined water' ( hydrates and labile hydroxy-compounds) and carbon dioxide from carbonates. It may be used as a quality test, commonly carried out for minerals such as iron ore . For example, the loss on ignition of fly ash is composed of contaminants and unburnt fuel.
In pyroprocessing industries such as lime , calcined bauxite , refractories or cement manufacture, the loss on ignition of the raw material is roughly equivalent to the mass loss it will experience in a kiln . Likewise, in minerals, the loss on ignition indicates the material actually lost during smelting or refining in a furnace or smelter. The loss on ignition of the product indicates the extent to which the pyroprocessing was incomplete. ASTM tests are defined for limestone and lime [ 1 ] and cement [ 2 ] among others.
Soil is composed of living organisms, water, carbonates, carbon-containing material, decomposing matter and much more. To determine how much one of these soil components makes up the entire soil mass, the LOI procedure is implemented. Initially, the researcher will take the mass of the sample prior to LOI and then place the sample into a heating device. Depending on what the researcher is trying to determine in the soil, the temperature of the device can be set to the corresponding temperature. The soil sample is kept at this temperature for an extended period of time, after which it is removed and allowed to cool down before re-weighing the sample. The amount of mass lost after the LOI treatment is equal to the mass of the component the researcher is trying to determine. The typical set of materials needed to use LOI includes: a high precision mass balance, a drying oven, a temperature-controlled furnace, preheated crucibles and a soil sample from the location of interest.
There are many ways to properly utilize loss on ignition for scientific research. [ 3 ] A soil sample left overnight in a drying oven at 100 °C would have its water content completely evaporated by morning. [ 4 ] This could allow the researchers to determine the amount of water initially in the soil sample and its porosity by comparing the change in weight of the sample before and after the evaporation . This new weight of the sample is called the dry weight and its previous weight is called the wet weight.
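A minimal sketch of the underlying arithmetic, with invented masses for the pre-ignited crucible and for the sample before and after heating:

```python
# Minimal loss-on-ignition calculation from before/after weighings.
# All masses are invented for illustration.

tare = 25.000            # g, pre-ignited crucible
wet  = tare + 10.000     # g, crucible + sample before heating
dry  = tare + 8.650      # g, crucible + sample after heating to constant mass

sample_mass = wet - tare
mass_loss = wet - dry
loi_percent = 100.0 * mass_loss / sample_mass

print(f"LOI = {loi_percent:.1f}% of the initial sample mass")   # 13.5%
```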
A general procedure of how to perform a loss on ignition is as follows: [ 5 ]
Typically, this method is used to determine water content, carbon content, the amount of organic matter, and the amount of volatile compounds. [ 6 ] LOI is also used in the cement industry, which operates furnaces (e.g. cement kilns ) in the 950 °C range; combustion engineers also use LOI, but at temperatures below the 950 °C range. [ 7 ]
In many research labs, the use of asbestos gloves is required when operating the furnace because it can reach very high temperatures. [ 6 ] The use of face masks is also recommended at higher temperatures to ensure the safety of researchers and junior lab members. [ 8 ] It is also recommended that researchers performing the LOI procedure remove all jewelry and watches as they are excellent conductors of heat. When removing samples at high temperatures, these accessories can easily heat up and result in burns. [ 9 ]
The cement industry uses the LOI method by heating a cement sample to 900-1000 °C until the mass of the sample stabilizes. Once the mass stabilizes, the mass loss due to LOI is determined. This is usually done to assess the high water content in the cement or carbonation, as these factors diminish the quality of cement. [ 10 ] High losses are generally attributed to poor cement storage conditions or manipulation of cement quality by suppliers. This practice ensures that the cement used on a site adheres to the correct composition, meeting safety protocols and customer requirements.
In the mining industry, the utilization of LOI is essential for determining the moisture and volatile material present in the rock. Thus, when performing whole-rock analysis to ascertain total volatiles, the LOI method is employed. To eliminate all volatiles and convert all iron into iron oxides, the LOI temperature is set at 900-1000 °C. | https://en.wikipedia.org/wiki/Loss_on_ignition |
The Lossen rearrangement is the conversion of a hydroxamate ester to an isocyanate . Typically O-acyl, sulfonyl, or phosphoryl O-derivatives are employed. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The isocyanate can be used further to generate ureas in the presence of amines or generate amines in the presence of H 2 O.
The mechanism below begins with an O-acylated hydroxamic acid derivative that is treated with base to form an isocyanate that generates an amine and CO 2 gas in the presence of H 2 O. The hydroxamic acid derivative is first converted to its conjugate base by abstraction of a hydrogen by a base. Spontaneous rearrangement releases a carboxylate anion to produce the isocyanate intermediate. The isocyanate is then hydrolyzed in the presence of H 2 O. Finally, the respective amine and CO 2 are generated by abstraction of a proton with a base and decarboxylation .
Hydroxamic acids are commonly synthesized from their corresponding esters . [ 5 ] | https://en.wikipedia.org/wiki/Lossen_rearrangement |
Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data with no loss of information . Lossless compression is possible because most real-world data exhibits statistical redundancy . [ 1 ] By contrast, lossy compression permits reconstruction only of an approximation of the original data , though usually with greatly improved compression rates (and therefore reduced media sizes).
By operation of the pigeonhole principle , no lossless compression algorithm can shrink the size of all possible data: Some data will get longer by at least one symbol or bit.
Compression algorithms are usually effective for human- and machine-readable documents and cannot shrink the size of random data that contain no redundancy . Different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Lossless data compression is used in many applications. For example, it is used in the ZIP file format and in the GNU tool gzip . It is also often used as a component within lossy data compression technologies (e.g. lossless mid/side joint stereo preprocessing by MP3 encoders and other lossy audio encoders). [ 2 ]
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data would be unfavourable. Common examples are executable programs, text documents, and source code. Some image file formats, like PNG or GIF , use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. Lossless audio formats are most often used for archiving or production purposes, while smaller lossy audio files are typically used on portable players and in other cases where storage space is limited or exact replication of the audio is unnecessary.
Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (i.e. frequently encountered) data will produce shorter output than "improbable" data.
The primary encoding algorithms used to produce bit sequences are Huffman coding (also used by the deflate algorithm ) and arithmetic coding . Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy , whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
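As an illustrative sketch (not the deflate implementation itself), a basic Huffman code can be built with a priority queue; production encoders add refinements such as code-length limits and canonical ordering.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(data: bytes) -> dict:
    """Build a Huffman code (symbol -> bit string) for the given data."""
    freq = Counter(data)
    tiebreak = count()   # keeps heap entries comparable when frequencies tie
    heap = [(f, next(tiebreak), {sym: ""}) for sym, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:   # degenerate one-symbol input
        return {sym: "0" for sym in heap[0][2]}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

codes = huffman_codes(b"abracadabra")
print(codes)   # the most frequent byte (here 'a') gets the shortest code
```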
There are two primary ways of constructing statistical models: in a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. Adaptive models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm ( general-purpose meaning that they can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that are not of the form for which they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed images .
These techniques take advantage of the specific characteristics of images such as the common phenomenon of contiguous 2-D areas of similar tones.
Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values.
This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes.
For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken.
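A minimal sketch of this left-neighbor differencing for one row of pixel values, together with its exact inverse:

```python
# Left-neighbor delta filter for one row of pixel values, and its inverse.
# Smooth images produce mostly small differences, which entropy coders exploit.

def delta_encode(row):
    prev, out = 0, []
    for v in row:
        out.append(v - prev)   # difference to the left neighbor
        prev = v
    return out

def delta_decode(deltas):
    prev, out = 0, []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

row = [100, 101, 103, 103, 102, 250]
enc = delta_encode(row)
assert delta_decode(enc) == row
print(enc)   # [100, 1, 2, 0, -1, 148] -- mostly small values
```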
A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums. This is called discrete wavelet transform . JPEG2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. So the values are increased, increasing file size, but the distribution of values could be more peaked. [ citation needed ]
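A minimal sketch of one such pairwise step on integers (a Haar-style decomposition; the actual JPEG 2000 integer filters are more elaborate), assuming an even-length signal for simplicity:

```python
# One level of the pairwise sum/difference decomposition described above.
# The sums form the lower-resolution level; the differences are typically small.

def haar_step(values):
    sums, diffs = [], []
    for a, b in zip(values[0::2], values[1::2]):
        sums.append(a + b)
        diffs.append(a - b)
    return sums, diffs

def haar_inverse(sums, diffs):
    out = []
    for s, d in zip(sums, diffs):
        a = (s + d) // 2    # exact: s = a + b and d = a - b have the same parity
        out.extend([a, s - a])
    return out

signal = [10, 12, 13, 11, 40, 42, 41, 43]
sums, diffs = haar_step(signal)
assert haar_inverse(sums, diffs) == signal
print(sums, diffs)   # [22, 24, 82, 84] [-2, 2, -2, -2]
```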
The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy. [ 3 ]
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the United States and other countries and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing still image files in favor of Portable Network Graphics (PNG), which combines the LZ77 -based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003. [ 4 ]
Many of the lossless compression techniques used for text also work reasonably well for indexed images , but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space).
As mentioned previously, lossless sound compression is a somewhat specialized area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data — essentially using autoregressive models to predict the "next" value and encoding the (possibly small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the error ) tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
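A toy sketch of this predict-and-encode-residual idea, using a fixed order-2 (linear extrapolation) predictor; real codecs such as FLAC fit higher-order predictors and then entropy-code the residuals, which this sketch omits.

```python
# Toy predictive coder for audio samples: predict each sample by linear
# extrapolation from the previous two and keep only the (mostly small) residual.

def encode_residuals(samples):
    residuals, p1, p2 = [], 0, 0          # p1, p2 = last two samples
    for s in samples:
        prediction = 2 * p1 - p2          # order-2 fixed predictor
        residuals.append(s - prediction)
        p2, p1 = p1, s
    return residuals

def decode_residuals(residuals):
    samples, p1, p2 = [], 0, 0
    for r in residuals:
        s = 2 * p1 - p2 + r
        samples.append(s)
        p2, p1 = p1, s
    return samples

samples = [0, 3, 7, 12, 14, 15, 15, 13]
res = encode_residuals(samples)
assert decode_residuals(res) == samples
print(res)   # [0, 3, 1, 1, -3, -1, -1, -2] -- small, repetitive values
```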
It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression , of successive images within a sequence). This is called delta encoding (from the Greek letter Δ , which in mathematics, denotes a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
No lossless compression algorithm can efficiently compress all possible data (see § Limitations for more on this) . For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Some of the most common lossless compression algorithms are listed below.
See list of lossless video codecs
Cryptosystems often compress data (the "plaintext") before encryption for added security. When properly implemented, compression greatly increases the unicity distance by removing patterns that might facilitate cryptanalysis . [ 9 ] However, many ordinary lossless compression algorithms produce headers, wrappers, tables, or other predictable output that might instead make cryptanalysis easier. Thus, cryptosystems must utilize compression algorithms whose output does not contain these predictable patterns.
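For instance, zlib/deflate output begins with a short, fixed header, which is exactly the kind of predictable structure this concern refers to; a minimal illustration using Python's standard zlib module:

```python
import zlib

# zlib-compressed data starts with a predictable two-byte header (0x78 plus a
# flag byte), i.e. known plaintext that a cryptosystem would prefer to avoid.
blob = zlib.compress(b"attack at dawn, attack at dawn, attack at dawn")
print(blob[:2].hex())   # '789c' with the default compression level
```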
Genetics compression algorithms (not to be confused with genetic algorithms ) are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and specific algorithms adapted to genetic data. In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression much faster than leading general-purpose compression utilities. [ 10 ]
Genomic sequence compression algorithms, also known as DNA sequence compressors, explore the fact that DNA sequences have characteristic properties, such as inverted repeats. The most successful compressors are XM and GeCo. [ 11 ] For eukaryotes XM is slightly better in compression ratio, though for sequences larger than 100 MB its computational requirements are impractical.
Self-extracting executables contain a compressed application and a decompressor. When executed, the decompressor transparently decompresses and runs the original application. This is especially often used in demo coding, where competitions are held for demos with strict size limits, as small as 1 kilobyte .
This type of compression is not strictly limited to binary executables, but can also be applied to scripts, such as JavaScript .
Lossless compression algorithms and their implementations are routinely tested in head-to-head benchmarks . There are a number of better-known compression benchmarks. Some benchmarks cover only the data compression ratio , so winners in these benchmarks may be unsuitable for everyday use due to the slow speed of the top performers. Another drawback of some benchmarks is that their data files are known, so some program writers may optimize their programs for best performance on a particular data set. The winners on these benchmarks often come from the class of context-mixing compression software.
Matt Mahoney , in his February 2010 edition of the free booklet Data Compression Explained , additionally lists the following: [ 12 ]
The Compression Ratings website published a chart summary of the "frontier" in compression ratio and time. [ 15 ]
The Compression Analysis Tool [ 16 ] is a Windows application that enables end users to benchmark the performance characteristics of streaming implementations of LZF4, Deflate, ZLIB, GZIP, BZIP2 and LZMA using their own data. It produces measurements and charts with which users can compare the compression speed, decompression speed and compression ratio of the different compression methods and to examine how the compression level, buffer size and flushing operations affect the results.
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument called the pigeonhole principle , as follows: [ 17 ] [ 18 ]
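In outline: there are 2^N distinct files of N bits, but only 2^N − 1 distinct files of length strictly less than N (summing over all lengths from 0 through N − 1). A lossless algorithm must map distinct inputs to distinct outputs, since otherwise decompression could not tell them apart; the 2^N inputs therefore cannot all fit into the smaller set of shorter outputs, so at least one input of length N must map to an output that is no shorter.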
Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. In theory, only a single additional bit is required to tell the decoder that the normal coding has been turned off for the entire input; however, most encoding algorithms use at least one full byte (and typically more than one) for this purpose. For example, deflate compressed files never need to grow by more than 5 bytes per 65,535 bytes of input.
In fact, if we consider files of length N, if all files were equally probable, then for any lossless compression that reduces the size of some file, the expected length of a compressed file (averaged over all possible files of length N) must necessarily be greater than N. [ citation needed ] So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is useful only when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better.
Thus, the main lesson from the argument is not that one risks big losses, but merely that one cannot always win. To choose an algorithm always means implicitly to select a subset of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data.
The "trick" that allows lossless compression algorithms, used on the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithms are designed to act on all have some form of easily modeled redundancy that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa.
In particular, files of random data cannot be consistently compressed by any conceivable lossless data compression algorithm; indeed, this result is used to define the concept of randomness in Kolmogorov complexity . [ 19 ]
It is provably impossible to create an algorithm that can losslessly compress any data. While there have been many claims through the years of companies achieving "perfect compression" where an arbitrary number N of random bits can always be compressed to N − 1 bits, these kinds of claims can be safely discarded without even looking at any further details regarding the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 1. [ 18 ]
On the other hand, it has also been proven [ 20 ] that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity. Hence it is possible that any particular file, even if it appears random, may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant pi , which appear random but can be generated by a very small program. However, even though it cannot be determined whether a particular file is incompressible, a simple theorem about incompressible strings shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor).
Abstractly, a compression algorithm can be viewed as a function on sequences (normally of octets). Compression is successful if the resulting sequence is shorter than the original sequence (and the instructions for the decompression map). For a compression algorithm to be lossless , the compression map must form an injection from "plain" to "compressed" bit sequences. The pigeonhole principle prohibits a bijection between the collection of sequences of length N and any subset of the collection of sequences of length N −1. Therefore, it is not possible to produce a lossless algorithm that reduces the size of every possible input sequence. [ 21 ]
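A brute-force check of this counting argument for very short inputs makes it concrete; the following is an illustrative Python sketch, not taken from any particular compressor:

```python
from itertools import product

N = 3
inputs = ["".join(p) for p in product("01", repeat=N)]                      # 2**N = 8 inputs
shorter = ["".join(p) for k in range(N) for p in product("01", repeat=k)]   # 2**N - 1 = 7 outputs

print(len(inputs), "inputs of length", N, "but only", len(shorter), "strictly shorter outputs")
# A lossless (injective) map cannot send all 8 inputs to the 7 shorter strings,
# so at least one input must map to an output that is not shorter.
```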
Real compression algorithm designers accept that streams of high information entropy cannot be compressed, and accordingly, include facilities for detecting and handling this condition. An obvious way of detection is applying a raw compression algorithm and testing if its output is smaller than its input. Sometimes, detection is made by heuristics ; for example, a compression application may consider files whose names end in ".zip", ".arj" or ".lha" uncompressible without any more sophisticated detection. A common way of handling this situation is quoting input, or uncompressible parts of the input in the output, minimizing the compression overhead. For example, the zip data format specifies the 'compression method' of 'Stored' for input files that have been copied into the archive verbatim. [ 22 ]
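As an illustration of the try-then-store approach, the sketch below uses Python's standard zlib module; the one-byte flag framing is a hypothetical container, not the actual zip format logic:

```python
import os
import zlib

def pack(data: bytes) -> bytes:
    """Keep whichever of the compressed or raw form is smaller; flag the choice in one byte."""
    compressed = zlib.compress(data, 9)
    if len(compressed) < len(data):
        return b"\x01" + compressed
    return b"\x00" + data            # "stored": incompressible input is quoted verbatim

def unpack(blob: bytes) -> bytes:
    return zlib.decompress(blob[1:]) if blob[:1] == b"\x01" else blob[1:]

redundant, noise = b"abc" * 1000, os.urandom(4096)
assert unpack(pack(redundant)) == redundant     # compressible data gets compressed
assert unpack(pack(noise)) == noise             # random data grows by only the one flag byte
```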
Mark Nelson, in response to claims of "magic" compression algorithms appearing in comp.compression, has constructed a 415,241 byte binary file of highly entropic content, and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute it without error. [ 23 ] A similar challenge, with $5,000 as reward, was issued by Mike Goldman. [ 24 ] | https://en.wikipedia.org/wiki/Lossless_compression |
Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data with no loss of information . Lossless compression is possible because most real-world data exhibits statistical redundancy . [ 1 ] By contrast, lossy compression permits reconstruction only of an approximation of the original data , though usually with greatly improved compression rates (and therefore reduced media sizes).
By operation of the pigeonhole principle , no lossless compression algorithm can shrink the size of all possible data: Some data will get longer by at least one symbol or bit.
Compression algorithms are usually effective for human- and machine-readable documents and cannot shrink the size of random data that contain no redundancy . Different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Lossless data compression is used in many applications. For example, it is used in the ZIP file format and in the GNU tool gzip . It is also often used as a component within lossy data compression technologies (e.g. lossless mid/side joint stereo preprocessing by MP3 encoders and other lossy audio encoders). [ 2 ]
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data would be unfavourable. Common examples are executable programs, text documents, and source code. Some image file formats, like PNG or GIF , use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. Lossless audio formats are most often used for archiving or production purposes, while smaller lossy audio files are typically used on portable players and in other cases where storage space is limited or exact replication of the audio is unnecessary.
Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (i.e. frequently encountered) data will produce shorter output than "improbable" data.
The primary encoding algorithms used to produce bit sequences are Huffman coding (also used by the deflate algorithm ) and arithmetic coding . Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy , whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
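A toy Huffman coder illustrates the second step, the mapping from a frequency model to bit sequences; this is an illustrative sketch, not the deflate implementation, and it ignores transmission of the code table and padding of the final byte:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code from symbol frequencies (the statistical model)."""
    heap = [[freq, i, sym] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    codes = {node[2]: "" for node in heap}
    if len(heap) == 1:                    # degenerate input with a single distinct symbol
        codes[heap[0][2]] = "0"
    while len(heap) > 1:                  # repeatedly merge the two least frequent nodes
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for sym in lo[2]:
            codes[sym] = "0" + codes[sym]
        for sym in hi[2]:
            codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], lo[2] + hi[2]])
    return codes

text = "abracadabra"
codes = huffman_code(text)
bits = "".join(codes[c] for c in text)
print(codes)                              # frequent symbols receive the shortest codewords
print(len(bits), "bits vs", 8 * len(text), "bits for 8-bit characters")
```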
There are two primary ways of constructing statistical models: in a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. Adaptive models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm ( general-purpose meaning that they can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that are not of the form for which they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed images .
These techniques take advantage of the specific characteristics of images such as the common phenomenon of contiguous 2-D areas of similar tones.
Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values.
This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes.
For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken.
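A minimal sketch of the left-neighbour difference filter described in the preceding paragraphs (illustrative Python/NumPy, not any particular codec's filter; real formats such as PNG choose among several predictors per row):

```python
import numpy as np

def delta_filter(img: np.ndarray) -> np.ndarray:
    """Replace each pixel by the difference to its left neighbour (first column kept)."""
    out = img.astype(np.int16).copy()
    out[:, 1:] -= img[:, :-1].astype(np.int16)
    return out

def delta_unfilter(residual: np.ndarray) -> np.ndarray:
    return np.cumsum(residual, axis=1).astype(np.uint8)   # inverse: running sum along each row

img = np.tile(np.arange(8, dtype=np.uint8) * 10, (4, 1))   # smooth horizontal gradient
res = delta_filter(img)
assert np.array_equal(delta_unfilter(res), img)            # the filter is lossless
print(np.unique(res))   # residuals cluster around a few small values, which are easier to entropy-code
```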
A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums. This is called the discrete wavelet transform . JPEG 2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. The values are therefore scaled up, which increases file size, but the distribution of values may become more peaked. [ citation needed ]
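The pairwise sum-and-difference step can be made exactly reversible in integer arithmetic; the sketch below shows a Haar-style lifting step (often called the S-transform), which is illustrative and is not the JPEG 2000 filter itself:

```python
def s_transform_pair(a: int, b: int) -> tuple:
    """One Haar-style lifting step: a difference plus a 'sum' kept at half resolution."""
    d = a - b
    s = b + (d >> 1)        # floor average of a and b
    return s, d

def inverse_s_transform_pair(s: int, d: int) -> tuple:
    b = s - (d >> 1)
    a = b + d
    return a, b

for a, b in [(5, 3), (3, 5), (4, 3), (0, 255)]:
    s, d = s_transform_pair(a, b)
    assert inverse_s_transform_pair(s, d) == (a, b)   # lossless round-trip in pure integers
```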
The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy. [ 3 ]
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the United States and other countries and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing still image files in favor of Portable Network Graphics (PNG), which combines the LZ77 -based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003. [ 4 ]
Many of the lossless compression techniques used for text also work reasonably well for indexed images , but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space).
As mentioned previously, lossless sound compression is a somewhat specialized area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data — essentially using autoregressive models to predict the "next" value and encoding the (possibly small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the error ) tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
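A sketch of the fixed second-order predictor idea, similar in spirit to the fixed predictors in codecs like Shorten or FLAC but illustrative rather than taken from either (the test signal and sample rate below are made up):

```python
import numpy as np

t = np.arange(2000)
samples = np.round(3000 * np.sin(2 * np.pi * 220 * t / 44100)).astype(np.int32)

# Second-order fixed predictor: x_hat[n] = 2*x[n-1] - x[n-2] (linear extrapolation).
predicted = 2 * samples[1:-1] - samples[:-2]
residual = samples[2:] - predicted

print(int(np.abs(samples).mean()), int(np.abs(residual).mean()))
# The residuals are far smaller than the raw samples, so an entropy coder
# (e.g. Rice/Golomb codes) can represent them in very few bits each.
```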
It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression , of successive images within a sequence). This is called delta encoding (from the Greek letter Δ , which in mathematics, denotes a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
No lossless compression algorithm can efficiently compress all possible data (see § Limitations for more on this) . For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Some of the most common lossless compression algorithms are listed below.
See list of lossless video codecs
Cryptosystems often compress data (the "plaintext") before encryption for added security. When properly implemented, compression greatly increases the unicity distance by removing patterns that might facilitate cryptanalysis . [ 9 ] However, many ordinary lossless compression algorithms produce headers, wrappers, tables, or other predictable output that might instead make cryptanalysis easier. Thus, cryptosystems must utilize compression algorithms whose output does not contain these predictable patterns.
Genetics compression algorithms (not to be confused with genetic algorithms ) are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and specific algorithms adapted to genetic data. In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression much faster than leading general-purpose compression utilities. [ 10 ]
Genomic sequence compression algorithms, also known as DNA sequence compressors, exploit the fact that DNA sequences have characteristic properties, such as inverted repeats. The most successful compressors are XM and GeCo. [ 11 ] For eukaryotes, XM is slightly better in compression ratio, though for sequences larger than 100 MB its computational requirements are impractical.
Self-extracting executables contain a compressed application and a decompressor. When executed, the decompressor transparently decompresses and runs the original application. This is especially often used in demo coding, where competitions are held for demos with strict size limits, as small as 1 kilobyte .
This type of compression is not strictly limited to binary executables, but can also be applied to scripts, such as JavaScript .
Lossless compression algorithms and their implementations are routinely tested in head-to-head benchmarks . There are a number of better-known compression benchmarks. Some benchmarks cover only the data compression ratio , so winners in these benchmarks may be unsuitable for everyday use due to the slow speed of the top performers. Another drawback of some benchmarks is that their data files are known, so some program writers may optimize their programs for best performance on a particular data set. The winners on these benchmarks often come from the class of context-mixing compression software.
Matt Mahoney , in his February 2010 edition of the free booklet Data Compression Explained , additionally lists the following: [ 12 ]
The Compression Ratings website published a chart summary of the "frontier" in compression ratio and time. [ 15 ]
The Compression Analysis Tool [ 16 ] is a Windows application that enables end users to benchmark the performance characteristics of streaming implementations of LZF4, Deflate, ZLIB, GZIP, BZIP2 and LZMA using their own data. It produces measurements and charts with which users can compare the compression speed, decompression speed and compression ratio of the different compression methods and to examine how the compression level, buffer size and flushing operations affect the results.
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument called the pigeonhole principle , as follows: [ 17 ] [ 18 ]
Mark Nelson, in response to claims of "magic" compression algorithms appearing in comp.compression, has constructed a 415,241 byte binary file of highly entropic content, and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute it without error. [ 23 ] A similar challenge, with $5,000 as reward, was issued by Mike Goldman. [ 24 ] | https://en.wikipedia.org/wiki/Lossless_compression_benchmarks |
Lossless predictive audio compression ( LPAC ) is an improved lossless audio compression algorithm developed by Tilman Liebchen, Marcus Purat and Peter Noll at the Institute for Telecommunications, Technische Universität Berlin (TU Berlin), [ 1 ] to compress PCM audio in a lossless manner, in contrast to lossy compression algorithms.
It is no longer developed because an advanced version of it has become an official standard under the name of MPEG-4 Audio Lossless Coding .
This sound technology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lossless_predictive_audio_compression |
In information technology , lossy compression or irreversible compression is the class of data compression methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storing, handling, and transmitting content. Higher degrees of approximation create coarser images as more details are removed. This is opposed to lossless data compression (reversible data compression) which does not degrade the data. The amount of data reduction possible using lossy compression is much higher than using lossless techniques.
Well-designed lossy compression technology often reduces file sizes significantly before degradation is noticed by the end-user. Even when noticeable by the user, further data reduction may be desirable (e.g., for real-time communication or to reduce transmission times or storage needs). The most widely used lossy compression algorithm is the discrete cosine transform (DCT), first published by Nasir Ahmed , T. Natarajan and K. R. Rao in 1974.
Lossy compression is most commonly used to compress multimedia data ( audio , video , and images ), especially in applications such as streaming media and internet telephony . By contrast, lossless compression is typically required for text and data files, such as bank records and text articles. It can be advantageous to make a lossless master file from which additional copies can then be produced. This allows one to avoid basing new compressed copies on a lossy source file, which would yield additional artifacts and further unnecessary information loss .
It is possible to compress many types of digital data in a way that reduces the size of a computer file needed to store it, or the bandwidth needed to transmit it, with no loss of the full information contained in the original file. A picture, for example, is converted to a digital file by considering it to be an array of dots and specifying the color and brightness of each dot. If the picture contains an area of the same color, it can be compressed without loss by saying "200 red dots" instead of "red dot, red dot, ...(197 more times)..., red dot."
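A minimal run-length encoder in this spirit (illustrative only):

```python
from itertools import groupby

def rle_encode(pixels):
    """'red red red ...' -> [(count, value), ...]"""
    return [(len(list(run)), value) for value, run in groupby(pixels)]

def rle_decode(runs):
    return [value for count, value in runs for _ in range(count)]

row = ["red"] * 200 + ["blue"] * 3
encoded = rle_encode(row)
print(encoded)                       # [(200, 'red'), (3, 'blue')]
assert rle_decode(encoded) == row    # lossless for this highly redundant row
```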
The original data contains a certain amount of information, and there is a lower bound to the size of a file that can still carry all the information. Basic information theory says that there is an absolute limit in reducing the size of this data. When data is compressed, the entropy per bit of its representation increases, and it cannot increase indefinitely. For example, a compressed ZIP file is smaller than its original, but repeatedly compressing the same file will not reduce the size to nothing. Most compression algorithms can recognize when further compression would be pointless and would in fact increase the size of the data.
In many cases, files or data streams contain more information than is needed. For example, a picture may have more detail than the eye can distinguish when reproduced at the largest size intended; likewise, an audio file does not need a lot of fine detail during a very loud passage. Developing lossy compression techniques as closely matched to human perception as possible is a complex task. Sometimes the ideal is a file that provides exactly the same perception as the original, with as much digital information as possible removed; other times, perceptible loss of quality is considered a valid tradeoff.
The terms "irreversible" and "reversible" are preferred over "lossy" and "lossless" respectively for some applications, such as medical image compression, to circumvent the negative implications of "loss". The type and amount of loss can affect the utility of the images. Artifacts or undesirable effects of compression may be clearly discernible yet the result still useful for the intended purpose. Or lossy compressed images may be ' visually lossless ', or in the case of medical images, so-called diagnostically acceptable irreversible compression (DAIC) [ 1 ] may have been applied.
Some forms of lossy compression can be thought of as an application of transform coding , which is a type of data compression used for digital images , digital audio signals , and digital video . The transformation is typically used to enable better (more targeted) quantization . Knowledge of the application is used to choose information to discard, thereby lowering its bandwidth . The remaining information can then be compressed via a variety of methods. When the output is decoded, the result may not be identical to the original input, but is expected to be close enough for the purpose of the application.
The most common form of lossy compression is a transform coding method, the discrete cosine transform (DCT), [ 2 ] which was first published by Nasir Ahmed , T. Natarajan and K. R. Rao in 1974. [ 3 ] DCT is the most widely used form of lossy compression, for popular image compression formats (such as JPEG ), [ 4 ] video coding standards (such as MPEG and H.264/AVC ) and audio compression formats (such as MP3 and AAC ).
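The following sketch illustrates the general idea of DCT-based lossy coding (transform, coarse quantization, inverse transform) using SciPy's DCT routines; it is a toy one-dimensional example with an arbitrary quantization step, not the JPEG pipeline:

```python
import numpy as np
from scipy.fft import dct, idct

x = 100 * np.cos(np.linspace(0, np.pi, 64)) + np.random.default_rng(0).normal(0, 1, 64)

coeffs = dct(x, type=2, norm='ortho')        # energy concentrates in a few low-frequency coefficients
step = 10.0
quantized = np.round(coeffs / step) * step   # coarse quantization: this is where information is lost

x_hat = idct(quantized, type=2, norm='ortho')
print("nonzero coefficients:", np.count_nonzero(quantized), "of", x.size)
print("max reconstruction error:", round(float(np.max(np.abs(x - x_hat))), 2))
```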
In the case of audio data, a popular form of transform coding is perceptual coding , which transforms the raw data to a domain that more accurately reflects the information content. For example, rather than expressing a sound file as the amplitude levels over time, one may express it as the frequency spectrum over time, which corresponds more accurately to human audio perception. While data reduction (compression, be it lossy or lossless) is a main goal of transform coding, it also allows other goals: one may represent data more accurately for the original amount of space [ 5 ] – for example, in principle, if one starts with an analog or high-resolution digital master , an MP3 file of a given size should provide a better representation than raw uncompressed audio stored in a WAV or AIFF file of the same size. This is because uncompressed audio can only reduce file size by lowering bit rate or depth, whereas compressing audio can reduce size while maintaining bit rate and depth. This compression becomes a selective loss of the least significant data, rather than losing data across the board. Further, transform coding may provide a better domain for manipulating or otherwise editing the data – for example, equalization of audio is most naturally expressed in the frequency domain (boost the bass, for instance) rather than in the raw time domain.
From this point of view, perceptual encoding is not essentially about discarding data, but rather about a better representation of data. Another use is for backward compatibility and graceful degradation : in color television, encoding color via a luminance - chrominance transform domain (such as YUV ) means that black-and-white sets display the luminance, while ignoring the color information. Another example is chroma subsampling : the use of color spaces such as YIQ , used in NTSC , allows one to reduce the resolution on the components to accord with human perception – humans have highest resolution for black-and-white (luma), lower resolution for mid-spectrum colors like yellow and green, and lowest for reds and blues – thus NTSC displays approximately 350 pixels of luma per scanline , 150 pixels of yellow vs. green, and 50 pixels of blue vs. red, which are proportional to human sensitivity to each component.
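A sketch of the luma/chroma split with 4:2:0-style chroma subsampling; the BT.601 luma weights are standard, but the unscaled colour-difference components and the random test data are simplifications for illustration:

```python
import numpy as np

rgb = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3)).astype(np.float64)

# BT.601 luma weights; the eye is most sensitive to luma, so it is kept at full resolution.
y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

# Colour-difference (chroma) components, kept at half resolution in each direction (4:2:0 style).
cb = (rgb[..., 2] - y)[::2, ::2]
cr = (rgb[..., 0] - y)[::2, ::2]

print(rgb.size, "samples ->", y.size + cb.size + cr.size, "samples before any entropy coding")
```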
Lossy compression formats suffer from generation loss : repeatedly compressing and decompressing the file will cause it to progressively lose quality. This is in contrast with lossless data compression , where data will not be lost via the use of such a procedure. Information-theoretical foundations for lossy data compression are provided by rate-distortion theory . Much like the use of probability in optimal coding theory , rate-distortion theory heavily draws on Bayesian estimation and decision theory in order to model perceptual distortion and even aesthetic judgment.
There are two basic lossy compression schemes: lossy transform codecs, in which samples of picture or sound are transformed into a new basis space and quantized, and lossy predictive codecs, in which previously decoded (and/or subsequent) data are used to predict the current sound sample or image frame, with the prediction error then quantized and coded.
In some systems the two techniques are combined, with transform codecs being used to compress the error signals generated by the predictive stage.
The advantage of lossy methods over lossless methods is that in some cases a lossy method can produce a much smaller compressed file than any lossless method, while still meeting the requirements of the application. Lossy methods are most often used for compressing sound, images or videos. This is because these types of data are intended for human interpretation where the mind can easily "fill in the blanks" or see past very minor errors or inconsistencies – ideally lossy compression is transparent (imperceptible), which can be verified via an ABX test . Data files using lossy compression are smaller in size and thus cost less to store and to transmit over the Internet, a crucial consideration for streaming video services such as Netflix and streaming audio services such as Spotify .
When a user acquires a lossily compressed file (for example, to reduce download time), the retrieved file can be quite different from the original at the bit level while being indistinguishable to the human ear or eye for most practical purposes. Many compression methods focus on the idiosyncrasies of human physiology , taking into account, for instance, that the human eye can see only certain wavelengths of light. The psychoacoustic model describes how sound can be highly compressed without degrading perceived quality. Flaws caused by lossy compression that are noticeable to the human eye or ear are known as compression artifacts .
The compression ratio (that is, the size of the compressed file compared to that of the uncompressed file) of lossy video codecs is nearly always far superior to that of the audio and still-image equivalents.
An important caveat about lossy compression (formally transcoding) is that editing lossily compressed files causes digital generation loss from the re-encoding. This can be avoided by only producing lossy files from (lossless) originals and only editing (copies of) original files, such as images in raw image format instead of JPEG . If data which has been compressed lossily is decoded and compressed losslessly, the size of the result can be comparable with the size of the data before lossy compression, but the data already lost cannot be recovered. When deciding to use lossy conversion without keeping the original, format conversion may be needed in the future to achieve compatibility with software or devices ( format shifting ), or to avoid paying patent royalties for decoding or distribution of compressed files.
By modifying the compressed data directly without decoding and re-encoding, some editing of lossily compressed files without degradation of quality is possible. Editing which reduces the file size as if it had been compressed to a greater degree, but without more loss than this, is sometimes also possible.
The primary programs for lossless editing of JPEGs are jpegtran , and the derived exiftran (which also preserves Exif information), and Jpegcrop (which provides a Windows interface).
These allow the image to be cropped , rotated, flipped , and flopped , or even converted to grayscale (by dropping the chrominance channel). While unwanted information is destroyed, the quality of the remaining portion is unchanged.
Some other transforms are possible to some extent, such as joining images with the same encoding (composing side by side, as on a grid) or pasting images such as logos onto existing images (both via Jpegjoin ), or scaling. [ 6 ]
Some changes can be made to the compression without re-encoding:
The freeware Windows-only IrfanView has some lossless JPEG operations in its JPG_TRANSFORM plugin .
Metadata, such as ID3 tags , Vorbis comments , or Exif information, can usually be modified or removed without modifying the underlying data.
One may wish to downsample or otherwise decrease the resolution of the represented source signal and the quantity of data used for its compressed representation without re-encoding, as in bitrate peeling , but this functionality is not supported in all designs, as not all codecs encode data in a form that allows less important detail to simply be dropped. Some well-known designs that have this capability include JPEG 2000 for still images and H.264/MPEG-4 AVC based Scalable Video Coding for video. Such schemes have also been standardized for older designs, such as JPEG images with progressive encoding, and MPEG-2 and MPEG-4 Part 2 video, although those prior schemes had limited success in terms of adoption into real-world common usage. When this capability is absent, which is often the case in practice, producing a representation with lower resolution or lower fidelity than a given one requires starting with the original source signal and encoding it, or starting with a compressed representation and then decompressing and re-encoding it ( transcoding ), though the latter tends to cause digital generation loss .
Another approach is to encode the original signal at several different bitrates, and then either choose which to use (as when streaming over the internet – as in RealNetworks ' " SureStream " – or offering varying downloads, as at Apple's iTunes Store ), or broadcast several, where the best that is successfully received is used, as in various implementations of hierarchical modulation . Similar techniques are used in mipmaps , pyramid representations , and more sophisticated scale space methods. Some audio formats feature a combination of a lossy format and a lossless correction which, when combined, reproduces the original signal; the correction can be stripped, leaving a smaller, lossily compressed, file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack , OptimFROG DualStream , and DTS-HD Master Audio in lossless (XLL) mode.
Researchers have performed lossy compression on text by either using a thesaurus to substitute short words for long ones, or generative text techniques, [ 14 ] although these sometimes fall into the related category of lossy data conversion .
A general kind of lossy compression is to lower the resolution of an image, as in image scaling , particularly decimation . One may also remove less important ("lower information") parts of an image, such as by seam carving . Many media transforms, such as Gaussian blur , are, like lossy compression, irreversible: the original signal cannot be reconstructed from the transformed signal. However, in general these will have the same size as the original, and are not a form of compression. Lowering resolution has practical uses, as the NASA New Horizons craft transmitted thumbnails of its encounter with Pluto-Charon before it sent the higher resolution images. Another solution for slow connections is the use of image interlacing , which progressively defines the image. Thus a partial transmission is enough to preview the final image in a lower-resolution version, without having to create both a scaled-down version and a full version. [ citation needed ]
Generally, lossy data conversion refers to the conversion of data from one storage format to another in a way that does not allow the exact recovery of the original data. In particular, it can refer to lossy type conversion , where some values in the original type cannot be represented in the target type, [ 1 ] or to lossy file conversion , where the target format does not support all the features of the original format.
Such conversions are typically used between incompatible software or as the export target of edition tools. Most of the time, the document saved in the lossy format will look identical, but the conversion can also cause some loss in fidelity or functionality.
There are three basic types of lossy data conversion:
Graphic data (images) is often converted from one data storage format to another. Such conversions are usually described separately as either lossy data compression or lossless data compression . | https://en.wikipedia.org/wiki/Lossy_data_conversion |
Lost-wax casting – also called investment casting , precision casting , or cire perdue ( French: [siʁ pɛʁdy] ; borrowed from French ) [ 1 ] – is the process by which a duplicate sculpture (often a metal , such as silver , gold , brass , or bronze ) is cast from an original sculpture. Intricate works can be achieved by this method.
The oldest known examples of this technique are approximately 6,500 years old (4550–4450 BC) and attributed to gold artefacts found at Bulgaria's Varna Necropolis . [ 2 ] A copper amulet from Mehrgarh , Indus Valley civilization , in present-day Pakistan, is dated to circa 4,000 BC. [ 3 ] Cast copper objects, found in the Nahal Mishmar hoard in southern Israel , which belong to the Chalcolithic period (4500–3500 BC), are estimated, from carbon-14 dating , to date to circa 3500 BC. [ 4 ] [ 5 ] Other examples from somewhat later periods are from Mesopotamia in the third millennium BC. [ 6 ] Lost-wax casting was widespread in Europe until the 18th century, when a piece-moulding process came to predominate.
The steps used in casting small bronze sculptures are fairly standardized, though the process today varies from foundry to foundry (in modern industrial use, the process is called investment casting). Variations of the process include: "lost mould ", which recognizes that materials other than wax can be used (such as tallow , resin , tar , and textile ); [ 7 ] and "waste wax process" (or "waste mould casting"), because the mould is destroyed to remove the cast item. [ 8 ] [ 9 ]
Casts can be made of the wax model itself, the direct method, or of a wax copy of a model that need not be of wax, the indirect method. These are the steps for the indirect process (the direct method starts at step 7):
Prior to silica-based casting moulds, these moulds were made of a variety of other fire-proof materials, the most common being plaster based, with added grout, and clay based. Prior to rubber moulds gelatine was used.
The methods used for small parts and jewellery vary somewhat from those used for sculpture. A wax model is obtained either from injection into a rubber mould or by being custom-made by carving. The wax or waxes are sprued and fused onto a rubber base, called a "sprue base". Then a metal flask, which resembles a short length of steel pipe that ranges roughly from 3.5 to 15 centimeters tall and wide, is put over the sprue base and the waxes. Most sprue bases have a circular rim which grips the standard-sized flask, holding it in place. Investment (refractory plaster) is mixed and poured into the flask, filling it. It hardens, then is burned out as outlined above. Casting is usually done straight from the kiln either by centrifugal casting or vacuum casting .
The lost-wax process can be used with any material that can burn , melt , or evaporate to leave a mould cavity. Some automobile manufacturers use a lost-foam technique to make engine blocks . The model is made of polystyrene foam, which is placed into a casting flask , consisting of a cope and drag , which is then filled with casting sand . The foam supports the sand, allowing shapes that would be impossible if the process had to rely on the sand alone. The metal is poured in, vaporizing the foam with its heat.
In dentistry, gold crowns, inlays and onlays are made by the lost-wax technique. Application of the lost-wax technique to the fabrication of cast inlays was first reported by Taggart. A typical gold alloy is about 60% gold and 28% silver with copper and other metals making up the rest. Careful attention to tooth preparation, impression taking and laboratory technique is required to make this type of restoration a success. Dental laboratories make other items this way as well.
In this process, the wax and the textile are both replaced by the metal during the casting process, whereby the fabric reinforcement allows for a thinner model, and thus reduces the amount of metal expended in the mould. [ 10 ] Evidence of this process is seen by the textile relief on the reverse side of objects and is sometimes referred to as "lost-wax, lost textile". This textile relief is visible on gold ornaments from burial mounds in southern Siberia of the ancient horse riding tribes , such as the distinctive group of openwork gold plaques housed in the Hermitage Museum , Saint Petersburg . [ 10 ] The technique may have its origins in the Far East , as indicated by the few Han examples, and the bronze buckle and gold plaques found at the cemetery at Xigou. [ 11 ] Such a technique may also have been used to manufacture some Viking Age oval brooches , indicated by numerous examples with fabric imprints such as those of Castletown (Scotland) . [ 12 ]
The lost-wax casting process may also be used in the production of cast glass sculptures. The original sculpture is made from wax. The sculpture is then covered with mold material (e.g., plaster), except for the bottom of the mold which must remain open. When the mold has hardened, the encased sculpture is removed by applying heat to the bottom of the mold. This melts out the wax (the wax is 'lost') and destroys the original sculpture. The mold is then placed in a kiln upside down with a funnel-like cup on top that holds small chunks of glass. When the kiln is brought up to temperature (1450-1530 degrees Fahrenheit), the glass chunks melt and flow down into the mold. Annealing time is usually 3–5 days, and total kiln time is 5 or more days. After the mold is removed from the kiln, the mold material is removed to reveal the sculpture inside.
Cast gold knucklebones, beads, and bracelets, found in graves at Bulgaria's Varna Necropolis , have been dated to approximately 6500 years BP . They are believed to be both some of the oldest known manufactured golden objects, and the oldest objects known to have been made using lost wax casting. [ 2 ]
Some of the oldest known examples of the lost-wax technique are the objects discovered in the Nahal Mishmar hoard in southern Land of Israel , and which belong to the Chalcolithic period (4500–3500 BC). Conservative Carbon-14 estimates date the items to around 3700 BC, making them more than 5700 years old. [ 4 ] [ 5 ]
In Mesopotamia , from c. 3500 –2750 BC, the lost-wax technique was used for small-scale, and then later large-scale copper and bronze statues. [ 4 ] One of the earliest surviving lost-wax castings is a small lion pendant from Uruk IV . Sumerian metalworkers were practicing lost-wax casting from approximately c. 3500 –3200 BC. [ 13 ] Much later examples from northeastern Mesopotamia / Anatolia include the Great Tumulus at Gordion (late 8th century BC), as well as other types of Urartian cauldron attachments. [ 14 ]
The oldest known example of applying the lost-wax technique to copper casting comes from a 6,000-year-old ( c. 4000 BC ) copper, wheel-shaped amulet found at Mehrgarh , Pakistan. [ 3 ]
Metal casting, by the Indus Valley civilization , produced some of the earliest known examples of lost-wax casting applied to the casting of copper alloys, a bronze figurine, found at Mohenjo-daro , and named the " dancing girl ", is dated to 2300-1750 BCE . [ 15 ] [ 16 ] Other examples include the buffalo, bull and dog found at Mohenjodaro and Harappa , [ 7 ] [ 16 ] [ 17 ] two copper figures found at the Harappan site Lothal in the district of Ahmedabad of Gujarat, [ 15 ] and likely a covered cart with wheels missing and a complete cart with a driver found at Chanhudaro . [ 7 ] [ 17 ]
During the post-Harappan period, hoards of copper and bronze implements made by the lost-wax process are known from Tamil Nadu , Uttar Pradesh , Bihar , Madhya Pradesh , Odisha , Andhra Pradesh and West Bengal . [ 15 ] Gold and copper ornaments, apparently Hellenistic in style, made by cire perdue were found at the ruins at Sirkap . One example of this Indo-Greek art dates to the 1st century BCE , the juvenile figure of Harpocrates excavated at Taxila . [ 15 ] Bronze icons were produced during the 3rd and 4th centuries, such as the Buddha image at Amaravati , and the images of Rama and Kartikeya in the Guntur district of Andhra Pradesh. [ 15 ] A further two bronze images of Parsvanatha and a small hollow-cast bull came from Sahribahlol, Gandhara , and a standing Tirthankara ( 2nd~3rd century CE ) from Chausa in Bihar should be mentioned here as well. [ 15 ] Other notable bronze figures and images have been found in Rupar , Mathura (in Uttar Pradesh) and Brahmapura , Maharashtra . [ 15 ]
Gupta and post-Gupta period bronze figures have been recovered from the following sites: Saranath , Mirpur-Khas (in Pakistan ), Sirpur (District of Raipur), Balaighat (near Mahasthan , now in Bangladesh ), Akota (near Vadodara , Gujarat), Vasantagadh, Chhatarhi , Barmer and Chambi (in Rajasthan ). [ 15 ] The bronze casting technique and making of bronze images of traditional icons reached a high stage of development in South India during the medieval period. Although bronze images were modelled and cast during the Pallava Period in the eighth and ninth centuries, some of the most beautiful and exquisite statues were produced during the Chola Period in Tamil Nadu from the tenth to the twelfth century. The technique and art of fashioning bronze images is still skillfully practised in South India, particularly in Kumbakonam. The distinguished patron during the tenth century was the widowed Chola queen, Sembiyan Maha Devi. Chola bronzes are the most sought-after collectors’ items by art lovers all over the world . The technique was used throughout India, as well as in the neighbouring countries Nepal , Tibet , Ceylon , Burma and Siam . [ 16 ]
The inhabitants of Ban Na Di were casting bronze from c. 1200 BC to 200 AD, using the lost-wax technique to manufacture bangles . [ 18 ] Bangles made by the lost-wax process are characteristic of northeast Thailand . [ 19 ] Some of the bangles from Ban Na Di revealed a dark grey substance between the central clay core and the metal, which on analysis was identified as an unrefined form of insect wax. [ 19 ] [ 18 ] It is likely that decorative items, like bracelets and rings , were made by cire perdue at Non Nok Tha and Ban Chiang . [ 7 ] There are technological and material parallels between northeast Thailand and Vietnam concerning the lost-wax technique. [ 7 ] The sites exhibiting artifacts made by the lost-mould process in Vietnam, such as the Dong Son drums , come from the Dong Son , and Phung Nguyen cultures, [ 7 ] such as one sickle and the figure of a seated individual from Go Mun (near Phung Nguyen, the Bac Bo Region), dating to the Go Mun phase (end of the General B period, up until the 7th century BC). [ 18 ]
Cast bronzes are known to have been produced in Africa by the 9th century AD in Igboland ( Igbo-Ukwu ) in Nigeria , the 12th century AD in Yorubaland ( Ife ) and the 15th century AD in the kingdom of Benin . Some portrait heads remain. [ 16 ] Benin mastered bronze during the 16th century, producing portraiture and reliefs in the metal using the lost-wax process. [ 20 ]
The Egyptians were practicing cire perdue from the mid 3rd millennium BC, shown by Early Dynastic bracelets and gold jewellery. [ 21 ] [ 22 ] Inserted spouts for ewers (copper water vessels) from the Fourth Dynasty (Old Kingdom) were made by the lost-wax method. [ 22 ] [ 23 ] Hollow castings, such as the Louvre statuette from the Fayum find, appeared during the Middle Kingdom , followed by solid cast statuettes (like the squatting, nursing mother , in Brooklyn ) of the Second Intermediate/Early New Kingdom . [ 23 ] The hollow casting of statues is represented in the New Kingdom by the kneeling statue of Tuthmosis IV ( British Museum , London ) and the head fragment of Ramesses V (Fitzwilliam Museum, Cambridge). [ 24 ] Hollow castings became more detailed and continued into the Eighteenth Dynasty , shown by the black bronze kneeling figure of Tutankhamun ( Museum of the University of Pennsylvania ). Cire perdue was used for mass production from the Late Period to Graeco - Roman times, when figures of deities were cast for personal devotion and votive temple offerings . [ 13 ] Nude female-shaped handles on bronze mirrors were cast by the lost-wax process. [ 13 ]
The lost-wax technique came to be known in the Mediterranean during the Bronze Age . [ 25 ] It was a major metalworking technique utilized in the ancient Mediterranean world, notably during the Classical period of Greece for large-scale bronze statuary [ 26 ] and in the Roman world .
Direct imitations and local derivations of Oriental , Syro - Palestinian and Cypriot figurines are found in Late Bronze Age Sardinia , with a local production of figurines from the 11th to 10th century BC. [ 25 ] The cremation graves (mainly 8th-7th centuries BC, but continuing until the beginning of the 4th century) from the necropolis of Paularo (Italian Oriental Alps) contained fibulae , pendants and other copper-based objects that were made by the lost-wax process. [ 27 ] Etruscan examples, such as the bronze anthropomorphic handle from the Bocchi collection (National Archaeological Museum of Adria ), dating back to the 6th to 5th centuries BC, were made by cire perdue . [ 28 ] Most of the handles in the Bocchi collection, as well as some bronze vessels found in Adria ( Rovigo , Italy ) were made using the lost-wax technique. [ 28 ] The better known lost-wax produced items from the classical world include the "Praying Boy" c. 300 BC (in the Berlin Museum ), the statue of Hera from Vulci (Etruria), which, like most statues, was cast in several parts which were then joined. [ 29 ] Geometric bronzes such as the four copper horses of San Marco (Venice, probably 2nd century) are other prime examples of statues cast in many parts.
Examples of works made using the lost-wax casting process in Ancient Greece largely are unavailable due to the common practice in later periods of melting down pieces to reuse their materials. [ 31 ] Much of the evidence for these products come from shipwrecks . [ 32 ] As underwater archaeology became feasible, artifacts lost to the sea became more accessible. [ 32 ] Statues like the Artemision Bronze Zeus or Poseidon (found near Cape Artemision ), as well as the Victorious Youth (found near Fano ), are two such examples of Greek lost-wax bronze statuary that were discovered underwater. [ 32 ] [ 33 ]
Some Late Bronze Age sites in Cyprus have produced cast bronze figures of humans and animals. One example is the male figure found at Enkomi . Three objects from Cyprus (held in the Metropolitan Museum of Art in New York ) were cast by the lost-wax technique from the 13th and 12th centuries BC, namely, the amphorae rim, the rod tripod , and the cast tripod. [ 34 ]
Other, earlier examples that show this assembly of lost-wax cast pieces include the bronze head of the Chatsworth Apollo and the bronze head of Aphrodite from Satala ( Turkey ) from the British Museum. [ 35 ]
There is great variability in the use of the lost-wax method in East Asia. The casting method used to make bronzes until the early phase of the Eastern Zhou (770-256 BCE ) was almost invariably the section-mould process. [ 36 ] Starting from around 600 BCE , there was an unmistakable rise of lost-wax casting in the central plains of China, first witnessed in the Chu cultural sphere. [ 37 ] Further investigations, however, have revealed this not to be the case, as it is clear that the piece-mould casting method was the principal technique used to manufacture bronze vessels in China . [ 38 ] The lost-wax technique did not appear in northern China until the 6th century BC. [ 19 ] Lost-wax casting is known as rōgata in Japanese , and dates back to the Yayoi period , c. 200 BC . [ 16 ] The most famous piece made by cire perdue is the bronze image of Buddha in the temple of the Todaiji monastery at Nara . [ 16 ] It was made in sections between 743 and 749, allegedly using seven tons of wax. [ 16 ]
The Dunaverney (1050–910 BC) and Little Thetford (1000–701 BC) flesh-hooks have been shown to be made using a lost-wax process. The Little Thetford flesh-hook, in particular, employed distinctly inventive construction methods. [ 39 ] [ 40 ] The intricate Gloucester Candlestick (1104–1113 AD) was made as a single-piece wax model, then given a complex system of gates and vents before being invested in a mould. [ 9 ]
The lost-wax casting tradition was developed by the peoples of Nicaragua , Costa Rica , Panama , Colombia , northwest Venezuela , Andean America, and the western portion of South America . [ 41 ] Lost-wax casting produced some of the region's typical gold wire and delicate wire ornament, such as fine ear ornaments. The process was employed in prehispanic times in Colombia's Muisca and Sinú cultural areas. [ 42 ] Two lost-wax moulds, one complete and one partially broken, were found in a shaft and chamber tomb in the vereda of Pueblo Tapado in the municipio of Montenegro ( Department of Quindío ), dated roughly to the pre-Columbian period. [ 43 ] The lost-wax method did not appear in Mexico until the 10th century, [ 44 ] and was thereafter used in western Mexico to make a wide range of bell forms. [ 45 ]
Some early literary works allude to lost-wax casting. Columella , a Roman writer of the 1st century AD, mentions the processing of wax from beehives in De Re Rustica , perhaps for casting, as does Pliny the Elder , [ 46 ] who details a sophisticated procedure for making Punic wax. [ 47 ] One Greek inscription refers to the payment of craftsmen for their work on the Erechtheum in Athens (408/7–407/6 BC). Clay-modellers may use clay moulds to make terracotta negatives for casting or to produce wax positives. [ 47 ] Pliny portrays [ 46 ] Zenodorus [ fr ] as a well-reputed ancient artist producing bronze statues, [ 48 ] and describes [ 46 ] Lysistratos of Sikyon , who takes plaster casts from living faces to create wax casts using the indirect process. [ 48 ]
Many bronze statues or parts of statues in antiquity were cast using the lost wax process. Theodorus of Samos is commonly associated with bronze casting. [ 46 ] [ 49 ] Pliny also mentions the use of lead , which is known to help molten bronze flow into all areas and parts of complex moulds. [ 50 ] Quintilian documents the casting of statues in parts, whose moulds may have been produced by the lost wax process. Scenes on the early-5th century BC Berlin Foundry Cup depict the creation of bronze statuary working, probably by the indirect method of lost-wax casting. [ 51 ]
The lost-wax method is well documented in ancient Indian literary sources. The Shilpa Shastras , a text from the Gupta Period ( c. 320 –550 AD), contains detailed information about casting images in metal. The 5th-century AD Vishnusamhita , an appendix to the Vishnu Purana , refers directly to the modeling of wax for making metal objects in chapter XIV: "if an image is to be made of metal, it must first be made of wax." [ 15 ] Chapter 68 of the ancient Sanskrit text Mānasāra Silpa details casting idols in wax and is entitled Maduchchhista Vidhānam , or the "lost wax method". [ 15 ] [ 16 ] The 12th century text Mānasollāsa , allegedly written by King Someshvara III of the Western Chalukya Empire , also provides detail about lost-wax and other casting processes. [ 15 ] [ 16 ]
In a 16th-century treatise, the Uttarabhaga of the Śilparatna written by Srïkumāra , verses 32 to 52 of Chapter 2 (" Linga Lakshanam "), give detailed instructions on making a hollow casting. [ 15 ] [ 16 ]
The early medieval writer Theophilus Presbyter , believed to be the Benedictine monk and metalworker Roger of Helmarshausen , wrote a treatise in the early-to-mid-12th century [ 52 ] that includes original work and copied information from other sources, such as the Mappae clavicula and Eraclius' De coloribus et artibus Romanorum . [ 52 ] It provides step-by-step procedures for making various articles, some by lost-wax casting: "The Copper Wind Chest and Its Conductor" (Chapter 84); "Tin Cruets" (Chapter 88), and "Casting Bells" (Chapter 85), which call for using "tallow" instead of wax; and "The Cast Censer". In Chapters 86 and 87 Theophilus details how to divide the wax into differing ratios before moulding and casting to achieve accurately tuned small musical bells . The 16th-century Florentine sculptor Benvenuto Cellini may have used Theophilus' writings when he cast his bronze Perseus with the Head of Medusa . [ 16 ] [ 53 ]
The Spanish writer Releigh (1596), in a brief account, refers to Aztec casting. [ 16 ] | https://en.wikipedia.org/wiki/Lost-wax_casting
A Lot (formerly Loth ) was an old unit of measurement for relative fineness [ 1 ] (the proportion of precious metal to gross weight) in metallurgy and especially in coinage until the 19th century. A Lot thus expressed the proportion of precious metal in a piece of metal. [ 2 ] It was used in the four main monetary systems of Germany: Austrian, South German, North German and Hamburg. [ 1 ]
The lot was defined as the sixteenth part of a Mark . [ 3 ] [ 4 ] For example, in silver , the total weight was divided into 16 (proportional) Lots until about 1857, according to which a " 12-Lot " silver alloy (750 silver) contained 12/16 = 3 ⁄ 4 or 75% by weight of silver and 25% of another metal (usually copper ). A 14-Lot silver alloy ( 14 ⁄ 16 ), on the other hand, corresponded to 875 silver. For finer gradation, a Lot was further divided into 18 grains. [ 4 ] Thus 14 Lots , 4 grains fine corresponds to a fineness of 888.89 ‰ = (14 + 4/18) / 16 = (252 + 4)/288, i.e. 256 of 288 grains.
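As a quick check of the arithmetic above, the conversion from Lots and grains to per-mille fineness can be scripted in a few lines (a minimal sketch; the function name and example values are ours):

```python
from fractions import Fraction

def lot_fineness_permille(lots, grains=0):
    """Convert a fineness given in Lots (16 to the Mark) and grains (18 to the Lot)
    into parts per thousand of precious metal."""
    fine_grains = 18 * lots + grains                 # grains of precious metal
    return float(Fraction(fine_grains, 288) * 1000)  # 16 Lots x 18 grains = 288 grains per Mark

print(lot_fineness_permille(12))     # 750.0    -> "12-Lot" silver
print(lot_fineness_permille(14))     # 875.0    -> "14-Lot" silver
print(lot_fineness_permille(14, 4))  # 888.88...-> 14 Lots, 4 grains fine
```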
The German proportional measure, the Lot , was finally replaced on 1 January 1888 in the German Empire by the proportional measure, permille (thousandths). [ 2 ] | https://en.wikipedia.org/wiki/Lot_(fineness) |
Lotte Bjerre Knudsen (born 10 March 1964) is a Danish scientist and university professor . She led the development of liraglutide and oversaw the development of semaglutide , [ 1 ] [ 2 ] two notable drugs approved for indications in the treatment of diabetes and obesity . [ 2 ] [ 3 ]
Knudsen originally studied chemical engineering at the Technical University of Denmark , [ citation needed ] and obtained a doctorate in scientific medicine (DMSc) from the University of Copenhagen in 2014. [ 2 ]
Knudsen began work as a scientist at the pharmaceutical company Novo Nordisk in Denmark in 1989. [ 2 ] As of December 2015, she was being referred to as Scientific Vice President for Global Research at Novo Nordisk. [ 4 ] She served as an adjunct faculty member at Aarhus University from 2015 to 2020, as a professor in translational medicine . [ 2 ]
Knudsen has been employed as a Chief Scientific Advisor in Research and Early Development at Novo Nordisk . [ 5 ] [ 6 ]
While still a student, Knudsen worked at Novo Nordisk, initially working on laundry detergent enzymes. Alongside fellow student Shamkant Patkar, she discovered an enzyme capable of removing microscopic strands of cotton that pill up on clothing from repeated wear. [ 7 ]
After this project, Knudsen joined full-time as part of a research group at Novo Nordisk that aimed to identify new treatments for diabetes , by developing small molecule drugs targeting specific metabolic pathways. [ 7 ] One project revolved around glucagon-like peptide-1 (GLP-1), [ 7 ] a hormone that stimulates the production of insulin but has a short half-life of minutes in the body. [ 3 ] [ independent source needed ]
GLP-1 had been previously identified by researchers such as Jens Juul Holst in Denmark, who joined Novo Nordisk as a consultant, [ 7 ] [ full citation needed ] and Joel Habener , Daniel J. Drucker , and Svetlana Mojsov at Massachusetts General Hospital . [ 8 ] [ 9 ] [ verification needed ] Knudsen's team screened numerous chemical compounds to identify whether they could bind to the GLP-1 receptor sufficiently to stimulate insulin secretion. [ 10 ]
Eventually, they developed a new compound called liraglutide , which is an agonist for the GLP-1 receptor . [ 11 ] It is a chemical analogue of GLP-1 , with a fatty acid and spacer attached. These modifications increased its ability to dissolve in water and bind to albumin , which increases its bioavailability (its persistence in the bloodstream) and so the duration of its action in the body. [ 3 ] [ 6 ]
Knudsen’s team, specifically Jesper Lau and Thomas Kruse, then worked on what became semaglutide , which had greater stability and affinity to albumin, lengthening its duration of action further to a once-weekly drug. [ 6 ] [ 13 ]
Semaglutide was approved in the United States under the brand name Ozempic as a treatment for type 2 diabetes in 2017, [ 14 ] [ 15 ] and under the brand name Wegovy, as a first injectable (at 2.4 mg once weekly), for chronic weight management in June 2021. [ 16 ] [ 17 ] [ needs update ]
Martin Müller and Alexander Preker, writing for Der Spiegel in January 2024, have referred to Knudsen's role in inventing the semaglutide weight-loss injections as "revolutionary", with the "drug Wegovy... [having] changed the world," and having made Novo Nordisk "Europe's most valuable company, [more valuable] than Daimler, Bayer, Lufthansa and BMW combined". [ 2 ]
Knudsen received the 2023 Paul Langerhans Medal by the German Diabetes Society for her work developing liraglutide . [ 18 ] [ 19 ] In October 2023, she received the STAT Biomedical Innovation award, [ 20 ] and in 2024, she received the Mani L. Bhaumik Breakthrough of the Year Award. [ 6 ] In 2024 she received the Lasker Award in clinical research. [ 21 ] [ 22 ] In 2024, Knudsen received the Golden Plate Award of the American Academy of Achievement , presented by Awards Council member Robert S. Langer . [ 23 ] In 2025, Knudsen received the 2025 Breakthrough Prize in Life Sciences. [ 24 ] | https://en.wikipedia.org/wiki/Lotte_Bjerre_Knudsen |
In expected utility theory , a lottery is a discrete distribution of probability on a set of states of nature . The elements of a lottery correspond to the probabilities that each of the states of nature will occur (e.g. Rain: 0.70, No Rain: 0.30). [ 1 ] Much of the theoretical analysis of choice under uncertainty involves characterizing the available choices in terms of lotteries.
In economics , individuals are assumed to rank lotteries according to a rational system of preferences , although it is now accepted that people make irrational choices systematically. Behavioral economics studies what happens in markets in which some of the agents display human complications and limitations. [ 2 ]
According to expected utility theory, someone chooses among lotteries by multiplying his subjective estimate of the probabilities of the possible outcomes by a utility attached to each outcome by his personal utility function . Thus, each lottery has an expected utility, a linear combination of the utilities of the outcomes in which the weights are the subjective probabilities. [ 3 ] The idea is also rooted in the famous St. Petersburg paradox : as Daniel Bernoulli observed, the utility a person assigns to a lottery can depend on the amount of money they hold before the lottery. [ 4 ]
For example, let there be three outcomes that might result from a sick person taking either novel drug A or B for his condition: "Cured", "Uncured", and "Dead". Each drug is a lottery. Suppose the probabilities for lottery A are (Cured: .90, Uncured: .00, Dead: .10), and for lottery B are (Cured: .50, Uncured: .50, Dead: .00).
If the person had to choose between lotteries A and B, how would they do it? A theory of choice under risk starts by letting people have preferences on the set of lotteries over the three states of nature—not just A and B, but all other possible lotteries. If preferences over lotteries are complete and transitive, they are called rational . If people follow the axioms of expected utility theory, their preferences over lotteries will follow each lottery's ranking in terms of expected utility. Let the utility values for the sick person be: Cured = 16, Uncured = 12, and Dead = 0.
In this case, the expected utility of Lottery A is 14.4 (= .90(16) + .10(0)) and the expected utility of Lottery B is 14 (= .50(16) + .50(12)) [ clarification needed ] , so the person would prefer Lottery A. Expected utility theory implies that the same utilities could be used to predict the person's behavior in all possible lotteries. If, for example, he had a choice between lottery A and a new lottery C consisting of (Cured: .80, Uncured: .15 Dead: .05), expected utility theory says he would choose C, because its expected utility is 14.6 (= .80(16) + .15(12) + .05(0)).
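These expected-utility comparisons are easy to reproduce; the following is a minimal sketch in Python (the utility values and lottery probabilities are those of the example above, while variable names are illustrative):

```python
# Utility values from the example: Cured = 16, Uncured = 12, Dead = 0
utility = {"Cured": 16, "Uncured": 12, "Dead": 0}

lotteries = {
    "A": {"Cured": 0.90, "Uncured": 0.00, "Dead": 0.10},
    "B": {"Cured": 0.50, "Uncured": 0.50, "Dead": 0.00},
    "C": {"Cured": 0.80, "Uncured": 0.15, "Dead": 0.05},
}

def expected_utility(lottery):
    """Expected utility = sum over outcomes of probability * utility."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

for name, lottery in lotteries.items():
    print(name, expected_utility(lottery))
# A 14.4, B 14.0, C 14.6 -> the person prefers C to A, and A to B
```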
The paradox argued by Maurice Allais complicates expected utility in the lottery. [ 5 ]
Many people tend to make different decisions in the two situations. [ 5 ] People prefer option 1a to 1b in situation 1, and 2b to 2a in situation 2. However, the two situations have the same structure, which causes a paradox:
A possible explanation for the above is a ‘certainty effect’: outcomes that are certain (determined in advance, without probabilities) have a larger effect on the utility functions and final decisions. [ 5 ] In many cases, this focus on certainty may cause inconsistent decisions and preferences. In addition, people tend to take cues from the format or context in which the lotteries are presented. [ 6 ]
It has additionally been argued that the amount of statistical training people have received can affect their decision-making in lotteries. [ 7 ] Through a series of experiments, the author concluded that a statistically trained person is more likely to make consistent and confident choices, a result that may generalize.
The assumption that individual utilities are combined linearly, with the resulting number serving as the criterion to be maximized, can be justified on the grounds of the independence axiom . Therefore, the validity of expected utility theory depends on the validity of the independence axiom. The preference relation ≿ {\displaystyle \succsim \!} satisfies independence if for any three simple lotteries p {\displaystyle p} , q {\displaystyle q} , r {\displaystyle r} , and any number α ∈ ( 0 , 1 ) {\displaystyle \alpha \in (0,1)} it holds that
p ≿ q {\displaystyle p\succsim \!q} if and only if α p + ( 1 − α ) r ≿ α q + ( 1 − α ) r . {\displaystyle \alpha p+(1-\alpha )r\succsim \!\alpha q+(1-\alpha )r.}
Indifference maps can be represented in the simplex .
| https://en.wikipedia.org/wiki/Lottery_(decision_theory)
Lottery competition in ecology is a model for how organisms compete . It was first used to describe competition in coral reef fish. [ 1 ] Under lottery competition, many offspring compete for a small number of sites (e.g., many fry competing for a few territories, or many seedlings competing for a few treefall gaps ). Under lottery competition, one individual is chosen randomly to "win" that site (typically becoming an adult soon after), and the "losers" typically die off. Thus, in an analogy to a lottery or raffle , every individual has an equal chance of winning (like every ticket has an equal chance of being chosen), and therefore more abundant species are proportionately more likely to win (just as an individual who buys more tickets is more likely to win).
Some models [ 1 ] [ 2 ] generalize this idea by weighting some individuals who are more likely to be chosen (by analogy, this would be like some tickets counting as two tickets instead of one). When a population is below carrying capacity , e.g. due to ecological disturbance , then producing twice as many individuals is not identical to producing individuals twice as likely to win; the two specialized groups can coexist in a competition-colonization trade-off . [ 3 ]
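The basic (unweighted) model lends itself to a very short simulation; the sketch below is illustrative only, with parameter values chosen arbitrarily rather than taken from any study:

```python
import random

def lottery_recruitment(offspring_counts, n_sites, rng=random):
    """Fill n_sites vacant sites by lottery: each offspring is one 'ticket', so a
    species' chance of winning any given site is proportional to its offspring count."""
    tickets = [species for species, n in offspring_counts.items() for _ in range(n)]
    winners = {species: 0 for species in offspring_counts}
    for _ in range(n_sites):
        winners[rng.choice(tickets)] += 1
    return winners

# Species A produces twice as many offspring as species B,
# so on average it wins about two thirds of the vacant sites.
print(lottery_recruitment({"A": 200, "B": 100}, n_sites=30))
```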
Lottery competition has been used in understanding many key ideas in ecology, including the storage effect (species coexist because they are affected differently by environmental variation) [ 1 ] [ 2 ] and neutral theory (species diversity is maintained because species are competitively equivalent, and extinction rates are slow enough to be offset by speciation and dispersal events). [ 4 ] | https://en.wikipedia.org/wiki/Lottery_competition
Lottery mathematics is used to calculate probabilities of winning or losing a lottery game. It is based primarily on combinatorics , particularly the twelvefold way and combinations without replacement . It can also be used to analyze coincidences that happen in lottery drawings, such as repeated numbers appearing across different draws. [ 1 ]
In a typical 6/49 game, each player chooses six distinct numbers from a range of 1–49. If the six numbers on a ticket match the numbers drawn by the lottery, the ticket holder is a jackpot winner— regardless of the order of the numbers. The probability of this happening is 1 in 13,983,816.
The chance of winning can be demonstrated as follows: The first number drawn has a 1 in 49 chance of matching. When the draw comes to the second number, there are now only 48 balls left in the bag, because the balls are drawn without replacement . So there is now a 1 in 48 chance of predicting this number.
Thus for each of the 49 ways of choosing the first number there are 48 different ways of choosing the second. This means that the probability of correctly predicting 2 numbers drawn from 49 in the correct order is calculated as 1 in 49 × 48. On drawing the third number there are only 47 ways of choosing the number; but we could have arrived at this point in any of 49 × 48 ways, so the chances of correctly predicting 3 numbers drawn from 49, again in the correct order, is 1 in 49 × 48 × 47. This continues until the sixth number has been drawn, giving the final calculation, 49 × 48 × 47 × 46 × 45 × 44, which can also be written as 49 ! ( 49 − 6 ) ! {\displaystyle {49! \over (49-6)!}} or 49 factorial divided by 43 factorial or FACT(49)/FACT(43) or simply PERM(49,6) .
608281864034267560872252163321295376887552831379210240000000000 / 60415263063373835637355132068513997507264512000000000 = 10068347520
This works out to 10,068,347,520, which is much bigger than the ~14 million stated above.
Perm(49,6)=10068347520 and 49 nPr 6 =10068347520.
However, the order of the 6 numbers is not significant for the payout. That is, if a ticket has the numbers 1, 2, 3, 4, 5, and 6, it wins as long as all the numbers 1 through 6 are drawn, no matter what order they come out in. Accordingly, given any combination of 6 numbers, there are 6 × 5 × 4 × 3 × 2 × 1 = 6 ! or 720 orders in which they can be drawn. Dividing 10,068,347,520 by 720 gives 13,983,816, also written as 49 ! 6 ! ∗ ( 49 − 6 ) ! {\displaystyle {49! \over 6!*(49-6)!}} , or COMBIN(49,6) or 49 nCr 6 or more generally as
This function is called the combination function, COMBIN(n,k) . For the rest of this article, we will use the notation ( n k ) {\displaystyle {n \choose k}} . "Combination" means the group of numbers selected, irrespective of the order in which they are drawn. A combination of numbers is usually presented in ascending order. An eventual 7th drawn number, the reserve or bonus, is presented at the end.
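These factorial expressions can be evaluated directly with Python's standard library as a quick sanity check (a minimal sketch; math.perm and math.comb require Python 3.8 or later):

```python
import math

# Ordered draws of 6 balls from 49: 49!/(49-6)! = 49 * 48 * 47 * 46 * 45 * 44
print(math.perm(49, 6))                       # 10068347520

# Unordered combinations: divide by the 6! = 720 possible orderings
print(math.perm(49, 6) // math.factorial(6))  # 13983816
print(math.comb(49, 6))                       # 13983816
```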
An alternative method of calculating the odds is to note that the probability of the first ball corresponding to one of the six chosen is 6/49; the probability of the second ball corresponding to one of the remaining five chosen is 5/48; and so on. This yields a final formula of 6/49 × 5/48 × 4/47 × 3/46 × 2/45 × 1/44 = 1/13,983,816.
A 7th ball is often drawn as a reserve (bonus) ball; in the past it served only as a second chance to get 5 + 1 numbers correct with 6 numbers played.
One must divide the number of combinations producing the given result by the total number of possible combinations (for example, ( 49 6 ) = 13 , 983 , 816 {\displaystyle {49 \choose 6}=13,983,816} ). The numerator equates to the number of ways to select the winning numbers multiplied by the number of ways to select the losing numbers.
For a score of n (for example, if 3 choices match three of the 6 balls drawn, then n = 3), ( 6 n ) {\displaystyle {6 \choose n}} is the number of ways of selecting n winning numbers from the 6 winning numbers. This means that there are 6 - n losing numbers, which are chosen from the 43 losing numbers in ( 43 6 − n ) {\displaystyle {43 \choose 6-n}} ways. The total number of combinations giving that result is, as stated above, the first number multiplied by the second. The expression is therefore ( 6 n ) ( 43 6 − n ) ( 49 6 ) {\displaystyle {6 \choose n}{43 \choose 6-n} \over {49 \choose 6}} .
This can be written in a general form for all lotteries as:
( K B ) ( N − K K − B ) ( N K ) {\displaystyle {K \choose B}{N-K \choose K-B} \over {N \choose K}}
where N {\displaystyle N} is the number of balls in lottery, K {\displaystyle K} is the number of balls in a single ticket, and B {\displaystyle B} is the number of matching balls for a winning ticket.
The generalisation of this formula is called the hypergeometric distribution .
This gives the following results:
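These score probabilities can be computed directly from the preceding formula; the following short sketch prints the number of combinations and the approximate odds for each score in a 6/49 game (the output format is ours):

```python
import math

N, K = 49, 6                 # balls in the lottery, numbers on a ticket
total = math.comb(N, K)      # 13,983,816 equally likely draws

for b in range(K + 1):       # b = number of matching balls
    ways = math.comb(K, b) * math.comb(N - K, K - b)
    print(f"match {b}: {ways:>9,} combinations, odds about 1 in {total / ways:,.2f}")
```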
When a 7th number is drawn as a bonus number, there are 49!/(6! × 1! × 42!) = COMBIN(49,6) × COMBIN(43,1) = 601,304,088 different possible drawing results.
You would expect to score 3 of 6 or better once in around 36.19 drawings. Note that it takes a "3 if 6" wheel of 163 combinations to be sure of at least one 3/6 score.
The odds 1/p change when several distinct combinations are played together; in that case the interest is usually in winning something, not just the jackpot.
There is only one known way to ensure winning the jackpot. That is to buy at least one lottery ticket for every possible number combination. For example, one has to buy 13,983,816 different tickets to ensure winning the jackpot in a 6/49 game.
Lottery organizations have laws, rules and safeguards in place to prevent gamblers from executing such an operation. Further, just winning the jackpot by buying every possible combination does not guarantee that one will break even or make a profit.
If p {\displaystyle p} is the probability to win; c t {\displaystyle c_{t}} the cost of a ticket; c l {\displaystyle c_{l}} the cost for obtaining a ticket (e.g. including the logistics); c f {\displaystyle c_{f}} one time costs for the operation (such as setting up and conducting the operation); then the jackpot m j {\displaystyle m_{j}} should contain at least
m j ≥ c f + c t + c l p {\displaystyle m_{j}\geq c_{f}+{\frac {c_{t}+c_{l}}{p}}}
to have a chance to at least break even.
The above theoretical "chance to break-even" point is slightly offset by the sum ∑ i m i {\displaystyle \sum _{i}{}m_{i}} of the minor wins also included in all the lottery tickets:
m j ≥ c f + c t + c l p − ∑ i m i {\displaystyle m_{j}\geq c_{f}+{\frac {c_{t}+c_{l}}{p}}-\sum _{i}{}m_{i}}
Still, even if the above relation is satisfied, it does not guarantee to break even. The payout depends on the number of winning tickets for all the prizes n x {\displaystyle n_{x}} , resulting in the relation
m j n j ≥ c f + c t + c l p − ∑ i m i n i {\displaystyle {\frac {m_{j}}{n_{j}}}\geq c_{f}+{\frac {c_{t}+c_{l}}{p}}-\sum _{i}{}{\frac {m_{i}}{n_{i}}}}
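As a rough illustration of the break-even relation above, the required jackpot can be computed for assumed costs; all figures below are placeholders rather than data from any real operation:

```python
def required_jackpot(p, ticket_cost, logistics_per_ticket, fixed_cost, minor_wins=0.0):
    """Minimum jackpot for a buy-every-combination operation to have a chance of
    breaking even, per m_j >= c_f + (c_t + c_l)/p - sum of minor wins."""
    return fixed_cost + (ticket_cost + logistics_per_ticket) / p - minor_wins

p = 1 / 13_983_816   # jackpot probability in a 6/49 game
threshold = required_jackpot(p, ticket_cost=1.00, logistics_per_ticket=0.10,
                             fixed_cost=50_000, minor_wins=2_000_000)
print(f"jackpot must exceed about {threshold:,.0f} (before sharing with other winners)")
```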
In probably the only known successful operations, [ 2 ] the threshold to execute an operation was, for unknown reasons, set at three times the cost of the tickets alone:
m j ≥ 3 × c t p {\displaystyle m_{j}\geq 3\times {\frac {c_{t}}{p}}}
I.e.
n j p c t ( c f + c t + c l p − ∑ i m i n i ) ≪ 3 {\displaystyle {\frac {n_{j}p}{c_{t}}}\left(c_{f}+{\frac {c_{t}+c_{l}}{p}}-\sum _{i}{}{\frac {m_{i}}{n_{i}}}\right)\ll 3}
This does not, however, eliminate the risk of making no profit. The success of the operations still depended on a bit of luck. In addition, in one operation the logistics failed and not all combinations could be obtained, adding the risk of not winning the jackpot at all.
Many lotteries have a Powerball (or "bonus ball"). If the powerball is drawn from a pool of numbers different from the main lottery, the odds are multiplied by the number of powerballs. For example, in the 6 from 49 lottery, given 10 powerball numbers, then the odds of getting a score of 3 and the powerball would be 1 in 56.66 × 10, or 566.6 (the probability would be divided by 10, to give an exact value of 8815 4994220 {\textstyle {\frac {8815}{4994220}}} ). Another example of such a game is Mega Millions , albeit with different jackpot odds.
Where more than 1 powerball is drawn from a separate pool of balls to the main lottery (for example, in the EuroMillions game), the odds of the different possible powerball matching scores are calculated using the method shown in the " other scores " section above (in other words, the powerballs are like a mini-lottery in their own right), and then multiplied by the odds of achieving the required main-lottery score.
If the powerball is drawn from the same pool of numbers as the main lottery, then, for a given target score, the number of winning combinations includes the powerball. For games based on the Canadian lottery (such as the lottery of the United Kingdom ), after the 6 main balls are drawn, an extra ball is drawn from the same pool of balls, and this becomes the powerball (or "bonus ball"). An extra prize is given for matching 5 balls and the bonus ball. As described in the " other scores " section above, the number of ways one can obtain a score of 5 from a single ticket is ( 6 5 ) ( 43 1 ) = 258 {\textstyle {6 \choose 5}{43 \choose 1}=258} . Since the number of remaining balls is 43, and the ticket has 1 unmatched number remaining, 1 / 43 of these 258 combinations will match the next ball drawn (the powerball), leaving 258/43 = 6 ways of achieving it. Therefore, the odds of getting a score of 5 and the powerball are 6 ( 49 6 ) = 1 2 , 330 , 636 {\textstyle {6 \over {49 \choose 6}}={1 \over 2,330,636}} .
Of the 258 combinations that match 5 of the main 6 balls, in 42/43 of them the remaining number will not match the powerball, giving odds of 258 ⋅ 42 43 ( 49 6 ) = 3 166 , 474 ≈ 1.802 × 10 − 5 {\textstyle {{258\cdot {\frac {42}{43}}} \over {49 \choose 6}}={\frac {3}{166,474}}\approx 1.802\times 10^{-5}} for obtaining a score of 5 without matching the powerball.
Using the same principle, the number of combinations giving a score of 2 is ( 6 2 ) ( 43 4 ) = 1 , 851 , 150 {\textstyle {6 \choose 2}{43 \choose 4}=1,\!851,\!150} , which is multiplied by the probability of one of the remaining four numbers matching the bonus ball, 4/43 . Since 1 , 851 , 150 ⋅ 4 43 = 172 , 200 {\textstyle 1,851,150\cdot {\frac {4}{43}}=172,\!200} , the probability of obtaining the score of 2 and the bonus ball is 172 , 200 ( 49 6 ) = 1025 83237 = 1.231 % {\textstyle {\frac {172,200}{49 \choose 6}}={\frac {1025}{83237}}=1.231\%} , approximate decimal odds of 1 in 81.2.
The general formula for B {\displaystyle B} matching balls in a N {\displaystyle N} choose K {\displaystyle K} lottery with one bonus ball from the N {\displaystyle N} pool of balls is:
K − B N − K ( K B ) ( N − K K − B ) ( N K ) {\displaystyle {\frac {{\frac {K-B}{N-K}}{K \choose B}{N-K \choose K-B}}{N \choose K}}}
The general formula for B {\displaystyle B} matching balls in a N {\displaystyle N} choose K {\displaystyle K} lottery with zero bonus ball from the N {\displaystyle N} pool of balls is:
N − K − K + B N − K ( K B ) ( N − K K − B ) ( N K ) {\displaystyle {N-K-K+B \over N-K}{K \choose B}{N-K \choose K-B} \over {N \choose K}}
The general formula for B {\displaystyle B} matching balls in a N {\displaystyle N} choose K {\displaystyle K} lottery with one bonus ball from a separate pool of P {\displaystyle P} balls is:
1 P ( K B ) ( N − K K − B ) ( N K ) {\displaystyle {1 \over P}{K \choose B}{N-K \choose K-B} \over {N \choose K}}
The general formula for B {\displaystyle B} matching balls in a N {\displaystyle N} choose K {\displaystyle K} lottery with no bonus ball from a separate pool of P {\displaystyle P} balls is:
P − 1 P ( K B ) ( N − K K − B ) ( N K ) {\displaystyle {P-1 \over P}{K \choose B}{N-K \choose K-B} \over {N \choose K}}
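The four formulas above can be wrapped in a single function; the example calls below reproduce the figures quoted earlier for the 6/49 game (1 in 2,330,636 for a score of 5 plus the bonus ball, and about 1.23% for a score of 2 plus the bonus ball). This is a sketch, with the function name and argument conventions chosen for illustration:

```python
import math

def p_match(N, K, B, bonus=None, P=None):
    """Probability of matching B of the K main balls in an N-choose-K lottery.
    bonus=True/False requires the bonus ball to be matched / not matched;
    if P is given, the bonus ball comes from a separate pool of P balls,
    otherwise it is drawn from the remaining N - K balls of the main pool."""
    base = math.comb(K, B) * math.comb(N - K, K - B) / math.comb(N, K)
    if bonus is None:
        return base
    if P is None:                                     # bonus from the same pool
        factor = (K - B) / (N - K) if bonus else (N - 2 * K + B) / (N - K)
    else:                                             # bonus from a separate pool
        factor = 1 / P if bonus else (P - 1) / P
    return base * factor

print(1 / p_match(49, 6, 5, bonus=True))    # 2330636.0 -> 1 in 2,330,636
print(100 * p_match(49, 6, 2, bonus=True))  # ~1.2314   -> about 1.23 %
```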
It is a hard (and often open) problem to calculate the minimum number of tickets one needs to purchase to guarantee that at least one of these tickets matches at least 2 numbers. In the 5-from-90 lotto, the minimum number of tickets that can guarantee a ticket with at least 2 matches is 100. [ 3 ]
Coincidences in lottery drawings often capture our imagination and can make news headlines, as they seemingly highlight patterns in what should be entirely random outcomes. For example, repeated numbers appearing across different draws may appear, on the surface, too improbable to have occurred by pure chance. For instance, on September 6, 2009, the six numbers 4, 15, 23, 24, 35, and 42 were drawn from 49 in the Bulgarian national 6/49 lottery, and in the very next drawing on September 10, the same six numbers were drawn again. Lottery mathematics can be used to analyze these extraordinary events. [ 1 ]
As a discrete probability space , the probability of any particular lottery outcome is atomic , meaning it is greater than zero. Therefore, the probability of any event is the sum of probabilities of the outcomes of the event. This makes it easy to calculate quantities of interest from information theory . For example, the information content of any event is easy to calculate, by the formula
I ( E ) := − log [ Pr ( E ) ] = − log ( P ) . {\displaystyle \operatorname {I} (E):=-\log {\left[\Pr {\left(E\right)}\right]}=-\log {\left(P\right)}.}
In particular, the information content of outcome x {\displaystyle x} of discrete random variable X {\displaystyle X} is
I X ( x ) := − log [ p X ( x ) ] = log ( 1 p X ( x ) ) . {\displaystyle \operatorname {I} _{X}(x):=-\log {\left[p_{X}{\left(x\right)}\right]}=\log {\left({\frac {1}{p_{X}{\left(x\right)}}}\right)}.}
For example, winning in the example § Choosing 6 from 49 above is a Bernoulli-distributed random variable X {\displaystyle X} with a 1 / 13,983,816 chance of winning ("success"). We write X ∼ B e r n o u l l i ( p ) = B ( 1 , p ) {\textstyle X\sim \mathrm {Bernoulli} \!\left(p\right)=\mathrm {B} \!\left(1,p\right)} with p = 1 13 , 983 , 816 {\textstyle p={\tfrac {1}{13,983,816}}} and q = 13 , 983 , 815 13 , 983 , 816 {\textstyle q={\tfrac {13,983,815}{13,983,816}}} . The information content of winning is
I X ( win ) = − log 2 p X ( win ) = − log 2 1 13 , 983 , 816 ≈ 23.73725 {\displaystyle \operatorname {I} _{X}({\text{win}})=-\log _{2}{p_{X}{({\text{win}})}}=-\log _{2}\!{\tfrac {1}{13,983,816}}\approx 23.73725} shannons or bits of information. (See units of information for further explanation of terminology.) The information content of losing is
I X ( lose ) = − log 2 p X ( lose ) = − log 2 13 , 983 , 815 13 , 983 , 816 ≈ 1.0317 × 10 − 7 shannons . {\displaystyle {\begin{aligned}\operatorname {I} _{X}({\text{lose}})&=-\log _{2}{p_{X}{({\text{lose}})}}=-\log _{2}\!{\tfrac {13,983,815}{13,983,816}}\\&\approx 1.0317\times 10^{-7}{\text{ shannons}}.\end{aligned}}}
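These values are straightforward to reproduce numerically; the sketch below uses base-2 logarithms (shannons) and, anticipating the entropy formula given below, also checks the entropy of the win/lose variable:

```python
import math

p_win = 1 / 13_983_816
p_lose = 1 - p_win

def info(p):
    """Information content -log2(p), in shannons (bits)."""
    return -math.log2(p)

print(info(p_win))    # ~23.737 shannons for winning
print(info(p_lose))   # ~1.03e-07 shannons for losing

# Shannon entropy of the Bernoulli win/lose variable (cf. the formula below)
print(p_win * info(p_win) + p_lose * info(p_lose))   # ~1.80e-06 shannons
```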
The information entropy of a lottery probability distribution is also easy to calculate as the expected value of the information content.
H ( X ) = ∑ x − p X ( x ) log p X ( x ) = ∑ x p X ( x ) I X ( x ) = d e f E [ I X ( x ) ] {\displaystyle {\begin{alignedat}{2}\mathrm {H} (X)&=\sum _{x}{-p_{X}{\left(x\right)}\log {p_{X}{\left(x\right)}}}\ &=\sum _{x}{p_{X}{\left(x\right)}\operatorname {I} _{X}(x)}\\&{\overset {\underset {\mathrm {def} }{}}{=}}\ \mathbb {E} {\left[\operatorname {I} _{X}(x)\right]}\end{alignedat}}}
Oftentimes the random variable of interest in the lottery is a Bernoulli trial . In this case, the Bernoulli entropy function may be used. Using X {\displaystyle X} representing winning the 6-of-49 lottery, the Shannon entropy of 6-of-49 above is
H ( X ) = − p log ( p ) − q log ( q ) = − 1 13 , 983 , 816 log 1 13 , 983 , 816 − 13 , 983 , 815 13 , 983 , 816 log 13 , 983 , 815 13 , 983 , 816 ≈ 1.80065 × 10 − 6 shannons. {\displaystyle {\begin{aligned}\mathrm {H} (X)&=-p\log(p)-q\log(q)=-{\tfrac {1}{13,983,816}}\log \!{\tfrac {1}{13,983,816}}-{\tfrac {13,983,815}{13,983,816}}\log \!{\tfrac {13,983,815}{13,983,816}}\\&\approx 1.80065\times 10^{-6}{\text{ shannons.}}\end{aligned}}} | https://en.wikipedia.org/wiki/Lottery_mathematics |
The lotus effect refers to self-cleaning properties that are a result of ultrahydrophobicity as exhibited by the leaves of Nelumbo , the lotus flower. [ 1 ] Dirt particles are picked up by water droplets due to the micro- and nanoscopic architecture on the surface, which minimizes the droplet's adhesion to that surface. Ultrahydrophobicity and self-cleaning properties are also found in other plants, such as Tropaeolum (nasturtium), Opuntia (prickly pear), Alchemilla , cane, and also on the wings of certain insects. [ 2 ]
The phenomenon of ultrahydrophobicity was first studied by Dettre and Johnson in 1964 [ 3 ] using rough hydrophobic surfaces. Their work developed a theoretical model based on experiments with glass beads coated with paraffin or PTFE telomer . The self-cleaning property of ultrahydrophobic micro- nanostructured surfaces was studied by Wilhelm Barthlott and Ehler in 1977, [ 4 ] who described such self-cleaning and ultrahydrophobic properties for the first time as the "lotus effect"; perfluoroalkyl and perfluoropolyether ultrahydrophobic materials were developed by Brown in 1986 for handling chemical and biological fluids. [ 5 ] Other biotechnical applications have emerged since the 1990s. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
The high surface tension of water causes droplets to assume a nearly spherical shape, since a sphere has minimal surface area for a given volume and therefore minimal surface energy. On contact of liquid with a surface, adhesion forces result in wetting of the surface. Either complete or incomplete wetting may occur depending on the structure of the surface and the surface tension of the liquid. [ 12 ] The cause of self-cleaning properties is the hydrophobic water-repellent double structure of the surface. [ 13 ] This enables the contact area and the adhesion force between surface and droplet to be significantly reduced, resulting in a self-cleaning process. [ 14 ] [ 15 ] [ 16 ] This hierarchical double structure is formed out of a characteristic epidermis (its outermost layer called the cuticle) and the covering waxes. The epidermis of the lotus plant possesses papillae 10 μm to 20 μm in height and 10 μm to 15 μm in width on which the so-called epicuticular waxes are imposed. These superimposed waxes are hydrophobic and form the second layer of the double structure. The wax layer regenerates itself, and this biochemical property sustains the water repellency of the surface.
The hydrophobicity of a surface can be measured by its contact angle . The higher the contact angle the higher the hydrophobicity of a surface. Surfaces with a contact angle < 90° are referred to as hydrophilic and those with an angle >90° as hydrophobic. Some plants show contact angles up to 160° and are called ultrahydrophobic, meaning that only 2–3% of the surface of a droplet (of typical size) is in contact. Plants with a double structured surface like the lotus can reach a contact angle of 170°, whereby the droplet's contact area is only 0.6%. All this leads to a self-cleaning effect.
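The contact-area percentages quoted above can be estimated with simple geometry, under the idealised assumption that the droplet is a spherical cap meeting the surface at contact angle θ (a sketch, not a rigorous wetting model):

```python
import math

def contact_fraction(theta_deg):
    """Fraction of a sessile droplet's total surface (liquid-air cap plus contact
    base) that touches the solid, for an ideal spherical cap with the given contact angle."""
    t = math.radians(theta_deg)
    cap_area = 2 * (1 - math.cos(t))   # liquid-air area divided by pi * R^2
    base_area = math.sin(t) ** 2       # solid-liquid area divided by pi * R^2
    return base_area / (cap_area + base_area)

print(f"{contact_fraction(160):.1%}")  # ~2.9% -- ultrahydrophobic surface
print(f"{contact_fraction(170):.1%}")  # ~0.8% -- lotus-like double structure (the article cites ~0.6%)
```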
Dirt particles with an extremely reduced contact area are picked up by water droplets and are thus easily cleaned off the surface. If a water droplet rolls across such a contaminated surface the adhesion between the dirt particle, irrespective of its chemistry, and the droplet is higher than between the particle and the surface. This cleaning effect has been demonstrated on common materials such as stainless steel when a superhydrophobic surface is produced. [ 17 ] As this self-cleaning effect is based on the high surface tension of water it does not work with organic solvents. Therefore, the hydrophobicity of a surface is no protection against graffiti.
This effect is of great importance for plants as a protection against pathogens such as fungi or algal growth, and also for animals such as butterflies , dragonflies and other insects that are unable to clean all their body parts.
Another positive effect of self-cleaning is the prevention of contamination on the light-exposed area of a plant's surface, which would otherwise reduce photosynthesis.
When it was discovered that the self-cleaning qualities of ultrahydrophobic surfaces come from physical-chemical properties at the microscopic to nanoscopic scale rather than from the specific chemical properties of the leaf surface, [ 18 ] [ 19 ] [ 20 ] the possibility arose of using this effect in manmade surfaces, by mimicking nature in a general way rather than a specific one.
Some nanotechnologists have developed treatments, coatings, paints, roof tiles, fabrics and other surfaces that can stay dry and clean themselves by replicating in a technical manner the self-cleaning properties of plants, such as the lotus plant. This can usually be achieved using special fluorochemical or silicone treatments on structured surfaces or with compositions containing micro-scale particulates.
In addition to chemical surface treatments, which can be removed over time, metals have been sculpted with femtosecond pulse lasers to produce the lotus effect. [ 21 ] The materials are uniformly black at any angle, which combined with the self-cleaning properties might produce very low maintenance solar thermal energy collectors, while the high durability of the metals could be used for self-cleaning latrines to reduce disease transmission. [ 22 ]
Further applications have been marketed, such as self-cleaning glasses installed in the sensors of traffic control units on German autobahns developed by a cooperation partner (Ferro GmbH). [ citation needed ] The Swiss companies HeiQ and Schoeller Textil have developed stain-resistant textiles under the brand names " HeiQ Eco Dry " and " nanosphere " respectively. In October 2005, tests of the Hohenstein Research Institute showed that clothes treated with NanoSphere technology allowed tomato sauce, coffee and red wine to be easily washed away even after a few washes. Another possible application is thus with self-cleaning awnings, tarpaulins and sails, which otherwise quickly become dirty and difficult to clean.
Superhydrophobic coatings applied to microwave antennas can significantly reduce rain fade and the buildup of ice and snow. "Easy to clean" products in advertisements are often wrongly equated with the self-cleaning properties of hydrophobic or ultrahydrophobic surfaces. Patterned ultrahydrophobic surfaces also show promise for "lab-on-a-chip" microfluidic devices and can greatly improve surface-based bioanalysis. [ 23 ]
Superhydrophobic or hydrophobic properties have been used in dew harvesting, or the funneling of water to a basin for use in irrigation. The Groasis Waterboxx has a lid with a microscopic pyramidal structure based on the ultrahydrophobic properties that funnel condensation and rainwater into a basin for release to a growing plant's roots. [ 24 ]
Although the self-cleaning phenomenon of the lotus was possibly known in Asia long before (reference to the lotus effect is found in the Bhagavad Gita [ 25 ] ), its mechanism was explained only in the early 1970s after the introduction of the scanning electron microscope . [ 4 ] [ 16 ] Studies were performed with leaves of Tropaeolum and lotus ( Nelumbo ). [ 6 ] Similar to lotus effect, a recent study has revealed honeycomb-like micro-structures on the taro leaf, which makes the leaf superhydrophobic. The measured contact angle on this leaf in this study is around 148 degrees. [ 26 ] | https://en.wikipedia.org/wiki/Lotus_effect |
Loudness monitoring of programme levels is needed in radio and television broadcasting , as well as in audio post production . Traditional methods of measuring signal levels, such as the peak programme meter and VU meter , do not give the subjectively valid measure of loudness that many would argue is needed to optimise the listening experience when changing channels or swapping disks.
The need for proper loudness monitoring is apparent in the loudness war that is now found everywhere in the audio field, and the extreme compression that is now applied to programme levels.
Meters have been introduced that aim to measure human-perceived loudness by taking account of the equal-loudness contours and other factors, such as audio spectrum, duration, compression and intensity. One such device was developed by CBS Laboratories in the 1980s. Complaints to broadcasters about the intrusive level of interstitial programmes (advertisements, commercials) have resulted in projects to develop such meters. Based on loudness metering, many manufacturers have developed real-time audio processors that adjust the audio signal to match a specified target loudness level, preserving volume consistency for home listeners.
In August 2010, the European Broadcasting Union published a new metering specification EBU Tech 3341 , which builds on ITU-R BS.1770 . To make sure meters from different manufacturers provide the same reading in LUFS units, EBU Tech 3341 specifies the EBU Mode , which includes a Momentary (400 ms), Short term (3 s) and Integrated (from start to stop) meter and a set of audio signals to test the meters. [ 1 ] | https://en.wikipedia.org/wiki/Loudness_monitoring |
The loudness war (or loudness race ) is a trend of increasing audio levels in recorded music, which reduces audio fidelity and—according to many critics—listener enjoyment. Increasing loudness was first reported as early as the 1940s, with respect to mastering practices for 7-inch singles . [ 1 ] The maximum peak level of analog recordings such as these is limited by varying specifications of electronic equipment along the chain from source to listener, including vinyl and cassette players. The issue garnered renewed attention starting in the 1990s with the introduction of digital signal processing capable of producing further loudness increases.
With the advent of the compact disc (CD), music is encoded to a digital format with a clearly defined maximum peak amplitude. Once the maximum amplitude of a CD is reached, loudness can be increased still further through signal processing techniques such as dynamic range compression and equalization . Engineers can apply an increasingly high ratio of compression to a recording until it peaks more frequently at the maximum amplitude, a technique colloquially known as brickwalling. In extreme cases, efforts to increase loudness can result in clipping and other audible distortion . [ 2 ] Modern recordings that use extreme dynamic range compression and other measures to increase loudness therefore can sacrifice sound quality to loudness. The competitive escalation of loudness has led music fans and members of the musical press to refer to the affected albums as "victims of the loudness war".
The practice of focusing on loudness in audio mastering can be traced back to the introduction of the compact disc, [ 3 ] but also existed to some extent when the vinyl phonograph record was the primary released recording medium and when 7-inch singles were played on jukebox machines in clubs and bars. The so-called wall of sound (not to be confused with the Phil Spector Wall of Sound ) formula preceded the loudness war, but achieved its goal using a variety of techniques, such as instrument doubling and reverberation , as well as compression . [ 4 ]
Jukeboxes became popular in the 1940s and were often set to a predetermined level by the owner, so any record that was mastered louder than the others would stand out. Similarly, starting in the 1950s, producers would request louder 7-inch singles so that songs would stand out when auditioned by program directors for radio stations. [ 1 ] In particular, many Motown records pushed the limits of how loud records could be made; according to one of their engineers, they were "notorious for cutting some of the hottest 45s in the industry." [ 5 ] In the 1960s and 1970s, compilation albums of hits by multiple different artists became popular, and if artists and producers found their song was quieter than others on the compilation, they would insist that their song be remastered to be competitive.
Because of the limitations of the vinyl format, the ability to manipulate loudness was also limited. Attempts to achieve extreme loudness could render the medium unplayable. One example was the "hot" master of Led Zeppelin II by mastering engineer Bob Ludwig which caused some cartridges to mistrack; the album was recalled and issued with lower compression levels. [ 6 ] Digital media such as CDs remove these restrictions and as a result, increasing loudness levels have been a more severe issue in the CD era. [ 7 ] Modern computer-based digital audio effects processing allows mastering engineers to have greater direct control over the loudness of a song: for example, a brick-wall limiter can look ahead at an upcoming signal to limit its level. [ 8 ]
Since CDs were not the primary medium for popular music until the late 1980s, there was little motivation for competitive loudness practices then. The common practice of mastering music for CD involved matching the highest peak of a recording at, or close to, digital full scale , and referring to digital levels along the lines of more familiar analog VU meters . When using VU meters, a certain point (usually 14 dB below the disc's maximum amplitude, i.e. −14 dBFS) was used in the same way as the saturation point (signified as 0 dB) of analog recording, with several dB of the CD's recording level reserved for amplitude exceeding the saturation point (often referred to as the red zone , signified by a red bar in the meter display), because digital media cannot exceed 0 decibels relative to full scale ( dBFS ). [ citation needed ] The average RMS level of the average rock song during most of the decade was around −16.8 dBFS. [ 10 ] : 246
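RMS levels quoted in dBFS, and the crest factor discussed later in this article, can be measured directly from raw samples; the sketch below is a simplified illustration (true loudness meters per ITU-R BS.1770 additionally apply K-weighting and gating, which are omitted here):

```python
import numpy as np

def level_stats(samples):
    """Peak level and RMS level in dBFS, plus crest factor (peak minus RMS, in dB),
    for a float signal normalised to full scale (-1.0 .. +1.0)."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    to_dbfs = lambda x: 20 * np.log10(x)
    return to_dbfs(peak), to_dbfs(rms), to_dbfs(peak) - to_dbfs(rms)

# A full-scale sine wave: peak 0 dBFS, RMS about -3.01 dBFS, crest factor about 3.01 dB.
t = np.linspace(0, 1, 48_000, endpoint=False)
print(level_stats(np.sin(2 * np.pi * 440 * t)))
```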
By the early 1990s, mastering engineers had learned how to optimize for the CD medium and the loudness war had not yet begun in earnest. [ 11 ] However, in the early 1990s, CDs with louder music levels began to surface, and CD levels became more and more likely to bump up to the digital limit, [ note 1 ] resulting in recordings where the peaks on an average rock or beat-heavy pop CD hovered near 0 dBFS, [ note 2 ] but only occasionally reached it. [ citation needed ]
The concept of making music releases hotter began to appeal to people within the industry, in part because of how noticeably louder some releases had become and also in part because the industry believed that customers preferred louder-sounding CDs, even though that may not have been true. [ 12 ] Engineers, musicians, and labels each developed their own ideas of how CDs could be made louder. [ 13 ] In 1994, the first digital brick-wall limiter with look-ahead (the Waves L1 ) was mass-produced; this feature, since then, has been commonly incorporated in digital mastering limiters and maximizers. [ note 3 ] While the increase in CD loudness was gradual throughout the 1990s, some opted to push the format to the limit, such as on Oasis 's widely popular album (What's the Story) Morning Glory? , whose RMS level averaged −8 dBFS on many of its tracks—a rare occurrence, especially in the year it was released (1995). [ 11 ] Red Hot Chili Peppers 's Californication (1999) represented another milestone, with prominent clipping occurring throughout the album. [ 13 ]
By the early 2000s, the loudness war had become fairly widespread, especially with some remastered re-releases and greatest hits collections of older music. In 2008, loud mastering practices received mainstream media attention with the release of Metallica 's Death Magnetic album. The CD version of the album has a high average loudness that pushes peaks beyond the point of digital clipping, causing distortion. This was reported by customers and music industry professionals, and covered in multiple international publications, including Rolling Stone , [ 14 ] The Wall Street Journal , [ 15 ] BBC Radio , [ 16 ] Wired , [ 17 ] and The Guardian . [ 18 ] Ted Jensen , a mastering engineer involved in the Death Magnetic recordings, criticized the approach employed during the production process. [ 19 ] When a version of the album without dynamic range compression was included in the downloadable content for the video game Guitar Hero III , copies of this version were actively sought out by those who had already purchased the official CD release. The Guitar Hero version of the album songs exhibit much higher dynamic range and less clipping than those on the CD release, as can be seen from the illustration. [ 20 ]
In late 2008, mastering engineer Bob Ludwig offered three versions of the Guns N' Roses album Chinese Democracy for approval to co-producers Axl Rose and Caram Costanzo. They selected the one with the least compression. Ludwig wrote, "I was floored when I heard they decided to go with my full dynamics version and the loudness-for-loudness-sake versions be damned." Ludwig said the "fan and press backlash against the recent heavily compressed recordings finally set the context for someone to take a stand and return to putting music and dynamics above sheer level." [ 21 ]
In March 2010, mastering engineer Ian Shepherd organised the first Dynamic Range Day, [ 22 ] a day of online activity intended to raise awareness of the issue and promote the idea that "Dynamic music sounds better". The day was a success and its follow-ups in the following years have built on this, gaining industry support from companies like SSL , Bowers & Wilkins , TC Electronic and Shure as well as engineers like Bob Ludwig , Guy Massey and Steve Lillywhite . [ 23 ] Shepherd cites research showing there is no connection between sales and loudness, and that people prefer more dynamic music. [ 4 ] [ 24 ] He also argues that file-based loudness normalization will eventually render the war irrelevant. [ 25 ]
One of the biggest albums of 2013 was Daft Punk 's Random Access Memories , with many reviews commenting on the album's great sound. [ 26 ] [ 27 ] Mixing engineer Mick Guzauski deliberately chose to use less compression on the project, commenting "We never tried to make it loud and I think it sounds better for it." [ 28 ] In January 2014, the album won five Grammy Awards, including Best Engineered Album (Non-Classical). [ 29 ]
Analysis in the early 2010s suggests that the loudness trend may have peaked around 2005 and subsequently reduced, with a pronounced increase in dynamic range (both overall and minimum) for albums since 2005. [ 30 ]
In 2013, mastering engineer Bob Katz predicted that the loudness war would be over by mid-2014, claiming that mandatory use of Sound Check by Apple would lead to producers and mastering engineers to turn down the level of their songs to the standard level, or Apple will do it for them. He believed this would eventually result in producers and engineers making more dynamic masters to take account of this factor. [ 31 ] [ 32 ] [ 33 ]
Earache Records reissued much of its catalog as part of its Full Dynamic Range series, intended to counteract the loudness war and ensure that fans hear the music as it was intended. [ 34 ]
By the late 2010s/early 2020s, most major U.S. streaming services began normalizing audio by default. [ 35 ] Target loudness for normalization varies by platform:
Measured LUFS may further vary among streaming services due to differing measurement systems and adjustment algorithms. For example, Amazon, Tidal, and YouTube do not increase the volume of tracks. [ 36 ]
Some services do not normalize audio, for example Bandcamp . [ 36 ]
When music is broadcast over radio, the station applies its own signal processing , further reducing the dynamic range of the material to closely match levels of absolute amplitude, regardless of the original recording's loudness. [ 42 ]
Competition for listeners between radio stations has contributed to a loudness war in radio broadcasting. [ 43 ] Loudness jumps between television broadcast channels and between programmes within the same channel, and between programs and intervening adverts are a frequent source of audience complaints. [ 44 ] The European Broadcasting Union has addressed this issue in the EBU PLOUD Group with publication of the EBU R 128 recommendation. In the U.S., legislators passed the CALM act , which led to enforcement of the formerly voluntary ATSC A/85 standard for loudness management.
In 2007, Suhas Sreedhar published an article about the loudness war in the engineering magazine IEEE Spectrum . Sreedhar said that the greater possible dynamic range of CDs was being set aside in favor of maximizing loudness using digital technology. Sreedhar said that the over-compressed modern music was fatiguing, that it did not allow the music to breathe . [ 45 ]
The production practices associated with the loudness war have been condemned by recording industry professionals including Alan Parsons and Geoff Emerick , [ 46 ] along with mastering engineers Doug Sax , Stephen Marcussen , and Bob Katz . [ 5 ] Musician Bob Dylan has also condemned the practice, saying, "You listen to these modern records, they're atrocious, they have sound all over them. There's no definition of nothing, no vocal, no nothing, just like—static." [ 47 ] [ 48 ] Music critics have complained about excessive compression. The Rick Rubin –produced albums Californication and Death Magnetic have been criticised for loudness by The Guardian ; the latter was also criticised by Audioholics . [ 49 ] [ 50 ] Stylus Magazine said the former suffered from so much digital clipping that "even non-audiophile consumers complained about it". [ 11 ]
Opponents have called for immediate changes in the music industry regarding the level of loudness. [ 48 ] In August 2006, Angelo Montrone , the vice-president of A&R for One Haven Music (a Sony Music company), in an open letter decrying the loudness war, claimed that mastering engineers are being forced against their will or are preemptively making releases louder to get the attention of industry heads. [ 7 ] Some bands are being petitioned by the public to re-release their music with less distortion. [ 46 ]
The nonprofit organization Turn Me Up! was created by Charles Dye , John Ralston , and Allen Wagner in 2007 with the aim of certifying albums that contain a suitable level of dynamic range [ 51 ] and encourage the sale of quieter records by placing a Turn Me Up! sticker on certified albums. [ 52 ] As of 2019 [update] , the group has not produced an objective method for determining what will be certified. [ 53 ]
A hearing researcher at House Ear Institute is concerned that the loudness of new albums could possibly harm listeners' hearing, particularly that of children. [ 52 ] The Journal of General Internal Medicine has published a paper suggesting increasing loudness may be a risk factor in hearing loss. [ 54 ] [ 55 ]
A two-minute YouTube video addressing this issue by audio engineer Matt Mayfield [ 56 ] has been referenced by The Wall Street Journal [ 57 ] and the Chicago Tribune . [ 58 ] Pro Sound Web quoted Mayfield, "When there is no quiet, there can be no loud." [ 59 ]
The book Perfecting Sound Forever: An Aural History of Recorded Music , by Greg Milner, presents the loudness war in radio and music production as a central theme. [ 13 ] The book Mastering Audio: The Art and the Science , by Bob Katz, includes chapters about the origins of the loudness war and another suggesting methods of combating the war. [ 10 ] : 241 These chapters are based on Katz's presentation at the 107th Audio Engineering Society Convention (1999) and subsequent Audio Engineering Society Journal publication (2000). [ 60 ]
In September 2011, Emmanuel Deruty wrote in Sound on Sound , a recording industry magazine, that the loudness war has not led to a decrease in dynamic variability in modern music, possibly because the original digitally recorded source material of modern recordings is more dynamic than analogue material. Deruty and Tardieu analyzed the loudness range (LRA) over a 45-year span of recordings and observed that the crest factor of recorded music diminished significantly between 1985 and 2010, but the LRA remained relatively constant. [ 30 ] Deruty and Damien Tardieu criticized Sreedhar's methods in an AES paper, saying that Sreedhar had confused crest factor (peak to RMS) with dynamics in the musical sense (pianissimo to fortissimo). [ 61 ]
This analysis was also challenged by Ian Shepherd and Bob Katz on the basis that the LRA was designed for assessing loudness variation within a track while the EBU R128 peak to loudness ratio (PLR) is a measure of the peak level of a track relative to a reference loudness level and is a more helpful metric than LRA in assessing overall perceived dynamic range. PLR measurements show a trend of reduced dynamic range throughout the 1990s. [ 62 ] [ 63 ]
Debate continues regarding which measurement methods are most appropriate to evaluating the loudness war. [ 64 ] [ 65 ] [ 66 ]
Albums that have been criticized for their sound quality include: | https://en.wikipedia.org/wiki/Loudness_war |
A loudspeaker (commonly referred to as a speaker or, more fully, a speaker system ) is a combination of one or more speaker drivers , an enclosure , and electrical connections (possibly including a crossover network ). The speaker driver is an electroacoustic transducer [ 1 ] : 597 that converts an electrical audio signal into a corresponding sound . [ 2 ]
The driver is a linear motor connected to a diaphragm , which transmits the motor's movement to produce sound by moving air. An audio signal, typically originating from a microphone, recording, or radio broadcast, is electronically amplified to a power level sufficient to drive the motor, reproducing the sound corresponding to the original unamplified signal. This process functions as the inverse of a microphone . In fact, the dynamic speaker driver—the most common type—shares the same basic configuration as a dynamic microphone , which operates in reverse as a generator .
The dynamic speaker was invented in 1925 by Edward W. Kellogg and Chester W. Rice . When the electrical current from an audio signal passes through its voice coil —a coil of wire capable of moving axially in a cylindrical gap containing a concentrated magnetic field produced by a permanent magnet —the coil is forced to move rapidly back and forth by the magnetic (Lorentz) force on the current; the coil is attached to a diaphragm or speaker cone (as it is usually conically shaped for sturdiness) in contact with air, thus creating sound waves .
For a speaker to efficiently produce sound, especially at lower frequencies, the speaker driver must be baffled so that the sound emanating from its rear does not cancel out the (intended) sound from the front; this generally takes the form of a speaker enclosure or speaker cabinet , an often rectangular box made of wood, but sometimes metal or plastic. The enclosure's design plays an important acoustic role thus determining the resulting sound quality. Most high fidelity speaker systems (picture at right) include two or more sorts of speaker drivers, each specialized in one part of the audible frequency range. The smaller drivers capable of reproducing the highest audio frequencies are called tweeters , those for middle frequencies are called mid-range drivers and those for low frequencies are called woofers . In a two-way or three-way speaker system (one with drivers covering two or three different frequency ranges) there is a small amount of passive electronics called a crossover network which helps direct components of the electronic signal to the speaker drivers best capable of reproducing those frequencies. In a powered speaker system, the power amplifier actually feeding the speaker drivers is built into the enclosure itself; these have become more and more common, especially as computer and Bluetooth speakers.
Smaller speakers are found in devices such as radios , televisions , portable audio players , personal computers ( computer speakers ), headphones , and earphones . Larger, louder speaker systems are used for home hi-fi systems ( stereos ), electronic musical instruments , sound reinforcement in theaters and concert halls, and in public address systems .
The term loudspeaker may refer to individual transducers (also known as drivers ) or to complete speaker systems consisting of an enclosure and one or more drivers.
To adequately and accurately reproduce a wide range of frequencies with even coverage, most loudspeaker systems employ more than one driver, particularly for higher sound pressure level (SPL) or maximum accuracy. Individual drivers are used to reproduce different frequency ranges. The drivers are named subwoofers (for very low frequencies); woofers (low frequencies); mid-range speakers (middle frequencies); tweeters (high frequencies); and sometimes supertweeters , for the highest audible frequencies and beyond. The terms for different speaker drivers differ, depending on the application. In two-way systems there is no mid-range driver, so the task of reproducing the mid-range sounds is divided between the woofer and tweeter. When multiple drivers are used in a system, a filter network, called an audio crossover , separates the incoming signal into different frequency ranges and routes them to the appropriate driver. A loudspeaker system with n separate frequency bands is described as n-way speakers : a two-way system will have a woofer and a tweeter; a three-way system employs a woofer, a mid-range, and a tweeter. Loudspeaker drivers of the type pictured are termed dynamic (short for electrodynamic) to distinguish them from other sorts including moving iron speakers , and speakers using piezoelectric or electrostatic systems.
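As a simple illustration of what a passive crossover does, the component values for a textbook first-order two-way network can be derived from the driver impedance and the desired crossover frequency; the 8 Ω / 2 kHz figures below are arbitrary examples, not recommendations for any particular loudspeaker:

```python
import math

def first_order_crossover(impedance_ohms, crossover_hz):
    """Component values for a first-order two-way passive crossover: a series
    inductor feeds the woofer (low-pass) and a series capacitor feeds the tweeter
    (high-pass), both with their -3 dB points at crossover_hz."""
    inductor_mh = impedance_ohms / (2 * math.pi * crossover_hz) * 1e3
    capacitor_uf = 1 / (2 * math.pi * crossover_hz * impedance_ohms) * 1e6
    return inductor_mh, capacitor_uf

L, C = first_order_crossover(8.0, 2000.0)
print(f"woofer inductor ~{L:.2f} mH, tweeter capacitor ~{C:.2f} uF")
# ~0.64 mH and ~9.95 uF for an 8-ohm system crossed over at 2 kHz
```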
Johann Philipp Reis installed an electric loudspeaker in his telephone in 1861; it was capable of reproducing clear tones, but later revisions could also reproduce muffled speech . [ 3 ] Alexander Graham Bell patented his first electric loudspeaker (a moving iron type capable of reproducing intelligible speech) as part of his telephone in 1876, which was followed in 1877 by an improved version from Ernst Siemens . During this time, Thomas Edison was issued a British patent for a system using compressed air as an amplifying mechanism for his early cylinder phonographs, but he ultimately settled for the familiar metal horn driven by a membrane attached to the stylus. In 1898, Horace Short patented a design for a loudspeaker driven by compressed air; he then sold the rights to Charles Parsons , who was issued several additional British patents before 1910. A few companies, including the Victor Talking Machine Company and Pathé , produced record players using compressed-air loudspeakers. Compressed-air designs are significantly limited by their poor sound quality and their inability to reproduce sound at low volume. Variants of the design were used for public address applications, and more recently, other variations have been used to test space-equipment resistance to the very loud sound and vibration levels that the launching of rockets produces. [ 4 ]
The first experimental moving-coil (also called dynamic ) loudspeaker was invented by Oliver Lodge in 1898. [ 5 ] The first practical moving-coil loudspeakers were manufactured by Danish engineer Peter L. Jensen and Edwin Pridham in 1915, in Napa, California . [ 6 ] Like previous loudspeakers these used horns to amplify the sound produced by a small diaphragm. Jensen was denied patents. Being unsuccessful in selling their product to telephone companies, in 1915 they changed their target market to radios and public address systems , and named their product Magnavox . Jensen was, for years after the invention of the loudspeaker, a part owner of The Magnavox Company. [ 7 ]
The moving-coil principle commonly used today in speakers was patented in 1925 by Edward W. Kellogg and Chester W. Rice . The key difference between previous attempts and the patent by Rice and Kellogg is the adjustment of mechanical parameters to provide a reasonably flat frequency response . [ 8 ]
These first loudspeakers used electromagnets , because large, powerful permanent magnets were generally not available at a reasonable price. The coil of an electromagnet, called a field coil, was energized by a current through a second pair of connections to the driver. This winding usually served a dual role, acting also as a choke coil , filtering the power supply of the amplifier that the loudspeaker was connected to. [ 9 ] AC ripple in the current was attenuated by the action of passing through the choke coil. However, AC line frequencies tended to modulate the audio signal going to the voice coil and added to the audible hum. In 1930 Jensen introduced the first commercial fixed-magnet loudspeaker; however, the large, heavy iron magnets of the day were impractical and field-coil speakers remained predominant until the widespread availability of lightweight alnico magnets after World War II.
In the 1930s, loudspeaker manufacturers began to combine two and three drivers or sets of drivers each optimized for a different frequency range in order to improve frequency response and increase sound pressure level. [ 10 ] In 1937, the first film industry-standard loudspeaker system, "The Shearer Horn System for Theatres", [ 11 ] a two-way system, was introduced by Metro-Goldwyn-Mayer . It used four 15" low-frequency drivers, a crossover network set for 375 Hz , and a single multi-cellular horn with two compression drivers providing the high frequencies. John Kenneth Hilliard , James Bullough Lansing , and Douglas Shearer all played roles in creating the system. At the 1939 New York World's Fair , a very large two-way public address system was mounted on a tower at Flushing Meadows . The eight 27" low-frequency drivers were designed by Rudy Bozak in his role as chief engineer for Cinaudagraph. High-frequency drivers were likely made by Western Electric . [ 12 ]
Altec Lansing introduced the 604 , which became their most famous coaxial Duplex driver, in 1943. It incorporated a high-frequency horn that sent sound through a hole in the pole piece of a 15-inch woofer for near-point-source performance. [ 13 ] Altec's "Voice of the Theatre" loudspeaker system was first sold in 1945, offering better coherence and clarity at the high output levels necessary in movie theaters. [ 14 ] The Academy of Motion Picture Arts and Sciences immediately began testing its sonic characteristics; they made it the film house industry standard in 1955. [ 15 ]
In 1954, Edgar Villchur developed the acoustic suspension principle of loudspeaker design. This allowed for better bass response than previously obtainable from drivers mounted in larger cabinets. [ 16 ] He and his partner Henry Kloss formed the Acoustic Research company to manufacture and market speaker systems using this principle. [ 17 ] Subsequently, continuous developments in enclosure design and materials led to significant audible improvements. [ 18 ]
The most notable improvements to date in modern dynamic drivers, and the loudspeakers that employ them, are improvements in cone materials, the introduction of higher-temperature adhesives, improved permanent magnet materials, improved measurement techniques, computer-aided design , and finite element analysis. At low frequencies, electrical network theory based on Thiele/Small parameters has been used to optimize the pairing of bass driver and enclosure since the early 1970s. [ 19 ]
The most common type of driver, commonly called a dynamic loudspeaker, uses a lightweight diaphragm , or cone , connected to a rigid basket , or frame , via a flexible suspension, commonly called a spider , that constrains a voice coil to move axially through a cylindrical magnetic gap. A protective dust cap glued in the cone's center prevents dust, most importantly ferromagnetic debris, from entering the gap.
When an electrical signal is applied to the voice coil, a magnetic field is created by the electric current in the voice coil, making it a variable electromagnet. The coil and the driver's magnetic system interact in a manner similar to a solenoid , generating a mechanical force that moves the coil (and thus, the attached cone). Application of alternating current moves the cone back and forth, accelerating and reproducing sound under the control of the applied electrical signal coming from the amplifier.
The following is a description of the individual components of this type of loudspeaker.
The diaphragm is usually manufactured with a cone- or dome-shaped profile. A variety of different materials may be used, but the most common are paper, plastic, and metal. The ideal diaphragm material is rigid, to prevent uncontrolled cone motion; low in mass, to minimize the force needed to start it moving and the energy it stores; and well damped, so that vibrations die away quickly after the signal stops, with little or no audible ringing at its resonance frequency. In practice, all three of these criteria cannot be met simultaneously using existing materials; thus, driver design involves trade-offs . For example, paper is light and typically well-damped, but is not stiff; metal may be stiff and light, but it usually has poor damping; plastic can be light, but typically, the stiffer it is made, the poorer the damping. As a result, many cones are made of some sort of composite material. For example, a cone might be made of cellulose paper, into which some carbon fiber , Kevlar , glass , hemp or bamboo fibers have been added; or it might use a honeycomb sandwich construction; or a coating might be applied to it so as to provide additional stiffening or damping.
The chassis, frame, or basket, is designed to be rigid, preventing deformation that could change critical alignments with the magnet gap, perhaps allowing the voice coil to rub against the magnet around the gap. Chassis are typically cast from aluminum alloy, in heavier magnet-structure speakers; or stamped from thin sheet steel in lighter-structure drivers. [ 20 ] Other materials such as molded plastic and damped plastic compound baskets are becoming common, especially for inexpensive, low-mass drivers. A metallic chassis can play an important role in conducting heat away from the voice coil; heating during operation changes resistance, causes physical dimensional changes and, if extreme, can scorch the varnish on the voice coil; it may even demagnetize permanent magnets.
The suspension system keeps the coil centered in the gap and provides a restoring (centering) force that returns the cone to a neutral position after moving. A typical suspension system consists of two parts: the spider , which connects the diaphragm or voice coil to the lower frame and provides the majority of the restoring force, and the surround , which helps center the coil/cone assembly and allows free pistonic motion aligned with the magnetic gap. The spider is usually made of a corrugated fabric disk, impregnated with a stiffening resin. The name comes from the shape of early suspensions, which were two concentric rings of Bakelite material, joined by six or eight curved legs . Variations of this topology included the addition of a felt disc to provide a barrier to particles that might otherwise cause the voice coil to rub.
The cone surround can be rubber or polyester foam , treated paper or a ring of corrugated, resin-coated fabric; it is attached to both the outer cone circumference and to the upper frame. These diverse surround materials, their shape and treatment can dramatically affect the acoustic output of a driver; each implementation has advantages and disadvantages. Polyester foam, for example, is lightweight and economical, though usually leaks air to some degree and is degraded by time, exposure to ozone, UV light, humidity and elevated temperatures, limiting useful life before failure.
The wire in a voice coil is usually made of copper , though aluminum —and, rarely, silver —may be used. The advantage of aluminum is its light weight, which reduces the moving mass compared to copper. This raises the resonant frequency of the speaker and increases its efficiency. A disadvantage of aluminum is that it is not easily soldered, and so connections must be robustly crimped together and sealed. Voice-coil wire cross sections can be circular, rectangular, or hexagonal, giving varying amounts of wire volume coverage in the magnetic gap space. The coil is oriented co-axially inside the gap; it moves back and forth within a small circular volume (a hole, slot, or groove) in the magnetic structure. The gap establishes a concentrated magnetic field between the two poles of a permanent magnet; the outside ring of the gap is one pole, and the center post (called the pole piece) is the other. The pole piece and backplate are often made as a single piece, called the poleplate or yoke.
The size and type of magnet and details of the magnetic circuit differ, depending on design goals. For instance, the shape of the pole piece affects the magnetic interaction between the voice coil and the magnetic field, and is sometimes used to modify a driver's behavior. A shorting ring , or Faraday loop , may be included as a thin copper cap fitted over the pole tip or as a heavy ring situated within the magnet-pole cavity. The benefits of this complication are reduced impedance at high frequencies, providing extended treble output, reduced harmonic distortion, and a reduction in the inductance modulation that typically accompanies large voice coil excursions. On the other hand, the copper cap requires a wider voice-coil gap, with increased magnetic reluctance; this reduces available flux, requiring a larger magnet for equivalent performance.
Electromagnets were often used in musical instrument amplifier cabinets well into the 1950s; there were economic savings in those using tube amplifiers as the field coil could, and usually did, do double duty as a power supply choke. Very few manufacturers still produce electrodynamic loudspeakers with electrically powered field coils , as was common in the earliest designs.
Speaker system design involves subjective perceptions of timbre and sound quality, measurements and experiments. [ 23 ] [ 24 ] [ 25 ] Adjusting a design to improve performance is done using a combination of magnetic, acoustic, mechanical, electrical, and materials science theory, and tracked with high-precision measurements and the observations of experienced listeners. A few of the issues speaker and driver designers must confront are distortion, acoustic lobing , phase effects, off-axis response, and crossover artifacts. Designers can use an anechoic chamber to ensure the speaker can be measured independently of room effects, or any of several electronic techniques that, to some extent, substitute for such chambers. Some developers eschew anechoic chambers in favor of specific standardized room setups intended to simulate real-life listening conditions.
Individual electrodynamic drivers provide their best performance within a limited frequency range. Multiple drivers (e.g. subwoofers, woofers, mid-range drivers, and tweeters) are generally combined into a complete loudspeaker system to provide performance beyond that constraint. The three most commonly used sound radiation systems are the cone, dome and horn-type drivers.
A full- or wide-range driver is a speaker driver designed to be used alone to reproduce an audio channel without the help of other drivers and therefore must cover the audio frequency range required by the application. These drivers are small, typically 3 to 8 inches (7.6 to 20.3 cm) in diameter to permit reasonable high-frequency response, and carefully designed to give low-distortion output at low frequencies, though with reduced maximum output level. Full-range drivers are found, for instance, in public address systems, in televisions, small radios, intercoms, and some computer speakers .
In hi-fi speaker systems, the use of wide-range drivers can avoid undesirable interactions between multiple drivers caused by non-coincident driver location or crossover network issues but also may limit frequency response and output abilities (most especially at low frequencies). Hi-fi speaker systems built with wide-range drivers may require large, elaborate, or expensive enclosures to approach optimum performance.
Full-range drivers often employ an additional cone called a whizzer : a small, light cone attached to the joint between the voice coil and the primary cone. The whizzer cone extends the high-frequency response of the driver and broadens its high-frequency directivity, which would otherwise be greatly narrowed due to the outer diameter cone material failing to keep up with the central voice coil at higher frequencies. The main cone in a whizzer design is manufactured so as to flex more in the outer diameter than in the center. The result is that the main cone delivers low frequencies and the whizzer cone contributes most of the higher frequencies. Since the whizzer cone is smaller than the main diaphragm, output dispersion at high frequencies is improved relative to an equivalent single larger diaphragm.
Limited-range drivers, also used alone, are typically found in computers, toys, and clock radios . These drivers are less elaborate and less expensive than wide-range drivers, and they may be severely compromised to fit into very small mounting locations. In these applications, sound quality is a low priority.
A subwoofer is a woofer driver used only for the lowest-pitched part of the audio spectrum: typically below 200 Hz for consumer systems, [ 26 ] below 100 Hz for professional live sound, [ 27 ] and below 80 Hz in THX -approved systems. [ 28 ] Because the intended range of frequencies is limited, subwoofer system design is usually simpler in many respects than for conventional loudspeakers, often consisting of a single driver enclosed in a suitable enclosure. Since sound in this frequency range can easily bend around corners by diffraction , the speaker aperture does not have to face the audience, and subwoofers can be mounted in the bottom of the enclosure, facing the floor. This is eased by the limitations of human hearing at low frequencies: such sounds cannot be located in space because their wavelengths are large, unlike higher frequencies, which produce differential effects between the ears through shadowing by the head and diffraction around it, both of which we rely upon for localization cues.
To accurately reproduce very low bass notes, subwoofer systems must be solidly constructed and properly braced to avoid unwanted sounds from cabinet vibrations. As a result, good subwoofers are typically quite heavy. Many subwoofer systems include integrated power amplifiers and electronic subsonic -filters, with additional controls relevant to low-frequency reproduction (e.g. a crossover knob and a phase switch). These variants are known as active or powered subwoofers. [ 29 ] In contrast, passive subwoofers require external amplification.
In typical installations, subwoofers are physically separated from the rest of the speaker cabinets. Because of propagation delay and positioning, their output may be out of phase with the rest of the sound. Consequently, a subwoofer's power amp often has a phase-delay adjustment which may be used to improve the performance of the system as a whole. Subwoofers are widely used in large concert and mid-sized venue sound reinforcement systems. Subwoofer cabinets are often built with a bass reflex port, a design feature which, if properly engineered, improves bass performance and increases efficiency.
A woofer is a driver that reproduces low frequencies. The driver works with the characteristics of the speaker enclosure to produce suitable low frequencies. Some loudspeaker systems use a woofer for the lowest frequencies, sometimes well enough that a subwoofer is not needed. Additionally, some loudspeakers use the woofer to handle middle frequencies, eliminating the mid-range driver.
A mid-range speaker is a loudspeaker driver that reproduces a band of frequencies generally between 1–6 kHz, otherwise known as the mid frequencies (between the woofer and tweeter). Mid-range driver diaphragms can be made of paper or composite materials and can be direct radiation drivers (rather like smaller woofers) or they can be compression drivers (rather like some tweeter designs). If the mid-range driver is a direct radiator, it can be mounted on the front baffle of a loudspeaker enclosure, or, if a compression driver, mounted at the throat of a horn for added output level and control of radiation pattern.
A tweeter is a high-frequency driver that reproduces the highest frequencies in a speaker system. A major problem in tweeter design is achieving wide angular sound coverage (off-axis response), since high-frequency sound tends to leave the speaker in narrow beams. Soft-dome tweeters are widely found in home stereo systems, and horn-loaded compression drivers are common in professional sound reinforcement. Ribbon tweeters have gained popularity as the output power of some designs has been increased to levels useful for professional sound reinforcement, and their output pattern is wide in the horizontal plane, a pattern that has convenient applications in concert sound. [ 30 ]
A coaxial driver is a loudspeaker driver with two or more combined concentric drivers. Coaxial drivers have been produced by Altec , Tannoy , Pioneer , KEF , SEAS, B&C Speakers, BMS, Cabasse and Genelec . [ 31 ]
Used in multi-driver speaker systems , the crossover is an assembly of filters that separate the input signal into different frequency bands according to the requirements of each driver. Hence the drivers receive power only in the sound frequency range they were designed for, thereby reducing distortion in the drivers and interference between them. Crossovers can be passive or active .
A passive crossover is an electronic circuit that uses a combination of one or more resistors , inductors and capacitors . These components are combined to form a filter network and are most often placed between the full frequency-range power amplifier and the loudspeaker drivers to divide the amplifier's signal into the necessary frequency bands before being delivered to the individual drivers. Passive crossover circuits need no external power beyond the audio signal itself, but have some disadvantages: they may require larger inductors and capacitors due to power handling requirements. Unlike active crossovers which include a built-in amplifier, passive crossovers have an inherent attenuation within the passband , typically leading to a reduction in damping factor before the voice coil. [ citation needed ]
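As a rough illustration of the filter arithmetic involved (not a production design), the sketch below computes component values for a textbook first-order, 6 dB-per-octave two-way crossover into an idealized, purely resistive load; real drivers present complex, frequency-dependent impedances, so practical crossovers are usually more elaborate. The crossover frequency and impedance are arbitrary example values.

```python
import math

def first_order_crossover(fc_hz, impedance_ohms=8.0):
    """Component values for a textbook first-order (6 dB/octave) two-way
    passive crossover, assuming an idealized purely resistive load."""
    c_farads = 1.0 / (2 * math.pi * fc_hz * impedance_ohms)  # series capacitor feeding the tweeter (high-pass)
    l_henries = impedance_ohms / (2 * math.pi * fc_hz)       # series inductor feeding the woofer (low-pass)
    return c_farads, l_henries

# Example: a 2.5 kHz crossover point into a nominal 8-ohm load
c, l = first_order_crossover(2500.0)
print(f"high-pass capacitor: {c * 1e6:.2f} uF")  # about 8 uF
print(f"low-pass inductor:  {l * 1e3:.2f} mH")   # about 0.5 mH
```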
An active crossover is an electronic filter circuit that divides the signal into individual frequency bands before power amplification, thus requiring at least one power amplifier for each band. [ citation needed ] Passive filtering may also be used in this way before power amplification, but it is an uncommon solution, being less flexible than active filtering. Any technique that uses crossover filtering followed by amplification is commonly known as bi-amping, tri-amping, quad-amping, and so on, depending on the minimum number of amplifier channels. [ 32 ]
Some loudspeaker designs use a combination of passive and active crossover filtering, such as a passive crossover between the mid- and high-frequency drivers and an active crossover for the low-frequency driver. [ 33 ] [ 34 ]
Passive crossovers are commonly installed inside speaker boxes and are by far the most common type of crossover for home and low-power use. In car audio systems, passive crossovers may be in a separate box, necessary to accommodate the size of the components used. Passive crossovers may be simple for low-order filtering, or complex to allow steep slopes such as 18 or 24 dB per octave. Passive crossovers can also be designed to compensate for undesired characteristics of driver, horn, or enclosure resonances, and can be tricky to implement, due to component interaction. Passive crossovers, like the driver units that they feed, have power handling limits, have insertion losses , and change the load seen by the amplifier. The changes are matters of concern for many in the hi-fi world. [ citation needed ] When high output levels are required, active crossovers may be preferable. Active crossovers may be simple circuits that emulate the response of a passive network or may be more complex, allowing extensive audio adjustments. Some active crossovers, usually digital loudspeaker management systems, may include electronics and controls for precise alignment of phase and time between frequency bands, equalization, dynamic range compression and limiting . [ citation needed ]
Most loudspeaker systems consist of drivers mounted in an enclosure, or cabinet. The role of the enclosure is to prevent sound waves emanating from the back of a driver from interfering destructively with those from the front. The sound waves emitted from the back are 180° out of phase with those emitted forward, so without an enclosure they typically cause cancellations which significantly degrade the level and quality of sound at low frequencies.
The simplest driver mount is a flat panel ( baffle ) with the drivers mounted in holes in it. However, in this approach, sound frequencies with a wavelength longer than the baffle dimensions are canceled out because the antiphase radiation from the rear of the cone interferes with the radiation from the front. With an infinitely large panel, this interference could be entirely prevented. A sufficiently large sealed box can approach this behavior. [ 35 ] [ 36 ]
Since panels of infinite dimensions are impossible, most enclosures function by containing the rear radiation from the moving diaphragm. A sealed enclosure prevents transmission of the sound emitted from the rear of the loudspeaker by confining the sound in a rigid and airtight box. Techniques used to reduce the transmission of sound through the walls of the cabinet include thicker cabinet walls, internal bracing and lossy wall material.
However, a rigid enclosure reflects sound internally, which can then be transmitted back through the loudspeaker diaphragm—again resulting in degradation of sound quality. This can be reduced by internal absorption using absorptive materials such as glass wool , wool, or synthetic fiber batting, within the enclosure. The internal shape of the enclosure can also be designed to reduce this by reflecting sounds away from the loudspeaker diaphragm, where they may then be absorbed.
Other enclosure types alter the rear sound radiation so it can add constructively to the output from the front of the cone. Designs that do this (including bass reflex , passive radiator , transmission line , etc.) are often used to extend the effective low-frequency response and increase the low-frequency output of the driver.
To make the transition between drivers as seamless as possible, system designers have attempted to time align the drivers by moving one or more driver mounting locations forward or back so that the acoustic center of each driver is in the same vertical plane. This may also involve tilting the driver back, providing a separate enclosure mounting for each driver, or using electronic techniques to achieve the same effect. These attempts have resulted in some unusual cabinet designs.
The speaker mounting scheme (including cabinets) can also cause diffraction, resulting in peaks and dips in the frequency response. The problem is usually greatest at higher frequencies, where wavelengths are similar to, or smaller than, cabinet dimensions.
Horn loudspeakers are the oldest form of loudspeaker system. The use of horns as voice-amplifying megaphones dates at least to the 17th century, [ 37 ] and horns were used in mechanical gramophones as early as 1877. Horn loudspeakers use a shaped waveguide in front of or behind the driver to increase the directivity of the loudspeaker and to transform a small diameter, high-pressure condition at the driver cone surface to a large diameter, low-pressure condition at the mouth of the horn. This improves the acoustic—electro/mechanical impedance match between the driver and ambient air, increasing efficiency, and focusing the sound over a narrower area.
The sizes of the throat and mouth, the length of the horn, and the rate at which its cross-sectional area expands along its length must be carefully chosen to match the driver and properly provide this transforming function over a range of frequencies. [ a ] The length and cross-sectional mouth area required to create a bass or sub-bass horn dictate a horn many feet long. Folded horns can reduce the total size, but compel designers to make compromises and accept increased cost and construction complications. Some horn designs not only fold the low-frequency horn but use the walls in a room corner as an extension of the horn mouth. In the late 1940s, horns whose mouths took up much of a room wall were not unknown among hi-fi fans. Room-sized installations became much less acceptable when two or more were required.
A horn-loaded speaker can have a sensitivity as high as 110 dB SPL at 2.83 volts (1 watt at 8 ohms) at 1 meter. This is a hundredfold increase in output compared to a speaker rated at 90 dB sensitivity (given the aforementioned specifications) and is invaluable in applications where high sound levels are required or amplifier power is limited.
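The hundredfold figure follows directly from the decibel arithmetic: a 110 dB speaker is 20 dB more sensitive than a 90 dB one, and each 10 dB corresponds to a factor of ten in acoustic power, so

$10^{(110-90)/10} = 10^{2} = 100.$

Equivalently, the horn-loaded speaker needs only one hundredth of the amplifier power to reach the same sound pressure level.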
A transmission line loudspeaker is a loudspeaker enclosure design that uses an acoustic transmission line within the cabinet, compared to the simpler enclosure-based designs. Instead of reverberating in a fairly simple damped enclosure, sound from the back of the bass speaker is directed into a long (generally folded) damped pathway within the speaker enclosure, which allows greater control and efficient use of speaker energy.
Most home hi-fi loudspeakers use two wiring points to connect to the source of the signal (for example, to the audio amplifier or receiver ). To accept the wire connection, the loudspeaker enclosure may have binding posts , spring clips, or a panel-mount jack. If the wires for a pair of speakers are not connected with respect to the proper electrical polarity , [ b ] the loudspeakers are said to be out of phase or more properly out of polarity . [ 38 ] [ 39 ] Given identical signals, motion in the cone of an out of polarity loudspeaker is in the opposite direction of the others. This typically causes monophonic material in a stereo recording to be canceled out, reduced in level, and made more difficult to localize, all due to destructive interference of the sound waves. The cancellation effect is most noticeable at frequencies where the loudspeakers are separated by a quarter wavelength or less; low frequencies are affected the most. This type of miswiring error does not damage speakers, but is not optimal for listening. [ 40 ] [ 41 ]
With sound reinforcement system, PA system and instrument amplifier speaker enclosures, cables and some type of jack or connector are typically used. Lower- and mid-priced sound system and instrument speaker cabinets often use 1/4" jacks . Higher-priced and higher-powered sound system cabinets and instrument speaker cabinets often use Speakon connectors. Speakon connectors are considered to be safer for high-wattage amplifiers, because the connector is designed so that users cannot touch the live contacts.
Wireless speakers are similar to wired powered speakers , but they receive audio signals using radio frequency (RF) waves rather than over audio cables. There is an amplifier integrated in the speaker's cabinet because the RF waves alone are not enough to drive the speaker. Wireless speakers still need power, so require a nearby AC power outlet, or onboard batteries. Only the wire for the audio is eliminated.
Speaker specifications generally include:
and optionally:
To make sound, a loudspeaker is driven by modulated electric current (produced by an amplifier) that passes through a speaker coil, which creates a magnetic field around the coil. The electric current variations that pass through the speaker are thus converted to a varying magnetic field, whose interaction with the permanent magnet's field moves the voice coil and the attached diaphragm, producing air motion that follows the original signal from the amplifier.
The load that a driver presents to an amplifier consists of a complex electrical impedance —a combination of resistance and both capacitive and inductive reactance , which combines properties of the driver, its mechanical motion, the effects of crossover components (if any are in the signal path between amplifier and driver), and the effects of air loading on the driver as modified by the enclosure and its environment. Most amplifiers' output specifications are given at a specific power into an ideal resistive load; however, a loudspeaker does not have a constant impedance across its frequency range. Instead, the voice coil is inductive, the driver has mechanical resonances, the enclosure changes the driver's electrical and mechanical characteristics, and a passive crossover between the drivers and the amplifier contributes its own variations. The result is a load impedance that varies widely with frequency, and usually a varying phase relationship between voltage and current as well, also changing with frequency. Some amplifiers can cope with the variation better than others can.
Electrical models of loudspeakers are available that address these effects in detail. [ 45 ]
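As a sketch of how such models behave (the parameter values below are illustrative placeholders, not taken from the cited models or any real driver), a common simplification represents a moving-coil driver as its voice-coil resistance and inductance in series with a parallel-resonant "motional" branch that mirrors the mechanical resonance:

```python
import math

def driver_impedance(f_hz, Re=6.3, Le=0.5e-3, fs=30.0, Qms=3.0, Qes=0.45):
    """Simplified electrical-equivalent impedance of a moving-coil driver in free
    air: voice-coil resistance Re and inductance Le in series with a parallel-
    resonant 'motional' branch that mirrors the mechanical resonance at fs.
    Parameter values are illustrative placeholders, not from any real driver."""
    w, ws = 2 * math.pi * f_hz, 2 * math.pi * fs
    Res = Re * Qms / Qes                                # motional resistance at resonance
    Zmot = Res / (1 + 1j * Qms * (w / ws - ws / w))     # parallel-RLC motional branch
    return Re + 1j * w * Le + Zmot                      # total electrical impedance

for f in (20.0, 30.0, 200.0, 1000.0, 10000.0):
    print(f"{f:7.0f} Hz  |Z| = {abs(driver_impedance(f)):5.1f} ohm")
```

In this model the magnitude peaks near the resonance frequency (at roughly Re + Re·Qms/Qes) and rises again at high frequencies because of the voice-coil inductance, illustrating why a nominally fixed-impedance speaker presents anything but a constant load.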
Examples of typical loudspeaker measurement are: amplitude and phase characteristics vs. frequency; impulse response under one or more conditions (e.g. square waves, sine wave bursts, etc.); directivity vs. frequency (e.g. horizontally, vertically, spherically, etc.); harmonic and intermodulation distortion vs. sound pressure level (SPL) output, using any of several test signals; stored energy (i.e. ringing) at various frequencies; impedance vs. frequency; and small-signal vs. large-signal performance. Most of these measurements require sophisticated and often expensive equipment to perform. [ citation needed ] The sound pressure level (SPL) a loudspeaker produces is commonly expressed as decibels relative to 20 μPa ( dB SPL ).
Loudspeaker efficiency is defined as the sound power output divided by the electrical power input. Most loudspeakers are inefficient transducers; only about 1% of the electrical energy sent by an amplifier to a typical home loudspeaker is converted to acoustic energy. The remainder is converted to heat, mostly in the voice coil and magnet assembly. The main reason for this is the difficulty of achieving proper impedance matching between the acoustic impedance of the drive unit and the air it radiates into. [ c ] The efficiency of loudspeaker drivers varies with frequency as well. For instance, the output of a woofer driver decreases as the input frequency decreases because of the increasingly poor impedance match between air and the driver.
Driver ratings based on the SPL for a given input are called sensitivity ratings and are notionally similar to efficiency. Sensitivity is usually expressed as the SPL ( dB SPL by common usage meaning dB relative to 20 μPa ) at 1 W electrical input, measured at 1 meter, [ d ] often at a single frequency. The voltage used is often 2.83 V RMS , which results in 1 watt into a nominal 8 Ω speaker impedance. Measurements taken with this reference are typically quoted as dB SPL with 2.83 V @ 1 m. [ citation needed ]
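The 2.83 V figure comes from the usual power relation for a resistive load; assuming an exactly 8 Ω resistance, which a real speaker only approximates,

$P = \dfrac{V^2}{R} = \dfrac{(2.83\ \mathrm{V})^2}{8\ \Omega} \approx 1.0\ \mathrm{W}.$

Into a nominal 4 Ω load the same 2.83 V corresponds to about 2 W, which is why sensitivity figures quoted at 2.83 V can flatter low-impedance speakers by roughly 3 dB compared with true 1 W figures.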
The sound pressure output is measured at (or mathematically scaled to be equivalent to a measurement taken at) one meter from the loudspeaker and on-axis (directly in front of it), under the condition that the loudspeaker is radiating into an infinitely large space and mounted on an infinite baffle . Clearly then, sensitivity does not correlate precisely with efficiency, as it also depends on the directivity of the driver being tested and the acoustic environment in front of the actual loudspeaker. For example, a cheerleader's horn produces more sound output in the direction it is pointed by concentrating sound waves from the cheerleader in one direction, thus focusing them. The horn also improves impedance matching between the voice and the air, which produces more acoustic power for a given speaker power. In some cases, improved impedance matching (via careful enclosure design) lets the speaker produce more acoustic power.
Typical home loudspeakers have sensitivities of about 85 to 95 dB SPL for 1 W @ 1 m—an efficiency of 0.5–4%. Sound reinforcement and public address loudspeakers have sensitivities of perhaps 95 to 102 dB SPL for 1 W @ 1 m—an efficiency of 4–10%. Rock concert, stadium PA, marine hailing, etc. speakers generally have higher sensitivities of 103 to 110 dB SPL for 1 W @ 1 m—an efficiency of 10–20%. [ citation needed ]
Since sensitivity and power handling are largely independent properties, a driver with a higher maximum power rating cannot necessarily be driven to louder levels than a lower-rated one. In the example that follows, assume (for simplicity) that the drivers being compared have the same electrical impedance, are operated at the same frequency within both driver's respective passbands, and that power compression and distortion are insignificant. A speaker 3 dB more sensitive than another produces very nearly double the sound power (is 3 dB louder) for the same electrical power input. Thus, a 100 W driver (A) rated at 92 dB SPL for 1 W @ 1 m sensitivity puts out twice as much acoustic power as a 200 W driver (B) rated at 89 dB SPL for 1 W @ 1 m when both are driven with 100 W of electrical power. In this example, when driven at 100 W, speaker A produces the same SPL, or loudness as speaker B would produce with 200 W input. Thus, a 3 dB increase in the sensitivity of the speaker means that it needs half the amplifier power to achieve a given SPL. This translates into a smaller, less complex power amplifier—and often, to reduced overall system cost.
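The comparison can be restated as a short calculation. Under the same simplifying assumptions given above (no power compression or distortion, identical impedances), the SPL at one metre is the 1 W/1 m sensitivity plus ten times the base-10 logarithm of the input power in watts; the sketch below reproduces the figures for the hypothetical drivers A and B:

```python
import math

def spl_at_power(sensitivity_db_1w_1m, power_watts):
    """SPL at 1 m for a given electrical input power, ignoring power
    compression, distortion and thermal limits."""
    return sensitivity_db_1w_1m + 10 * math.log10(power_watts)

# Driver A: 92 dB/1 W/1 m, 100 W rating; Driver B: 89 dB/1 W/1 m, 200 W rating
print(spl_at_power(92, 100))   # 112 dB SPL
print(spl_at_power(89, 100))   # 109 dB SPL -> 3 dB quieter at the same input
print(spl_at_power(89, 200))   # 112 dB SPL -> needs twice the power to match A
```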
It is typically not possible to combine high efficiency (especially at low frequencies) with compact enclosure size and adequate low-frequency response. One can, for the most part, choose only two of the three parameters when designing a speaker system. So, for example, if extended low-frequency performance and small box size are important, one must accept low efficiency. This rule of thumb is sometimes called Hofmann's Iron Law (after J.A. Hofmann , the H in KLH ). [ 46 ] [ 47 ]
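A commonly quoted quantitative form of this rule (an approximation for sealed-box systems, not taken from the sources cited here) is that the achievable reference efficiency scales roughly with the cabinet volume times the cube of the low-frequency cutoff, $\eta_0 \propto V_B f_3^{3}$. On that basis, halving the cabinet volume at the same cutoff costs about 3 dB of efficiency, while halving the cutoff frequency in the same cabinet costs about 9 dB, which is why small speakers with deep bass extension are invariably inefficient.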
The interaction of a loudspeaker system with its environment is complex and is largely out of the loudspeaker designer's control. Most listening rooms present a more or less reflective environment, depending on size, shape, volume, and furnishings. This means the sound reaching a listener's ears consists not only of sound directly from the speaker system, but also the same sound delayed by traveling to and from (and being modified by) one or more surfaces. These reflected sound waves, when added to the direct sound, cause cancellation and addition at assorted frequencies (e.g. from resonant room modes ), thus changing the timbre and character of the sound at the listener's ears. The human brain is sensitive to small variations in reflected sound, and this is part of the reason why a loudspeaker system sounds different at different listening positions or in different rooms.
A significant factor in the sound of a loudspeaker system is the amount of absorption and diffusion present in the environment. Clapping one's hands in a typical empty room, without draperies or carpet, produces a zippy, fluttery echo due to a lack of absorption and diffusion.
In a typical rectangular listening room, the hard, parallel surfaces of the walls, floor and ceiling cause primary acoustic resonance nodes in each of the three dimensions: left–right, up–down and forward–backward. [ 48 ] Furthermore, there are more complex resonance modes involving up to all six boundary surfaces combining to create standing waves . This is called speaker boundary interference response (SBIR). [ 49 ] Low frequencies excite these modes the most, since long wavelengths are not much affected by furniture compositions or placement. The mode spacing is critical, especially in small and medium-sized rooms like recording studios, home theaters and broadcast studios. The proximity of the loudspeakers to room boundaries affects how strongly the resonances are excited as well as affecting the relative strength at each frequency. The location of the listener is critical, too, as a position near a boundary can have a great effect on the perceived balance of frequencies. This is because standing-wave patterns are most easily heard in these locations and at lower frequencies, below the Schroeder frequency , depending on room size. [ citation needed ]
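For an idealized rigid-walled rectangular room, the modal frequencies can be estimated from the room dimensions alone. The sketch below uses the standard textbook formula with hypothetical dimensions (a 5.0 m × 4.0 m × 2.5 m room), so the printed values are illustrative rather than descriptive of any particular room:

```python
import itertools

def room_modes(lx, ly, lz, c=343.0, max_order=2):
    """Resonance frequencies of an idealized rigid-walled rectangular room:
    f = (c/2) * sqrt((nx/lx)^2 + (ny/ly)^2 + (nz/lz)^2)."""
    modes = []
    for nx, ny, nz in itertools.product(range(max_order + 1), repeat=3):
        if nx == ny == nz == 0:
            continue  # skip the trivial (0,0,0) case
        f = (c / 2.0) * ((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2) ** 0.5
        modes.append((round(f, 1), (nx, ny, nz)))
    return sorted(modes)

# Hypothetical 5.0 m x 4.0 m x 2.5 m listening room: lowest few modes
for freq, mode in room_modes(5.0, 4.0, 2.5)[:6]:
    print(f"{freq:6.1f} Hz  mode {mode}")
```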
Acousticians, in studying the radiation of sound sources have developed some concepts important to understanding how loudspeakers are perceived. The simplest possible radiating source is a point source . An ideal point source is an infinitesimally small point radiating sound. It may be easier to imagine a tiny pulsating sphere, uniformly increasing and decreasing in diameter, sending out sound waves in all directions equally.
Any object radiating sound, including a loudspeaker system, can be thought of as being composed of combinations of such simple point sources. The radiation pattern of a combination of point sources is not the same as for a single source but depends on the distance and orientation between the sources, the position relative to them from which the listener hears the combination, and the frequency of the sound involved. Using mathematics, some simple combinations of sources are easily solved.
One simple combination is two simple sources separated by a distance and vibrating out of phase, one miniature sphere expanding while the other is contracting. The pair is known as a dipole , and the radiation of this combination is similar to that of a very small dynamic loudspeaker operating without a baffle. The directivity of a dipole is a figure-8 shape with maximum output along a vector that connects the two sources and minimum output to the sides when the observing point is equidistant from the two sources – the sum of the positive and negative waves cancel each other. While most drivers are dipoles, depending on the enclosure to which they are attached, they may radiate as point sources or dipoles. If mounted on a finite baffle, and these out-of-phase waves are allowed to interact, peaks and nulls in the frequency response result. When the rear radiation is absorbed or trapped in a box, the diaphragm becomes an approximate point-source radiator. Bipolar speakers, made by mounting in-phase drivers (both moving out of or into the box in unison) on opposite sides of a box, are a method of approaching omnidirectional radiation patterns.
In real life, individual drivers are complex 3D shapes such as cones and domes, and they are placed on a baffle for various reasons. Deriving a mathematical expression for the directivity of a complex shape, based on modeling combinations of point sources, is usually not possible, but in the far field, the directivity of a loudspeaker with a circular diaphragm is close to that of a flat circular piston, so it can be used as an illustrative simplification for discussion.
Far-field directivity of a flat circular piston in an infinite baffle is [ citation needed ]
$p(\theta) = p_0 \, \dfrac{2\,J_1(k_a \sin\theta)}{k_a \sin\theta}$

where $k_a = \dfrac{2\pi a}{\lambda}$, $p_0$ is the pressure on axis, $a$ is the piston radius, $\lambda$ is the wavelength (i.e. $\lambda = \dfrac{c}{f} = \dfrac{\text{speed of sound}}{\text{frequency}}$ ), $\theta$ is the angle off axis and $J_1$ is the Bessel function of the first kind.
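The expression is straightforward to evaluate numerically. The sketch below assumes a hypothetical piston of 10 cm radius and uses SciPy's Bessel function; the on-axis case is handled explicitly because the ratio evaluates to 0/0 there:

```python
import numpy as np
from scipy.special import j1

def piston_response(theta_rad, piston_radius_m, frequency_hz, c=343.0):
    """Relative far-field pressure of a flat circular piston in an infinite
    baffle: 2*J1(ka*sin(theta)) / (ka*sin(theta)), equal to 1.0 on axis."""
    ka = 2 * np.pi * frequency_hz * piston_radius_m / c   # k*a = 2*pi*a/lambda
    x = ka * np.sin(np.asarray(theta_rad, dtype=float))
    safe_x = np.where(np.abs(x) < 1e-9, 1.0, x)           # avoid dividing by zero on axis
    return np.where(np.abs(x) < 1e-9, 1.0, 2 * j1(safe_x) / safe_x)

# Hypothetical 10 cm radius cone, measured 45 degrees off axis
for f in (1000.0, 4000.0):
    r = float(piston_response(np.radians(45.0), 0.10, f))
    print(f"{f:6.0f} Hz: {20 * np.log10(abs(r)):+.1f} dB relative to on-axis")
```

The off-axis level falls far more steeply at 4 kHz than at 1 kHz, illustrating the narrowing of directivity as the wavelength becomes comparable to the piston diameter.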
A planar source such as this radiates sound uniformly for wavelengths longer than the dimensions of the planar source, and as frequency increases, the sound from such a source focuses into an increasingly narrower angle. The smaller the driver, the higher the frequency where this narrowing of directivity occurs. Even if the diaphragm is not perfectly circular, this effect occurs such that larger sources are more directive. Several loudspeaker designs approximate this behavior. Most are electrostatic or planar magnetic designs.
Various manufacturers use different driver mounting arrangements to create a specific type of sound field in the space for which they are designed. The resulting radiation patterns may be intended to more closely simulate the way sound is produced by real instruments, or simply create a controlled energy distribution from the input signal. An example of the first is a room corner system with many small drivers on the surface of a 1/8 sphere. A system design of this type was patented and produced commercially as the Bose 2201 .
Directivity is an important issue because it affects the frequency balance of sound a listener hears, and also the interaction of the speaker system with the room and its contents. A very directive (sometimes termed beamy ) speaker, one whose high-frequency output is concentrated along the axis perpendicular to the speaker face, may result in a reverberant field lacking in high frequencies, giving the impression the speaker is deficient in treble even though it measures well on axis (e.g. flat across the entire frequency range). Speakers with very wide, or rapidly increasing directivity at high frequencies, can give the impression that there is too much treble (if the listener is on axis) or too little (if the listener is off axis). This is part of the reason why on-axis frequency response measurement is not a complete characterization of the sound of a given loudspeaker.
While dynamic cone speakers remain the most popular choice, many other speaker technologies exist. [ 1 ] : 705–714
The original loudspeaker design was the moving iron. Unlike the newer dynamic (moving coil) design, a moving-iron speaker uses a stationary coil to vibrate a magnetized piece of metal (called the iron, reed, or armature). The metal is either attached to the diaphragm or is the diaphragm itself. This design originally appeared in the early telephone.
Moving iron drivers are inefficient and can only produce a small band of sound. They require large magnets and coils to increase force. [ 51 ]
Balanced armature drivers (a type of moving iron driver) use an armature that moves like a see-saw or diving board. Since they are not damped, they are highly efficient, but they also produce strong resonances. They are still used today for high-end earphones and hearing aids, where small size and high efficiency are important. [ 52 ]
Piezoelectric speakers are frequently used as beepers in watches and other electronic devices, and are sometimes used as tweeters in less-expensive speaker systems, such as computer speakers and portable radios. Piezoelectric speakers have several advantages over conventional loudspeakers: they are resistant to overloads that would normally destroy most high-frequency drivers, and they can be used without a crossover due to their electrical properties. There are also disadvantages: some amplifiers can oscillate when driving capacitive loads like most piezoelectrics, which results in distortion or damage to the amplifier. Additionally, their frequency response, in most cases, is inferior to that of other technologies. This is why they are generally used in single-frequency (beeper) or non-critical applications.
Piezoelectric speakers can have extended high-frequency output, and this is useful in some specialized circumstances; for instance, sonar applications in which piezoelectric variants are used as both output devices (generating underwater sound) and as input devices (acting as the sensing components of underwater microphones ). They have advantages in these applications, not the least of which is simple and solid-state construction that resists seawater better than a ribbon or cone-based device would.
In 2013, Kyocera introduced ultra-thin medium-size piezoelectric film speakers, only one millimeter thick and seven grams in weight, for its 55" OLED televisions, with the expectation that they would also be used in PCs and tablets. Large and small sizes are also made, all of which can produce broadly similar sound quality and volume across a 180-degree listening angle. The highly responsive speaker material provides better clarity than traditional TV speakers. [ 53 ]
Instead of a voice coil driving a speaker cone, a magnetostatic speaker uses an array of metal strips bonded to a large film membrane. The magnetic field produced by signal current flowing through the strips interacts with the field of permanent bar magnets mounted behind them. The force produced moves the membrane and so the air in front of it. Typically, these designs are less efficient than conventional moving-coil speakers.
Magnetostrictive transducers, based on magnetostriction , have been predominantly used as sonar ultrasonic sound wave radiators, but their use has spread also to audio speaker systems. Magnetostrictive speaker drivers have some special advantages: they can provide greater force (with smaller excursions) than other technologies; low excursion can avoid distortions from large excursion as in other designs; the magnetizing coil is stationary and therefore more easily cooled; they are robust because delicate suspensions and voice coils are not required. Magnetostrictive speaker modules have been produced by Fostex [ 54 ] [ 55 ] [ 56 ] and FeONIC [ 57 ] [ 58 ] [ 59 ] [ 60 ] and subwoofer drivers have also been produced. [ 61 ]
Electrostatic loudspeakers use a high-voltage electric field (rather than a magnetic field) to drive a thin statically charged membrane. Because they are driven over the entire membrane surface rather than from a small voice coil, they ordinarily provide a more linear and lower-distortion motion than dynamic drivers. They also have a relatively narrow dispersion pattern that can make for precise sound-field positioning. However, their optimum listening area is small and they are not very efficient speakers. They have the disadvantage that the diaphragm excursion is severely limited because of practical construction limitations—the further apart the stators are positioned, the higher the voltage must be to achieve acceptable efficiency. This increases the tendency for electrical arcs as well as increasing the speaker's attraction of dust particles. Arcing remains a potential problem with current technologies, especially when the panels are allowed to collect dust or dirt and are driven with high signal levels.
Electrostatics are inherently dipole radiators and due to the thin flexible membrane are less suited for use in enclosures to reduce low-frequency cancellation as with common cone drivers. Due to this and the low excursion capability, full-range electrostatic loudspeakers are large by nature, and the bass rolls off at a frequency corresponding to a quarter wavelength of the narrowest panel dimension. To reduce the size of commercial products, they are sometimes used as a high-frequency driver in combination with a conventional dynamic driver that handles the bass frequencies effectively.
Electrostatics are usually driven through a step-up transformer that multiplies the voltage swings produced by the power amplifier. This transformer also multiplies the capacitive load that is inherent in electrostatic transducers, which means the effective impedance presented to the power amplifiers varies widely by frequency. A speaker that is nominally 8 ohms may actually present a load of 1 ohm at higher frequencies, which is challenging to some amplifier designs.
A ribbon speaker consists of a thin metal-film ribbon suspended in a magnetic field. The electrical signal is applied to the ribbon, which moves with it to create the sound. The advantage of a ribbon driver is that the ribbon has very little mass ; thus, it can accelerate very quickly, yielding a very good high-frequency response. Ribbon loudspeakers are often very fragile. Most ribbon tweeters emit sound in a dipole pattern. A few have backings that limit the dipole radiation pattern. Above and below the ends of the more or less rectangular ribbon, there is less audible output due to phase cancellation, but the precise amount of directivity depends on the ribbon length. Ribbon designs generally require exceptionally powerful magnets, which makes them costly to manufacture. Ribbons have a very low resistance that most amplifiers cannot drive directly. As a result, a step down transformer is typically used to increase the current through the ribbon. The amplifier sees a load that is the ribbon's resistance times the transformer turns ratio squared. The transformer must be carefully designed so that its frequency response and parasitic losses do not degrade the sound, further increasing cost and complication relative to conventional designs.
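As a hypothetical worked example of the reflected load (the figures are illustrative, not taken from any particular product): a ribbon of 0.2 Ω behind a step-down transformer with a 6:1 turns ratio presents the amplifier with roughly

$Z_{\text{seen}} = n^{2} Z_{\text{ribbon}} = 6^{2} \times 0.2\ \Omega \approx 7.2\ \Omega,$

a load most amplifiers can drive comfortably.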
Planar magnetic speakers (having printed or embedded conductors on a flat diaphragm) are sometimes described as ribbons, but are not truly ribbon speakers. The term planar is generally reserved for speakers with roughly rectangular flat surfaces that radiate in a bipolar (i.e. front and back) manner. Planar magnetic speakers consist of a flexible membrane with a voice coil printed or mounted on it. The current flowing through the coil interacts with the magnetic field of carefully placed magnets on either side of the diaphragm, causing the membrane to vibrate more or less uniformly and without much bending or wrinkling. The driving force covers a large percentage of the membrane surface and reduces resonance problems inherent in coil-driven flat diaphragms.
Bending wave transducers use a diaphragm that is intentionally flexible. The rigidity of the material increases from the center to the outside. Short wavelengths radiate primarily from the inner area, while longer waves reach the edge of the speaker. To prevent reflections from the outside back into the center, long waves are absorbed by a surrounding damper. Such transducers can cover a wide frequency range and have been promoted as being close to an ideal point sound source. [ 62 ] This uncommon approach is being taken by only a very few manufacturers, in very different arrangements.
The Ohm Walsh loudspeakers use a unique driver designed by Lincoln Walsh , who had been a radar development engineer in WWII. He became interested in audio equipment design and his last project was a unique, one-way speaker using a single driver. The cone faced down into a sealed, airtight enclosure. Rather than move back and forth as conventional speakers do, the cone rippled and created sound in a manner known in RF electronics as a "transmission line". The new speaker created a cylindrical sound field. Lincoln Walsh died before his speaker was released to the public. The Ohm Acoustics firm has produced several loudspeaker models using the Walsh driver design since then. German Physiks, an audio equipment firm in Germany, also produces speakers using this approach.
The German firm Manger has designed and produced a bending wave driver that at first glance appears conventional. In fact, the round panel attached to the voice coil bends in a carefully controlled way to produce full-range sound. [ 63 ] Josef W. Manger was awarded the Rudolf-Diesel-Medaille by the German institute of inventions for extraordinary developments and inventions.
There have been many attempts to reduce the size of speaker systems, or alternatively to make them less obvious. One such attempt was the development of exciter transducer coils mounted to flat panels to act as sound sources, most accurately called exciter/panel drivers. [ 64 ] These can then be made in a neutral color and hung on walls where they are less noticeable than many speakers, or can be deliberately painted with patterns, in which case they can function decoratively. There are two related problems with flat panel techniques: first, a flat panel is necessarily more flexible than a cone of the same material, and therefore even less able to move as a single unit; and second, resonances in the panel are difficult to control, leading to considerable distortion. Some progress has been made using lightweight, rigid materials such as Styrofoam , and there have been several flat panel systems commercially produced in recent years. [ 65 ]
Oskar Heil invented the air motion transducer in the 1960s. In this approach, a pleated diaphragm is mounted in a magnetic field and forced to close and open under control of a music signal. Air is forced from between the pleats in accordance with the imposed signal, generating sound. The drivers are less fragile than ribbons and considerably more efficient (and able to produce higher absolute output levels) than ribbon, electrostatic, or planar magnetic tweeter designs. ESS, a California manufacturer, licensed the design, employed Heil, and produced a range of speaker systems using his tweeters during the 1970s and 1980s. Lafayette Radio , a large US retail store chain, also sold speaker systems using such tweeters for a time. There are several manufacturers of these drivers (at least two in Germany—one of which produces a range of high-end professional speakers using tweeters and mid-range drivers based on the technology) and the drivers are increasingly used in professional audio. Martin Logan produces several AMT speakers in the US and GoldenEar Technologies incorporates them in its entire speaker line.
In 2013, a research team introduced a transparent ionic conduction speaker consisting of a layer of transparent rubber sandwiched between two sheets of transparent conductive gel; a high voltage applied across the sandwich actuates it to reproduce sound with good quality. The speaker is suitable for robotics, mobile computing and adaptive optics applications. [ 66 ]
Digital speakers have been the subject of experiments performed by Bell Labs as far back as the 1920s. [ 67 ] The design is simple; each bit controls a driver, which is either fully 'on' or 'off'. Problems with this design have led manufacturers to abandon it as impractical for the present. First, for a reasonable number of bits (required for adequate sound reproduction quality), the physical size of a speaker system becomes very large. Secondly, because no reconstruction filter can be applied to the acoustic output of the digital-to-analog conversion performed by the drivers themselves, the effect of aliasing is unavoidable: the audio output is reflected at equal amplitude in the frequency domain, on the other side of the Nyquist limit (half the sampling frequency), causing an unacceptably high level of ultrasonics to accompany the desired output. No workable scheme has been found to adequately deal with this.
Plasma arc loudspeakers use electrical plasma as a radiating element. Since plasma has minimal mass, but is charged and therefore can be manipulated by an electric field , the result is a very linear output at frequencies far higher than the audible range. Problems of maintenance and reliability for this approach tend to make it unsuitable for mass market use. In 1978 Alan E. Hill of the Air Force Weapons Laboratory in Albuquerque, NM, designed the Plasmatronics Hill Type I, a tweeter whose plasma was generated from helium gas. [ 68 ] This avoided the ozone and NOx [ 69 ] produced by RF decomposition of air in an earlier generation of plasma tweeters made by the pioneering DuKane Corporation, who produced the Ionovac (marketed as the Ionofane in the UK) during the 1950s. [ 70 ]
A less expensive variation on this theme is the use of a flame for the driver, as flames contain ionized (electrically charged) gases. [ 71 ]
In 2008, researchers of Tsinghua University demonstrated a thermoacoustic loudspeaker (or thermophone ) of carbon nanotube thin film, [ 72 ] whose working mechanism is a thermoacoustic effect. Sound frequency electric currents are used to periodically heat the CNT and thus result in sound generation in the surrounding air. The CNT thin film loudspeaker is transparent, stretchable and flexible.
In 2013, researchers of Tsinghua University further present a thermoacoustic earphone of carbon nanotube thin yarn and a thermoacoustic surface-mounted device. [ 73 ] They are both fully integrated devices and compatible with Si-based semiconducting technology.
A rotary woofer is essentially a fan with blades that constantly change their pitch, allowing them to easily push the air back and forth. Rotary woofers are able to efficiently reproduce subsonic frequencies, which are difficult or impossible to achieve on a traditional speaker with a diaphragm. They are often employed in movie theaters to recreate rumbling bass effects, such as explosions. [ 74 ] [ 75 ] | https://en.wikipedia.org/wiki/Loudspeaker
A loudspeaker enclosure or loudspeaker cabinet is an enclosure (often rectangular box-shaped) in which speaker drivers (e.g., woofers and tweeters ) and associated electronic hardware, such as crossover circuits and, in some cases, power amplifiers , are mounted. Enclosures may range in design from simple, homemade DIY rectangular particleboard boxes to very complex, expensive computer-designed hi-fi cabinets that incorporate composite materials, internal baffles, horns, bass reflex ports and acoustic insulation. Loudspeaker enclosures range in size from small "bookshelf" speaker cabinets with 4-inch (10 cm) woofers and small tweeters, designed for listening to music with a hi-fi system in a private home, to huge, heavy subwoofer enclosures with multiple 18-inch (46 cm) or even 21-inch (53 cm) speakers, designed for use in stadium sound reinforcement systems at rock music concerts.
The primary role of an enclosure is to prevent sound waves generated by the rearward-facing surface of the diaphragm of an open speaker driver from interacting with sound waves generated at the front of the speaker driver. Because the forward- and rearward-generated sounds are out of phase with each other, any interaction between the two in the listening space creates a distortion of the original signal as it was intended to be reproduced. As such, a loudspeaker cannot be used without installing it in a baffle of some type, such as a closed box, vented box, open baffle, or a wall or ceiling (infinite baffle). [ 1 ] [ 2 ]
An enclosure also plays a role in managing vibration induced by the driver frame and moving airmass within the enclosure, as well as heat generated by driver voice coils and amplifiers (especially where woofers and subwoofers are concerned). The base, sometimes considered part of the enclosure, may include specially designed feet to decouple the speaker from the floor. Enclosures designed for use in PA systems , sound reinforcement systems and for use by electric musical instrument players (e.g., bass amp cabinets ) have a number of features to make them easier to transport, such as carrying handles on the top or sides, metal or plastic corner protectors, and metal grilles to protect the speakers. Speaker enclosures designed for use in a home or recording studio typically do not have handles or corner protectors, although they do still usually have a cloth or mesh cover to protect the woofer and tweeter. These speaker grilles are a metallic or cloth mesh that protects the speaker by forming a cover over the speaker's cone while allowing sound to pass through undistorted. [ 3 ]
Speaker enclosures are used in homes in stereo systems, home cinema systems, televisions , boom boxes and many other audio appliances. Small speaker enclosures are used in car stereo systems. Speaker cabinets are key components of a number of commercial applications, including sound reinforcement systems , movie theatre sound systems and recording studios . Electric musical instruments invented in the 20th century, such as the electric guitar , electric bass and synthesizer , among others, are amplified using instrument amplifiers and speaker cabinets (e.g., guitar amplifier speaker cabinets).
Early on, radio loudspeakers consisted of horns , often sold separately from the radio itself (typically a small wooden box containing the radio's electronic circuits), so they were not usually housed in an enclosure. [ 4 ] When paper cone loudspeaker drivers were introduced in the mid 1920s, radio cabinets began to be made larger to enclose both the electronics and the loudspeaker. [ 5 ] These cabinets were made largely for the sake of appearance, with the loudspeaker simply mounted behind a round hole in the cabinet. It was observed that the enclosure had a strong effect on the bass response of the speaker. Since the rear of the loudspeaker radiates sound out of phase from the front, there can be constructive and destructive interference for loudspeakers without enclosures, and below frequencies related to the baffle dimensions in open-baffled loudspeakers (see § Background , below) . This results in a loss of bass and in comb filtering , i.e., peaks and dips in the power response regardless of the signal that is meant to be reproduced. The resulting response is akin to two loudspeakers playing the same signal but at different distances from the listener, which is like adding a delayed version of the signal to itself, whereby both constructive and destructive interference occurs.
Before the 1950s many manufacturers did not fully enclose their loudspeaker cabinets; the back of the cabinet was typically left open. This was done for several reasons, not least because electronics (at that time tube equipment) could be placed inside and cooled by convection in the open enclosure.
Most of the enclosure types discussed in this article were invented either to wall off the out of phase sound from one side of the driver, or to modify it so that it could be used to enhance the sound produced from the other side.
In some respects, the ideal mounting for a low-frequency loudspeaker driver would be a rigid flat panel of infinite size with infinite space behind it. This would entirely prevent the rear sound waves from interfering (i.e., comb filter cancellations) with the sound waves from the front. An open baffle loudspeaker is an approximation of this, since the driver is mounted on a panel, with dimensions comparable to the longest wavelength to be reproduced. In either case, the driver would need a relatively stiff suspension to provide the restoring force which might have been provided at low frequencies by a smaller sealed or ported enclosure, so few drivers are suitable for this kind of mounting.
The forward- and rearward-generated sounds of a speaker driver appear out of phase from each other because they are generated through the opposite motion of the diaphragm and because they travel different paths before converging at the listener's position. A speaker driver mounted on a finite baffle will display a physical phenomenon known as interference , which can result in perceivable frequency-dependent sound attenuation. This phenomenon is particularly noticeable at low frequencies where the wavelengths are large enough that interference will affect the entire listening area.
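As an illustrative sketch of the interference described above (a simple two-path model with an inverted rear wave; the 0.5 m path difference is an assumed value, not taken from any particular design):

```python
# Two-path model of front/rear interference around a finite baffle.
# The rear wave is assumed to be inverted (out of phase) relative to the
# front wave; both recombine at the listener after unequal path lengths.
C = 343.0  # approximate speed of sound in air, m/s

def reinforcement_freqs(path_diff_m, count=3):
    # With an inverted rear wave, reinforcement occurs where the path
    # difference equals an odd number of half wavelengths.
    return [(2 * k + 1) * C / (2 * path_diff_m) for k in range(count)]

def cancellation_freqs(path_diff_m, count=3):
    # Cancellation occurs where the path difference equals a whole number
    # of wavelengths; as frequency falls toward zero the two waves also
    # cancel, which is the deep-bass roll-off of an unbaffled driver.
    return [k * C / path_diff_m for k in range(1, count + 1)]

d = 0.5  # assumed extra path length around the baffle, in metres
print("reinforcement near:", [round(f) for f in reinforcement_freqs(d)], "Hz")
print("cancellation near: ", [round(f) for f in cancellation_freqs(d)], "Hz")
```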
Since infinite baffles are impractical and finite baffles tend to suffer poor response as wavelengths approach the dimensions of the baffle (i.e. at lower frequencies), most loudspeaker cabinets use some sort of structure (usually a box) to contain the out of phase sound energy. The box is typically made of wood, wood composite, or more recently plastic, for reasons of ease of construction and appearance. Stone, concrete, plaster, and even building structures have also been used.
Enclosures can have a significant effect beyond what was intended, with panel resonances , diffraction from cabinet edges [ 6 ] [ 7 ] and standing wave energy from internal reflection/reinforcement modes being among the possible problems. Bothersome resonances can be reduced by increasing enclosure mass or rigidity, by increasing the damping of enclosure walls or wall/surface treatment combinations, by adding stiff cross bracing, or by adding internal absorption. Wharfedale , in some designs, reduced panel resonance by using two wooden cabinets (one inside the other) with the space between filled with sand . Home experimenters have even designed speakers built from concrete , granite [ 8 ] and other exotic materials for similar reasons.
Many diffraction problems, above the lower frequencies, can be alleviated by the shape of the enclosure, such as by avoiding sharp corners on the front of the enclosure. A comprehensive study of the effect of cabinet configuration on the sound distribution pattern and overall response-frequency characteristics of loudspeakers was undertaken by Harry F. Olson . [ 6 ] [ 7 ] It involved a wide variety of different enclosure shapes, and it showed that curved loudspeaker baffles reduce some response deviations due to sound wave diffraction. It was discovered later that careful placement of a speaker on a sharp-edged baffle can reduce diffraction-caused response problems.
Sometimes the differences in phase response at frequencies shared by different drivers can be addressed by adjusting the vertical location of the smaller drivers (usually backwards), or by leaning or stepping the front baffle, so that the wavefront from all drivers is coherent at and around the crossover frequencies in the speaker's normal sound field. The acoustic center of the driver dictates the amount of rearward offset needed to time-align the drivers.
Enclosures used for woofers and subwoofers can be adequately modeled in the low-frequency region using acoustics and the lumped component models. [ 9 ] Electrical filter theory has been used with considerable success for some enclosure types. For the purposes of this type of analysis, each enclosure must be classified according to a specific topology. The designer must balance low bass extension, linear frequency response, efficiency, distortion, loudness and enclosure size, while simultaneously addressing issues higher in the audible frequency range such as diffraction from enclosure edges, [ 6 ] the baffle step effect when wavelengths approach enclosure dimensions, crossovers, and driver blending.
The loudspeaker driver's moving mass and compliance (slackness or reciprocal stiffness of the suspension) determine the driver's resonance frequency ( F s ). In combination with the damping properties of the system (both mechanical and electrical) all these factors affect the low-frequency response of sealed-box systems. The response of closed-box loudspeaker systems has been extensively studied by Small [ 10 ] [ 11 ] and Benson, [ 12 ] amongst many others. Output falls below the system's resonance frequency ( F c ), defined as the frequency of peak impedance. In a closed-box loudspeaker, the air inside the box acts as a spring, returning the cone to the zero position in the absence of a signal. A significant increase in the effective volume of a closed-box loudspeaker can be achieved by a filling of fibrous material, typically fiberglass, bonded acetate fiber (BAF) or long-fiber wool. The effective volume increase can be as much as 40% and is due primarily to a reduction in the speed of sound propagation through the filler material as compared to air. [ 13 ] The enclosure or driver must have a small leak so that the internal and external pressures can equalise over time, to compensate for changes in barometric pressure or altitude; the porous nature of paper cones, or an imperfectly sealed enclosure, is normally sufficient to provide this slow pressure equalisation.
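A minimal sketch of the standard closed-box relationship between box size and system resonance, using textbook Thiele/Small relations; the driver parameters below are invented for illustration:

```python
import math

# Closed-box system resonance rises as the box shrinks:
#   Fc = Fs * sqrt(1 + Vas / Vb)
# Fs = driver free-air resonance (Hz), Vas = driver equivalent compliance
# volume, Vb = net internal box volume (same units as Vas).

def closed_box_resonance(fs_hz, vas_litres, vb_litres):
    return fs_hz * math.sqrt(1.0 + vas_litres / vb_litres)

fs, vas = 28.0, 60.0            # hypothetical woofer: Fs = 28 Hz, Vas = 60 L
for vb in (20.0, 40.0, 60.0):   # candidate box volumes in litres
    print(f"Vb = {vb:4.0f} L -> Fc ~ {closed_box_resonance(fs, vas, vb):4.1f} Hz")

# Fibrous filling can make the box act up to ~40% larger:
vb_stuffed = 20.0 * 1.4
print(f"20 L stuffed box acts like ~{vb_stuffed:.0f} L -> "
      f"Fc ~ {closed_box_resonance(fs, vas, vb_stuffed):4.1f} Hz")
```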
A variation on the open baffle approach is to mount the loudspeaker driver in a very large sealed enclosure, providing minimal air spring restoring force to the cone. This minimizes the change in the driver's resonance frequency caused by the enclosure. The low-frequency response of infinite baffle loudspeaker systems has been extensively analysed by Benson. [ 12 ] Some infinite baffle enclosures have used an adjoining room, basement, or a closet or attic. This is often the case with exotic rotary woofer installations, as they are intended to go to frequencies lower than 20 Hz and displace large volumes of air. Infinite baffle ( IB ) is also used as a generic term for sealed enclosures of any size, the name being used because of the ability of a sealed enclosure to prevent any interaction between the forward and rear radiation of a driver at low frequencies.
In conceptual terms an infinite baffle is a flat baffle that extends out to infinity – the so-called endless plate . A genuine infinite baffle cannot be constructed but a very large baffle such as the wall of a room can be considered to be a practical equivalent. A genuine infinite-baffle loudspeaker has an infinite volume (a half-space) on each side of the baffle and has no baffle step. However, the term infinite-baffle loudspeaker can fairly be applied to any loudspeaker that behaves (or closely approximates) in all respects as if the drive unit is mounted in a genuine infinite baffle. The term is often and erroneously used of sealed enclosures, which cannot exhibit infinite-baffle behavior unless their internal volume is much greater than the drive unit's Thiele/Small equivalent compliance volume ( Vas ) and the front baffle dimensions are ideally several wavelengths of the lowest output frequency. It is important to distinguish between genuine infinite-baffle topology and so-called infinite-baffle or IB enclosures which may not meet genuine infinite-baffle criteria. The distinction becomes important when interpreting textbook usage of the term (see Beranek (1954, p. 118) [ 14 ] and Watkinson (2004) [ 15 ] ).
Acoustic suspension or air suspension is a variation of the closed-box enclosure, using a box size that exploits the almost linear air spring, resulting in a −3 dB low-frequency cut-off point of 30–40 Hz from a box of only one to two cubic feet or so. [ 16 ] The spring suspension that restores the cone to a neutral position is a combination of an exceptionally compliant (soft) woofer suspension and the air inside the enclosure. At frequencies below system resonance, the air pressure caused by the cone motion is the dominant restoring force; the acoustic suspension principle takes advantage of this relatively linear spring, and the enhanced suspension linearity of this type of system is an advantage. Developed by Edgar Villchur in 1954, this technique was used in the very successful Acoustic Research line of bookshelf speakers in the 1960s–70s. For a specific driver, an optimal acoustic suspension cabinet will be smaller than a bass reflex cabinet, but the bass reflex cabinet will have a lower −3 dB point. The voltage sensitivity above the tuning frequency remains a function of the driver, and not of the cabinet design.
The isobaric loudspeaker configuration was first introduced by Harry F. Olson in the early 1950s, and refers to systems in which two or more identical woofers (bass drivers) operate simultaneously, with a common body of enclosed air adjoining one side of each diaphragm. In practical applications, they are most often used to improve low-end frequency response without increasing cabinet size, though at the expense of cost and weight. Two identical loudspeakers are coupled to work together as one unit: they are mounted one behind the other in a casing to define a chamber of air in between. The volume of this isobaric chamber is usually chosen to be fairly small for reasons of convenience. The two drivers operating in tandem exhibit exactly the same behavior as one loudspeaker in a cabinet of twice the volume.
Also known as vented (or ported) systems, these enclosures have a vent or hole cut into the cabinet and a port tube affixed to the hole, to improve low-frequency output, increase efficiency, or reduce the size of an enclosure. Bass reflex designs are used in home stereo speakers (including both low- to mid-priced speaker cabinets and expensive hi-fi cabinets), bass amplifier speaker cabinets, keyboard amplifier cabinets, subwoofer cabinets and PA system speaker cabinets. Vented or ported cabinets use cabinet openings to transform and transmit low-frequency energy from the rear of the speaker to the listener. They deliberately and successfully exploit Helmholtz resonance . As with sealed enclosures, they may be empty, lined, filled or (rarely) stuffed with damping materials. For a given enclosure volume, the port tuning frequency is a function of the cross-sectional area of the port and its length. This enclosure type is very common, and provides more sound pressure level near the tuning frequency than a sealed enclosure of the same volume, although it actually has less low frequency output at frequencies well below the cut-off frequency, since the rolloff is steeper (24 dB/octave versus 12 dB/octave for a sealed enclosure). Malcolm Hill pioneered the use of these designs in a live event context in the early 1970s. [ 17 ]
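The port tuning mentioned above follows the Helmholtz resonator relation; a small sketch with invented dimensions and a common end-correction approximation (real designs also account for port flanging and losses):

```python
import math

C = 343.0  # approximate speed of sound in air, m/s

def port_tuning_hz(box_volume_litres, port_diameter_cm, port_length_cm):
    """Helmholtz resonance of a vented box:
        fb = (c / 2*pi) * sqrt(S / (V * L_eff))
    where S is the port cross-section, V the box volume and L_eff the port
    length plus an end correction (about 1.7 * port radius is used here,
    a common approximation)."""
    v = box_volume_litres / 1000.0          # m^3
    r = port_diameter_cm / 200.0            # radius in metres
    s = math.pi * r ** 2                    # port cross-section, m^2
    l_eff = port_length_cm / 100.0 + 1.7 * r
    return (C / (2.0 * math.pi)) * math.sqrt(s / (v * l_eff))

# Illustrative 50-litre box with a 7 cm diameter, 15 cm long port:
print(f"fb ~ {port_tuning_hz(50, 7, 15):.1f} Hz")   # roughly 33 Hz
```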
Vented system design using computer modeling has been practiced since about 1985. It made extensive use of the theory developed by researchers such as Thiele, [ 18 ] [ 19 ] [ 20 ] Benson, [ 21 ] [ 22 ] Small [ 23 ] [ 24 ] [ 25 ] [ 26 ] and Keele, [ 27 ] who had systematically applied electrical filter theory to the acoustic behavior of loudspeakers in enclosures. In particular Thiele and Small became very well known for their work. While ported loudspeakers had been produced for many years before computer modeling, achieving optimum performance was challenging because the response is a complex function of the properties of the specific driver, the enclosure and the port, and the assorted interactions were imperfectly understood. These enclosures are sensitive to small variations in driver characteristics and require special quality control concern for uniform performance across a production run. Bass ports are widely used in subwoofers for PA systems and sound reinforcement systems , in bass amp speaker cabinets and in keyboard amp speaker cabinets.
A passive radiator speaker uses a second passive driver, or drone, to produce similar low-frequency extension, or efficiency increase, or enclosure size reduction, similar to ported enclosures. Small [ 28 ] [ 29 ] and Hurlburt [ 30 ] have published the results of research into the analysis and design of passive-radiator loudspeaker systems. The passive-radiator principle was identified as being particularly useful in compact systems where vent realization is difficult or impossible, but it can also be applied satisfactorily to larger systems. The passive driver is not wired to an amplifier; instead, it moves in response to changing enclosure pressures. In theory, such designs are variations of the bass reflex type, but with the advantage of avoiding a relatively small port or tube through which air moves, sometimes noisily. Tuning adjustments for a passive radiator are usually accomplished more quickly than with a bass reflex design since such corrections can be as simple as mass adjustments to the drone. The disadvantages are that a passive radiator requires precision construction like a driver, thus increasing costs, and may have excursion limitations.
A fourth-order electrical bandpass filter can be simulated by a vented box in which the contribution from the rear face of the driver cone is trapped in a sealed box, and the radiation from the front surface of the cone is directed into a ported chamber. This modifies the resonance of the driver. In its simplest form a compound enclosure has two chambers. The dividing wall between the chambers holds the driver; typically only one chamber is ported.
If the enclosure on each side of the woofer has a port in it then the enclosure yields a 6th-order band-pass response. These are considerably harder to design and tend to be very sensitive to driver characteristics. As in other reflex enclosures, the ports may generally be replaced by passive radiators if desired. An eighth-order bandpass box is another variation which also has a narrow frequency range. They are often used to achieve high sound pressure levels at a specific bass frequency rather than for musical reproduction. They are complicated to build and must be done quite precisely in order to perform nearly as intended. [ 31 ]
This design falls between acoustic suspension and bass reflex enclosures. It can be thought of as either a leaky sealed box or a ported box with large amounts of port damping. By setting up a port, and then blocking it precisely with sufficiently tightly packed fiber filling, it is possible to adjust the damping in the port as desired. The result is control of the resonance behavior of the system which improves low-frequency reproduction, according to some designers. Dynaco was a primary producer of these enclosures for many years, using designs developed by a Scandinavian driver maker. The design remains uncommon among commercial designs currently available. A reason for this may be that adding damping material is a needlessly inefficient method of increasing damping; the same alignment can be achieved by simply choosing a loudspeaker driver with the appropriate parameters and precisely tuning the enclosure and port for the desired response.
A similar technique has been used in aftermarket car audio ; it is called aperiodic membrane (AP). A resistive mat is placed in front of or directly behind the loudspeaker driver (usually mounted on the rear deck of the car in order to use the trunk as an enclosure). The loudspeaker driver is sealed to the mat so that all acoustic output in one direction must pass through the mat. This increases mechanical damping, and the resulting decrease in the impedance magnitude at resonance is generally the desired effect, though there is no perceived or objective benefit to this. Again, this technique reduces efficiency, and the same result can be achieved through selection of a driver with a lower Q factor , or even via electronic equalization . This is reinforced by the purveyors of AP membranes; they are often sold with an electronic processor which, via equalization, restores the bass output lost through the mechanical damping. The effect of the equalization is opposite to that of the AP membrane, resulting in a loss of damping and an effective response similar to that of the loudspeaker without the aperiodic membrane and electronic processor.
A dipole enclosure in its simplest form is a driver located on a flat baffle panel, similar to older open back cabinet designs. The baffle's edges are sometimes folded back to reduce its apparent size, creating a sort of open-backed box. A rectangular cross-section is more common than a curved one, since it is easier to fabricate in a folded form. The baffle dimensions are typically chosen to obtain a particular low-frequency response, with larger dimensions giving a lower frequency before the front and rear waves interfere with each other. A dipole enclosure has a figure-of-eight radiation pattern, which means that there is a reduction in sound pressure, or loudness, at the sides as compared to the front and rear. This is useful where it is desirable for the sound to be less loud in some places than in others.
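A small sketch of the idealized figure-of-eight pattern described above (pure dipole behaviour; real open-baffle speakers deviate from this, especially at higher frequencies):

```python
import math

# Idealized dipole radiation: sound pressure falls off as |cos(theta)|
# relative to the on-axis value, reaching a null directly to the side.
def dipole_level_db(theta_deg):
    p = abs(math.cos(math.radians(theta_deg)))
    return 20.0 * math.log10(p) if p > 0 else float("-inf")

for angle in (0, 30, 45, 60, 90):
    print(f"{angle:3d} deg off-axis: {dipole_level_db(angle):6.1f} dB")
# 0 dB on axis, about -3 dB at 45 degrees, -6 dB at 60 degrees, and a null
# at 90 degrees, giving the characteristic figure-of-eight pattern.
```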
A horn loudspeaker is a speaker system using a horn to match the driver cone to the air. The horn structure itself does not amplify, but rather improves the coupling between the speaker driver and the air. Properly designed horns have the effect of making the speaker cone transfer more of the electrical energy in the voice coil into the air; in effect the driver appears to have higher efficiency. Horns can help control dispersion at higher frequencies, which is useful in some applications such as sound reinforcement. The mathematical theory of horn coupling is well developed and understood, though implementation is sometimes difficult. Properly designed horns for high frequencies are small (above 2,000 Hz on average, a few centimetres or inches), those for mid-range frequencies (perhaps 200 to 2,000 Hz) much larger, perhaps 30 to 60 cm (1 or 2 feet), and those for low frequencies (under 200 Hz) very large, a few metres or more across. In the 1950s, a few high fidelity enthusiasts built full-sized horns whose structures were incorporated into a house wall or basement. With the coming of stereo (two speakers) and surround sound (four or more), plain horns became even more impractical. Various speaker manufacturers have produced folded low-frequency horns which are much smaller (e.g., Altec Lansing, JBL, Klipsch, Lowther, Tannoy) and actually fit in practical rooms. These are necessarily compromises, and because they are physically complex, they are expensive.
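A back-of-the-envelope sketch of why low-frequency horns become so large, using the common rule of thumb that the horn mouth circumference should be at least one wavelength at the lowest frequency of interest (a simplification; folded and corner-loaded horns relax this somewhat):

```python
import math

C = 343.0  # approximate speed of sound in air, m/s

def min_mouth_diameter_m(f_low_hz):
    # Rule of thumb: mouth circumference >= one wavelength at the lowest
    # frequency, so the minimum mouth diameter is wavelength / pi.
    wavelength = C / f_low_hz
    return wavelength / math.pi

for f in (2000, 200, 40):  # tweeter, mid-range and bass horn examples
    d_cm = min_mouth_diameter_m(f) * 100.0
    print(f"{f:5d} Hz horn: mouth diameter >= {d_cm:6.1f} cm")
# At 40 Hz the mouth works out to roughly 2.7 m across, which is why
# full-size bass horns were built into walls or folded into the cabinet.
```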
The multiple-entry horn (also known under the trademarks CoEntrant , Unity and Synergy horn ) is a manifold speaker design; it uses several different drivers mounted on the horn at stepped distances from the horn's apex, where the high-frequency driver is placed. Depending on implementation, this design offers an improvement in transient response as each of the drivers is aligned in phase and time and exits the same horn mouth. A more uniform radiation pattern throughout the frequency range is also possible. [ 32 ] A uniform pattern is handy for smoothly arraying multiple enclosures. [ 33 ]
Both sides of a long-excursion high-power driver in a tapped horn enclosure are ported into the horn itself, with one path length long and the other short. These two paths combine in phase at the horn's mouth within the frequency range of interest. This design is especially effective at subwoofer frequencies and offers reductions in enclosure size along with more output. [ 33 ]
A perfect transmission line loudspeaker enclosure has an infinitely long line, stuffed with absorbent material such that all the rear radiation of the driver is fully absorbed, down to the lowest frequencies. Theoretically, the vent at the far end could then be closed or open with no difference in performance. The density and material of the stuffing are critical, as too much stuffing will cause reflections due to back-pressure, whilst insufficient stuffing will allow sound to pass through to the vent. Stuffing is often of different materials and densities, changing as it gets further from the back of the driver's diaphragm.
Consequent to the above, practical transmission line loudspeakers are not true transmission lines, as there is generally output from the vent at the lowest frequencies. They can be thought of as a waveguide in which the structure shifts the phase of the driver's rear output by at least 90°, thereby reinforcing the frequencies near the driver's free-air resonance frequency f s . Transmission lines tend to be larger than ported enclosures of approximately comparable performance, due to the size and length of the guide that is required (typically 1/4 the longest wavelength of interest).
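A minimal sketch of the quarter-wavelength sizing mentioned above (the tuning frequencies are invented; stuffing, taper and folding all shift the effective tuning of a real line):

```python
C = 343.0  # approximate speed of sound in air, m/s

def quarter_wave_line_length_m(f_hz):
    # A straight, unstuffed line is nominally one quarter wavelength long
    # at its tuning frequency.
    return C / (4.0 * f_hz)

for f_tune in (25.0, 40.0, 60.0):  # hypothetical tuning frequencies, Hz
    length = quarter_wave_line_length_m(f_tune)
    print(f"tuning {f_tune:4.0f} Hz -> line length ~ {length:4.2f} m")
# A line tuned to 25 Hz comes out around 3.4 m long, which is why such
# enclosures are normally folded inside the cabinet.
```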
The design is often described as non-resonant, and some designs are sufficiently stuffed with absorbent material that there is indeed not much output from the line's port. But it is the inherent resonance (typically at 1/4 wavelength) that can enhance the bass response in this type of enclosure, albeit with less absorbent stuffing. Among the first examples of this enclosure design approach were the projects published in Wireless World by Bailey [ 34 ] in the early 1970s, and the commercial designs of the now defunct IMF Electronics which received critical acclaim at about the same time.
A variation on the transmission line enclosure uses a tapered tube, with the terminus (opening/port) having a smaller area than the throat. The tapering tube can be coiled for lower frequency driver enclosures to reduce the dimensions of the speaker system, resulting in a seashell-like appearance. Bose uses similar patented technology on their Wave and Acoustic Waveguide music systems. [ 35 ]
Numerical simulations by Augspurger [ 36 ] and King [ 37 ] have helped refine the theory and practical design of these systems.
A quarter wave resonator is a transmission line tuned to form a standing quarter wave at a frequency somewhat below the driver's resonance frequency F s . When properly designed, a port of much smaller diameter than the main pipe, located at the end of the pipe, emits the driver's rearward radiation in phase with the driver's forward output, greatly adding to the bass output. Such designs tend to be less dominant in certain bass frequencies than the more common bass reflex designs, and followers of such designs claim an advantage in clarity of the bass, with better congruency of the fundamental frequencies to the overtones. [ 38 ] Some loudspeaker designers, such as Martin J. King and Bjørn Johannessen, consider quarter wave enclosure a more fitting term for most transmission lines, since acoustically it is the quarter-wavelength standing waves inside the enclosure that produce the bass response emanating from the port. These designs can be considered a mass-loaded transmission line design or a bass reflex design, as well as a quarter wave enclosure. [ 39 ] Quarter wave resonators have seen a revival in commercial applications with the advent of neodymium drivers, which enable this design to produce relatively low bass extension within a relatively small speaker enclosure. [ 38 ]
The tapered quarter-wave pipe (TQWP) is an example of a combination of transmission line and horn effects. It is highly regarded by some speaker designers. The concept is that the sound emitted from the rear of the loudspeaker driver is progressively reflected and absorbed along the length of the tapering tube, almost completely preventing internally reflected sound from being retransmitted through the cone of the loudspeaker. The lower part of the pipe acts as a horn while the top can be visualised as an extended compression chamber. The entire pipe can also be seen as a tapered transmission line in inverted form. (A traditional tapered transmission line, confusingly also sometimes referred to as a TQWP, has a smaller mouth area than throat area.) Its relatively low adoption in commercial speakers can mostly be attributed to the large resulting dimensions of the speaker produced and the expense of manufacturing a rigid tapering tube. The TQWP is also known as a Voigt pipe , and was introduced in 1934 by Paul G. A. H. Voigt, Lowther's original driver designer. | https://en.wikipedia.org/wiki/Loudspeaker_enclosure
Louis Leithold ( San Francisco , United States, 16 November 1924 – Los Angeles , 29 April 2005) was an American mathematician and teacher . He is best known for authoring The Calculus , a classic textbook that changed the teaching of calculus in high schools and universities worldwide. [ 1 ] Known as "a legend in AP calculus circles," Leithold was the mentor of Jaime Escalante , the Los Angeles high-school teacher whose story is the subject of the 1988 movie Stand and Deliver . [ 2 ]
Leithold attended the University of California, Berkeley , where he attained his B.A., M.A. and PhD. He went on to teach at Phoenix College (Arizona) [ 1 ] (which has a math scholarship in his name [ 3 ] ), California State University, Los Angeles , the University of Southern California , Pepperdine University , and The Open University (UK). [ 4 ] In 1968, Leithold published The Calculus , a "blockbuster best-seller" which simplified the teaching of calculus. [ 5 ]
At age 72, after his retirement [ 4 ] from Pepperdine, [ 6 ] he began teaching calculus at Malibu High School , in Malibu, California , drilling his students for the Advanced Placement Calculus exam and achieving considerable success. [ 4 ] He regularly assigned two hours of homework per night, and before the AP test he held two training sessions at his own house that ran from 9 a.m. to 4 p.m. on Saturdays or Sundays. [ 7 ] His teaching methods were praised for their liveliness, and his love for the topic was well known. [ 5 ] He also taught workshops for calculus teachers. [ 7 ] [ 8 ] One of the people he influenced was Jaime Escalante, who taught math to minority students at Garfield High School in East Los Angeles . Escalante's subsequent success as a teacher is portrayed in the 1988 film Stand and Deliver . [ 4 ]
Leithold died of natural causes the week before his class (which he had been "relentlessly drilling" for eight months [ 4 ] ) was to take the AP exam; [ 4 ] his students went on to receive top scores. [ 8 ] A memorial service was held in Glendale , and a scholarship established in his name. [ 6 ]
Leithold experienced a notable legal event in his personal life in 1959 when he and his then-wife, musician Dr. Thyra N. Pliske, adopted a minor child, Gordon Marc Leithold. The couple eventually divorced in 1962, with an Arizona court granting Thyra custody of the child and Louis receiving certain visitation rights. Thyra later married Gilbert Norman Plass , and the family moved to Dallas, Texas in 1963.
In 1965, Louis filed a suit against his former wife and her new husband in the Juvenile Court of Dallas County, Texas. The suit, titled "Application for Modification of Visitation and Custody," sought significant changes to the Arizona decree based on allegations of changed conditions and circumstances. Following a hearing, the Dallas court modified the Arizona decree with respect to Louis' visitation rights. His son died in 1994, at the age of 35, in Houston, Texas.
He was an art collector who owned works by Vasa Mihich . He also used art by Patrick Caulfield in his Calculus book. | https://en.wikipedia.org/wiki/Louis_Leithold
In 1745 Louis Pierre Ancillon de la Sablonnière established the Pechelbronn bitumen mine at Merkwiller-Pechelbronn , Bas-Rhin , Alsace .
He was an interpreter with the French ambassador to Switzerland , then the General Treasurer of the Ligues Suisses and Grisons .
He learned of Jean Theophile Hoeffel's 1734 thesis "Historia Balsami Mineralis Alsatici sev Petrolei Vallis Sancti Lamperti", which described bitumen springs near Lampertsloch . The farm was later called BächelBrunn / Baechel-Brunn, for a "source of a brook", or "Baechelbronner", in 1768 when it was purchased by the LeBel family. [ 1 ]
Along with Jean d’Amascéne Eyrénis , the son of physician Eyrini d'Eyrinis , the developer of the La Presta asphalt mine of Val de Travers , Neuchâtel , Switzerland , he obtained permission to start searches around the spring. He created the first oil company in 1740, putting 40 shares on the market. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
This French business–related biographical article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Louis_Pierre_Ancillon_de_la_Sablonnière |