Larry Robinson is an American professor and academic administrator, who served as president of Florida A&M University from 2017 to 2024.
Robinson, an African American, started his college education at LeMoyne-Owen College and graduated summa cum laude from Memphis State University (now the University of Memphis) in 1979 with a B.S. degree in chemistry. [1] He received a Ph.D. in nuclear chemistry from Washington University in St. Louis in 1984. That same year, he joined the research staff of Oak Ridge National Laboratory (ORNL), where he was a research scientist and served as group leader of the neutron activation analysis facility. [1] He accepted a position as a visiting professor at FAMU in January 1995, and left ORNL two years later to accept a permanent faculty position at FAMU. [2]
At FAMU, Robinson became director of the university's Environmental Sciences Institute. In addition to conducting research on the environmental chemistry of coastal ecosystems, he had a leadership role in establishing new B.S. and Ph.D. degree programs. [3] In 2003, he became FAMU provost and vice president for academic affairs, serving until 2005. In 2007 he became the university's chief operating officer and vice president for research, and served for several weeks as the school's interim president. In May 2010, he left that position to become Assistant Secretary for Conservation and Management in the National Oceanic and Atmospheric Administration. In November 2011 he returned to FAMU as a professor and special assistant, and in March 2012 he was named provost and vice president for academic affairs. [4] [5] [6]
In July 2012, the FAMU Board of Trustees appointed Robinson to serve as the university's interim president, replacing James H. Ammons. On September 15, 2016, he began a third stint as interim president following the approval of a separation agreement with the 11th president, Elmira Mangum. On November 30, 2017, Robinson was named the 12th president of Florida A&M University. [1] He resigned from the presidency in July 2024.
One of Robinson's primary research interests is environmental chemistry, including the detection of trace elements in environmental matrices by nuclear methods. [7] In 1991, while at ORNL, Robinson was a participant in a well-publicized investigation into the cause of the death of 19th-century U.S. President Zachary Taylor. When Taylor died rather suddenly in 1850, the cause of his death was listed as gastroenteritis, but some historians thought he might have been poisoned with arsenic. His descendants gave permission for his remains to be exhumed in order to allow analysis of tiny samples of his hair and fingernails. With ORNL's High Flux Isotope Reactor as a neutron source, Robinson and colleagues used neutron activation analysis to measure arsenic levels in the samples. [8] [9] The measured arsenic concentrations were many times lower than would be expected in a case of arsenic poisoning, leading to the finding that Taylor did not die of arsenic poisoning. [8] [10] | https://en.wikipedia.org/wiki/Larry_Robinson_(chemist) |
The Larson–Miller relation , also widely known as the Larson–Miller parameter and often abbreviated LMP , is a parametric relation used to extrapolate experimental data on creep and rupture life of engineering materials.
F. R. Larson and J. Miller proposed that creep rate could adequately be described by an Arrhenius-type equation:

$$r = A\,e^{-\Delta H/(RT)}$$
where r is the creep process rate, A is a constant, R is the universal gas constant, T is the absolute temperature, and ΔH is the activation energy for the creep process. Taking the natural log of both sides:

$$\ln r = \ln A - \frac{\Delta H}{RT}$$
With some rearrangement:

$$\frac{\Delta H}{R} = T\,(\ln A - \ln r)$$
Using the fact that creep rate is inversely proportional to rupture time t (so that r ∝ 1/t), the equation can be written as:

$$t = A^{-1}\,e^{\Delta H/(RT)}$$
Taking the natural log:

$$\ln t = -\ln A + \frac{\Delta H}{RT}$$
After some rearrangement the relation finally becomes:

$$\frac{\Delta H}{R} = T\,(\ln t + C), \qquad C = \ln A$$
This equation is of the same form as the Larson–Miller relation,

$$\mathrm{LMP} = T\,(C + \log t)$$
where the quantity LMP is known as the Larson–Miller parameter. Using the assumption that activation energy is independent of applied stress, the equation can be used to relate the difference in rupture life to differences in temperature for a given stress. The material constant C is typically found to be in the range of 20 to 22 for metals when time is expressed in hours and temperature in degrees Rankine.
The Larson–Miller model is used for experimental tests so that results at certain temperatures and stresses can predict rupture lives of time spans that would be impractical to reproduce in the laboratory.
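As an illustration, the sketch below extrapolates a laboratory rupture test to a lower service temperature by holding the Larson–Miller parameter constant at fixed stress; the material constant and all test values are illustrative assumptions, not data from the original studies.

```python
# Hedged sketch: rupture-life extrapolation at constant Larson–Miller
# parameter. C, the temperatures, and the test life are illustrative.
import math

C = 20.0                         # typical material constant for metals

def lmp(T_rankine, t_hours):
    """Larson–Miller parameter, LMP = T * (C + log10 t)."""
    return T_rankine * (C + math.log10(t_hours))

T_test, t_test = 1860.0, 1000.0  # lab test: ~760 °C in Rankine, rupture at 1000 h
target = lmp(T_test, t_test)     # LMP assumed constant at the given stress

T_service = 1660.0               # service temperature: ~650 °C in Rankine
t_service = 10 ** (target / T_service - C)   # invert LMP for rupture time
print(f"predicted service rupture life: {t_service:.3g} h")
```

A 1000-hour test at the higher temperature thus predicts a rupture life of several hundred thousand hours at the service temperature, a span that would be impractical to test directly.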
Expanding the relation LMP = T(C + log t) as a Taylor series in small changes of temperature and time, and keeping only the first-order terms, makes the relationship easier to understand:

$$\Delta \mathrm{LMP} \approx (C + \log t)\,\Delta T + T\,\Delta(\log t)$$
Changing the time by a factor of 10 changes its logarithm by 1, so the LMP changes by an amount equal to the temperature T.
To get an equal change in LMP by changing the temperature alone, the temperature needs to be raised or lowered by about 5% of its absolute value, since the factor C + log t is of order 20.
Typically a 5% increase in absolute temperature will increase the rate of creep by a factor of ten.
The equation was developed during the 1950s while Larson and Miller were employed by General Electric, performing research on turbine blade life.
The Omega Method is a comprehensive approach developed for assessing the remaining life of components operating in the creep range, in contrast to methods such as replication, life summation based on Larson–Miller parameters, or Kachanov's approach. [1]
The Omega Method aims to overcome limitations in accurately estimating strain accumulation, damage, and the rate of damage accumulation. It provides a broader methodology for life assessment [ 2 ] that incorporates strain-rate parameters, multi-axial damage parameters (including Omega), and material-specific property relations.
In 1986, the Petroleum and Chemical Committee of MPC initiated a research program to evaluate different approaches to life assessment. Through extensive experimentation on various materials, including carbon steel and hard chromium–molybdenum steel, several important observations were made.
Based on their findings, the researchers concluded that strain rate, at the operating stress and temperature, can indicate material damage. They aimed to develop a model linking strain rate, strain, consumed life, and remaining life. Initially designed for thermally stabilized materials, the Omega Method's applicability extends to diverse situations. It incorporates Kachanov's equations for strain rate acceleration, prioritizing monotonically increasing strain rates. Emphasizing strain rate's significance, the method recommends referencing an ex-service database for ex-service materials.
In API 579, [ 3 ] the MPC Project Omega program, which incorporates the Omega Method, offers a broader methodology for assessing remaining life compared to the Larson-Miller model. It considers strain-rate parameters, multi-axial damage parameters (including Omega), and material-specific property relations in the refining and petrochemical industry.
The MPC Project Omega program provides a comprehensive framework encompassing the Larson-Miller model for predicting remaining life in the creep regime. [ 4 ]
The remaining life of a component, L, can be calculated from the initial creep strain rate and the Omega parameter,

$$L = \frac{1}{\dot{\varepsilon}_0\,\Omega}$$

where stress is in ksi (MPa), temperature is in degrees Fahrenheit (degrees Celsius), and the remaining life and time are in hours. The correlations giving the strain rate and Omega as functions of stress and temperature are tabulated in API 579.
The von Mises yield criterion is specifically applicable to ductile materials.
Values obtained by the MPC Omega project for different materials can be found in API 579-1/ASME FFS-1-2021, Fitness-For-Service.
Ω can be obtained from an accelerated creep test in which strain is recorded, by fitting the data (ln ε̇, ε_c): the slope of the logarithm of the strain rate versus the creep strain ε_c gives Ω.
When adopting the Omega Method for a remaining life assessment, it is sufficient to estimate the creep strain rate at the service stress and temperature by conducting creep tests on the material that has been exposed to service conditions.
The creep test program followed the guidelines provided in the technical literature and API 579-1 for the implementation of the Omega Method.
The consumed life fraction f can be calculated using the following equation:

$$f = 1 - \frac{\dot{\varepsilon}_0}{\dot{\varepsilon}(t)} = 1 - e^{\,A_0(t=0)-A_0(t)}$$
Here, ε̇₀ represents the initial strain rate, ε̇(t) represents the current strain rate, A₀(t) represents the logarithm of the strain rate at time t, and A₀(t = 0) represents the logarithm of the initial strain rate.
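A minimal sketch of these relations on synthetic data, assuming the standard MPC Omega form ε̇(t) = ε̇₀·exp(Ω·ε_c) and the API 579 remaining-life expression L = 1/(ε̇·Ω); all numbers are illustrative:

```python
# Hedged sketch of the Omega-method bookkeeping on synthetic creep data.
import numpy as np

eps = np.linspace(0.0, 0.05, 50)     # recorded creep strain (illustrative)
omega_true = 50.0                     # assumed Omega for the synthetic data
eps_dot_0 = 1.0e-6                    # initial creep strain rate, 1/h
eps_dot = eps_dot_0 * np.exp(omega_true * eps)   # monotonically rising rate

# Omega is the slope of ln(strain rate) versus creep strain:
omega = np.polyfit(eps, np.log(eps_dot), 1)[0]

f = 1.0 - eps_dot_0 / eps_dot         # consumed life fraction (see above)
L = 1.0 / (eps_dot[-1] * omega)       # remaining life at the current rate, h
print(f"Omega = {omega:.1f}, f = {f[-1]:.2f}, remaining life = {L:.3g} h")
```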
Overall, the creep test program involved conducting tests on various specimens, comparing different directions, ensuring compliance with testing standards, and validating the results with the Omega Method model and API 579-1 data.
CSEF steels have complex microstructures, and conventional techniques struggle to accurately assess their creep life. The Omega method overcomes these challenges by combining hardness measurements with other techniques, such as the potential drop method and tertiary creep modeling. The potential drop method measures the electric potential drop ratio, which is correlated with the hardness drop; this correlation enables accurate creep life prediction using the hardness model. The integration of hardness measurements and the potential drop method enhances the accuracy of creep life assessments.
Compared to the Larson–Miller parameter commonly used for creep life assessment, the Omega method offers several advantages for assessing CSEF steels, which exhibit different degradation behavior from conventional steels and are therefore difficult to assess with conventional techniques. By considering microstructural factors and utilizing hardness measurements, which are directly influenced by the material's degradation, the method combines microstructure and mechanical property assessment into a comprehensive evaluation of creep life, leading to more reliable predictions of the material's remaining useful life. | https://en.wikipedia.org/wiki/Larson–Miller_relation |
Larval hemolymph feeding (LHF) is a behavioural trait found in the queens of some species of ant, mainly in the subfamily Amblyoponinae, giving these ants the common name of Dracula ants. In colonies of Amblyopone silvestrii, the queens feed on the hemolymph (insect blood, also spelt haemolymph) of their larvae when food is not available. In one species, Myopopone castanea, worker ants also consume larval hemolymph. [1] This behaviour is said to be a precursor to trophallaxis in other ant families. The larvae themselves are not killed by the process. Foundresses suppress LHF when prey is available, allowing them to rear the first workers more swiftly. This nondestructive form of cannibalism can be regarded as a nutritive adaptation related to: (1) the lack of social food transfer in this species, and (2) its specialized predation on large sporadic prey (centipedes). LHF similar to that in Amblyopone has also been found in Proceratium, and another type of LHF, involving a specialized larval exudatory organ, occurs in Leptanilla. [2] [3]
| https://en.wikipedia.org/wiki/Larval_hemolymph_feeding |
Las Cabezas de San Juan ( Spanish for 'the San Juan capes' or 'headlands'), officially Cabo San Juan ( Cape San Juan in English), [ 1 ] is a coastal area and nature reserve located in the northeastern corner of the main island of Puerto Rico , particularly in the Cabezas barrio of the municipality of Fajardo . The reserve is famous for its biodiversity, with its bioluminescent lagoon ( Laguna Grande , one of the three year-round bioluminescent bodies of water in the territory and one of seven in the Caribbean), its coral reefs , and its subtropical dry and mangrove forests , [ 2 ] and for its history, particularly for its lighthouse and its role during the Puerto Rico campaign of the Spanish–American War . [ 3 ] [ 4 ]
Cabezas de San Juan Nature Reserve consists mainly of a large peninsula located in the north-easternmost corner of Puerto Rico and its surrounding bodies of water. The reserve is connected on the west to Seven Seas State Park ( Parque Nacional Seven Seas ) and the Northeast Ecological Corridor, and by sea on the east to La Cordillera Reef Nature Reserve, a large protected marine area consisting of a small chain of cays, reefs, and islets, collectively known as La Cordillera ('the mountain range') or Cayos de la Cordillera (Cordillera Cays). [5] [6] [7] [8] To the north it is bounded by the Atlantic Ocean, and to the south it borders the fishing community of Las Croabas.
Las Cabezas de San Juan takes its name from the rocky headlands found at the northernmost point of a peninsula located at the north-easternmost point of the main island of Puerto Rico, which was formerly known as San Juan Bautista until early in the 18th century. While the names of the island ( San Juan Bautista ) and its capital city ( Ciudad de Puerto Rico ) were officially exchanged by 1746, [9] [10] [11] the name "Cabezas de San Juan", rather than "Cabezas de Puerto Rico", continued to be used to describe this extremity of the island, both colloquially and in official documents, despite the island's change of name. [12]
Laguna Grande (Spanish for 'big lagoon'), located within the nature reserve, is one of the three bodies of water in Puerto Rico with year-round bioluminescence, and one of seven in the Caribbean. [2] The other two bioluminescent bodies of water in Puerto Rico are Puerto Mosquito in Vieques and Bahía Fosforescente at La Parguera Nature Reserve in Lajas.
The area of Las Cabezas de San Juan was inhabited by the indigenous Taíno people at the time of the Spanish arrival in the Americas in 1492. Archaeological findings in the area suggest it was a prominent entry point into the island for pre-Columbian trade. [13] Although the town of Fajardo was founded in 1760 close by to the south, around the river of the same name, the peninsula itself was not settled at the time. Throughout the 18th century these headlands were a hotspot for smuggling, which prompted the establishment of a port to regulate trade and commerce in the area in 1820. [14] A lighthouse was built at the summit of the highest point of Cabezas de San Juan beginning in 1880 and inaugurated on May 2, 1882. [1] The peninsula and the lighthouse later played a role in the Battle of Fajardo during the Puerto Rico campaign of the Spanish–American War, when, in August 1898, Spanish troops under the command of Captain Pedro del Pino successfully repelled a US landing commanded by Rear Admiral Frederick Rodgers, who only managed to capture the lighthouse. [15] [16] [17]
The peninsula became a wildlife refuge in 1975 when it was acquired by the Conservation Trust of Puerto Rico; it was later proclaimed a nature reserve in 1986. [13] Hurricane Hugo made landfall on the cape and crossed the reserve as a strong Category 3 storm on September 18, 1989, after having devastated the island of Vieques earlier that same day. [18]
Cabezas de San Juan Nature Reserve today is owned and managed by the Conservation Trust of Puerto Rico and is open to the public. The lighthouse now operates as a museum, also managed by the Trust. [1] | https://en.wikipedia.org/wiki/Las_Cabezas_de_San_Juan_(Puerto_Rico) |
Laser-heated pedestal growth ( LHPG ), or laser floating zone ( LFZ ), is a crystal growth technique. A narrow region of a crystal is melted with a powerful CO₂ or Nd:YAG laser. The laser, and hence the floating zone, is moved along the crystal. The molten region melts impure solid at its forward edge and leaves a wake of purer material solidified behind it. This technique for growing crystals from the melt (a liquid/solid phase transition) is used in materials research. [1] [2]
The main advantages of this technique are the high pulling rates (60 times greater than the conventional Czochralski technique ) and the possibility of growing materials with very high melting points. [ 3 ] [ 4 ] [ 5 ] In addition, LHPG is a crucible -free technique, which allows single crystals to be grown with high purity and low stress.
The geometric shape of the crystals (the technique can produce small diameters), and the low production cost, make the single-crystal fibers (SCF) produced by LHPG suitable substitutes for bulk crystals in many devices, especially those that use high- melting-point materials. [ 6 ] [ 7 ] However, single-crystal fibers must have equal or superior optical and structural qualities compared to bulk crystals to substitute for them in technological devices. This can be achieved by carefully controlling the growth conditions. [ 8 ] [ 9 ] [ 10 ]
Until 1980, laser-heated crystal growth used only two laser beams focused over the source material. [ 11 ] This condition generated a high radial thermal gradient in the molten zone, making the process unstable. Increasing the number of beams to four did not solve the problem, although it improved the growth process. [ 12 ]
An improvement to the laser-heated crystal growth technique was made by Fejer et al. , [ 13 ] who incorporated a special optical component known as a reflaxicon , consisting of an inner cone surrounded by a larger coaxial cone section, both with reflecting surfaces. This optical element converts the cylindrical laser beam into a larger diameter hollow cylinder surface. [ 14 ] This optical component allows radial distribution of the laser energy over the molten zone, reducing radial thermal gradients. The axial temperature gradient in this technique can go as high as 10000 °C/cm, which is very high when compared to traditional crystal growth techniques (10–100 °C/cm).
A feature of the LHPG technique is its high convection speed in the liquid phase due to Marangoni convection. [15] [16] Even when the molten zone appears to be standing still, the melt inside it is in fact circulating rapidly. | https://en.wikipedia.org/wiki/Laser-heated_pedestal_growth |
Laser-induced breakdown spectroscopy ( LIBS ) is a type of atomic emission spectroscopy which uses a highly energetic laser pulse as the excitation source. [ 1 ] [ 2 ] The laser is focused to form a plasma, which atomizes and excites samples. The formation of the plasma only begins when the focused laser achieves a certain threshold for optical breakdown, which generally depends on the environment and the target material. [ 3 ]
From 2000 to 2010, the U.S. Army Research Laboratory (ARL) researched potential extensions to LIBS technology, which focused on hazardous material detection. [ 4 ] [ 5 ] Applications investigated at ARL included the standoff detection of explosive residues and other hazardous materials, plastic landmine discrimination, and material characterization of various metal alloys and polymers. Results presented by ARL suggest that LIBS may be able to discriminate between energetic and non-energetic materials. [ 6 ]
Broadband high-resolution spectrometers were developed in 2000 and commercialized in 2003. Designed for material analysis, the spectrometer allowed the LIBS system to be sensitive to chemical elements in low concentration. [ 7 ]
ARL LIBS applications studied from 2000 to 2010 included: [ 5 ]
ARL LIBS prototypes studied during this period included: [ 5 ]
LIBS is one of several analytical techniques that can be deployed in the field, as opposed to pure laboratory techniques such as spark OES. As of 2015, research on LIBS has focused on compact and (man-)portable systems. Some industrial applications of LIBS include the detection of material mix-ups, [8] analysis of inclusions in steel, analysis of slags in secondary metallurgy, [9] analysis of combustion processes, [10] and high-speed identification of scrap pieces for material-specific recycling tasks. Armed with data analysis techniques, this technique is being extended to pharmaceutical samples. [11] [12]
Following multiphoton or tunnel ionization, the electron is accelerated by inverse bremsstrahlung and can collide with nearby molecules, generating new electrons through collisions. If the pulse duration is long, the newly ionized electrons can be accelerated, and eventually avalanche or cascade ionization follows. Once the density of the electrons reaches a critical value, breakdown occurs and a high-density plasma is created which has no memory of the laser pulse. The criterion for the shortness of a pulse in dense media is therefore as follows: a pulse interacting with dense matter is considered short if the threshold for avalanche ionization is not reached during the interaction. At first glance this definition may appear too limiting, but due to the delicately balanced behavior of pulses in dense media, the threshold cannot be reached easily. The phenomenon responsible for this balance is intensity clamping [13] through the onset of the filamentation process during the propagation of strong laser pulses in dense media.
A potentially important development to LIBS involves the use of a short laser pulse as a spectroscopic source. [ 14 ] In this method, a plasma column is created as a result of focusing ultrafast laser pulses in a gas. The self-luminous plasma is far superior in terms of low level of continuum and also smaller line broadening. This is attributed to the lower density of the plasma in the case of short laser pulses due to the defocusing effects which limits the intensity of the pulse in the interaction region and thus prevents further multiphoton/tunnel ionization of the gas. [ 15 ] [ 16 ]
For an optically thin plasma composed of a single, neutral atomic species in local thermal equilibrium (LTE), the density of photons emitted by a transition from level i to level j is [ 17 ]
$$I_{ij}(\lambda) = \frac{1}{4\pi}\, n_0\, A_{ij}\, \frac{g_i\, e^{-E_i/k_B T}}{U(T)}\, I(\lambda)$$
where n₀ is the number density of the emitting species, A_ij is the transition probability (Einstein coefficient) for spontaneous emission from level i to level j, g_i is the degeneracy of level i, E_i is its energy, k_B is the Boltzmann constant, T is the plasma temperature, U(T) is the partition function, and I(λ) is the spectral line profile.
The partition function U(T) is the statistical occupation fraction of every level j of the atomic species:
$$U(T) = \sum_j g_j\, e^{-E_j/k_B T}$$
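To illustrate how the formula is used, the sketch below evaluates relative line intensities for a hypothetical three-level species in LTE; the level energies, degeneracies, transition probabilities, and temperature are all illustrative assumptions, not data for a real element.

```python
# Hedged sketch: relative LIBS line intensities from the LTE formula above.
import numpy as np

k_B = 8.617e-5                   # Boltzmann constant, eV/K
T = 1.0e4                        # assumed plasma temperature, K

# toy level scheme (illustrative): index 0 is the ground state
E = np.array([0.0, 3.0, 4.5])    # level energies, eV
g = np.array([1, 3, 5])          # level degeneracies

U = np.sum(g * np.exp(-E / (k_B * T)))   # partition function over all levels

# two emission lines from the excited levels, with Einstein coefficients:
A = np.array([1.0e8, 5.0e7])     # transition probabilities, 1/s
upper = np.array([1, 2])         # upper level of each line

# relative per-atom line intensities; n0 and the line profile I(lambda)
# cancel when taking a ratio of two lines from the same species
I = A * g[upper] * np.exp(-E[upper] / (k_B * T)) / U
print(I / I.max())
```

Ratios of such lines measured in a spectrum, compared against this model, are the basis of Boltzmann-plot temperature determination in LIBS.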
Recently, LIBS has been investigated as a fast, micro-destructive food analysis tool. It is considered a potential analytical tool for qualitative and quantitative chemical analysis, making it suitable as a PAT (Process Analytical Technology) or portable tool. Milk, bakery products, tea, vegetable oils, water, cereals, flour, potatoes, palm date and different types of meat have been analyzed using LIBS. [ 18 ] Few studies have shown its potential as an adulteration detection tool for certain foods. [ 19 ] [ 20 ] LIBS has also been evaluated as a promising elemental imaging technique in meat. [ 21 ]
In 2019, researchers at the University of York and Liverpool John Moores University employed LIBS to study 12 European oysters ( Ostrea edulis , Linnaeus, 1758) from the Late Mesolithic shell midden at Conors Island (Republic of Ireland). The results highlighted the applicability of LIBS for determining prehistoric seasonality practices, as well as biological age and growth, at an improved rate and reduced cost compared with what was previously achievable. [22] [23] | https://en.wikipedia.org/wiki/Laser-induced_breakdown_spectroscopy |
Laser-induced incandescence (LII) is an in situ method of measuring aerosol particle volume fraction , primary particle sizes, and other thermophysical properties in flames , during gas-phase nanoparticle synthesis, and in aerosol streams more broadly. The technique is prominently used to characterize soot . [ 1 ]
The technique can broadly be separated into applications involving continuous or pulsed laser sources, with the former implemented in the Single Particle Soot Photometer (SP2) and the latter used in time-resolved laser-induced incandescence (TiRe-LII) analyses.
| https://en.wikipedia.org/wiki/Laser-induced_incandescence |
The Laser 50 is an educational portable computer sold by Vtech that ran the BASIC programming language . It was released in 1984.
The Laser 50 used a Zilog Z80 central processing unit running at 3.5 MHz, 2 kB to 18 kB of RAM, a 12 kB ROM, and an 80×7-dot LCD screen.
| https://en.wikipedia.org/wiki/Laser_50 |
Laser Doppler velocimetry , also known as laser Doppler anemometry , is the technique of using the Doppler shift in a laser beam to measure the velocity in transparent or semi-transparent fluid flows or the linear or vibratory motion of opaque, reflecting surfaces. The measurement with laser Doppler anemometry is absolute and linear with velocity and requires no pre-calibration.
The development of the helium–neon laser (He-Ne) in 1962 at the Bell Telephone Laboratories provided the optics community with a continuous wave electromagnetic radiation source that was highly concentrated at a wavelength of 632.8 nanometers (nm) in the red portion of the visible spectrum . [ 1 ] It was discovered that fluid flow measurements could be made using the Doppler effect on a He-Ne beam scattered by small polystyrene spheres in the fluid. [ 2 ]
At the Research Laboratories of Brown Engineering Company (later Teledyne Brown Engineering), this phenomenon was used to develop the first laser Doppler flowmeter using heterodyne signal processing. [ 3 ] This instrument became known as the laser Doppler velocimeter and the technique was called laser Doppler velocimetry. It is also referred to as laser Doppler anemometry.
Early laser Doppler velocimetry applications included measuring and mapping the exhaust from rocket engines with speeds up to 1000 m/s, as well as determining flow in a near-surface blood artery. Similar instruments were also developed for solid surface monitoring, with applications ranging from measuring product speeds in production lines of paper and steel mills to measuring vibration frequency and amplitude of surfaces. [ 4 ]
In its simplest and most presently used form, laser Doppler velocimetry crosses two beams of collimated , monochromatic , and coherent laser light in the flow of the fluid being measured. The two beams are usually obtained by splitting a single beam, thus ensuring coherence between the two. Lasers with wavelengths in the visible spectrum (390–750 nm) are commonly used; these are typically He-Ne, Argon ion , or laser diode , allowing the beam path to be observed. A transmitting optics system focuses the beams to intersect at their waists (the focal point of a laser beam), where they interfere and generate a set of straight fringes. As particles (either naturally occurring or induced) entrained in the fluid pass through the fringes, they scatter light that is then collected by a receiving optics and focused on a photodetector (typically an avalanche photodiode ).
The scattered light fluctuates in intensity, the frequency of which is equivalent to the Doppler shift between the incident and scattered light, and is thus proportional to the component of particle velocity which lies in the plane of two laser beams. If the sensor is aligned to the flow such that the fringes are perpendicular to the flow direction, the electrical signal from the photodetector will then be proportional to the full particle velocity. By combining three devices (e.g., He-Ne, Argon ion, and laser diode) with different wavelengths, all three flow velocity components can be simultaneously measured. [ 5 ]
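The fringe model above implies a simple conversion from measured Doppler frequency to velocity. Below is a minimal sketch with an assumed wavelength and beam-crossing angle (real probes quote a calibrated fringe spacing instead):

```python
# Hedged sketch of the dual-beam fringe model: fringe spacing
# d = lambda / (2 sin(theta/2)); a particle crossing the fringes at
# speed v produces a Doppler burst at f_D = v / d. Values illustrative.
import math

lam = 532e-9                          # laser wavelength, m (assumed)
theta = math.radians(10.0)            # full angle between the two beams
d = lam / (2 * math.sin(theta / 2))   # fringe spacing, m

f_D = 1.2e6                           # measured Doppler burst frequency, Hz
v = f_D * d                           # velocity normal to the fringes, m/s
print(f"fringe spacing {d*1e6:.2f} um, velocity {v:.3f} m/s")
```

Note that the conversion involves only the wavelength and beam geometry, which is why the measurement is absolute and linear in velocity, requiring no pre-calibration.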
Another form of laser Doppler velocimetry, particularly used in early device developments, has a completely different approach akin to an interferometer . The sensor also splits the laser beam into two parts; one (the measurement beam) is focused into the flow and the second (the reference beam) passes outside the flow. A receiving optics provides a path that intersects the measurement beam, forming a small volume. Particles passing through this volume will scatter light from the measurement beam with a Doppler shift; a portion of this light is collected by the receiving optics and transferred to the photodetector. The reference beam is also sent to the photodetector where optical heterodyne detection produces an electrical signal proportional to the Doppler shift, by which the particle velocity component perpendicular to the plane of the beams can be determined. [ 6 ]
The signal detection scheme of the instrument uses the principle of optical heterodyne detection. This principle is shared with other laser Doppler-based instruments such as the laser Doppler vibrometer and the laser surface velocimeter. It is possible to apply digital techniques to the signal to obtain the velocity as a measured fraction of the speed of light, and in this sense laser Doppler velocimetry is a particularly fundamental measurement, traceable to the SI system of measurement. [7]
In the decades since the laser Doppler velocimetry was first introduced, there has been a wide variety of laser Doppler sensors developed and applied.
Laser Doppler velocimetry is often chosen over other forms of flow measurement because the equipment can be outside of the flow being measured and therefore has no effect on the flow.
One disadvantage has been that laser Doppler velocimetry sensors are range-dependent: they have to be calibrated precisely, and the distance at which they measure has to be precisely defined. This distance restriction has recently been at least partially overcome with a new sensor that is range-independent. [9]
Laser Doppler velocimetry can be useful in automation, which includes the flow examples above. It can also be used to measure the speed of solid objects, like conveyor belts . This can be useful in situations where attaching a rotary encoder (or a different mechanical speed measurement device) to the conveyor belt is impossible or impractical.
Laser Doppler velocimetry is used in hemodynamics research as a technique to partially quantify blood flow in human tissues such as skin or the eye fundus. Within the clinical environment, the technology is often referred to as laser Doppler flowmetry; when images are made, it is referred to as laser Doppler imaging . The beam from a low-power laser (usually a laser diode ) penetrates the skin sufficiently to be scattered with a Doppler shift by the red blood cells and return to be concentrated on a detector. These measurements are useful to monitor the effect of exercise, drug treatments, environmental, or physical manipulations on targeted micro-sized vascular areas. [ 10 ]
The laser Doppler vibrometer is being used in clinical otology for the measurement of tympanic membrane (eardrum), malleus (hammer), and prosthesis head displacement in response to sound inputs of 80- to 100-dB sound-pressure level . It also has potential use in the operating room to perform measurements of prosthesis and stapes (stirrup) displacement. [ 11 ]
The Autonomous Landing Hazard Avoidance Technology used in NASA's Project Morpheus lunar lander to automatically find a safe landing place contains a lidar Doppler velocimeter that measures the vehicle's altitude and velocity. [12] The AGM-129 ACM cruise missile uses a laser Doppler velocimeter for precise terminal guidance. [13]
Laser Doppler velocimetry is used in the analysis of vibration of MEMS devices, often to compare the performance of devices such as accelerometers-on-a-chip with their theoretical (calculated) modes of vibration. As a specific example in which the unique features of Laser Doppler velocimetry are important, the measurement of velocity of a MEMS watt balance device [ 14 ] has allowed greater accuracy in the measurement of small forces than previously possible, through directly measuring the ratio of this velocity to the speed of light. This is a fundamental, traceable measurement that now allows traceability of small forces to the S.I. System. | https://en.wikipedia.org/wiki/Laser_Doppler_velocimetry |
A laser Doppler vibrometer ( LDV ) is a scientific instrument that is used to make non-contact vibration measurements of a surface. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the reflected laser beam frequency due to the motion of the surface. The output of an LDV is generally a continuous analog voltage that is directly proportional to the target velocity component along the direction of the laser beam.
Some advantages of an LDV over similar measurement devices such as an accelerometer are that the LDV can be directed at targets that are difficult to access, or that may be too small or too hot to attach a physical transducer . Also, the LDV makes the vibration measurement without mass-loading the target, which is especially important for MEMS devices.
A vibrometer is generally a two beam laser interferometer that measures the frequency (or phase) difference between an internal reference beam and a test beam. The most common type of laser in an LDV is the helium–neon laser , although laser diodes , fiber lasers , and Nd:YAG lasers are also used. The test beam is directed to the target, and scattered light from the target is collected and interfered with the reference beam on a photodetector , typically a photodiode . Most commercial vibrometers work in a heterodyne regime by adding a known frequency shift (typically 30–40 MHz) to one of the beams. This frequency shift is usually generated by a Bragg cell , or acousto-optic modulator. [ 1 ]
A schematic of a typical laser vibrometer is shown above. The beam from the laser, which has a frequency f₀, is divided into a reference beam and a test beam with a beamsplitter. The test beam then passes through the Bragg cell, which adds a frequency shift f_b. This frequency-shifted beam is then directed to the target. The motion of the target adds a Doppler shift to the beam given by f_d = 2·v(t)·cos(α)/λ, where v(t) is the velocity of the target as a function of time, α is the angle between the laser beam and the velocity vector, and λ is the wavelength of the light.
Light scatters from the target in all directions, but some portion of the light is collected by the LDV and reflected by the beamsplitter to the photodetector. This light has a frequency equal to f₀ + f_b + f_d. This scattered light is combined with the reference beam at the photodetector. The initial frequency of the laser is very high (>10¹⁴ Hz), higher than the response of the detector. The detector does respond, however, to the beat frequency between the two beams, which is at f_b + f_d (typically in the tens of MHz range).
The output of the photodetector is a standard frequency modulated (FM) signal, with the Bragg cell frequency as the carrier frequency , and the Doppler shift as the modulation frequency. This signal can be demodulated to derive the velocity vs. time of the vibrating target.
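A minimal sketch of this demodulation step on a simulated beat signal, using a Hilbert-transform FM discriminator; the carrier frequency, wavelength, and vibration parameters are illustrative assumptions (real instruments use dedicated decoder hardware), and normal beam incidence (α = 0) is assumed:

```python
# Hedged sketch: recover target velocity from a simulated LDV photodetector
# signal by FM demodulation. 40 MHz carrier and 632.8 nm He-Ne wavelength
# are illustrative, not taken from a specific instrument.
import numpy as np
from scipy.signal import hilbert

fs = 200e6                       # sample rate, Hz
t = np.arange(0, 1e-3, 1/fs)     # 1 ms record
lam = 632.8e-9                   # He-Ne wavelength, m
f_b = 40e6                       # Bragg-cell carrier frequency, Hz

v_true = 0.01 * np.sin(2*np.pi*1e3*t)        # 10 mm/s, 1 kHz vibration
f_d = 2 * v_true / lam                        # Doppler shift, f_d = 2v/lambda
phase = 2*np.pi*np.cumsum(f_b + f_d) / fs     # integrate frequency to phase
signal = np.cos(phase)                        # simulated photodetector output

# FM demodulation: analytic signal -> instantaneous frequency -> velocity
inst_phase = np.unwrap(np.angle(hilbert(signal)))
inst_freq = np.gradient(inst_phase, 1/fs) / (2*np.pi)
v_est = (inst_freq - f_b) * lam / 2           # invert f_d = 2v/lambda

err = np.max(np.abs(v_est[100:-100] - v_true[100:-100]))  # trim edge effects
print(f"max velocity error: {err:.2e} m/s")
```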
LDVs are used in a wide variety of scientific, industrial, and medical applications. | https://en.wikipedia.org/wiki/Laser_Doppler_vibrometer |
Laser Induced Deep Etching ( LIDE ) is a glass microfabrication technique. The two-step process enables precise, high-aspect-ratio microstructures in thin glass substrates, avoiding defects such as microcracks or chipping. [ 1 ] LIDE is often used for applications requiring high-precision through-glass vias (TGVs) and other intricate glass structures for semiconductor packaging and high-frequency communication devices. [ 2 ]
The technology was introduced in 2017 by LPKF Laser & Electronics AG as an enabling technology for precision glass processing in microsystems and semiconductor applications. [ 1 ] [ 3 ]
In 2018, LIDE was used to create fine-glass-masks (FGMs) for OLED displays, offering a potential alternative to fine-metal-masks (FMMs), which are commonly used in structured OLED material deposition. [ 4 ] The technology is more commonly utilised in advanced IC packaging, where it enables the processing of glass wafers and panels with through-glass vias (TGVs) for semiconductor packaging and MEMS devices. [ 5 ] LIDE technology has also been used to fabricate high-frequency RF and mm-wave communication components, particularly for system-in-package (SiP) systems. It enables the creation of high-precision TGVs, cavities, and cutouts essential for electromagnetically and galvanically coupled transitions in dielectric waveguide (DWG) applications. These transitions allow for RF signal coupling above 150 GHz. [ 2 ] The technique won the SID Honorary Award during the Society for Information Display (SID) Display Week in the United States in May 2019. [ 6 ]
In May 2020, a licence agreement was concluded with Nippon Electric Glass (NEG), under which NEG uses LIDE technology for the mass production of glass components, including cover glass, substrate glass and other glass components. [7] Besides the agreement with NEG, the technology is offered to the market without restriction. [8]
LIDE enables the creation of deep structures in thin glass with high aspect ratios exceeding 1:10 and feature sizes of 5 μm or less. [9]
A single laser pulse locally modifies the glass according to the desired layout, penetrating either through the entire thickness of the substrate or to an individually definable point. [ 5 ] [ 9 ] For the production of TGVs, LIDE modifies the glass to change its isotropic etching characteristics to anisotropic , allowing for precise aperture creation with well-defined sizes such as 2–3 µm. [ 4 ]
In the second step, the entire glass surface undergoes isotropic wet chemical etching . [ 5 ] The laser-modified regions etch at a significantly faster rate than unmodified areas, resulting in the formation of precise microstructures. [ 9 ] The process allows for the creation of glass structures with varied profiles, such as rounded, dimpled, or flat bottoms, depending on the specific requirements of the application. Additionally, it enables the fabrication of high-aspect-ratio structures with dimensions ranging from micrometres to millimetres. LIDE can also be applied to both single-layer and double-layer glass chips. [ 10 ] [ 5 ]
LIDE technology is particularly beneficial in fields requiring precise glass microstructures. The technology is employed in the production of microchips and sensors , with applications extending to industries such as automotive and aerospace , and for smartphone displays. [ 11 ] [ 12 ]
LIDE technology enables the creation of high-frequency communication systems, such as radar sensors, that require integration of multiple functions in small form factors. This development was part of the GlaRA research project, a publicly funded research project by LPKF Laser & Electronics AG with partners such as the Fraunhofer Institute for Reliability and Micro-integration. [ 13 ]
The process yields microstructures without microcracks, chipping, or heat-induced stress, preserving the inherent strength and optical clarity of the glass. [ 5 ] [ 17 ] [ 18 ]
LIDE enables the fabrication of structures with sub-micron precision, making it suitable for high-resolution imaging-based applications. It is also highly scalable, allowing for cost-effective production of complex glass components. [ 5 ] [ 19 ] FGMs made using LIDE technology offer advantages like absence of shadowing effects and the prevention of wrinkles. [ 4 ]
However, the technique is dependent on specific laser equipment and etching solutions, which limits its accessibility and increases spatial footprint. [ 10 ] [ 19 ] LIDE technology offers limited 2.5D structuring capabilities due to its elongated focus, [ 19 ] whereas other methods, like LightFab, use a dot-like focus for full 3D structures. [ 20 ] LIDE compensates for this with higher throughput, as a single pulse can modify the entire substrate thickness. [ 5 ] [ 18 ] | https://en.wikipedia.org/wiki/Laser_Induced_Deep_Etching |
The Laser Interferometer Space Antenna ( LISA ) is a planned space probe to detect and measure gravitational waves [ 2 ] —tiny ripples in the fabric of spacetime —from astronomical sources. [ 3 ] LISA will be the first dedicated space-based gravitational-wave observatory . It aims to measure gravitational waves directly by using laser interferometry . The LISA concept features three spacecraft arranged in an equilateral triangle with each side 2.5 million kilometers long, flying in an Earth-like heliocentric orbit . The distance between the satellites is precisely monitored to detect a passing gravitational wave. [ 2 ]
The LISA project started out as a joint effort between NASA and the European Space Agency (ESA). However, in 2011, NASA announced that it would be unable to continue its original LISA partnership with the European Space Agency [ 4 ] due to funding limitations. [ 5 ] In response, ESA continued developing the mission and in 2017, NASA re-engaged with LISA, contributing technology and scientific expertise to the mission. [ 6 ] The project is also a recognized CERN experiment (RE8), collaborating with CERN on precision measurement techniques. [ 7 ] [ 8 ] A revised, scaled-down design – originally known as the New Gravitational-wave Observatory ( NGO ) – was proposed as one of three large-scale projects in ESA's long-term plans . [ 9 ] In 2013, ESA selected “The Gravitational Universe” as the theme for its third large-class (L3) mission under the Cosmic Vision program. This decision set the foundation for LISA’s selection as the space-based gravitational wave observatory planned for launch in the 2030s. [ 10 ] [ 11 ]
In January 2017, LISA was proposed as a candidate mission. [ 12 ] On June 20, 2017, the suggested mission received its clearance goal for the 2030s, and was approved as one of the main research missions of ESA. [ 13 ] [ 14 ]
On 25 January 2024, the LISA Mission was formally adopted by ESA, marking the transition from conceptual design to hardware development. As part of its renewed participation, NASA is contributing laser systems, telescopes, and charge management devices, all critical for detecting gravitational waves. [ 15 ] This adoption reflects that the mission’s technology is now sufficiently advanced to begin full-scale construction of the spacecraft and instruments. [ 16 ] In March 2024, NASA and ESA signed a Memorandum of Understanding (MoU), officially defining NASA’s role in supplying key mission components.
The LISA mission is designed for direct observation of gravitational waves , which are distortions of spacetime travelling at the speed of light . Passing gravitational waves alternately squeeze and stretch space itself by a tiny amount. Gravitational waves are caused by energetic events in the universe and, unlike any other radiation , can pass unhindered by intervening mass. Launching LISA will add a new sense to scientists' perception of the universe and enable them to study phenomena that are invisible in normal light. [ 17 ] [ 18 ]
Potential sources for signals are merging massive black holes at the centre of galaxies , [ 19 ] massive black holes orbited by small compact objects , known as extreme mass ratio inspirals , [ 20 ] binaries of compact stars, [ 21 ] substellar objects orbiting such binaries, [ 22 ] and possibly other sources of cosmological origin, such as a cosmological phase transition shortly after the Big Bang , [ 23 ] and speculative astrophysical objects like cosmic strings and domain boundaries . [ 24 ]
The LISA mission's primary objective is to detect and measure gravitational waves produced by compact binary systems and mergers of supermassive black holes. LISA will observe gravitational waves by measuring differential changes in the length of its arms, as sensed by laser interferometry. [ 25 ] Each of the three LISA spacecraft contains two telescopes, two lasers and two test masses (each a 46 mm, roughly 2 kg, gold-coated cube of gold/platinum), arranged in two optical assemblies pointed at the other two spacecraft. [ 12 ] These form Michelson-like interferometers , each centred on one of the spacecraft, with the test masses defining the ends of the arms. [ 26 ] The entire arrangement, which is ten times larger than the orbit of the Moon, will be placed in solar orbit at the same distance from the Sun as the Earth, but trailing the Earth by 20 degrees, and with the orbital planes of the three spacecraft inclined relative to the ecliptic by about 0.33 degree, which results in the plane of the triangular spacecraft formation being tilted 60 degrees from the plane of the ecliptic. [ 25 ] The mean linear distance between the formation and the Earth will be 50 million kilometres. [ 27 ]
To eliminate non-gravitational forces such as light pressure and solar wind on the test masses, each spacecraft is constructed as a zero-drag satellite . The test mass floats free inside, effectively in free-fall, while the spacecraft around it absorbs all these local non-gravitational forces. Then, using capacitive sensing to determine the spacecraft's position relative to the mass, very precise thrusters adjust the spacecraft so that it follows, keeping itself centered around the mass. [ 28 ]
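The control principle can be illustrated with a toy one-dimensional model: the free-falling test mass defines the reference, and the spacecraft thrusts to null the capacitively sensed gap. This is a minimal sketch with assumed, illustrative numbers (spacecraft mass, disturbance force, controller gains), not the actual LISA control law:

```python
# Toy 1-D sketch of drag-free control (illustrative, not flight software):
# the spacecraft feels an external force (e.g. solar radiation pressure);
# a PD controller thrusts it to stay centred on the free-falling test mass.
import numpy as np

dt, n = 0.1, 5000               # time step (s) and number of steps
m_sc = 1000.0                   # spacecraft mass, kg (assumed)
f_ext = 2e-5                    # solar-pressure force, N (assumed)
kp, kd = 0.5, 40.0              # PD gains, hand-tuned for this toy model

x_sc = v_sc = 0.0               # spacecraft state; test mass rests at x = 0
for _ in range(n):
    gap = x_sc - 0.0            # capacitive sensing: offset from test mass
    thrust = -kp * gap - kd * v_sc
    a = (f_ext + thrust) / m_sc
    v_sc += a * dt              # semi-implicit Euler integration
    x_sc += v_sc * dt

print(f"steady-state offset: {x_sc:.2e} m")  # small residual, f_ext / kp
```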
The longer the arms, the more sensitive the detector is to long-period gravitational waves, but its sensitivity to wavelengths shorter than the arms is reduced (2,500,000 km corresponds to 8.3 light-seconds, or a frequency of 0.12 Hz; compare LIGO's peak sensitivity around 500 Hz). As the satellites are free-flying, the spacing is easily adjusted before launch, with upper bounds imposed by the sizes of the telescopes required at each end of the interferometer (which are constrained by the size of the launch vehicle's payload fairing) and by the stability of the constellation orbit (larger constellations are more sensitive to the gravitational effects of other planets, limiting the mission lifetime). Another length-dependent factor which must be compensated for is the "point-ahead angle" between the incoming and outgoing laser beams; the telescope must receive its incoming beam from where its partner was a few seconds ago, but send its outgoing beam to where its partner will be a few seconds from now.
The original 2008 LISA proposal had arms 5 million kilometres (5 Gm) long. [ 29 ] When downscoped to eLISA in 2013, arms of 1 million kilometres were proposed. [ 30 ] The approved 2017 LISA proposal has arms 2.5 million kilometres (2.5 Gm) long. [ 31 ] [ 12 ]
Like most modern gravitational wave-observatories , LISA is based on laser interferometry . Its three satellites form a giant Michelson interferometer in which two "transponder" satellites play the role of reflectors and one "master" satellite the roles of source and observer. When a gravitational wave passes the interferometer, the lengths of the two LISA arms vary due to spacetime distortions caused by the wave. Practically, LISA measures a relative phase shift between one local laser and one distant laser by light interference . Comparison between the observed laser beam frequency (in return beam) and the local laser beam frequency (sent beam) encodes the wave parameters. The principle of laser-interferometric inter-satellite ranging measurements was successfully implemented in the Laser Ranging Interferometer onboard GRACE Follow-On . [ 32 ]
Unlike terrestrial gravitational-wave observatories, LISA cannot keep its arms "locked" in position at a fixed length. Instead, the distances between satellites vary significantly over each year's orbit, and the detector must keep track of the constantly changing distance, counting the millions of wavelengths by which the distance changes each second. Then, the signals are separated in the frequency domain : changes with periods of less than a day are signals of interest, while changes with periods of a month or more are irrelevant.
This difference means that LISA cannot use high-finesse Fabry–Pérot resonant arm cavities and signal recycling systems like terrestrial detectors, limiting its length-measurement accuracy. But with arms almost a million times longer, the motions to be detected are correspondingly larger.
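As a rough illustration of the fringe-counting rate, assuming a representative arm-length rate of a few metres per second and LISA's 1064 nm Nd:YAG laser wavelength:

$$\frac{v}{\lambda} \approx \frac{5\ \text{m/s}}{1064\ \text{nm}} \approx 5 \times 10^{6}\ \text{wavelengths per second},$$

consistent with the "millions of wavelengths" per second quoted above.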
An ESA test mission called LISA Pathfinder (LPF) was launched in 2015 to test the technology necessary to put a test mass in (almost) perfect free fall conditions. [ 33 ] LPF consists of a single spacecraft with one of the LISA interferometer arms shortened to about 38 cm (15 in), so that it fits inside a single spacecraft. The spacecraft reached its operational location in heliocentric orbit at the Lagrange point L1 on 22 January 2016, where it underwent payload commissioning. [ 34 ] Scientific research started on March 8, 2016. [ 35 ] The goal of LPF was to demonstrate a noise level 10 times worse than needed for LISA. However, LPF exceeded this goal by a large margin, approaching the LISA requirement noise levels. [ 36 ]
Gravitational-wave astronomy seeks to use direct measurements of gravitational waves to study astrophysical systems and to test Einstein 's theory of gravity . Indirect evidence of gravitational waves was derived from observations of the decreasing orbital periods of several binary pulsars , such as the Hulse–Taylor pulsar . [ 38 ] In February 2016, the Advanced LIGO project announced that it had directly detected gravitational waves from a black hole merger. [ 39 ] [ 40 ] [ 41 ]
Observing gravitational waves requires two things: a strong source of gravitational waves—such as the merger of two black holes —and extremely high detection sensitivity. A LISA-like instrument should be able to measure relative displacements with a resolution of 20 picometres —less than the diameter of a helium atom—over a distance of a million kilometres, yielding a strain sensitivity of better than 1 part in 10²⁰ in the low-frequency band around a millihertz.
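A quick order-of-magnitude check of these figures, using the full 2.5-million-kilometre arm length quoted earlier:

$$h \sim \frac{\delta L}{L} = \frac{20 \times 10^{-12}\ \text{m}}{2.5 \times 10^{9}\ \text{m}} = 8 \times 10^{-21},$$

i.e. better than one part in 10²⁰.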
A LISA-like detector is sensitive to the low-frequency band of the gravitational-wave spectrum, which contains many astrophysically interesting sources. [ 42 ] Such a detector would observe signals from binary stars within our galaxy (the Milky Way ); [ 43 ] [ 44 ] signals from binary supermassive black holes in other galaxies ; [ 45 ] and extreme-mass-ratio inspirals and bursts produced by a stellar-mass compact object orbiting a supermassive black hole. [ 46 ] [ 47 ] There are also more speculative signals such as signals from cosmological phase transitions , cosmic strings and primordial gravitational waves generated during cosmological inflation . [ 48 ]
LISA will be able to detect the nearly monochromatic gravitational waves emanating from close binaries consisting of two compact stellar objects ( white dwarfs , neutron stars , and black holes ) in the Milky Way . At low frequencies these are expected to be so numerous that they form a source of (foreground) noise for LISA data analysis. At higher frequencies, LISA is expected to detect and resolve around 25,000 galactic compact binaries. Studying the distribution of the masses, periods, and locations of this population will teach us about the formation and evolution of binary systems in the galaxy. Furthermore, LISA will be able to resolve 10 binaries currently known from electromagnetic observations (and find ≈500 more with electromagnetic counterparts within one square degree). Joint study of these systems will allow inference on other dissipation mechanisms in these systems, e.g. through tidal interactions. [12] One of the currently known binaries that LISA will be able to resolve is the white dwarf binary ZTF J1539+5027, with a period of 6.91 minutes, the second-shortest-period binary white dwarf pair discovered to date. [49] [50]
LISA will also be able to detect the presence of large planets and brown dwarfs orbiting white dwarf binaries. The number of such detections in the Milky Way is estimated to range from 17 in a pessimistic scenario to more than 2000 in an optimistic scenario, and even extragalactic detections in the Magellanic Clouds might be possible, far beyond the current capabilities of other detection methods for exoplanets . [ 22 ] [ 51 ] [ 52 ]
LISA will be able to detect gravitational waves from the merger of a pair of massive black holes with a chirp mass between 10⁴ and 10⁷ solar masses, all the way back to their earliest formation at redshift around z ≈ 10. The most conservative population models expect at least a few such events to happen each year. For closer mergers ( z < 3), it will be able to determine the spins of the components, which carry information about the past evolution of the components (e.g. whether they have grown primarily through accretion or mergers). For mergers around the peak of star formation ( z ≈ 2), LISA will be able to locate the merger within 100 square degrees on the sky at least 24 hours before the actual merger, allowing electromagnetic telescopes to search for counterparts, with the potential of witnessing the formation of a quasar after a merger. [12]
Extreme-mass-ratio inspirals (EMRIs) consist of a stellar compact object (<60 solar masses) on a slowly decaying orbit around a massive black hole of around 10⁵ solar masses. For the ideal case of a prograde orbit around a (nearly) maximally spinning black hole, LISA will be able to detect these events up to z = 4. EMRIs are interesting because they evolve slowly, spending around 10⁵ orbits, and between a few months and a few years, in the LISA sensitivity band before merging. This allows very accurate (up to an error of 1 in 10⁴) measurements of the properties of the system, including the mass and spin of the central object and the mass and orbital elements ( eccentricity and inclination ) of the smaller object. EMRIs are expected to occur regularly in the centers of most galaxies and in dense star clusters. Conservative population estimates predict at least one detectable event per year for LISA. [12]
LISA will also be able to detect gravitational waves emanating from black hole binary mergers where the lighter black hole is in the intermediate-mass range (between 10² and 10⁴ solar masses). In the case of both components being intermediate-mass black holes between 600 and 10⁴ solar masses, LISA will be able to detect events up to redshifts around 1. In the case of an intermediate-mass black hole spiralling into a massive black hole (between 10⁴ and 10⁶ solar masses), events will be detectable up to at least z = 3. Since little is known about the population of intermediate-mass black holes, there is no good estimate of the event rates for these events. [12]
Following the announcement of the first gravitational wave detection, GW150914, it was realized that a similar event would be detectable by LISA well before the merger. [53] Based on the LIGO-estimated event rates, it is expected that LISA will detect and resolve about 100 binaries that would merge a few weeks to months later in the LIGO detection band. LISA will be able to accurately predict the time of merger ahead of time and locate the event to within 1 square degree on the sky. This will greatly aid the possibilities for searches for electromagnetic counterpart events. [12]
Gravitational wave signals from black holes could provide hints at a more fundamental theory of gravity. [ 12 ] LISA will be able to test possible modifications of Einstein's general theory of relativity, motivated by dark energy or dark matter. [ 54 ] These could manifest, for example, through modifications of the propagation of gravitational waves, or through the possibility of hairy black holes . [ 54 ]
LISA will be able to independently measure the redshift and distance of events occurring relatively close by ( z < 0.1) through the detection of massive black hole mergers and EMRIs. Consequently, it can make an independent measurement of the Hubble parameter H₀ that does not depend on the use of the cosmic distance ladder . The accuracy of such a determination is limited by the sample size and therefore the mission duration. With a mission lifetime of 4 years one expects to be able to determine H₀ with an absolute error of 0.01 (km/s)/Mpc. At larger ranges LISA events can (stochastically) be linked to electromagnetic counterparts, to further constrain the expansion curve of the universe. [ 12 ]
LISA will be sensitive to the stochastic gravitational wave background generated in the early universe through various channels, including inflation , first-order cosmological phase transitions related to spontaneous symmetry breaking , and cosmic strings. [ 12 ]
LISA will also search for currently unknown (and unmodelled) sources of gravitational waves. The history of astrophysics has shown that whenever a new frequency range or detection medium becomes available, new and unexpected sources show up. These could, for example, include kinks and cusps in cosmic strings. [ 12 ]
LISA will be sensitive to the permanent displacement induced on probe masses by gravitational waves, known as the gravitational memory effect . [ 55 ] [ 56 ]
Previous searches for gravitational waves in space were conducted for short periods by planetary missions that had other primary science objectives (such as Cassini–Huygens ), using microwave Doppler tracking to monitor fluctuations in the Earth–spacecraft distance. By contrast, LISA is a dedicated mission that will use laser interferometry to achieve a much higher sensitivity. [ citation needed ] Other gravitational wave antennas , such as LIGO , Virgo , and GEO600 , are already in operation on Earth, but their sensitivity at low frequencies is limited by the largest practical arm lengths, by seismic noise, and by interference from nearby moving masses. Conversely, NANOGrav measures frequencies too low for LISA. The different types of gravitational wave measurement systems — LISA, NANOGrav and ground-based detectors — are complementary rather than competitive, much like astronomical observatories in different electromagnetic bands (e.g., ultraviolet and infrared ). [ 57 ]
The first design studies for a gravitational-wave detector to be flown in space were performed in the 1980s under the name LAGOS (Laser Antenna for Gravitational-radiation Observation in Space). LISA was first proposed as a mission to ESA in the early 1990s, first as a candidate for the M3 cycle and later as a 'cornerstone mission' for the 'Horizon 2000 plus' programme. As the decade progressed, the design was refined to a triangular configuration of three spacecraft with three 5-million-kilometre arms. This mission was pitched as a joint mission between ESA and NASA in 1997. [ 58 ] [ 59 ]
In the 2000s the joint ESA/NASA LISA mission was identified as a candidate for the 'L1' slot in ESA's Cosmic Vision 2015–2025 programme. However, due to budget cuts, NASA announced in early 2011 that it would not be contributing to any of ESA's L-class missions. ESA nonetheless decided to push the program forward, and instructed the L1 candidate missions to present reduced cost versions that could be flown within ESA's budget. A reduced version of LISA was designed with only two 1-million-kilometre arms under the name NGO (New/Next Gravitational wave Observatory). Despite NGO being ranked highest in terms of scientific potential, ESA decided to fly Jupiter Icy Moons Explorer (JUICE) as its L1 mission. One of the main concerns was that the LISA Pathfinder mission had been experiencing technical delays, making it uncertain if the technology would be ready for the projected L1 launch date. [ 58 ] [ 59 ]
Soon afterwards, ESA announced it would be selecting themes for its Large class L2 and L3 mission slots. A theme called "the Gravitational Universe" was formulated with the reduced NGO, rechristened eLISA, as a straw-man mission. [ 60 ] In November 2013, ESA announced that it had selected "the Gravitational Universe" for its L3 mission slot (expected launch in 2034). [ 61 ] Following the successful detection of gravitational waves by the ground-based LIGO detectors in September 2015, NASA expressed interest in rejoining the mission as a junior partner. In response to an ESA call for mission proposals for the 'Gravitational Universe' themed L3 mission, [ 62 ] a mission proposal for a detector with three 2.5-million-kilometre arms, again called LISA, was submitted in January 2017. [ 12 ]
As of January 2024, LISA is expected to launch in 2035 on an Ariane 6 , [ 1 ] two years earlier than previously announced. [ 63 ] | https://en.wikipedia.org/wiki/Laser_Interferometer_Space_Antenna |
Laser ablation synthesis in solution ( LASiS ) is a commonly used method for obtaining colloidal solutions of nanoparticles in a variety of solvents . [ 1 ] [ 2 ] Nanoparticles (NPs) are useful in chemistry, engineering and biochemistry due to their large surface-to-volume ratio, which gives them unique physical properties. [ 3 ] LASiS is considered a "green" method because it does not use toxic chemical precursors to synthesize nanoparticles. [ 3 ] [ 4 ] [ 5 ]
In the LASiS method, nanoparticles are produced by a laser beam hitting a solid target in a liquid; the nanoparticles form during the condensation of the resulting plasma plume. Since the ablation occurs in a liquid rather than in air, vacuum, or gas, the plume expands, cools, and condenses at higher temperature, pressure and density, i.e. under stronger confinement. These conditions allow for more refined and smaller nanoparticles. [ 1 ] [ 2 ] LASiS is usually considered a top-down physical approach. It emerged as a reliable alternative to traditional chemical reduction methods for obtaining noble metal nanoparticles (NMNp). [ 1 ] LASiS is also used for the synthesis of silver nanoparticles (AgNPs), which are known for their antimicrobial effects. Producing AgNPs via LASiS yields nanoparticles with varying antimicrobial characteristics, obtained by fine-tuning the NP size during liquid ablation. [ 4 ]
LASiS has some limitations in the size control of NMNp, which can be overcome by laser treatment of the NMNp. Other disadvantages of LASiS include the slow rate of NP production, high energy consumption, the cost of laser equipment, and decreased ablation efficiency the longer the laser is used within a session. [ 1 ] Advantages of LASiS include minimal waste production, minimal manual operation, and refined size control of the nanoparticles. [ 1 ] [ 3 ]
| https://en.wikipedia.org/wiki/Laser_ablation_synthesis_in_solution
A laser beam profiler captures, displays, and records the spatial intensity profile of a laser beam at a particular plane transverse to the beam propagation path. Since there are many types of lasers— ultraviolet , visible , infrared , continuous wave , pulsed, high-power, low-power—there is an assortment of instrumentation for measuring laser beam profiles. No single laser beam profiler can handle every power level, pulse duration, repetition rate, wavelength , and beam size.
Laser beam profiling instruments measure quantities such as the beam width, beam quality (M²), beam divergence, astigmatism, and beam wander or jitter; each of these is discussed in the sections below.
A variety of instruments and techniques have been developed to obtain these beam characteristics, including scanning-aperture (knife-edge and slit) profilers and camera-based (CCD) profilers, both described below.
As of 2002, commercial knife-edge measurement systems cost $5,000–$12,000 USD and CCD beam profilers cost $4,000–$9,000 USD. [ 1 ] The cost of CCD beam profilers has come down in recent years, primarily driven by lower silicon CCD sensor costs, and as of 2008 they can be found for less than $1,000 USD.
Applications of laser beam profiling include precise laser alignment, monitoring the health of laser systems, and characterizing beams for uses such as fiber coupling and laser machining; several of these are discussed below.
The beam width is the single most important characteristic of a laser beam profile. At least five definitions of beam width are in common use: D4σ, 10/90 or 20/80 knife-edge, 1/e², FWHM, and D86. The D4σ beam width is the ISO standard definition and the measurement of the M² beam quality parameter requires the measurement of the D4σ widths. [ 2 ] [ 3 ] [ 4 ] The other definitions provide complementary information to the D4σ and are used in different circumstances. The choice of definition can have a large effect on the beam width number obtained, and it is important to use the correct method for any given application. [ 5 ] The D4σ and knife-edge widths are sensitive to background noise on the detector, while the 1/e² and FWHM widths are not. The fraction of total beam power encompassed by the beam width depends on which definition is used.
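To make the difference between definitions concrete, here is a minimal Python sketch (the function names are illustrative, not from any standard library) that computes the D4σ and FWHM widths of a sampled one-dimensional profile; for a pure Gaussian the D4σ width is exactly twice the 1/e² radius, while the FWHM is about 1.18 times that radius:

```python
import numpy as np

def d4sigma_width(x, intensity):
    """Second-moment (D4sigma) width: four times the intensity-weighted
    standard deviation of the position coordinate (uniform sampling assumed)."""
    power = intensity.sum()
    centroid = (x * intensity).sum() / power
    variance = ((x - centroid) ** 2 * intensity).sum() / power
    return 4.0 * np.sqrt(variance)

def fwhm_width(x, intensity):
    """Full width at half maximum, found by scanning the samples."""
    above = x[intensity >= 0.5 * intensity.max()]
    return above[-1] - above[0]

# Example: a Gaussian profile with 1/e^2 radius w = 1 mm.
x = np.linspace(-5e-3, 5e-3, 4001)   # position samples, metres
w = 1e-3
profile = np.exp(-2 * (x / w) ** 2)

print(d4sigma_width(x, profile))     # ~2w = 2.0e-3 m for a pure Gaussian
print(fwhm_width(x, profile))        # ~1.18*w = 1.18e-3 m
```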
The M² parameter is a measure of beam quality; a low M² value indicates good beam quality and the ability to be focused to a tight spot. The value M is equal to the ratio of the beam's angle of divergence to that of a Gaussian beam with the same D4σ waist width. Since the Gaussian beam diverges more slowly than any other beam shape, the M² parameter is always greater than or equal to one. Other definitions of beam quality have been used in the past, but the one using second moment widths is most commonly accepted. [ 6 ]
Beam quality is important in many applications. In fiber-optic communications beams with an M² close to 1 are required for coupling to single-mode optical fiber . Laser machine shops care about the M² parameter of their lasers because the beams will focus to an area that is M⁴ times larger than that of a Gaussian beam with the same wavelength and D4σ waist width before focusing; in other words, the fluence scales as 1/M⁴. The rule of thumb is that M² increases as the laser power increases. It is difficult to obtain excellent beam quality and high average power (100 W to kW) due to thermal lensing in the laser gain medium .
The M² parameter is determined experimentally by measuring the D4σ width of the beam at several positions along an artificial focus and fitting the ISO-standard hyperbola to the measured widths. [ 2 ]
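A sketch of this fit, following the ISO 11146 hyperbola method (the function names and the synthetic data are illustrative assumptions, not a library API):

```python
import numpy as np

def m_squared(z, d4sigma, wavelength):
    """Fit d(z)^2 = a + b*z + c*z^2 and extract waist diameter, waist
    location, far-field divergence and M^2 (ISO 11146 hyperbola fit)."""
    c, b, a = np.polyfit(z, np.asarray(d4sigma) ** 2, 2)  # highest power first
    z0 = -b / (2 * c)                   # waist location along the axis
    d0 = np.sqrt(a - b ** 2 / (4 * c))  # waist diameter
    theta = np.sqrt(c)                  # far-field full divergence angle
    m2 = (np.pi / (8 * wavelength)) * np.sqrt(4 * a * c - b ** 2)
    return m2, d0, z0, theta

# Synthetic data: an ideal Gaussian (M^2 = 1), waist diameter 200 um at z = 0.
lam = 633e-9
d0_true = 200e-6
zR = np.pi * (d0_true / 2) ** 2 / lam       # Rayleigh length
z = np.linspace(-3 * zR, 3 * zR, 15)
d = d0_true * np.sqrt(1 + (z / zR) ** 2)
print(m_squared(z, d, lam))                 # M^2 very close to 1.0
```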
Beam profilers measure the intensity, |E|², of the laser beam profile but do not yield any information about the phase of the E-field. To completely characterize the E-field at a given plane, both the phase and amplitude profiles must be known. The real and imaginary parts of the electric field can be characterized using two CCD beam profilers that sample the beam at two separate propagation planes, with the application of a phase recovery algorithm to the captured data. The benefit of completely characterizing the E-field in one plane is that the E-field profile can be computed for any other plane with diffraction theory .
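A common sketch of such a phase-recovery algorithm is a two-plane Gerchberg–Saxton-style iteration: numerically propagate a trial field back and forth between the two measurement planes, replacing the amplitude at each plane with the measured one while keeping the evolving phase. The version below is a minimal illustration assuming square images `I1` and `I2` (hypothetical measured intensities) on a uniform grid of spacing `dx`:

```python
import numpy as np

def propagate(field, distance, wavelength, dx):
    """Angular-spectrum propagation of a sampled complex field (square grid)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # kz = 2*pi*sqrt(1/lambda^2 - fx^2 - fy^2); evanescent components clamped
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

def recover_phase(I1, I2, distance, wavelength, dx, iterations=200):
    """Find a phase at plane 1 consistent with both measured intensities."""
    a1, a2 = np.sqrt(I1), np.sqrt(I2)
    field = a1.astype(complex)                  # start with a flat (zero) phase
    for _ in range(iterations):
        f2 = propagate(field, distance, wavelength, dx)
        f2 = a2 * np.exp(1j * np.angle(f2))     # enforce plane-2 amplitude
        f1 = propagate(f2, -distance, wavelength, dx)
        field = a1 * np.exp(1j * np.angle(f1))  # enforce plane-1 amplitude
    return np.angle(field)
```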
The M² parameter is not the whole story in specifying beam quality. A low M² only implies that the second moment of the beam profile expands slowly. Nevertheless, two beams with the same M² may not have the same fraction of delivered power in a given area. Power-in-the-bucket and Strehl ratio are two attempts to define beam quality as a function of how much power is delivered to a given area. Unfortunately, there is no standard bucket size (D86 width, Gaussian beam width, Airy disk nulls, etc.) or bucket shape (circular, rectangular, etc.), and there is no standard reference beam for the Strehl ratio. Therefore, these definitions must always be specified before a number is given, which makes comparing lasers difficult. There is also no simple conversion between M², power-in-the-bucket, and Strehl ratio. The Strehl ratio, for example, has been defined as the ratio of the peak focal intensities in the aberrated and ideal point spread functions . In other cases, it has been defined as the ratio of the peak intensity of an image divided by the peak intensity of a diffraction-limited image with the same total flux . [ 8 ] [ 9 ] Since there are many ways power-in-the-bucket and Strehl ratio have been defined in the literature, the recommendation is to stick with the ISO-standard M² definition for the beam quality parameter and be aware that a Strehl ratio of 0.8, for example, does not mean anything unless it is accompanied by a definition.
The beam divergence of a laser beam is a measure of how fast the beam expands far from the beam waist. It is usually defined as the derivative of the beam radius with respect to the axial position in the far field, i.e., at a distance from the beam waist much larger than the Rayleigh length. This definition yields a divergence half-angle. (Sometimes, full angles are used in the literature; these are twice as large.) For a diffraction-limited Gaussian beam, the beam divergence is λ/(πw₀), where λ is the wavelength (in the medium) and w₀ is the beam radius (radius with 1/e² intensity) at the beam waist. A large beam divergence for a given beam radius corresponds to poor beam quality. A low beam divergence can be important for applications such as pointing or free-space optical communications . Beams with very small divergence, i.e., with approximately constant beam radius over significant propagation distances, are called collimated beams . For the measurement of beam divergence, one usually measures the beam radius at different positions, using e.g. a beam profiler. It is also possible to derive the beam divergence from the complex amplitude profile of the beam in a single plane: spatial Fourier transforms deliver the distribution of transverse spatial frequencies , which are directly related to propagation angles. See the US Laser Corps application note [ 10 ] for a tutorial on how to measure the laser beam divergence with a lens and CCD camera.
Astigmatism in a laser beam occurs when the horizontal and vertical cross sections of the beam focus at different locations along the beam path. Astigmatism can be corrected with a pair of cylindrical lenses . The metric for astigmatism is the power of the cylindrical lens needed to bring the focuses of the horizontal and vertical cross sections together. Common causes of astigmatism include passing the beam through tilted optical elements, Brewster-angled windows in the laser cavity, and the inherently astigmatic output of edge-emitting laser diodes.
Astigmatism can easily be characterized by a CCD beam profiler by observing where the x and y beam waists occur as the profiler is translated along the beam path.
Every laser beam wanders and jitters, albeit by a small amount. The typical kinematic tip-tilt mount drifts by around 100 μrad per day in a laboratory environment ( vibration isolation via optical table , constant temperature and pressure, and no sunlight that causes parts to heat). A laser beam incident upon this mirror will be translated by 100 m at a range of 1000 km. This could make the difference between hitting or not hitting a communications satellite from Earth. Hence, there is a lot of interest in characterizing the beam wander (slow time scale) or jitter (fast time scale) of a laser beam. The beam wander and jitter can be measured by tracking the centroid or peak of the beam on a CCD beam profiler. The CCD frame rate is typically 30 frames per second and therefore can capture beam jitter that is slower than 30 Hz; it cannot see fast vibrations due to one's voice, 60 Hz fan motor hum, or other sources of fast vibration. Fortunately, this is usually not a great concern for most laboratory laser systems, and the frame rates of CCDs are fast enough to capture the beam wander over the bandwidth that contains the greatest noise power. A typical beam wander measurement involves tracking the centroid of the beam over several minutes. The rms deviation of the centroid data gives a clear picture of the laser beam pointing stability. The integration time of the beam jitter measurement should always accompany the computed rms value. Even though the pixel resolution of a camera may be several micrometres, sub-pixel centroid resolution (potentially tens of nanometres) is attained when the signal-to-noise ratio is good and the beam fills most of the CCD active area. [ 11 ]
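The analysis behind such a measurement reduces to a few lines; the following is a minimal sketch assuming `frames` is a 3-D NumPy array of captured CCD images (an illustrative input, not a specific camera API):

```python
import numpy as np

def centroids(frames):
    """Intensity-weighted centroid (x, y) of each frame, in pixel units."""
    n, h, w = frames.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = frames.sum(axis=(1, 2))
    cx = (frames * xs).sum(axis=(1, 2)) / total
    cy = (frames * ys).sum(axis=(1, 2)) / total
    return cx, cy

def pointing_jitter_rms(frames, pixel_pitch_m):
    """RMS deviation of the beam centroid over the capture, in metres.
    Report this together with the total integration time of the capture."""
    cx, cy = centroids(frames)
    return pixel_pitch_m * np.sqrt(np.var(cx) + np.var(cy))
```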
Beam wander is caused by factors such as slow thermal drift of optical mounts, air currents and turbulence along the beam path, and mechanical vibrations.
It is to most laser manufacturers' advantage to present specifications in a way that shows their product in the best light, even if this involves misleading the customer. Laser performance specifications can be clarified by asking how each number was measured: for example, which beam width definition was used, and over what integration time the pointing stability was recorded.
Beam profilers generally fall into two classes: the first uses a simple photodetector behind an aperture which is scanned over the beam. The second class uses a camera to image the beam. [ 12 ]
The most common scanning aperture techniques are the knife-edge technique and the scanning-slit profiler. The former chops the beam with a knife and measures the transmitted power as the blade cuts through the beam. The measured intensity versus knife position yields a curve that is the integrated beam intensity in one direction. By measuring the intensity curve in several directions, the original beam profile can be reconstructed using algorithms developed for x-ray tomography . One measuring instrument is based on multiple high-precision knife edges, each deployed on a rotating drum at a different angle with respect to the beam orientation. The scanned beam is then reconstructed using tomographic algorithms, providing 2D or 3D high resolution energy distribution plots. Because of this scanning technique, the system automatically zooms in on the current beam size, enabling high resolution measurements of sub-micron beams as well as relatively large beams of 10 or more millimetres. To obtain measurements at various wavelengths, different detectors are used, allowing laser beam measurements from deep UV to far IR. Unlike other camera based systems, this technology also provides accurate power measurement in real time.
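In its simplest single-direction form, the knife-edge scan yields the integrated power as a function of blade position, and a 1-D profile is recovered by differentiation; a minimal sketch (the multi-angle tomographic reconstruction is considerably more involved):

```python
import numpy as np
from scipy.special import erf   # only used to build the synthetic example

def profile_from_knife_edge(positions, transmitted_power):
    """Recover the integrated 1-D beam profile from a knife-edge scan.

    As the blade advances it blocks more of the beam, so the measured
    power P(x) falls; the profile is the negative derivative -dP/dx."""
    return -np.gradient(transmitted_power, positions)

# Synthetic scan: a blade cutting across a Gaussian beam (1/e^2 radius w).
x = np.linspace(-3e-3, 3e-3, 601)
w = 1e-3
power = 0.5 * (1 - erf(np.sqrt(2) * x / w))  # normalized transmitted power
profile = profile_from_knife_edge(x, power)  # peaks at the beam centre, x = 0
```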
Scanning-slit profilers use a narrow slit instead of a single knife edge. In this case, the intensity is integrated over the slit width. The resulting measurement is equivalent to the original cross section convolved with the profile of the slit.
This fusion of knife-edge technology and tomographic algorithms creates a new field of beam profiling, CKET (Computerized Knife-Edge Tomography). It enables accurate measurement from a micron to over 10 millimetres with adaptable resolution over a wide spectral range; practically, if a single-surface detector exists for a certain wavelength region, an image-like profile can be derived using this technology. [ 13 ]
These techniques can measure very small spot sizes down to 1 μm, and can be used to directly measure high power beams. They do not offer continuous readout, although repetition rates as high as twenty hertz can be achieved. Also, the profiles give integrated intensities in the x and y directions and not the actual 2D spatial profile (integrating intensities can be hard to interpret for complicated beam profiles). They do not generally work for pulsed laser sources, because of the extra complexity of synchronizing the motion of the aperture and the laser pulses. [ 14 ]
The CCD camera technique is simple: attenuate and shine a laser onto a CCD and measure the beam profile directly. It is for this reason that the camera technique is the most popular method for laser beam profiling. The most popular cameras used are silicon CCDs that have sensor diameters that range up to 25 mm (1 inch) and pixel sizes down to a few micrometres. These cameras are also sensitive to a broad range of wavelengths, from deep UV , 200 nm, to near infrared , 1100 nm; this range of wavelengths encompasses a broad range of laser gain media. The chief advantages of the CCD camera technique are the direct, real-time capture of the full two-dimensional beam profile and this broad wavelength sensitivity.
The main disadvantages of the CCD camera technique are that the sensors saturate easily, so the beam must almost always be attenuated (see below), and that the pixel size sets a lower limit on the spot sizes that can be resolved.
The D4σ width is sensitive to the beam energy or noise in the tail of the pulse because the pixels that are far from the beam centroid contribute to the D4σ width as the distance squared. To reduce the error in the D4σ width estimate, the baseline pixel values are subtracted from the measured signal. The baseline values for the pixels are measured by recording the values of the CCD pixels with no incident light. The finite value is due to dark current , readout noise , and other noise sources. For shot-noise -limited noise sources, baseline subtraction improves the D4σ width estimate as √N, where N is the number of pixels in the wings. Without baseline subtraction, the D4σ width is overestimated.
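The effect is easy to demonstrate numerically. Reusing the illustrative `d4sigma_width` function sketched earlier, a constant offset of only 0.1% of the peak inflates the apparent width badly unless it is subtracted:

```python
import numpy as np

x = np.linspace(-10e-3, 10e-3, 8001)    # 20 mm window, metres
w = 1e-3
signal = np.exp(-2 * (x / w) ** 2)
baseline = 1e-3 * np.ones_like(x)       # constant 0.1%-of-peak offset
measured = signal + baseline

print(d4sigma_width(x, measured))             # ~3.5 mm: badly overestimated
print(d4sigma_width(x, measured - baseline))  # ~2.0 mm: the true D4sigma
```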
Averaging consecutive CCD images yields a cleaner profile and removes both CCD imager noise and laser beam intensity fluctuations. The signal-to-noise ratio (SNR) of a pixel for a beam profile is defined as the mean value of the pixel divided by its root-mean-square (rms) value. The SNR improves as the square root of the number of captured frames for shot noise processes – dark current noise, readout noise, and Poissonian detection noise. So, for example, increasing the number of averages by a factor of 100 smooths out the beam profile by a factor of 10.
Since CCD sensors are highly sensitive, attenuation is almost always needed for proper beam profiling. For example, 40 dB (ND 4, or 10⁻⁴) of attenuation is typical for a milliwatt HeNe laser . Proper attenuation reduces the power to a level the sensor can handle without distorting the beam profile, without thermal lensing or optical damage in the attenuator, and without introducing ghost images or interference fringes.
For laser beam profiling with CCD sensors, typically two types of attenuators are used: neutral density filters , and wedges or thick optical flats.
Neutral density (ND) filters come in two types: absorptive and reflective.
Absorptive filters are usually made of tinted glass. They are useful for lower-power applications that involve up to about 100 mW average power. Above those power levels, thermal lensing may occur, causing beam size change or deformation, because of the low thermal conductivity of the substrate (usually a glass). Higher power may result in melting or cracking. Absorptive filter attenuation values are usually valid for the visible spectrum (500–800 nm) and are not valid outside of that spectral region. Some filters can be ordered and calibrated for near-infrared wavelengths, up to the long wavelength absorption edge of the substrate (around 2.2 μm for glasses). Typically, one can expect about 5–10% variation of the attenuation across a 2-inch (51 mm) ND filter, unless specified otherwise to the manufacturer. The attenuation values of ND filters are specified logarithmically. An ND 3 filter transmits 10⁻³ of the incident beam power. Placing the largest attenuator last before the CCD sensor will result in the best rejection of ghost images due to multiple reflections.
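Because ND values are logarithmic, the values of stacked filters simply add; a quick sanity check (illustrative function name):

```python
def transmitted_power(power_in_watts, nd_values):
    """Power after a stack of ND filters; ND values add logarithmically."""
    return power_in_watts * 10 ** (-sum(nd_values))

# A 1 mW HeNe beam through ND 3 + ND 1 (40 dB total attenuation):
print(transmitted_power(1e-3, [3, 1]))   # 1e-07 W, i.e. 100 nW at the sensor
```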
Reflective filters are made with a thin metallic coating and hence operate over a larger bandwidth. An ND 3 metallic filter will be good over 200–2000 nm. The attenuation will rapidly increase outside this spectral region because of absorption in the glass substrate. These filters reflect rather than absorb the incident power, and hence can handle higher input average powers. However, they are less well suited to the high peak powers of pulsed lasers. These filters work well up to about 5 W average power (over about 1 cm² illumination area) before heating causes them to crack. Since these filters reflect light, one must be careful when stacking multiple ND filters, since multiple reflections among the filters will cause a ghost image to interfere with the original beam profile. One way to mitigate this problem is by tilting the ND filter stack. Assuming that the absorption of the metallic ND filter is negligible, the order of the ND filter stack doesn't matter, as it does for absorptive filters.
Diffractive beam samplers are used to monitor high power lasers where optical losses and wavefront distortions of the transmitted beam need to be kept to a minimum.
In most applications, most of the incident light must continue forward "unaffected" in the zeroth diffraction order, while a small amount of the beam is diffracted into a higher order, providing a "sample" of the beam.
By directing the sampled light in the higher order(s) onto a detector, it is possible to monitor, in real time, not only the power levels of a laser beam, but also its profile, and other laser characteristics.
Optical wedges and reflections from uncoated optical glass surfaces are used to attenuate high power laser beams. About 4% is reflected from the air/glass interface, and several wedges can be used to greatly attenuate the beam to levels that can then be attenuated further with ND filters. The angle of the wedge is typically selected so that the second reflection from the surface does not hit the active area of the CCD, and so that no interference fringes are visible. The farther the CCD is from the wedge, the smaller the angle required. Wedges have the disadvantage of both translating and bending the beam direction; paths will no longer lie on convenient rectangular coordinates.

Rather than using a wedge, an optical-quality thick glass plate tilted to the beam can also work; this is effectively a wedge with a 0° angle. The thick glass will translate the beam but it will not change the angle of the output beam. The glass must be thick enough so that the beam does not overlap with itself to produce interference fringes, and if possible so that the secondary reflection does not illuminate the active area of the CCD. The Fresnel reflection of a beam from a glass plate is different for the s- and p-polarizations (s is parallel to the surface of the glass, and p is perpendicular to s) and changes as a function of angle of incidence; keep this in mind if the two polarizations are expected to have different beam profiles. To prevent distortion of the beam profile, the glass should be of optical quality: surface flatness of λ/10 (λ = 633 nm) and scratch-dig of 40-20 or better.

A half-wave plate followed by a polarizing beam splitter forms a variable attenuator, and this combination is often used in optical systems. A variable attenuator made in this fashion is not recommended for beam profiling applications because: (1) the beam profile in the two orthogonal polarizations may be different, (2) the polarizing beam cube may have a low optical damage threshold, and (3) the beam can be distorted in cube polarizers at very high attenuation. Inexpensive cube polarizers are formed by cementing two right-angle prisms together. The glue does not stand up well to high powers; the intensity should be kept under 500 mW/mm². Single-element polarizers are recommended for high powers.
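The polarization dependence mentioned above follows from the Fresnel equations; a short sketch evaluating the s- and p-reflectances of an uncoated surface, assuming n = 1.5 glass purely for illustration:

```python
import numpy as np

def fresnel_reflectance(theta_i_deg, n1=1.0, n2=1.5):
    """Power reflectances (Rs, Rp) at a dielectric interface."""
    ti = np.radians(theta_i_deg)
    tt = np.arcsin(n1 * np.sin(ti) / n2)   # Snell's law
    rs = (n1*np.cos(ti) - n2*np.cos(tt)) / (n1*np.cos(ti) + n2*np.cos(tt))
    rp = (n2*np.cos(ti) - n1*np.cos(tt)) / (n2*np.cos(ti) + n1*np.cos(tt))
    return rs**2, rp**2

print(fresnel_reflectance(0))     # ~(0.04, 0.04): the 4% normal-incidence value
print(fresnel_reflectance(45))    # ~(0.092, 0.0085): s and p differ strongly
print(fresnel_reflectance(56.3))  # near Brewster's angle: Rp approaches 0
```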
There are two competing requirements that determine the optimal beam size on the CCD detector. One requirement is that the entire energy, or as much of it as possible, of the laser beam is incident on the CCD sensor. This would imply that we should focus all the energy in the center of the active region in as small a spot as possible, using only a few of the central pixels, to ensure that the tails of the beam are captured by the outer pixels. This is one extreme. The second requirement is that we need to adequately sample the beam profile shape. As a rule of thumb, we want at least 10 pixels across the area that encompasses most, say 80%, of the energy in the beam. Therefore, there is no hard and fast rule for selecting the optimal beam size. As long as the CCD sensor captures over 90% of the beam energy and has at least 10 pixels across it, the beam width measurements will be reasonably accurate.
The larger the CCD sensor, the larger the size of beam that can be profiled. Sometimes this comes at the cost of larger pixel sizes. Small pixel sizes are desired for observing focused beams. A CCD with many megapixels is not always better than a smaller array since readout times on the computer can be uncomfortably long. Reading out the array in real-time is essential for any tweaking or optimization of the laser profile.
A far-field beam profiler is nothing more than a beam profiler placed at the focus of a lens. This plane is sometimes called the Fourier plane and shows the profile that one would see if the beam propagated very far away. The beam at the Fourier plane is the Fourier transform of the input field. Care must be taken in setting up a far-field measurement. The focused spot size must be large enough to span several pixels. The spot size is approximately fλ/D, where f is the focal length of the lens, λ is the wavelength of the light, and D is the diameter of the collimated beam incident upon the lens. For example, a helium-neon laser (633 nm) with 1 mm beam diameter would focus to a 317 μm spot with a 500 mm lens. A laser beam profiler with a 5.6 μm pixel size would adequately sample the spot at 56 locations.
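The numbers in this example are easy to reproduce (illustrative function name):

```python
def farfield_sampling(focal_length, wavelength, beam_diameter, pixel_size):
    """Approximate focused spot size f*lambda/D and the pixels spanning it."""
    spot = focal_length * wavelength / beam_diameter
    return spot, spot / pixel_size

spot, pixels = farfield_sampling(0.5, 633e-9, 1e-3, 5.6e-6)
print(spot, pixels)   # ~3.17e-4 m (317 um) across ~56 pixels
```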
The prohibitive costs of CCD laser beam profilers in the past have given way to low-cost beam profilers. Low-cost beam profilers have opened up a number of new applications: replacing irises for super-accurate alignment and simultaneous multiple port monitoring of laser systems.
In the past, alignment of laser beams was done with irises. Two irises uniquely defined a beam path; the farther apart the irises and the smaller the iris holes, the better the path was defined. The smallest aperture that an iris can define is about 0.8 mm. In comparison, the centroid of a laser beam can be determined to sub-micrometre accuracy with a laser beam profiler. The laser beam profiler's effective aperture size is three orders of magnitude smaller than that of an iris. Consequently, the ability to define an optical path is 1000 times better when using beam profilers over irises. Applications that need microradian alignment accuracies include earth-to-space communications, earth-to-space ladar, master oscillator to power oscillator alignment, and multi-pass amplifiers .
Experimental laser systems benefit from the use of multiple laser beam profilers to characterize the pump beam, the output beam, and the beam shape at intermediate locations in the laser system, for example, after a Kerr-lens modelocker . Changes in the pump laser beam profile indicate the health of the pump laser, which laser modes are excited in the gain crystal , and also determine whether the laser is warmed up by locating the centroid of the beam relative to the breadboard . The output beam profile is often a strong function of pump power due to thermo-optical effects in the gain medium. | https://en.wikipedia.org/wiki/Laser_beam_profiler |
A laser broom is a proposed ground-based laser beam-powered propulsion system that sweeps space debris out of the path of artificial satellites (such as the International Space Station ) to prevent collateral damage to space equipment. It heats up one side of the debris to shift its orbit trajectory, altering the path to hit the atmosphere sooner. Space researchers have proposed that a laser broom may help mitigate Kessler syndrome , a runaway cascade of collision events between orbiting objects. [ 1 ] Additionally, laser broom systems mounted on satellites or space station have also been proposed. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Laser brooms are proposed to target space debris between one and ten centimetres (0.4–3.9 in) in diameter. Collisions with such high-velocity debris not only cause considerable damage to satellites but also generate secondary fragments from the damaged satellite parts. A laser broom is intended to be used at high power so that it can penetrate the atmosphere and ablate material from the targeted debris. [ 6 ] The ablated material imparts a small thrust that lowers the debris's orbital perigee towards the upper atmosphere, thereby increasing drag so that its remaining orbital life is cut short. [ 7 ] The laser would operate in a pulsed fashion to prevent the target from shielding itself with its own ablated plasma. The power levels of lasers in this concept are well below the power levels in concepts for more rapidly effective anti-satellite weapons.
Research in this field reveals the precise physical constraints required, noting that the orientation of the space debris strongly affects the resultant trajectory of the ablated object. [ 8 ] [ 9 ] Using a laser guide star and adaptive optics, a sufficiently large ground-based laser (a 1 megajoule pulsed HF laser) could offset the orbits of dozens of debris objects per day at a reasonable cost. [ 1 ] [ 10 ]
The Space Shuttle routinely showed evidence of "tiny" impacts upon post-flight inspection. [ 11 ]
Orion was a proposed ground-based laser broom project in the 1990s, estimated to cost $500 million. [ 12 ] [ 13 ] [ 14 ]
A space-based laser also called "Project Orion" was planned to be installed on the International Space Station in 2003. [ 15 ] [ 16 ] In 2015, Japanese researchers proposed adding laser broom capabilities to the Extreme Universe Space Observatory telescope, to be launched to the ISS in 2017. [ 17 ] [ 18 ] [ 19 ] [ 5 ]
In 2014, the European CLEANSPACE project published a report studying a global architecture of debris tracking and removal laser stations. [ 20 ] [ 21 ] | https://en.wikipedia.org/wiki/Laser_broom |
Laser capture microdissection ( LCM ), also called microdissection , laser microdissection ( LMD ), or laser-assisted microdissection ( LMD or LAM ), is a method for isolating specific cells of interest from microscopic regions of tissue, cells, or organisms [ 1 ] [ 2 ] ( dissection on a microscopic scale with the help of a laser ).
Laser-capture microdissection (LCM) is a method to procure subpopulations of tissue cells under direct microscopic visualization. LCM technology can harvest the cells of interest directly or can isolate specific cells by cutting away unwanted cells to give histologically pure enriched cell populations. A variety of downstream applications exist: DNA genotyping and loss of heterozygosity (LOH) analysis, RNA transcript profiling, cDNA library generation, proteomics discovery and signal-pathway profiling. The total time required to carry out this protocol is typically 1–1.5 h. [ 3 ]
A laser is coupled into a microscope and focused onto the tissue on the slide. By moving the laser with the optics or by moving the stage, the focus follows a trajectory predefined by the user. This trajectory, also called the element , is then cut out and separated from the adjacent tissue. After the cutting process, an extraction step follows, if extraction is desired. More recent technologies utilize non-contact microdissection.
There are several ways to extract tissue from a microscope slide holding a histopathology sample. One is to press a sticky surface onto the sample and tear it out. This extracts the desired region, but because the surface is not selective it can also remove particles or unwanted tissue. Another is to melt a plastic membrane onto the sample and tear it out. The heat is introduced, for example, by a red or infrared (IR) laser onto a membrane stained with an absorbing dye. This adheres the desired sample to the membrane, but, as with any membrane placed close to the histopathology sample surface, some debris may be extracted along with it. Another danger is the introduced heat: molecules such as DNA, RNA, or proteins tolerate little or no heating if they are to be isolated as purely as possible.
For transport without contact, there are three different approaches: transport by gravity using an upright microscope (called GAM, gravity-assisted microdissection ); transport by laser pressure catapult ; and, in the most recent generation, a technology based on laser induced forward transfer (LIFT). With cut-and-capture, a cap coated with an adhesive is positioned directly on the thinly cut (5–8 μm) tissue section, the section itself resting on a thin membrane (polyethylene naphthalate). An IR laser gently heats the adhesive on the cap, fusing it to the underlying tissue, and a UV laser cuts through the tissue and underlying membrane. The membrane-tissue entity now adheres to the cap, and the cells on the cap can be used in downstream applications (DNA, RNA, protein analysis). [ 4 ]
Under a microscope, using a software interface, a tissue section (typically 5–50 micrometres thick) is viewed and individual cells or clusters of cells are identified either manually or in semi-automated or fully automated ways, allowing imaging and then automatic selection of targets for isolation. Currently six primary isolation/collection technologies exist using a microscope and a device for cell isolation. Four of these typically use an ultraviolet pulsed laser (355 nm) to cut the tissue directly or the membranes/film, sometimes in combination with an IR laser responsible for heating/melting a sticky polymer for cellular adhesion and isolation. The IR laser provides a gentler approach to microdissection. A fifth, ultraviolet-laser-based technology uses special slides coated with an energy transfer coating which, when activated by the laser pulse, propels the tissue or cells into a collection cap.
The laser cutting width is usually less than 1 μm, thus the target cells are not affected by the laser beam. Even live cells are not damaged by the laser cutting and are viable after cutting for cloning and reculturing as appropriate. [ 5 ]
The various technologies differ in the collection process, possible imaging methods ( fluorescence microscopy / bright field microscopy / differential interference contrast microscopy / phase contrast microscopy / etc.) and the types of holders and tissue preparation needed before imaging and isolation. Most are primarily dedicated micro-dissection systems, and some can be used as research microscopes as well; only one technology (#2 here, Leica) uses an upright microscope, which somewhat limits sample handling capabilities, especially for live cell work.
The first technology (used by Carl Zeiss PALM) cuts around the sample and then collects it by a "catapulting" technology. The sample can be catapulted from a slide or special culture dish by a defocused UV laser pulse which generates a photonic force to propel the material off the slide/dish, a technique sometimes called Laser Micro-dissection Pressure Catapulting (LMPC). The dissected material is sent upward (up to several millimetres) to a microfuge tube cap or other collector, which contains either a buffer or a specialized tacky material in the tube cap that the tissue will adhere to. This active catapulting process avoids some of the static problems that arise when using membrane-coated slides. [ 6 ]
Another process follows the gravity-assisted microdissection method, relying on gravity to collect samples in a tube cap beneath the slide (used by the ION LMD system, Jungwoo F&B). This system moves the motorized stage to cut the cells of interest while keeping the laser beam fixed. The system uses a 355 nm solid-state laser ( UV-A ), which cuts the tissue while minimizing RNA or DNA damage. [ 7 ] [ failed verification ]
Another closely related LCM process (used by Leica) cuts the sample from above, and the sample drops by gravity (gravity-assisted microdissection) into a capture device below. [ 8 ] The difference from the previous system is that here the laser beam is moved, by means of a dichroic mirror, to cut the tissue.
When the cells of choice (on a slide or special culture dish) are in the center of the field of view, the operator selects the cells of interest using the instrument software. The area to be isolated is captured either by using a near-IR laser to activate a transfer film on a cap placed on the tissue sample, melting the adhesive, which then fuses the film with the underlying cells of choice (see Arcturus systems), and/or by activating a UV laser to cut out the cells of interest. The cells are then lifted off the thin tissue section, leaving all unwanted cells behind. The cells of interest are viewed and documented prior to extraction. [ 9 ]
The fourth UV-based technology (used by Molecular Machines and Industries AG) differs slightly from the third by essentially creating a sandwich of slide, sample, and a membrane overlying the sample, using a frame slide whose membrane surface is cut by the laser and ultimately picked up from above by a special adhesive cap. [ 10 ]
A fifth UV based technology uses standard glass slides coated with an inert energy transfer coating and a UV based laser microdissection system (typically a Leica LMD or PALM Zeiss machine). Tissue sections are mounted on top of the energy transfer coating. The energy from a UV laser is converted to kinetic energy upon striking the coating, vaporizing it, instantly propelling selected tissue features into the collection tube. The energy transfer coated slides, commercialized under the trade name DIRECTOR slides by Expression Pathology Inc. (Rockville, MD), offer several advantages for proteomic work. They also do not autofluoresce, so they can be used for applications using fluorescent stains, DIC or polarized light. [ 11 ]
In addition to tissue sections, LCM can be performed on living cells/organisms, cell smears, chromosome preparations, and plant tissue.
The laser capture microdissection process does not alter or damage the morphology and chemistry of the sample collected, nor the surrounding cells. For this reason, LCM is a useful method of collecting selected cells for DNA , RNA and/or protein analyses. LCM has also been used to isolate acellular structures, such as amyloid plaques . [ 12 ] LCM can be performed on a variety of tissue samples including blood smears , cytologic preparations, [ 13 ] cell cultures and aliquots of solid tissue. Frozen and paraffin embedded archival tissue may also be used. [ 14 ] | https://en.wikipedia.org/wiki/Laser_capture_microdissection |
Laser cooling includes several techniques where atoms , molecules , and small mechanical systems are cooled with laser light. The directed energy of lasers is often associated with heating materials, e.g. laser cutting , so it can be counterintuitive that laser cooling often results in sample temperatures approaching absolute zero . It is routinely used in atomic physics experiments where the laser-cooled atoms are manipulated and measured, and in technologies such as atom-based quantum computing architectures.
Laser cooling reduces the random motion of particles or the random vibrations of mechanical systems. For atoms and molecules this reduces Doppler shifts in spectroscopy, allowing for high precision measurements and instruments such as optical clocks . The reduction in thermal energy also allows for efficient loading of atoms and molecules into traps where they can be used in experiments or atom-based devices for longer periods of time.
Laser cooling relies on the momentum change when an object, such as an atom, absorbs and re-emits a photon (a particle of light). Atoms will be cooled in one dimension if they are illuminated by a pair of counter-propagating laser beams that are detuned below an atomic transition. The laser light will be preferentially absorbed from the laser beam that counter-propagates with respect to the atom's motion due to the Doppler effect . The absorbed light is re-emitted by the atom in a random direction. After this process is repeated many times, the random motion of the atoms will be reduced along the laser cooling axis. With three pairs of counter-propagating laser beams along all three axes, a warm cloud of atoms will be cooled in three dimensions. The atom cloud will expand more slowly because of the narrowing of the cloud's velocity distribution, which corresponds to a lower temperature and therefore colder atoms. For an ensemble of particles, the thermodynamic temperature is proportional to the variance in their velocity; the narrower the distribution of velocities, the lower the temperature of the particles.
Radiation pressure is the force that electromagnetic radiation exerts on matter. In 1873 Maxwell published his treatise on electromagnetism in which he predicted radiation pressure. [ 1 ] The force was experimentally demonstrated for the first time by Lebedev and reported at a conference in Paris in 1900, [ 2 ] and later published in more detail in 1901. [ 3 ] Following Lebedev's measurements Nichols and Hull also demonstrated the force of radiation pressure in 1901, [ 4 ] with a refined measurement reported in 1903. [ 5 ] [ 6 ]
Atoms and molecules have bound states and transitions can occur between these states in the presence of light that is near the transition frequency. Sodium is historically notable because it has a strong transition at 589 nm, a wavelength which is close to the peak sensitivity of the human eye. This made it relatively easy to see the interaction of light with sodium atoms. In 1933, Otto Frisch deflected an atomic beam of sodium atoms with light. [ 7 ] This was the first realization of radiation pressure acting on an atom or molecule.
The introduction of lasers in atomic physics experiments was the precursor to the laser cooling proposals in the mid 1970s. Laser cooling was proposed separately in 1975 by two different research groups: Hänsch and Schawlow , [ 8 ] and Wineland and Dehmelt . [ 9 ] Both proposals outlined the simplest laser cooling process, known as Doppler cooling , where laser light tuned below an atom's resonant frequency is preferentially absorbed by atoms moving towards the laser and after absorption a photon is emitted in a random direction. This process is repeated many times and in a configuration with counterpropagating laser cooling light the velocity distribution of the atoms is reduced. [ 10 ]
In 1977 Ashkin submitted a paper describing how Doppler cooling could be used to provide the necessary damping to load atoms into an optical trap. [ 11 ] In this work he emphasized how this could allow for long spectroscopic measurements, which would increase precision because the atoms would be held in place. He also discussed overlapping optical traps to study interactions between different atoms.
Following the laser cooling proposals, in 1978 two research groups (Wineland, Drullinger and Walls of NIST, and Neuhauser, Hohenstatt, Toschek and Dehmelt of the University of Washington) succeeded in laser cooling atoms. The NIST group wanted to reduce the effect of Doppler broadening on spectroscopy. They cooled magnesium ions in a Penning trap to below 40 K. The Washington group cooled barium ions.
Influenced by Wineland's work on laser cooling ions, William Phillips applied the same principles to laser cool neutral atoms. In 1982, he published the first paper where neutral atoms were laser cooled. [ 12 ] The process used is now known as the Zeeman slower and is a standard technique for slowing an atomic beam.
The 1997 Nobel Prize in Physics was awarded to Claude Cohen-Tannoudji , Steven Chu , and William Daniel Phillips "for development of methods to cool and trap atoms with laser light". [ 13 ]
The Doppler cooling limit for electric dipole transitions is typically in the hundreds of microkelvins. In the 1980s this limit was seen as the lowest achievable temperature. It was therefore a surprise when sodium atoms were cooled to 43 microkelvin, below their Doppler cooling limit of 240 microkelvin. [ 14 ] This unforeseen low temperature was explained by considering the interaction of polarized laser light with more atomic states and transitions; previous conceptions of laser cooling had been too simplistic. [ 15 ] The major laser cooling breakthroughs of the 1970s and 1980s led to several improvements to preexisting technology and to new discoveries at temperatures just above absolute zero . The cooling processes were utilized to make atomic clocks more accurate and to improve spectroscopic measurements, and led to the observation of a new state of matter at ultracold temperatures. [ 16 ] [ 15 ] The new state of matter, the Bose–Einstein condensate , was observed in 1995 by Eric Cornell , Carl Wieman , and Wolfgang Ketterle . [ 17 ]
Most laser cooling experiments bring the atoms close to rest in the laboratory frame, but cooling of relativistic atoms has also been achieved, where the effect of cooling manifests as a narrowing of the velocity distribution. In 1990, a group at JGU successfully laser-cooled a beam of ⁷Li⁺ at 13.3 MeV in a storage ring [ 18 ] from 260 K to lower than 2.9 K, using two counter-propagating lasers addressing the same transition, but at 514.5 nm and 584.8 nm, respectively, to compensate for the large Doppler shift .
Laser cooling of antimatter has also been demonstrated, first in 2021 by the ALPHA collaboration on antihydrogen atoms. [ 19 ] In 2024, positronium , made up of an electron and a positron, was laser cooled to about 1 K. [ 20 ]
Molecules are significantly more challenging to laser cool than atoms because molecules have vibrational and rotational degrees of freedom. These extra degrees of freedom result in more energy levels that can be populated from excited state decays, requiring more lasers compared to atoms to address the more complex level structure. Vibrational decays are particularly challenging because there are no symmetry rules that restrict the vibrational states that can be populated.
In 2010, a team at Yale led by Dave DeMille successfully laser-cooled a diatomic molecule . [ 21 ] In 2016, a group at MPQ successfully cooled formaldehyde to 420 μK via optoelectric Sisyphus cooling. [ 22 ] In 2022, a group at Harvard successfully laser cooled and trapped CaOH to 720(40) μK in a magneto-optical trap . [ 23 ]
Starting in the 2000s, laser cooling was applied to small mechanical systems , ranging from small cantilevers to the mirrors used in the LIGO observatory. These devices are either connected to a larger substrate, such as a mechanical membrane attached to a frame, or held in optical traps; in both cases the mechanical system is a harmonic oscillator. Laser cooling reduces the random vibrations of the mechanical oscillator, removing thermal phonons from the system.
In 2007, an MIT team successfully laser-cooled a macro-scale (1 gram) object to 0.8 K. [ 24 ] In 2011, a team from the California Institute of Technology and the University of Vienna became the first to laser-cool a (10 μm × 1 μm) mechanical object to its quantum ground state. [ 25 ]
The first realization of laser cooling and the most ubiquitous method for cooling atoms and molecules (so much so that it is often referred to simply as 'laser cooling'), is Doppler cooling .
Doppler cooling is by far the most common method of laser cooling. It is used to cool low density gases down to the Doppler cooling limit , which for rubidium (a popular choice in the field of atomic physics) is around 150 microkelvin . It is often combined with a magnetic field gradient to realize a magneto-optical trap .
In Doppler cooling, initially, the frequency of light is tuned slightly below an electronic transition in the atom . Because the light is detuned to the "red" (i.e., at lower frequency) of the transition, the atoms will absorb more photons if they move towards the light source, due to the Doppler effect . Thus if one applies light from two opposite directions, the atoms will always scatter more photons from the laser beam pointing opposite to their direction of motion. In each scattering event the atom loses a momentum equal to the momentum of the photon. If the atom, which is now in the excited state, then emits a photon spontaneously, it will be kicked by the same amount of momentum, but in a random direction. Since the initial momentum change is a pure loss (opposing the direction of motion), while the subsequent change is random, the probable result of the absorption and emission process is to reduce the momentum of the atom, and therefore its speed —provided its initial speed was larger than the recoil speed from scattering a single photon. If the absorption and emission are repeated many times, the average speed, and therefore the kinetic energy of the atom, will be reduced. Since the temperature of a group of atoms is a measure of the average random internal kinetic energy, this is equivalent to cooling the atoms.
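The velocity-dependent force described in this paragraph can be written down directly for a two-level atom. The following sketch evaluates the net scattering force from the two beams, using rubidium-like numbers purely as an assumed illustration:

```python
import numpy as np

hbar = 1.054571817e-34   # J s

def scattering_force(v, k, gamma, delta, s0):
    """Net radiation-pressure force on a two-level atom from two
    counter-propagating beams (1-D optical molasses).

    v: atom velocity (m/s); k: photon wavenumber (rad/m);
    gamma: natural linewidth (rad/s); delta: laser detuning from
    resonance (rad/s, negative = red); s0: per-beam saturation parameter."""
    def one_beam(sign):
        det = delta - sign * k * v   # Doppler-shifted detuning seen by the atom
        rate = (gamma / 2) * s0 / (1 + s0 + (2 * det / gamma) ** 2)
        return sign * hbar * k * rate
    return one_beam(+1) + one_beam(-1)

# Rubidium-like numbers (assumed): 780 nm light, Gamma = 2*pi * 6.07 MHz.
k = 2 * np.pi / 780e-9
gamma = 2 * np.pi * 6.07e6
v = np.linspace(-5, 5, 11)
F = scattering_force(v, k, gamma, delta=-gamma / 2, s0=0.1)
# For red detuning, F opposes v near v = 0: a viscous "molasses" damping force.
# The corresponding Doppler limit k_B*T = hbar*gamma/2 gives T ~ 146 uK here,
# consistent with the ~150 uK rubidium figure quoted above.
```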
When atoms are Doppler cooled in three dimensions, traditionally by 6 counter-propagating red-detuned laser beams, this is called optical molasses because the atoms move slowly, as if they are moving through molasses.
After Doppler cooling it is often helpful to cool atoms (or molecules) below their Doppler limit. This is accomplished with a variety of sub-Doppler cooling techniques. Different atomic structures are amenable to different sub-Doppler cooling techniques. For example, gray molasses is used with lithium and potassium because they have unresolved hyperfine structure in their excited states, where polarization gradient cooling would not work.
Sub-Doppler cooling methods include, for example, polarization gradient (Sisyphus) cooling, gray molasses, and Raman sideband cooling.
Other laser cooling methods include, for example, Zeeman slowing of atomic beams, resolved sideband cooling of trapped ions, and cavity-mediated cooling.
Laser cooling is ubiquitous in the field of atomic physics. Reducing the random motion of atoms has several benefits, including the ability to trap atoms with optical or magnetic fields. Spectroscopic measurements of a cold atomic sample will also have reduced systematic uncertainties due to thermal motion.
Often multiple laser cooling techniques are used in a single experiment to prepare a cold sample of atoms, which is then subsequently manipulated and measured. In a representative experiment, a vapor of strontium atoms is generated in a hot oven, and the atoms exit the oven as an atomic beam. After leaving the oven the atoms are Doppler cooled in two dimensions transverse to their motion to reduce loss of atoms due to divergence of the atomic beam. The atomic beam is then slowed and cooled with a Zeeman slower to optimize the atom loading efficiency into a magneto-optical trap (MOT), which Doppler cools the atoms operating on the ¹S₀ → ¹P₁ transition with lasers at 461 nm. The MOT then switches from light at 461 nm to light at 689 nm, driving the narrow ¹S₀ → ³P₁ transition, to realize even colder atoms. The atoms are then transferred into an optical dipole trap where evaporative cooling brings them to temperatures at which they can be effectively loaded into an optical lattice.
Laser cooling is important for quantum computing efforts based on neutral atoms and trapped atomic ions. In an ion trap Doppler cooling reduces the random motion of the ions so they form a well-ordered crystal structure in the trap. After Doppler cooling the ions are often cooled to their motional ground state to reduce decoherence during quantum gates between ions.
Photonic cooling is under development for use to cool chip hotspots in data centers. [ 30 ]
Laser cooling atoms requires scientific equipment that when assembled forms a cold atom machine. Such machines consist of two parts: a vacuum chamber which houses the laser cooled atoms and the laser systems used for cooling, as well as for preparing and manipulating atomic states and detecting the atoms. Laser cooling molecules generally requires more lasers and optical modulators (such as electro-optic and acousto-optic) to address the more complex molecular structure. Mechanical systems also need a vacuum system as the damping from background gases quickly equilibrates them with the gas's temperature. Mechanical systems usually need only one laser which is chosen for its reliability and coherence time, such as Nd:YAG lasers or Fiber lasers , as the mechanical devices are reflective over a very wide range of wavelengths.
In order for atoms to be laser cooled, the atoms cannot collide with room temperature background gas particles. Such collisions will drastically heat the atoms and knock them out of weak traps. Acceptable collision rates for cold atom machines typically require vacuum pressures of 10⁻⁹ Torr, and very often hundreds or even thousands of times lower pressures are necessary. To achieve these low pressures, a vacuum chamber is needed. The vacuum chamber typically includes windows so that the atoms can be addressed with lasers (e.g. for laser cooling) and so that light emitted by the atoms, or absorption of light by the atoms, can be detected. The vacuum chamber also requires an atomic source for the atom(s) to be laser cooled. The atomic source is generally heated to produce thermal atoms that can be laser cooled. For ion trapping experiments the vacuum system must also hold the ion trap, with the appropriate electric feedthroughs for the trap. Neutral atom systems very often employ a magneto-optical trap (MOT) as one of the early stages in collecting and cooling atoms. For a MOT, magnetic field coils are typically placed outside of the vacuum chamber to generate magnetic field gradients for the MOT.
The lasers required for cold atom machines depend entirely on the choice of atom. Each atom has unique electronic transitions at distinct wavelengths that must be driven for the atom to be laser cooled. Rubidium, for example, is a very commonly used atom which requires driving two transitions with laser light at 780 nm, separated by a few GHz. The light for rubidium can be generated from a single laser at 780 nm and an electro-optic modulator . Generally tens of mW (and often hundreds of mW to cool significantly more atoms) are used to cool neutral atoms. Trapped ions, on the other hand, require microwatts of optical power, as they are generally tightly confined and the laser light can be focused to a small spot size. The strontium ion, for example, requires light at both 422 nm and 1092 nm in order to be Doppler cooled. Because of the small Doppler shifts involved in laser cooling, very narrow lasers, with linewidths on the order of a few MHz, are required. Such lasers are generally stabilized to spectroscopy reference cells, optical cavities, or sometimes wavemeters so that the laser light can be precisely tuned relative to the atomic transitions. | https://en.wikipedia.org/wiki/Laser_cooling
Laser diffraction analysis , also known as laser diffraction spectroscopy , is a technology that utilizes diffraction patterns of a laser beam passed through any object ranging from nanometers to millimeters in size [ 1 ] to quickly measure geometrical dimensions of a particle. This particle size analysis process does not depend on volumetric flow rate , the amount of particles that passes through a surface over time. [ 2 ]
Laser diffraction analysis was originally based on the Fraunhofer diffraction theory, which states that the intensity of light scattered by a particle is directly proportional to the particle size. [ 4 ] The scattering angle and the particle size are inversely related: the angle of the diffracted laser beam increases as the particle size decreases, and vice versa. [ 5 ] The Mie scattering model, or Mie theory, has been used as an alternative to the Fraunhofer theory since the 1990s.
Commercial laser diffraction analyzers leave to the user the choice of using either Fraunhofer or Mie theory for data analysis, hence the importance of understanding the strengths and limitations of both models. Fraunhofer theory only takes into account the diffraction phenomena occurring at the contour of the particle. Its main advantage is that it does not require any knowledge of the optical properties ( complex refractive index ) of the particle's material. Hence it is typically applied to samples of unknown optical properties, or to mixtures of different materials. For samples of known optical properties, Fraunhofer theory should only be applied to particles of an expected diameter at least 10 times larger than the light source's wavelength, and/or to opaque particles. [ 6 ] [ 7 ]
The Mie theory describes the scattering of electromagnetic waves by spherical particles. Hence, it takes into account not only the diffraction at the particle's contour, but also the refraction, reflection and absorption phenomena within the particle and at its surface. [ 6 ] Thus, this theory is better suited than the Fraunhofer theory for particles that are not significantly larger than the wavelength of the light source, and for transparent particles. The model's main limitation is that it requires precise knowledge of the complex refractive index (including the absorption coefficient) of the particle's material. The lower theoretical detection limit of laser diffraction, using the Mie theory, is generally thought to lie around 10 nm.
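A minimal sketch of the model-selection rule of thumb described above, using the 10× wavelength criterion from the preceding paragraph; the threshold is a guideline, not a hard physical limit.

```python
def choose_model(diameter_m, wavelength_m, opaque=False, size_factor=10.0):
    """Suggest a scattering model for laser diffraction data analysis,
    following the rule of thumb: Fraunhofer for particles much larger than
    the wavelength or for opaque particles, Mie otherwise."""
    if opaque or diameter_m >= size_factor * wavelength_m:
        return "Fraunhofer"
    return "Mie (requires the complex refractive index)"

print(choose_model(50e-6, 633e-9))   # 50 um particle, He-Ne laser -> Fraunhofer
print(choose_model(1e-6, 633e-9))    # 1 um particle -> Mie
```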
Laser diffraction analysis is typically accomplished via a red He-Ne laser or laser diode , a high-voltage power supply, and structural packaging. [ 8 ] Alternatively, blue laser diodes or LEDs of shorter wavelength may be used. The light source affects the detection limits, with lasers of shorter wavelengths better suited for the detection of submicron particles. The angular distribution of the diffracted light is measured by passing the laser beam through a flow of dispersed particles and then onto a sensor . A lens is placed between the object being analyzed and the detector's focal point, so that only the diffraction pattern surrounding the focused beam appears at the detector. The sizes the instrument can analyze depend on the lens's focal length , the distance from the lens to its point of focus: as the focal length increases, the range of sizes the instrument can detect increases proportionally.
Multiple light detectors, placed at fixed angles relative to the laser beam, are used to collect the diffracted light. More detector elements extend the sensitivity and size limits of the instrument. A computer then derives the particle size distribution from the intensity and angular layout of the light energy recorded across the detector array. [ 5 ]
In practical terms, laser diffraction instruments can measure particles in liquid suspension, using a carrier solvent, or as dry powders, using compressed air or simply gravity to mobilize the particles. Sprays and aerosols generally require a specific setup. [ 9 ]
Because the light energy recorded by the detector array is proportional to the volume of the particles, laser diffraction results are intrinsically volume-weighted. [ 10 ] This means that the particle size distribution represents the volume of particle material in the different size classes. This is in contrast to counting-based optical methods such as microscopy or dynamic image analysis , which report the number of particles in the different size classes. [ 11 ] That the diffracted light is proportional to the particle's volume also implies that the results assume particle sphericity, i.e. that the particle size result is an equivalent spherical diameter . Hence particle shape cannot be determined by the technique.
The main graphical representation of laser diffraction results is the volume-weighted particle size distribution, either represented as density distribution (which highlights the different modes) or as cumulative undersize distribution .
The most widely used numerical laser diffraction results are the percentiles of the volume distribution, such as Dv10, Dv50 (the median diameter) and Dv90, together with moment means such as the volume-weighted mean diameter D[4,3] and the surface-weighted (Sauter) mean diameter D[3,2].
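As an illustration, the sketch below extracts Dv10, Dv50 and Dv90 from a cumulative volume-undersize distribution by interpolation; the size grid and volume fractions are made-up example data, not measurements.

```python
import numpy as np

sizes_um = np.array([1, 2, 5, 10, 20, 50, 100, 200])      # size classes, um
vol_frac = np.array([0.02, 0.05, 0.13, 0.25, 0.30, 0.18, 0.05, 0.02])

# Cumulative volume-undersize distribution (monotonically increasing):
cum_undersize = np.cumsum(vol_frac) / vol_frac.sum()

def dv(percentile):
    """Diameter below which `percentile` percent of the particle volume lies."""
    return np.interp(percentile / 100.0, cum_undersize, sizes_um)

print(f"Dv10 = {dv(10):.1f} um, Dv50 = {dv(50):.1f} um, Dv90 = {dv(90):.1f} um")
```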
Harmonized standards for the accuracy and precision of laser diffraction measurements have been defined both by ISO , in standard ISO 13320:2020, [ 13 ] and by the United States Pharmacopoeia , in chapter USP <429>. [ 14 ]
Laser diffraction analysis has been used to measure particle sizes in a wide range of situations, from soils and sediments to sprays, powders and other industrial materials.
Since laser diffraction analysis is not the sole way of measuring particles, it has been compared to the sieve-pipette method, a traditional technique for grain size analysis. When compared, laser diffraction analysis made fast calculations that were easy to reproduce after a one-time analysis, did not need large sample sizes, and produced large amounts of data. Because the data are digital, results can easily be processed further. Both the sieve-pipette method and laser diffraction analysis are able to analyze minuscule objects, but laser diffraction analysis proved to have better precision than its counterpart method of particle measurement. [ 23 ]
The validity of laser diffraction analysis has been questioned in several areas. [ 24 ] [ 25 ]
The laser flash analysis or laser flash method is used to measure the thermal diffusivity of a variety of different materials. An energy pulse heats one side of a plane-parallel sample and the resulting time-dependent temperature rise on the backside due to the energy input is detected. The higher the thermal diffusivity of the sample, the faster the energy reaches the backside. A laser flash apparatus ( LFA ) is used to measure thermal diffusivity over a broad temperature range.
In a one-dimensional, adiabatic case the thermal diffusivity a is calculated from this temperature rise as follows:

a = 0.1388 · d² / t_1/2

where d is the thickness of the sample and t_1/2 is the time needed for the backside temperature rise to reach half of its maximum value. As the coefficient 0.1388 is dimensionless, the formula also works for a, d and t_1/2 expressed in their corresponding SI units.
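A short worked example of the relation above, with illustrative values for the sample thickness and the half-rise time:

```python
def thermal_diffusivity(thickness_m, t_half_s):
    """Thermal diffusivity a (m^2/s) from the Parker relation
    a = 0.1388 * d^2 / t_1/2."""
    return 0.1388 * thickness_m**2 / t_half_s

# A 2 mm thick sample whose back face reaches half of its maximum
# temperature rise 25 ms after the pulse:
print(thermal_diffusivity(2e-3, 25e-3))   # ~2.2e-5 m^2/s, typical of a metal
```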
The laser flash method was developed by Parker et al. in 1961. [ 1 ] In a vertical setup, a light source (e.g. laser , flashlamp) heats the sample from the bottom side and a detector on top detects the time-dependent temperature rise. Because the thermal diffusivity is strongly temperature-dependent, measurements at different temperatures are made by placing the sample in a furnace held at a constant temperature.
Perfect conditions are a homogeneous material, a homogeneous energy input on the front side, and a pulse that is short in time, ideally in the form of a Dirac delta function.
Several improvements on the models have been made. In 1963, Cowan took radiation and convection on the surface into account. [ 2 ] In the same year, Cape and Lehman considered transient heat transfer, finite pulse effects and also heat losses. [ 3 ] Blumm and Opfermann improved the Cape–Lehman model with high-order solutions of radial transient heat transfer and facial heat loss, a non-linear regression routine for cases of high heat loss, and an advanced, patented pulse-length correction. [ 4 ] [ 5 ]
Laser peening ( LP ), or laser shock peening ( LSP ), is a surface engineering process used to impart beneficial residual stresses in materials. The deep, high-magnitude compressive residual stresses induced by laser peening increase the resistance of materials to surface-related failures, such as fatigue , fretting fatigue, and stress corrosion cracking . Laser shock peening can also be used to strengthen thin sections, harden surfaces, shape or straighten parts (known as laser peen forming), break up hard materials, compact powdered metals and for other applications where high-pressure, short duration shock waves offer desirable processing results.
Initial scientific discoveries towards modern-day laser peening began in the early 1960s as pulsed- laser technology began to proliferate around the world. In an early investigation of the laser interaction with materials, Gurgen Askaryan and E.M. Moroz documented pressure measurements on a targeted surface using a pulsed laser. [ 1 ] The pressures observed were much larger than could be created by the force of the laser beam alone. Research into the phenomenon indicated that the high pressure resulted from a momentum impulse generated by material vaporization at the target surface when rapidly heated by the laser pulse. Throughout the 1960s, a number of investigators further defined and modeled the laser beam pulse interaction with materials and the subsequent generation of stress waves. [ 2 ] [ 3 ] These, and other studies, observed that stress waves in the material were generated from the rapidly expanding plasma created when the pulsed laser beam struck the target. Subsequently, this led to interest in achieving higher pressures to increase the stress wave intensity. To generate higher pressures it was necessary to increase the power density and focus the laser beam (concentrate the energy), requiring that the laser beam-material interaction occur in a vacuum chamber to avoid dielectric breakdown within the beam in air. These constraints limited study of high-intensity pulsed laser–material interactions to a select group of researchers with high-energy pulsed lasers.
In the late 1960s a major breakthrough occurred when N.C. Anderholm discovered that much higher plasma pressures could be achieved by confining the expanding plasma against the target surface. [ 4 ] Anderholm confined the plasma by placing a quartz overlay, transparent to the laser beam, firmly against the target surface. With the overlay in place, the laser beam passed through the quartz before interacting with the target surface. The rapidly expanding plasma was now confined within the interface between the quartz overlay and the target surface. This method of confining the plasma greatly increased the resulting pressure, generating pressure peaks of 1 to 8 gigapascals (150 to 1,200 ksi), over an order of magnitude greater than unconfined plasma pressure measurements. The significance of Anderholm's discovery to laser peening was the demonstration that pulsed laser–material interactions to develop high-pressure stress waves could be performed in air, not constrained to a vacuum chamber.
The beginning of the 1970s saw the first investigations of the effects of pulsed laser irradiation within the target material. L. I. Mirkin observed twinning in ferrite grains in steel under the crater created by laser irradiation in vacuum. [ 5 ] S. A. Metz and F. A. Smidt, Jr. irradiated nickel and vanadium foils in air with a pulsed laser at a low power density and observed voids and vacancy loops after annealing the foils, suggesting that a high concentration of vacancies was created by the stress wave. These vacancies subsequently aggregated during post-irradiation annealing into the observed voids in nickel and dislocation loops in vanadium. [ 6 ]
In 1971, researchers at Battelle Memorial Institute in Columbus, Ohio began investigating whether the laser shocking process could improve metal mechanical properties using a high-energy pulsed laser. In 1972, the first documentation of the beneficial effects of laser shocking metals was published, reporting the strengthening of aluminum tensile specimens using a quartz overlay to confine the plasma. [ 7 ] Subsequently, the first patent on laser shock peening was granted to Phillip Mallozzi and Barry Fairand in 1974. [ 8 ] Research into the effects and possible applications of laser peening continued throughout the 1970s and early 1980s by Allan Clauer, Barry Fairand, and coworkers, supported by funding from the National Science Foundation , NASA , Army Research Office, U.S. Air Force, and internally by Battelle. This research explored the in-material effects in more depth and demonstrated the creation of deep compressive stresses and the accompanying increase in fatigue and fretting fatigue life achieved by laser peening. [ 9 ] [ 10 ] [ 11 ] [ 12 ]
Laser shocking during the initial development stages was severely limited by the laser technology of the time period. The pulsed laser used by Battelle encompassed one large room and required several minutes of recovery time between laser pulses. [ 13 ] To become a viable, economical, and practical industrial process, the laser technology had to mature into equipment with a much smaller footprint and be capable of increased laser pulse frequencies. In the early 1980s, Wagner Castings Company located in Decatur, Illinois became interested in laser peening as a process that could potentially increase the fatigue strength of cast iron to compete with steel, but at a lower cost. Laser peening of various cast irons showed modest fatigue life improvement, and these results along with others, convinced them to fund the design and construction of a pre-prototype pulsed laser in 1986 to demonstrate the industrial viability of the process. This laser was completed and demonstrated in 1987. Although the technology had been under investigation and development for about 15 years, few people in industry had heard of it. So, with the completion of the demonstration laser, a major marketing effort was launched by Wagner Castings and Battelle engineers to introduce laser peening to potential industrial markets.
Also in the mid 1980s, Remy Fabbro of the Ecole Polytechnique was initiating a laser shock peening program in Paris. He and Jean Fournier of the Peugeot Company visited Battelle in 1986 for an extended discussion of laser shock peening with Allan Clauer. The programs initiated by Fabbro and carried forward in the 1990s and early 2000s by Patrice Peyre, Laurent Berthe, and co-workers have made major contributions, both theoretical and experimental, to the understanding and implementation of laser peening. [ 14 ] [ 15 ] [ 16 ] In 1998, using VISAR ( Velocity Interferometer System for Any Reflector ), they measured pressure loadings in the water-confinement regime as a function of wavelength, demonstrating the detrimental effect of dielectric breakdown in water, which limits the maximum pressure attainable at the surface of the material. [ 17 ]
In the early 1990s, the market was becoming more familiar with the potential of laser peening to increase fatigue life. In 1991, the U.S. Air Force introduced Battelle and Wagner engineers to GE Aviation to discuss the potential application of laser peening to address a foreign object damage (FOD) problem with fan blades in the General Electric F101 engine powering the Rockwell B-1B Lancer Bomber. The resulting tests showed that fan blades that were severely notched after laser peening had the same fatigue life as new blades. [ 18 ] After further development, GE Aviation licensed the laser shock peening technology from Battelle, and in 1995, GE Aviation and the U.S. Air Force made the decision to move forward with production development of the technology. GE Aviation began production laser peening of the F101 fan blades in 1998.
The demand for industrial laser systems required for GE Aviation to go into production attracted several of the laser shock peening team at Battelle to start LSP Technologies, Inc. in 1995 as the first commercial supplier of laser peening equipment. Led by founder Jeff Dulaney, LSP Technologies designed and built the laser systems for GE Aviation to perform production laser peening of the F101 fan blades. Through the late 1990s and early 2000s, the U.S. Air Force continued to work with LSP Technologies to mature the laser shock peening production capabilities and implement production manufacturing cells. [ 19 ]
In the mid 1990s, independent of the laser peening developments ongoing in the United States and France, Yuji Sano of the Toshiba Corporation in Japan initiated the development of a laser peening system capable of laser peening welds in nuclear plant pressure vessels to mitigate stress corrosion cracking in these areas. [ 20 ] The system used a low-energy pulsed laser operating at a higher pulse frequency than the higher powered lasers. The laser beam was introduced into the pressure vessels through articulated tubes. Because the pressure vessels were filled with water, the process did not require a water overlay over the irradiated surface. However, the beam had to travel some distance through the water, necessitating using a shorter wavelength beam, 532 nm, to minimize dielectric breakdown of the beam in the water, instead of the 1054 nm beam used in the United States and France. Also, it was impractical to consider using an opaque overlay. This process is now known as Laser Peening without Coating (LPwC). It began to be applied to Japanese boiling water and pressurized water reactors in 1999. [ 21 ]
Also in the 1990s a significant laser peening research group was formed at the Madrid Polytechnic University by José Ocaña. Their work includes both experimental and theoretical studies using low-energy pulsed lasers both without and with an opaque overlay. [ 22 ] [ 23 ]
With the major breakthrough of commercial application of laser peening on the F101 engine to resolve a major operational problem, laser peening attracted attention around the globe. Researchers in many countries and industries undertook investigations to extend understanding of the laser shock peening process and material property effects. As a result, a large volume of research papers and patents were generated in the United States, France, and Japan. In addition to the work being done in these countries and Spain, laser peening programs were initiated in China, Britain, Germany and several other countries. The continuing growth of the technology and its applications led to the appearance of several commercial laser shock peening providers in the early 2000s.
GE Aviation and LSP Technologies were the first companies performing laser peening commercially, having licensed the technology from Battelle. GE Aviation performed laser peening for its aerospace engine components and LSP Technologies marketed laser shock peening services and equipment to a broader industrial base. In the late 1990s, Metal Improvement Company (MIC, now part of Curtiss-Wright Surface Technologies) partnered with Lawrence Livermore National Laboratory (LLNL) to develop its own laser peening capabilities. In Japan, Toshiba Corporation expanded the commercial applications of its LPwC system to pressurized water reactors, and in 2002 implemented fiber optic beam delivery to the underwater laser peening head. Toshiba also redesigned the laser and beam delivery into a compact system, enabling the entire system to be inserted into the pressure vessel. This system was ready for commercial use in 2013. [ 24 ] MIC developed and adapted laser shock peening for forming the wing shapes on the Boeing 747-8.
The growth of industrial suppliers and commercial proof of laser peening technology led to many companies adopting laser peening technology to solve and prevent problems. Some of the companies who have adopted laser peening include GE , Rolls-Royce , Siemens , Boeing , Pratt & Whitney , and others.
In the 1990s and continuing through the present day, laser peening developments have targeted decreasing costs and increasing throughput to reach markets outside of high-cost, low-volume components. High costs in the laser peening process were previously attributable to laser system complexity, processing rates, manual labor and overlay applications. Numerous ongoing advancements addressing these challenges have reduced laser peening costs dramatically: laser peening systems are designed to handle robust operations; pulse rates of laser systems are increasing; routine labor operations are increasingly automated; and the application of overlays is automated in many cases. These reduced operational costs have made laser peening a valuable tool for solving an extended range of fatigue and related applications. [ 25 ]
Laser peening uses the dynamic mechanical effects of a shock wave imparted by a laser to modify the surface of a target material. It does not utilize thermal effects. Fundamentally, laser peening can be accomplished with only two components: a transparent overlay and a high-energy pulsed laser system. The transparent overlay confines the plasma formed at the target surface by the laser beam. It is also often beneficial to use a thin overlay, opaque to the laser beam, between the water overlay and the target surface. This opaque overlay can provide one or more of three benefits: protect the target surface from potentially detrimental thermal effects from the laser beam, provide a consistent surface for the laser beam–material interaction and, if the overlay impedance is less than that of the target surface, increase the magnitude of the shock wave entering the target. However, there are situations where an opaque overlay is not used: in the Toshiba process, LPwC, or where the tradeoff between decreased cost and a possibly somewhat lower surface residual stress allows superficial grinding or honing after laser peening to remove the thin thermally affected layer.
The laser peening process originated with high-energy Nd-glass lasers producing pulse energies up to 50 J (more commonly 5 to 40 J) with pulse durations of 8 to 25 ns. Laser spot diameters on target are typically in the range of 2 to 7 mm.

The processing sequence begins by applying the opaque overlay on the workpiece or target surface. Commonly used opaque overlay materials are black or aluminum tape, paint or a proprietary liquid, RapidCoater. The tape or paint is generally applied over the entire area to be processed, while the RapidCoater is applied over each laser spot just before triggering the laser pulse. After application of the opaque overlay, the transparent overlay is placed over it. The transparent overlay used in production processing is water; it is cheap, easily applied, readily conforms to most complex surface geometries, and is easily removed. It is applied to the surface just before triggering the laser pulse. Quartz or glass overlays produce much higher pressures than water, but are limited to flat surfaces, must be replaced after each shot and would be difficult to handle in a production setting. Clear tape may be used, but requires labor to apply and is difficult to conform to complex surface features. The transparent overlay allows the laser beam to pass through it without appreciable absorption of the laser energy or dielectric breakdown.

When the laser is triggered, the beam passes through the transparent overlay and strikes the opaque overlay, immediately vaporizing a thin layer of the overlay material. This vapor is trapped in the interface between the transparent and opaque overlays. The continued delivery of energy during the laser pulse rapidly heats and ionizes the vapor, converting it into a rapidly expanding plasma. The rising pressure exerted on the opaque overlay surface by the expanding plasma enters the target surface as a high-amplitude stress wave or shock wave. Without a transparent overlay, the unconfined plasma plume moves away from the surface and the peak pressure is considerably lower. If the amplitude of the shock wave is above the Hugoniot Elastic Limit (HEL) , i.e., the dynamic yield strength, of the target, the material plastically deforms during passage of the shock wave. The magnitude of the plastic strain decreases with distance from the surface as the peak pressure of the shock wave attenuates, i.e., decreases, and becomes zero when the peak pressure falls below the HEL. After the shock wave passes, the residual plastic strain creates a compressive residual stress gradient below the target surface, highest at or immediately below the surface and decreasing with depth.

By varying the laser power density, pulse duration, and number of successive shots on an area, a range of surface compressive stress magnitudes and depths can be achieved. The magnitude of the surface stresses is comparable to shot peening, but the depths are much greater, ranging up to 5 mm when using multiple shots on a spot. Generally spot densities of about 10 spots/cm 2 to 40 spots/cm 2 are applied. The compressive stress depth achieved with the most common processing parameters ranges from 1 to 2 mm (0.039 to 0.079 in) deep. The deep compressive stresses are due to the shock wave peak pressure being maintained above the HEL to greater depths than for other peening technologies.
There may be instances where it is cost-effective not to apply the opaque overlay and to laser peen the bare surface of the work piece directly. When laser peening a bare metallic surface, a thin, micrometer-range layer of surface material is vaporized. The rapid rise in temperature causes surface melting to a depth dependent on pulse energy and duration, and on the target melting point. On aluminum alloys this depth is nominally 10–20 μm, but on steels and other higher melting point alloys the depths may be just a few micrometers. Because of the short duration of the pulse and the rapid quenching effect of the cold substrate, the in-depth heating of the surface is limited to a few tens of micrometers. Some superficial surface staining of the work piece may occur, typically from oxidation products. These detrimental effects of bare surface processing, both aesthetic and metallurgical, can be removed after laser peening by light grinding or honing. With an opaque overlay in place, the target surface experiences temperature rises of less than 50–100 °C (90–180 °F) on a nanosecond time scale.
Laser pulses are generally applied sequentially on the target to treat areas larger than the laser spot size. Laser pulse shapes are customizable to circular, elliptical, square, and other profiles to provide the most convenient and efficient processing conditions. The spot size applied depends on a number of factors that include material HEL, laser system characteristics and other processing factors. The area to be laser peened is usually determined by the part geometry, the extent of the fatigue critical area and considerations of moving the compensating tensile stresses out of this area.
The more recently developed laser peening process, the Toshiba LPwC process, varies in significant ways from the process described above. The LPwC process utilizes low-energy, high-frequency Nd-YAG lasers producing pulse energies of ≤ 0.1 J and pulse durations of ≤ 10 ns, using spot sizes ≤ 1 mm in diameter. Because the process originally was intended to operate in large water-filled vessels, the laser frequency was doubled, halving the wavelength to 532 nm. The shorter wavelength decreases the absorption of beam energy while traveling through water to the target. Due to access constraints, no opaque overlay is applied to the target surface. This factor, combined with the small spot size, requires many shots to achieve a significant surface compressive stress and depths of 1 mm. The first layers applied produce a tensile surface stress due to surface melting, although a compressive stress is developed below the melt layer. However, as more layers are added, the increasing subsurface compressive stress "bleeds" back through the melted surface layer to produce the desired surface compressive stress. Depending on material properties and the desired compressive stresses, spot densities of about 18 spots/mm 2 to 70 spots/mm 2 or greater are generally applied, about 100 times the spot densities of the high-pulse-energy process. The effects of the higher spot densities on processing times are compensated for in part by the higher pulse frequency, 60 Hz, of the low-energy lasers. Newer generations of these laser systems are projected to operate at higher frequencies. This low-energy process achieves compressive residual stress magnitudes and depths equivalent to the high-energy process with nominal depths of 1 to 1.5 mm (0.039 to 0.059 in). However, the smaller spot size will not permit depths deeper than this.
The laser peening process using computer control is described in AMS 2546. Like many other surface enhancement technologies, direct measuring of the results of the process on the workpiece during processing is not practical. Therefore, the process parameters of pulse energy and duration, water and opaque overlays are closely monitored during processing. Other quality control systems are also available that rely on pressure measurements such as electromagnetic acoustic transducers (EMAT), Velocity Interferometer System for Any Reflector (VISAR) and PVDF gauges, and plasma radiometers. Almen strips are also used, but they function as a comparison tool and do not provide a definitive measure of laser peening intensity. The resultant residual stresses imparted by the laser peening process are routinely measured by industry using x-ray diffraction techniques for the purposes of process optimization and quality assurance.
The initial laser systems used during the development of laser peening were large research lasers providing high-energy pulses at very low pulse frequencies. Since the mid-late 1990s, lasers designed specifically for laser peening featured steadily smaller size and higher pulse frequencies, both of these more desirable for production environments. The laser peening systems include both rod laser systems and a slab laser system. The rod laser systems can be separated roughly into three primary groups, recognizing that there is some overlap between them: (1) high-energy low-repetition rate lasers operating typically at 10–40 J per pulse with 8–25 ns pulse length at nominally 0.5–1 Hz rep rate, nominal spot sizes of 2 to 8 mm; (2) intermediate energy, intermediate repetition rate lasers operating at 3–10 J with 10–20 ns pulse width at 10 Hz rep rate, nominal spot sizes of 1–4 mm; (3) low-energy, high-repetition rate lasers operating at ≤ 1 J per pulse with ≤10 ns pulse length at 60+ Hz rep rate, ≤ 1 mm spot size. The slab laser system operates in the range of 10–25 J per pulse with 8–25 ns pulse duration at 3–5 Hz rep rate, nominal spot sizes of 2–5 mm. The commercial systems include rod lasers represented by all three groups and the slab laser system.
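These nominal figures allow a rough comparison of coverage rates between the laser classes; the sketch below assumes a single layer of spots and uses spot densities quoted earlier in this article.

```python
def seconds_per_cm2(spots_per_cm2, rep_rate_hz):
    """Laser time needed to treat 1 cm^2 at a given spot density and
    pulse repetition rate (a single layer of spots is assumed)."""
    return spots_per_cm2 / rep_rate_hz

# High-energy, low-rep-rate class: ~20 spots/cm^2 at 1 Hz
print(seconds_per_cm2(20, 1.0))        # -> 20 s per cm^2
# Low-energy, high-rep-rate (LPwC-like) class: ~3000 spots/cm^2 at 60 Hz
print(seconds_per_cm2(3000, 60.0))     # -> 50 s per cm^2
```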
For each laser peening system the output beam from the laser is directed into a laser peening cell containing the work pieces or parts to be processed. The peening cell contains the parts handling system and provides the safe environment necessary for efficient commercial laser peening. The parts to be processed are usually introduced into the cell in batches. The parts are then picked and placed in the beam path by robots or other customized parts handling systems. Within the work cell, the beam is directed to the surface of the work piece via an optical chain of mirrors and/or lenses. If tape is used, it is applied before the part enters the work cell, whereas water or RapidCoater overlays are applied within the cell individually for each spot. The workpiece, or sometimes the laser beam, is repositioned for each shot as necessary via a robot or other parts handling system. When the selected areas on each part have been processed, the batch is replaced in the work cell by another.
The shock-wave-generated cold work (plastic strain) in the workpiece material creates compressive and tensile residual stresses that maintain an equilibrium state of the material. These residual stresses are compressive at the workpiece surface and gradually fade into low tensile stresses below and surrounding the laser peened area. The cold work also work hardens the surface layer. The compressive residual stresses, and to a lesser extent the cold work, from laser peening have been shown to prevent and mitigate high cycle fatigue (HCF), low cycle fatigue (LCF), stress corrosion cracking, fretting fatigue, and, to some degree, wear and corrosion pitting . It is particularly effective at mitigating foreign object damage in turbine blades.
The plastic strain introduced by laser peening is much lower than that introduced by other impact peening technologies. As a result, the residual plastic strain has much greater thermal stability than the more heavily cold worked microstructures. This enables the laser peened compressive stresses to be retained at higher operating temperatures during long exposures than is the case for the other technologies. Among the applications benefiting from this are gas turbine fan and compressor blades and nuclear plant components.
By enhancing material performance, laser peening enables more-efficient designs that reduce weight, extend component lifetimes, and increase performance. In the future, it is anticipated that laser peening will be incorporated into the design of fatigue critical components to achieve longer life, lighter weight, and perhaps a simpler design to manufacture.
Originally, the use of laser-induced shock waves on metals to achieve property or functional benefits was referred to as laser shock processing, a broader, more inclusive term. As it happened, laser peening was the first commercial aspect of laser shock processing. However, laser-induced shock waves have found uses in other industrial applications outside of surface enhancement technologies.
One application is for metal shaping or forming. By selectively laser shocking areas on the surface of metal sheets or plates, or smaller items such as airfoils, the associated compressive residual stresses cause the material to flex in a controllable manner. In this way a particular shape can be imparted to a component, or a distorted component might be brought back into the desired shape. Thus, this process is capable of bringing manufactured parts back into design tolerance limits and form shaping thin section parts.
Another variation is to use the shock wave for spallation testing of materials. This application is based on the tendency of shock waves to reflect from the rear free surface of a work piece as a tensile wave. Depending on the material properties and the shock wave characteristics, the reflected tensile wave may be strong enough to form microcracks or voids near the back surface, or actually "blow off" or spall material from the back surface. This approach has some value for testing ballistic materials.
Use of laser shocks to measure the bond strength of coatings on metals has been developed in France over a period of years; the technique is called LASAT, for LASer Adhesion Test. [ 26 ] This application is also based on the tendency of shock waves to reflect from the rear free surface of a work piece as a tensile wave. If the back surface is coated with an adherent coating, the tensile wave can be tailored to fracture the bond upon reflection from the surface. By controlling the characteristics of the shock wave, the bond strength of the coating can be measured, or alternatively, determined in a comparative sense. [ 27 ]
Careful tailoring of the shockwave shape and intensity has also enabled the inspection of bonded composite structures via laser shocking. [ 28 ] [ 29 ] The technology, termed Laser Bond Inspection, initiates a shockwave that reflects off the backside of a bonded structure and returns as a tensile wave. As the tensile wave passes back through the adhesive bond, depending on the strength of the bond and the peak tensile stress of the stress wave, the tensile wave will either pass through the bond or rupture it. By controlling the pressure of the tensile wave, this procedure is capable of reliably testing adhesion strength locally between bonded joints. This technology is most often applied to bonded fiber composite structures, but has also been shown to be successful in evaluating bonds between metal and composite materials. Fundamental issues are also studied to characterize and quantify the effect of shock waves produced by lasers inside these complex materials. [ 30 ] [ 31 ] [ 32 ]
The scanning laser vibrometer , or scanning laser Doppler vibrometer , was first developed by the British loudspeaker company Celestion around 1979, [ 1 ] further developed in the 1980s, [ 2 ] and commercially introduced by Ometron, Ltd around 1986. It is an instrument for rapid non-contact measurement and imaging of vibration. [ 3 ] [ 4 ]
Fields where they are applied include automotive , medical , aerospace , micro system and information technology as well as for quality and production control. The optimization of vibration and acoustic behavior are important goals of product development in all of these fields because they are often among the key characteristics that determine a product's success in the market. They are also in widespread use throughout many universities conducting basic and applied research in areas that include structural dynamics , modal analysis , acoustic optimization and non-destructive evaluation .
The operating principle is based on the Doppler effect , which occurs when light is back-scattered from a vibrating surface. Both velocity and displacement can be determined by analyzing the optical signals in different ways. A scanning laser vibrometer integrates computer-controlled X,Y scanning mirrors and a video camera inside an optical head. The laser is scanned point-by-point over the test object's surface to provide a large number of very high spatial resolution measurements. This sequentially measured vibration data can be used to calculate and visualize animated deflection shapes in the relevant frequency bands from frequency domain analysis. Alternatively, data can be acquired in the time domain to, for example, generate animations showing wave propagation across structures. In contrast to contact measuring methods, the test object is unaffected by the vibration measuring process.
Vibrometry covers a huge range of applications such as the study of microstructures moving only a few pm at frequencies up to 2.5 GHz, all the way up to the intense dynamics occurring in Formula 1 engines with vibration velocities approaching 30 m/s.
A 3D scanning vibrometer combines three optical sensors that accurately detect dynamic movement from different directions in space in order to completely determine the 3D vectors of motion. The software allows each individual x-, y- or z-direction component to be displayed independently, or combined into a single representation. Data can be exported for finite element model validation at nodes previously imported from the model for scan grid definition. | https://en.wikipedia.org/wiki/Laser_scanning_vibrometry |
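Conceptually, recovering the full velocity vector is a small linear-algebra problem: each of the three sensors measures the projection of the surface velocity onto its beam direction, giving three equations in three unknowns. The sketch below uses a made-up beam geometry for illustration.

```python
import numpy as np

# Unit vectors along the three sensor beam directions (illustrative geometry):
e = np.array([
    [0.0, 0.0, 1.0],      # sensor 1 looks straight at the surface
    [0.5, 0.0, 0.866],    # sensor 2 tilted in the x-z plane
    [0.0, 0.5, 0.866],    # sensor 3 tilted in the y-z plane
])

true_v = np.array([0.2, -0.1, 1.5])   # example surface velocity, mm/s
measured = e @ true_v                 # the projections each sensor would read

v = np.linalg.solve(e, measured)      # invert the geometry: 3 eqs, 3 unknowns
print(v)                              # recovers [0.2, -0.1, 1.5]
```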
Laser schlieren deflectometry ( LSD ) is a method for a high-speed measurement of the gas temperature in microscopic dimensions , in particular for temperature peaks under dynamic conditions at atmospheric pressure. The principle of LSD is derived from schlieren photography : a narrow laser beam is used to scan an area in a gas where changes in properties are associated with characteristic changes of refractive index . Laser schlieren deflectometry is claimed to overcome limitations of other methods regarding temporal and spatial resolution. [ 1 ]
The theory of the method is analogous to the scattering experiment of Ernest Rutherford from 1911. However, instead of alpha particles scattered by gold atoms , here an optical ray is deflected by hot spots with unknown temperature.
A general equation of LSD describes the dependence of the measured maximum deflection δ 1 of the ray on the local maximum T 1 of the neutral gas temperature in the hot spot:

δ 1 = δ 0 (1 − T 0 / T 1 ),

where T 0 is the ambient temperature and δ 0 is a calibration constant depending on the configuration of the experiment. [ 2 ]
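Assuming the relation above, the hot-spot temperature follows from the measured deflection by a simple inversion, T 1 = T 0 / (1 − δ 1 / δ 0 ); the sketch below uses illustrative values.

```python
def hot_spot_temperature(delta_1, delta_0, t_0=293.0):
    """Peak gas temperature T_1 (K) from the measured maximum deflection
    delta_1 and the calibration constant delta_0 (same units as delta_1),
    inverting delta_1 = delta_0 * (1 - T_0 / T_1)."""
    if not 0.0 <= delta_1 < delta_0:
        raise ValueError("delta_1 must lie between 0 and delta_0")
    return t_0 / (1.0 - delta_1 / delta_0)

# A deflection of half the calibration constant at 293 K ambient temperature:
print(hot_spot_temperature(delta_1=0.5, delta_0=1.0))   # -> 586 K
```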
Laser schlieren deflectometry has been used for investigation of the temperature dynamics, heat transfer and energy balance in a miniaturized kind of atmospheric-pressure plasma . [ 3 ] | https://en.wikipedia.org/wiki/Laser_schlieren_deflectometry |
Laser snow is the precipitation, through a chemical reaction, condensation and coagulation process, of clustered atoms or molecules, induced by passing a laser beam through certain gases . [ 1 ] It was first observed by Tam, Moe and Happer in 1975, [ 2 ] and has since been noted in a number of gases. [ 3 ]
Laser spray ionization refers to one of several methods for creating ions using a laser interacting with a spray of neutral particles [ 1 ] [ 2 ] [ 3 ] or ablating material to create a plume of charged particles. [ 4 ] The ions thus formed can be separated by m/z with mass spectrometry . Laser spray is one of several ion sources that can be coupled with liquid chromatography-mass spectrometry for the detection of larger molecules. [ 5 ]
In one version of the laser spray interface, explosive vaporization and mist formation occur when an aqueous solution effusing from the tip of the stainless steel capillary is irradiated from the opposite side of the capillary by a 10.6 μm infrared laser . [ 1 ] Weak ion signals could be detected when the plume was sampled through the ion sampling orifice. When a high voltage (3–4 kV) was applied to the stainless-steel capillary, strong ion signals appeared. The ion abundances were found to be orders of magnitude greater than those obtained by conventional electrospray ionization in the case of aqueous solutions. This approach to laser spray ionization is a hybrid of three basic techniques for the generation of gaseous ions from the condensed phase, i.e., energy-sudden activation, nebulization and the action of an electric field. [ 1 ]
Laser spray mass spectrometry can faithfully reflect the solution-phase characteristics of biomolecules . It has been successfully applied to evaluate the binding affinities of protein–DNA complexes.
Laser spray has better ionization efficiency than conventional electrospray ionization (ESI). [ 1 ] In particular, the sensitivity is more than one order of magnitude higher in negative ion mode. The technique also has a potential benefit for low-concentration samples, due to the condensation of the droplets formed by the laser irradiation. The higher the solvation energies of triply charged metal ions, the stronger the resulting ion signals. [ 6 ]
Laserspray ionization (LSI) is a newer mass spectrometric technique commonly used with biomolecules, such as proteins. This method is similar to matrix-assisted laser desorption/ionization (MALDI) at atmospheric pressure in that it involves an analyte and matrix mixture. It also shares features with electrospray ionization, in that it produces similar mass spectra. The mechanism was initially thought to involve laser-induced production of highly charged matrix/analyte clusters that, upon evaporation of the matrix, produce ions by the same mechanism as ESI. LSI is able to ablate proteins at atmospheric pressure to form multiply charged ions, with a mass resolution of 100,000 when coupled with a quadrupole Orbitrap mass spectrometer. [ 7 ] The advantages of using LSI include solvent-free ionization, fast data acquisition, simplicity of use, and improved fragmentation through multiple charging. [ 8 ]
Due to recent innovations in the laser spray technique, a new method of laser ablation using the spray method has surfaced. Laserspray inlet ionization (LSII) involves a matrix/analyte sample at atmospheric pressure being ablated , with the ionization process taking place in an ion transfer capillary tube located in the mass spectrometer inlet . [ 9 ] The LSII method is also known as laserspray ionization vacuum (LSIV). [ 10 ]
Matrix-assisted inlet ionization (MAII) has shown that the laser is not necessary for the ionization process. Ions are formed when matrix-analyte is introduced to the vacuum of a mass spectrometer through an inlet aperture. LSI is a subset of MAII and is now called laserspray inlet ionization (LSII). [ 11 ] Laserspray inlet ionization and matrix-assisted inlet ionization can be coupled to a Fourier transform ion cyclotron resonance (FT-ICR) mass analyzer to improve the detection of peptides and proteins . [ 12 ]
A laser surface velocimeter ( LSV ) is a non-contact optical speed sensor measuring velocity and length on moving surfaces. Laser surface velocimeters use the laser Doppler principle to evaluate the laser light scattered back from a moving object. They are widely used for process and quality control in industrial production processes.
According to J. Cimbala, in the case of laser Doppler velocimeter, the use of the term Doppler is a misnomer because no Doppler shift measurement is involved in the technique. [ 1 ]
The Doppler effect (or Doppler shift) is the change in frequency of a wave for an observer moving relative to the source of the wave. The wave has a frequency f and propagates at a speed c. When the observer moves at a velocity v relative to the source, they receive a different frequency f′ according to

f′ = f (1 + v/c),

where v is counted as positive for motion of the observer towards the source.
The above analysis is an approximation for velocities small in comparison to the speed of light, a condition fulfilled very well for practically all technically relevant velocities.
Making a measurement on moving objects, which can in principle be of any length, requires a measurement design in which the observation axis of the sensor is at a right angle to the direction of movement of the object under investigation.
Laser surface velocimeters work according to the so-called difference Doppler technique. Here, two laser beams, each incident at an angle φ to the optical axis, are superimposed on the surface of the object. For a point P which moves at velocity v through the intersection region of the two laser beams, the frequencies of the two laser beams are Doppler shifted in accordance with the above formula. At the point P of the object, which is moving with the velocity vector v, the following frequencies therefore occur:

f 1 = f 0 (1 − (v · e 1 )/c),  f 2 = f 0 (1 − (v · e 2 )/c),

where f 0 is the frequency of the laser and e 1 and e 2 are unit vectors along the propagation directions of the two beams.
The point P now emits scatter waves in the direction of the detector. As P is moving with the object, the scattered radiation in the direction e e of the detector is also Doppler shifted. Thus for the frequency of the scatter waves in the direction of the detector, it can be said:

f e1 = f 1 (1 + (v · e e )/c),  f e2 = f 2 (1 + (v · e e )/c)
The scatter waves are superimposed on the detector. Due to the interference of the scatter waves from the two laser beams, there are different frequency components in the superimposition. The low-frequency beat frequency of the superimposed scatter radiation, which corresponds to the Doppler frequency f D , is analyzed metrologically. When both incident laser beams are at the same frequency f 0 (same wavelength), this is seen, to first order in v/c, as the difference of f e2 and f e1 :

f D = f e2 − f e1 = (f 0 /c) · v · (e 1 − e 2 )
If point P moves perpendicular to the optical axis, and both beams are incident at the same angle φ, it can be said that:

v · e 1 = v sin φ

and

v · e 2 = −v sin φ.

This means the final result is:

f D = (f 0 /c) · 2 v sin φ = (2 sin φ / λ) · v,

where λ = c/f 0 is the laser wavelength.
The Doppler shift is thus directly proportional to the velocity. A graphic explanation which leads to the same result follows:
Both the laser beams are superimposed in the measurement volume and in this spatial area, generate an interference pattern of bright and dark fringes.
The fringe spacing Δ s is a system constant which depends on the laser wavelength λ and the angle 2φ between the laser beams:

Δ s = λ / (2 sin φ)
If a particle moves through the fringe pattern, then the intensity of the light it scatters back is modulated.
As a result of this, a photo receiver in the sensor head generates an AC signal, the frequency f D of which is directly proportional to the velocity component v p of the surface in the measurement direction, and it can be said that:

f D = v p / Δ s
Laser surface velocimeters work in the so-called heterodyne mode, i.e. the frequency of one of the laser beams is shifted by an offset frequency f B , e.g. 40 MHz. This makes the fringes in the measurement volume travel with a velocity corresponding to the offset frequency f B . This then makes it possible to identify the direction of movement of the object and to measure at the velocity zero. The resulting modulation frequency f mod at the photo receiver in heterodyne mode is:

f mod = f B + v p / Δ s,

where v p carries a sign according to the direction of motion.
The modulation frequency is determined in the controller using a Fourier transformation and converted into the measurement value for the velocity v p . The length measurement is made by integrating the velocity signal.
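The sketch below walks through this signal chain, from modulation frequency to velocity to length, using the relations above; the wavelength, beam angle and offset frequency are illustrative values.

```python
import numpy as np

LAMBDA = 670e-9           # laser wavelength, m (illustrative)
PHI = np.deg2rad(10.0)    # angle of each beam to the optical axis
F_B = 40e6                # heterodyne offset frequency, Hz

delta_s = LAMBDA / (2 * np.sin(PHI))    # fringe spacing, the system constant

def velocity(f_mod_hz):
    """Surface velocity (m/s) from the measured modulation frequency; the
    sign of (f_mod - F_B) gives the direction of motion."""
    return (f_mod_hz - F_B) * delta_s

# A strip moving at 2 m/s modulates the detector at F_B + v_p / delta_s:
f_mod = F_B + 2.0 / delta_s
print(velocity(f_mod))                  # -> 2.0 m/s

# Length measurement: integrate the velocity signal over time.
t = np.linspace(0.0, 1.0, 1001)         # 1 s of samples
v_t = np.full_like(t, 2.0)              # constant 2 m/s for simplicity
length = np.sum(v_t[:-1] * np.diff(t))  # simple rectangle-rule integration
print(length)                           # -> ~2.0 m of material
```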
Laser surface velocimeters measure speed and length of moving surfaces on coils, strips, tubes, fiber, film, paper, foil, composite lumber, or almost any other moving material, including hot steel. [ 2 ] LSVs can accomplish various tasks like cut-to-length control, part length and spool length measurement, speed measurement and speed control, differential speed measurement for mass flow control, encoder calibration, ink-jet marker control, and many others.
A laser turntable (or optical turntable ) is a phonograph that plays standard LP records (and other gramophone records ) using laser beams as the pickup instead of using a stylus as in conventional turntables . Although these turntables use laser pickups , the same as Compact Disc players , the signal remains in the analog realm and is never digitized .
William K. Heine presented a paper, "A Laser Scanning Phonograph Record Player", to the 57th Audio Engineering Society (AES) convention in May 1977. [ 1 ] The paper details a method developed by Heine that employs a single 2.2 mW helium–neon laser for both tracking a record groove and reproducing the stereo audio of a phonograph in real time. In development since 1972, the working prototype was named the "LASERPHONE", and the method it used for playback was awarded U.S. Patent 3,992,593 on 16 November 1976. [ 2 ] Heine concluded in his paper that he hoped his work would increase interest in using lasers for phonographic playback.
Four years later in 1981 Robert S. Reis, a graduate student in engineering at Stanford University , wrote his master's thesis on "An Optical Turntable". [ 3 ] In 1983 he and fellow Stanford electrical engineer Robert E. Stoddard founded Finial Technology to develop and market a laser turntable, raising $7 million in venture capital . In 1984 servo-control expert Robert N. Stark joined the effort. [ 4 ] [ 5 ]
A non-functioning mock-up of the proposed Finial turntable was shown at the 1984 Consumer Electronics Show (CES), generating much interest and a fair amount of mystery, since the patents had not yet been granted and the details had to be kept secret. [ 6 ] The first working model, the Finial LT-1 (Laser Turntable-1), was completed in time for the 1986 CES. The prototype revealed an interesting flaw of laser turntables: they are so accurate that they "play" every particle of dirt and dust on the record, instead of pushing them aside as a conventional stylus would. The non-contact laser pickup does have the advantages of eliminating record wear, tracking noise, turntable rumble and feedback from the speakers, but the sound is still that of an LP turntable rather than a Compact Disc. The projected $2,500 street price (later raised to $3,786 in 1988) limited the potential market to professionals (libraries, radio stations and archivists) and a few well-heeled audiophiles. [ 7 ]
The Finial turntable never went into production. After Finial showed a few hand-built (and finicky) [ 8 ] prototypes, tooling delays, component unavailability (in the days before cheap lasers), marketing blunders, and high development costs kept pushing back the release date. The long development of the laser turntable coincided exactly with two major events: the early 1980s recession and the introduction of the digital Compact Disc , which soon began flooding the market at prices comparable to LPs (with CD players in the $300 range). Vinyl record sales plummeted, and many established turntable manufacturers went out of business as a result.
With over US$20 million in venture capital invested, Finial faced a marketing dilemma: forge ahead with a selling price that would be too high for most consumers, or gamble on going into mass production at a much lower price and hope the market would lower costs. Neither seemed viable in a rapidly-shrinking market.
Finally, in late 1989 after almost seven years of research, Finial's investors cut their losses and liquidated the firm, selling the patents to Japanese turntable maker CTI Japan, which in turn created ELP Japan for continued development of the "super-audiophile" turntable. After eight more years of development the laser turntable was finally put on sale in 1997 – twenty years after the initial proposal – as the ELP LT-1XA Laser Turntable, with a list price of US$20,500 (in 2003 the price was lowered to US$10,500). [ 9 ] The turntable, which uses two lasers to read the groove and three more to position the head, does allow one to vary the depth at which the groove is read, possibly bypassing existing record wear. It will not, however, read clear or colored vinyl records. [ 10 ] ELP sells built-to-order laser turntables directly to consumers in two versions (LT-basic, and LT-master), [ 11 ] at a reported cost (unpublished) of approximately $16,000 for the basic model. [ 12 ]
In May 2018, Almedio of Japan, a computer drive manufacturer, [ 13 ] presented the Optora ORP-1 optical (laser) turntable at the HIGH END Munich audio show. [ 14 ] Few details were provided by the company [ 15 ] because, like the 1984 presentation of the Finial turntable, the Optora was a non-working mockup. Company representatives indicated the turntable would use five lasers and be belt-driven, [ 16 ] like the ELP. However, after producing some promotional materials (since deleted), a price was never announced [ 17 ] and the Optora has not been put on the market. The company's website devoted to the turntable has since been deleted. [ 18 ]
In a 2008 review of the model ELP LT-1LRC, Jonathan Valin in The Absolute Sound claimed:
"If I were to describe its presentation in a few words, they would be 'pleasant but dull'." [ 19 ]
Valin commended the tonal accuracy of playback, but criticized the lack of dynamic range and bass response (limitations of the vinyl records themselves). He emphasized that records must be wet-cleaned immediately before playback because:
"Unlike a relatively massive diamond stylus, which plows through a record’s grooves like the prow of a ship, the ELP’s tiny laser-beam styli have next to no mass and cannot move dust particles out of their way. Any speck of dirt, however minute, is read by the lasers along with the music." [ 19 ]
In 2008, Michael Fremer noted in Stereophile :
"...consider the LT's many pluses: no rumble or background noise of any kind; no cartridge-induced resonances or frequency-response anomalies; no compromise in channel separation (the ELP guarantees channel separation in excess of what the best cutter heads offer); zero tracking or tracing error; no inner-groove distortion; no skating; no adjustments of VTA or azimuth to worry about; no tangency error (like the cutter head itself, the laser pickup is a linear tracker); no record wear; a claimed frequency response of 10Hz–25kHz; and, because the laser beam is less than a quarter the contact area of the smallest elliptical stylus, it can negotiate sections of the engraved waveform that even the smallest stylus misses." [ 20 ]
Fremer also noted, however, that all of this comes at a cost:
"[T]he LT-2XRC's laser pickup was unable to distinguish groove modulations from dirt. Records that sound dead quiet on a conventional turntable could sound as if I was munching potato chips while listening to the ELP. Bummer. There's a solution, of course: a record-cleaning machine. This can't be considered an 'accessory' with the LT: it's mandatory. Even new records fresh out of the jacket can sound crunchy." [ 20 ]
Fremer concludes:
"Ironically, if you listen to the music itself, you won't know you're listening to an LP. It's almost like a reel-to-reel tape. Unfortunately, when there is noise, it will always make you aware that you're listening to an LP. That's the confounding thing about this fabulous contraption." [ 20 ]
A similar technology is to scan or photograph the grooves of the record, and then reconstruct the sound from the modulation of the groove revealed by the image. Research groups that developed this technology include: | https://en.wikipedia.org/wiki/Laser_turntable |
The laser voltage probe (LVP) is a laser-based voltage and timing waveform acquisition system which is used to perform failure analysis on flip-chip integrated circuits. The device to be analyzed is de-encapsulated in order to expose the silicon surface, and the silicon substrate is then thinned using a back-side mechanical thinning tool. The thinned device is mounted on a movable stage and connected to an electrical stimulus source. Signal measurements are performed through the back side of the device after substrate thinning. The device being probed must be electrically stimulated using a repeating test pattern, with a trigger pulse provided to the LVP as reference. The operation of the LVP is similar to that of a sampling oscilloscope.
The LVP instrument measures voltage waveform signals in the device diffusion regions. Device imaging is accomplished through the use of a laser scanning microscope (LSM). The LVP uses dual infrared (IR) lasers to perform both device imaging and waveform acquisition. One laser is used to acquire images or waveforms from the device, while the second laser provides a reference which may be used to subtract unwanted noise from the signal data being acquired. On an electrically active device, the instrument monitors the changes in the phase of the electromagnetic field surrounding a signal being applied to a junction.
The instrument obtains voltage waveform and timing information by monitoring the interaction of laser light with the changes in the electric field across a p-n junction . As the laser reaches the silicon surface, a certain amount of that light is reflected back. The amount of reflected laser light from the junction is sampled at various points in time. The changing electromagnetic field at the junction affects the amount of laser light that is reflected back. By plotting the variations in reflected laser light versus time, it is possible to construct a timing waveform of the signal at the junction. As the test pattern continues to loop, additional measurements are acquired and averaged into the previous measurements. Over a period of time, this averaging of measurements produces a more refined waveform. The end result is a waveform that is representative of the electrical signal present at the junction . | https://en.wikipedia.org/wiki/Laser_voltage_prober |
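The sampling-and-averaging scheme described above for the laser voltage probe can be illustrated with a short sketch. This is not instrument code: `sample_reflectance` is a hypothetical callback standing in for one reflected-light measurement at a given point in the repeating test pattern, and the parameters are arbitrary.

```python
import numpy as np

def acquire_waveform(sample_reflectance, n_points, n_loops):
    """Build a timing waveform by sampling reflected laser intensity at each
    point of the looping test pattern and averaging across loops."""
    accum = np.zeros(n_points)
    for _ in range(n_loops):
        accum += np.array([sample_reflectance(t) for t in range(n_points)])
    # Averaging suppresses uncorrelated noise roughly as 1/sqrt(n_loops),
    # which is why the waveform becomes more refined over time.
    return accum / n_loops

# Demo: a noisy sine standing in for the junction signal.
rng = np.random.default_rng(0)
noisy = lambda t: np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.5)
waveform = acquire_waveform(noisy, n_points=100, n_loops=1000)
```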
A laser warning receiver is a warning system used as a passive military defence. It detects, analyzes, and locates directions of laser emissions [ 1 ] from laser guidance systems and laser rangefinders . It then alerts the crew and can trigger various countermeasures, such as a smoke screen , an aerosol screen (e.g. Shtora ), an active laser self-defence weapon with a laser dazzler (LSDW, used on the Chinese Type 99 main battle tank [ 2 ] ), or a laser jammer.
Detectors used in LWRs are usually based on a semiconductor photodetector array, which is typically cryogenically or thermoelectrically cooled. Sometimes avalanche photodiodes (APD), photoconductive, photoelectromagnetic, or photodiffusion devices are used even without cooling. [ 3 ] Some devices detect only the main beam of foreign lasers while others detect even scattered rays.
Some models used by the US are listed: [ 15 ]
| https://en.wikipedia.org/wiki/Laser_warning_receiver |
Laser welding of polymers is a set of methods used to join polymeric components through the use of a laser . It can be performed using CO 2 lasers , Nd:YAG lasers , Diode lasers and Fiber lasers . [ 1 ]
When a laser encounters the surface of plastics, it can be reflected, absorbed or penetrate through the thickness of a component. Laser welding of plastics is based on the energy absorption of laser radiation, which can be reinforced by additives and fillers.
Laser welding techniques include:
Because of high joining speeds, low residual stresses and excellent weld appearances, laser welding processes have been widely used for automotive and medical applications.
The types of lasers used in the welding of polymers include CO 2 lasers, Nd:YAG lasers, Diode lasers and fiber lasers. CO 2 lasers are mostly applied to weld thin films and thin plastics due to the high energy absorption coefficients of most plastics. Nd:YAG lasers and Diode lasers produce short wavelength radiation, which transmit through several millimeters of unpigmented polymer. [ 2 ] They are used in the transmission laser welding techniques.
Carbon dioxide lasers have a wavelength of 10.6 μm, which is rapidly absorbed by most polymers. Because of the high-energy absorption coefficients, processing of plastics using CO 2 lasers can be done rapidly with low laser powers. This type of laser can be used in direct welding of polymers or cutting. However, the penetration of CO 2 lasers is less than 0.5 mm, so they are mostly applicable to the welding of thin films and surface heating. Because the beam cannot be transmitted by silica fiber, it is commonly delivered by mirrors. [ 3 ]
Nd:YAG lasers have a wavelength in the range of 0.9 - 1.1 μm, with 1064 nm being the most common. These lasers provide a high beam quality allowing for small spot sizes. This type of beam can be delivered via fiber optic cable . [ 3 ]
The wavelength of diode lasers is typically in the 780 - 980 nm range. [ 2 ] Compared with Nd:YAG and CO 2 lasers, diode lasers have a clear advantage in energy efficiency. The high-energy light can penetrate a few millimeters into semicrystalline plastics, and further into unpigmented amorphous plastics. [ 2 ] Diode lasers can be either fiber delivered or local to the weld location. Their relatively small size makes it possible to assemble arrays for larger footprints.
Fiber lasers typically exhibit wavelengths ranging from 1000 to 3500 nm. [ 3 ] The expanded range of wavelengths has allowed for the development of through transmission welding without additional absorbing additives. [ 4 ]
The equipment settings may vary greatly in design and complexity. However, there are five components included in most of the machines:
This component converts the incoming supply voltage and frequency to the voltage, current and frequency required by the laser source. Diode lasers and fiber lasers are the two most commonly used systems for laser welding. [ 1 ]
The control interface is an interface between operator and machine to monitor operations of the system. It is built from logic circuits that report machine status and welding parameters to the operator. [ 1 ] Depending on the laser mode, the control interface varies which parameters may be changed. [ 5 ]
This component is a press driven by pneumatic and electric power. [ 1 ] It presses the part in the upper fixture against the components in the lower fixture and applies pre-determined loads during the welding process. [ 5 ] Displacement controls are added to the actuators to monitor their movement precisely. [ 1 ]
The lower fixture is a jig structure that locates the lower part of a joint. [ 5 ] It provides the location and alignment that ensure components are welded to tight tolerances.
The upper fixture is the most complicated and important component in the whole system. The laser beam is generated in this component to heat the parts being welded. The design of the upper fixture varies with the laser source and heating mode. For example, when a YAG laser or a diode laser is used as the heat source, optical fibers are often employed to give the beam mobility while the workpiece itself remains stationary. [ 5 ]
There are three types of interactions that can occur between laser radiation and plastics:
The extent of individual interaction is dependent upon materials properties, laser wavelength, laser intensity and beam speed. [ 3 ]
Reflection of incident laser radiation is typically on the order of 5 to 10% in most polymers, which is low compared with absorption and transmission. [ 6 ] The fraction of reflection (R) can be determined by the following equation:
R = (n − m)² / (n + m)²
where n is the index of refraction of the plastic and m is the index of refraction of air (~1). [ 5 ]
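As a worked example of the reflection equation (a sketch, not from the source; the refractive index of 1.5 is only a typical figure for polymers):

```python
def reflection_fraction(n_polymer, n_air=1.0):
    """Fraction of laser light reflected at normal incidence:
    R = (n - m)^2 / (n + m)^2."""
    return (n_polymer - n_air) ** 2 / (n_polymer + n_air) ** 2

# For n ~ 1.5 this gives R = 0.04, i.e. about 4% of the beam,
# consistent in magnitude with the 5-10% range quoted above.
print(reflection_fraction(1.5))
```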
Transmission of laser energy through certain polymers allows for processes such as through transmission welding. When the laser beam travels through an interface between different media, it is refracted unless its path is perpendicular to the surface. This effect needs to be considered when the laser travels through multiple layers to reach the joint region. [ 4 ]
Internal scattering occurs when the laser passes through the thickness of semicrystalline plastics, where the crystalline and amorphous phases have different indices of refraction. Scattering can also occur in crystalline and amorphous plastics containing reinforcements such as glass fiber and certain colorants and additives. [ 1 ] In transmission laser welding, this effect can reduce the effective laser energy reaching the joint area and limits the usable thickness of components. [ 5 ]
Laser absorption can occur at the surface of plastics or during transmission through the thickness. The amount of laser energy absorbed by a polymer is a function of the laser wavelength, polymer absorptivity, polymer crystallinity , and additives (i.e. composite reinforcements, pigments, etc.). [ 1 ] Absorption at the surface can proceed in two ways: photolytic and pyrolytic .
The heat distribution within a laser welded polymer is dictated by the Bouguer–Lambert law of absorption. [ 6 ]
I(z) = I(z=0) e^(−Kz)
where I(z) is the laser intensity at depth z, I(z=0) is the laser intensity at the surface, and K is the absorption constant. [ 6 ]
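A minimal numerical illustration of the Bouguer–Lambert decay (the absorption constant used here is an assumed value, not a material datum):

```python
import math

def intensity_at_depth(i_surface, k_per_mm, z_mm):
    """Bouguer-Lambert absorption: I(z) = I(z=0) * exp(-K * z)."""
    return i_surface * math.exp(-k_per_mm * z_mm)

# With an assumed K of 1.0 mm^-1, intensity falls to ~37% at 1 mm depth:
for z in (0.5, 1.0, 2.0):
    print(z, round(intensity_at_depth(100.0, 1.0, z), 1))
```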
Polymers often have secondary elements added to them for various reasons (i.e. strength, color, absorption, etc.). These elements can have a profound effect on the laser interaction with the polymer component. Some common additives and their effect on laser welding are described below.
Various fibers are added to polymeric materials to create higher strength composites. Some typical fiber materials include: glass , carbon fiber , wood, etc. When the laser beam interacts with these materials it can be scattered or absorbed, changing the optical properties from those of the base polymer. In laser transmission welding, a transparent material with reinforcement may absorb or attenuate more of the beam energy, affecting the quality of the weld. [ 6 ] High glass fiber content increases scattering within the plastic and raises the laser energy input needed to weld a given thickness. [ 2 ]
Colorants ( pigments ) are added to polymers for various reasons including aesthetics and functional requirements (such as optics). Certain color additives, such as titanium dioxide , can have a negative impact on the laser weldability of a polymer. Titanium dioxide provides a white coloring to polymers but also scatters laser energy, making it difficult to weld. Another color additive, carbon black , is a very effective energy absorber and is often added to create welds. By controlling the concentration of carbon black in the absorbing polymer it is possible to control the effective area of the laser weld. [ 7 ]
The laser beam energy can be delivered to the required areas through a variety of configurations. The four most common approaches include:
In the contour heating (laser scanning or laser moving) technique, a laser beam of fixed dimension passes over the desired area to create a continuous weld seam. [ 8 ] [ 7 ] The laser source is manipulated by a galvanometer mirror or a robotic system to scan at a fast rate. [ 5 ] The benefit of contour heating is that the weld can be performed with a single laser source, which can be reprogrammed for different applications; however, due to the localized heating area, uneven contact between the welding components can occur and form weld voids. [ 5 ] The important parameters for this technique include: laser wavelength, laser power, traverse speed, and polymer properties. [ 8 ]
In the simultaneous heating approach, a beam spot of appropriate size is used to irradiate the entire weld area without the need for relative movement between the work piece and the laser source. For creating a weld with a large area, multiple laser sources can be combined to melt the selected region simultaneously. This approach can be adopted as a substitute for ultrasonic welding when the components are sensitive to vibration. Key processing parameters for this approach include: laser wavelength, laser power, heating time, clamp pressure, cooling time, and polymer properties. [ 3 ] [ 8 ]
In quasi-simultaneous heating, the work area is irradiated by the use of scanning mirrors. The mirrors raster the laser beam over the entire work area rapidly, creating a simultaneously melted region. Some of the important parameters for this technique include: laser wavelength, laser power, heating time, cooling time, and polymer properties. [ 8 ]
Masked heating is a process of laser line scanning over a region with a mask, which ensures that only the selected areas are heated when the laser passes over. [ 3 ] [ 5 ] Masks can be made of laser-cut steel, or other materials that effectively block the laser radiation. This approach is capable of creating micro-scale welds on components with complex geometries . [ 3 ] Key processing parameters for this approach include: laser wavelength, laser power, heating time, clamp pressure, cooling time, and polymer properties. [ 7 ] [ 8 ]
Depending on different interactions between laser and thermoplastics, four different laser welding techniques have been developed for plastic joining. CO 2 lasers have good surface absorption for most thermoplastics , hence they are applied for direct laser welding and laser surface heating. Through transmission laser welding and intermediate film welding require the deep penetration of laser beam, so YAG lasers and diode lasers are the most common sources for these techniques.
Similar to laser welding of metals, in direct laser welding the surface of the polymer is heated to create a melt zone that joins two components together. This approach can be used to create butt joints and lap joints with complete penetration. Laser wavelengths between 2 and 10.6 μm are used for this process due to their high absorptivity in polymers. [ 3 ]
Laser surface heating is similar to non-contact hot plate welding in that mirrors are placed between the components to create a molten surface layer. The exposure duration is usually between 2 and 10 s. [ 5 ] The mirrors are then retracted and the components are pressed together to form a joint. Process parameters for laser surface heating include the laser output, wavelength, heating time, change-over time, and forging pressure and time. [ 5 ]
Through transmission laser welding of polymers is a method to create a joint at the interface between two polymer components with different transparencies to laser wavelengths. The upper component is transparent to laser wavelengths between 0.8 μm and 1.05 μm, and the lower component is either opaque in nature, or modified by the addition of colorants which promote the absorption of laser radiation. A typical colorant is carbon black, which absorbs most electromagnetic wavelengths. [ 5 ] When the joint is irradiated by the laser, the transparent layer passes the light with minimal loss while the opaque layer absorbs the laser energy and heats up. [ 8 ]
The two components are held by the lower fixture to control alignment and a small clamping force is added to the upper part to form intimate contact. A melt layer is then created at the interface between the two components, composed of a mixture of two plastic materials.
There are four different modes of transmission laser welding: scanning mode, simultaneous, quasi-simultaneous, and mask heating. [ 8 ]
Many benefits can be obtained from transmission laser welding, such as fast welding velocity, flexibility, good cosmetic properties and low residual stresses. From a processing perspective, laser welding can be performed on pre-assembled components, reducing the need for complex fixtures; however, this method is not suitable for plastics with high crystallinity due to refraction and geometric limitations. [ 5 ]
Intermediate film welding is a method to join incompatible plastic components by using an intermediate film between them. Similar to transmission welding, laser radiation passes through the transparent components and melts the intermediate layers to create a joint. [ 1 ] This film can be made of an opaque thermoplastic, solvent , viscous fluid, or other substances that heat up upon exposure to laser energy. The combination of intermediate films and adhesion promoters is able to join incompatible thermoplastics together. [ 1 ] The thin layer then generates the heat required to fuse the system together. [ 8 ]
The black body of car keys is welded by the Through Transmission Laser Welding (TTLW) technique, in which laser radiation transmits through the upper component and forms a joint at the interface. Carbon black is added to the lower part of car keys to absorb laser radiation. The black color of the upper part is made by the addition of dye, which makes the component appear black but transparent to laser radiation.
Other applications of laser welding in automotive industry include brake fluid reservoirs and lighting components. [ 8 ]
Laser welding of plastics is applied to weld medical devices like IV-bags . Joints of high geometrical complexity can be produced by laser welding without particulate formation. This is critical for the safety of patients, when welding techniques are applied to produce IV-bags containing blood. In addition, flashes generated during welding can cause blood turbulences and destroy blood platelets . A good control of the laser power avoids flash formation and thus protects the blood cells from damage. | https://en.wikipedia.org/wiki/Laser_welding_of_polymers |
Lassar Cohn , Lassar-Cohn or Ernst Lassar Cohn (6 September 1858 – 9 October 1922) was a Prussian chemist and professor at the University of Königsberg who wrote several influential textbooks on organic analysis including methods for the analysis of urine.
Cohn was born in the Jewish family of Jacob Marcus Cohen and Hanna Hewe in Hamburg . He studied at the Gymnasium in Königsberg before going to the University of Heidelberg. He also studied at Bonn and Königsberg. After receiving a doctorate in 1880 and habilitation in 1888 he joined the University of Königsberg and became a professor in 1894. He worked for some time from 1897 at the Ludwig-Maximilians-University in Munich but returned to Königsberg in 1902. In 1907 he also began to work with the chemical industry. Cohn's major works included studies of organic compounds, tartaric acid and its esters, bile chemistry and the recycling of industrial wastes. He innovated methods for nitrogen measurement, saccharimetry, and urine analysis. [ 1 ] [ 2 ]
In differential geometry , the last geometric statement of Jacobi is a conjecture named after Carl Gustav Jacob Jacobi , which states:
Every caustic from any point p on an ellipsoid other than umbilical points has exactly four cusps . [ 1 ]
Numerical experiments had indicated the statement is true [ 2 ] before it was proven rigorously in 2004 by Itoh and Kiyohara. [ 3 ] It has since been extended to a wider class of surfaces beyond the ellipsoid. [ 4 ]
| https://en.wikipedia.org/wiki/Last_geometric_statement_of_Jacobi |
In gerontology , late-life mortality deceleration is the disputed theory that hazard rate increases at a decreasing rate in late life rather than increasing exponentially as in the Gompertz law .
Late-life mortality deceleration is a well-established phenomenon in insects, [ 1 ] which often spend much of their lives in a constant hazard rate region, but it is much more controversial in mammals. [ 2 ] Rodent studies have reached varying conclusions: some find short-term periods of mortality deceleration in mice, while others do not. Baboon studies show no mortality deceleration.
An analogous deceleration occurs in the failure rate of manufactured products; this analogy is elaborated in the reliability theory of aging and longevity . [ 1 ] [ 3 ]
Late-life mortality deceleration was first proposed as occurring in human aging in Gompertz (1825) (which also introduced the Gompertz law) and was observed as occurring in humans in Greenwood & Irwin (1939) ; it has since become one of the pillars of the biodemography of human longevity – see history . Here "late life" is typically "after 85 years of age". However, a recent paper, Gavrilov & Gavrilova (2011) , concludes that mortality deceleration is negligible up to the age of 106 in the population studied (beyond this point, reliable data were unavailable), that the Gompertz law is a good fit, and that previous observations of deceleration were spurious, with various causes including bad data and methodological problems – see criticism .
According to a 2018 paper, statistical errors are the main cause of apparent mortality deceleration in humans. [ 4 ]
Three related terms are used in this context:
A brief historical review is given in Gavrilov & Gavrilova (2011 , 2. Mortality at Advanced Ages: A Historical Review (pp. 433–435)); a detailed survey is given in Olshansky (1998) .
Late-life mortality deceleration was first proposed as occurring in human aging, in Gompertz (1825) , which also introduced the Gompertz law. [ 5 ] It was observed and quantified in Greenwood & Irwin (1939) , and reproduced in many later studies. Greenwood and Irwin wrote:
Following these studies, late-life mortality deceleration became one of the pillars of the biodemography of human longevity , and models have incorporated it. It has been criticized at times, most seriously in recent work; see below.
Statistical studies of extreme longevity are difficult for a number of reasons. Firstly, because few people live to very old ages, a very large population is required for such studies, ideally all born and living in similar conditions (same country, same birth year). In small countries, a single birth-year cohort is insufficiently numerous for statistics, and thus multiple years are often used. Secondly, due to the great ages involved, accurate data on persons living over 100 years require records dating from the late 19th or early 20th century, when record-keeping was often not high-quality; further, there is a tendency to exaggerate one's age, which distorts the data. Thirdly, granularity is an issue – ideally the exact day of birth and death would be used; using only the year of birth and death introduces granularity, which adds bias (as discussed below).
Gavrilov & Gavrilova (2011) examined single birth-year cohorts from the United States Death Master File , using the method of extinct generations, and found that the effect disappeared if various distorting factors were removed. Specifically, they conclude that mortality deceleration is negligible up to the age of 106 in the population studied (beyond this point, reliable data were unavailable) and that the Gompertz law is a good fit, with previous observations of deceleration being spurious, with various causes, discussed below.
Given that mortality deceleration in humans had been observed in various studies, but disappeared on careful analysis (of single-year cohorts in the US) in Gavrilov & Gavrilova (2011) , it is natural to ask what causes this discrepancy – why was mortality deceleration observed?
Gavrilov & Gavrilova (2011) propose several causes; notably, in each instance where such a factor is corrected or diminished, the fit with the Gompertz law becomes better.
Data quality:
Technical:
Methodology:
Several causes are proposed for late-life mortality: [ 1 ]
Late-life mortality deceleration can be modeled via modifications of the Gompertz law, using various logistic models .
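As an illustration, the sketch below compares the pure Gompertz hazard with a Kannisto-type logistic modification, which follows Gompertz at first and then decelerates toward a plateau; the parameter values are arbitrary assumptions, not fitted estimates.

```python
import math

def gompertz(x, a=1e-5, b=0.1):
    """Gompertz hazard: exponential rise with age x."""
    return a * math.exp(b * x)

def kannisto(x, a=1e-5, b=0.1):
    """Logistic (Kannisto-type) hazard: nearly Gompertz at younger ages,
    decelerating toward a plateau at the oldest ages."""
    g = a * math.exp(b * x)
    return g / (1 + g)

for age in (70, 85, 100, 110):
    print(age, round(gompertz(age), 4), round(kannisto(age), 4))
```

At age 85 the two curves differ by only a few percent; past age 100 the logistic hazard falls visibly below the Gompertz curve, which is the deceleration at issue.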
The rates of late-life mortality are important for pensions. For example, mortality rates in late life (after age 85) are of particular interest for the baby boom generation, which will reach this age starting in 2030, and for pension funding calculations.
Late-life mortality rates are of basic importance for understanding aging, both for organisms generally and for humans specifically. | https://en.wikipedia.org/wiki/Late-life_mortality_deceleration |
Late-stage functionalization (LSF) is a desired, chemical or biochemical, chemoselective transformation on a complex molecule to provide at least one analog in sufficient quantity and purity for a given purpose without needing the addition of a functional group that exclusively serves to enable said transformation. [ 1 ]
Molecular complexity is an intrinsic property of each molecule and frequently determines the synthetic effort to make it. [ 2 ] [ 3 ] LSF can significantly diminish this synthetic effort, and thus enables access to molecules that would otherwise be unavailable or too difficult to access. The requirements for LSF can be met by both C–H functionalization reactions and functional group manipulations. [ 1 ] LSF reactions are particularly relevant and often used in the fields of drug discovery and materials chemistry , [ 4 ] [ 5 ] [ 6 ] although no LSF has been implemented in a commercial process.
All LSF reactions are chemoselective but not every chemoselective reaction fulfills the requirements of the definition for LSF. [ 1 ] High chemoselectivity is required for a useful LSF with a predictable reaction outcome because complex molecules typically feature several distinct functional groups that need to be tolerated. In this sense, chemoselectivity is sometimes referred to as functional group tolerance. Furthermore, high chemoselectivity avoids often undesired over-functionalization of the valuable substrate, which is used as a limiting reagent in LSF reactions. [ 1 ]
Every C–H bond functionalization on a complex molecule classifies as LSF, except when a directing or activating group must be installed in a previous step of the synthesis to accomplish the transformation. For functional group manipulations, the distinction between LSF and functional-group-tolerant reactions is more subtle. For example, peptide bioconjugation reactions make use of the native functionality in amino acid side chains, and thus classify as LSF. In contrast, bioorthogonal 1,3-dipolar cycloadditions (see also copper-free click chemistry and Huisgen cycloaddition ) generally require prior introduction of azide or cycloalkyne functionalities to biomolecules. Hence, such transformations do not classify as LSF despite their excellent functional group tolerance. [ 1 ] [ 7 ] [ 8 ]
Site-selectivity, also called positional or regioselectivity , is generally desired but not a requirement for LSF reactions, because site-unselective LSF reactions can also be useful for special purposes. For example, site-unselective late-stage C–H functionalization reactions can provide quick access to several constitutional isomers of complex molecules relevant for biological testing in drug discovery . [ 1 ] [ 4 ] [ 5 ] [ 9 ] Site-selective reactions to access each possible constitutional isomer independently are scarce but highly desirable because cumbersome purification procedures are avoided, and other isomers are not produced as waste. Some LSF reactions provide one constitutional isomer in high selectivity based on innate substrate selectivity for a given reaction or based on catalyst control. The discovery of site-selective LSF reactions constitutes an important research objective in the field of synthetic methodology development. [ 1 ] [ 10 ] [ 11 ] [ 12 ]
Lateefah Durosinmi (born 1957 in Lagos ) is a Nigerian chemist and academic. She is a Professor at Obafemi Awolowo University in Ilé-Ifẹ̀ , Nigeria.
Lateefah Moyosore-Oluwa Adunni Durosinmi was born on 7 July 1957, on Lagos Island in Nigeria. Her father, Alhaji Tijani Akanni Kolawole Williams, was a sales manager and her mother was Madam Wusamot Abeni Kareem. Durosinmi was educated at the Patience Modern Girls’ (Private) School in Olowogbowo and then boarded at the Girls’ Secondary Grammar School in Gbagada. She married Muheez Durosinmi on 9 May 1981. [ 1 ]
Durosinmi attended the University of Ibadan and gained a BSc (Hons) in Chemistry in 1979. She then studied for a Master of Science in Analytical Chemistry , graduating in 1986. She took a PhD in Inorganic Chemistry at the Obafemi Awolowo University in Ilé-Ifẹ̀ , in 1992. [ 2 ] Her study focus was amino acids . [ 1 ]
Durosinmi began her career with the Lagos Water Corporation , then taught chemistry at Saint Anne’s School in Ibadan . In 1989, she took an appointment at the Obafemi Awolowo University in the chemistry department, where she is presently a Professor of Inorganic Chemistry. Between 2008 and 2016 she was also Acting Dean of Students. She visited Loughborough University as a postdoctoral research fellow from 1994 until 1995. [ 2 ]
Between 2005 and 2009, she was President of the Federation of Muslim Women’s Associations in Nigeria (FOMWAN). [ 2 ] Afterwards, a number of lectures and essays were released in her honour. [ 3 ] [ 1 ]
Durosinmi set up the Lateefah Moyosore Durosinmi Foundation (LMDF) in 2013, with the aim of supporting financially disadvantaged students and women setting up businesses. In 2019, she awarded scholarships to 33 students and grants to 15 women at a ceremony in Ibadan. She commented that "we must assist women to develop the society and as well as the youths to develop their talent". [ 4 ] In 2019, Professor Ashiata Bolatito Lanre-Abbas, who was the first female Muslim Professor at the University of Ibadan, gave the sixth Lateefah Moyosore Durosinmi Foundation lecture concerning economic recession at Obafemi Awolowo University. [ 5 ] | https://en.wikipedia.org/wiki/Lateefah_Durosinmi |
Latency refers to a short period of delay (usually measured in milliseconds ) between when an audio signal enters a system, and when it emerges. Potential contributors to latency in an audio system include analog-to-digital conversion , buffering , digital signal processing , transmission time , digital-to-analog conversion , and the speed of sound in the transmission medium .
Latency can be a critical performance metric in professional audio , including sound reinforcement systems , foldback systems (especially those using in-ear monitors ), live radio and television . Excessive audio latency has the potential to degrade call quality in telecommunications applications. Low-latency audio in computers is important for interactivity .
In all systems, latency can be said to consist of three elements: codec delay, playout delay and network delay.
Latency in telephone calls is sometimes referred to as mouth-to-ear delay ; the telecommunications industry also uses the term quality of experience (QoE). Voice quality is measured according to the ITU model; measurable quality of a call degrades rapidly where the mouth-to-ear delay latency exceeds 200 milliseconds. The mean opinion score (MOS) is also comparable in a near-linear fashion with the ITU's quality scale - defined in standards G.107, [ 1 ] : 800 G.108 [ 2 ] and G.109 [ 3 ] - with a quality factor R ranging from 0 to 100. An MOS of 4 ('Good') would have an R score of 80 or above; to achieve 100R requires an MOS exceeding 4.5.
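The near-linear relationship between R and MOS mentioned above can be made concrete with the commonly cited E-model conversion (a sketch; the polynomial is the standard G.107 mapping, but the boundary handling here is simplified):

```python
def r_to_mos(r):
    """Map the E-model rating factor R (0-100) to a mean opinion score."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

print(round(r_to_mos(80), 2))  # ~4.02, matching 'Good' at R = 80+
```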
The ITU and 3GPP groups end-user services into classes based on latency sensitivity: [ 4 ]
Similarly, the G.114 recommendation regarding mouth-to-ear delay indicates that most users are "very satisfied" as long as latency does not exceed 200 ms, with an according R of 90+. Codec choice also plays an important role; the highest quality (and highest bandwidth) codecs like G.711 are usually configured to incur the least encode-decode latency, so on a network with sufficient throughput sub-100 ms latencies can be achieved. G.711 at a bitrate of 64 kbit/s is the encoding method predominantly used on the public switched telephone network .
The AMR narrowband codec, used in GSM and UMTS networks, introduces latency in the encode and decode processes.
As mobile operators upgrade existing best-effort networks to support concurrent multiple types of service over all-IP networks, services such as Hierarchical Quality of Service ( H-QoS ) allow for per-user, per-service QoS policies to prioritise time-sensitive protocols like voice calls, and other wireless backhaul traffic. [ 5 ] [ 6 ] [ 7 ]
Another aspect of mobile latency is the inter-network handoff; as a customer on Network A calls a Network B customer the call must traverse two separate Radio Access Networks , two core networks, and an interlinking Gateway Mobile Switching Centre (GMSC) which performs the physical interconnecting between the two providers. [ 8 ]
With end-to-end QoS managed and assured rate connections, latency can be reduced to analogue PSTN/POTS levels. On a stable connection with sufficient bandwidth and minimal latency, VoIP systems typically have a minimum of 20 ms inherent latency. Under less ideal network conditions a 150 ms maximum latency is sought for general consumer use. [ 9 ] [ 10 ] Many popular videoconferencing systems rely on data buffering and data redundancy to cope with network jitter and packet loss. Measurements have shown that mouth-to-ear delays are between 160 and 300 ms over a 500-mile distance under average US network conditions. Latency is a larger consideration when an echo is present, and systems must perform echo suppression and cancellation . [ 11 ]
Latency can be a particular problem in audio platforms on computers. Supported interface optimizations reduce the delay down to times that are too short for the human ear to detect. By reducing buffer sizes, latency can be reduced. [ 12 ] A popular optimization solution is Steinberg's ASIO , which bypasses the audio platform, and connects audio signals directly to the sound card's hardware. Many professional and semi-professional audio applications utilize the ASIO driver, allowing users to work with audio in real time. [ 13 ] Pro Tools HD offers a low latency system similar to ASIO. Pro Tools 10 and 11 are also compatible with ASIO interface drivers.
The Linux realtime kernel [ 14 ] is a modified kernel that alters the standard timer frequency the Linux kernel uses and gives all processes or threads the ability to have realtime priority. This means that a time-critical process like an audio stream can get priority over another, less-critical process like network activity. This is also configurable per user (for example, the processes of user "tux" could have priority over processes of user "nobody" or over the processes of several system daemons ).
Many modern digital television receivers, set-top boxes and AV receivers use sophisticated audio processing, which can create a delay between the time when the audio signal is received and the time when it is heard on the speakers. Since TVs also introduce delays in processing the video signal, this can result in the two signals being sufficiently synchronized to be unnoticeable by the viewer. However, if the difference between the audio and video delay is significant, the effect can be disconcerting. Some systems have a lip sync setting that allows the audio lag to be adjusted to synchronize with the video, and others may have advanced settings where some of the audio processing steps can be turned off.
Audio lag is also a significant detriment in rhythm games , where precise timing is required to succeed. Most of these games have a lag calibration setting whereupon the game will adjust the timing windows by a certain number of milliseconds to compensate. In these cases, the notes of a song will be sent to the speakers before the game even receives the required input from the player in order to maintain the illusion of rhythm. Games that rely upon musical improvisation , such as Rock Band drums or DJ Hero , can still suffer tremendously, as the game cannot predict what the player will hit in these cases, and excessive lag will still create a noticeable delay between hitting notes, and hearing them play.
Audio latency can be experienced in broadcast systems where someone is contributing to a live broadcast over a satellite or similar link with high delay. The person in the main studio has to wait for the contributor at the other end of the link to react to questions. Latency in this context could be between several hundred milliseconds and a few seconds. Dealing with audio latencies as high as this takes special training in order to make the resulting combined audio output reasonably acceptable to the listeners. Wherever practical, it is important to try to keep live production audio latency low in order to keep the reactions and interchange of participants as natural as possible. A latency of 10 milliseconds or better is the target for audio circuits within professional production structures. [ 15 ]
Latency in live performance occurs naturally from the speed of sound . It takes sound about 3 milliseconds to travel 1 meter. Small amounts of latency occur between performers depending on how far apart they are from each other and from the stage monitors if these are used. This creates a practical limit to how far apart the artists in a group can be from one another. Stage monitoring extends that limit, as the audio signal travels at close to the speed of light through the cables that connect stage monitors.
Performers, particularly in large spaces, will also hear reverberation , or echo of their music, as the sound that projects from stage bounces off of walls and structures, and returns with latency and distortion. A primary purpose of stage monitoring is to provide artists with more primary sound so that they are not confused by the latency of these reverberations.
While analog audio equipment has no appreciable latency, digital audio equipment has latency associated with two general processes: conversion from one format to another, and digital signal processing (DSP) tasks such as equalization, compression and routing.
Digital conversion processes include analog-to-digital converters (ADC), digital-to-analog converters (DAC), and various changes from one digital format to another, such as from AES3 , which carries low-voltage electrical signals, to ADAT , an optical transport. Any such process takes a small amount of time to accomplish; typical latencies are in the range of 0.2 to 1.5 milliseconds, depending on sampling rate, software design and hardware architecture. [ 16 ]
Different audio signal processing operations such as finite impulse response (FIR) and infinite impulse response (IIR) filters take different mathematical approaches to the same end and can have different latencies. In addition, input and output sample buffering add delay. Typical latencies range from 0.5 to ten milliseconds with some designs having as much as 30 milliseconds of delay. [ 17 ]
Latency in digital audio equipment is most noticeable when a singer's voice is transmitted through their microphone, through digital audio mixing, processing and routing paths, then sent to their own ears via in-ear monitors or headphones. In this case, the singer's vocal sound is conducted to their own ear through the bones of the head, then through the digital pathway to their ears some milliseconds later. In one study, listeners found latency greater than 15 ms to be noticeable. Latency for other musical activities such as playing guitar does not have the same critical concern. Ten milliseconds of latency isn't as noticeable to a listener who is not hearing his or her own voice. [ 18 ]
In sound reinforcement for music or speech presentation in large venues, it is optimal to deliver sufficient sound volume to the back of the venue without resorting to excessive sound volumes near the front. One way for audio engineers to achieve this is to use additional loudspeakers placed at a distance from the stage but closer to the rear of the audience. Sound travels through air at the speed of sound (around 343 metres (1,125 ft) per second depending on air temperature and humidity). By measuring or estimating the difference in latency between the loudspeakers near the stage and the loudspeakers nearer the audience, the audio engineer can introduce an appropriate delay in the audio signal going to the latter loudspeakers, so that the wavefronts from near and far loudspeakers arrive at the same time. Because of the Haas effect an additional 15 milliseconds can be added to the delay time of the loudspeakers nearer the audience, so that the stage's wavefront reaches them first, to focus the audience's attention on the stage rather than the local loudspeaker. The slightly later sound from delayed loudspeakers simply increases the perceived sound level without negatively affecting localization. | https://en.wikipedia.org/wiki/Latency_(audio) |
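The delay calculation described above reduces to distance over the speed of sound plus the optional Haas offset. A minimal sketch (the 30 m distance is an assumed example, not a figure from the source):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def fill_speaker_delay_ms(extra_distance_m, haas_offset_ms=15.0):
    """Delay for a loudspeaker nearer the audience so the stage wavefront
    arrives first: propagation time for the extra path plus a Haas offset."""
    return extra_distance_m / SPEED_OF_SOUND_M_S * 1000.0 + haas_offset_ms

# A delay tower 30 m closer to the rear of the audience than the stage:
print(round(fill_speaker_delay_ms(30.0), 1))  # ~102.5 ms
```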
Latency , from a general point of view, is a time delay between the cause and the effect of some physical change in the system being observed. Lag , as it is known in gaming circles , refers to the latency between the input to a simulation and the visual or auditory response, often occurring because of network delay in online games. [ 1 ] The original meaning of “latency”, as used widely in psychology, medicine and most other disciplines, derives from “latent”, a word of Latin origin meaning “hidden”. [ 2 ] Its different and relatively recent meaning (this topic) of “lateness” or “delay” appears to derive from its superficial similarity to the word “late”, from the old English “laet”. [ 3 ]
Latency is physically a consequence of the limited velocity at which any physical interaction can propagate. The magnitude of this velocity is always less than or equal to the speed of light . Therefore, every physical system with any physical separation (distance) between cause and effect will experience some sort of latency, regardless of the nature of the stimulation to which it has been exposed.
The precise definition of latency depends on the system being observed or the nature of the simulation. In communications , the lower limit of latency is determined by the medium being used to transfer information. In reliable two-way communication systems, latency limits the maximum rate at which information can be transmitted, as there is often a limit on the amount of information that is in-flight at any given moment. Perceptible latency has a strong effect on user satisfaction and usability in the field of human–machine interaction . [ 4 ]
Online games are sensitive to latency ( lag ), since fast response times to new events occurring during a game session are rewarded while slow response times may carry penalties. Due to a delay in transmission of game events, a player with a high latency internet connection may show slow responses in spite of appropriate reaction time . This gives players with low-latency connections a technical advantage.
Joel Hasbrouck and Gideon Saar (2011) measure latency to execute financial transactions based on three components: the time it takes for information to reach the trader, execution of the trader's algorithms to analyze the information and decide a course of action, and the generated action to reach the exchange and get implemented. Hasbrouck and Saar contrast this with the way in which latencies are measured by many trading venues that use much more narrow definitions, such as the processing delay measured from the entry of the order (at the vendor's computer) to the transmission of an acknowledgment (from the vendor's computer). [ 5 ] Trading using computers has developed to the point where millisecond improvements in network speeds offer a competitive advantage for financial institutions. [ 6 ]
Network latency in a packet-switched network is measured as either one-way (the time from the source sending a packet to the destination receiving it), or round-trip delay time (the one-way latency from the source to the destination plus the one-way latency from the destination back to the source). Round-trip latency is more often quoted, because it can be measured from a single point. Many software platforms provide a service called ping that can be used to measure round-trip latency. Ping uses the Internet Control Message Protocol (ICMP) echo request which causes the recipient to send the received packet as an immediate response, thus it provides a rough way of measuring round-trip delay time. Ping cannot perform accurate measurements, [ 7 ] principally because ICMP is intended only for diagnostic or control purposes, and differs from real communication protocols such as TCP . Furthermore, routers and internet service providers might apply different traffic shaping policies to different protocols. [ 8 ] [ 9 ] For more accurate measurements it is better to use specific software, for example: hping , Netperf or Iperf .
However, in a non-trivial network, a typical packet will be forwarded over multiple links and gateways, each of which will not begin to forward the packet until it has been completely received. In such a network, the minimal latency is the sum of the transmission delay of each link, plus the forwarding latency of each gateway. In practice, minimal latency also includes queuing and processing delays. Queuing delay occurs when a gateway receives multiple packets from different sources heading toward the same destination. Since typically only one packet can be transmitted at a time, some of the packets must queue for transmission, incurring additional delay. Processing delays are incurred while a gateway determines what to do with a newly received packet. Bufferbloat can also cause increased latency that is an order of magnitude or more. The combination of propagation, serialization, queuing, and processing delays often produces a complex and variable network latency profile.
Latency limits total throughput in reliable two-way communication systems as described by the bandwidth-delay product .
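A worked example of the bandwidth-delay product (the link figures are assumptions for illustration):

```python
def bandwidth_delay_product_bytes(bandwidth_bit_s, rtt_s):
    """Amount of data that must be in flight to keep a reliable
    two-way link fully utilized."""
    return bandwidth_bit_s * rtt_s / 8

# A 100 Mbit/s link with a 40 ms round-trip time needs ~500 kB in flight:
print(bandwidth_delay_product_bytes(100e6, 0.040))  # 500000.0 bytes
```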
Latency in optical fiber is largely a function of the speed of light, which is 299,792,458 m/s in a vacuum; this equates to a latency of 3.33 μs for every kilometer of path length. The index of refraction of most fiber optic cables is about 1.5, meaning that light travels about 1.5 times as fast in a vacuum as it does in the cable. This works out to about 5.0 μs of latency for every kilometer. In shorter metro networks, higher latency can be experienced due to extra distance in building risers and cross-connects. To calculate the latency of a connection, one has to know the distance traveled by the fiber, which is rarely a straight line, since it has to traverse geographic contours and obstacles, such as roads and railway tracks, as well as other rights-of-way.
Due to imperfections in the fiber, light degrades as it is transmitted through it. For distances of greater than 100 kilometers, amplifiers or regenerators are deployed. Latency introduced by these components needs to be taken into account.
Satellites in geostationary orbits are far enough away from Earth that communication latency becomes significant – about a quarter of a second for a trip from one ground-based transmitter to the satellite and back to another ground-based transmitter; close to half a second for two-way communication from one Earth station to another and then back to the first. Low Earth orbit is sometimes used to cut this delay, at the expense of more complicated satellite tracking on the ground and requiring more satellites in the satellite constellation to ensure continuous coverage.
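The propagation latencies quoted in the two preceding paragraphs follow directly from the speed of light; a small sketch (geostationary altitude taken as about 35,786 km):

```python
C_VACUUM_KM_S = 299_792.458  # speed of light in a vacuum, km/s

def one_way_latency_ms(distance_km, refractive_index=1.0):
    """One-way propagation delay over a path at c divided by the index."""
    return distance_km * refractive_index / C_VACUUM_KM_S * 1000

print(one_way_latency_ms(1, 1.5))       # ~0.005 ms, i.e. ~5 us/km in fiber
print(2 * one_way_latency_ms(35_786))   # ~239 ms ground-satellite-ground
```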
Audio latency is the delay between when an audio signal enters and when it emerges from a system. Potential contributors to latency in an audio system include analog-to-digital conversion , buffering , digital signal processing , transmission time , digital-to-analog conversion and the speed of sound in air.
Video latency refers to the degree of delay between the time a transfer of a video stream is requested and the actual time that transfer begins. Networks that exhibit relatively small delays are known as low-latency networks, while their counterparts are known as high-latency networks.
Any individual workflow within a system of workflows can be subject to some type of operational latency. It may even be the case that an individual system may have more than one type of latency, depending on the type of participant or goal-seeking behavior. This is best illustrated by the following two examples involving air travel .
From the point of view of a passenger, latency can be described as follows. Suppose John Doe flies from London to New York . The latency of his trip is the time it takes him to go from his house in England to the hotel he is staying at in New York. This is independent of the throughput of the London-New York air link – whether there were 100 passengers a day making the trip or 10000, the latency of the trip would remain the same.
From the point of view of flight operations personnel, latency can be entirely different. Consider the staff at the London and New York airports. Only a limited number of planes are able to make the transatlantic journey, so when one lands they must prepare it for the return trip as quickly as possible. It might take, for example:
Assuming the above are done consecutively, minimum plane turnaround time is:
However, cleaning, refueling and loading the cargo can be done at the same time. Passengers can only be loaded after cleaning is complete. The reduced latency, then, is:
The people involved in the turnaround are interested only in the time it takes for their individual tasks. When all of the tasks are done at the same time, however, it is possible to reduce the latency to the length of the longest task. If some steps have prerequisites, it becomes more difficult to perform all steps in parallel. In the example above, the requirement to clean the plane before loading passengers results in a minimum latency longer than any single task.
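The turnaround reasoning can be made concrete with a critical-path sketch. The article's own task durations are not reproduced above, so the figures here are hypothetical:

```python
# Hypothetical task durations in minutes.
tasks = {"cleaning": 30, "refueling": 40, "cargo": 35, "passengers": 25}

# Done consecutively, turnaround is the sum of every task:
sequential = sum(tasks.values())

# Done in parallel, turnaround is bounded by the longest prerequisite
# chain: passengers can only board after cleaning finishes.
parallel = max(tasks["cleaning"] + tasks["passengers"],
               tasks["refueling"],
               tasks["cargo"])

print(sequential, parallel)  # 130 vs 55 minutes
```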
Any mechanical process encounters limitations modeled by Newtonian physics . The behavior of disk drives provides an example of mechanical latency. Here, it is the sum of the seek time , for the actuator arm to be positioned above the appropriate track, and the rotational latency, for the data encoded on a platter to rotate from its current position to a position under the disk read-and-write head .
Computers run instructions in the context of a process . In the context of computer multitasking , the execution of the process can be postponed if other processes are also executing. In addition, the operating system can schedule when to perform the action that the process is commanding. For example, suppose a process commands that a computer card's voltage output be set high-low-high-low and so on at a rate of 1000 Hz. The operating system schedules the process for each transition (high-low or low-high) based on a hardware clock such as the High Precision Event Timer . The latency is the delay between the events generated by the hardware clock and the actual transitions of voltage from high to low or low to high.
Many desktop operating systems have performance limitations that create additional latency. The problem may be mitigated with real-time extensions and patches such as PREEMPT RT .
On embedded systems, the real-time execution of instructions is often supported by a real-time operating system .
Note that in software systems , benchmarking against "average" and "median" latency can be misleading because a few outliers can distort them. Instead, software architects and software developers should use the "99th percentile". [ 10 ]
In simulation applications, latency refers to the time delay, often measured in milliseconds , between initial input and output clearly discernible to the simulator trainee or simulator subject. Latency is sometimes also called transport delay . Some authorities distinguish between latency and transport delay by using the term latency in the sense of the extra time delay of a system over and above the reaction time of the vehicle being simulated, but this requires detailed knowledge of the vehicle dynamics and can be controversial.
In simulators with both visual and motion systems, it is particularly important that the latency of the motion system not be greater than that of the visual system, or symptoms of simulator sickness may result. This is because, in the real world, motion cues are those of acceleration and are quickly transmitted to the brain, typically in less than 50 milliseconds; this is followed some milliseconds later by a perception of change in the visual scene. The visual scene change is essentially one of change of perspective or displacement of objects such as the horizon, which takes some time to build up to discernible amounts after the initial acceleration which caused the displacement. A simulator should, therefore, reflect the real-world situation by ensuring that the motion latency is equal to or less than that of the visual system, and not the other way round.
Latent heat (also known as latent energy or heat of transformation ) is energy released or absorbed, by a body or a thermodynamic system , during a constant-temperature process—usually a first-order phase transition , like melting or condensation.
Latent heat can be understood as hidden energy which is supplied or extracted to change the state of a substance without changing its temperature or pressure. This includes the latent heat of fusion (solid to liquid), the latent heat of vaporization (liquid to gas) and the latent heat of sublimation (solid to gas). [ 1 ] [ 2 ]
The term was introduced around 1762 by Scottish chemist Joseph Black . Black used the term in the context of calorimetry where a heat transfer caused a volume change in a body while its temperature was constant.
In contrast to latent heat, sensible heat is energy transferred as heat , with a resultant temperature change in a body.
The terms sensible heat and latent heat refer to energy transferred between a body and its surroundings, defined by the occurrence or non-occurrence of temperature change; they depend on the properties of the body. Sensible heat is sensed or felt in a process as a change in the body's temperature. Latent heat is energy transferred in a process without change of the body's temperature, for example, in a phase change (solid/liquid/gas).
Both sensible and latent heats are observed in many processes of transfer of energy in nature. Latent heat is associated with the change of phase of atmospheric or ocean water, vaporization , condensation , freezing or melting , whereas sensible heat is energy transferred that is evident in change of the temperature of the atmosphere or ocean, or ice, without those phase changes, though it is associated with changes of pressure and volume.
The original usage of the term, as introduced by Black, was applied to systems that were intentionally held at constant temperature. Such usage referred to latent heat of expansion and several other related latent heats. These latent heats are defined independently of the conceptual framework of thermodynamics. [ 3 ]
When a body is heated at constant temperature by thermal radiation in a microwave field, for example, it may expand by an amount described by its latent heat with respect to volume or latent heat of expansion , or increase its pressure by an amount described by its latent heat with respect to pressure . [ 4 ]
Latent heat is energy released or absorbed by a body or a thermodynamic system during a constant-temperature process. Two common forms of latent heat are latent heat of fusion ( melting ) and latent heat of vaporization ( boiling ). These names describe the direction of energy flow when changing from one phase to the next: from solid to liquid, and liquid to gas.
In both cases the change is endothermic , meaning that the system absorbs energy. For example, when water evaporates, an input of energy is required for the water molecules to overcome the forces of attraction between them and make the transition from water to vapor.
If the vapor then condenses to a liquid on a surface, then the vapor's latent energy absorbed during evaporation is released as the liquid's sensible heat onto the surface.
The large value of the enthalpy of condensation of water vapor is the reason that steam is a far more effective heating medium than boiling water, and is more hazardous.
In meteorology , latent heat flux is the flux of energy from the Earth's surface to the atmosphere that is associated with evaporation or transpiration of water at the surface and subsequent condensation of water vapor in the troposphere . It is an important component of Earth's surface energy budget. Latent heat flux has commonly been measured with the Bowen ratio technique or, since the mid-1900s, by the eddy covariance method.
In 1748, an account was published in The Edinburgh Physical and Literary Essays of an experiment by the Scottish physician and chemist William Cullen . Cullen had used an air pump to lower the pressure in a container with diethyl ether . Although no heat was withdrawn from the ether, the ether boiled and its temperature decreased. [ 5 ] [ 6 ] And in 1758, on a warm day in Cambridge , England, Benjamin Franklin and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. [ 7 ] With each subsequent evaporation , the thermometer read a lower temperature, eventually reaching 7 °F (−14 °C). Another thermometer showed that the room temperature was constant at 65 °F (18 °C). In his letter Cooling by Evaporation , Franklin noted that, "One may see the possibility of freezing a man to death on a warm summer's day." [ 8 ]
The English word latent comes from Latin latēns , meaning lying hidden . [ 9 ] [ 10 ] The term latent heat was introduced into calorimetry around 1750 by Joseph Black , who had been commissioned by producers of Scotch whisky , in search of ideal quantities of fuel and water for their distilling process, to study system changes, such as of volume and pressure, when the thermodynamic system was held at constant temperature in a thermal bath.
It was known that when the air temperature rises above freezing—air then becoming the obvious heat source—snow melts very slowly and the temperature of the melted snow is close to its freezing point. [ 5 ] In 1757, Black began to investigate whether heat was therefore required for the melting of a solid, independent of any rise in temperature. As far as Black knew, the general view at that time was that melting was inevitably accompanied by a small increase in temperature, and that no additional heat was needed beyond what this increase in temperature would require in itself. Soon, however, Black was able to show that much more heat was required during melting than could be explained by the increase in temperature alone. [ 11 ] [ 12 ] He was also able to show that heat is released by a liquid during its freezing; again, much more than could be explained by the decrease of its temperature alone. [ 13 ]
Black would compare the change in temperature of two identical quantities of water, heated by identical means, one of which had, say, been melted from ice, whereas the other had been heated from the merely cold liquid state. By comparing the resulting temperatures, he could conclude that, for instance, the temperature of the sample melted from ice was 140 °F lower than that of the other sample. Melting the ice thus absorbed 140 "degrees of heat" that could not be measured by the thermometer, yet needed to be supplied; it was therefore "latent" (hidden). Black also deduced that as much latent heat as was supplied into boiling the distillate (thus giving the quantity of fuel needed) also had to be absorbed to condense it again (thus giving the cooling water required). [ 14 ]
In 1762, Black announced the following research and results to a society of professors at the University of Glasgow. [ 15 ] Black had placed equal masses of ice at 32 °F (0 °C) and water at 33 °F (0.6 °C) respectively in two identical, well separated containers. The water and the ice were both evenly heated to 40 °F by the air in the room, which was at a constant 47 °F (8 °C). The water had therefore received 40 – 33 = 7 “degrees of heat”. The ice had been heated for 21 times longer and had therefore received 7 × 21 = 147 “degrees of heat”. [ a ] The temperature of the ice had increased by 8 °F. The ice had thus absorbed 8 “degrees of heat”, which Black called sensible heat , manifest as a temperature increase, which could be felt and measured. In addition to that, 147 – 8 = 139 “degrees of heat” were absorbed as latent heat , manifest as phase change rather than as temperature change. [ 16 ] [ 17 ]
Black next showed that a water temperature of 176 °F was needed to melt an equal mass of ice until it was all 32 °F. So now 176 – 32 = 144 “degrees of heat” seemed to be needed to melt the ice. The modern value for the heat of fusion of ice would be 143 “degrees of heat” on the same scale (79.5 “degrees of heat Celsius”). [ 18 ] [ 15 ]
Finally, Black increased the temperature of a mass of water, then vaporized an equal mass of water by even heating. He showed that 830 “degrees of heat” was needed for the vaporization; again based on the time required. The modern value for the heat of vaporization of water would be 967 “degrees of heat” on the same scale. [ 19 ]
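The arithmetic in Black's experiments above is simple enough to restate directly; the short sketch below reproduces it, using only the figures quoted in the passages.

```python
# Reproducing Black's "degrees of heat" bookkeeping from the text above.
heat_received_by_water = 40 - 33            # water: 7 degrees of heat
heat_received_by_ice   = 7 * 21             # ice heated 21x longer -> 147
sensible_heat          = 40 - 32            # ice warmed from 32 F to 40 F
latent_heat            = heat_received_by_ice - sensible_heat  # 139, hidden in melting

melting_experiment     = 176 - 32           # 144 degrees to melt an equal mass of ice
print(latent_heat, melting_experiment)      # 139 144 (modern value: 143)
```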
Later, James Prescott Joule characterised latent energy as the energy of interaction in a given configuration of particles, i.e. a form of potential energy , and the sensible heat as an energy that was indicated by the thermometer, [ 20 ] relating the latter to thermal energy .
A specific latent heat ( L ) expresses the amount of energy in the form of heat ( Q ) required to completely effect a phase change of a unit of mass ( m ), usually 1 kg , of a substance as an intensive property :

L = Q / m
Intensive properties are material characteristics and are not dependent on the size or extent of the sample. Commonly quoted and tabulated in the literature are the specific latent heat of fusion and the specific latent heat of vaporization for many substances.
From this definition, the latent heat for a given mass of a substance is calculated by

Q = m L

where Q is the amount of energy released or absorbed during the change of phase of the substance, m is the mass of the substance, and L is the specific latent heat for the particular substance.
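As a worked instance of Q = mL, the snippet below computes the heat needed to melt a given mass of ice at 0 °C; the specific latent heat used is the commonly tabulated value for water ice (about 334 kJ/kg).

```python
# Heat absorbed in melting ice at a constant 0 deg C, via Q = m * L.
L_FUSION_WATER = 334_000   # J/kg, commonly tabulated latent heat of fusion
mass_kg = 0.5              # kg of ice at 0 deg C

Q = mass_kg * L_FUSION_WATER   # energy absorbed with no temperature change
print(f"{Q:,.0f} J")           # 167,000 J to melt 0.5 kg of ice
```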
The following table shows the specific latent heats and change of phase temperatures (at standard pressure) of some common fluids and gases. [ citation needed ]
The specific latent heat of condensation of water in the temperature range from −25 °C to 40 °C is approximated by the following empirical cubic function:

L_water(T) ≈ 2500.8 − 2.36T + 0.0016T² − 0.00006T³ (in J/g),

where the temperature T is taken to be the numerical value in °C.
For sublimation and deposition from and into ice, the specific latent heat is almost constant in the temperature range from −40 °C to 0 °C and can be approximated by the following empirical quadratic function:

L_ice(T) ≈ 2834.1 − 0.29T − 0.004T² (in J/g).
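These fits are easy to evaluate directly; the sketch below codes them up with temperatures in °C and results in J/g (equivalently kJ/kg). The coefficients are the commonly quoted approximations reconstructed above, so treat them as assumptions to verify against a primary source.

```python
# Empirical fits for water's latent heats (T in deg C, result in J/g = kJ/kg).
# Coefficients are the commonly quoted approximations; verify before relying on them.
def latent_heat_condensation_water(T):
    """Cubic fit, roughly valid for -25 <= T <= 40 deg C."""
    return 2500.8 - 2.36*T + 0.0016*T**2 - 0.00006*T**3

def latent_heat_sublimation_ice(T):
    """Quadratic fit, roughly valid for -40 <= T <= 0 deg C."""
    return 2834.1 - 0.29*T - 0.004*T**2

print(latent_heat_condensation_water(0.0))   # ~2500.8 J/g at 0 deg C
print(latent_heat_sublimation_ice(-10.0))    # ~2836.6 J/g
```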
As the temperature (or pressure) rises to the critical point , the latent heat of vaporization falls to zero. | https://en.wikipedia.org/wiki/Latent_heat |
Latent human error is a term used in safety work and accident prevention, especially in aviation, to describe human errors that people are disposed to make because of the way systems or routines are designed. Latent human errors are frequently components in the causes of accidents. Because the error is latent, it may not materialize immediately and does not cause immediate or obvious damage; discovering latent errors is therefore difficult and requires a systematic approach. [ 1 ] Latent human error is often discussed in aviation incident investigation, and contributes to over 70% of accidents. [ 2 ]
By gathering data about errors made, then collating, grouping and analyzing them, it can be determined whether a disproportionate amount of similar errors are being made. If this is the case, a contributing factor may be disharmony between the respective systems/routines and human nature or propensities . The routines or systems can then be analyzed, potential problems identified, and amendments made if necessary, in order to prevent future errors, incidents or accidents from occurring.
| https://en.wikipedia.org/wiki/Latent_human_error |
The latent internal energy of a system is the internal energy a system requires to undergo a phase transition . Its value is specific to the substance or mix of substances in question, and can also vary with temperature and pressure. Generally speaking, the value differs with the type of phase change being accomplished: examples include the latent internal energy of vaporization (liquid to vapor), the latent internal energy of crystallization (liquid to solid) and the latent internal energy of sublimation (solid to vapor). These values are usually expressed in units of energy per mole or per mass, such as J /mol or BTU /lb. Often a negative sign is used to represent energy being withdrawn from the system, while a positive value represents energy being added to the system. [ 1 ]
For every type of latent internal energy there is an opposite. For example, the latent internal energy of freezing (liquid to solid) is equal to the negative of the latent internal energy of melting (solid to liquid).
| https://en.wikipedia.org/wiki/Latent_internal_energy |
Latent tuberculosis ( LTB ), also called latent tuberculosis infection ( LTBI ), is when a person is infected with Mycobacterium tuberculosis but does not have active tuberculosis (TB). Active tuberculosis can be contagious while latent tuberculosis is not; it is therefore not possible to get TB from someone with latent tuberculosis. Various treatment regimens are in use for latent tuberculosis, and they generally need to be taken for several months.
As of 2023, it is estimated that one quarter of the world's population has latent or active TB, [ 1 ] with TB estimated to newly infect 10.8 million people per year. [ 2 ]
The spread of tuberculosis is uneven throughout the world, with approximately 80% of the population in many Asian and African countries testing positive on tuberculin tests, while only 5–10% of the US population tests positive. [ 3 ]
It is not possible to catch tuberculosis from someone with latent tuberculosis. In people who develop active TB of the lungs, also called pulmonary tuberculosis, the Mantoux test will often be positive. They are contagious and capable of passing the bacteria to others. TB can be transmitted by exhalation : tiny droplets exhaled by someone with active tuberculosis can remain suspended in the air for several hours if environmental conditions allow, and another person could then inhale the droplets and potentially become infected with tuberculosis. However, this is unlikely to occur from brief exposure, such as encountering TB in a store or in brief social contact; [ 4 ] prolonged exposure (over a couple of months by, for example, living with a relative with TB [ 5 ] ) is generally required for tuberculosis infection. [ 6 ]
Statistics show that approximately one-third of people exposed to pulmonary TB become infected with the bacteria, but only one in ten of these infected people develops active TB disease during their lifetimes. [ 7 ] People with latent TB are not contagious and will not feel sick. However, latent infections can become active TB infections, which are contagious, in the future. [ 8 ]
In some countries, like Canada and the United States, people have medical privacy or "confidentiality" and do not have to reveal their active tuberculosis case to family, friends, or co-workers; therefore, a person who gets latent tuberculosis may never know who had the active case of tuberculosis that caused their diagnosis. Only through required testing (mandatory in some jobs) [ 9 ] or by developing symptoms of active tuberculosis and visiting a medical doctor who does testing will a person know they have been exposed. Because tuberculosis is not common in the United States, doctors may not suspect tuberculosis, and therefore may not test. If a person has symptoms of tuberculosis, it is wise to be tested. [ citation needed ]
Persons with diabetes have a significantly higher chance of converting from latent to active tuberculosis, [ 10 ] and mortality of tuberculosis may be greater in diabetic patients. [ 10 ] Persons with both HIV and latent tuberculosis have a 10% chance of developing active tuberculosis every year. "HIV infection is the greatest known risk factor for the progression of latent M. tuberculosis infection to active TB. In many African countries, 30–60% of all new TB cases occur in people with HIV, and TB is the leading cause of death globally for HIV-infected people." [ 11 ]
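To make the 10%-per-year figure above concrete, the sketch below compounds it over several years; the assumption of a constant, independent annual risk is an illustrative simplification, not an epidemiological model.

```python
# Cumulative reactivation risk implied by a constant 10%/year figure,
# assuming independence across years (an illustrative simplification).
annual_risk = 0.10
for years in (1, 5, 10):
    cumulative = 1 - (1 - annual_risk) ** years
    print(f"{years:>2} yr: {cumulative:.0%}")   # 10%, 41%, 65%
```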
Once a person has been diagnosed with latent tuberculosis (LTBI) and a medical doctor confirms no active signs of infection or identifiable M. tuberculosis bacteria in the body, the person should remain alert to symptoms of active tuberculosis for the remainder of their life. Even after completing the full course of medication, there is no guarantee that all of the M. tuberculosis bacteria have been killed. [ 12 ] [ 13 ] "When a person develops active TB (disease), the symptoms (cough, fever, night sweats , weight loss etc.) may be mild for many months. This can lead to delays in seeking care, and results in transmission of the bacteria to others." [ 14 ]
Tuberculosis bacteria may exist in latent form both inside and outside of the lungs. In many cases, they may be latent in another organ, without any remaining bacteria in the lungs. If the reactivation of tuberculosis occurs in the brain, organs, kidneys, joints, or other areas, the symptoms are often slow to progress, frequently being ignored by patients until they become severe. [ 15 ] [ 12 ] Extrapulmonary tuberculosis (tuberculosis outside of the lungs) can affect virtually any organ, with distinct manifestations depending on the site: tuberculosis in the spine causes back pain and potential paralysis; lymph node involvement presents as painless or occasionally painful swelling in the neck or armpits; meningeal infection triggers persistent headaches and altered consciousness; while urogenital tuberculosis leads to painful urination, blood in urine, and infertility. All of these may feature generalized symptoms like fever, night sweats, and weight loss, which, combined with their nonspecific presentation and lower amounts of bacteria, make them more difficult to diagnose than the classic pulmonary disease. [ 16 ] [ 12 ]
Situations in which tuberculosis may become reactivated are:
Compared to uninfected individuals, people with latent TB appear to have some (35–80%) protection against developing active TB after being exposed to M. tuberculosis in the environment. This kind of protection is called concomitant immunity . It seems to be due to the presence of tissue-resident memory T cells . [ 22 ]
There are two classes of tests commonly used to identify patients with latent tuberculosis: tuberculin skin tests and IFN-γ ( Interferon-gamma ) tests. [ citation needed ]
The skin tests currently include the following two: the Mantoux test and the Heaf test. [ citation needed ]
IFN-γ tests include the following three:
The tuberculin skin test (TST), in its first iteration the Mantoux test , was developed in 1908. Tuberculin (also called purified protein derivative or PPD) is a standardised dead extract of cultured TB, injected into the skin to measure the person's immune response to the bacteria. If a person has been exposed to the bacteria previously, they should express an immune reaction to the injection, usually a mild swelling or redness around the site. There have been two primary methods of TST: the Mantoux test, and the Heaf test. The Heaf test was discontinued in 2005 because the manufacturer deemed its production to be financially unsustainable, though it was previously preferred in the UK because it was felt to require less training to administer and to involve less inter-observer variation in its interpretation than the Mantoux test. The Mantoux test was the preferred test in the US, and is now the most widely used TST globally. [ citation needed ]
The Mantoux test is now standardised by the WHO . 0.1 ml of tuberculin solution, delivering a dose of 5 tuberculin units, is given by intradermal injection into the surface of the lower forearm ( subcutaneous injection results in false negatives). A waterproof ink mark is drawn around the injection site so as to avoid difficulty finding it later if the level of reaction is small. The test is read 48 to 72 hours later. [ 23 ] The area of induration (NOT of erythema ) is measured transversely across the forearm (left to right, not up and down) and recorded to the nearest millimetre. [ 24 ]
The Heaf test was first described in 1951. [ 25 ] The test uses a Heaf gun with disposable single-use heads; each head has six needles arranged in a circle. There are standard heads and pediatric heads: the standard head is used on all patients aged 2 years and older; the pediatric head is for infants under the age of 2. For the standard head, the needles protrude 2 mm when the gun is actuated; for the pediatric heads, the needles protrude 1 mm. Skin is cleaned with alcohol, then tuberculin (100,000 units/ml) is evenly smeared on the skin (about 0.1 ml); the gun is then applied to the skin and fired. The excess solution is then wiped off and a waterproof ink mark is drawn around the injection site. The test is read 2 to 7 days later. [ citation needed ]
The results of both tests are roughly equivalent as follows: [ citation needed ]
Tuberculin conversion is said to occur if a patient who has previously had a negative tuberculin skin test develops a positive tuberculin skin test at a later test. It indicates a change from negative to positive, and usually signifies a new infection. [ citation needed ]
The phenomenon of boosting is one way of obtaining a false positive test result. Theoretically, a person's ability to develop a reaction to the TST may decrease over time – for example, a person is infected with latent TB as a child, and is administered a TST as an adult. Because so much time has passed since an immune response to TB was last necessary, that person might give a negative test result. If so, there is a fairly reasonable chance that the TST triggers a hypersensitivity in the person's immune system – in other words, the TST reminds the person's immune system about TB, and the body overreacts to what it perceives as a reinfection. In this case, when that subject is given the test again (as is standard procedure, see above) they may have a significantly greater reaction to the test, giving a very strong positive; this is commonly misdiagnosed as tuberculin conversion. Boosting can also be triggered by receiving the BCG vaccine, as opposed to a proper infection. Although boosting can occur in any age group, the likelihood of the reaction increases with age. [ 26 ]
Boosting is only likely to be relevant if an individual is beginning to undergo periodic TSTs (health care workers, for example). In this case the standard procedure is called two-step testing. The individual is given their first test and in the event of a negative, given a second test in 1 to 3 weeks. This is done to combat boosting in situations where, had that person waited up to a year to get their next TST, they might still have a boosted reaction, and be misdiagnosed as a new infection. [ 27 ]
Here there is a difference in US and UK guidelines; in the US testers are told to ignore the possibility of false positive due to the BCG vaccine, as the BCG is seen as having waning efficacy over time. Therefore, the CDC urges that individuals be treated based on risk stratification regardless of BCG vaccination history, and if an individual receives a negative and then a positive TST they will be assessed for full TB treatment beginning with X-ray to confirm TB is not active and proceeding from there. [ 28 ] Conversely, the UK guidelines acknowledge the potential effect of the BCG vaccination, as it is mandatory and therefore a prevalent concern – though the UK shares the procedure of administering two tests, one week apart, and accepting the second one as the accurate result, they also assume that a second positive is indicative of an old infection (and therefore certainly LTBI) or the BCG itself. In the case of BCG vaccinations confusing the results, Interferon-γ (IFN-γ) tests may be used as they will not be affected by the BCG. [ citation needed ]
According to the U.S. guidelines, there are multiple size thresholds for declaring a positive result of latent tuberculosis from the Mantoux test: For testees from high-risk groups, such as those who are HIV positive, the cutoff is 5 mm of induration; for medium risk groups, 10 mm; for low-risk groups, 15 mm. The U.S. guidelines recommend that a history of previous BCG vaccination should be ignored. For details of tuberculin skin test interpretation, please refer to the CDC guidelines (reference given below).
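The risk-stratified cutoffs just described amount to a small decision rule; a minimal sketch follows, with simplified risk-group labels as the only assumption.

```python
# U.S.-style risk-stratified Mantoux cutoffs (5/10/15 mm of induration),
# as described above; risk-group labels are simplified for illustration.
CUTOFFS_MM = {"high": 5, "medium": 10, "low": 15}

def mantoux_positive(induration_mm: float, risk_group: str) -> bool:
    return induration_mm >= CUTOFFS_MM[risk_group]

print(mantoux_positive(12, "medium"))  # True: 12 mm meets the 10 mm cutoff
print(mantoux_positive(12, "low"))     # False: below the 15 mm cutoff
```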
The UK guidelines are formulated according to the Heaf test: In patients who have had BCG previously, latent TB is diagnosed if the Heaf test is grade 3 or 4 and there are no signs or symptoms of active TB; if the Heaf test is grade 0 or 1, then the test is repeated. In patients who have not had BCG previously, latent TB is diagnosed if the Heaf test is grade 2, 3 or 4 and there are no signs or symptoms of active TB. Repeat Heaf testing is not done in patients who have had BCG (because of the phenomenon of boosting). For details of tuberculin skin test interpretation, please refer to the BTS guidelines (references given below).
Given that the US recommendation is that prior BCG vaccination be ignored in the interpretation of tuberculin skin tests, false positives with the Mantoux test are possible as a result of: (1) having previously had a BCG (even many years ago), or (2) periodic testing with tuberculin skin tests. Having regular TSTs boosts the immunological response in those people who have previously had BCG, so these people will falsely appear to be tuberculin conversions. This may lead to treating more people than necessary, with the possible risk of those patients developing adverse drug reactions. However, as the Bacille Calmette-Guérin vaccine is not 100% effective, and is less protective in adults than in pediatric patients, not treating these patients could lead to a possible infection. The current [ when? ] US policy seems to reflect a desire to err on the side of safety.
The U.S. guidelines also allow for tuberculin skin testing in immunosuppressed patients (those with HIV , or who are on immunosuppressive drugs ), whereas the UK guidelines recommend that tuberculin skin tests should not be used for such patients because they are unreliable. [ citation needed ]
The role of IFN-γ tests is undergoing constant review, and various guidelines have been published with the option for revision as new data become available, including CDC guidance in MMWR and interim guidance from the UK Health Protection Agency .
There are currently two commercially available interferon-γ release assays (IGRAs): QuantiFERON-TB Gold and T-SPOT.TB . [ 29 ] These tests are not affected by prior BCG vaccination, and look for the body's response to specific TB antigens not present in other forms of mycobacteria and BCG ( ESAT-6 ). While these tests are new, they are now becoming available globally.
CDC:
CDC recommends that QFT-G may be used in all circumstances in which the TST is currently used, including contact investigations, evaluation of recent immigrants, and sequential-testing surveillance programs for infection control (e.g., those for health-care workers).
HPA Interim Guidance:
The HPA recommends the use of IGRA testing in health care workers, if available, in view of the importance of detecting latently infected staff who may go on to develop active disease and come into contact with immunocompromised patients and the logistical simplicity of IGRA testing.
In the early stages of a diagnosis, most medical practitioners assume that a case of latent tuberculosis involves the normal or regular strain of tuberculosis. It will therefore most commonly be treated with isoniazid (the most widely used treatment for latent tuberculosis). Only if the tuberculosis bacteria do not respond to the treatment will the medical practitioner begin to consider more virulent strains, requiring significantly longer and more thorough treatment regimens. [ citation needed ]
There are four types of tuberculosis recognized in the world today: [ citation needed ]
The treatment of latent tuberculosis infection (LTBI) is essential to controlling and eliminating TB by reducing the risk that TB infection will progress to disease. Latent tuberculosis will convert to active tuberculosis in 10% of cases (or more in immunocompromised patients). Taking medication for latent tuberculosis is recommended by many doctors. [ 33 ]
In the U.S., the standard treatment is nine months of isoniazid , but this regimen is not widely used outside of the US. [ citation needed ]
There is no agreement regarding terminology: the terms preventive therapy and chemoprophylaxis have been used for decades, and are preferred in the UK because the medication is given to people who have no disease and are currently well; the reason for giving medication is primarily to prevent people from becoming unwell. In the U.S., physicians talk about latent tuberculosis treatment because the medication does not actually prevent infection: the person is already infected and the medication is intended to prevent existing silent infection from becoming active disease. There are no convincing reasons to prefer one term over the other. [ citation needed ]
"Populations at increased risk of progressing to active infection once exposed: [ citation needed ]
It is essential that assessment to rule out active TB be carried out before treatment for LTBI is started. To give treatment for latent tuberculosis to someone with active tuberculosis is a serious error: the tuberculosis will not be adequately treated and there is a serious risk of developing drug-resistant strains of TB.
There are several treatment regimens currently in use:
A 2000 Cochrane review containing 11 double-blinded, randomized control trials and 73,375 patients examined six and 12 month courses of isoniazid (INH) for treatment of latent tuberculosis. HIV positive and patients currently or previously treated for tuberculosis were excluded. The main result was a relative risk (RR) of 0.40 (95% confidence interval (CI) 0.31 to 0.52) for development of active tuberculosis over two years or longer for patients treated with INH, with no significant difference between treatment courses of six or 12 months (RR 0.44, 95% CI 0.27 to 0.73 for six months, and 0.38, 95% CI 0.28 to 0.50 for 12 months). [ 40 ]
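For readers unfamiliar with relative risk, the sketch below shows the calculation behind a figure like RR = 0.40; the patient counts are hypothetical, chosen only to reproduce that ratio, and are not the review's data.

```python
# Relative risk (risk ratio) of the kind summarized above.
# The event counts are hypothetical, not the Cochrane review's data.
def relative_risk(events_treated, n_treated, events_control, n_control):
    """Incidence in the treated arm divided by incidence in the control arm."""
    return (events_treated / n_treated) / (events_control / n_control)

# 40 active-TB cases among 10,000 INH-treated patients vs. 100 cases among
# 10,000 untreated patients gives RR = 0.40, i.e. a 60% risk reduction.
print(round(relative_risk(40, 10_000, 100, 10_000), 2))  # 0.4
```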
A Cochrane systematic review published in 2013 evaluated four different alternative regimens to INH monotherapy for preventing active TB in HIV-negative people with latent tuberculosis infection. The evidence from this review found no difference between shorter regimens of rifampicin , or weekly directly observed rifapentine plus INH, compared to INH monotherapy in preventing active TB in HIV-negative people at risk of developing it. However, the review found that the shorter rifampicin regimen for four months and weekly directly observed rifapentine plus INH for three months "may have additional advantages of higher treatment completion and improved safety." The overall quality of evidence was low to moderate (as per GRADE criteria), and none of the included trials were conducted in LMIC nations with high TB transmission, so the findings might not be applicable to nations with high TB transmission. [ 41 ]
There is no guaranteed "cure" for latent tuberculosis. "People infected with TB bacteria have a lifetime risk of falling ill with TB..." [ 14 ] with those who have compromised immune systems, those with diabetes and those who use tobacco at greater risk. [ 14 ]
A person who has taken the complete course of isoniazid (or other full course prescription for tuberculosis) on a regular, timely schedule may have been cured. "Current standard therapy is isoniazid (INH) which reduces the risk of active TB by as much as 90 per cent (in patients with positive LTBI test results and fibrotic pulmonary lesions compatible with tuberculosis [ 35 ] ) if taken daily for 9 months." [ 40 ] However, if a person has not completed the medication exactly as prescribed, the "cure" is less likely, and the "cure" rate is directly proportional to following the prescribed treatment specifically as recommended. Furthermore, "If you don't take the medicine correctly and you become sick with TB a second time, the TB may be harder to treat if it has become drug resistant." [ 15 ] If a patient were to be cured in the strictest definition of the word, it would mean that every single bacterium in the system is removed or dead, and that person cannot get tuberculosis (unless re-infected). However, there is no test to assure that every single bacterium has been killed in a patient's system. As such, a person diagnosed with latent TB can safely assume that, even after treatment, they will carry the bacteria – likely for the rest of their lives. Furthermore, "It has been estimated that up to one-third of the world's population is infected with M. tuberculosis , and this population is an important reservoir for disease reactivation." [ 41 ] This means that in areas where TB is endemic treatment may be even less certain to "cure" TB, as reinfection could trigger activation of latent TB already present even in cases where treatment was followed completely. [ citation needed ]
There is controversy over whether people who test positive long after infection have a significant risk of developing the disease (without re-infection). Some researchers and public health officials have warned that this test-positive population is a "source of future TB cases" even in the US and other wealthy countries, and that this "ticking time bomb" should be a focus of attention and resources. [ 42 ]
On the other hand, Marcel Behr, Paul Edelstein, and Lalita Ramakrishnan reviewed studies concerning the concept of latent tuberculosis in order to determine whether tuberculosis-infected persons have life-long infection capable of causing disease at any future time. These studies, both published in the British Medical Journal ( BMJ ) in 2018 and 2019, show that the incubation period of tuberculosis is short, usually within months after infection, and very rarely more than two years after infection. [ 43 ] [ 44 ] They also show that more than 90% of people infected with M. tuberculosis for more than two years never develop tuberculosis even if their immune system is severely suppressed. [ 45 ] Immunologic tests for tuberculosis infection such as the tuberculin skin test and interferon gamma release assays (IGRA) only indicate past infection, with the majority of previously infected persons no longer capable of developing tuberculosis. Ramakrishnan told the New York Times that researchers "have spent hundreds of millions of dollars chasing after latency, but the whole idea that a quarter of the world is infected with TB is based on a fundamental misunderstanding." [ 46 ]
Writing in The Atlantic , science journalist Katherine J. Wu explains: [ 47 ]
If the bacteria were lingering, researchers would expect to see a big spike in disease late in life among people with positive skin tests, as their immune system naturally weakens. They would also expect to see a high rate of progression to full-blown TB among people who start taking immunosuppressive drugs or catch HIV. And yet, neither of those trends pans out: At most, some 5 to 10 percent of people who have tested positive by skin test and later sustain a blow to their immune system develop TB disease within about three to five years—a hint that, for almost everyone else, there may not be any MTB left. “If there were a slam-dunk experiment, that’s it,” William Bishai, a TB researcher at Johns Hopkins, told me.
The first BMJ article disputing widespread latency was accompanied by an editorial written by Dr. Soumya Swaminathan , Deputy Director-General of the World Health Organization , who endorsed the findings and called for more funding of TB research directed at the most heavily afflicted parts of the world, rather than disproportionate attention to a relatively minor problem that affects just the wealthy countries. [ 46 ]
The World Health Organization no longer endorses the concept that all those with immunologic evidence of past TB infection are currently infected and so are at risk of developing TB some time in the future. In 2022, the WHO issued corrigenda to its 2021 Global TB Report to clarify estimates on the worldwide burden of infected people. [ 48 ] These corrigenda deleted "About a quarter of the world's population is infected with M. tuberculosis " and replaced it with "About a quarter of the world's population has been infected with M. tuberculosis ." The corrigenda also removed the prior estimate of the lifetime risk of TB of 5 to 10% among those with evidence of past TB infection, indicating that they no longer have confidence in earlier estimates that a substantial percentage of those with positive immunologic test results will develop the disease.
This article incorporates public domain material from websites or documents of the Centers for Disease Control and Prevention . | https://en.wikipedia.org/wiki/Latent_tuberculosis |
Lateral bodies are structures that sit on the concave sides of the viral core of a poxvirus and are surrounded by a membrane . [ 1 ] They serve as immunomodulatory delivery packets and as membrane cloaking that helps poxviruses spread. [ 2 ] They were first visualized using electron microscopy in 1956, and shortly after it was shown that they detach from the viral core upon membrane fusion . [ 3 ] [ 4 ]
Lateral bodies are made up of at least three proteins: phosphoprotein F17, dual-specificity phosphatase H1 and the viral oxidoreductase G4. [ 5 ] F17 is the main structural protein and may play a role in modulating the cellular immune response through MAPK signaling pathways . [ 6 ] H1 dephosphorylates STAT1 to prevent nuclear transcription and block IFNy -induced immune signaling . [ 5 ] Finally, G4 is essential for viral morphogenesis. [ 5 ] Additionally, the proteins packed in lateral bodies are redox proteins, which modulate the host oxidative response, impacting early gene expression and virion production. [ 7 ] | https://en.wikipedia.org/wiki/Lateral_body |
A lateral flow test ( LFT ) [ 1 ] is an assay also known as a lateral flow immunochromatographic test (ICT) or rapid test . It is a simple device intended to detect the presence of a target substance in a liquid sample without the need for specialized and costly equipment. LFTs are widely used in medical diagnostics in the home, at the point of care, and in the laboratory. For instance, the home pregnancy test is an LFT that detects a specific hormone. These tests are simple and economical and generally show results in around five to thirty minutes. [ 2 ] Many lab-based applications increase the sensitivity of simple LFTs by employing additional dedicated equipment. [ 3 ] Because the target substance is often a biological antigen , many lateral flow tests are rapid antigen tests (RAT or ART).
LFTs operate on the same principles of affinity chromatography as the enzyme-linked immunosorbent assays ( ELISA ). In essence, these tests run the liquid sample along the surface of a pad with reactive molecules that show a visual positive or negative result. The pads are based on a series of capillary beds, such as pieces of porous paper, [ 4 ] microstructured polymer , [ 5 ] [ 6 ] or sintered polymer. [ 7 ] Each of these pads has the capacity to transport fluid (e.g., urine, blood, saliva) spontaneously. [ 8 ]
The sample pad acts as a sponge and holds an excess of sample fluid. Once soaked, the fluid flows to the second conjugate pad in which the manufacturer has stored freeze dried bio-active particles called conjugates (see below) in a salt–sugar matrix. The conjugate pad contains all the reagents required for an optimized chemical reaction between the target molecule (e.g., an antigen ) and its chemical partner (e.g., antibody ) that has been immobilized on the particle's surface. This marks target particles as they pass through the pad and continue across to the test and control lines. The test line shows a signal, often a color as in pregnancy tests. The control line contains affinity ligands which show whether the sample has flowed through and the bio-molecules in the conjugate pad are active. After passing these reaction zones, the fluid enters the final porous material, the wick, that simply acts as a waste container.
LFTs can operate as either competitive or sandwich assays .
LFTs derive from paper chromatography , which was developed in 1943 by Martin and Synge , [ 9 ] and elaborated in 1944 by Consden, Gordon and Martin. [ 10 ] [ 11 ] There was an explosion of activity in this field after 1945. [ 9 ] The ELISA technology was developed in 1971. [ 12 ] A set of LFT patents, including the litigated US 6,485,982 described below, were filed by Armkel LLC starting in 1988. [ 13 ]
In principle, any colored particle can be used, but latex (blue color) or nanometer-sized particles [ 14 ] of gold (red color) are most commonly used. The gold particles are red in color due to localized surface plasmon resonance . [ 15 ] Fluorescent [ 16 ] or magnetic [ 17 ] [ 18 ] labelled particles can also be used, but these require the use of an electronic reader to assess the test result.
Sandwich assays are generally used for larger analytes because they tend to have multiple binding sites. [ 19 ] As the sample migrates through the assay it first encounters a conjugate, which is an antibody specific to the target analyte labelled with a visual tag, usually colloidal gold. The antibodies bind to the target analyte within the sample and migrate together until they reach the test line. The test line also contains immobilized antibodies specific to the target analyte, which bind to the migrated analyte bound conjugate molecules. The test line then presents a visual change due to the concentrated visual tag, hence confirming the presence of the target molecules. The majority of sandwich assays also have a control line which will appear whether or not the target analyte is present to ensure proper function of the lateral flow pad. [ 2 ]
The rapid, low-cost sandwich-based assay is commonly used for home pregnancy tests which detect human chorionic gonadotropin , hCG, in the urine of pregnant women.
Competitive assays are generally used for smaller analytes since smaller analytes have fewer binding sites. [ 19 ] The sample first encounters antibodies to the target analyte labelled with a visual tag (colored particles). The test line contains the target analyte fixed to the surface. When the target analyte is absent from the sample, unbound antibody will bind to these fixed analyte molecules, meaning that a visual marker will show. Conversely, when the target analyte is present in the sample, it binds to the antibodies to prevent them binding to the fixed analyte in the test line, and thus no visual marker shows. This differs from sandwich assays in that no band means the analyte is present. [ 2 ] [ 19 ]
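A toy model can make the two readout logics concrete: in a sandwich assay the test-line signal rises with analyte, while in a competitive assay it falls. The sketch below is an illustration in arbitrary units, not a binding-kinetics model.

```python
# Toy readout model: sandwich signal rises with analyte, competitive falls.
# Arbitrary illustrative units; not a model of real binding kinetics.
def test_line_signal(analyte: float, assay: str, capacity: float = 1.0) -> float:
    bound = min(analyte, capacity)       # the test line saturates
    if assay == "sandwich":
        return bound                     # more analyte -> stronger line
    if assay == "competitive":
        return capacity - bound          # more analyte -> weaker line
    raise ValueError(f"unknown assay: {assay}")

for a in (0.0, 0.5, 1.0):
    print(a, test_line_signal(a, "sandwich"), test_line_signal(a, "competitive"))
# At full analyte the competitive line vanishes: "no band means present".
```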
Most LFTs are intended to operate on a purely qualitative basis. However, it is possible to measure the intensity of the test line to determine the quantity of analyte in the sample. Handheld diagnostic devices known as lateral flow readers are used by several companies to provide a fully quantitative assay result. By utilizing unique wavelengths of light for illumination in conjunction with either CMOS or CCD detection technology, a signal rich image can be produced of the actual test lines. Using image processing algorithms specifically designed for a particular test type and medium, line intensities can then be correlated with analyte concentrations. One such handheld lateral flow device platform is made by Detekt Biomedical L.L.C. [ 20 ] Alternative non-optical techniques are also able to report quantitative assays results. One such example is a magnetic immunoassay (MIA) in the LFT form also allows for getting a quantified result. Reducing variations in the capillary pumping of the sample fluid is another approach to move from qualitative to quantitative results. Recent work has, for example, demonstrated capillary pumping with a constant flow rate independent from the liquid viscosity and surface energy . [ 6 ] [ 21 ] [ 22 ] [ 23 ]
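One way such a reader converts a measured line intensity into a concentration is via a stored calibration curve. The sketch below uses piecewise-linear interpolation over made-up calibration points; real readers typically fit device- and assay-specific curves (often four-parameter logistic), so everything here is an illustrative assumption.

```python
# Mapping line intensity back to concentration via a calibration curve.
# Calibration points are invented for illustration; real readers use
# device-specific fits (often 4-parameter logistic) per test type.
import bisect

CALIB_CONC      = [0.0, 1.0, 5.0, 25.0, 100.0]    # standards, ng/mL
CALIB_INTENSITY = [0.02, 0.10, 0.35, 0.70, 0.95]  # measured line intensity

def intensity_to_concentration(y: float) -> float:
    """Piecewise-linear inverse of the calibration curve (clamped at the ends)."""
    i = bisect.bisect_left(CALIB_INTENSITY, y)
    if i == 0:
        return CALIB_CONC[0]
    if i == len(CALIB_INTENSITY):
        return CALIB_CONC[-1]
    x0, x1 = CALIB_CONC[i - 1], CALIB_CONC[i]
    y0, y1 = CALIB_INTENSITY[i - 1], CALIB_INTENSITY[i]
    return x0 + (x1 - x0) * (y - y0) / (y1 - y0)

print(intensity_to_concentration(0.5))  # ~13.6 ng/mL, between the 5 and 25 standards
```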
Most tests will incorporate a second line which contains a further antibody (one which is not specific to the analyte) that binds some of the remaining colored particles which did not bind to the test line. This confirms that fluid has passed successfully from the sample-application pad, past the test line. [ 2 ] By giving confirmation that the sample has had a chance to interact with the test line, this increases confidence that a visibly-unchanged test line can be interpreted as a negative result (or that a changed test line can be interpreted as a negative result in a competitive assay).
Because the intense red color of hemoglobin interferes with the readout of colorimetric or optical detection-based diagnostic tests, blood plasma separation is a common first step to increase diagnostic test accuracy. Plasma can be extracted from whole blood via integrated filters [ 24 ] or via agglutination. [ 25 ]
Time to obtain the test result is a key driver for these products. Test results can be available in as little as a few minutes. Generally there is a trade-off between time and sensitivity: more sensitive tests may take longer to develop. The other key advantage of this format of test compared to other immunoassays is the simplicity of the test, typically requiring little or no sample or reagent preparation. [ 26 ]
This is a highly competitive area and a number of people claim patents in the field, most notably Alere (formerly Inverness Medical Innovations, now owned by Abbott ) who own patents [ 13 ] originally filed by Unipath . The US 6,485,982 patent, that has been litigated, expired in 2019. A number of other companies also hold patents in this arena. A group of competitors are challenging the validity of the patents. [ 27 ] The original patent is apparently from 1988. [ 28 ] [ 29 ]
Lateral flow assays have a wide array of applications and can test a variety of samples including urine, blood, saliva, sweat, serum, and other fluids. They are currently used by clinical laboratories, hospitals, physicians and veterinary clinics, food analysis labs and environmental testing facilities.
Immediacy in obtaining results is normally the key factor in choosing this technique, although simplicity and the lack of a need for formal equipment are also important factors. These features allow ICTs to be used as at-home tests or in pharmacies. Because of their exceptional quality, rapid tests are also used routinely in well-equipped laboratories when the demand for testing is low.
The broad applications of rapid tests can be realized because of their simplicity, accompanied by high-quality analytical production. The sensitivity and specificity of these techniques tend to be comparable to those of other more complex methods, and on occasion significantly better. [ 30 ]
Other uses for lateral flow assays are food and environmental safety and veterinary medicine, for targets such as pathogens and toxins. [ 2 ] LFTs are also commonly used for disease identification, such as for Ebola , but the most common LFTs are the home pregnancy [ 2 ] and SARS-CoV-2 tests.
Lateral flow assays have played a critical role in COVID-19 testing as they have the benefit of delivering a result in 15–30 minutes. [ 31 ] The systematic evaluation of lateral flow assays during the COVID-19 pandemic [ 32 ] was initiated at Oxford University as part of a UK collaboration with Public Health England . A study that started in June 2020 in the United Kingdom, FALCON-C19, confirmed the sensitivity of some lateral flow devices (LFDs) in this setting. [ 33 ] [ 34 ] [ 35 ] Four out of 64 LFDs tested had desirable performance characteristics according to these early tests; the Innova SARS-CoV-2 Antigen Rapid Qualitative Test performed moderately [ 35 ] in viral antigen detection/sensitivity with excellent specificity, although kit failure rates and the impact of training were potential issues. [ 34 ] The Innova test's specificity is more widely publicised, but its sensitivity in phase 4 trials was 50.1%, [ 36 ] meaning that one out of every two patients infected with COVID-19 and tested in real-world conditions would receive a false-negative result. After the closure of schools in January 2021, biweekly LFTs were introduced in England for teachers, pupils, and households of pupils for asymptomatic testing when schools re-opened on March 8, 2021. [ 37 ] Biweekly LFTs were made universally available to everyone in England on April 9, 2021. [ 38 ] LFTs have been used for mass testing for COVID-19 globally [ 39 ] [ 40 ] [ 41 ] and complement other public health measures for COVID-19. [ 42 ]
Some scientists outside government expressed serious misgivings in late 2020 about the use of Innova LFDs for screening for Covid. According to Jon Deeks , a professor of biostatistics at the University of Birmingham , England, the Innova test is "entirely unsuitable" for community testing: "as the test may miss up to half of cases, a negative test result indicates a reduced risk of Covid, but does not exclude Covid". [ 43 ] [ 44 ]
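The practical meaning of roughly 50% sensitivity can be seen by computing predictive values from sensitivity, specificity and prevalence; in the sketch below the specificity and prevalence figures are assumptions chosen for illustration.

```python
# Predictive values from sensitivity, specificity and prevalence.
# Specificity and prevalence below are illustrative assumptions.
def predictive_values(sens: float, spec: float, prev: float):
    tp = sens * prev                 # true positives (per person tested)
    fn = (1 - sens) * prev           # false negatives: the missed half
    fp = (1 - spec) * (1 - prev)
    tn = spec * (1 - prev)
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

ppv, npv = predictive_values(sens=0.501, spec=0.999, prev=0.01)
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")  # PPV ~83.5%, NPV ~99.5%
# At ~50% sensitivity, half of infected people test negative, so a negative
# result reduces, but does not exclude, the chance of infection.
```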
Sensitivity of tests used in 2022 was around 70%. [ 45 ] | https://en.wikipedia.org/wiki/Lateral_flow_test |
Lateral olfactory tract usher substance (LOTUS), also known as cartilage acidic protein-1B (Crtac1B), is a membrane protein produced by neurons . During embryonic development, it is strongly expressed in the olfactory bulb by mitral cells . [ 1 ]
LOTUS is an endogenous antagonist of the Nogo receptor (NgR1) and Paired Immunoglobulin-Like Receptor B (PirB in mice, LilrB2 in humans). These receptors block neuronal outgrowth when activated. By blocking their function, LOTUS promotes neuronal growth, e.g. during the formation of the lateral olfactory tract . [ 2 ] As LOTUS generates a permissive brain environment for neuronal regeneration, it may aid recovery after spinal cord injury . It also has been shown to reduce synapse loss in a mouse model of Alzheimer's disease . [ 3 ] | https://en.wikipedia.org/wiki/Lateral_olfactory_tract_usher_substance |
The lateral surface of an object is all of the sides of the object, excluding its bases (when they exist).
The lateral surface area is the area of the lateral surface. This is to be distinguished from the total surface area , which is the lateral surface area together with the areas of the base and top.
For a cube the lateral surface area would be the area of the four sides. If the edge of the cube has length a , the area of one square face is A_face = a · a = a². Thus the lateral surface of a cube will be the area of four faces: 4a².
More generally, the lateral surface area of a prism is the sum of the areas of the sides of the prism. [ 1 ] This lateral surface area can be calculated by multiplying the perimeter of the base by the height of the prism. [ 2 ]
For a right circular cylinder of radius r and height h , the lateral area is the area of the side surface of the cylinder: A = 2π rh .
For a pyramid , the lateral surface area is the sum of the areas of all of the triangular faces but excluding the area of the base.
For a cone , the lateral surface area would be π r l , where r is the radius of the circle at the bottom of the cone and l is the slant height (the length of a line segment from the apex of the cone along its side to its base), given by the Pythagorean theorem as l = √( r ² + h ²), where h is the height of the cone.
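The formulas above are straightforward to evaluate; a short sketch follows, with the cone's slant height computed from the Pythagorean relation.

```python
# Worked instances of the lateral-surface-area formulas above.
from math import pi, hypot

def cube_lateral(a):          return 4 * a**2      # four square faces
def prism_lateral(perim, h):  return perim * h     # base perimeter x height
def cylinder_lateral(r, h):   return 2 * pi * r * h
def cone_lateral(r, h):
    l = hypot(r, h)                                # slant height sqrt(r^2 + h^2)
    return pi * r * l

print(cube_lateral(2))                    # 16
print(prism_lateral(12, 3))               # 36
print(round(cylinder_lateral(1, 2), 3))   # 12.566
print(round(cone_lateral(3, 4), 3))       # slant height 5 -> 47.124
```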
| https://en.wikipedia.org/wiki/Lateral_surface |
Lateral thinking is a manner of solving problems using an indirect and creative approach via reasoning that is not immediately obvious. Synonymous with thinking outside the box , it involves ideas that may not be obtainable using only traditional step-by-step logic . [ 1 ] The cutting of the Gordian Knot is a classical example.
The term was first used in 1967 by Maltese psychologist Edward de Bono who used the Judgement of Solomon , the Nine Dots Puzzle , and the sewing machine (automating the work rather than adding more workers) as examples, among many others, of lateral thinking. [ 2 ]
Lateral thinking deliberately distances itself from vertical thinking , the traditional method for problem solving.
De Bono argues lateral thinking entails a switch-over from a familiar pattern to a new, unexpected one. Such insight sometimes takes the form of humour [ 4 ] but can also be cultivated. [ 5 ]
Critics have characterized lateral thinking as a pseudo-scientific concept, arguing de Bono's core ideas have never been rigorously tested or corroborated. [ 6 ] [ 7 ]
Lateral thinking has to be distinguished from critical thinking. [ 8 ] Critical thinking is primarily concerned with judging the true value of statements and seeking errors whereas lateral thinking focuses more on the "movement value" of statements and ideas. A person uses lateral thinking to move from one known idea to new ideas. Edward de Bono defines four types of thinking tools:
The thinker chooses an object at random, or a noun from a dictionary and associates it with the area they are thinking about. De Bono exemplifies this through the randomly chosen word "nose" being applied to an office photocopier, leading to the idea that the copier could produce a lavender smell when it was low on paper. [ 10 ]
A provocation is a statement that we know is wrong or impossible but used to create new ideas. De Bono gives an example of considering river pollution and setting up the provocation, "the factory is downstream of itself", causing a factory to be forced to take its water input from a point downstream of its output, an idea which later became law in some countries. [ 11 ] Provocations can be set up by the use of any of the provocation techniques —wishful thinking, exaggeration , reversal, escape, distortion, or arising. The thinker creates a list of provocations and then uses the most outlandish ones to move their thinking forward to new ideas.
The purpose of movement techniques is to produce as many alternatives as possible in order to encourage new ways of thinking about both problems and solutions. The production of alternatives tends to produce many possible solutions to problems that seemed to only have one possible solution. [ 12 ] One can move from a provocation to a new idea through the following methods: extract a principle, focus on the difference, moment to moment, positive aspects or special circumstances.
A tool which is designed to ask the question, "Why?", in a non-threatening way: why something exists or why it is done the way it is. The result is a very clear understanding of "Why?", which naturally leads to new ideas. The goal is to be able to challenge anything at all, not just those things that are problematic. For example, one could challenge the handles on coffee cups : The reason for the handle seems to be that the cup is often too hot to hold directly; perhaps coffee cups could be made with insulated finger grips, or there could be separate coffee-cup holders similar to beer holders, or coffee should not be so hot in the first place. [ 12 ]
Ideas carry out concepts. This tool systematically expands the range and number of concepts in order to end up with a very broad range of ideas to consider. [ 12 ]
Based on the idea that the majority is always wrong (as suggested by Henrik Ibsen [ 13 ] [ non-primary source needed ] and by John Kenneth Galbraith [ 14 ] ), take anything that is obvious and generally accepted as "goes without saying", question it, take an opposite view, and try to convincingly disprove it. This technique is similar to de Bono's "Black Hat" of Six Thinking Hats , which looks at identifying reasons to be cautious and conservative. [ citation needed ]
The purpose of fractionation is to create alternative perceptions of problems and solutions by taking the commonplace view of the situation and breaking it into multiple alternative situations in order to break away from the fixed view and see the situation from different angles. This allows the generation of multiple possible solutions that can be synthesized into more comprehensive answers. [ 12 ] | https://en.wikipedia.org/wiki/Lateral_thinking |
The term laterality refers to the preference most humans show for one side of their body over the other. Examples include left-handedness/right-handedness and left/right- footedness ; it may also refer to the primary use of the left or right hemisphere in the brain. It may also apply to animals or plants. The majority of tests have been conducted on humans, specifically to determine the effects on language .
Most humans are right-handed . Many are also right-sided in general (that is, they prefer to use their right eye , right foot and right ear if forced to make a choice between the two). The reasons for this are not fully understood, but it is thought that because the left cerebral hemisphere of the brain controls the right side of the body, the right side is generally stronger; it is suggested that the left cerebral hemisphere is dominant over the right in most humans because in 90–92% of all humans, the left hemisphere is the language hemisphere.
Human cultures are predominantly right-handed, and so the right-sided trend may be socially as well as biologically enforced. This is quite apparent from a quick survey of languages . The English word left comes from the Anglo-Saxon word lyft , which means 'weak' or 'useless'. Similarly, the French word for left, gauche , is also used to mean 'awkward' or 'tactless', and sinistra , the Latin word from which the English word sinister was derived, means 'left'. Similarly, in many cultures the word for right also means 'correct'. The English word right comes from the Anglo-Saxon word riht , which also means 'straight' or 'correct'.
This linguistic and social bias is not restricted to European cultures : for example, Chinese characters are designed for right-handers to write, and no significant left-handed culture has ever been found in the world.
When a person is forced to use the hand opposite of the hand that they would naturally use, this is known as forced laterality , or more specifically forced dextrality . A study done by the Department of Neurology at Keele University , North Staffordshire Royal Infirmary suggests that forced dextrality may be part of the reason that the percentage of left-handed people decreases with the higher age groups, both because the effects of pressures toward right-handedness are cumulative over time (hence increasing with age for any given person subjected to them) and because the prevalence of such pressure is decreasing, such that fewer members of younger generations face any such pressure to begin with. [ 1 ]
Ambidexterity is when a person has approximately equal skill with both hands and/or both sides of the body. True ambidexterity is very rare. Although a small number of people can write competently with both hands and use both sides of their body well, even these people usually show preference for one side of their body over the other. However, this preference is not necessarily consistent for all activities. Some people may, for instance, use their right hand for writing , and their left hand for playing racket sports and eating [ 2 ] ( see also: cross-dominance ).
Also, it is not uncommon for people preferring to use the right hand to prefer to use the left leg , e.g. when using a shovel, kicking a ball, or operating control pedals. In many cases, this may be because they are disposed toward left-handedness but have been trained for right-handedness, a pattern usually called " cross-dominance " and sometimes associated with learning and behavioural disorders. [ 3 ] In the sport of cricket , some players may find that they are more comfortable bowling with their left or right hand, but batting with the other hand.
Approximate statistics, compiled in 1981, are given below: [ 4 ]
Laterality of motor and sensory control has recently been the subject of intensive study and review. [ 5 ] It turns out that the hemisphere of speech is the hemisphere of action in general and that the command hemisphere is located either in the right or the left hemisphere (never in both). Around 80% of people are left hemispheric for speech and the remainder are right hemispheric: 90% of right-handers are left hemispheric for speech , but only 50% of left-handers are right hemispheric for speech (the remainder are left hemispheric). The reaction time of the neurally dominant side of the body (the side opposite to the major hemisphere or the command center, as just defined) is shorter than that of the opposite side by an interval equal to the interhemispheric transfer time. Thus, one in five persons has a handedness that is opposite to that for which they are wired (per the laterality of the command center or brainedness, as determined by the reaction time study mentioned above).
Cerebral dominance or specialization has been studied in relation to a variety of human functions. With speech in particular, many studies have been used as evidence that it is generally localized in the left hemisphere . Research comparing the effects of lesions in the two hemispheres, split-brain patients, and perceptual asymmetries has aided in the knowledge of speech lateralization. In one particular study, the left hemisphere's sensitivity to differences in rapidly changing sound cues was noted (Annett, 1991). This has real-world implications, since very fine acoustic discriminations are needed to comprehend and produce speech signals . In an electrical stimulation demonstration performed by Ojemann and Mateer (1979), the exposed cortex was mapped, revealing that the same cortical sites were activated in phoneme discrimination and mouth movement sequences (Annett, 1991).
As suggested by Kimura (1975, 1982), left hemisphere speech lateralization might be based upon a preference for movement sequences, as demonstrated by American Sign Language (ASL) studies. Since ASL requires intricate hand movements for language communication , it was proposed that skilled hand motions and speech both require sequences of action over time. Deaf patients with left hemispheric strokes and damage showed noticeable losses in their ability to sign. These cases were compared to studies of normal speakers with dysphasias from lesioned areas similar to those of the deaf patients. In the same study, deaf patients with right hemispheric lesions did not display any significant loss of signing nor any decreased capacity for motor sequencing (Annett, 1991).
According to one theory, known as the acoustic laterality theory, the physical properties of certain speech sounds are what determine laterality to the left hemisphere. Stop consonants , for example t, p, or k, leave a defined silent period at the end of words that can easily be distinguished. This theory postulates that rapidly changing sounds such as these are preferentially processed by the left hemisphere. Because the right ear transmits sounds to the left hemisphere, it is better able to perceive these rapidly changing sounds. This right-ear advantage in hearing and speech laterality was evidenced in dichotic listening studies . Magnetic imaging results from this study showed greater left hemisphere activation when actual words were presented as opposed to pseudowords . [ 6 ] Two important aspects of speech recognition are phonetic cues , such as formant patterning, and prosody cues, such as intonation , accent , and the emotional state of the speaker (Imaizumi, Koichi, Kiritani, Hosoi & Tonoike, 1998).
In a study done with both monolinguals and bilinguals , which took into account language experience, second language proficiency , and onset of bilingualism among other variables, researchers were able to demonstrate left hemispheric dominance. In addition, bilinguals that began speaking a second language early in life demonstrated bilateral hemispheric involvement. The findings of this study were able to predict differing patterns of cerebral language lateralization in adulthood (Hull & Vaid, 2006).
It has been shown that cerebral lateralization is a widespread phenomenon in the animal kingdom . [ 7 ] Functional and structural differences between left and right brain hemispheres can be found in many other vertebrates and also in invertebrates. [ 8 ]
It has been proposed that negative, withdrawal-associated emotions are processed predominantly by the right hemisphere, whereas the left hemisphere is largely responsible for processing positive, approach-related emotions. This has been called the "laterality- valence hypothesis". [ 9 ]
One sub-set of laterality in animals is limb dominance. Preferential limb use for specific tasks has been shown in species including chimpanzees, mice, bats, wallabies, parrots, chickens and toads. [ 8 ]
Another form of laterality is hemispheric dominance for processing conspecific vocalizations, reported for chimpanzees, sea lions, dogs, zebra finches and Bengalese finches. [ 8 ]
In mice ( Mus musculus ), laterality in paw usage has been shown to be a learned behavior (rather than inherited), [ 10 ] such that, in any population, half of the mice become left-handed while the other half become right-handed. The learning occurs by a gradual reinforcement of randomly occurring weak asymmetries in paw choice early in training, even when training in an unbiased world. [ 11 ] [ 12 ] Meanwhile, reinforcement relies on short-term and long-term memory skills that are strain-dependent, [ 11 ] [ 12 ] causing strains to differ in the degree of laterality of their individuals. Long-term memory of previously gained laterality in handedness due to training is heavily diminished in mice with an absent corpus callosum and reduced hippocampal commissure. [ 13 ] Regardless of the amount of past training and consequent biasing of paw choice, there is a degree of randomness in paw choice that is not removed by training, [ 14 ] which may provide adaptability to changing environments.
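This reinforcement account lends itself to a simple stochastic illustration. The following Python sketch is a hypothetical Pólya-urn-style toy model (not the cited authors' model): each reach slightly reinforces the paw just used, so weak random early asymmetries are amplified and the population splits roughly evenly into left- and right-preferring animals.

```python
import random

def train_mouse(n_reaches=200, reinforcement=1.0):
    # Start unbiased; each reach reinforces the paw that was just used,
    # gradually amplifying whatever weak asymmetry arises by chance.
    left, right = 1.0, 1.0
    for _ in range(n_reaches):
        if random.random() < left / (left + right):
            left += reinforcement
        else:
            right += reinforcement
    return left / (left + right)   # final probability of choosing the left paw

population = [train_mouse() for _ in range(1000)]
lefties = sum(p > 0.5 for p in population)
print(f"left-preferring mice: {lefties}/1000")   # roughly 500 on average
```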
Domestic horses ( Equus caballus ) exhibit laterality in at least two areas of neural organization, i.e. sensory and motor. In thoroughbreds , the strength of motor laterality increases with age. Horses under 4 years old have a preference to initially use the right nostril during olfaction. [ 15 ] Along with olfaction, French horses exhibit eye laterality when looking at novel objects. There is a correlation between their score on an emotional index and eye preference; horses with higher emotionality are more likely to look with their left eye. The less emotive French saddlebreds glance at novel objects using the right eye; however, this tendency is absent in the trotters , although the emotive index is the same for both breeds. [ 16 ] Racehorses exhibit laterality in stride patterns as well. They use their preferred stride pattern at all times whether racing or not, unless they are forced to change it while turning, injured, or fatigued. [ 17 ]
Fearfulness is an undesirable trait in guide dogs, therefore, testing for laterality can be a useful predictor of a successful guide dog. Knowing a guide dog's laterality can also be useful for training because the dog may be better at walking to the left or the right of their blind owner. [ 18 ]
Domestic cats ( Felis catus ) show an individual handedness when reaching for static food. In one study, 46% preferred to use the right paw, 44% the left, and 10% were ambi-lateral; 60% used one paw 100% of the time. There was no difference between male and female cats in the proportions of left and right paw preferences. In moving-target reaching tests, cats have a left-sided behavioural asymmetry. [ 19 ] One study indicates that laterality in this species is strongly related to temperament. Furthermore, individuals with stronger paw preferences are rated as more confident, affectionate, active, and friendly. [ 20 ]
Chimpanzees show right-handedness in certain conditions. This is expressed at the population level for females, but not males. The complexity of the task has a dominant effect on handedness in chimps. [ 21 ]
Cattle use visual/brain lateralisation in their visual scanning of novel and familiar stimuli. [ 22 ] Domestic cattle prefer to view novel stimuli with the left eye, (similar to horses, Australian magpies, chicks, toads and fish) but use the right eye for viewing familiar stimuli. [ 23 ]
Schreibers' long-fingered bat is lateralized at the population level and shows a left-hand bias for climbing or grasping. [ 24 ]
Marsupials are fundamentally different from other mammals in that they lack a corpus callosum . [ 25 ] However, wild kangaroos and other macropod marsupials have a left-hand preference for everyday tasks. Left-handedness is particularly apparent in the red kangaroo ( Macropus rufus ) and the eastern gray kangaroo ( Macropus giganteus ). The red-necked wallaby ( Macropus rufogriseus ) preferentially uses the left hand for behaviours that involve fine manipulation, but the right for behaviours that require more physical strength. There is less evidence for handedness in arboreal species. [ 26 ]
Parrots tend to favor one foot when grasping objects (for example fruit when feeding). Some studies indicate that most parrots are left footed. [ 27 ]
The Australian magpie ( Gymnorhina tibicen ) uses both left-eye and right-eye laterality when performing anti-predator responses, which include mobbing . Prior to withdrawing from a potential predator, Australian magpies view the animal with the left eye (85%), but prior to approaching, the right eye is used (72%). The left eye is used prior to jumping (73%) and prior to circling (65%) the predator, as well as during circling (58%) and for high alert inspection of the predator (72%). The researchers commented that "mobbing and perhaps circling are agonistic responses controlled by the LE[left eye]/right hemisphere, as also seen in other species. Alert inspection involves detailed examination of the predator and likely high levels of fear, known to be right hemisphere function." [ 28 ]
Yellow-legged gull ( Larus michahellis ) chicks show laterality when reverting from a supine to prone posture, and also in pecking at a dummy parental bill to beg for food. Lateralization occurs at both the population and individual level in the reverting response and at the individual level in begging. Females have a leftward preference in the righting response, indicating this is sex dependent. Laterality in the begging response in chicks varies according to laying order and matches variation in egg androgens concentration. [ 29 ]
Laterality determines the organisation of rainbowfish ( Melanotaenia spp.) schools. These fish demonstrate an individual eye preference when examining their reflection in a mirror. Fish which show a right-eye preference in the mirror test prefer to be on the left side of the school. Conversely, fish that show a left-eye preference in the mirror test or were non-lateralised, prefer to be slightly to the right side of the school. The behaviour depends on the species and sex of the school. [ 30 ]
Three species of toads, the common toad ( Bufo bufo ), green toad ( Bufo viridis ) and the cane toad ( Bufo marinus ) show stronger escape and defensive responses when a model predator was placed on the toad's left side compared to their right side. [ 31 ] Emei music frogs ( Babina daunchina ) have a right-ear preference for positive or neutral signals such as a conspecific's advertisement call and white noise, but a left-ear preference for negative signals such as predatory attack. [ 32 ]
The Mediterranean fruit fly ( Ceratitis capitata ) exhibits left-biased population-level lateralisation of aggressive displays (boxing with forelegs and wing strikes) with no sex-differences. [ 33 ] In ants, Temnothorax albipennis (rock ant) scouts show behavioural lateralization when exploring unknown nest sites, showing a population-level bias to prefer left turns. One possible reason for this is that its environment is partly maze-like and consistently turning in one direction is a good way to search and exit mazes without getting lost. [ 34 ] This turning bias is correlated with slight asymmetries in the ants' compound eyes (differential ommatidia count). [ 35 ] | https://en.wikipedia.org/wiki/Laterality |
A latex fixation test , also called a latex agglutination assay or test ( LA assay or test ), is an assay used clinically in the identification and typing of many important microorganisms . These tests use the patient's antigen - antibody immune response. This response occurs when the body detects a pathogen and forms an antibody specific to an identified antigen (a protein configuration) present on the surface of the pathogen. [ citation needed ]
Agglutination tests, specific to a variety of pathogens, can be designed and manufactured for clinicians by coating microbeads of latex with pathogen-specific antigens or antibodies. In performing a test, laboratory clinicians will mix a patient's cerebrospinal fluid , serum or urine with the coated latex particles in serial dilutions with normal saline (important to avoid the prozone effect ) and observe for agglutination (clumping). Agglutination of the beads in any of the dilutions is considered a positive result, confirming either that the patient's body has produced the pathogen-specific antibody (if the test supplied the antigen) or that the specimen contains the pathogen's antigen (if the test supplied the antibody). Instances of cross-reactivity (where the antibody sticks to another antigen besides the antigen of interest) can lead to confusing results.
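As a minimal illustration of reading out such a dilution series, the Python sketch below is hypothetical (the dilution factors and agglutination results are invented): it reports the titer as the reciprocal of the last dilution that still shows agglutination, a common laboratory convention rather than a prescribed protocol.

```python
# Hypothetical two-fold serial dilution of a patient specimen in saline.
dilutions = [2, 4, 8, 16, 32, 64, 128, 256]         # 1:2, 1:4, ..., 1:256
agglutination = [True, True, True, True, True, False, False, False]

# Titer: reciprocal of the highest dilution still showing agglutination.
positive = [d for d, a in zip(dilutions, agglutination) if a]
titer = max(positive) if positive else None
print(f"titer: 1:{titer}" if titer else "negative")  # -> titer: 1:32
```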
Agglutination techniques are used to detect antibodies produced in response to a variety of viruses and bacteria , as well as autoantibodies , which are produced against the self in autoimmune diseases . For example, assays exist for rubella virus , rotavirus , and rheumatoid factor , and an excellent LA test is available for cryptococcus . [ 1 ] Agglutination techniques are also used in definitive diagnosis of group A streptococcal infection . | https://en.wikipedia.org/wiki/Latex_fixation_test |
Lath and plaster is a building process used to finish mainly interior dividing walls and ceilings. It consists of narrow strips of wood ( laths ) which are nailed horizontally across the wall studs or ceiling joists and then coated in plaster . The technique derives from an earlier, more primitive process called wattle and daub . [ 1 ]
Lath and plaster largely fell out of favour in the U.K. after the introduction of plasterboard in the 1930s. [ 2 ] In Canada and the United States , wood lath and plaster remained in use until the process was replaced by transitional methods followed by drywall (the North American term for plasterboard) in the mid-twentieth century. [ citation needed ]
The wall or ceiling finishing process begins with wood or metal laths . These are narrow strips of wood , extruded metal, or split boards, nailed horizontally across the wall studs or ceiling joists . Each wall frame is covered in lath, tacked at the studs. Wood lath is typically about one inch (2.5 cm) wide by four feet (1.2 m) long by 1 ⁄ 4 inch (6 mm) thick. Each horizontal course of lath is spaced about 3 ⁄ 8 inch (9.5 mm) away from its neighboring courses. Metal lath is available in 27-inch (69 cm) by 8-foot (240 cm) sheets.
In Canada and the United States the laths were generally sawn, but in the United Kingdom and its colonies, riven or split hardwood laths of random lengths and sizes were often used. Early American examples featured split beam construction, as did examples put up in rural areas of the U.S. and Canada well into the second half of the 19th century. Splitting the timber along its grain greatly improved the laths' strength and durability. As Americans and Canadians expanded west, saw mills were not always available to create neatly planed boards and the first crop of buildings in any new western or northern settlement would be put up with split beam lath. In some areas of the U.K. reed mat was also used as a lath.
Temporary lath guides are then placed vertically to the wall, usually at the studs. Lime or gypsum plaster is then applied, typically using a wooden board as the application tool. The applier drags the board upward over the wall, forcing the plaster into the gaps between the lath and leaving a layer on the front the depth of the temporary guides, typically about 1 ⁄ 4 inch (6.4 mm). A helper feeds new plaster onto the board, as the plaster is applied in quantity. When the wall is fully covered, the vertical lath "guides" are removed, and their "slots" are filled in, leaving a fairly uniform undercoat.
In three coat plastering it is standard to apply a second layer in the same fashion, leaving about 1 ⁄ 2 inch (13 mm) of rough, sandy plaster (called a brown coat or browning (UK)). A smooth, white finish coat goes on last. After the plaster is completely dry, the walls are ready to be painted. In this article's photo ("lath seen from the back...") the curls of plaster are called keys and are necessary to keep the plaster on the lath. Traditional lime based mortar/plaster often incorporates horsehair which reinforces the plasterwork, thereby helping to prevent the keys from breaking away.
In addition to wood lath, various types of metal lath began to be used toward the end of the 19th century. [ 3 ] Metal lath is categorized according to weight, type of ribbing, and whether the lath is galvanized or not. Metal lathing was spaced at 13.5-inch (340 mm) centers, attached by tie wires using lathers' nippers. Sometimes, the mesh was dimpled to be self-furring .
In use as early as 1900, rock lath (also known as "button board," "plaster board" or "gypsum-board lath"), is a type of gypsum wall board (essentially an early form of drywall) with holes spaced regularly to provide a 'key' for wet plaster. [ 3 ] Rock lath was typically produced in sheets sized 2 by 4 feet (610 by 1,220 mm). The purpose of the four-foot length is so that the sheet of lath exactly spans three interstud voids (overlapping half a stud at each end of a four-stud sequence in standard construction), the studs themselves being spaced 16 inches (410 mm) apart on center (United States building code standard measurements). By the late 1930s, rock lath was the primary method used in residential plastering. [ 3 ]
Lath and plaster methods have mostly been replaced with modern drywall or plasterboard , which is faster and less expensive to install. Drywall possesses poor sound damping qualities and can be easily damaged by moisture. Traditional lime based plasters are resistant to moisture and provide excellent sound isolation.
One continued advantage of using traditional lath is for ornamental or unusual shapes. For instance, building a rounded wall would be difficult with drywall alone, as drywall is not flexible enough to allow tight radii. Wire mesh, often used for exterior stucco , is also found in combination with, or as a replacement for, lath and plaster, serving a similar purpose.
Traditional lath and plaster (including rock and metal lath varieties) has superior sound-proofing qualities when used with lime or gypsum plaster, which is denser than modern drywall. [ 2 ]
In many historic buildings lath and plaster ceilings play a major role for the prevention of fire spread. They are critical to the protection of horizontal elements such as timber joisted floors, including the flooring on top, which in terms of fire performance is often in a poor condition due to the presence of gaps. [ 4 ] | https://en.wikipedia.org/wiki/Lath_and_plaster |
A Latimer diagram of a chemical element is a summary of the standard electrode potential data of that element. This type of diagram is named after Wendell Mitchell Latimer (1893–1955), an American chemist.
In a Latimer diagram, because by convention redox reactions are shown in the direction of reduction (gain of electrons ), the most highly oxidized form of the element is on the left side, with successively lower oxidation states to the right side. The species are connected by arrows, and the numerical value of the standard potential (in volts ) for the reduction is written at each arrow. For example, for oxygen , the species would be in the order O 2 (0), H 2 O 2 (−1), H 2 O (−2):

O 2 → (+0.68 V) → H 2 O 2 → (+1.78 V) → H 2 O
The arrow between O 2 and H 2 O 2 has the value +0.68 V written over it, indicating that the standard electrode potential for the reaction

O 2 + 2 H + + 2 e − → H 2 O 2

is 0.68 volts.
Latimer diagrams can be used in the construction of Frost diagrams , as a concise summary of the standard electrode potentials relative to the element. Since Δ r G ° = − nFE °, the electrode potential is a representation of the Gibbs energy change for the given reduction. The sum of the Gibbs energy changes for subsequent reductions (e.g. from O 2 to H 2 O 2 , then from H 2 O 2 to H 2 O) is the same as the Gibbs energy change for the overall reduction (i.e. from O 2 to H 2 O), in accordance with Hess's law . This can be used to find the electrode potential for non-adjacent species, which gives all the information necessary for the Frost diagram .
It must be stressed that standard reduction potentials are not additive values: they cannot be directly added to, or subtracted from, the values in volts indicated in a Latimer diagram. If needed, the calculation must be performed via the difference in Gibbs free energies. The easiest way to proceed is simply to use energies ( nE ) directly expressed in electron-volts (eV), because the Faraday constant F and the minus sign cancel on both sides of the equation. The values of E in volts must simply be multiplied by the number ( n ) of electrons transferred in the considered half-reaction. Since the Faraday constant drops out of the equation, there is no need to calculate Δ r G ° expressed in joules.
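The electron-weighted bookkeeping described above is easy to carry out. The following Python sketch is illustrative (variable names are invented); it reproduces the well-known overall potential for the oxygen diagram above, and also applies the disproportionation rule discussed in the next paragraph.

```python
# Each step is (electrons transferred n, standard potential E in volts).
steps = [
    (2, 0.68),   # O2   -> H2O2 : 2 e-, +0.68 V
    (2, 1.78),   # H2O2 -> H2O  : 2 e-, +1.78 V
]

# Energies nE (in eV) are additive; the potentials themselves are not.
n_total = sum(n for n, _ in steps)
nE_total = sum(n * E for n, E in steps)
E_overall = nE_total / n_total
print(f"E(O2/H2O) = {E_overall:.2f} V")   # -> 1.23 V

# Disproportionation check: a species is unstable if the potential to
# its right exceeds the potential to its left in the Latimer diagram.
E_left, E_right = 0.68, 1.78              # the potentials around H2O2
print("H2O2 disproportionates:", E_right > E_left)   # -> True
```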
A simple examination of a Latimer diagram can also indicate whether a species will disproportionate in solution under the conditions for which the electrode potentials are given: if the potential to the right of the species is higher than the potential on the left, it will disproportionate. Therefore, hydrogen peroxide H 2 O 2 is unstable and will disproportionate into O 2 and H 2 O . | https://en.wikipedia.org/wiki/Latimer_diagram |
The Latimer–MacDuffee theorem is a theorem in abstract algebra , a branch of mathematics .
It is named after Claiborne Latimer and Cyrus Colton MacDuffee , who published it in 1933. [ 1 ] Significant contributions to its theory were made later by Olga Taussky-Todd . [ 2 ]
Let f {\displaystyle f} be a monic , irreducible polynomial of degree n {\displaystyle n} . The Latimer–MacDuffee theorem gives a one-to-one correspondence between Z {\displaystyle \mathbb {Z} } - similarity classes of n × n {\displaystyle n\times n} matrices with characteristic polynomial f {\displaystyle f} and the ideal classes in the order Z [ x ] / ( f ( x ) ) , {\displaystyle \mathbb {Z} [x]/(f(x)),}
where ideals are considered equivalent if they are equal up to an overall (nonzero) rational scalar multiple. (Note that this order need not be the full ring of integers, so nonzero ideals need not be invertible.) Since an order in a number field has only finitely many ideal classes (even if it is not the maximal order, and we mean here ideals classes for all nonzero ideals, not just the invertible ones), it follows that there are only finitely many conjugacy classes of matrices over the integers with characteristic polynomial f ( x ) {\displaystyle f(x)} .
| https://en.wikipedia.org/wiki/Latimer–MacDuffee_theorem |
The Latin-American Energy Organization (OLADE) is a public international entity that works to manage and protect energy resources in Latin America and the Caribbean. OLADE acts as a mediator among Latin American and Caribbean countries, encouraging cooperation and clear statistics of energy consumption. There are 27 participating Latin American member countries that confer with OLADE. [ 1 ]
For a member state to officially join OLADE, it must meet these criteria:
A member state wishing to leave OLADE may do so at any time, but it retains rights and obligations to OLADE for thirty days, after which it is free to leave.
If a member state wishes to rejoin OLADE, its readmission must be approved by the Meeting of Ministers and it must again follow the regular requirements. [ 2 ]
The Latin-American Energy Organization was established in 1973; its mission is to create an equitable economic relationship between the developing and developed countries of Latin America and the Caribbean.
While providing a platform for Latin American countries to speak about energy, the organization also engages in:
Within the Latin American Energy Organization, there are four governing bodies that manage the Organization.
Meeting of the Ministers:
The highest governing body in OLADE. It consists of a minister from each participating member state, who come together to create general policy. This policy must be in line with the Lima Agreement.
Council of Experts:
This is the advisory group to the Meeting of the Ministers. This governing body provides advice and reviews proposals to support ministerial decision-making across OLADE.
Directive committee (CODI):
The CODI is the checks-and-balances governing body. Its main purpose is to observe and assess OLADE's policies, strategies, and programs.
Permanent Secretariat:
The position in OLADE responsible for implementing programs and policies. [ 3 ]
OLADE has contributed to several Latin American agreements, including:
October 5, 2022:
The Latin-American Energy Organization and the Global Energy Alliance for People and Planet signed an agreement to support a sustainable energy transition in Latin America and the Caribbean. [ 5 ]
September 20, 2022:
The International Atomic Energy Agency and the Latin-American Energy Organization agreed to exchange information on developments and energy tools with the aim of transitioning to cleaner energy. [ 6 ]
The current executive secretary of OLADE is Andrés Rebolledo Smitman, an economist who graduated from the University of Chile and has served as a consultant to the Inter-American Development Bank (IDB). [ 7 ]
| https://en.wikipedia.org/wiki/Latin_Americans_Energy_Organization |
The Latin Library is a website that collects public domain Latin texts. [ 1 ] It is run by William L. Carey, adjunct professor of Latin and Roman Law at George Mason University . [ 2 ] The texts have been drawn from different sources, are not intended for research purposes nor as substitutes for critical editions, and may contain errors. [ 3 ] There are no translations at the site.
| https://en.wikipedia.org/wiki/Latin_Library |
Many letters of the Latin alphabet , both capital and small, are used in mathematics , science , and engineering to denote by convention specific or abstracted constants, variables of a certain type, units, multipliers, or physical entities. Certain letters, when combined with special formatting, take on special meaning.
Below is an alphabetical list of the letters of the alphabet with some of their uses. The field in which the convention applies is mathematics unless otherwise noted.
Some common conventions: | https://en.wikipedia.org/wiki/Latin_letters_used_in_mathematics,_science,_and_engineering |
In combinatorics and in experimental design , a Latin square is an n × n array filled with n different symbols, each occurring exactly once in each row and exactly once in each column. An example of a 3×3 Latin square is {\displaystyle {\begin{bmatrix}A&B&C\\C&A&B\\B&C&A\end{bmatrix}}}
The name "Latin square" was inspired by mathematical papers by Leonhard Euler (1707–1783), who used Latin characters as symbols, [ 2 ] but any set of symbols can be used: in the above example, the alphabetic sequence A, B, C can be replaced by the integer sequence 1, 2, 3. Euler began the general theory of Latin squares.
The Korean mathematician Choi Seok-jeong was the first to publish an example of Latin squares of order nine, in order to construct a magic square in 1700, predating Leonhard Euler by 67 years. [ 3 ]
A Latin square is said to be reduced (also, normalized or in standard form ) if both its first row and its first column are in their natural order. [ 4 ] For example, the Latin square above is not reduced because its first column is A, C, B rather than A, B, C.
Any Latin square can be reduced by permuting (that is, reordering) the rows and columns. Here switching the above matrix's second and third rows yields the following square: {\displaystyle {\begin{bmatrix}A&B&C\\B&C&A\\C&A&B\end{bmatrix}}}
This Latin square is reduced; both its first row and its first column are alphabetically ordered A, B, C.
If each entry of an n × n Latin square is written as a triple ( r , c , s ), where r is the row, c is the column, and s is the symbol, we obtain a set of n 2 triples called the orthogonal array representation of the square. For example, the orthogonal array representation of the Latin square {\displaystyle {\begin{bmatrix}1&2&3\\2&3&1\\3&1&2\end{bmatrix}}} is { (1, 1, 1), (1, 2, 2), (1, 3, 3), (2, 1, 2), (2, 2, 3), (2, 3, 1), (3, 1, 3), (3, 2, 1), (3, 3, 2) },
where for example the triple (2, 3, 1) means that in row 2 and column 3 there is the symbol 1. Orthogonal arrays are usually written in array form where the triples are the rows, such as:

1 1 1
1 2 2
1 3 3
2 1 2
2 2 3
2 3 1
3 1 3
3 2 1
3 3 2
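A minimal Python sketch (illustrative) of building this representation from the square above:

```python
square = [
    [1, 2, 3],
    [2, 3, 1],
    [3, 1, 2],
]

# One (row, column, symbol) triple per cell, using 1-based indices.
triples = [(r + 1, c + 1, square[r][c])
           for r in range(3) for c in range(3)]
print(triples)
# [(1, 1, 1), (1, 2, 2), (1, 3, 3), (2, 1, 2), (2, 2, 3),
#  (2, 3, 1), (3, 1, 3), (3, 2, 1), (3, 3, 2)]
```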
The definition of a Latin square can be written in terms of orthogonal arrays:
This means that the n 2 ordered pairs ( r , c ) are all the pairs ( i , j ) with 1 ≤ i , j ≤ n , once each. The same is true of the ordered pairs ( r , s ) and the ordered pairs ( c , s ).
The orthogonal array representation shows that rows, columns and symbols play rather similar roles, as will be made clear below.
Many operations on a Latin square produce another Latin square (for example, turning it upside down).
If we permute the rows, permute the columns, or permute the names of the symbols of a Latin square, we obtain a new Latin square said to be isotopic to the first. Isotopism is an equivalence relation , so the set of all Latin squares is divided into subsets, called isotopy classes , such that two squares in the same class are isotopic and two squares in different classes are not isotopic.
Another type of operation is easiest to explain using the orthogonal array representation of the Latin square. If we systematically and consistently reorder the three items in each triple (that is, permute the three columns in the array form), another orthogonal array (and, thus, another Latin square) is obtained. For example, we can replace each triple ( r , c , s ) by ( c , r , s ) which corresponds to transposing the square (reflecting about its main diagonal), or we could replace each triple ( r , c , s ) by ( c , s , r ), which is a more complicated operation. Altogether there are 6 possibilities including "do nothing", giving us 6 Latin squares called the conjugates (also parastrophes ) of the original square. [ 5 ]
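This coordinate-permuting operation is straightforward to express in code. The sketch below (illustrative, assuming symbols 1..n) generates all six conjugates of a square by permuting the roles of row, column and symbol in its orthogonal array:

```python
from itertools import permutations

def conjugates(square):
    n = len(square)
    # Orthogonal array with 0-based (row, column, symbol) triples.
    triples = [(r, c, square[r][c] - 1)
               for r in range(n) for c in range(n)]
    result = []
    for perm in permutations(range(3)):        # the 6 role reorderings
        conj = [[0] * n for _ in range(n)]
        for t in triples:
            r, c, s = t[perm[0]], t[perm[1]], t[perm[2]]
            conj[r][c] = s + 1
        result.append(conj)
    return result

for q in conjugates([[1, 2, 3], [2, 3, 1], [3, 1, 2]]):
    print(q)   # includes the square itself and its transpose
```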
Finally, we can combine these two equivalence operations: two Latin squares are said to be paratopic , also main class isotopic , if one of them is isotopic to a conjugate of the other. This is again an equivalence relation, with the equivalence classes called main classes , species , or paratopy classes . [ 5 ] Each main class contains up to six isotopy classes.
There is no known easily computable formula for the number L n of n × n Latin squares with symbols 1, 2, ..., n . The most accurate upper and lower bounds known for large n are far apart. One classic result [ 6 ] is that ∏ k = 1 n ( k ! ) n / k ≥ L n ≥ ( n ! ) 2 n n n 2 . {\displaystyle \prod _{k=1}^{n}\left(k!\right)^{n/k}\geq L_{n}\geq {\frac {\left(n!\right)^{2n}}{n^{n^{2}}}}.}
A simple and explicit formula for the number of Latin squares was published in 1992, but it is still not easily computable due to the exponential increase in the number of terms. This formula for the number L n of n × n Latin squares is L n = n ! ∑ A ∈ B n ( − 1 ) σ 0 ( A ) ( per A n ) , {\displaystyle L_{n}=n!\sum _{A\in B_{n}}^{}(-1)^{\sigma _{0}(A)}{\binom {\operatorname {per} A}{n}},} where B n is the set of all n × n {0, 1}-matrices, σ 0 ( A ) is the number of zero entries in matrix A , and per( A ) is the permanent of matrix A . [ 7 ]
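The formula can be verified directly for very small n, although the 2^(n²) terms make it impractical beyond that. A Python sketch (illustrative; the permanent is computed by naive expansion along the first row):

```python
from itertools import product
from math import comb, factorial

def permanent(A):
    # Naive expansion along the first row; fine for tiny matrices.
    if len(A) == 1:
        return A[0][0]
    return sum(A[0][j] * permanent([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def latin_square_count(n):
    # L_n = n! * sum over all n x n 0/1 matrices A of
    #       (-1)^(number of zeros of A) * C(per(A), n)
    total = 0
    for bits in product((0, 1), repeat=n * n):
        A = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
        total += (-1) ** bits.count(0) * comb(permanent(A), n)
    return factorial(n) * total

print(latin_square_count(2))   # -> 2
print(latin_square_count(3))   # -> 12, matching the table below
```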
The table below contains all known exact values. It can be seen that the numbers grow exceedingly quickly. For each n , the number of Latin squares altogether (sequence A002860 in the OEIS ) is n ! ( n − 1)! times the number of reduced Latin squares (sequence A000315 in the OEIS ).
For each n , each isotopy class (sequence A040082 in the OEIS ) contains up to ( n !) 3 Latin squares (the exact number varies), while each main class (sequence A003090 in the OEIS ) contains either 1, 2, 3 or 6 isotopy classes.
(sequence A003090 in the OEIS )
(sequence A040082 in the OEIS )
(sequence A264603 in the OEIS )
The number of structurally distinct Latin squares (i.e. the squares cannot be made identical by means of rotation, reflection, and permutation of the symbols) for n = 1 up to 7 is 1, 1, 1, 12, 192, 145164, 1524901344 respectively (sequence A264603 in the OEIS ).
We give one example of a Latin square from each main class up to order five.
They present, respectively, the multiplication tables of the following groups:
Two Latin squares of the same order n are called orthogonal if, by overlaying them, one gets every ordered pair ( a , b ) of symbols where a is a symbol in the first square and b is one in the second square. Orthogonal pairs and more generally sets of pairwise orthogonal Latin squares are important in design theory and finite geometry.
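Orthogonality is easy to verify computationally. The sketch below (illustrative) uses the classical construction L_k(i, j) = (k·i + j) mod p, which for a prime p yields a complete set of p − 1 mutually orthogonal Latin squares, and checks that overlaying any two of them produces every ordered pair exactly once:

```python
p = 5   # any prime works for this construction

def cyclic_square(k, p):
    return [[(k * i + j) % p for j in range(p)] for i in range(p)]

def orthogonal(A, B):
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n   # all n^2 ordered pairs must occur

squares = [cyclic_square(k, p) for k in range(1, p)]
print(all(orthogonal(squares[a], squares[b])
          for a in range(len(squares))
          for b in range(a + 1, len(squares))))   # -> True
```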
A transversal in a Latin square is a choice of n cells, where each row contains one cell, each column contains one cell, and there is one cell containing each symbol.
One can consider a Latin square as a complete bipartite graph in which the rows are vertices of one part, the columns are vertices of the other part, each cell is an edge (between its row and its column), and the symbols are colors. The rules of the Latin squares imply that this is a proper edge coloring . With this definition, a Latin transversal is a matching in which each edge has a different color; such a matching is called a rainbow matching .
Therefore, many results on Latin squares/rectangles are contained in papers with the term "rainbow matching" in their title, and vice versa. [ 8 ]
Some Latin squares have no transversal. For example, when n is even, an n -by- n Latin square in which the value of cell i , j is ( i + j ) mod n has no transversal. Here are two examples: [ 1 2 2 1 ] [ 1 2 3 4 2 3 4 1 3 4 1 2 4 1 2 3 ] {\displaystyle {\begin{bmatrix}1&2\\2&1\end{bmatrix}}\quad {\begin{bmatrix}1&2&3&4\\2&3&4&1\\3&4&1&2\\4&1&2&3\end{bmatrix}}} In 1967, H. J. Ryser conjectured that, when n is odd , every n -by- n Latin square has a transversal. [ 9 ]
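A brute-force search makes the definition concrete and confirms that the two squares above have no transversal. A Python sketch (illustrative; feasible only for small n, since it tries all n! column choices):

```python
from itertools import permutations

def has_transversal(square):
    # A transversal picks one cell per row and per column (i.e. a
    # permutation of the columns) covering every symbol exactly once.
    n = len(square)
    for cols in permutations(range(n)):
        if len({square[r][cols[r]] for r in range(n)}) == n:
            return True
    return False

for n in (2, 4):
    sq = [[(i + j) % n + 1 for j in range(n)] for i in range(n)]
    print(n, has_transversal(sq))   # -> 2 False, 4 False
```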
In 1975, S. K. Stein and Brualdi conjectured that, when n is even , every n -by- n Latin square has a partial transversal of size n −1. [ 10 ]
A more general conjecture of Stein is that a transversal of size n −1 exists not only in Latin squares but also in any n -by- n array of n symbols, as long as each symbol appears exactly n times. [ 9 ]
Some weaker versions of these conjectures have been proved:
For small squares it is possible to generate permutations and test whether the Latin square property is met. For larger squares, Jacobson and Matthews' algorithm allows sampling from a uniform distribution over the space of n × n Latin squares. [ 16 ]
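A sketch of this generate-and-test approach (illustrative; only practical for very small n): it builds squares row by row from permutations, rejecting column clashes, and reproduces the counts in the table above.

```python
from itertools import permutations

def latin_squares(n):
    rows = list(permutations(range(1, n + 1)))

    def extend(square):
        if len(square) == n:
            yield [list(r) for r in square]
            return
        for row in rows:
            # Keep the row only if no column repeats an earlier symbol.
            if all(row[c] != prev[c] for prev in square for c in range(n)):
                yield from extend(square + [row])

    yield from extend([])

print(sum(1 for _ in latin_squares(3)))   # -> 12
print(sum(1 for _ in latin_squares(4)))   # -> 576
```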
Sets of Latin squares that are orthogonal to each other have found an application as error correcting codes in situations where communication is disturbed by more types of noise than simple white noise , such as when attempting to transmit broadband Internet over powerlines. [ 19 ] [ 20 ] [ 21 ]
Firstly, the message is sent by using several frequencies, or channels, a common method that makes the signal less vulnerable to noise at any one specific frequency. A letter in the message to be sent is encoded by sending a series of signals at different frequencies at successive time intervals. In the example below, the letters A to L are encoded by sending signals at four different frequencies, in four time slots. The letter C, for instance, is encoded by first sending at frequency 3, then 4, 1 and 2.
The encoding of the twelve letters are formed from three Latin squares that are orthogonal to each other. Now imagine that there's added noise in channels 1 and 2 during the whole transmission. The letter A would then be picked up as: 12 12 123 124 {\displaystyle {\begin{matrix}12&12&123&124\end{matrix}}}
In other words, in the first slot we receive signals from both frequency 1 and frequency 2; while the third slot has signals from frequencies 1, 2 and 3. Because of the noise, we can no longer tell if the first two slots were 1,1 or 1,2 or 2,1 or 2,2. But the 1,2 case is the only one that yields a sequence matching a letter in the above table, the letter A.
Similarly, we may imagine a burst of static over all frequencies in the third slot: 1 2 1234 4 {\displaystyle {\begin{matrix}1&2&1234&4\end{matrix}}}
Again, we are able to infer from the table of encodings that it must have been the letter A being transmitted. The number of errors this code can spot is one less than the number of time slots. It has also been proven that if the number of frequencies is a prime or a power of a prime, the orthogonal Latin squares produce error detecting codes that are as efficient as possible.
The problem of determining if a partially filled square can be completed to form a Latin square is NP-complete . [ 22 ]
The popular Sudoku puzzles are a special case of Latin squares; any solution to a Sudoku puzzle is a Latin square. Sudoku imposes the additional restriction that nine particular 3×3 adjacent subsquares must also contain the digits 1–9 (in the standard version). See also Mathematics of Sudoku .
The more recent KenKen and Strimko puzzles are also examples of Latin squares.
Latin squares have been used as the basis for several board games, notably the popular abstract strategy game Kamisado .
Latin squares are used in the design of agronomic research experiments to minimise experimental errors. [ 23 ]
The Latin square also figures in the arms of the Statistical Society of Canada , [ 24 ] being specifically mentioned in its blazon . Also, it appears in the logo of the International Biometric Society . [ 25 ] | https://en.wikipedia.org/wiki/Latin_square |
Species richness , or biodiversity , increases from the poles to the tropics for a wide variety of terrestrial and marine organisms , often referred to as the latitudinal diversity gradient . [ 1 ] The latitudinal diversity gradient is one of the most widely recognized patterns in ecology . [ 1 ] It has been observed to varying degrees in Earth's past . [ 2 ] A parallel trend has been found with elevation ( elevational diversity gradient ), [ 3 ] though this is less well-studied. [ 4 ]
Explaining the latitudinal diversity gradient has been called one of the great contemporary challenges of biogeography and macroecology (Willig et al. 2003, Pimm and Brown 2004, Cardillo et al. 2005). [ 5 ] The question "What determines patterns of species diversity?" was among the 25 key research themes for the future identified in 125th Anniversary issue of Science (July 2005). There is a lack of consensus among ecologists about the mechanisms underlying the pattern, and many hypotheses have been proposed and debated. A recent review [ 6 ] noted that among the many conundrums associated with the latitudinal diversity gradient (or latitudinal biodiversity gradient) the causal relationship between rates of molecular evolution and speciation has yet to be demonstrated.
Understanding the global distribution of biodiversity is one of the most significant objectives for ecologists and biogeographers. Beyond purely scientific goals and satisfying curiosity, this understanding is essential for applied issues of major concern to humankind, such as the spread of invasive species , the control of diseases and their vectors , and the likely effects of global climate change on the maintenance of biodiversity (Gaston 2000). Tropical areas play prominent roles in the understanding of the distribution of biodiversity, as their rates of habitat degradation and biodiversity loss are exceptionally high. [ 7 ]
The latitudinal diversity gradient is a noticeable pattern among modern organisms that has been described qualitatively and quantitatively. It has been studied at various taxonomic levels , through different time periods and across many geographic regions (Crame 2001). The latitudinal diversity gradient has been observed to varying degrees in Earth's past, possibly due to differences in climate during various phases of Earth's history . Some studies indicate that the gradient was strong, particularly among marine taxa , while other studies of terrestrial taxa indicate it had little effect on the distribution of animals. [ 2 ]
Although many of the hypotheses exploring the latitudinal diversity gradient are closely related and interdependent, most of the major hypotheses can be split into three general hypotheses.
There are five major hypotheses that depend solely on the spatial and areal characteristics of the tropics .
Using computer simulations , Colwell and Hurtt (1994) and Willig and Lyons (1998) first pointed out that if species' latitudinal ranges were randomly shuffled within the geometric constraints of a bounded biogeographical domain (e.g. the continents of the New World , for terrestrial species ), species' ranges would tend to overlap more toward the center of the domain than towards its limits, forcing a mid-domain peak in species richness. [ citation needed ] Colwell and Lees (2000) called this stochastic phenomenon the mid-domain effect (MDE), presented several alternative analytical formulations for one-dimensional MDE (expanded by Connolly 2005), and suggested the hypothesis that MDE might contribute to the latitudinal gradient in species richness, together with other explanatory factors considered here, including climatic and historical ones. [ citation needed ] Because "pure" mid-domain models attempt to exclude any direct environmental or evolutionary influences on species richness, they have been claimed to be null models [ citation needed ] (Colwell et al. 2004, 2005). On this view, if latitudinal gradients of species richness were determined solely by MDE, observed richness patterns at the biogeographic level would not be distinguishable from patterns produced by random placement of observed ranges. [ 8 ] Others object that MDE models so far fail to exclude the role of the environment at the population level and in setting domain boundaries, and therefore cannot be considered null models [ citation needed ] (Hawkins and Diniz-Filho 2002; Hawkins et al. 2005; Zapata et al. 2003, 2005). Mid-domain effects have proven controversial (e.g. Jetz and Rahbek 2001, Koleff and Gaston 2001, Lees and Colwell, 2007, Romdal et al. 2005, Rahbek et al. 2007, Storch et al. 2006; Bokma and Monkkonen 2001, Diniz-Filho et al. 2002, Hawkins and Diniz-Filho 2002, Kerr et al. 2006, Currie and Kerr, 2007) [ citation needed ] . While some studies have found evidence of a potential role for MDE in latitudinal gradients of species richness, particularly for wide-ranging species (e.g. Jetz and Rahbek 2001, Koleff and Gaston 2001, Lees and Colwell, 2007, Romdal et al. 2005, Rahbek et al. 2007, Storch et al. 2006; Dunn et al. 2007) [ 5 ] [ 9 ] others report little correspondence between predicted and observed latitudinal diversity patterns (Bokma and Monkkonen 2001, Currie and Kerr, 2007, Diniz-Filho et al. 2002, Hawkins and Diniz-Filho 2002, Kerr et al. 2006) [ citation needed ] .
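The geometric argument can be illustrated with a short simulation. The Python sketch below is a toy model (not the cited authors' code): it places ranges of random size at random feasible positions in a bounded one-dimensional domain and counts how many overlap each point; richness peaks near the middle even though placement is random.

```python
import random

random.seed(1)
ranges = []
for _ in range(10_000):
    size = random.random()                  # random range length in [0, 1]
    start = random.uniform(0, 1 - size)     # range must fit in the domain
    ranges.append((start, start + size))

for i in range(11):
    x = i / 10
    richness = sum(1 for lo, hi in ranges if lo <= x <= hi)
    print(f"{x:.1f}: {richness}")
# Counts are highest near 0.5 and fall off toward the domain edges.
```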
Another spatial hypothesis is the geographical area hypothesis (Terborgh 1973). It asserts that the tropics are the largest biome and that large tropical areas can support more species. More area in the tropics allows species to have larger ranges and consequently larger population sizes . Thus, species with larger ranges are likely to have lower extinction rates (Rosenzweig 2003). Additionally, species with larger ranges may be more likely to undergo allopatric speciation , which would increase rates of speciation (Rosenzweig 2003). The combination of lower extinction rates and high rates of speciation leads to the high levels of species richness in the tropics.
A critique of the geographical area hypothesis is that even if the tropics are the most extensive of the biomes, successive biomes north of the tropics all have about the same area. Thus, if the geographical area hypothesis is correct, these regions should all have approximately the same species richness, which is not true, as polar regions contain fewer species than temperate regions (Gaston and Blackburn 2000). To explain this, Rosenzweig (1992) suggested that if species with partly tropical distributions were excluded, the richness gradient north of the tropics should disappear. Blackburn and Gaston (1997) tested the effect of removing tropical species on latitudinal patterns in avian species richness in the New World and found there is indeed a relationship between the land area and the species richness of a biome once predominantly tropical species are excluded. Perhaps a more serious flaw in this hypothesis is that some biogeographers suggest that the terrestrial tropics are not, in fact, the largest biome, and thus this hypothesis is not a valid explanation for the latitudinal species diversity gradient (Rohde 1997, Hawkins and Porter 2001). In any event, it would be difficult to defend the tropics as a "biome" rather than the geographically diverse and disjunct regions that they truly include.
The effect of area on biodiversity patterns has been shown to be scale-dependent, having the strongest effect among species with small geographical ranges compared to those species with large ranges who are affected more so by other factors such as the mid-domain and/or temperature . [ 5 ]
The species energy hypothesis suggests that the amount of available energy sets limits to the richness of the system. Thus, increased solar energy (with an abundance of water ) at low latitudes causes increased net primary productivity (or photosynthesis ). This hypothesis proposes that the higher the net primary productivity, the more individuals can be supported, and the more species there will be in an area. Put another way, this hypothesis suggests that extinction rates are reduced towards the equator as a result of the higher populations sustainable by the greater amount of available energy in the tropics. Lower extinction rates lead to more species in the tropics .
One critique of this hypothesis has been that increased species richness over broad spatial scales is not necessarily linked to an increased number of individuals , which in turn is not necessarily related to increased productivity. [ 10 ] Additionally, the observed changes in the number of individuals in an area with latitude or productivity are either too small (or in the wrong direction) to account for the observed changes in species richness. [ 10 ] The potential mechanisms underlying the species-energy hypothesis, their unique predictions and empirical support have been assessed in a major review by Currie et al. (2004). [ 11 ]
The effect of energy has been supported by several studies of terrestrial and marine taxa. [ 7 ]
Another climate-related hypothesis is the climate harshness hypothesis, which states the latitudinal diversity gradient may exist simply because fewer species can physiologically tolerate conditions at higher latitudes than at low latitudes because higher latitudes are often colder and drier than tropical latitudes. Currie et al. (2004) [ 11 ] found fault with this hypothesis by stating that, although it is clear that climatic tolerance can limit species distributions, it appears that species are often absent from areas whose climate they can tolerate .
Similarly to the climate harshness hypothesis, climate stability is suggested to be the reason for the latitudinal diversity gradient. The mechanism for this hypothesis is that while a fluctuating environment may increase the extinction rate or preclude specialization , a constant environment can allow species to specialize on predictable resources, allowing them to have narrower niches and facilitating speciation . The fact that temperate regions are more variable both seasonally and over geological timescales (discussed in more detail below) suggests that temperate regions are thus expected to have less species diversity than the tropics.
Critiques for this hypothesis include the fact that there are many exceptions to the assumption that climate stability means higher species diversity. For example, low species diversity is known to occur often in stable environments such as tropical mountaintops . Additionally, many habitats with high species diversity do experience seasonal climates, including many tropical regions that have highly seasonal rainfall (Brown and Lomolino 1998).
There are four main hypotheses that are related to historical and evolutionary explanations for the increase of species diversity towards the equator.
The historical perturbation hypothesis proposes the low species richness of higher latitudes is a consequence of an insufficient time period available for species to colonize or recolonize areas because of historical perturbations such as glaciation (Brown and Lomolino 1998, Gaston and Blackburn 2000). This hypothesis suggests that diversity in the temperate regions has not yet reached equilibrium and that the number of species in temperate areas will continue to increase until saturated (Clarke and Crame 2003). However, in the marine environment , where there is also a latitudinal diversity gradient, there is no evidence of a latitudinal gradient in perturbation.
The evolutionary speed hypothesis [ 12 ] argues higher evolutionary rates due to shorter generation times in the tropics have caused higher speciation rates and thus increased diversity at low latitudes. [ 13 ] Higher evolutionary rates in the tropics have been attributed to higher ambient temperatures , higher mutation rates , shorter generation time and/or faster physiological processes , [ 14 ] [ 13 ] and increased selection pressure from other species that are themselves evolving. [ 15 ] Faster rates of microevolution in warm climates (i.e. low latitudes and altitudes) have been shown for plants , [ 16 ] mammals , [ 17 ] birds , [ 18 ] fish [ 19 ] and amphibians . [ 20 ] Bumblebee species inhabiting lower, warmer elevations have faster rates of both nuclear and mitochondrial genome -wide evolution . [ 21 ] Based on the expectation that faster rates of microevolution result in faster rates of speciation, these results suggest that faster evolutionary rates in warm climates almost certainly have a strong influence on the latitudinal diversity gradient. However, recent evidence from marine fish [ 22 ] and flowering plants [ 23 ] have shown that rates of speciation actually decrease from the poles towards the equator at a global scale. Understanding whether extinction rate varies with latitude will also be important to whether or not this hypothesis is supported. [ 24 ]
The hypothesis of effective evolutionary time assumes that diversity is determined by the evolutionary time under which ecosystems have existed under relatively unchanged conditions, and by evolutionary speed directly determined by the effect of temperature on mutation rates , generation times , and speed of selection . [ 13 ] It differs from most other hypotheses in not postulating an upper limit to species richness set by various abiotic and biotic factors , i.e., it is a nonequilibrium hypothesis assuming a largely non-saturated niche space. It does accept that many other factors may play a role in causing latitudinal gradients in species richness as well. The hypothesis is supported by much recent evidence, in particular, the studies of Allen et al. [ 14 ] and Wright et al. [ 25 ]
The integrated evolutionary speed hypothesis argues that species diversity increases due to faster rates of genetic evolution and speciation at lower latitudes where ecosystem productivity is generally greater. [ 26 ] It differs from the effective evolutionary time hypothesis by recognizing that species richness generally increases with increasing ecosystem productivity [ 27 ] [ 28 ] [ 29 ] and declines where high environmental energy (temperature) causes water deficits. [ 30 ] It also proposes that evolutionary rate increases with population size, abiotic environmental heterogeneity, environmental change and via positive feedback with biotic heterogeneity. There is considerable support for faster rates of genetic evolution in warmer environments, [ 26 ] some support for a slower rate among plant species where water availability is limited [ 31 ] and for a slower rate among bird species with small population sizes. [ 32 ] Many aspects of the hypothesis, however, remain untested.
Biotic hypotheses claim ecological species interactions such as competition , predation , mutualism , and parasitism are stronger in the tropics and these interactions promote species coexistence and specialization of species, leading to greater speciation in the tropics. These hypotheses are problematic because they cannot be the ultimate cause of the latitudinal diversity gradient as they fail to explain why species interactions might be stronger in the tropics.
An example of one such hypothesis is that the greater intensity of predation and more specialized predators in the tropics has contributed to the increase of diversity in the tropics (Pianka 1966). This intense predation could reduce the importance of competition (see competitive exclusion) and permit greater niche overlap and promote higher richness of prey. Some recent large-scale experiments suggest predation may indeed be more intense in the tropics, [ 33 ] [ 34 ] although this cannot be the ultimate cause of high tropical diversity because it fails to explain what gives rise to the richness of the predators in the tropics. Interestingly, the largest test of whether biotic interactions are strongest in the tropics, which focused on predation exerted by large fish predators in the world's open oceans, found predation to peak at mid-latitudes. Moreover, this test further revealed a negative association of predation intensity and species richness, thus contrasting the idea that strong predation near the equator drives or maintains high diversity. [ 35 ] Other studies have failed to observe consistent changes in ecological interactions with latitude altogether (Lambers et al. 2002), [ 1 ] suggesting that the intensity of species interactions is not correlated with the change in species richness with latitude. Overall, these results highlight the need for more studies on the importance of species interactions in driving global patterns of diversity.
There are many other hypotheses related to the latitudinal diversity gradient, but the above hypotheses are a good overview of the major ones still cited today.
Many of these hypotheses are similar to and dependent on one another. For example, the evolutionary hypotheses are closely dependent on the historical climate characteristics of the tropics.
An extensive meta-analysis of nearly 600 latitudinal gradients from published literature tested the generality of the latitudinal diversity gradient across different organismal, habitat and regional characteristics. [ 1 ] The results showed that the latitudinal gradient occurs in marine, terrestrial, and freshwater ecosystems, in both hemispheres . The gradient is steeper and more pronounced in richer taxa (i.e. taxa with more species), larger organisms, in marine and terrestrial versus freshwater ecosystems, and at regional versus local scales. The gradient steepness (the amount of change in species richness with latitude) is not influenced by dispersal, animal physiology (homeothermic or ectothermic) trophic level , hemisphere, or the latitudinal range of study. The study could not directly falsify or support any of the above hypotheses, however, results do suggest a combination of energy/climate and area processes likely contribute to the latitudinal species gradient. Notable exceptions to the trend include the ichneumonidae , shorebirds, penguins, and freshwater zooplankton . Also, in terrestrial ecosystems the soil bacterial diversity peaks in temperate climatic zones, [ 36 ] [ 37 ] and has been linked to carbon inputs and the microscale distribution of aqueous habitats. [ 38 ]
One of the main assumptions about latitudinal diversity gradients and patterns in species richness is that the underlying data (i.e., the lists of species at specific locations) are complete. However, this assumption is not met in most cases. For instance, diversity patterns for blood parasites of birds suggest higher diversity in tropical regions; however, the data may be skewed by undersampling in species-rich faunal areas such as Southeast Asia and South America. [ 39 ] For marine fishes, which are among the most studied taxonomic groups, current lists of species are considerably incomplete for most of the world's oceans. At a 3° (about 350 km 2 ) spatial resolution, less than 1.8% of the world's oceans have above 80% of their fish fauna currently described. [ 40 ]
The fundamental macroecological question that the latitudinal diversity gradient depends on is "What causes patterns in species richness?". Species richness ultimately depends on whatever proximate factors are found to affect processes of speciation, extinction, immigration, and emigration. While some ecologists continue to search for the ultimate primary mechanism that causes the latitudinal richness gradient, many ecologists suggest instead this ecological pattern is likely to be generated by several contributory mechanisms (Gaston and Blackburn 2000, Willig et al. 2003, Rahbek et al. 2007). For now, the debate over the cause of the latitudinal diversity gradient will continue until a groundbreaking study provides conclusive evidence, or there is general consensus that multiple factors contribute to the pattern. | https://en.wikipedia.org/wiki/Latitudinal_gradients_in_species_diversity |
Engraved metal plates are significant in the Latter Day Saint movement because in 1827, the founder, Joseph Smith , claimed to have obtained a set of engraved golden plates he had found four years earlier after being directed there by an angel . [ 1 ] He claimed to have translated the engravings on the plates by divine power [ 2 ] into English as the Book of Mormon , a religious text of that religious tradition.
Latter Day Saints believe that other engraved metal plates exist, many of which are mentioned in the Book of Mormon . In addition, Mormon apologists argue that the golden plates are part of a long tradition of writing on engraved metal plates in the Middle East.
The golden plates are a set of bound and engraved metal plates that Latter Day Saint denominations believe are the source of Joseph Smith 's English translation of the Book of Mormon . Although several witnesses said they saw the plates, Smith said that he returned them to an angel after the translation was completed. Most Latter Day Saints assume their authenticity as a matter of faith.
Smith said he discovered the plates on September 22, 1823, on Cumorah hill , Manchester, New York , where he said they had been hidden in a buried box and protected for centuries by the angel Moroni , a resurrected ancient American prophet-historian, who had been last to write on them. Smith claimed that the angel required him to obey certain commandments prior to receiving them and that his inability to obey prevented him from obtaining the plates until four years later, on September 22, 1827. [ 3 ]
During this period, Smith also began dictating written commandments in the voice of God, including a commandment to form a new church and to choose eleven men who would join Smith as witnesses of the plates. These witnesses later declared, in two separate written statements attached to the 1830 published Book of Mormon, that they had seen the plates. [ 4 ]
The Book of Mormon is accepted by adherents of the Latter Day Saint movement as a sacred text.
A variety of secular theories have been proposed for the origins of the Book of Mormon's plates. These range from theories based on environmental influences, to psychological theories, to a prank that grew into the Mormon faith .
The most recent scholarship, by Sonia Hazard, argues that the plates were inspired by printing plates or something similar. Joseph Smith , in this theory, would have encountered "plates" or similar objects, possibly even on the Hill Cumorah , and believed them to be ancient artifacts. Given the presence of witnesses who attested to physical encounters with the plates, Hazard argues that physical objects seem the most likely stimulus for the Book of Mormon and, therefore, for its plates narrative. [ 5 ]
In a similar vein, Ann Taves argues that the belief of Joseph Smith and others in the plates contributed to them perceiving a physical object. While, in Taves's view, the plates were not a material reality, they seemed to be so for the faith's eyewitnesses. [ 6 ]
Peter Ingersoll, a contemporary of Smith, was quoted by Eber D. Howe as saying that the brass plates were in fact a bag of sand. Ingersoll then relates the story of Smith deceiving his family, the Three Witnesses , and the Eight Witnesses with said bag of sand. Ingersoll indicates that this was a joke that spiraled into the Mormon movement . [ 7 ]
In addition to the golden plates, the Book of Mormon refers to several other sets of books written on metal plates:
In 1843, Smith acquired a set of six small bell-shaped plates, known as the Kinderhook Plates , found in Kinderhook , Pike County, Illinois . The plates were manufactured and buried by three men who lived in Kinderhook, and who had intended the plates as a prank against the LDS community. Although Smith did not translate the plates, William Clayton , his secretary, wrote that Smith said they contained "the history of the person with whom they were found and he was a descendant of Ham through the loins of Pharaoh king of Egypt." As Richard Bushman has written:
"Joseph may not have detected the fraud, but he did not swing into a full-fledged translation as he had with the Egyptian scrolls. The trap did not quite spring shut, which foiled the conspirators original plan." [ 8 ]
After Smith's death, the Kinderhook plates were presumed lost, and for decades the Church of Jesus Christ of Latter-day Saints (LDS Church) published facsimiles of them in its official History of the Church . In 1980, the Kinderhook Plates were tested at Brigham Young University and determined to have been manufactured during the nineteenth century. Today, the LDS Church acknowledges that the Kinderhook plates were a hoax. [ 9 ] [ 10 ]
James J. Strang , one of many rival claimants to succeed Smith in the 1844 succession crisis , said that he had discovered and translated a set of plates known as the Voree Plates or "Voree Record." Like Smith, Strang produced witnesses to testify to his plates' authenticity. [ 11 ] Although Strang's attempt to supplant Brigham Young as Smith's successor proved abortive, Smith's mother, Lucy Mack Smith , [ 12 ] and for a time all living witnesses to the Book of Mormon , including the three Whitmers and Martin Harris (although perhaps excluding Oliver Cowdery ), accepted "Strang's leadership, angelic call, metal plates, and his translation of these plates as authentic." [ 13 ] Strang also claimed to have discovered and translated the Plates of Laban spoken of in the Book of Mormon. As with the Voree Plates, Strang produced witnesses who authenticated them. Strang's purported translation of these plates was published in 1850 as the Book of the Law of the Lord , which, together with the Voree Record, is accepted as Scripture by members of Strang's diminutive church, the Church of Jesus Christ of Latter Day Saints (Strangite) . [ 14 ]
Mormon apologist William J. Hamblin argued that the golden plates are part of a long tradition of writing on engraved metal plates in the ancient Mediterranean. [ 15 ] There are many Hebrew-specific examples of writings on metal plates, including a reference in Exodus 28:36 of the Bible to the high priest wearing an engraved gold plate, excavated silver plates containing Numbers 6:24-26 of the Bible dating to the seventh century BC, a treaty with the Romans engraved on bronze, a list of hidden temple treasures on the Copper Scroll from Qumran , and a third century AD ritual text referencing writings on metal plates or amulets numerous times. [ 15 ] In addition, there are numerous other Semitic examples of writings on metal plates, including three foundation plates of copper, silver, and gold dating to the 24th century BC and earlier, Byblos syllabic inscriptions on copper plates from the 18th century BC, the Kilamuwa gold plates (830-825 BC) containing a short prayer, Sargon II writings on six metal plates of bronze, lead, silver, and gold from Khorsabad (714-705 BC) about temple building, and the Pyrgi gold plate from Italy (500-475 BC) of a religious dedication. [ 15 ] Further evidence of this tradition comes from the stone boxes containing the large gold and silver plates of the Apadana hoard (515 BCE), excavated in 1933. Furthermore, the Mandaeans of Iran are reported to maintain their entire Book of John in a metal book made entirely of lead plates. [ 16 ]
Nevertheless, there is no known extant example of writing on metal plates from the ancient Mediterranean longer than the eight-page Persian codex, and none from any ancient civilization in the Western Hemisphere. [ 17 ] | https://en.wikipedia.org/wiki/Latter_Day_Saint_movement_and_engraved_metal_plates |
The Lattice Boltzmann methods for solids (LBMS) are a set of methods for solving partial differential equations (PDEs) in solid mechanics. The methods use a lattice discretization of the Boltzmann equation , extending the lattice Boltzmann method (LBM) framework from fluids to solids.
LBMS methods are categorized by their reliance on:
The LBMS subset remains highly challenging from a computational as well as a theoretical point of view. Solving solid equations within the LBM framework is still a very active area of research. Success in solving solids would show that the Boltzmann equation is capable of describing solid motion as well as fluids and gases, unlocking complex physics such as fluid-structure interaction (FSI) in biomechanics.
The first attempt at LBMS [ 1 ] used a Boltzmann-like equation for force (vectorial) distributions. The approach requires more computational memory, but results have been obtained in fracture and solid cracking.
Another approach consists in using LBM as an acoustic solver to capture wave propagation in solids. [ 2 ] [ 4 ] [ 5 ] [ 6 ]
This idea consists of introducing a modified version of the forcing term [ 7 ] (or equilibrium distribution [ 8 ] ) into the LBM as a stress divergence force. This force is considered space-time dependent and contains the solid properties, [ Note 1 ] taking a form such as

F → = ∇ ⋅ σ ¯ ¯ + ρ g → {\displaystyle {\vec {F}}=\nabla \cdot {\overline {\overline {\sigma }}}+\rho {\vec {g}}}
where σ ¯ ¯ {\displaystyle {\overline {\overline {\sigma }}}} denotes the Cauchy stress tensor . g → {\displaystyle {\vec {g}}} and ρ {\displaystyle \rho } are respectively the gravity vector and solid matter density.
The stress tensor is usually computed across the lattice using finite difference schemes .
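For illustration, the stress divergence entering such a forcing term can be approximated with central differences. The following is a minimal Python sketch; the array layout sigma[i, j, a, b] for a 2D lattice with unit spacing is an assumption for illustration, not a prescription from the literature.

import numpy as np

def stress_divergence(sigma):
    """Central-difference divergence of a 2D stress field.

    sigma has shape (nx, ny, 2, 2): sigma[i, j, a, b] is the (a, b)
    component of the Cauchy stress at lattice node (i, j).
    Returns F of shape (nx, ny, 2) with F_a = d(sigma_a0)/dx + d(sigma_a1)/dy.
    """
    f_x = np.gradient(sigma[:, :, 0, 0], axis=0) + np.gradient(sigma[:, :, 0, 1], axis=1)
    f_y = np.gradient(sigma[:, :, 1, 0], axis=0) + np.gradient(sigma[:, :, 1, 1], axis=1)
    return np.stack([f_x, f_y], axis=-1)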
Force tuning [ 3 ] has recently proven its efficiency, with a maximum error of 5% in comparison with standard finite element solvers in mechanics. Accurate validation of results can also be a tedious task, since these methods are very different; common issues are:
Lattice confinement fusion ( LCF ) is a type of nuclear fusion in which deuteron -saturated metals are exposed to gamma radiation or ion beams, avoiding the confined high-temperature plasmas used in other methods of fusion. [ 1 ] [ 2 ]
In 2020, a team of NASA researchers seeking a new energy source for deep-space exploration missions published the first paper describing a method for triggering nuclear fusion in the space between the atoms of a metal solid, an example of screened fusion. [ 3 ] The experiments did not produce self-sustaining reactions, and the electron source itself was energetically expensive. [ 1 ]
The reaction is fueled with deuterium , a widely available non-radioactive hydrogen isotope composed of one proton , one neutron , and one electron . The deuterium is confined in the space between the atoms of a metal solid such as erbium or titanium . Erbium can indefinitely maintain 10 23 cm −3 deuterium atoms (deuterons) at room temperature. The deuteron-saturated metal forms an overall neutral plasma . [ dubious – discuss ] The electron density of the metal reduces the likelihood that two deuterium nuclei will repel each other as they get closer together. [ 1 ]
A dynamitron electron-beam accelerator generates an electron beam that hits a tantalum target and produces gamma rays , irradiating titanium deuteride or erbium deuteride. A gamma ray of about 2.2 megaelectron volts (MeV) strikes a deuteron and splits it into a proton and a neutron. The neutron collides with another deuteron. This second, energetic deuteron can experience screened fusion or a stripping reaction. [ 1 ]
Although the lattice is notionally at room temperature, LCF creates an energetic environment inside the lattice where individual atoms achieve fusion-level energies. [ 3 ] Heated regions are created at the micrometer scale.
The energetic deuteron fuses with another deuteron, yielding either a helium-3 nucleus and a neutron or a hydrogen-3 (tritium) nucleus and a proton. These fusion products may fuse with other deuterons, creating an alpha particle, or with another helium-3 or hydrogen-3 nucleus. Each reaction releases energy, continuing the process. [ 1 ]
In a stripping reaction, the metal strips a neutron from the accelerated deuteron and fuses it with the metal, yielding a different isotope of the metal. [ 1 ] If the produced metal isotope is radioactive, it may decay into another element, releasing energy in the form of ionizing radiation in the process.
A related technique pumps deuterium gas through the wall of a palladium -silver alloy tubing. The palladium is electrolytically loaded with deuterium. In some experiments this produces fast neutrons that trigger further reactions. [ 1 ] Other experimenters (Fralick et al.) also made claims of anomalous heat produced by this system.
Pyroelectric fusion has previously been observed in erbium hydrides. A high-energy beam of deuterium ions generated by pyroelectric crystals was directed at a stationary, room-temperature ErD 2 or ErT 2 target, and fusion was observed. [ 2 ]
In previous fusion research, such as inertial confinement fusion (ICF), fuel such as the rarer tritium is subjected to high pressure for a nano-second interval, triggering fusion. In magnetic confinement fusion (MCF), the fuel is heated in a plasma to temperatures much higher than those at the center of the Sun. In LCF, conditions sufficient for fusion are created in a metal lattice that is held at ambient temperature during exposure to high-energy photons . [ 3 ] ICF devices momentarily reach densities of 10 26 cc −1 , while MCF devices momentarily achieve 10 14 .
Lattice confinement fusion requires energetic deuterons and is therefore not cold fusion . [ 1 ] | https://en.wikipedia.org/wiki/Lattice_confinement_fusion |
Lattice density functional theory ( LDFT ) is a statistical theory used in physics and thermodynamics to model a variety of physical phenomena with simple lattice equations.
Lattice models with nearest-neighbor interactions have been used extensively to model a wide variety of systems and phenomena, including the lattice gas, binary liquid solutions, order-disorder phase transitions , ferromagnetism , and antiferromagnetism . [ 1 ] Most calculations of correlation functions for nonrandom configurations are based on statistical mechanical techniques, which lead to equations that usually need to be solved numerically.
In 1925, Ising [ 2 ] gave an exact solution to the one-dimensional (1D) lattice problem. In 1944 Onsager [ 3 ] was able to get an exact solution to a two-dimensional (2D) lattice problem at the critical density. However, to date, no three-dimensional (3D) problem has had a solution that is both complete and exact. [ 4 ] Over the last ten years, Aranovich and Donohue have developed lattice density functional theory (LDFT) based on a generalization of the Ono-Kondo equations to three-dimensions, and used the theory to model a variety of physical phenomena.
The theory starts by constructing an expression for the free energy , A = U − TS, where the internal energy U and the entropy S can be calculated using a mean field approximation. The grand potential is then constructed as Ω = A − μΦ, where μ is a Lagrange multiplier which equals the chemical potential , and Φ is a constraint given by the lattice.
It is then possible to minimize the grand potential with respect to the local density, which results in a mean-field expression for local chemical potential. The theory is completed by specifying the chemical potential for a second (possibly bulk) phase. In an equilibrium process, μ I =μ II .
Lattice density functional theory has several advantages over more complicated free volume techniques such as Perturbation theory and the statistical associating fluid theory , including mathematical simplicity and ease of incorporating complex boundary conditions . Although this approach is known to give only qualitative information about the thermodynamic behavior of a system, it provides important insights about the mechanisms of various complex phenomena such as phase transition , [ 5 ] [ 6 ] [ 7 ] aggregation , [ 8 ] configurational distribution, [ 9 ] surface-adsorption, [ 10 ] [ 11 ] self-assembly , crystallization , as well as steady state diffusion . | https://en.wikipedia.org/wiki/Lattice_density_functional_theory |
In chemistry , the lattice energy is the energy change (released) upon formation of one mole of a crystalline compound from its infinitely separated constituents, which are assumed to initially be in the gaseous state at 0 K. It is a measure of the cohesive forces that bind crystalline solids. The size of the lattice energy is connected to many other physical properties including solubility , hardness , and volatility . Since it generally cannot be measured directly, the lattice energy is usually deduced from experimental data via the Born–Haber cycle . [ 1 ]
The concept of lattice energy was originally applied to the formation of compounds with structures like rocksalt ( NaCl ) and sphalerite ( ZnS ) where the ions occupy high-symmetry crystal lattice sites. In the case of NaCl, lattice energy is the energy change of the reaction:
Na + ( g ) + Cl − ( g ) ⟶ NaCl ( s ) {\displaystyle {\ce {Na^+ (g) + Cl^- (g) -> NaCl (s)}}}
which amounts to −786 kJ/mol. [ 2 ]
Some chemistry textbooks [ 3 ] as well as the widely used CRC Handbook of Chemistry and Physics [ 4 ] define lattice energy with the opposite sign, i.e. as the energy required to convert the crystal into infinitely separated gaseous ions in vacuum , an endothermic process. Following this convention, the lattice energy of NaCl would be +786 kJ/mol. Both sign conventions are widely used.
The relationship between the lattice energy Δ U l {\displaystyle \Delta U_{l}} and the lattice enthalpy Δ H l {\displaystyle \Delta H_{l}} at pressure P {\displaystyle P} is given by the following equation:

Δ U l = Δ H l − P Δ V m {\displaystyle \Delta U_{l}=\Delta H_{l}-P\Delta V_{m}}
where Δ U l {\displaystyle \Delta U_{l}} is the lattice energy (i.e., the molar internal energy change), Δ H l {\displaystyle \Delta H_{l}} is the lattice enthalpy, and Δ V m {\displaystyle \Delta V_{m}} the change of molar volume due to the formation of the lattice. Since the molar volume of the solid is much smaller than that of the gases, Δ V m < 0 {\displaystyle \Delta V_{m}<0} . The formation of a crystal lattice from ions in vacuum must lower the internal energy due to the net attractive forces involved, and so Δ U l < 0 {\displaystyle \Delta U_{l}<0} . The − P Δ V m {\displaystyle -P\Delta V_{m}} term is positive but is relatively small at low pressures, and so the value of the lattice enthalpy is also negative (and exothermic ). Lattice energy and lattice enthalpy are identical at 0 K, and the difference may be disregarded in practice at normal temperatures. [ 5 ]
The lattice energy of an ionic compound depends strongly upon the charges of the ions that comprise the solid, which must attract or repel one another via Coulomb's law . More subtly, the relative and absolute sizes of the ions influence Δ H l {\displaystyle \Delta H_{l}} . London dispersion forces also exist between ions and contribute to the lattice energy via polarization effects. For ionic compounds made up of molecular cations and/or anions, there may also be ion-dipole and dipole-dipole interactions if either molecule has a molecular dipole moment . The theoretical treatments described below are focused on compounds made of atomic cations and anions, and neglect contributions to the internal energy of the lattice from thermalized lattice vibrations .
In 1918, [ 6 ] Max Born and Alfred Landé proposed that the lattice energy could be derived from the electric potential of the ionic lattice and a repulsive potential energy term: [ 2 ]

Δ U l = − N A M z + z − e 2 4 π ε 0 r 0 ( 1 − 1 n ) {\displaystyle \Delta U_{l}=-{\frac {N_{A}Mz^{+}z^{-}e^{2}}{4\pi \varepsilon _{0}r_{0}}}\left(1-{\frac {1}{n}}\right)}

This equation estimates the lattice energy from electrostatic interactions plus a repulsive term characterized by a power-law dependence (through the Born exponent n {\displaystyle n} ). It was published building on earlier work by Born on ionic lattices.
where N A {\displaystyle N_{A}} is the Avogadro constant , M {\displaystyle M} is the Madelung constant , z + {\displaystyle z^{+}} / z − {\displaystyle z^{-}} are the charge numbers of the cations and anions, e {\displaystyle e} is the elementary charge (1.6022 × 10 −19 C ), ε 0 {\displaystyle \varepsilon _{0}} is the permittivity of free space ( 4 π ε 0 {\displaystyle 4\pi \varepsilon _{0}} = 1.112 × 10 −10 C 2 /(J·m)), r 0 {\displaystyle r_{0}} is the distance to the closest ion (nearest neighbour) and n {\displaystyle n} is the Born exponent (a number between 5 and 12, determined experimentally by measuring the compressibility of the solid, or derived theoretically). [ 7 ]
The Born–Landé equation above shows that the lattice energy of a compound depends principally on two factors:
Barium oxide (BaO), for instance, which has the NaCl structure and therefore the same Madelung constant, has a bond radius of 275 picometers and a lattice energy of −3054 kJ/mol, while sodium chloride (NaCl) has a bond radius of 283 picometers and a lattice energy of −786 kJ/mol. The bond radii are similar but the charge numbers are not, with BaO having charge numbers of (+2,−2) and NaCl having (+1,−1); the Born–Landé equation predicts that the difference in charge numbers is the principal reason for the large difference in lattice energies.
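This comparison can be reproduced numerically. The following is a minimal Python sketch of the Born–Landé estimate; the function name and the Born exponents (n = 8 for NaCl, n = 9.5 for BaO) are illustrative assumptions, not authoritative values.

N_A = 6.02214076e23          # Avogadro constant, mol^-1
E_CHARGE = 1.6022e-19        # elementary charge, C
FOUR_PI_EPS0 = 1.112e-10     # 4*pi*eps0, C^2 J^-1 m^-1

def born_lande(M, z_plus, z_minus, r0, n):
    """Born-Lande lattice energy in J/mol; negative means energy is released."""
    coulomb = N_A * M * z_plus * abs(z_minus) * E_CHARGE**2 / (FOUR_PI_EPS0 * r0)
    return -coulomb * (1.0 - 1.0 / n)

# NaCl: M = 1.748, charges (+1, -1), r0 = 283 pm, assumed n = 8
print(born_lande(1.748, 1, -1, 283e-12, 8) / 1000)    # ~ -751 kJ/mol (tabulated: -786)
# BaO: same structure, so same M; charges (+2, -2), r0 = 275 pm, assumed n = 9.5
print(born_lande(1.748, 2, -2, 275e-12, 9.5) / 1000)  # ~ -3160 kJ/mol (tabulated: -3054)

Both estimates land within about 5% of the tabulated values, and the roughly fourfold gap between the two compounds reflects the product of the charge numbers, as stated above.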
In 1932, [ 8 ] Born and Joseph E. Mayer refined the Born-Landé equation by replacing the power-law repulsive term with an exponential term e − r 0 / ρ {\displaystyle e^{-r_{0}/\rho }} which better accounts for the quantum mechanical repulsion effect between the ions. [ 9 ] This equation improved the accuracy for the description of many ionic compounds:
Δ U l = − N A M z + z − e 2 4 π ε 0 r 0 ( 1 − ρ r 0 ) {\displaystyle \Delta U_{l}=-{\frac {N_{A}Mz^{+}z^{-}e^{2}}{4\pi \varepsilon _{0}r_{0}}}\left(1-{\frac {\rho }{r_{0}}}\right)}
where N A {\displaystyle N_{A}} is the Avogadro constant , M {\displaystyle M} is the Madelung constant , z + {\displaystyle z^{+}} / z − {\displaystyle z^{-}} are the charge numbers of the cations and anions, e {\displaystyle e} is the elementary charge (1.6022 × 10 −19 C ), ε 0 {\displaystyle \varepsilon _{0}} is the permittivity of free space ( 8.854 × 10 −12 C 2 J −1 m −1 ), r 0 {\displaystyle r_{0}} is the distance to the closest ion and ρ {\displaystyle \rho } is a constant that depends on the compressibility of the crystal (30 - 34.5 pm works well for alkali halides), used to represent the repulsion between ions at short range. [ 5 ] As before, large values of r 0 {\displaystyle r_{0}} result in low lattice energies, whereas high ionic charges result in high lattice energies.
Developed in 1956 by Anatolii Kapustinskii , this is a generalized empirical equation useful for a wide range of ionic compounds, including those with complex ions. [ 10 ] It builds upon the previous equations and provides a simplified way to estimate the lattice energy of ionic compounds based on the charges and radii of the ions. The approximation simplifies calculations compared to the Born-Landé and Born-Mayer equations, making it convenient for quick estimates where high precision is not required. [ 2 ]
Δ U l = − κ Z | z + z − | r 0 ( 1 − ρ r 0 ) {\displaystyle \Delta U_{l}=-{\frac {\kappa Z|z^{+}z^{-}|}{r_{0}}}\left(1-{\frac {\rho }{r_{0}}}\right)}
where κ {\displaystyle \kappa } is the Kapustinskii constant (1.202·10 5 (kJ·pm)/mol, consistent with distances expressed in picometres), Z {\displaystyle Z} is the number of ions per formula unit , z + {\displaystyle z^{+}} / z − {\displaystyle z^{-}} are the charge numbers of the cations and anions, r 0 {\displaystyle r_{0}} is the distance to the closest ion and ρ {\displaystyle \rho } is a constant that depends on the compressibility of the crystal (30 - 34.5 pm works well for alkali halides), used to represent the repulsion between ions at short range.
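Because the Kapustinskii equation needs only the ion count, charges, and radii, it is straightforward to evaluate. A minimal Python sketch (the function name is illustrative; ρ = 34.5 pm is assumed, as suggested above for alkali halides):

def kapustinskii(nu, z_plus, z_minus, r0_pm, rho_pm=34.5):
    """Kapustinskii lattice-energy estimate in kJ/mol (negative = released)."""
    K = 1.202e5  # Kapustinskii constant, kJ*pm/mol
    return -K * nu * abs(z_plus * z_minus) / r0_pm * (1.0 - rho_pm / r0_pm)

# NaCl: 2 ions per formula unit, charges (+1, -1), r0 ~ 283 pm
print(kapustinskii(2, 1, -1, 283))  # ~ -746 kJ/mol, close to the -786 kJ/mol above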
For certain ionic compounds, the calculation of the lattice energy requires the explicit inclusion of polarization effects. [ 11 ] In these cases the polarization energy E pol associated with ions on polar lattice sites may be included in the Born–Haber cycle. As an example, one may consider the case of iron-pyrite FeS 2 . It has been shown that neglect of polarization led to a 15% difference between theory and experiment in the case of FeS 2 , whereas including it reduced the error to 2%. [ 12 ]
The following table presents a list of lattice energies for some common compounds as well as their structure type. | https://en.wikipedia.org/wiki/Lattice_energy |
Lattice light-sheet microscopy is a modified version of light sheet fluorescence microscopy that increases image acquisition speed while decreasing damage to cells caused by phototoxicity . This is achieved by using a structured light sheet to excite fluorescence in successive planes of a specimen, generating a time series of 3D images which can provide information about dynamic biological processes. [ 1 ] [ 2 ]
It was developed in the early 2010s by a team led by Eric Betzig . [ 1 ] According to an interview conducted by The Washington Post , Betzig believes that this development will have a greater impact than the work that earned him the 2014 Nobel Prize in Chemistry for "the development of super-resolution fluorescence microscopy ". [ 3 ]
Lattice light sheet microscopy is a novel combination of techniques from Light sheet fluorescence microscopy , Bessel beam microscopy, and Super-resolution microscopy (specifically structured illumination microscopy, SIM [ 4 ] ).
In lattice light sheet microscopy, very similarly to light sheet microscopy, the illumination of the sample occurs perpendicular to the image detection. Initially, the light sheet is formed by stretching the linearly polarized circular input beam with a pair of cylindrical lenses along the x axis and then compressing it with an additional pair of lenses along the z axis. [ 5 ] This modification creates a thin sheet of light that is then projected onto a binary ferroelectric spatial light modulator (SLM). The SLM is a device that spatially varies the waveform of a beam of light. The light that is reflected back from the SLM is used to eliminate unwanted diffraction . Diffraction is eliminated by the transform lens that creates a Fraunhofer diffraction pattern from the reflected light at an opaque mask containing a transparent annulus . [ 5 ] Optical lattices are two or three dimensional interference patterns, which here are produced by the transparent annular ring. The mask is conjugate to x and z galvanometers . This quality of the microscope is important for the dithered mode of operation, where the light sheet must be oscillated along the x axis.
The lattice light-sheet microscope has two modes of operation. In the dithered mode, the light sheet is rapidly scanned along the x axis and only one image is recorded per z plane, at normal diffraction-limited resolution. [ 1 ] The second mode of operation is the structured illumination microscopy (SIM) mode. SIM is a technique where a grid pattern of excitation light is superimposed on the sample and rotated in steps between the capture of each image. [ 4 ] [ 6 ] [ 7 ] These images are then processed via an algorithm to produce a reconstructed image beyond the diffraction limit of the optical instruments.
Lattice light sheet microscopy can be viewed as an improvement of Bessel beam light sheet microscopes [ 8 ] in terms of axial resolution (also termed resolution in z). In Bessel beam light sheet microscopes, a non-diffracting Bessel beam is first created then dithered in the x direction to produce a sheet. However, the side lobes of a Bessel beam carry as much energy as the central spot, resulting in illumination outside the depth of field of the observation objective.
Lattice light sheet microscopy aims at reducing the intensity of the outer lobes of the Bessel functions by destructive interference . To do so, a two-dimensional lattice of regularly spaced Bessel beams is created. Then, destructive interference can be triggered by carefully tuning the spacing between the beams (that is, the period of the lattice).
Practically, the lattice of interfering Bessel beams is engineered by a spatial light modulator (SLM), a liquid-crystal device whose individual pixels can be switched on and off to display a binary pattern. Due to the matrix nature of the SLM, the generated pattern contains many unwanted frequencies. Thus, these are filtered out by means of an annulus placed in a plane conjugate with the back focal plane of the objective (Fourier domain).
Finally, to obtain a uniform intensity at the sample rather than a lattice, the sheet is dithered using a galvanometer oscillating in the x direction.
Lattice light-sheet microscopy combines high resolution and clarity at high image-acquisition speed, without damaging samples through photobleaching . [ 1 ] Photobleaching is a major and highly common problem in fluorescence microscopy wherein fluorescent tags lose their ability to emit photons upon repeated excitation, producing an image signal that weakens over the course of multiple excitations. Samples in a lattice light-sheet microscope experience photobleaching at a drastically reduced rate compared to conventional techniques. This allows for longer exposures without loss of signal, which in turn allows video to be captured over longer periods of time. The lattice method can also resolve 200 to 1000 planes per second, an extremely fast imaging rate that allows continuous video capture. This capture rate is one order of magnitude faster than Bessel beam excitation, and two orders of magnitude faster than spinning disk confocal microscopy . [ 1 ] These two advantages combine to allow researchers to take very detailed movies over long periods of time.
Lattice light sheet microscopy is limited to transparent and thin samples to achieve good image quality. The quality of the acquired images degrades with imaging depth. This phenomenon occurs due to sample-induced aberrations , and it has been proposed that imaging samples beyond 20 to 100 μm will require adaptive optics . [ 1 ]
Lattice light sheet microscopy is useful for in-vivo cellular localization and super resolution. The lattice light sheet's confined excitation band keeps nearly all illuminated cells in focus. The reduction of large, out-of-focus spots allows precise tracking of individual cells at a high molecular density, a capability unattainable through previous microscopy methods. [ 1 ] Consequently, lattice light sheet is being used to study a number of dynamic cellular interactions. The decrease in phototoxicity has created opportunities to study the subcellular processes of embryos without damaging their living tissues. Studies have examined and quantified the extent of the highly variable growth patterns of microtubules throughout mitosis . Dictyostelium discoideum (slime mold) cells were imaged during their rapid chemotactic movement toward one another and the initial contact.
The aggregation of T cell and target cells was observed, along with the subsequent formation of the immunological synapse . The advancements of the lattice sheet method revealed three-dimensional movement patterns of actin as well as lamellipodial protrusion in these interactions. The increase in imaging speed also allowed the observation of fast moving neutrophils through the extracellular matrix in another study [ citation needed ] .
The technique, along with chemical and genetic manipulation techniques, was used at Harvard Medical School , in cooperation with other institutions, to capture for the first time a live image of a virus (engineered to carry COVID-19 spike proteins) infecting a cell by injecting its genetic material into the cell's endosome . [ 9 ] [ 10 ]
The technique is being actively developed at the Janelia Research Campus of the Howard Hughes Medical Institute . [ 11 ] Eric Betzig has stated that his goal is to combine his work on microscopy to develop a "high-speed, high-resolution, low-impact tool that can look deep inside biological systems." [ 3 ] Penetration deeper than 20–100 μm may be achieved by combining lattice light-sheet microscopy with adaptive optics . [ 1 ] | https://en.wikipedia.org/wiki/Lattice_light-sheet_microscopy |
Lattice models in biophysics represent a class of statistical-mechanical models which consider a biological macromolecule (such as DNA, protein, actin, etc.) as a lattice of units, each unit being in different states or conformations.
For example, DNA in chromatin can be represented as a one-dimensional lattice, whose elementary units are the nucleotide , base pair or nucleosome . Different states of the unit can be realized either by chemical modifications (e.g. DNA methylation or modifications of DNA-bound histones ), or due to quantized internal degrees of freedom (e.g. different angles of the bond joining two neighboring units), or due to binding events involving a given unit (e.g. reversible binding of small ligands or proteins to DNA , or binding/unbinding of two complementary nucleotides in the DNA base pair). [ 1 ]
This biophysics -related article is a stub . You can help Wikipedia by expanding it .
This article about lattice models is a stub . You can help Wikipedia by expanding it .
This article about statistical mechanics is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lattice_model_(biophysics) |
In crystallography , a lattice plane of a given Bravais lattice is any plane containing at least three noncollinear Bravais lattice points. Equivalently, a lattice plane is a plane whose intersections with the lattice (or any crystalline structure of that lattice) are periodic (i.e. are described by 2d Bravais lattices). [ 1 ] A family of lattice planes is a collection of equally spaced parallel lattice planes that, taken together, intersect all lattice points. Every family of lattice planes can be described by a set of integer Miller indices that have no common divisors (i.e. are relatively prime ). Conversely, every set of Miller indices without common divisors defines a family of lattice planes. If, on the other hand, the Miller indices are not relatively prime, the family of planes defined by them is not a family of lattice planes, because not every plane of the family then intersects lattice points. [ 2 ]
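The no-common-divisor condition is simple to check computationally. A minimal Python sketch (the function name is illustrative):

from math import gcd
from functools import reduce

def defines_lattice_plane_family(h, k, l):
    """True when the Miller indices (h, k, l) share no common divisor."""
    return reduce(gcd, (abs(h), abs(k), abs(l))) == 1

print(defines_lattice_plane_family(1, 1, 1))  # True: a family of lattice planes
print(defines_lattice_plane_family(2, 2, 2))  # False: half of these planes miss all lattice points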
Conversely, planes that are not lattice planes have aperiodic intersections with the lattice; such aperiodic intersections are called quasicrystals . This is known as a "cut-and-project" construction of a quasicrystal (and is typically also generalized to higher dimensions). [ 3 ]
This geometry-related article is a stub . You can help Wikipedia by expanding it .
This crystallography -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lattice_plane |
In computer science , lattice problems are a class of optimization problems related to mathematical objects called lattices . The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems : lattice problems are an example of NP-hard problems which have been shown to be average-case hard , providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers . For applications in such cryptosystems , lattices over vector spaces (often Q n {\displaystyle \mathbb {Q} ^{n}} ) or free modules (often Z n {\displaystyle \mathbb {Z} ^{n}} ) are generally considered.
For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space V and a norm N . The norm usually considered is the Euclidean norm L 2 . However, other norms (such as L p ) are also considered and show up in a variety of results. [ 1 ]
Throughout this article, let λ ( L ) {\displaystyle \lambda (L)} denote the length of the shortest non-zero vector in the lattice L : that is,

λ ( L ) = min v ∈ L ∖ { 0 } ‖ v ‖ N {\displaystyle \lambda (L)=\min _{v\in L\setminus \{0\}}\|v\|_{N}}
In the SVP, a basis of a vector space V and a norm N (often L 2 ) are given for a lattice L and one must find the shortest non-zero vector in V , as measured by N , in L . In other words, the algorithm should output a non-zero vector v such that ‖ v ‖ N = λ ( L ) {\displaystyle \|v\|_{N}=\lambda (L)} .
In the γ-approximation version SVP γ , one must find a non-zero lattice vector of length at most γ ⋅ λ ( L ) {\displaystyle \gamma \cdot \lambda (L)} for given γ ≥ 1 {\displaystyle \gamma \geq 1} .
The exact version of the problem is only known to be NP-hard for randomized reductions. [ 2 ] [ 3 ] By contrast, the corresponding problem with respect to the uniform norm is known to be NP-hard . [ 4 ]
To solve the exact version of the SVP under the Euclidean norm, several different approaches are known, which can be split into two classes: algorithms requiring superexponential time ( 2 ω ( n ) {\displaystyle 2^{\omega (n)}} ) and poly ( n ) {\displaystyle \operatorname {poly} (n)} memory, and algorithms requiring both exponential time and space ( 2 Θ ( n ) {\displaystyle 2^{\Theta (n)}} ) in the lattice dimension. The former class of algorithms most notably includes lattice enumeration [ 5 ] [ 6 ] [ 7 ] and random sampling reduction, [ 8 ] [ 9 ] while the latter includes lattice sieving, [ 10 ] [ 11 ] [ 12 ] computing the Voronoi cell of the lattice, [ 13 ] [ 14 ] and discrete Gaussian sampling. [ 15 ] An open problem is whether algorithms for solving exact SVP exist running in single exponential time ( 2 O ( n ) {\displaystyle 2^{O(n)}} ) and requiring memory scaling polynomially in the lattice dimension. [ 16 ]
To solve the γ-approximation version SVP γ for γ > 1 {\displaystyle \gamma >1} for the Euclidean norm, the best known approaches are based on using lattice basis reduction . For large γ = 2 Ω ( n ) {\displaystyle \gamma =2^{\Omega (n)}} , the Lenstra–Lenstra–Lovász (LLL) algorithm can find a solution in time polynomial in the lattice dimension. For smaller values γ {\displaystyle \gamma } , the Block Korkine-Zolotarev algorithm (BKZ) [ 17 ] [ 18 ] [ 19 ] is commonly used, where the input to the algorithm (the blocksize β {\displaystyle \beta } ) determines the time complexity and output quality: for large approximation factors γ {\displaystyle \gamma } , a small block size β {\displaystyle \beta } suffices, and the algorithm terminates quickly. For small γ {\displaystyle \gamma } , larger β {\displaystyle \beta } are needed to find sufficiently short lattice vectors, and the algorithm takes longer to find a solution. The BKZ algorithm internally uses an exact SVP algorithm as a subroutine (running in lattices of dimension at most β {\displaystyle \beta } ), and its overall complexity is closely related to the costs of these SVP calls in dimension β {\displaystyle \beta } .
The problem GapSVP β consists of distinguishing between the instances of SVP in which the length of the shortest vector is at most 1 {\displaystyle 1} or larger than β {\displaystyle \beta } , where β {\displaystyle \beta } can be a fixed function of the dimension of the lattice n {\displaystyle n} . Given a basis for the lattice, the algorithm must decide whether λ ( L ) ≤ 1 {\displaystyle \lambda (L)\leq 1} or λ ( L ) > β {\displaystyle \lambda (L)>\beta } . Like other promise problems , the algorithm is allowed to err on all other cases.
Yet another version of the problem is GapSVP ζ,γ for some functions ζ and γ. The input to the algorithm is a basis B {\displaystyle B} and a number d {\displaystyle d} . It is assured that all the vectors in the Gram–Schmidt orthogonalization are of length at least 1, and that λ ( L ( B ) ) ≤ ζ ( n ) {\displaystyle \lambda (L(B))\leq \zeta (n)} and that 1 ≤ d ≤ ζ ( n ) / γ ( n ) {\displaystyle 1\leq d\leq \zeta (n)/\gamma (n)} , where n {\displaystyle n} is the dimension. The algorithm must accept if λ ( L ( B ) ) ≤ d {\displaystyle \lambda (L(B))\leq d} , and reject if λ ( L ( B ) ) ≥ γ ( n ) ⋅ d {\displaystyle \lambda (L(B))\geq \gamma (n)\cdot d} . For large ζ {\displaystyle \zeta } (i.e. ζ ( n ) > 2 n / 2 {\displaystyle \zeta (n)>2^{n/2}} ), the problem is equivalent to GapSVP γ because [ 20 ] a preprocessing done using the LLL algorithm makes the second condition (and hence, ζ {\displaystyle \zeta } ) redundant.
In CVP, a basis of a vector space V and a metric M (often L 2 ) are given for a lattice L , as well as a vector v in V but not necessarily in L . It is desired to find the vector in L closest to v (as measured by M ). In the γ {\displaystyle \gamma } -approximation version CVP γ , one must find a lattice vector at distance at most γ ⋅ dist ( v , L ) {\displaystyle \gamma \cdot \operatorname {dist} (v,L)} , where dist ( v , L ) {\displaystyle \operatorname {dist} (v,L)} denotes the distance from v to the closest lattice point.
The closest vector problem is a generalization of the shortest vector problem. It is easy to show that given an oracle for CVP γ (defined below), one can solve SVP γ by making some queries to the oracle. [ 21 ] The naive method to find the shortest vector by calling the CVP γ oracle to find the closest vector to 0 does not work because 0 is itself a lattice vector and the algorithm could potentially output 0.
The reduction from SVP γ to CVP γ is as follows: Suppose that the input to the SVP γ is the basis for lattice B = [ b 1 , b 2 , … , b n ] {\displaystyle B=[b_{1},b_{2},\ldots ,b_{n}]} . Consider the basis B i = [ b 1 , … , 2 b i , … , b n ] {\displaystyle B^{i}=[b_{1},\ldots ,2b_{i},\ldots ,b_{n}]} and let x i {\displaystyle x_{i}} be the vector returned by CVP γ ( B i , b i ) . The claim is that the shortest vector in the set { x i − b i } {\displaystyle \{x_{i}-b_{i}\}} is the shortest vector in the given lattice.
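The reduction can be sketched directly in code. The following Python sketch assumes a black-box cvp_oracle(basis, target) returning an (approximately) closest lattice vector to the target; all names are illustrative:

import numpy as np

def svp_via_cvp(B, cvp_oracle):
    """Approximate SVP given a CVP oracle. B: integer matrix whose columns b_1..b_n form a basis."""
    n = B.shape[1]
    candidates = []
    for i in range(n):
        Bi = B.copy()
        Bi[:, i] *= 2                  # basis of the sublattice with even i-th coefficient
        xi = cvp_oracle(Bi, B[:, i])   # closest vector in L(B^i) to b_i
        candidates.append(xi - B[:, i])
    # b_i is not in L(B^i), so every candidate is a non-zero lattice vector;
    # the shortest candidate solves SVP with the oracle's approximation factor
    return min(candidates, key=np.linalg.norm)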
Goldreich et al. showed that any hardness of SVP implies the same hardness for CVP. [ 22 ] Using PCP tools, Arora et al. showed that CVP is hard to approximate within factor 2 log 1 − ϵ ( n ) {\displaystyle 2^{\log ^{1-\epsilon }(n)}} unless NP ⊆ DTIME ( 2 poly ( log n ) ) {\displaystyle \operatorname {NP} \subseteq \operatorname {DTIME} (2^{\operatorname {poly} (\log n)})} . [ 23 ] Dinur et al. strengthened this by giving a NP-hardness result with ϵ = ( log log n ) c {\displaystyle \epsilon =(\log \log n)^{c}} for c < 1 / 2 {\displaystyle c<1/2} . [ 24 ]
Algorithms for CVP, especially the Fincke and Pohst variant, [ 6 ] have been used for data detection in multiple-input multiple-output ( MIMO ) wireless communication systems (for coded and uncoded signals). [ 25 ] [ 13 ] In this context it is called sphere decoding due to the sphere radius used internally in many CVP solutions. [ 26 ]
It has been applied in the field of the integer ambiguity resolution of carrier-phase GNSS (GPS). [ 27 ] It is called the LAMBDA method in that field. In the same field, the general CVP problem is referred to as Integer Least Squares .
This problem is similar to the GapSVP problem. For GapCVP β , the input consists of a lattice basis and a vector v {\displaystyle v} , and the algorithm must answer whether there is a lattice vector at distance at most 1 from v {\displaystyle v} , or whether every lattice vector is at distance greater than β {\displaystyle \beta } from v {\displaystyle v} .
The remaining possibility is that the closest lattice vector lies at a distance d {\displaystyle d} with 1 < d ≤ β {\displaystyle 1<d\leq \beta } , hence the name Gap CVP.
The problem is trivially contained in NP for any approximation factor.
Schnorr , in 1987, showed that deterministic polynomial time algorithms can solve the problem for β = 2 O ( n ( log log n ) 2 / log n ) {\displaystyle \beta =2^{O(n(\log \log n)^{2}/\log n)}} . [ 28 ] Ajtai et al. showed that probabilistic algorithms can achieve a slightly better approximation factor of β = 2 O ( n log log n / log n ) {\displaystyle \beta =2^{O(n\log \log n/\log n)}} . [ 10 ]
In 1993, Banaszczyk showed that GapCVP n is in N P ∩ c o N P {\displaystyle {\mathsf {NP\cap coNP}}} . [ 29 ] In 2000, Goldreich and Goldwasser showed that β = n / log n {\displaystyle \beta ={\sqrt {n/\log n}}} puts the problem in both NP and coAM . [ 30 ] In 2005, Aharonov and Regev showed that for some constant c {\displaystyle c} , the problem with β = c n {\displaystyle \beta =c{\sqrt {n}}} is in N P ∩ c o N P {\displaystyle {\mathsf {NP\cap coNP}}} . [ 31 ]
For lower bounds, Dinur et al. showed in 1998 that the problem is NP-hard for β = n o ( 1 / log log n ) {\displaystyle \beta =n^{o(1/\log {\log {n}})}} . [ 32 ]
Given a lattice L of dimension n , the algorithm must output n linearly independent vectors v 1 , v 2 , … , v n {\displaystyle v_{1},v_{2},\ldots ,v_{n}} so that max ‖ v i ‖ ≤ max B ‖ b i ‖ {\displaystyle \max \|v_{i}\|\leq \max _{B}\|b_{i}\|} , where the right-hand side considers all bases B = { b 1 , … , b n } {\displaystyle B=\{b_{1},\ldots ,b_{n}\}} of the lattice.
In the γ {\displaystyle \gamma } -approximate version, given a lattice L with dimension n , one must find n linearly independent vectors v 1 , v 2 , … , v n {\displaystyle v_{1},v_{2},\ldots ,v_{n}} of length max ‖ v i ‖ ≤ γ λ n ( L ) {\displaystyle \max \|v_{i}\|\leq \gamma \lambda _{n}(L)} , where λ n ( L ) {\displaystyle \lambda _{n}(L)} is the n {\displaystyle n} th successive minimum of L {\displaystyle L} .
This problem is similar to CVP. Given a vector such that its distance from the lattice is at most λ ( L ) / 2 {\displaystyle \lambda (L)/2} , the algorithm must output the closest lattice vector to it.
Given a basis for the lattice, the algorithm must find the largest distance (or in some versions, its approximation) from any vector to the lattice.
Many problems become easier if the input basis consists of short vectors. An algorithm that solves the Shortest Basis Problem (SBP) must, given a lattice basis B {\displaystyle B} , output an equivalent basis B ′ {\displaystyle B'} such that the length of the longest vector in B ′ {\displaystyle B'} is as short as possible.
The approximation version SBP γ problem consist of finding a basis whose longest vector is at most γ {\displaystyle \gamma } times longer than the longest vector in the shortest basis.
Average-case hardness of problems forms a basis for proofs-of-security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems to base cryptographic schemes on. Moreover, worst-case hardness of some lattice problems have been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers .
The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL) was an early efficient algorithm for this problem which could output an almost reduced lattice basis in polynomial time. [ 33 ] This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice; however, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai . [ 2 ]
In his seminal papers, Ajtai showed that the SVP problem was NP-hard and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems. [ 2 ] [ 3 ] Building on these results, Ajtai and Dwork created a public-key cryptosystem whose security could be proven using only the worst case hardness of a certain version of SVP, [ 34 ] thus making it the first result to have used worst-case hardness to create secure systems. [ 35 ] | https://en.wikipedia.org/wiki/Lattice_problem |
In mathematics, the goal of lattice basis reduction is to find a basis with short, nearly orthogonal vectors when given an integer lattice basis as input. This is realized using different algorithms, whose running time is usually at least exponential in the dimension of the lattice.
One measure of nearly orthogonal is the orthogonality defect . This compares the product of the lengths of the basis vectors with the volume of the parallelepiped they define. For perfectly orthogonal basis vectors, these quantities would be the same.
Any particular basis of n {\displaystyle n} vectors may be represented by a matrix B {\displaystyle B} , whose columns are the basis vectors b i , i = 1 , … , n {\displaystyle b_{i},i=1,\ldots ,n} . In the fully dimensional case where the number of basis vectors is equal to the dimension of the space they occupy, this matrix is square, and the volume of the fundamental parallelepiped is simply the absolute value of the determinant of this matrix det ( B ) {\displaystyle \det(B)} . If the number of vectors is less than the dimension of the underlying space, then volume is det ( B T B ) {\displaystyle {\sqrt {\det(B^{T}B)}}} . For a given lattice Λ {\displaystyle \Lambda } , this volume is the same (up to sign) for any basis, and hence is referred to as the determinant of the lattice det ( Λ ) {\displaystyle \det(\Lambda )} or lattice constant d ( Λ ) {\displaystyle d(\Lambda )} .
The orthogonality defect is the product of the basis vector lengths divided by the parallelepiped volume:

δ ( B ) = ∏ i = 1 n ‖ b i ‖ det ( Λ ) {\displaystyle \delta (B)={\frac {\prod _{i=1}^{n}\|b_{i}\|}{\det(\Lambda )}}}
From the geometric definition it may be appreciated that δ ( B ) ≥ 1 {\displaystyle \delta (B)\geq 1} with equality if and only if the basis is orthogonal.
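As an illustration, the defect can be computed directly from a basis matrix. A minimal Python sketch (basis vectors as matrix columns; names are illustrative):

import numpy as np

def orthogonality_defect(B):
    """Orthogonality defect of the basis formed by the columns of B."""
    lengths = np.linalg.norm(B, axis=0)       # ||b_i|| for each column
    volume = np.sqrt(np.linalg.det(B.T @ B))  # parallelepiped volume, works for any number of vectors
    return np.prod(lengths) / volume

skewed = np.array([[1.0, 1.0],
                   [0.0, 1.0]])
print(orthogonality_defect(skewed))     # sqrt(2) ~ 1.414 for this skewed basis
print(orthogonality_defect(np.eye(2)))  # exactly 1 for an orthogonal basis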
If the lattice reduction problem is defined as finding the basis with the smallest possible defect, then the problem is NP-hard [ citation needed ] . However, there exist polynomial time algorithms to find a basis with defect δ ( B ) ≤ c {\displaystyle \delta (B)\leq c} where c is some constant depending only on the number of basis vectors and the dimension of the underlying space (if different) [ citation needed ] . This is a good enough solution in many practical applications [ citation needed ] .
For a basis consisting of just two vectors, there is a simple and efficient method of reduction closely analogous to the Euclidean algorithm for the greatest common divisor of two integers. As with the Euclidean algorithm, the method is iterative; at each step the larger of the two vectors is reduced by adding or subtracting an integer multiple of the smaller vector.
The algorithm is often known as Lagrange's algorithm or the Lagrange-Gauss algorithm.
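A minimal Python sketch of one standard formulation (the function name and the tuple representation of vectors are illustrative assumptions):

def lagrange_gauss(b1, b2):
    """Reduce a basis (b1, b2) of a 2D integer lattice; on return,
    b1 is a shortest non-zero vector of the lattice."""
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    if dot(b1, b1) > dot(b2, b2):
        b1, b2 = b2, b1                       # keep b1 the shorter vector
    while True:
        m = round(dot(b1, b2) / dot(b1, b1))  # nearest-integer Gram coefficient
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])
        if dot(b1, b1) <= dot(b2, b2):
            return b1, b2
        b1, b2 = b2, b1                       # b2 became shorter: swap and repeat

print(lagrange_gauss((1, 1), (3, 4)))  # ((-1, 0), (0, 1)): this lattice is all of Z^2

As with the Euclidean algorithm, every swap replaces a vector by a strictly shorter one, so the procedure terminates on any integer lattice.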
See the section on Lagrange's algorithm in [ 1 ] for further details.
Lattice reduction algorithms are used in a number of modern number theoretical applications, including in the discovery of a spigot algorithm for π {\displaystyle \pi } . Although determining the shortest basis is possibly an NP-complete problem, algorithms such as the LLL algorithm [ 2 ] can find a short (not necessarily shortest) basis in polynomial time with guaranteed worst-case performance. LLL is widely used in the cryptanalysis of public key cryptosystems.
When used to find integer relations, a typical input to the algorithm consists of an augmented n × n {\displaystyle n\times n} identity matrix with the entries in the last column consisting of the n {\displaystyle n} elements (multiplied by a large positive constant w {\displaystyle w} to penalize vectors that do not sum to zero) between which the relation is sought.
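A sketch of how such an input matrix can be assembled (the LLL routine that would consume it is assumed to come from an external library and is not defined here):

import numpy as np

def relation_matrix(xs, w=10**6):
    """Rows: identity augmented with w*x_i in the last column (rounded to integers)."""
    n = len(xs)
    scaled = np.rint(np.asarray(xs, dtype=float) * w).astype(np.int64)
    return np.column_stack([np.eye(n, dtype=np.int64), scaled])

# Feeding relation_matrix(xs) to an LLL routine yields short rows
# (v_1, ..., v_n, r); a tiny |r| signals the candidate relation
# v_1*x_1 + ... + v_n*x_n ~ 0.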
The LLL algorithm for computing a nearly-orthogonal basis was used to show that integer programming in any fixed dimension can be done in polynomial time . [ 3 ]
The following algorithms reduce lattice bases; several public implementations of these algorithms are also listed. | https://en.wikipedia.org/wiki/Lattice_reduction |
Lattice scattering is the scattering of ions by interaction with atoms in a lattice. [ 1 ] This effect can be qualitatively understood as phonons colliding with charge carriers .
In the current quantum mechanical picture of conductivity, the ease with which electrons traverse a crystal lattice is dependent on the near-perfectly regular spacing of ions in that lattice. Only when a lattice contains perfectly regular spacing can the ion-lattice interaction (scattering) lead to almost transparent behavior of the lattice. [ 2 ]
In the quantum understanding, an electron is viewed as a wave traveling through a medium. When the wavelength of the electrons is larger than the crystal spacing, the electrons will propagate freely throughout the metal without collision.
This quantum mechanics -related article is a stub . You can help Wikipedia by expanding it .
This scattering –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lattice_scattering |
Latticework is an openwork framework consisting of a criss-crossed pattern of strips of building material , typically wood or metal . The design is created by crossing the strips to form a grid or weave. [ 1 ] Latticework may be functional – for example, to allow airflow to or through an area; structural, as a truss in a lattice girder ; [ 2 ] used to add privacy, as through a lattice screen; purely decorative ; or some combination of these.
Latticework in stone or wood from the classical period is also called Roman lattice or transenna (plural transenne ).
In India , the house of a rich or noble person may be built with a baramdah or verandah surrounding every level leading to the living area. The upper floors often have balconies overlooking the street that are shielded by latticed screens carved in stone called jalis which keep the area cool and give privacy. [ 3 ] | https://en.wikipedia.org/wiki/Latticework |
In mathematics, a Lattès map is a rational map f = Θ L Θ −1 from the complex sphere to itself such that Θ is a holomorphic map from a complex torus to the complex sphere and L is an affine map z → az + b from the complex torus to itself.
Lattès maps are named after French mathematician Samuel Lattès , who wrote about them in 1918. | https://en.wikipedia.org/wiki/Lattès_map |
Laucysteinamide A ( LcA ) is a marine natural product isolated from a cyanobacterium , Caldora penicillata . [ 1 ]
It is structurally related to other marine cyanobacterial metabolites such as somocystinamide A [ 2 ] and curacin A , which have inspired extensive investigations into their use as a lead for anticancer therapies. [ 3 ] [ 4 ] [ 5 ] [ 6 ] Its biological activity profile has not been fully evaluated due to decomposition of the natural sample. However, it has shown moderate cytotoxicity against H460 human lung cancer cells. [ 1 ]
In order to examine the possibility that LcA's true bioactivity was diminished by solubility issues, Taylor et al. chemically synthesized LcA. [ 7 ] This synthetic sample was incorporated into the emulsifier PEG400 and tested for its cytotoxicity against H460 cells. This sample did not show any more activity than the natural sample, implying that LcA has only moderate cytotoxicity. In addition, simple enamide analogs showed no activity. [ 7 ] This work implies that the exceptional antiproliferative activity of somocystinamide A arises from the dimeric nature of its structure and not from the enamide moiety.
This Cyanobacteria -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Laucysteinamide_A |
Laudate Deum ( Praise God ) is an apostolic exhortation by Pope Francis , published on October 4, 2023. [ 1 ] It was released on the 2023 Feast of St Francis of Assisi as a follow-up to his 2015 encyclical Laudato si' . The text is about 8,000 words divided into 73 paragraphs. [ 2 ]
In it, Pope Francis calls for speedier action against the climate crisis and condemns climate change denial . [ 3 ]
The apostolic exhortation, dated 4 October 2023, was officially presented the following day—Thursday morning, October 5—in a press conference called “Laudate Deum: voices and testimonies on the climate crisis”, held in the Vatican Gardens, in Largo della Radio, in front of the Palazzina Leone XIII of the Vatican. [ 4 ]
The Vatican released the document in Italian, Belarusian, German, English, Spanish, French, Polish, Portuguese and Arabic.
It was Pope Francis' sixth apostolic exhortation, after Querida Amazonia , which was released in 2020. [ 5 ]
Pope Francis revealed the title of Laudate Deum during a meeting on 21 September 2023 at the Vatican with rectors of Catholic and public universities from throughout Latin America and the Caribbean. "Laudate Deum" (Praise God) is a frequent refrain in several psalms, including Psalm 148, which tells the heavens and the angels and the sun and moon to praise the Lord. The new document, Pope Francis said at the time, is "a look at what has happened" since 2015 and a look at what still "needs to be done." [ 6 ]
The title refers to the words of St. Francis of Assisi and to the encyclical Laudato si' , which was published in 2015. “‘Praise God for all his creatures,’” Laudate Deum begins. “This was the message that St. Francis of Assisi proclaimed by his life, his canticles and all his actions.” [ 2 ]
The main goal of Laudate Deum is to call once again on all people of goodwill to care for the poor and for the Earth. [ 1 ]
In this document, the Pope expresses hope that societies around the world will change their lifestyles and intensify grassroots activities aimed at reducing the negative human impact on the natural environment , to prevent even more tragic damage to the Earth. [ 7 ] The dramatic environmental degradation strongly affects not only the indigenous peoples, the poor, and endangered species, but also the future of all young people. [ 8 ] He also calls on politicians and the rich to work for the common good, and not for their own profit and particular interests. Finally (in paragraph 73) the Pope emphasizes that "when human beings claim to take God’s place, they become their own worst enemies". [ 1 ]
As in Laudato Si' , the Pope makes use of several quotations from assemblies of bishops held around the world, [ 1 ] : Paragraph 3 affirming, for example, the African bishops' statement that "climate change is a moral outrage. It is a tragic and striking example of structural sin". [ 9 ]
The exhortation focuses on the urgency of addressing the climate crisis, offering insights on the current state of the global environment, the inadequacies of current responses, and proposed pathways forward. [ 10 ] The document is divided into 6 chapters. The introduction begins by acknowledging the undeniable reality of climate change and its increasingly evident effects on the planet. Pope Francis emphasizes the anthropogenic origin of climate change and the irreversible nature of many associated catastrophes. [ 8 ] While recognizing the limitations in fully correcting the damage, the exhortation underscores the importance of taking measures to prevent further harm. [ 7 ]
Laudate Deum addresses the reality of climate change and its escalating impact on the lives of all peoples. Pope Francis emphasizes the anthropogenic origin of the crisis , highlighting the irreversible nature of many associated catastrophes, "at least for several hundred years". [ 1 ] : Paragraph 15 While acknowledging the limitations in fully correcting the damage, the document stresses the need for measures to prevent further harm. [ 11 ]
The document explores resistance and confusion surrounding climate change, identifies human causes, and outlines the damages and risks associated with the crisis. It calls for a collective acceptance of responsibility for the impact on future generations, drawing parallels with the interconnectedness revealed during the COVID-19 pandemic . [ 12 ]
According to The New York Times , Francis' message amounted to a tacit acknowledgement that his initial appeal in Laudato si' to save the planet has gone largely unheeded. [ 13 ]
The pope specifically noted that "emissions per individual in the United States are about two times greater than those of individuals living in China, and about seven times greater than the average of the poorest countries". He also asserted that a "broad change in the irresponsible lifestyle connected with the Western model would have a significant long-term impact". [ 13 ] [ 14 ]
Pope Francis criticizes the prevailing belief that technology and economic power alone can solve environmental problems. [ 11 ] The exhortation calls for a reconsideration of the use of power, cautioning against excessive ambition driven by profit-centric logic, hindering genuine concern for the common home. [ 11 ]
Referencing the Bishops of the United States, he states, "Climate change is one of the principal challenges facing society and the global community. The effects of climate change are borne by the most vulnerable people, whether at home or around the world." [ 11 ]
The document stresses the importance of global cooperation in addressing the climate crisis. It advocates for multilateral agreements and effective global organizations with the authority to ensure the global common good. [ 15 ] Critiquing past approaches to decision-making, Laudate Deum calls for a reconfiguration of multilateralism to address inadequacies in current political mechanisms. [ 16 ]
Pope Francis notes that "Not every increase in power represents progress for humanity", pointing to historical instances where technological progress has led to devastating consequences. [ 11 ]
Examining the weakness of international politics in the context of climate change, the exhortation acknowledges shortcomings in implementing agreements due to the lack of effective monitoring and sanctioning mechanisms. It emphasizes the need to overcome selfish posturing for the sake of the global common good. [ 17 ] [ 18 ]
Section 4 of the document reviews the history of international conferences on climate change , acknowledging shortcomings in implementing agreements. It underscores the necessity of overcoming selfish posturing for the sake of the global common good. [ 19 ] [ 20 ] [ 21 ]
The document asserts that “Over the decades, international conferences have been held to address the climate crisis, but they have often fallen short in implementing agreements due to the lack of effective monitoring and sanctioning mechanisms. It is crucial to overcome the selfish posturing of countries for the sake of the global common good”. [ 22 ]
Laudate Deum mentions the 2023 Conference of the Parties of the UNFCCC ( COP28 ), held from 30 November to 12 December 2023 at Expo City, Dubai. In what would have been a historic move, Pope Francis was originally scheduled to attend COP28 from December 1 to 3, but he had to cancel his trip due to health issues. [ 23 ] [ 24 ] The pontiff still wanted to participate in some way in the discussions in the United Arab Emirates, according to the Holy See; at the time it was unclear whether Francis might read an address to the climate conference by videoconference or take part in some other form.
The exhortation concludes by calling on people of all religious confessions to react to the climate crisis. Specifically addressing the Catholic faithful, Pope Francis reminds them of their responsibility to care for God's creation. The document emphasizes the importance of walking in communion and working towards reconciliation with the world. [ 25 ] [ 26 ]
Nicole Winfield and Seth Borenstein stated that “Pope Francis shamed and challenged world leaders [...] to commit to binding targets to slow climate change before it’s too late” and that “using precise scientific data, sharp diplomatic arguments and a sprinkling of theological reasoning, [...] he delivered a moral imperative for the world to transition away from fossil fuels to clean energy with measures that are “efficient, obligatory and readily monitored.” [ 27 ]
Father Daniel Horan OFM wrote in the National Catholic Reporter that “While in the buildup to its release some people have been describing this document as a second Laudato Si' or, more colloquially, its "sequel," the pope presents this text as more of an addendum and update to his earlier encyclical.” He points out that the document is “both an exhortation in the truest sense — a written or spoken message that emphatically urges someone to do something — and an apologia, a theological and rhetorical defense of truth and faith.” [ 28 ]
The new exhortation is “timely,” said Tomás Insua, co-founder and executive director of the nonprofit Laudato Si’ Movement , which works through close to 900 member organizations in 115 countries to foster a Catholic approach to the care of the environment. Insua, who is based in Rome, said that the pope's message underscores how “it’s a deeply Christian thing to be concerned for God’s beloved creation [and] deeply rooted in this very biblical love of creation.” [ 29 ]
Max Foley-Keene praised the exhortation in an article for Commonweal , calling it "an urgent cry for us to create new structures that will foster and protect these relationships". [ 30 ]
Giorgio Parisi , the Nobel-winning physicist who was one of the speakers at the news conference presenting Laudate Deum , stressed that it is “very important that this Apostolic Exhortation is addressed to all people of good faith” rather than just to members of the Catholic Church, “because, like the Pope has said many times, nobody can be saved alone, and we are all connected.” [ 31 ]
Writing on the National Review , conservative commentator John C. Pinheiro was critical towards the document, accusing the Pope of "resorting to apocalyptic language", of excessive trust towards institutions such as the United Nations and the World Health Organisation (pointing out that both the UN and the WHO support abortion) and of leniency towards the Chinese regime. He concluded by stating that "Pope Francis could benefit from advisers who understand metrics, numbers, and human action and exchange". [ 32 ] In a similar vein, Stephen Moore wrote on The Washington Times that "this declaration is so filled with anti-Christian fallacies that one has to wonder whether we have a pope who is actually Catholic". [ 33 ]
Italian traditionalist Catholic commentator Camillo Langone was extremely critical of the Pontiff, accusing him on Il Foglio of "imposing a climatist dogma" and of justifying the violent actions of radical environmentalist groups; he also compared the Pope to Müezzinzade Ali Pasha , commander of the Ottoman fleet during the Battle of Lepanto , and called for his excommunication. [ 34 ] | https://en.wikipedia.org/wiki/Laudate_Deum |
In crystallography and solid state physics , the Laue equations relate incoming waves to outgoing waves in the process of elastic scattering , where the photon energy or light temporal frequency does not change upon scattering by a crystal lattice . They are named after physicist Max von Laue (1879–1960).
The Laue equations can be written as Δ k = k o u t − k i n = G {\displaystyle \mathbf {\Delta k} =\mathbf {k} _{\mathrm {out} }-\mathbf {k} _{\mathrm {in} }=\mathbf {G} } as the condition of elastic wave scattering by a crystal lattice, where Δ k {\displaystyle \mathbf {\Delta k} } is the scattering vector , k i n {\displaystyle \mathbf {k} _{\mathrm {in} }} , k o u t {\displaystyle \mathbf {k} _{\mathrm {out} }} are incoming and outgoing wave vectors (to the crystal and from the crystal, by scattering), and G {\displaystyle \mathbf {G} } is a crystal reciprocal lattice vector . Due to elastic scattering | k o u t | 2 = | k i n | 2 {\displaystyle |\mathbf {k} _{\mathrm {out} }|^{2}=|\mathbf {k} _{\mathrm {in} }|^{2}} , the three vectors G {\displaystyle \mathbf {G} } , k o u t {\displaystyle \mathbf {k} _{\mathrm {out} }} , and − k i n {\displaystyle -\mathbf {k} _{\mathrm {in} }} form a rhombus if the equation is satisfied. If the scattering satisfies this equation, all the crystal lattice points scatter the incoming wave toward the scattering direction (the direction along k o u t {\displaystyle \mathbf {k} _{\mathrm {out} }} ). If the equation is not satisfied, then for any scattering direction, only some lattice points scatter the incoming wave. (This physical interpretation of the equation is based on the assumption that scattering at a lattice point is made in a way that the scattering wave and the incoming wave have the same phase at the point.) It also can be seen as the conservation of momentum, ℏ k o u t = ℏ k i n + ℏ G {\displaystyle \hbar \mathbf {k} _{\mathrm {out} }=\hbar \mathbf {k} _{\mathrm {in} }+\hbar \mathbf {G} } , since G {\displaystyle \mathbf {G} } is the wave vector for a plane wave associated with parallel crystal lattice planes. (Wavefronts of the plane wave are coincident with these lattice planes.)
The equations are equivalent to Bragg's law ; the Laue equations are vector equations, while Bragg's law is in a form that is easier to solve, but both express the same physical content.
Let a , b , c {\displaystyle \mathbf {a} \,,\mathbf {b} \,,\mathbf {c} } be primitive translation vectors (shortly called primitive vectors) of a crystal lattice L {\displaystyle L} , where atoms are located at lattice points described by x = p a + q b + r c {\displaystyle \mathbf {x} =p\,\mathbf {a} +q\,\mathbf {b} +r\,\mathbf {c} } with p {\displaystyle p} , q {\displaystyle q} , and r {\displaystyle r} as any integers . (So each lattice point x {\displaystyle \mathbf {x} } is an integer linear combination of the primitive vectors.)
Let k i n {\displaystyle \mathbf {k} _{\mathrm {in} }} be the wave vector of an incoming (incident) beam or wave toward the crystal lattice L {\displaystyle L} , and let k o u t {\displaystyle \mathbf {k} _{\mathrm {out} }} be the wave vector of an outgoing (diffracted) beam or wave from L {\displaystyle L} . Then the vector k o u t − k i n = Δ k {\displaystyle \mathbf {k} _{\mathrm {out} }-\mathbf {k} _{\mathrm {in} }=\mathbf {\Delta k} } , called the scattering vector or transferred wave vector , measures the difference between the incoming and outgoing wave vectors.
The three conditions that the scattering vector Δ k {\displaystyle \mathbf {\Delta k} } must satisfy, called the Laue equations , are the following:
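Δ k ⋅ a = 2 π h , Δ k ⋅ b = 2 π k , Δ k ⋅ c = 2 π l , {\displaystyle \mathbf {\Delta k} \cdot \mathbf {a} =2\pi h,\qquad \mathbf {\Delta k} \cdot \mathbf {b} =2\pi k,\qquad \mathbf {\Delta k} \cdot \mathbf {c} =2\pi l,}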
where h , k , l {\displaystyle h,k,l} are integers . Each choice of integers ( h , k , l ) {\displaystyle (h,k,l)} , called Miller indices , determines a scattering vector Δ k {\displaystyle \mathbf {\Delta k} } . Hence there are infinitely many scattering vectors that satisfy the Laue equations as there are infinitely many choices of Miller indices ( h , k , l ) {\displaystyle (h,k,l)} . Allowed scattering vectors Δ k {\displaystyle \mathbf {\Delta k} } form a lattice L ∗ {\displaystyle L^{*}} , called the reciprocal lattice of the crystal lattice L {\displaystyle L} , as each Δ k {\displaystyle \mathbf {\Delta k} } indicates a point of L ∗ {\displaystyle L^{*}} . (This is the meaning of the Laue equations as shown below.) This condition allows a single incident beam to be diffracted in infinitely many directions. However, the beams corresponding to high Miller indices are very weak and cannot be observed. These equations are enough to find a basis of the reciprocal lattice (since each observed Δ k {\displaystyle \mathbf {\Delta k} } indicates a point of the reciprocal lattice of the crystal under the measurement), from which the crystal lattice can be determined. This is the principle of x-ray crystallography .
For an incident plane wave at a single frequency f {\displaystyle \displaystyle f} (and the angular frequency ω = 2 π f {\displaystyle \displaystyle \omega =2\pi f} ) on a crystal, the diffracted waves from the crystal can be thought of as the sum of outgoing plane waves from the crystal. (In fact, any wave can be represented as a sum of plane waves; see Fourier optics .) The incident wave and one of the plane waves of the diffracted wave are represented as
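f i n ( x , t ) = A i n cos ( k i n ⋅ x − ω t + φ i n ) , f o u t ( x , t ) = A o u t cos ( k o u t ⋅ x − ω t + φ o u t ) , {\displaystyle f_{\mathrm {in} }(\mathbf {x} ,t)=A_{\mathrm {in} }\cos(\mathbf {k} _{\mathrm {in} }\cdot \mathbf {x} -\omega t+\varphi _{\mathrm {in} }),\qquad f_{\mathrm {out} }(\mathbf {x} ,t)=A_{\mathrm {out} }\cos(\mathbf {k} _{\mathrm {out} }\cdot \mathbf {x} -\omega t+\varphi _{\mathrm {out} }),}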
where k i n {\displaystyle \displaystyle \mathbf {k} _{\mathrm {in} }} and k o u t {\displaystyle \displaystyle \mathbf {k} _{\mathrm {out} }} are wave vectors for the incident and outgoing plane waves, x {\displaystyle \displaystyle \mathbf {x} } is the position vector , t {\displaystyle \displaystyle t} is a scalar representing time, and φ i n {\displaystyle \varphi _{\mathrm {in} }} and φ o u t {\displaystyle \varphi _{\mathrm {out} }} are initial phases for the waves. For simplicity we take waves as scalars here, even though the main case of interest is an electromagnetic field, which is a vector . We can think of these scalar waves as components of vector waves along a certain axis ( x , y , or z axis) of the Cartesian coordinate system .
The incident and diffracted waves propagate through space independently, except at points of the lattice L {\displaystyle L} of the crystal, where they resonate with the oscillators, so the phases of these waves must coincide. [ 1 ] At each point x = p a + q b + r c {\displaystyle \mathbf {x} =p\,\mathbf {a} +q\,\mathbf {b} +r\,\mathbf {c} } of the lattice L {\displaystyle L} , we have
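cos ( k i n ⋅ x − ω t + φ i n ) = cos ( k o u t ⋅ x − ω t + φ o u t ) , {\displaystyle \cos(\mathbf {k} _{\mathrm {in} }\cdot \mathbf {x} -\omega t+\varphi _{\mathrm {in} })=\cos(\mathbf {k} _{\mathrm {out} }\cdot \mathbf {x} -\omega t+\varphi _{\mathrm {out} }),}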
or equivalently, we must have
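k i n ⋅ x − ω t + φ i n = k o u t ⋅ x − ω t + φ o u t + 2 π n {\displaystyle \mathbf {k} _{\mathrm {in} }\cdot \mathbf {x} -\omega t+\varphi _{\mathrm {in} }=\mathbf {k} _{\mathrm {out} }\cdot \mathbf {x} -\omega t+\varphi _{\mathrm {out} }+2\pi n}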
for some integer n {\displaystyle n} , that depends on the point x {\displaystyle \mathbf {x} } . Since this equation holds at x = 0 {\displaystyle \mathbf {x} =0} , φ i n = φ o u t + 2 π n ′ {\displaystyle \varphi _{\mathrm {in} }=\varphi _{\mathrm {out} }+2\pi n'} at some integer n ′ {\displaystyle n'} . So
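k i n ⋅ x + φ i n = k o u t ⋅ x + φ i n + 2 π n . {\displaystyle \mathbf {k} _{\mathrm {in} }\cdot \mathbf {x} +\varphi _{\mathrm {in} }=\mathbf {k} _{\mathrm {out} }\cdot \mathbf {x} +\varphi _{\mathrm {in} }+2\pi n.}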
(We still use n {\displaystyle n} instead of ( n − n ′ ) {\displaystyle (n-n')} since both the notations essentially indicate some integer.) By rearranging terms, we get
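Δ k ⋅ x = ( k o u t − k i n ) ⋅ x = 2 π n . {\displaystyle \mathbf {\Delta k} \cdot \mathbf {x} =(\mathbf {k} _{\mathrm {out} }-\mathbf {k} _{\mathrm {in} })\cdot \mathbf {x} =2\pi n.}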
Now, it is enough to check that this condition is satisfied at the primitive vectors a , b , c {\displaystyle \mathbf {a} ,\mathbf {b} ,\mathbf {c} } (which is exactly what the Laue equations say), because, at any lattice point x = p a + q b + r c {\displaystyle \mathbf {x} =p\,\mathbf {a} +q\,\mathbf {b} +r\,\mathbf {c} } , we have
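Δ k ⋅ x = p ( Δ k ⋅ a ) + q ( Δ k ⋅ b ) + r ( Δ k ⋅ c ) = p ( 2 π h ) + q ( 2 π k ) + r ( 2 π l ) = 2 π n , {\displaystyle \mathbf {\Delta k} \cdot \mathbf {x} =p(\mathbf {\Delta k} \cdot \mathbf {a} )+q(\mathbf {\Delta k} \cdot \mathbf {b} )+r(\mathbf {\Delta k} \cdot \mathbf {c} )=p(2\pi h)+q(2\pi k)+r(2\pi l)=2\pi n,}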
where n {\displaystyle n} is the integer p h + q k + r l {\displaystyle ph+qk+rl} . The claim that each parenthesis, e.g. ( Δ k ⋅ a ) {\displaystyle (\mathbf {\Delta k} \cdot \mathbf {a} )} , is to be a multiple of 2 π {\displaystyle 2\pi } (that is, each Laue equation) is justified since otherwise p ( Δ k ⋅ a ) + q ( Δ k ⋅ b ) + r ( Δ k ⋅ c ) = 2 π n {\displaystyle p(\mathbf {\Delta k} \cdot \mathbf {a} )+q(\mathbf {\Delta k} \cdot \mathbf {b} )+r(\mathbf {\Delta k} \cdot \mathbf {c} )=2\pi n} would not hold for arbitrary integers p , q , r {\displaystyle p,q,r} .
This ensures that if the Laue equations are satisfied, then the incoming and outgoing (diffracted) waves have the same phase at each point of the crystal lattice, so the oscillations of the atoms of the crystal, which follow the incoming wave, can at the same time generate the outgoing wave in phase with the incoming wave.
If G = h A + k B + l C {\displaystyle \mathbf {G} =h\mathbf {A} +k\mathbf {B} +l\mathbf {C} } with h {\displaystyle h} , k {\displaystyle k} , l {\displaystyle l} as integers represents the reciprocal lattice for a crystal lattice L {\displaystyle L} (defined by x = p a + q b + r c {\displaystyle \mathbf {x} =p\,\mathbf {a} +q\,\mathbf {b} +r\,\mathbf {c} } ) in real space, we know that G ⋅ x = G ⋅ ( p a + q b + r c ) = 2 π ( h p + k q + l r ) = 2 π n {\displaystyle \mathbf {G} \cdot \mathbf {x} =\mathbf {G} \cdot (p\mathbf {a} +q\mathbf {b} +r\mathbf {c} )=2\pi (hp+kq+lr)=2\pi n} with an integer n {\displaystyle n} due to the known orthogonality between primitive vectors for the reciprocal lattice and those for the crystal lattice. (We use the physical, not crystallographer's, definition for reciprocal lattice vectors, which gives the factor of 2 π {\displaystyle 2\pi } .) But notice that this is nothing but the Laue equations. Hence we identify Δ k = k o u t − k i n = G {\displaystyle \mathbf {\Delta k} =\mathbf {k} _{\mathrm {out} }-\mathbf {k} _{\mathrm {in} }=\mathbf {G} } , which means that the allowed scattering vectors Δ k = k o u t − k i n {\displaystyle \mathbf {\Delta k} =\mathbf {k} _{\mathrm {out} }-\mathbf {k} _{\mathrm {in} }} are those equal to reciprocal lattice vectors G {\displaystyle \mathbf {G} } for a crystal in diffraction; this is the meaning of the Laue equations. This fact is sometimes called the Laue condition . In this sense, diffraction patterns are a way to experimentally measure the reciprocal lattice for a crystal lattice.
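The correspondence can be illustrated numerically. The following Python sketch is an illustration only: the primitive vectors and Miller indices are chosen arbitrarily (they come from no cited source), and the reciprocal basis is built with the physical 2 π {\displaystyle 2\pi } convention; it then verifies that Δ k = G {\displaystyle \mathbf {\Delta k} =\mathbf {G} } satisfies the three Laue equations.

import numpy as np

# Arbitrary (hypothetical) primitive vectors of a crystal lattice.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.2, 0.0])
c = np.array([0.3, 0.0, 1.5])

V = np.dot(a, np.cross(b, c))        # unit-cell volume
A = 2 * np.pi * np.cross(b, c) / V   # reciprocal primitive vectors,
B = 2 * np.pi * np.cross(c, a) / V   # physics convention (factor 2*pi)
C = 2 * np.pi * np.cross(a, b) / V

h, k, l = 1, 2, -1                   # arbitrary Miller indices
G = h * A + k * B + l * C            # a reciprocal lattice vector

dk = G                               # take the scattering vector equal to G
# The three Laue equations then hold:
print(np.allclose([dk @ a, dk @ b, dk @ c],
                  [2 * np.pi * h, 2 * np.pi * k, 2 * np.pi * l]))  # True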
The Laue condition can be rewritten as the following. [ 2 ]
Applying the elastic scattering condition | k o u t | 2 = | k i n | 2 {\displaystyle |\mathbf {k} _{\mathrm {out} }|^{2}=|\mathbf {k} _{\mathrm {in} }|^{2}} (in other words, the incoming and diffracted waves are at the same (temporal) frequency; equivalently, the energy per photon does not change) to the above equation, we obtain
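2 k i n ⋅ G + | G | 2 = 0 and 2 k o u t ⋅ G = | G | 2 . {\displaystyle 2\mathbf {k} _{\mathrm {in} }\cdot \mathbf {G} +|\mathbf {G} |^{2}=0\qquad {\text{and}}\qquad 2\mathbf {k} _{\mathrm {out} }\cdot \mathbf {G} =|\mathbf {G} |^{2}.}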
The second equation is obtained from the first equation by using k o u t − k i n = G {\displaystyle \mathbf {k} _{\mathrm {out} }-\mathbf {k} _{\mathrm {in} }=\mathbf {G} } .
The result 2 k o u t ⋅ G = | G | 2 {\displaystyle 2\mathbf {k} _{\mathrm {out} }\cdot \mathbf {G} =|\mathbf {G} |^{2}} (also 2 k in ⋅ ( − G ) = | G | 2 {\displaystyle 2{{\mathbf {k} }_{\text{in}}}\cdot (-\mathbf {G} )=|\mathbf {G} {{|}^{2}}} ) is an equation for a plane (the set of all points indicated by k o u t {\displaystyle \mathbf {k} _{\mathrm {out} }} satisfying the equation), since its equivalent form G ⋅ ( 2 k out − G ) = 0 {\displaystyle \mathbf {G} \cdot (2{{\mathbf {k} }_{\text{out}}}-\mathbf {G} )=0} is a plane equation in geometry. Another equivalent equation, which may be easier to understand, is k out ⋅ G ^ = 1 2 | G | {\displaystyle {{\mathbf {k} }_{\text{out}}}\cdot {\widehat {\mathbf {G} }}={\frac {1}{2}}\left|\mathbf {G} \right|} (also ( − k in ) ⋅ G ^ = 1 2 | G | {\displaystyle (-{{\mathbf {k} }_{\text{in}}})\cdot {\widehat {\mathbf {G} }}={\frac {1}{2}}\left|\mathbf {G} \right|} ). This indicates the plane that is perpendicular to the straight line between the reciprocal lattice origin G = 0 {\displaystyle \mathbf {G} =0} and G {\displaystyle \mathbf {G} } and located at the middle of that line. Such a plane is called a Bragg plane. [ 3 ] This plane can be understood from the condition G = k o u t − k i n {\displaystyle \mathbf {G} =\mathbf {k} _{\mathrm {out} }-\mathbf {k} _{\mathrm {in} }} required for scattering to occur (the Laue condition, equivalent to the Laue equations), together with the elastic scattering assumption | k o u t | 2 = | k i n | 2 {\displaystyle |\mathbf {k} _{\mathrm {out} }|^{2}=|\mathbf {k} _{\mathrm {in} }|^{2}} , so that G {\displaystyle \mathbf {G} } , k o u t {\displaystyle \mathbf {k} _{\mathrm {out} }} , and − k i n {\displaystyle -\mathbf {k} _{\mathrm {in} }} form a rhombus . Each G {\displaystyle \mathbf {G} } is by definition the wavevector of a plane wave in the Fourier series of a spatial function whose periodicity follows the crystal lattice (e.g., the function representing the electronic density of the crystal); the wavefronts of each such plane wave are perpendicular to its wavevector G {\displaystyle \mathbf {G} } and coincident with parallel crystal lattice planes. This means that X-rays are seemingly "reflected" off the parallel crystal lattice planes perpendicular to G {\displaystyle \mathbf {G} } at the same angle θ {\displaystyle \theta } as their angle of approach to the crystal with respect to the lattice planes; in elastic light (typically X-ray) scattering by a crystal, the parallel lattice planes perpendicular to a reciprocal lattice vector G {\displaystyle \mathbf {G} } act as parallel mirrors for the light, and the incoming (to the crystal) and outgoing (from the crystal, by scattering) wavevectors, together with G {\displaystyle \mathbf {G} } , form a rhombus.
Since the angle between k o u t {\displaystyle \mathbf {k} _{\mathrm {out} }} and G {\displaystyle \mathbf {G} } is π / 2 − θ {\displaystyle \pi /2-\theta } , (Due to the mirror-like scattering, the angle between k i n {\displaystyle \mathbf {k} _{\mathrm {in} }} and G {\displaystyle \mathbf {G} } is also π / 2 − θ {\displaystyle \pi /2-\theta } .) k o u t ⋅ G = | k o u t | | G | sin θ {\displaystyle \mathbf {k} _{\mathrm {out} }\cdot \mathbf {G} =|\mathbf {k} _{\mathrm {out} }||\mathbf {G} |\sin \theta } . Recall, | k o u t | = 2 π / λ {\displaystyle |\mathbf {k} _{\mathrm {out} }|=2\pi /\lambda } with λ {\displaystyle \lambda } as the light (typically X-ray) wavelength, and | G | = 2 π d n {\displaystyle |\mathbf {G} |={\frac {2\pi }{d}}n} with d {\displaystyle d} as the distance between adjacent parallel crystal lattice planes and n {\displaystyle n} as an integer. With these, we now derive Bragg's law that is equivalent to the Laue equations (also called the Laue condition):
2 k o u t ⋅ G = | G | 2 2 | k o u t | | G | sin θ = | G | 2 2 ( 2 π / λ ) ( 2 π n / d ) sin θ = ( 2 π n / d ) 2 2 d sin θ = n λ . {\displaystyle {\begin{aligned}2\mathbf {k} _{\mathrm {out} }\cdot \mathbf {G} =|\mathbf {G} |^{2}\\2|\mathbf {k} _{\mathrm {out} }||\mathbf {G} |\sin \theta =|\mathbf {G} |^{2}\\2(2\pi /\lambda )(2\pi n/d)\sin \theta =(2\pi n/d)^{2}\\2d\sin \theta =n\lambda .\end{aligned}}} | https://en.wikipedia.org/wiki/Laue_equations |
In condensed matter physics , the Laughlin wavefunction [ 1 ] [ 2 ] is an ansatz , proposed by Robert Laughlin for the ground state of a two-dimensional electron gas placed in a uniform background magnetic field in the presence of a uniform jellium background when the filling factor of the lowest Landau level is ν = 1 / n {\displaystyle \nu =1/n} where n {\displaystyle n} is an odd positive integer. It was constructed to explain the observation of the ν = 1 / 3 {\displaystyle \nu =1/3} fractional quantum Hall effect (FQHE), and predicted the existence of additional ν = 1 / n {\displaystyle \nu =1/n} states as well as quasiparticle excitations with fractional electric charge e / n {\displaystyle e/n} , both of which were later experimentally observed. Laughlin received one third of the Nobel Prize in Physics in 1998 for this discovery.
If we ignore the jellium and mutual Coulomb repulsion between the electrons as a zeroth-order approximation, we have an infinitely degenerate lowest Landau level (LLL), and with a filling factor of 1/ n we would expect all of the electrons to lie in the LLL. Turning on the interactions, we can make the approximation that all of the electrons lie in the LLL. If ψ 0 {\displaystyle \psi _{0}} is the single-particle wavefunction of the LLL state with the lowest orbital angular momentum , then the Laughlin ansatz for the multiparticle wavefunction is
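Ψ ( z 1 , … , z N ) = D ∏ i < j ( z i − z j ) n exp ( − ∑ k = 1 N | z k | 2 ) , {\displaystyle \Psi (z_{1},\ldots ,z_{N})=D\prod _{i<j}(z_{i}-z_{j})^{n}\exp \left(-\sum _{k=1}^{N}|z_{k}|^{2}\right),} with D {\displaystyle D} a normalization constant,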
where position is denoted by
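z = x + i y 2 l B {\displaystyle z={\frac {x+iy}{2l_{B}}}}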
in ( Gaussian units )
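l B = ℏ c e B , {\displaystyle l_{B}={\sqrt {\frac {\hbar c}{eB}}},}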
and x {\displaystyle x} and y {\displaystyle y} are coordinates in the x–y plane. Here ℏ {\displaystyle \hbar } is the reduced Planck constant , e {\displaystyle e} is the electron charge , N {\displaystyle N} is the total number of particles, and B {\displaystyle B} is the magnetic field , which is perpendicular to the xy plane. The subscripts on z identify the particle. In order for the wavefunction to describe fermions , n must be an odd integer. This forces the wavefunction to be antisymmetric under particle interchange. The angular momentum for this state is n ℏ {\displaystyle n\hbar } .
Consider n = 3 {\displaystyle n=3} above: the resulting Ψ L ( z 1 , z 2 , z 3 , … , z N ) ∝ Π i < j ( z i − z j ) 3 {\displaystyle \Psi _{L}(z_{1},z_{2},z_{3},\ldots ,z_{N})\propto \Pi _{i<j}(z_{i}-z_{j})^{3}} is a trial wavefunction; it is not exact, but qualitatively it reproduces many features of the exact solution, and quantitatively it has very high overlaps with the exact ground state for small systems. Assuming Coulomb repulsion between any two electrons, that ground state Ψ E D {\displaystyle \Psi _{ED}} can be determined using exact diagonalisation, [ 3 ] and the overlaps have been calculated to be close to one. Moreover, with a short-range interaction (Haldane pseudopotentials for m > 3 {\displaystyle m>3} set to zero), the Laughlin wavefunction becomes exact, [ 4 ] i.e. ⟨ Ψ E D | Ψ L ⟩ = 1 {\displaystyle \langle \Psi _{ED}|\Psi _{L}\rangle =1} .
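As a purely illustrative numerical sketch (the sample positions, random seed, and use of dimensionless coordinates are arbitrary choices, not taken from the cited work), the unnormalized n = 3 {\displaystyle n=3} trial state can be evaluated directly in Python:

import numpy as np

rng = np.random.default_rng(0)

def laughlin(z, n=3):
    # Unnormalized Laughlin amplitude at complex positions z (dimensionless).
    diffs = z[:, None] - z[None, :]
    iu = np.triu_indices(len(z), k=1)
    jastrow = np.prod(diffs[iu] ** n)           # prod_{i<j} (z_i - z_j)^n
    gaussian = np.exp(-np.sum(np.abs(z) ** 2))  # exp(-sum_k |z_k|^2)
    return jastrow * gaussian

z = rng.normal(size=4) + 1j * rng.normal(size=4)  # four sample positions
print(laughlin(z))            # complex amplitude; vanishes when two z_i coincide
print(abs(laughlin(z)) ** 2)  # relative (unnormalized) probability density

Note that overlaps with exact-diagonalization ground states are computed in a finite lowest-Landau-level basis rather than by such pointwise evaluation.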
The Laughlin wavefunction is the multiparticle wavefunction for quasiparticles . The expectation value of the interaction energy for a pair of quasiparticles is
where the screened potential is (see Static forces and virtual-particle exchange § Coulomb potential between two current loops embedded in a magnetic field )
where M {\displaystyle M} is a confluent hypergeometric function and J 0 {\displaystyle {\mathcal {J}}_{0}} is a Bessel function of the first kind. Here, r 12 {\displaystyle r_{12}} is the distance between the centers of two current loops, e {\displaystyle e} is the magnitude of the electron charge , r B = 2 l B {\displaystyle r_{B}={\sqrt {2}}{\mathit {l}}_{B}} is the quantum version of the Larmor radius , and L B {\displaystyle L_{B}} is the thickness of the electron gas in the direction of the magnetic field. The angular momenta of the two individual current loops are l ℏ {\displaystyle {\mathit {l}}\hbar } and l ′ ℏ {\displaystyle {\mathit {l}}^{\prime }\hbar } where l + l ′ = n {\displaystyle {\mathit {l}}+{\mathit {l}}^{\prime }=n} . The inverse screening length is given by ( Gaussian units )
where ω c {\displaystyle \omega _{c}} is the cyclotron frequency , and A {\displaystyle A} is the area of the electron gas in the xy plane.
The interaction energy evaluates to:
To obtain this result we have made the change of integration variables
and
and noted (see Common integrals in quantum field theory )
The interaction energy has minima for (Figure 1)
and
For these values of the ratio of angular momenta, the energy is plotted in Figure 2 as a function of n {\displaystyle n} . | https://en.wikipedia.org/wiki/Laughlin_wavefunction |
A launching gantry (also called bridge building crane, and bridge-building machine) is a special-purpose mobile gantry crane used in bridge construction, specifically segmental bridges that use precast box girder bridge segments or precast girders in highway and high-speed rail bridge construction projects. The launching gantry is used to lift and support bridge segments or girders as they are placed while being supported by the bridge piers instead of the ground.
While superficially similar, launching gantry machines should not be confused with movable scaffolding systems , which also are used in segmental bridge construction. Both feature long girders spanning multiple bridge spans which move with the work, but launching gantry machines are used to lift and support precast bridge segments and bridge girders, while movable scaffolding systems are used for cast-in-place construction of bridge segments.
Typically, precast segmental bridges and precast girders are placed using ground-based cranes to lift each segment or girder. However, ground access to the spans may be challenged by the presence of existing infrastructure or bodies of water, or the height to which the segments must be raised can exceed the reach of ground-based cranes. A launching gantry can be used to solve these issues. [ 1 ] : 38
The most visible features of a launching gantry are the twin parallel girders, [ 1 ] : 38 which can either be above (upper-beam) or below (lower-beam or underslung) the bridge deck. [ 1 ] : 40 However, a single beam can also be used, typically in upper-beam configuration. [ 1 ] : 41 The launching gantry machine usually is sized to the construction project, with the length of the twin main girders approximately 2.3 times the distance between spans. This length enables the launching gantry to span the gap between two adjacent bridge piers while providing allowances for the distance required for launching to the next span and flexibility of movement to accommodate curved paths between piers. [ 1 ] : 38, 40 In some cases, hinges have been inserted into the gantry girders to allow tighter curves. [ 2 ] The launching gantry girders are supported at each pier by braced frames which have a limited range of movement to facilitate placement of bridge segments or bridge girders; the launching gantry does not generally contact the bridge deck. [ 1 ] : 38
Two gantry trolleys can run the full length of the launching gantry girders. Each trolley is equipped with two winches: a main winch to suspend the load, and a translation winch to move the trolley along the girders. [ 1 ] : 38–39 When bridge segments (or bridge girders) are delivered at the ground level, the launching gantry is used to pick them up and raise them to deck or pier height. If the segments (or girders) are delivered instead at the bridge deck level, the launching gantry moves back to allow the forward trolley to pick up the front end of the next segment (or girder), while the back end of the segment (or girder) is supported by the transportation vehicle; as the forward trolley moves forward, the rear trolley takes over supporting the back end from the vehicle. [ 1 ] : 39
Bridge segments (or bridge girders) are set in place by the launching gantry until the span between adjacent piers is completed. For segmental bridges, typically a span-by-span or balanced-cantilever approach is adopted to place segments. To free up the gantry trolley(s), temporary hangers are used to support each segment after it has been placed. In the span-by-span approach, all the segments for a span are placed before bridge tendons are tensioned; in this fashion, work progresses from one pier towards an adjacent pier. In the balanced-cantilever approach, segments are placed simultaneously on each side and work progresses from a central pier towards the two nearest piers instead. In either case, the launching gantry girders and hangers essentially serve as falsework prior to tensioning. [ 1 ] : 39–40
Once the bridge span between adjacent piers is completed, the winches on the trolleys are used to lift the gantry girders and "launch" them ahead to the next span. The process of lifting and placing bridge segments (or girders) followed by launching the gantry girders ahead is repeated until the bridge is complete. [ 3 ] [ 1 ] : 39
An example of a large launching gantry is the SLJ900/32 designed in China by the Shijiazhuang Railway Design Institute and manufactured by the Beijing Wowjoint Machinery Company. This launching gantry is 91 metres (299 ft) long, 7 metres (23 ft) wide, and weighs 580 tonnes (640 short tons). [ 4 ] When driving, the machine is supported by 64 wheels, in four sections of 16 wheels each (forming two trucks, one at each end). When launching, the forward end of the machine is supported (on sliding rails) by a strut lowered onto a bridge support column, while the truck for that end hangs off the gantry backbone with no support from beneath. Once the gantry straddles the open span, the bridge segment is lowered onto the bridge support piers, and the process reverses to retract the launching gantry. The SLJ900 moves at 8 km/h (5 mph) unloaded, and 5 km/h (3 mph) carrying a bridge segment. [ 3 ] | https://en.wikipedia.org/wiki/Launching_gantry |
Laundroid was a laundry-folding machine and home robot , designed to automatically wash , dry , iron , sort and fold clothes into a dedicated closet . It was dubbed the world's first laundry-folding robot , [ 1 ] and was planned to go on sale first in Japan and subsequently, in limited numbers, in the United States. The release date was set for 2017, with pre-orders starting in March 2017.
Its image-recognition system and robotic arms took 3 to 10 minutes to pick and fold each item, or overnight for a load of laundry. [ citation needed ]
The Laundroid was first introduced and demonstrated at the 2015 CEATEC consumer electronics show in Tokyo , Japan. It was jointly developed by Daiwa House , Panasonic , and Seven Dreamers.
In November 2016, Seven Dreamers announced it had secured an extra $60 million in Series B funding led by Panasonic Corp., Daiwa House Industry Co., and SBI Investment Co. [ 2 ]
The first machines would only be able to fold the clothes for the closet , but the final product – full wash, dry and fold system – was planned to be released in 2019. [ 3 ] [ 4 ] [ 5 ]
On April 23, 2019, Seven Dreamers announced bankruptcy. [ 6 ] The company had $20 million in debt owed to 200 creditors, according to the credit research agency Teikoku Databank.
| https://en.wikipedia.org/wiki/Laundroid
Wastewater comes out of the laundry process with additional energy (heat), lint, soil, dyes, finishing agents, and other chemicals from detergents. [ 1 ] Some laundry wastewater goes directly into the environment due to flaws in water infrastructure . The majority goes to sewage treatment plants before flowing into the environment. Some chemicals remain in the water after treatment and may contaminate the water system; some researchers have argued that they can be toxic to wildlife or can lead to eutrophication .
As of 2023, the United States has 2,538 industrial laundry facilities, [ 2 ] which may discharge an average of 400 m 3 of wastewater every day. [ 3 ] Annually, about 5.11 km 3 of laundry wastewater is produced.
Several parameters are used in the evaluation of laundry wastewater: temperature, pH value, suspended substances, Cl 2 , sediment substances, total nitrogen, total phosphorus , ammonia nitrogen, chemical oxygen demand (COD), biochemical oxygen demand (BOD 5 ), and anionic surfactants. [ 4 ]
Common detergent ingredients include surfactants , builders , bleach-active compounds and auxiliary agents. Surfactants can be classified into anionic, cationic and nonionic surfactants; the most widely used surfactant, linear alkylbenzene sulfonate (LAS), is an anionic surfactant. Among builders, sodium triphosphate, zeolite A, and sodium nitrilotriacetate (NTA) are the most important substances. Bleach-active compounds are usually sodium perborate and sodium percarbonate . Enzymes and fluorescent whitening agents are added to detergents as auxiliary agents. [ 1 ]
Surfactants are surface-active agents: they have both hydrophilic and lipophilic properties and are widely used in various washing processes. Through their lipophilic tails, surfactants are biologically active. Anionic surfactants can bind to bioactive macromolecules such as enzymes , DNA, and peptides, causing changes in surface charge and in the folding of the polypeptide chain (protein structure). Cationic surfactants can bind to the inner membrane of bacteria and thereby disorganize the bacteria through their long alkyl chain. Nonionic surfactants are able to bind to both proteins and phospholipid membranes, leading to leakage of low-molecular-mass compounds by increasing the permeability of membranes and vesicles. This may result in serious damage to cells or even cell death. [ 5 ]
Linear alkylbenzene sulfonate (LAS), with the formula C 12 H 25 C 6 H 4 SO 3 Na and also known as sodium dodecylbenzene sulfonate , is the most widely used anionic surfactant in laundry detergent because its ready biodegradability gives it a comparatively small environmental impact.
A complete biodegradation under aerobic conditions consists of two steps: primary biodegradation and ultimate biodegradation. The first step begins with omega-oxidation at the terminal carbon of the alkyl chain (starting from one or both ends), followed by beta-oxidation. The residue after this first step is sulfophenyl (di)carboxylates (SP(d)Cs), large molecules that are degraded further in the second step. The second step occurs only when the required bacteria are present: the benzene ring is cleaved and the mono- and dicarboxylic sulfophenyl acids are further desulfonated. After the two-step biodegradation, LAS is degraded into carbon dioxide, water, inorganic salts and residual biomass. Because specific bacteria and oxygen are required both for the omega-oxidation of the alkyl chain and for the ring-cleaving process, this biodegradation can only happen under aerobic conditions . [ 6 ] Under anaerobic conditions in the treatment process, LAS shows no change. Researchers have also shown that the biodegradation process is restricted at concentrations of 20–40 mg/L and inhibited at higher concentrations, which leads to incomplete biodegradation of LAS in sewage treatment plants. [ 7 ]
Builders in detergents are water softeners: they remove calcium and magnesium ions by complexation or precipitation in hard water , which contains high levels of these ions.
Sodium triphosphate , with the formula Na 5 P 3 O 10 , is a widely used builder in laundry detergents and can lead to eutrophication caused by phosphorus (P). P is needed for energy transfer and for the formation of DNA, RNA and many other intermediary metabolites. Only P in orthophosphate can be assimilated by autotrophs ; other P compounds like sodium triphosphate can be chemically or enzymatically hydrolyzed to orthophosphate. [ 8 ] The mechanism is shown below. [ 9 ]
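The overall stoichiometry of complete hydrolysis to orthophosphate (a standard balance, given here as an illustration) is P 3 O 10 5 − + 2 H 2 O ⟶ 3 PO 4 3 − + 4 H + . {\displaystyle \mathrm {P_{3}O_{10}^{5-}+2\,H_{2}O\longrightarrow 3\,PO_{4}^{3-}+4\,H^{+}} .}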
Excessive phosphorus can drive excessive production of autotrophs such as algae and cyanobacteria, leading to eutrophication via an inorganic-nutrient pathway. Nutrient enrichment in lakes and reservoirs results in blooms of microscopic floating plants and algae and in the formation of dense mats of larger floating plants, which produce oxygen by photosynthesis. When these organisms die and sink to the bottom, the bacteria thriving on their decomposition consume oxygen. With the depletion of oxygen, fish die and anaerobic bacteria produce methane , hydrogen sulfide and ammonia , which can destroy the ecosystem. | https://en.wikipedia.org/wiki/Laundry_wastewater
Laura Gagliardi (born 6 April 1968) is an Italian theoretical and computational chemist and the Richard and Kathy Leventhal Professor of Chemistry and Molecular Engineering at the University of Chicago . She is known for her work on the development of electronic structure methods and their use for understanding complex chemical systems.
Gagliardi earned her Master of Science degree in chemistry at the University of Bologna in 1992, receiving the 'Toso Montanari' award as the chemistry graduate with the highest marks. She earned her PhD at the same university in 1997. She was a postdoctoral research associate at the University of Cambridge from 1998 to 1999.
Gagliardi became an assistant professor at the University of Palermo in 2002. In 2005, she became associate professor at the University of Geneva in Switzerland, and in 2009 she joined the University of Minnesota as a professor of chemistry. She was the director of the Nanoporous Materials Genome Center from 2012 to 2014 and of the Inorganometallic Catalyst Design Center from 2014 to 2022. She was also the director of the Chemical Theory Center from 2011 to 2020 at the University of Minnesota. She was appointed as Distinguished McKnight University Professor in 2014 and awarded a McKnight Presidential Endowed Chair in 2018. [ 1 ] In 2020, she joined the University of Chicago as the Richard and Kathy Leventhal Professor in chemistry and molecular engineering. [ 2 ] Since 2022 she has been serving as Director of the Catalyst Design Center for Decarbonization. [ 3 ]
She currently serves as Editor-in-Chief for the Journal of Chemical Theory and Computation , and has served as an Associate Editor for the Journal of the American Chemical Society (2021), the Journal of Chemical Theory and Computation (2016-2020), and is a member of the Editorial Advisory Board of the following journals: Journal of Catalysis (2018–present), Chemical Reviews (2015-present), ACS Central Science (2014–present), The Journal of the American Chemical Society (2013-2018), Inorganic Chemistry (2014-2016), Theoretical Chemistry Accounts (2009–present), Journal of Chemical Theory and Computation (2012-2016), and the Journal of Physical Chemistry (2011-2016).
Gagliardi is married to Christopher J. Cramer ; the couple has three children from a prior marriage. [ 19 ] | https://en.wikipedia.org/wiki/Laura_Gagliardi |
Laura Frances Robinson (born November 1976) is a British scientist who is Professor of Geochemistry at the University of Bristol . She makes use of geochemistry to study the processes that govern the climate. In particular, Robinson studies radioactive elements, as these can be analysed in geological materials. She was awarded the 2010 President's Award of the Geological Society of London . [ 2 ]
Robinson was an undergraduate student at the University of Cambridge , where she studied natural sciences. She moved to the University of Oxford for her graduate studies, where she investigated Pleistocene climate chronology. [ 3 ] After completing her doctorate, Robinson moved to California , [ 4 ] where she was appointed a postdoctoral fellow at the California Institute of Technology . At Caltech, she worked alongside Jess Adkins on deep-sea corals. The research took her on a cruise in the North Atlantic Ocean , where she journeyed in a submarine to undersea mountains. On this trip she collected fossils from the sea floor, and she studied 16,000-year-old coral fossils from the Southern Ocean . [ 4 ] This experience inspired her to explore how the Atlantic Ocean changed during climate transitions. [ 4 ] She moved to the Woods Hole Oceanographic Institution , where she was made Associate Scientist.
Robinson was awarded the 2010 Geological Society of London President's Award for her contributions to geosciences. [ 5 ] In 2011 Robinson moved back to the United Kingdom , where she was appointed to the faculty of the University of Bristol . [ 6 ] She was awarded a European Research Council Starting Grant studying changes in chemistry and circulation of the Atlantic Ocean . [ 7 ] She makes use of an Agassiz Trawl to collect samples from the floor of the ocean, with a particular focus on deep-sea corals. [ 8 ] [ 9 ] Robinson was involved with a British Antarctic Survey mission to the South Orkney Islands . [ 10 ] The mission took place on the RRS James Clark Ross and investigated the biodiversity in and outside of the South Orkney Islands . [ 11 ] [ 8 ] For this work she was awarded the Antarctic Service Medal. [ 6 ]
In 2016 she delivered a TED Talk on the secrets she discovers on the ocean floor. [ 12 ] | https://en.wikipedia.org/wiki/Laura_Robinson_(scientist)
Lauren Blakely Hitchcock (March 18, 1900 – October 15, 1972) was a chemical engineer and early opponent of air pollution. [ 1 ] [ 2 ]
Hitchcock was born in Paris to Frank Lauren Hitchcock , a mathematician and physicist, and Margaret Johnson Blakely, and was raised in Belmont, Massachusetts . He received his undergraduate (1920), master's (1927), and doctorate degree (1933) from Massachusetts Institute of Technology . He taught at the University of Virginia from 1928 to 1935 and then moved into private industry. [ 1 ]
Hitchcock became president of the Southern California Air Pollution Foundation (APF) [ 3 ] [ 4 ] in 1954, which had been formed to fight smog. Hitchcock identified automobile exhaust and backyard incinerators as the cause and advised that significant steps, comparable to wartime efforts, would be needed to fight the problem in a meaningful way. [ 1 ] In 1963, Hitchcock was appointed to the faculty at the University at Buffalo , where his work papers are now archived. | https://en.wikipedia.org/wiki/Lauren_B._Hitchcock
Lauri Vaska (May 7, 1925 – November 15, 2015) was an Estonian -American chemist who has made noteworthy contributions to organometallic chemistry .
Vaska was born in Rakvere , Estonia. [ 1 ] He was educated at the Baltic University in Hamburg , Germany (1946) and subsequently at the University of Göttingen (1946–1949), where he received his Vordiplom (equivalent to the American B.S. degree). He emigrated to the United States in 1952 and pursued his Ph.D. in inorganic chemistry at the University of Texas (1952–1956). [ 1 ] He was a postdoctoral fellow at Northwestern University (1956–1957) where he conducted research on magnetochemistry. In 1957 he took a position as Fellow at the Mellon Institute in Pittsburgh , where he remained until 1964. During that time, the Mellon Institute housed a number of future chemists, including Paul Lauterbur and R. Bruce King . Vaska moved as an associate professor to Clarkson University in Potsdam , New York , where, from 1990 to his death, he was professor emeritus of chemistry. [ 1 ] His brother Vootele Vaska [ et ] was a philosopher. He died in Basking Ridge, New Jersey in 2015, aged 90. [ 1 ] [ 2 ]
Vaska published ca. eighty journal articles on the coordination chemistry of transition metals , homogeneous catalysis , and both organometallic and bioinorganic chemistry . His years at Mellon were especially productive. With J.W. Di Luzio in 1962 he first described the iridium compound which became known as Vaska's complex , trans -IrCl(CO)[P(C 6 H 5 ) 3 ] 2 . [ 3 ] Working with a series of coworkers, he demonstrated that this iridium(I) complex undergoes a variety of reactions with small molecules. For example, it oxidatively adds H 2 to give a dihydride . [ 4 ] He subsequently discovered that his complex reversibly bound O 2 , which was then a startling achievement. He discovered the main reactions of oxidative addition , a process that is central to homogeneous catalysis in organometallic chemistry. He demonstrated a number of important substituent effects on the oxidative addition, such as the greater reactivity of Ir(I) vs. Rh(I) and the stabilization of oxidative adducts by iodide vs. chloride .
Among his awards are the Boris Pregel Award for Research in Chemical Physics ( New York Academy of Sciences ) in 1971 [ 5 ] [ 6 ] and election in 1981 as a Fellow of the American Association for the Advancement of Science for "pioneering work in transition metal organometallic chemistry and synthetic oxygen carriers". | https://en.wikipedia.org/wiki/Lauri_Vaska |
Laurie Ellen Locascio (born November 21, 1961) is an American biomedical engineer, analytical chemist , and president and CEO of the American National Standards Institute (ANSI). She was formerly the under secretary of commerce for standards and technology and the 17th director of National Institute of Standards and Technology from 2022 to 2024. From 2017 to 2021, Locascio was vice president for research of University of Maryland, College Park and University of Maryland, Baltimore .
Locascio was born November 21, 1961, in Cumberland, Maryland . [ 1 ] Her father was a physicist at the Allegany Ballistics Laboratory . He fostered her interest in science. [ 2 ] She attended Bishop Walsh High School . [ 3 ] In 1977, she was awarded an educational development certificate. [ 3 ] Locascio had an early interest in biology and won her school's senior science award. She graduated in 1979. [ 2 ]
Locascio attended James Madison University from 1979 to 1983 where she earned her B.Sc. in chemistry with a minor in biochemistry. [ 1 ] In 1982, Locascio was a research assistant in the department of chemistry at West Virginia University . [ 4 ] She attended the University of Utah from 1983 to 1986 while working as a research assistant in the department of bioengineering. [ 4 ] Locascio completed her M.Sc. in bioengineering in 1986. [ 1 ]
From 1986 to 1999, Locascio was a research biomedical engineer in the molecular spectroscopy and microfluidic methods group in the analytical chemistry division of the National Institute of Standards and Technology (NIST). [ 4 ] She received a certificate of recognition from the United States Department of Commerce in 1987, 1989, and 1990. Locascio was awarded the Department of Commerce Bronze Medal in 1991. [ 5 ] While working at NIST, she was encouraged by her manager Willie E. May and mentor Richard Durst to pursue a doctoral degree. [ 6 ] From 1995 to 1999, Locascio completed a Ph.D. in toxicology at the University of Maryland School of Medicine . [ 1 ] [ 4 ] At the University of Maryland, Katherine S. Squibb and Bruce O. Fowler , the director of the toxicology program, supported Locascio's efforts to attend graduate school while also working at NIST. [ 6 ] Her dissertation was titled Miniaturization of bioassays for analytical toxicology . [ 7 ] Cheng S. Lee was her doctoral advisor [ 8 ] and Mohyee E. Eldefrawi served on her advisory committee. [ 6 ]
Locascio is an interdisciplinary researcher. [ 9 ] She worked at NIST for 31 years, rising from a research biomedical engineer to eventually leading the agency's material measurement laboratory. [ 10 ] Locascio also served as the acting associate director for laboratory programs, the number two position at NIST, providing direction and operational guidance for NIST's lab research programs [ 10 ] across two campuses in Gaithersburg, Maryland , and Boulder, Colorado . [ 11 ] She received the 2017 American Chemical Society Earle B. Barnes Award for Leadership in Chemical Research Management, and the 2017 Washington Academy of Sciences Special Award in Scientific Leadership. [ 10 ] Locascio has published 115 scientific papers and has received 12 patents in the fields of bioengineering and analytical chemistry. [ 10 ] During her time at NIST, she received the Department of Commerce Silver Medal , American Chemical Society Division of Analytical Chemistry Arthur F. Findeis Award, the NIST Safety Award and the NIST Applied Research Award. [ 10 ] Locascio is also a fellow of the American Chemical Society and the American Institute for Medical and Biological Engineering . [ 10 ]
In late 2017, Locascio joined the University of Maryland's faculty. [ 12 ] She was the first person to serve as the vice president for research of both the College Park and Baltimore campuses. [ 13 ] In this role, Locascio oversaw the University of Maryland's research and innovation enterprise at these two campuses, which garner a combined $1.1 billion in external research funding each year. [ 11 ] [ 9 ] Within Locascio's purview was the development of large interdisciplinary research programs, technology commercialization, innovation and economic development efforts, and strategic partnerships with industry, federal, academic, and nonprofit collaborators. [ 11 ] She also served as a professor in the Fischell Department of Bioengineering at the A. James Clark School of Engineering with a secondary appointment in the department of pharmacology in the School of Medicine. [ 10 ] In 2021, Locascio was inducted as a fellow of the National Academy of Inventors . [ 10 ] At the University of Maryland that same year, she was succeeded by interim vice president Amitabh Varshney . [ 14 ]
On July 16, 2021, President Joe Biden nominated Locascio as the under secretary of commerce for standards and technology . [ 11 ] She was confirmed by the Senate on April 7, 2022. [ 15 ] On April 19, 2022, Locascio was sworn in by U.S. secretary of commerce Gina Raimondo . She was the fourth Under Secretary of Commerce for Standards and Technology and 17th director of NIST. [ 10 ] Locascio was the third female head of NIST. [ 9 ] She resigned her governmental positions on December 31, 2024. [ 16 ]
In January 2025, she assumed the role of president and CEO of the American National Standards Institute (ANSI). [ 17 ] | https://en.wikipedia.org/wiki/Laurie_E._Locascio |
A lava filter is a biological filter that uses lava-stone pebbles as support material on which microorganisms can grow in a thin biofilm . [ 1 ] This community of microorganisms, known as the periphyton , breaks down odor components in the air, such as hydrogen sulfide . The biodegradation is carried out by the bacteria themselves. In order for this to work, sufficient oxygen as well as water and nutrients (for cell growth) must be supplied.
Contaminated air enters the system at the bottom of the filter and passes in an upward direction through the filter. Water is supplied through the surface of the biofilter and trickles down over the lava rock to the bottom, where it is collected. Constant water provisioning at the surface prevents dry-out of the active bacteria in the biofilm and ensures a constant pH value in the filter. It also functions to make nutrients available to the bacteria.
Percolating water collected at the filter bottom contains odour components as well as sulfuric acid from the biological oxidation of hydrogen sulfide. Depending on the process design the collected water is recirculated or subjected to further treatment.
At present, two types of systems are used:
These are constructed out of two layers of lava pebbles and a top layer of nutrient-free soil (only at the plant roots). [ 3 ] On top, water-purifying plants (such as Iris pseudacorus and Sparganium erectum ) are placed. Usually, an area of lavastone around 1/4 the size of the water mass is required to purify the water and, just as with slow sand filters , a series of herringbone drains is placed (with lava filters these are placed at the bottom layer). [ citation needed ]
The water-purifying plants used with constantly submerged, planted lava filters (e.g. treatment ponds, self-purifying irrigation reservoirs, ...) include a wide variety of plants, depending on the local climate and geographical location. Plants are usually chosen that are indigenous to the location, both for environmental reasons and for optimum working of the system. In addition to water-purifying (de-nutrifying) plants, plants that supply oxygen and shade are also added in ecologic water catchments , ponds , etc., to allow a complete ecosystem to form. Finally, in addition to plants, locally grown bacteria and non- predatory fish are added to eliminate pests. The bacteria are usually cultured locally by submerging straw in water and allowing bacteria (arriving from the surrounding atmosphere ) to colonise it. The plants used (placed on an area 1/4 of the water mass) are divided into four separate water-depth zones, namely:
Finally, three types of (non-predatory) fish (surface-, bottom- and ground-swimmers) are chosen, so as to ensure that the fish 'get along'. Examples of the three types of fish (for temperate climates) are: | https://en.wikipedia.org/wiki/Lava_filter |
In computer programming jargon , lava flow is an anti-pattern that occurs when computer source code written under sub-optimal conditions is deployed into a production environment and subsequently expanded upon while still in a developmental state. The term derives from the natural occurrence of lava which, once cooled, solidifies into rock that is difficult to remove. Similarly, such code becomes difficult to refactor or replace due to dependencies that arise over time, necessitating the maintenance of backward compatibility with the original, incomplete design. [ 1 ]
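As a minimal illustration of the pattern (all names and logic below are invented for this example, not taken from any real codebase), consider a module in which an abandoned rewrite and a "temporary" cache have hardened into load-bearing structure:

```python
# Hypothetical illustration of the lava flow anti-pattern (names invented).

LEGACY_MODE = True    # flag left over from a prototype; its meaning is long forgotten
_TEMP_CACHE = {}      # a "temporary" cache that other modules now depend on

def process_order(order, use_v2=False):
    """Current entry point; the dead branches are kept for backward compatibility."""
    if use_v2:
        # Unfinished rewrite that shipped by accident; callers pass use_v2=False,
        # but nobody dares delete the parameter in case something still uses it.
        raise NotImplementedError("v2 pipeline was never completed")
    if LEGACY_MODE:
        _TEMP_CACHE[order["id"]] = order  # side effect other code quietly relies on
    return {"id": order["id"], "status": "processed"}
```

Refactoring such code is risky precisely because the solidified pieces (the flag, the cache, the dead parameter) have accumulated external dependencies.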
Lava flow can occur due to a variety of reasons within a software development process:
Unrefined code that becomes part of the software’s infrastructure increases the complexity of the system and the codebase becomes increasingly difficult to understand and maintain. It leads to:
Development teams often experience the impact of lava flow when team members cycle in and out:
Several practices can mitigate the effects of the lava flow anti-pattern:
| https://en.wikipedia.org/wiki/Lava_flow_(programming) |
Laver's theorem , in order theory , states that order embeddability of countable total orders is a well-quasi-ordering . That is, for every infinite sequence of totally-ordered countable sets , there exists an order embedding from an earlier member of the sequence to a later member. This result was previously known as Fraïssé's conjecture , after Roland Fraïssé , who conjectured it in 1948; [ 1 ] Richard Laver proved the conjecture in 1971. More generally, Laver proved the same result for order embeddings of countable unions of scattered orders . [ 2 ] [ 3 ]
In reverse mathematics , the version of the theorem for countable orders is denoted FRA (for Fraïssé) and the version for countable unions of scattered orders is denoted LAV (for Laver). [ 4 ] In terms of the "big five" systems of second-order arithmetic , FRA is known to fall in strength somewhere between the strongest two systems, $\Pi_1^1$-CA$_0$ and ATR$_0$, and to be weaker than $\Pi_1^1$-CA$_0$. However, it remains open whether it is equivalent to ATR$_0$ or strictly between these two systems in strength. [ 5 ] | https://en.wikipedia.org/wiki/Laver's_theorem |
Laves phases are intermetallic phases that have composition AB 2 and are named for Fritz Laves , who first described them. The phases are classified on the basis of geometry alone. While the problem of packing spheres of equal size has been well-studied since Gauss, Laves phases are the result of Laves's investigations into packing spheres of two sizes. Laves phases fall into three Strukturbericht types : cubic MgCu 2 (C15), hexagonal MgZn 2 (C14), and hexagonal MgNi 2 (C36). The latter two classes are unique forms of the hexagonal arrangement, but share the same basic structure. In general, the A atoms are ordered as in diamond, hexagonal diamond, or a related structure, and the B atoms form tetrahedra around the A atoms for the AB 2 structure. [ 1 ]
Laves phases are of particular interest in modern metallurgy research because of their unusual physical and chemical properties. Many prospective or early-stage applications have been proposed, but little practical benefit has yet emerged from the study of Laves phases.
A characteristic feature of Laves phases is their almost perfect electrical conductivity; however, they are not plastically deformable at room temperature.
In each of the three classes of Laves phase, if the two types of atoms were perfect spheres with a size ratio of $\sqrt{3/2} \approx 1.225$, [ 2 ] the structure would be topologically tetrahedrally close-packed. [ 3 ] At this size ratio, the structure has an overall packing volume density of 0.710. [ 4 ] Compounds found in Laves phases typically have an atomic size ratio between 1.05 and 1.67. [ 3 ] Analogues of Laves phases can be formed by the self-assembly of a colloidal dispersion of two sizes of sphere. [ 2 ]
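A quick numerical check of these figures (the metallic radii below are common textbook values, assumed here for illustration):

```python
import math

ideal_ratio = math.sqrt(3 / 2)   # ≈ 1.2247, ideal A:B hard-sphere radius ratio
print(f"ideal ratio = {ideal_ratio:.4f}")

# Metallic radii in angstroms for the prototype MgCu2 (textbook values, assumed).
r_mg, r_cu = 1.60, 1.28
ratio = r_mg / r_cu              # ≈ 1.25
print(f"MgCu2 ratio = {ratio:.3f}, inside 1.05-1.67: {1.05 <= ratio <= 1.67}")
```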
Laves phases are instances of the more general Frank-Kasper phases . | https://en.wikipedia.org/wiki/Laves_phase |
Lavinite ( Polish : Lawinit ) is a mixture of metal particles (usually iron) and sand held together by solidified molten sulfur . Instead of metal particles, magnesite could be used to give a whiter product. The idea was to make a material that looks like marble. [ 1 ]
It was invented c. 1912 by Willy Henker, who in that year opened the factory "Kunststein-Industrie W. Henker & Co" in Berlin , which was in operation until at least 1936. Henker produced decorative items from lavinite such as vases, candlesticks, lamps, chandeliers and rosettes as well as letters and advertising signs. [ 2 ]
Lavinite products were usually black, less often white or colored, enameled or covered with "antique" bronze. Initially, the factory offered items in the Art Nouveau style. Later they introduced lines in antique, oriental and Art Deco styles. [ 2 ]
In 1922, Kunststein-Industrie W. Henker & Co opened a sales office in New York City and lavinite became very popular in the United States. Afterwards, Henker sold the patent for lavinite production to the U.S., France, Austria and Poland. In 1923 the factory "Lavinit. Krupka I Perlicz" opened in Włocławek , Poland, where it operated until 1939. They offered products from Willy Henker's factory catalogue. Over time, the assortment was expanded by items referring to the history of Poland, such as busts of Prince Józef Poniatowski or Adam Mickiewicz . [ 2 ] For a short time, lavinite items were also produced by the Wulkanit factory in Grudziądz . [ 3 ]
Currently, decorative items in lavinite are popular and valued at auctions all around the world. The biggest collection of them, comprising 63 items, is in the Muzeum Ziemi Kujawskiej i Dobrzyńskiej in Włocławek. [ 4 ] | https://en.wikipedia.org/wiki/Lavinite |
A Lavoisier Medal is an award named and given in honor of Antoine Lavoisier , considered by some to be a father of modern chemistry. [ 1 ]
At least three organizations independently give awards for achievement in chemical-related disciplines, each using the name Lavoisier Medal . Lavoisier Medals are awarded by the following organizations:
The French Chemical Society 's Médaille Lavoisier is given for work or actions which have enhanced the perceived value of chemistry in society. [ 1 ]
The ISBC's Lavoisier Medal is awarded to an internationally acknowledged scientist for an outstanding contribution to the development and/or the application of direct calorimetry in biology and medicine. [ 2 ]
Source: ISBC
The DuPont company's Lavoisier Medal for Technical Achievement is presented to DuPont scientists and engineers who have made outstanding contributions to DuPont and their scientific fields throughout their careers. Antoine Lavoisier mentored the founder of the company, E. I. du Pont , more than 200 years ago.
It was awarded 95 times from 1990 to 2013. [ 3 ] [ 4 ] Stephanie Louise Kwolek received the award in 1995. [ 5 ] She was the first female DuPont employee to receive the honor. [ 6 ]
Source (1990–2012): DuPont ( archived copy ); source (2011 onwards): DuPont ( archived copy ). | https://en.wikipedia.org/wiki/Lavoisier_Medal |
A law enforcement warning ( SAME code: LEW ) is a warning issued through the Emergency Alert System (EAS) in the United States to warn the public of criminal events, and sometimes hazardous weather events, that pose a threat to public safety. These include jailbreaks, riots, bomb explosions, and on rare occasions, severe weather events like blizzards. [ 1 ] An authorized law enforcement agency may blockade roads, waterways, or facilities, evacuate or deny access to affected areas, and arrest violators or suspicious persons. [ 2 ] The warning is usually issued by a law enforcement agency and is relayed by the National Weather Service .
| https://en.wikipedia.org/wiki/Law_enforcement_warning |
The Law of Maximum , also known as the Law of the Maximum , is a principle developed by Arthur Wallace which states that the total growth of a crop or plant is determined jointly by about 70 growth factors. Growth will not be greater than the aggregate value of the growth factors. Unless the limiting growth factors are corrected, nutrients, water and other inputs are not fully or judiciously used, resulting in wasted resources. [ 1 ] [ 2 ] [ 3 ]
The factors range from 0 for no growth to 1 for maximum growth. Actual growth is calculated as the product of all the growth factors. For example, if three factors had a value of 0.5, the actual growth would be: 0.5 × 0.5 × 0.5 = 0.125, i.e. 12.5% of the maximum.
If each of the three factors had a value of 0.9, the actual growth would be: 0.9 × 0.9 × 0.9 = 0.729, i.e. about 73% of the maximum.
Hence the need to achieve maximal value for each factor is critical in order to obtain maximal growth.
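A minimal sketch of this multiplicative model (factor values are illustrative only):

```python
def relative_growth(factors):
    """Product of growth factors, each scaled from 0 (no growth) to 1 (maximum)."""
    result = 1.0
    for f in factors:
        result *= f
    return result

print(relative_growth([0.5, 0.5, 0.5]))  # 0.125, i.e. 12.5% of maximum
print(relative_growth([0.9, 0.9, 0.9]))  # ≈ 0.729, i.e. about 73% of maximum
```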
The following demonstrates the Law of the Maximum. For the various crops listed below, one, two or three factors were limiting while all the other factors were 1. When two or three factors were simultaneously limiting, the measured growth was similar to the growth predicted by multiplying together the values measured when each factor was limiting individually.
(Table: measured growth vs. calculated growth for each crop.) | https://en.wikipedia.org/wiki/Law_of_Maximum |
The law of constancy of interfacial angles ( German : Das Gesetz der Winkelkonstanz ; French : Loi de constance des angles ) is an empirical law in the fields of crystallography and mineralogy concerning the shape , or morphology, of crystals. The law states that the angles between adjacent corresponding faces of crystals of a particular substance are always constant despite the different shapes, sizes, and mode of growth of crystals. The law is also named the first law of crystallography or Steno's law .
The International Union of Crystallography (IUCr) gives the following definition: "The law of the constancy of interfacial angles (or 'first law of crystallography') states that the angles between the crystal faces of a given species are constant, whatever the lateral extension of these faces and the origin of the crystal, and are characteristic of that species." [ 1 ] The law is valid at constant temperature and pressure. [ 2 ]
This law is important in identifying different mineral species as small changes in atomic structure can lead to large differences in the angles between crystal faces.
The sum of the interfacial angle ( external angle ) and the dihedral angle ( internal angle ) between two adjacent faces sharing a common edge is π radians (180°).
The law of the constancy of interfacial angles was first observed by the Danish physician Nicolas Steno when studying quartz crystals [ 3 ] [ 4 ] ( De solido intra solidum naturaliter contento , Florence, 1669), [ 5 ] [ 6 ] who noted that, although the crystals differed in appearance from one to another, the angles between corresponding faces were always the same. [ 7 ]
The law was also observed by Domenico Guglielmini ( Riflessioni filosofiche dedotte dalle figure de Sali , Bologna, 1688), [ 8 ] but it was generalized and firmly established by Jean-Baptiste Romé de l'Isle ( Cristallographie , Paris, 1783) [ 9 ] who accurately measured the interfacial angles of a great variety of crystals, using the goniometer designed by Arnould Carangeot and noted that the angles are characteristic of a substance. [ 10 ] [ 11 ] Carangeot was a student of Romé de L’Isle at the time of his invention of the basic crystallographic measuring instrument. [ 12 ] [ 13 ] [ 14 ]
A French crystallographer, René Just Haüy , showed in 1784 [ 15 ] that the known interfacial angles could be accounted for if the crystal were made up of minute building blocks ( molécules intégrantes ) [ 16 ] that correspond approximately to the present-day unit cells .
In the diagram, the green dodecahedron on the left is built from cubical units, with the faces having a Miller index of (210). Unlike the regular dodecahedron on the right, its faces are not regular pentagons, but they are close to regular in appearance. The piling of the cubical units forms the pentagonal dodecahedron of pyritohedral pyrite . The decrement of the layers is in the proportion of 2:1, which leads to a dihedral angle at the top edge pq of 126° 52′ (≈ 126.87°), closely corresponding to the empirical crystal's angle of 127° 56′. The diagram is based on an 1801 drawing by René Just Haüy . [ 17 ] [ 18 ]
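As a cross-check (not part of the original article), the quoted angle can be recovered from the Miller indices of the two faces meeting at the top edge, assuming a cubic cell and the faces (210) and (2 1̄ 0):

```python
import math

def normal_angle_deg(n1, n2):
    """Angle in degrees between two face normals given as Miller index triples."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norms = math.hypot(*n1) * math.hypot(*n2)
    return math.degrees(math.acos(dot / norms))

theta = normal_angle_deg((2, 1, 0), (2, -1, 0))       # ≈ 53.13° between normals
print(f"dihedral angle = {180 - theta:.2f} degrees")  # ≈ 126.87°, as quoted
```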
The phenomenon of the constancy of interfacial angles is important because it is an outward sign of the inherent symmetry and ordered arrangement of atoms, ions or molecules within a crystal structure . The faces of a crystal are parallel to the planes of the crystal lattice , and it is for this reason that the interfacial angles are the same in different crystal specimens. [ 19 ]
The angles between the various faces of a crystal remain unchanged throughout its growth. Crystals grow by addition of material to existing faces, this material being deposited parallel to the already existing surfaces. Consequently, if more material is added to one face than to another, the faces become unalike in size and shape, nevertheless the interfacial angles between them remain the same. [ 20 ]
Crystals generally exhibit anisotropy , that is their properties are dependent on their direction. In particular, crystals cleave in specific directions, namely those parallel to the planes of the lattice structure. [ 21 ] Cleavage preferentially occurs parallel to higher density planes [ 22 ] with low Miller indices . [ 23 ] | https://en.wikipedia.org/wiki/Law_of_constancy_of_interfacial_angles |
The law of continuity is a heuristic principle introduced by Gottfried Leibniz based on earlier work by Nicholas of Cusa and Johannes Kepler . It is the principle that "whatever succeeds for the finite, also succeeds for the infinite". [ 1 ] Kepler used the law of continuity to calculate the area of the circle by representing it as an infinite-sided polygon with infinitesimal sides, and adding the areas of infinitely many triangles with infinitesimal bases. Leibniz used the principle to extend concepts such as arithmetic operations from ordinary numbers to infinitesimals , laying the groundwork for infinitesimal calculus . The transfer principle provides a mathematical implementation of the law of continuity in the context of the hyperreal numbers .
A related law of continuity concerning intersection numbers in geometry was promoted by Jean-Victor Poncelet in his "Traité des propriétés projectives des figures". [ 2 ] [ 3 ]
Leibniz expressed the law in the following terms in 1701: "In any supposed continuous transition, ending in any terminus, it is permissible to institute a general reasoning, in which the final terminus may also be included." [ 4 ]
In a 1702 letter to French mathematician Pierre Varignon subtitled “Justification of the Infinitesimal Calculus by that of Ordinary Algebra," Leibniz adequately summed up the true meaning of his law, stating that "the rules of the finite are found to succeed in the infinite." [ 5 ]
The law of continuity became important to Leibniz's justification and conceptualization of the infinitesimal calculus.
| https://en.wikipedia.org/wiki/Law_of_continuity |
In chemistry , the law of definite proportions , sometimes called Proust's law or the law of constant composition , states that a given chemical compound contains its constituent elements in a fixed ratio (by mass) and does not depend on its source or method of preparation. For example, oxygen makes up about 8 / 9 of the mass of any sample of pure water , while hydrogen makes up the remaining 1 / 9 of the mass: the masses of the two elements in a compound are always in the same ratio. Along with the law of multiple proportions , the law of definite proportions forms the basis of stoichiometry . [ 1 ]
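A quick check of the water figures using standard atomic masses (a sketch, not from the original article):

```python
M_H, M_O = 1.008, 15.999       # standard atomic masses, g/mol

m_water = 2 * M_H + M_O        # molar mass of H2O ≈ 18.015 g/mol
print(f"oxygen fraction   = {M_O / m_water:.4f}")      # ≈ 0.8881, close to 8/9
print(f"hydrogen fraction = {2 * M_H / m_water:.4f}")  # ≈ 0.1119, close to 1/9
```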
The law of definite proportion was given by Joseph Proust in 1797. [ 2 ]
I shall conclude by deducing from these experiments the principle I have established at the commencement of this memoir, viz. that iron like many other metals is subject to the law of nature which presides at every true combination, that is to say, that it unites with two constant proportions of oxygen. In this respect it does not differ from tin, mercury, and lead, and, in a word, almost every known combustible.
At the end of the 18th century, when the concept of a chemical compound had not yet been fully developed, the law was novel. In fact, when first proposed, it was a controversial statement and was opposed by other chemists, most notably Proust's fellow Frenchman Claude Louis Berthollet , who argued that the elements could combine in any proportion. [ 3 ] The existence of this debate demonstrates that, at the time, the distinction between pure chemical compounds and mixtures had not yet been fully developed. [ 4 ]
The law of definite proportions contributed to the atomic theory that John Dalton promoted beginning in 1805, which explained matter as consisting of discrete atoms , that there was one type of atom for each element, and that the compounds were made of combinations of different types of atoms in fixed proportions. [ 5 ]
A related early idea was Prout's hypothesis , formulated by English chemist William Prout , who proposed that the hydrogen atom was the fundamental atomic unit. From this hypothesis was derived the whole number rule , which was the rule of thumb that atomic masses were whole number multiples of the mass of hydrogen. This was later rejected in the 1820s and 30s following more refined measurements of atomic mass , notably by Jöns Jacob Berzelius , which revealed in particular that the atomic mass of chlorine was 35.45, which was incompatible with the hypothesis. Since the 1920s this discrepancy has been explained by the presence of isotopes; the atomic mass of any isotope is very close to satisfying the whole number rule, [ 6 ] with the mass defect caused by differing binding energies being significantly smaller.
Although very useful in the foundation of modern chemistry, the law of definite proportions is not universally true. There exist non-stoichiometric compounds whose elemental composition can vary from sample to sample. Such compounds follow the law of multiple proportion. An example is the iron oxide wüstite , which can contain between 0.83 and 0.95 iron atoms for every oxygen atom, and thus contain anywhere between 23% and 25% oxygen by mass. The ideal formula is FeO, but it is about Fe 0.95 O due to crystallographic vacancies. In general, Proust's measurements were not precise enough to detect such variations.
In addition, the isotopic composition of an element can vary depending on its source, hence its contribution to the mass of even a pure stoichiometric compound may vary. This variation is used in radiometric dating since astronomical , atmospheric , oceanic , crustal and deep Earth processes may concentrate some environmental isotopes preferentially. With the exception of hydrogen and its isotopes, the effect is usually small, but is measurable with modern-day instrumentation.
Many natural polymers vary in composition (for instance DNA , proteins , carbohydrates ) even when "pure". Polymers are generally not considered "pure chemical compounds" except when their molecular weight is uniform (mono-disperse) and their stoichiometry is constant. | https://en.wikipedia.org/wiki/Law_of_definite_proportions |
Wilhelm Ostwald 's dilution law is a relationship proposed in 1888 [ 1 ] between the dissociation constant K d and the degree of dissociation α of a weak electrolyte . The law takes the form [ 2 ]

$K_d = \frac{[\mathrm{A^+}][\mathrm{B^-}]}{[\mathrm{AB}]} = \frac{\alpha^2}{1-\alpha} c_0$

where the square brackets denote concentration and $c_0$ is the total concentration of electrolyte.
Using $\alpha = \Lambda_c / \Lambda_0$, where $\Lambda_c$ is the molar conductivity at concentration c and $\Lambda_0$ is the limiting value of molar conductivity extrapolated to zero concentration or infinite dilution, this results in the following relation:

$K_d = \frac{\Lambda_c^2 \, c_0}{\Lambda_0 (\Lambda_0 - \Lambda_c)}$
Consider a binary electrolyte AB which dissociates reversibly into A + and B − ions. Ostwald noted that the law of mass action can be applied to such systems as dissociating electrolytes. The equilibrium state is represented by the equation:

$\mathrm{AB} \rightleftharpoons \mathrm{A^+} + \mathrm{B^-}$
If α is the fraction of dissociated electrolyte, then $\alpha c_0$ is the concentration of each ionic species. $(1-\alpha)$ must therefore be the fraction of undissociated electrolyte, and $(1-\alpha)c_0$ the concentration of the same. The dissociation constant may therefore be given as

$K_d = \frac{(\alpha c_0)(\alpha c_0)}{(1-\alpha)c_0} = \frac{\alpha^2 c_0}{1-\alpha}$
For very weak electrolytes $\alpha \ll 1$, implying that $(1-\alpha) \approx 1$.
This gives the following results:

$K_d \approx \alpha^2 c_0 \quad\Rightarrow\quad \alpha \approx \sqrt{\frac{K_d}{c_0}}, \qquad [\mathrm{A^+}] = \alpha c_0 \approx \sqrt{K_d c_0}$
Thus, the degree of dissociation of a weak electrolyte is proportional to the inverse square root of the concentration, or the square root of the dilution. The concentration of any one ionic species is given by the root of the product of the dissociation constant and the concentration of the electrolyte.
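As a numerical illustration (the electrolyte and its dissociation constant below are assumed textbook values, not from the article), the weak-electrolyte approximation can be compared with the exact solution of the quadratic:

```python
import math

def alpha_exact(Kd, c0):
    """Positive root of c0*a^2 + Kd*a - Kd = 0, i.e. Kd = a^2*c0/(1-a)."""
    return (-Kd + math.sqrt(Kd * Kd + 4 * Kd * c0)) / (2 * c0)

Kd = 1.8e-5   # acetic acid at 25 °C (textbook value, assumed)
c0 = 0.10     # mol/L
print(f"approx alpha = {math.sqrt(Kd / c0):.4f}")   # ≈ 0.0134
print(f"exact  alpha = {alpha_exact(Kd, c0):.4f}")  # ≈ 0.0133
```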
The Ostwald law of dilution provides a satisfactory description of the concentration dependence of the conductivity of weak electrolytes like CH 3 COOH and NH 4 OH. [ 3 ] [ 4 ] The variation of molar conductivity is essentially due to the incomplete dissociation of weak electrolytes into ions.
For strong electrolytes, however, Lewis and Randall recognized that the law fails badly since the supposed equilibrium constant is actually far from constant. [ 5 ] This is because the dissociation of strong electrolytes into ions is essentially complete below a concentration threshold value. The decrease in molar conductivity as a function of concentration is actually due to attraction between ions of opposite charge as expressed in the Debye-Hückel-Onsager equation and later revisions.
Even for weak electrolytes the equation is not exact. Chemical thermodynamics shows that the true equilibrium constant is a ratio of thermodynamic activities , and that each concentration must be multiplied by an activity coefficient . This correction is important for ionic solutions due to the strong forces between ionic charges. An estimate of their values is given by the Debye–Hückel theory at low concentrations. | https://en.wikipedia.org/wiki/Law_of_dilution |
In chemistry , the law of mass action is the proposition that the rate of a chemical reaction is directly proportional to the product of the activities or concentrations of the reactants . [ 1 ] It explains and predicts behaviors of solutions in dynamic equilibrium . Specifically, it implies that for a chemical reaction mixture that is in equilibrium, the ratio between the concentration of reactants and products is constant. [ 2 ]
Two aspects are involved in the initial formulation of the law: 1) the equilibrium aspect, concerning the composition of a reaction mixture at equilibrium and 2) the kinetic aspect concerning the rate equations for elementary reactions . Both aspects stem from the research performed by Cato M. Guldberg and Peter Waage between 1864 and 1879 in which equilibrium constants were derived by using kinetic data and the rate equation which they had proposed. Guldberg and Waage also recognized that chemical equilibrium is a dynamic process in which rates of reaction for the forward and backward reactions must be equal at chemical equilibrium . In order to derive the expression of the equilibrium constant appealing to kinetics, the expression of the rate equation must be used. The expression of the rate equations was rediscovered independently by Jacobus Henricus van 't Hoff .
The law is a statement about equilibrium and gives an expression for the equilibrium constant , a quantity characterizing chemical equilibrium . In modern chemistry this is derived using equilibrium thermodynamics . It can also be derived with the concept of chemical potential . [ 3 ]
To describe the equilibrium state, chemists generally expressed the composition of a reaction mixture in terms of numerical values relating the amounts of products and reactants. Cato Maximilian Guldberg and Peter Waage , building on Claude Louis Berthollet 's ideas [ 4 ] [ 5 ] about reversible chemical reactions , proposed the law of mass action in 1864. [ 6 ] [ 7 ] [ 8 ] These papers, in Danish, went largely unnoticed, as did the later publication (in French) of 1867 which contained a modified law and the experimental data on which that law was based. [ 9 ] [ 10 ]
In 1877 van 't Hoff independently came to similar conclusions, [ 11 ] [ 12 ] but was unaware of the earlier work, which prompted Guldberg and Waage to give a fuller and further developed account of their work, in German, in 1879. [ 13 ] Van 't Hoff then accepted their priority.
In their first paper, [ 6 ] Guldberg and Waage suggested that in a reaction such as

$\mathrm{A} + \mathrm{B} \rightleftharpoons \mathrm{A'} + \mathrm{B'}$
the "chemical affinity" or "reaction force" between A and B did not just depend on the chemical nature of the reactants, as had previously been supposed, but also depended on the amount of each reactant in a reaction mixture. Thus the law of mass action was first stated as follows:
In this context a substitution reaction was one such as alcohol + acid ⇌ ester + water. Active mass was defined in the 1879 paper as "the amount of substance in the sphere of action". [ 14 ] For species in solution active mass is equal to concentration. For solids, active mass is taken as a constant. $\alpha$, a and b were regarded as empirical constants, to be determined by experiment.
At equilibrium , the chemical force driving the forward reaction must be equal to the chemical force driving the reverse reaction. Writing the initial active masses of A, B, A' and B' as p, q, p' and q' and the dissociated active mass at equilibrium as $\xi$, this equality is represented by

$\alpha (p-\xi)^a (q-\xi)^b = \alpha' (p'+\xi)^{a'} (q'+\xi)^{b'}$
$\xi$ represents the amount of reagents A and B that has been converted into A' and B'. Calculations based on this equation are reported in the second paper. [ 7 ]
The third paper of 1864 [ 8 ] was concerned with the kinetics of the same equilibrium system. Writing the dissociated active mass at some point in time as x, the rate of reaction was given as

$k (p-x)^a (q-x)^b$
Likewise the reverse reaction of A' with B' proceeded at a rate given by

$k' (p'+x)^{a'} (q'+x)^{b'}$
The overall rate of conversion is the difference between these rates, so at equilibrium (when the composition stops changing) the two rates of reaction must be equal. Hence

$k (p-\xi)^a (q-\xi)^b = k' (p'+\xi)^{a'} (q'+\xi)^{b'}$
The rate expressions given in Guldberg and Waage's 1864 paper could not be differentiated, so they were simplified as follows. [ 10 ] The chemical force was assumed to be directly proportional to the product of the active masses of the reactants:

$\text{affinity} = k[\mathrm{A}][\mathrm{B}]$
This is equivalent to setting the exponents a and b of the earlier theory to one. The proportionality constant was called an affinity constant, k. The equilibrium condition for an "ideal" reaction was thus given the simplified form

$k[\mathrm{A}]_{eq}[\mathrm{B}]_{eq} = k'[\mathrm{A'}]_{eq}[\mathrm{B'}]_{eq}$
$[\mathrm{A}]_{eq}$, $[\mathrm{B}]_{eq}$ etc. are the active masses at equilibrium. In terms of the initial amounts of reagents p, q etc. this becomes

$k(p-\xi)(q-\xi) = k'(p'+\xi)(q'+\xi)$
The ratio of the affinity coefficients, k'/k, can be recognized as an equilibrium constant. Turning to the kinetic aspect, it was suggested that the velocity of reaction, v, is proportional to the sum of chemical affinities (forces). In its simplest form this results in the expression

$v = \psi \left( k(p-x)(q-x) - k'(p'+x)(q'+x) \right)$
where $\psi$ is the proportionality constant. Actually, Guldberg and Waage used a more complicated expression which allowed for interaction between A and A', etc. By making certain simplifying approximations to those more complicated expressions, the rate equation could be integrated and hence the equilibrium quantity $\xi$ could be calculated. The extensive calculations in the 1867 paper gave support to the simplified concept, namely, that the rate of a reaction is proportional to the product of the active masses of the reagents involved.
This is an alternative statement of the law of mass action.
In the 1879 paper [ 13 ] the assumption that reaction rate was proportional to the product of concentrations was justified microscopically in terms of the frequency of independent collisions , as had been developed for gas kinetics by Boltzmann in 1872 ( Boltzmann equation ). It was also proposed that the original theory of the equilibrium condition could be generalised to apply to any arbitrary chemical equilibrium:

$\text{forward rate} = k_+ [\mathrm{A}]^\alpha [\mathrm{B}]^\beta \cdots \qquad \text{backward rate} = k_- [\mathrm{S}]^\sigma [\mathrm{T}]^\tau \cdots$
The exponents α, β etc. are explicitly identified for the first time as the stoichiometric coefficients for the reaction.
The affinity constants, $k_+$ and $k_-$, of the 1879 paper can now be recognised as rate constants . The equilibrium constant, K, was derived by setting the rates of forward and backward reactions to be equal. This also meant that the chemical affinities for the forward and backward reactions are equal. The resultant expression

$K = \frac{k_+}{k_-} = \frac{[\mathrm{S}]^\sigma [\mathrm{T}]^\tau \cdots}{[\mathrm{A}]^\alpha [\mathrm{B}]^\beta \cdots}$
is correct [ 2 ] even from the modern perspective, apart from the use of concentrations instead of activities (the concept of chemical activity was developed by Josiah Willard Gibbs , in the 1870s, but was not widely known in Europe until the 1890s). The derivation from the reaction rate expressions is no longer considered to be valid. Nevertheless, Guldberg and Waage were on the right track when they suggested that the driving force for both forward and backward reactions is equal when the mixture is at equilibrium. The term they used for this force was chemical affinity. Today the expression for the equilibrium constant is derived by setting the chemical potential of forward and backward reactions to be equal. The generalisation of the law of mass action, in terms of affinity, to equilibria of arbitrary stoichiometry was a bold and correct conjecture.
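The kinetic route to the equilibrium constant can be checked numerically. The sketch below (the reaction A + B ⇌ C + D and its rate constants are illustrative assumptions) integrates the mass-action rate equations until the composition stops changing:

```python
# Forward rate k_f*[A][B], backward rate k_r*[C][D]; at equilibrium the
# concentration quotient [C][D]/([A][B]) should equal k_f/k_r.
k_f, k_r, dt = 2.0, 0.5, 1e-3
A = B = 1.0
C = D = 0.0
for _ in range(200_000):                 # integrate to t = 200 time units
    rate = k_f * A * B - k_r * C * D     # net forward conversion rate
    A -= rate * dt; B -= rate * dt
    C += rate * dt; D += rate * dt
print(C * D / (A * B), k_f / k_r)        # both ≈ 4.0
```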
The hypothesis that reaction rate is proportional to reactant concentrations is, strictly speaking, only true for elementary reactions (reactions with a single mechanistic step), but the empirical rate expression

$r_f = k_f [\mathrm{A}]^\alpha [\mathrm{B}]^\beta$
is also applicable to second order reactions that may not be concerted reactions. Guldberg and Waage were fortunate in that reactions such as ester formation and hydrolysis, on which they originally based their theory, do indeed follow this rate expression.
In general many reactions occur with the formation of reactive intermediates, and/or through parallel reaction pathways. However, all reactions can be represented as a series of elementary reactions and, if the mechanism is known in detail, the rate equation for each individual step is given by the $r_f$ expression, so that the overall rate equation can be derived from the individual steps. When this is done the equilibrium constant is obtained correctly from the rate equations for forward and backward reaction rates.
In biochemistry, there has been significant interest in the appropriate mathematical model for chemical reactions occurring in the intracellular medium. This is in contrast to the initial work done on chemical kinetics, which was in simplified systems where reactants were in a relatively dilute, pH-buffered, aqueous solution. In more complex environments, where bound particles may be prevented from disassociation by their surroundings, or diffusion is slow or anomalous, the model of mass action does not always describe the behavior of the reaction kinetics accurately. Several attempts have been made to modify the mass action model, but consensus has yet to be reached. Popular modifications replace the rate constants with functions of time and concentration. As an alternative to these mathematical constructs, one school of thought is that the mass action model can be valid in intracellular environments under certain conditions, but with different rates than would be found in a dilute, simple environment [ citation needed ] .
The fact that Guldberg and Waage developed their concepts in steps from 1864 to 1867 and 1879 has resulted in much confusion in the literature as to which equation the law of mass action refers to. It has been a source of some textbook errors. [ 15 ] Thus, today the "law of mass action" sometimes refers to the (correct) equilibrium constant formula, [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] and at other times to the (usually incorrect) $r_f$ rate formula. [ 26 ] [ 27 ]
In a plasma , the ionization of the atoms can be understood as a chemical equilibrium between each ionization state and the next ionization state plus a freed electron:

$\mathrm{A}^{(i+)} \rightleftharpoons \mathrm{A}^{((i+1)+)} + \mathrm{e}^-$
and accordingly a law of mass action arises for each reaction, which in the ideally dilute limit is the Saha ionization equation . [ 28 ]
The law of mass action also has implications in semiconductor physics . Regardless of doping , the product of electron and hole densities is a constant at equilibrium . This constant depends on the thermal energy of the system (i.e. the product of the Boltzmann constant $k_B$ and temperature $T$), as well as the band gap (the energy separation between conduction and valence bands, $E_g \equiv E_C - E_V$) and the effective density of states in the valence ($N_V(T)$) and conduction ($N_C(T)$) bands. When the equilibrium electron ($n_o$) and hole ($p_o$) densities are equal, their density is called the intrinsic carrier density ($n_i$), as this would be the value of $n_o$ and $p_o$ in a perfect crystal. Note that the final product is independent of the Fermi level ($E_F$):

$n_o \, p_o = N_C(T) \, N_V(T) \exp\!\left(-\frac{E_g}{k_B T}\right) = n_i^2$
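A minimal numerical sketch of this relation; the silicon parameters below are common room-temperature textbook approximations, not values from this article:

```python
import math

kT = 0.02585                 # thermal energy at 300 K, eV
N_C, N_V = 2.8e19, 1.04e19   # effective densities of states for Si, cm^-3 (assumed)
E_g = 1.12                   # band gap of Si, eV (assumed)

n_i = math.sqrt(N_C * N_V) * math.exp(-E_g / (2 * kT))
print(f"n_i ≈ {n_i:.1e} cm^-3")   # ≈ 7e9 with these inputs; often quoted as ~1e10

# Doping shifts n_o and p_o individually, but their product stays n_i^2:
n_o = 1e16                   # electron density in a donor-doped sample, cm^-3
p_o = n_i**2 / n_o           # hole density fixed by the mass-action law
```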
Yakov Frenkel represented the diffusion process in condensed matter as an ensemble of elementary jumps and quasichemical interactions of particles and defects. Henry Eyring applied his theory of absolute reaction rates to this quasichemical representation of diffusion. The mass action law for diffusion leads to various nonlinear versions of Fick's law . [ 29 ]
The Lotka–Volterra equations describe dynamics of the predator-prey systems. The rate of predation upon the prey is assumed to be proportional to the rate at which the predators and the prey meet; this rate is evaluated as xy , where x is the number of prey, y is the number of predator. This is a typical example of the law of mass action.
The law of mass action forms the basis of the compartmental model of disease spread in mathematical epidemiology, in which a population of humans, animals or other individuals is divided into categories of susceptible, infected, and recovered (immune). The principle of mass action is at the heart of the transmission term of compartmental models in epidemiology , which provide a useful abstraction of disease dynamics. [ 30 ] The law of mass action formulation of the SIR model corresponds to the following "quasichemical" system of elementary reactions:

$\mathrm{S} + \mathrm{I} \rightarrow 2\mathrm{I}$ (infection), $\qquad \mathrm{I} \rightarrow \mathrm{R}$ (recovery)
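A minimal sketch of these mass-action dynamics (the rate constants beta and gamma are illustrative assumptions):

```python
def sir_step(S, I, R, beta, gamma, dt):
    """One explicit Euler step of the mass-action SIR equations."""
    new_infections = beta * S * I * dt   # mass-action transmission term, like k[A][B]
    recoveries = gamma * I * dt
    return S - new_infections, I + new_infections - recoveries, R + recoveries

S, I, R = 0.99, 0.01, 0.0                # fractions of the population
for _ in range(5000):                    # integrate to t = 50 time units
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, dt=0.01)
print(f"final: S={S:.3f}, I={I:.3f}, R={R:.3f}")
```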
A rich system of law of mass action models was developed in mathematical epidemiology by adding components and elementary reactions.
Individuals in human or animal populations – unlike molecules in an ideal solution – do not mix homogeneously. There are some disease examples in which this non-homogeneity is great enough such that the outputs of the classical SIR model and their simple generalizations like SIS or SEIR, are invalid. For these situations, more sophisticated compartmental models or distributed reaction-diffusion models may be useful. | https://en.wikipedia.org/wiki/Law_of_mass_action |
In chemistry, the law of multiple proportions states that in compounds which contain two particular chemical elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. For instance, the ratio of the hydrogen content in methane (CH 4 ) and ethane (C 2 H 6 ) per measure of carbon is 4:3. This law is also known as Dalton's Law , named after John Dalton , the chemist who first expressed it. The discovery of this pattern led Dalton to develop the modern theory of atoms , as it suggested that the elements combine with each other in multiples of a basic quantity. Along with the law of definite proportions , the law of multiple proportions forms the basis of stoichiometry . [ 1 ]
The law of multiple proportions often does not apply when comparing very large molecules. For example, if one tried to demonstrate it using the hydrocarbons decane (C 10 H 22 ) and undecane (C 11 H 24 ), one would find that 100 grams of carbon could react with 18.46 grams of hydrogen to produce decane or with 18.31 grams of hydrogen to produce undecane, for a ratio of hydrogen masses of 121:120, which is hardly a ratio of "small" whole numbers.
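Those figures can be reproduced from standard atomic masses (a quick sketch, not from the original article):

```python
from fractions import Fraction

M_C, M_H = 12.011, 1.008     # standard atomic masses, g/mol

def h_per_100g_c(n_c, n_h):
    """Grams of hydrogen combined with 100 g of carbon in a hydrocarbon CnHm."""
    return 100 * (n_h * M_H) / (n_c * M_C)

decane, undecane = h_per_100g_c(10, 22), h_per_100g_c(11, 24)
print(f"decane: {decane:.2f} g, undecane: {undecane:.2f} g")  # 18.46 vs 18.31

# The exact atom-count ratio (22/10) : (24/11) reduces to 121:120.
print(Fraction(22 * 11, 10 * 24))   # 121/120
```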
In 1804, Dalton explained his atomic theory to his friend and fellow chemist Thomas Thomson , who published an explanation of Dalton's theory in his book A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" ( ethylene ) and "carburetted hydrogen gas" ( methane ). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms. [ 2 ] In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C 2 H 4 ), and a methane molecule has one carbon atom and four hydrogen atoms (CH 4 ). In this particular case, Dalton was mistaken about the formulas of these compounds, and it wasn't his only mistake. But in other cases, he got their formulas right. The following examples come from Dalton's own books A New System of Chemical Philosophy (in two volumes, 1808 and 1817):
Example 1 — tin oxides: Dalton identified two types of tin oxide . One is a grey powder that Dalton referred to as "the protoxide of tin", which is 88.1% tin and 11.9% oxygen . The other is a white powder which Dalton referred to as "the deutoxide of tin", which is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. These compounds are known today as tin(II) oxide (SnO) and tin(IV) oxide (SnO 2 ). In Dalton's terminology, a "protoxide" is a molecule containing a single oxygen atom, and a "deutoxide" molecule has two. [ 3 ] [ 4 ] Tin oxides are actually crystals; they do not exist in molecular form.
Example 2 — iron oxides: Dalton identified two oxides of iron. There is one type of iron oxide that is a black powder which Dalton referred to as "the protoxide of iron", which is 78.1% iron and 21.9% oxygen. The other iron oxide is a red powder, which Dalton referred to as "the intermediate or red oxide of iron" which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. These compounds are iron(II) oxide (Fe 2 O 2 ) [ a ] and iron(III) oxide (Fe 2 O 3 ). [ 5 ] [ 6 ] Dalton described the "intermediate oxide" as being "2 atoms protoxide and 1 of oxygen", which adds up to two atoms of iron and three of oxygen. That averages to one and a half atoms of oxygen for every iron atom, putting it midway between a "protoxide" and a "deutoxide". [ 7 ] As with tin oxides, iron oxides are crystals.
Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid". [ 8 ] These compounds are known today as nitrous oxide , nitric oxide , and nitrogen dioxide respectively. "Nitrous oxide" is 63.3% nitrogen and 36.7% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 44.05% nitrogen and 55.95% oxygen, which means there are 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 29.5% nitrogen and 70.5% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. The formulas for these compounds are N 2 O, NO, and NO 2 . [ 9 ] [ 10 ]
The earliest definition of Dalton's observation appears in an 1807 chemistry encyclopedia:
...where two bodies combine in different proportions, if the quantity of one of them be assumed as a fixed number, the proportions of the other body that unite to it are in the simplest possible ratio to each other, being produced by multiplying the lowest proportion by a simple integral number as 2, 3, 4, &c. [...] in all cases the simple elements of bodies are disposed to unite atom to atom singly; or if either is in excess, it exceeds by a ratio to be expressed by some simple multiple of the number of its atoms. [ 11 ]
The first known writer to refer to this principle as the "doctrine of multiple proportions" was Jöns Jacob Berzelius in 1813. [ 12 ]
Dalton's atomic theory garnered widespread interest but not universal acceptance shortly after he published it because the law of multiple proportions by itself was not complete proof of the existence of atoms. Over the course of the 19th century, other discoveries in the fields of chemistry and physics would give atomic theory more credence, such that by the end of the 19th century it had found universal acceptance. | https://en.wikipedia.org/wiki/Law_of_multiple_proportions |
The law of rational indices is an empirical law in the field of crystallography concerning crystal structure . The law states that "when referred to three intersecting axes all faces occurring on a crystal can be described by numerical indices which are integers, and that these integers are usually small numbers." [ 2 ] The law is also named the law of rational intercepts [ 3 ] or the second law of crystallography .
The International Union of Crystallography (IUCr) gives the following definition: "The law of rational indices states that the intercepts, OP , OQ , OR , of the natural faces of a crystal form with the unit-cell axes a , b , c are inversely proportional to prime integers, h , k , l . They are called the Miller indices of the face. They are usually small because the corresponding lattice planes are among the densest and have therefore a high interplanar spacing and low indices." [ 4 ]
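As an illustration of this definition, the indices can be computed by clearing the reciprocals of the intercepts to the smallest integers (the helper below is hypothetical and assumes finite integer intercepts):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def miller_indices(p, q, r):
    """Miller indices (h, k, l) from intercepts p, q, r in units of a, b, c."""
    recips = [Fraction(1, x) for x in (p, q, r)]
    lcm = reduce(lambda a, b: a * b // gcd(a, b), [f.denominator for f in recips])
    ints = [int(f * lcm) for f in recips]
    g = reduce(gcd, ints)
    return tuple(i // g for i in ints)

print(miller_indices(2, 3, 6))   # -> (3, 2, 1)
print(miller_indices(1, 1, 1))   # -> (1, 1, 1)
```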
The law of constancy of interfacial angles , first observed by Nicolas Steno , [ 6 ] : 44 [ 7 ] ( De solido intra solidum naturaliter contento , Florence, 1669), [ 8 ] and firmly established by Jean-Baptiste Romé de l'Isle ( Cristallographie , Paris, 1783), [ 9 ] was a precursor to the law of rational indices.
René Just Haüy showed in 1784 [ 10 ] that the known interfacial angles could be accounted for if a crystal were made up of minute building blocks ( molécules intégrantes ), such as cubes, parallelepipeds , or rhombohedra . The 'rise-to-run' ratio of the stepped faces of the crystal was a simple rational number p/q , where p and q are small multiples of units of length (generally different and not more than 6). [ 6 ] : 46 [ 11 ] Haüy's method is named the law of decrements , law of simple rational truncations , or Haüy's law . [ 12 ] : 322 The law of rational indices was not stated in its modern form by Haüy, but it is directly implied by his law of decrements. [ 12 ] : 333
In 1830, Johann Hessel [ 13 ] proved that, as a consequence of the law of rational indices, morphological forms can combine to give exactly 32 kinds of crystal symmetry in Euclidean space , since only two-, three-, four-, and six-fold rotation axes can occur. [ 14 ] [ 15 ] : 796 However, Hessel's work remained practically unknown for over 60 years and, in 1867, Axel Gadolin independently rediscovered his results. [ 16 ]
Miller indices were introduced in 1839 by the British mineralogist William Hallowes Miller , [ 17 ] although a similar system ( Weiss parameters ) had already been used by the German mineralogist Christian Samuel Weiss since 1817. [ 18 ]
In 1866, Auguste Bravais [ 19 ] showed that crystals preferentially cleaved parallel to lattice planes of high density. [ 20 ] This is sometimes referred to as Bravais's law or the law of reticular density and is an equivalent statement to the law of rational indices. [ 12 ] : 333 [ 6 ] : 48
The law of rational indices is implied by the three-dimensional lattice structure of crystals . A crystal structure is periodic, and invariant under translations in three linearly independent directions. [ 22 ]
Quasicrystals do not have translational symmetry, and therefore do not obey the law of rational indices. | https://en.wikipedia.org/wiki/Law_of_rational_indices |
The law of reciprocal proportions , also called law of equivalent proportions or law of permanent ratios , is one of the basic laws of stoichiometry .
It relates the proportions in which elements combine across a number of different elements. It was first formulated by Jeremias Richter in 1791. [ 1 ] A simple statement of the law is: [ 2 ] if element A combines with element B and also with element C, then, should elements B and C combine together, the proportion by weight in which they do so will be simply related to the weights of B and C which separately combine with a constant weight of A.
The acceptance of the law allowed tables of element equivalent weights to be drawn up. These equivalent weights were widely used by chemists in the 19th century.
The other laws of stoichiometry are the law of definite proportions and the law of multiple proportions .
The law of definite proportions refers to the fixed composition of any compound formed between element A and element B. The law of multiple proportions describes the stoichiometric relationship between two or more different compounds formed between element A and element B. The law of reciprocal proportions states that if two different elements combine separately with a fixed mass of a third element, the ratio of the masses in which they do so is either the same as, or a simple multiple of, the ratio of the masses in which they combine with each other.
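As a standard textbook illustration (not from the original article): 1 g of hydrogen combines with about 8 g of oxygen in water, and with about 35.5 g of chlorine in hydrogen chloride. When oxygen and chlorine combine directly, in dichlorine monoxide (Cl 2 O), they do so in the ratio

$m_{\mathrm{Cl}} : m_{\mathrm{O}} = 71 : 16 = 35.5 : 8$

which is the same ratio in which each of them combines with the fixed 1 g of hydrogen, just as the law requires.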
The law of reciprocal proportions was proposed in essence by Richter, [ 1 ] following his determination of neutralisation ratios of metals with acids. In the early 19th century it was investigated by Berzelius , who formulated it as follows: [ 3 ]
Later Jean Stas showed that within experimental error the stoichiometric laws were correct. [ 3 ] | https://en.wikipedia.org/wiki/Law_of_reciprocal_proportions |
The law of specific nerve energies , first proposed by Johannes Peter Müller in 1835, is that the nature of perception is defined by the pathway over which the sensory information is carried. Hence, the origin of the sensation is not important. Therefore, the difference in perception of seeing, hearing, and touch is not caused by differences in the stimuli themselves but by the different nervous structures that these stimuli excite. For example, pressing on the eye elicits sensations of flashes of light because the neurons in the retina send a signal to the occipital lobe . Despite the sensory input's being mechanical, the experience is visual.
Here is Müller's statement of the law, from Handbuch der Physiologie des Menschen für Vorlesungen , 2nd Ed., translated by Edwin Clarke and Charles Donald O'Malley:
As the above quotation shows, Müller's law seems to differ from the modern statement of the law in one key way. Müller attributed the quality of an experience to some specific quality of the energy in the nerves. For example, the visual experience from light shining into the eye, or from a poke in the eye, arises from some special quality of the energy carried by the optic nerve , and the auditory experience from sound coming into the ear, or from electrical stimulation of the cochlea , arises from some different, special quality of the energy carried by the auditory nerve . In 1912, Lord Edgar Douglas Adrian showed that all neurons carry the same energy, electrical energy in the form of action potentials . That means that the quality of an experience depends on the part of the brain to which nerves deliver their action potentials (e.g., light from nerves arriving at the visual cortex and sound from nerves arriving at the auditory cortex ).
In 1945, Roger Sperry showed that it is the location in the brain to which nerves attach that determines experience. He studied amphibians whose optic nerves cross completely, so that the left eye connects to the right side of the brain and the right eye connects to the left side of the brain. He was able to cut the optic nerves and cause them to regrow on the opposite side of the brain so that the left eye now connected to the left side of the brain and the right eye connected to the right side of the brain. He then showed that these animals made the opposite movements from the ones they would have made before the operation. For example, before the operation, the animal would move to the left to get away from a large object approaching from the right. After the operation, the animal would move to the right in response to the same large object approaching from the right. Sperry showed similar results in other animals including mammals ( rats ); this work contributed to his Nobel Prize in 1981. | https://en.wikipedia.org/wiki/Law_of_specific_nerve_energies |
The law of squares is a theorem concerning transmission lines . It states that the current injected into the line by a step in voltage reaches a maximum at a time proportional to the square of the distance down the line. The theorem is due to William Thomson , the future Lord Kelvin. The law had some importance in connection with submarine telegraph cables .
For a step increase in the voltage applied to a transmission line , the law of squares can be stated as follows:

$t_{max} = \tfrac{1}{2} R C x^2$
where $t_{max}$ is the time taken for the current at a distance x from the input to reach its maximum, R is the series resistance per unit length, and C is the shunt capacitance per unit length of the line.
The law of squares is not just limited to step functions . It also applies to an impulse response or a rectangular function which are more relevant to telegraphy . However, the multiplicative factor is different in these cases. For an impulse it is 1/6 rather than 1/2 and for rectangular pulses it is something in between depending on their length. [ 2 ]
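A short numerical sketch of the x² scaling for the step response (the per-kilometre constants below are assumed purely for illustration, not taken from the article):

```python
R = 3.0       # series resistance, ohm per km (assumed)
C = 0.15e-6   # shunt capacitance, farad per km (assumed)

def t_peak_step(x_km):
    """Time (s) at which the current at distance x peaks: t = (1/2) R C x^2."""
    return 0.5 * R * C * x_km**2

for x in (500, 1000, 2000, 4000):
    print(f"{x:5d} km -> {t_peak_step(x):.3f} s")   # doubling x quadruples the delay
```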
The law of squares was proposed by William Thomson (later to become Lord Kelvin) in 1854 at Glasgow University . He had some input from George Gabriel Stokes . Thomson and Stokes were interested in investigating the feasibility of the proposed transatlantic telegraph cable . [ 3 ]
Thomson built his result by analogy with the heat transfer theory of Joseph Fourier (the transmission of an electrical step down a line is analogous to suddenly applying a fixed temperature at one end of a metal bar). He found that the equation governing the instantaneous voltage on the line, $v(x,t)$, is given by [ 4 ]

$\frac{\partial v}{\partial t} = \frac{1}{RC} \frac{\partial^2 v}{\partial x^2}$
It is from this that he derived the law of squares. [ 5 ] While Thomson's description of a transmission line is not exactly incorrect, and it is perfectly adequate for the low frequencies involved in a Victorian telegraph cable, it is not the complete picture. In particular, Thomson did not take into account the inductance (L) of the line, or the leakage conductivity (G) of the insulation material. [ 6 ] The full description was given by Oliver Heaviside in what is now known as the telegrapher's equations . [ 7 ] The law of squares can be derived from a special case of the telegrapher's equations – that is, with L and G set to zero. [ 8 ]
Thomson's result is quite counter-intuitive and led some to disbelieve it. The result that most telegraph engineers expected was that the delay in the peak would be directly proportional to line length. Telegraphy was in its infancy and many telegraph engineers were self-taught. They tended to mistrust academics and rely instead on practical experience. [ 9 ] Even as late as 1887, the author of a letter to The Electrician wished to "...protest against the growing tendency to drag mathematics into everything." [ 10 ]
One opponent of Thomson was of particular significance, Wildman Whitehouse , who challenged Thomson when he presented the theorem to the British Association in 1855. [ 11 ] Both Thomson and Whitehouse were associated with the transatlantic telegraph cable project, Thomson as an unpaid director and scientific advisor, and Whitehouse as the Chief Electrician of the Atlantic Telegraph Company . [ 12 ] Thomson's discovery threatened to derail the project, or at least, indicated that a much larger cable was required (a larger conductor will reduce R {\displaystyle R} and a thicker insulator will reduce C {\displaystyle C} ). [ 13 ] Whitehouse had no advanced mathematical education (he was a doctor by training) and did not fully understand Thomson's work. [ 14 ] He claimed he had experimental evidence that Thomson was wrong, but his measurements were poorly conceived and Thomson refuted his claims, showing that Whitehouse's results were consistent with the law of squares. [ 15 ]
Whitehouse believed that a thinner cable could be made to work with a high voltage induction coil . The Atlantic Telegraph Company, in a hurry to push ahead with the project, went with Whitehouse's cheaper solution rather than Thomson's. [ 16 ] After the cable was laid, it suffered badly from retardation, an effect that had first been noticed by Latimer Clark in 1853 on the Anglo-Dutch submarine cable of the Electric Telegraph Company . Retardation causes a delay and a lengthening of telegraph pulses, the latter as if one part of the pulse has been retarded more than the other. Retardation can cause adjacent telegraph pulses to overlap making them unreadable, an effect now called intersymbol interference . It forced telegraph operators to send more slowly to restore a space between pulses. [ 17 ] The problem was so severe on the Atlantic cable that transmission speeds were measured in minutes per word rather than words per minute . [ 18 ] In attempting to overcome this problem with ever higher voltage, Whitehouse permanently damaged the cable insulation and made it unusable. He was dismissed shortly afterwards. [ 19 ]
Some commentators overinterpreted the law of squares and concluded that it implied that the " speed of electricity " depends on the length of the cable. Heaviside, with typical sarcasm, in a piece in The Electrician countered this:
Is it possible to conceive that the current, when it first sets out to go, say, to Edinburgh, knows where it's going, how long a journey it has to make, and where it has to stop, so that it can adjust its speed accordingly? Of course not...
Both the law of squares and the differential retardation associated with it can be explained with reference to dispersion . This is the phenomenon whereby different frequency components of the telegraph pulse travel down the cable at different speeds depending on the cable materials and geometry. [ 21 ] This kind of analysis, using the frequency domain with Fourier analysis rather than the time domain , was unknown to telegraph engineers of the period. They would likely deny that a regular chain of pulses contained more than one frequency. [ 22 ] On a line dominated by resistance and capacitance, such as the low-frequency ones analysed by Thomson, the square of the velocity, $u$, of a wave frequency component is proportional to its angular frequency, $\omega$, such that

$u^2 = \frac{2\omega}{RC}$
See Primary line constants § Twisted pair and Primary line constants § Velocity for the derivation of this. [ 23 ]
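To make the relation concrete, here is a minimal Python sketch. It pushes a Gaussian pulse through the RC-line transfer function exp(−√(jωRC)·x); the per-unit-length values of R and C below are assumed, illustrative numbers, not figures from the article or from the Atlantic cable.

```python
import numpy as np

# Assumed illustrative line constants (not from the article):
R = 2.0e-3    # series resistance, ohms per metre
C = 0.15e-9   # shunt capacitance, farads per metre

fs = 1.0e5                        # sample rate, Hz
t = np.arange(0, 0.2, 1 / fs)     # 0.2 s observation window
pulse = np.exp(-((t - 0.02) / 0.001) ** 2)   # narrow Gaussian telegraph pulse

omega = 2 * np.pi * np.fft.fftfreq(t.size, 1 / fs)

def propagate(signal, x):
    """Apply the RC-line transfer function exp(-sqrt(j*w*R*C) * x)."""
    gamma = np.sqrt(1j * omega * R * C)      # propagation constant per metre
    return np.fft.ifft(np.fft.fft(signal) * np.exp(-gamma * x)).real

for x in (100e3, 200e3):                     # 100 km, then double the length
    delay = t[np.argmax(propagate(pulse, x))] - 0.02
    print(f"{x / 1e3:.0f} km line: peak delayed by {delay * 1e3:.2f} ms")
# Doubling the line length makes the peak delay roughly four times larger,
# which is Thomson's law of squares.
```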
From this relation it can be seen that the higher-frequency components travel faster, progressively stretching out the pulse. As the higher-frequency components "run away" from the main pulse, the remaining low-frequency components, which contain most of the energy, are left travelling progressively more slowly as a group. [ 24 ] | https://en.wikipedia.org/wiki/Law_of_squares |
The law of symmetry is a law in the field of crystallography concerning crystal structure . It states that all crystals of the same substance possess the same elements of symmetry. The law is also known as the law of constancy of symmetry , Haüy's law or the third law of crystallography .
As originally defined by Haüy in 1815, the law of symmetry was based on his law of decrements and his conception of crystals as assemblies of tiny parallelepipeds ( molécules intégrantes ) stacked up in three dimensions without leaving any gaps. The modern definition of the law is based on symmetry elements , and is more in the German dynamistic [ 1 ] crystallographic tradition of Christian Samuel Weiss , Moritz Ludwig Frankenheim and Johann F. C. Hessel . Weiss and his followers studied the external symmetry of crystals rather than their internal structure.
René Just Haüy first lectured about his law of symmetry in 1795, but it was not until 1815 that it was finally published. [ 2 ] [ 3 ] Haüy stated the law as follows: "It consists in this, that any one method of decrement ( décroissement ) is repeated on all those parts of the nucleus of which the resemblance is such, that one can be substituted for the other by changing the position of this nucleus with respect to the eye, without it (the nucleus) ceasing to be presented in the same aspect" [ 4 ]
Later authors stated the law in clearer forms.
Haüy's method of building crystals from stacked parallelepipeds has been replaced in modern crystallography by three-dimensional lattices ( Bravais lattices ). The 32 crystallographic point groups combine the following symmetry elements. [ 9 ]
If a crystal can be rotated about an axis through its centre into a position in which it appears identical to the starting position, then that axis is an axis of symmetry. A crystal may have zero, one, or multiple axes of symmetry but, by the crystallographic restriction theorem , the order of rotation may only be 2-fold, 3-fold, 4-fold, or 6-fold for each axis. An exception is made for quasicrystals , which may have other orders of rotation, for example 5-fold. An axis of symmetry is also known as a proper rotation.
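The restriction on rotation orders can be checked with a short calculation: a rotation compatible with a lattice has an integer matrix when written in a lattice basis, so its trace, 1 + 2·cos θ, must be an integer. A minimal Python sketch of this check:

```python
import numpy as np

# A lattice rotation maps lattice points to lattice points, so in a lattice
# basis its matrix has integer entries; the trace 1 + 2*cos(theta) must
# therefore be an integer. Test candidate n-fold rotations against this.
for n in range(1, 13):
    trace = 1 + 2 * np.cos(2 * np.pi / n)
    allowed = abs(trace - round(trace)) < 1e-9
    print(f"{n:2d}-fold rotation: trace = {trace:+.3f} -> "
          f"{'allowed' if allowed else 'forbidden'}")
# Only n = 1, 2, 3, 4 and 6 survive; 5-fold and orders above 6 are exactly
# the cases excluded for ordinary crystals (quasicrystals aside).
```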
If a crystal can be divided by a plane into two mirror-image halves, then the plane is a plane of symmetry. A crystal may have zero, one, or multiple planes of symmetry. For example, a cube has nine planes of symmetry. A plane of symmetry is also known as reflection symmetry or mirror symmetry.
If every face of a crystal has another identical face at an equal distance from a central point, then this point is called the centre of symmetry symbolised as i. A crystal can only have one centre of symmetry. A centre of symmetry is also known as point reflection , inversion symmetry, or centrosymmetry .
A rotoinversion , symbolised with an overbar ( 1̄ , 2̄ , 3̄ , 4̄ or 6̄ ), is a combination of a rotation about an axis and an inversion through a point on that axis; the combined operation is equivalent to a rotoreflection (a rotation followed by a reflection in a plane perpendicular to the axis) through a different angle. As an example, a two-fold rotoinversion ( 2̄ ) is illustrated in the figure. Rotoinversion is also known as improper rotation, rotoreflection, or rotation-reflection.
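As a quick illustration of how a rotoinversion composes a rotation with an inversion, the following sketch (the choice of z as the rotoinversion axis is assumed for the example) builds the matrices and shows that the 2-fold rotoinversion reduces to a mirror reflection:

```python
import numpy as np

def rotoinversion(n):
    """Rotation by 2*pi/n about the z axis, followed by inversion (-I)."""
    th = 2 * np.pi / n
    rot_z = np.array([[np.cos(th), -np.sin(th), 0.0],
                      [np.sin(th),  np.cos(th), 0.0],
                      [0.0,         0.0,        1.0]])
    return -rot_z    # composing with the inversion -I

print(np.round(rotoinversion(2)))
# diag(1, 1, -1): the 2-fold rotoinversion reflects across the xy-plane,
# i.e. it is the same operation as a mirror plane perpendicular to the axis.
```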
René Just Haüy showed in 1784 [ 10 ] that the law of constancy of interfacial angles could be accounted for if a crystal were made up of minute building blocks ( molécules intégrantes ), such as cubes, parallelepipeds , or rhombohedra . Haüy's method is named the law of decrements . [ 11 ] : 322 The law of rational indices was not stated in its modern form by Haüy, but it is directly implied by his law of decrements. [ 11 ] : 333
Haüy spoke for the first time about a law of symmetry in his physics classes at the École Normale Supérieure in 1795. In his memoir of 1815, Haüy related the number and the position of the faces observed on the external form of crystals to the symmetry of the hypothetical nucleus. [ 12 ] However, he deliberately excluded certain crystals, among others boracite , quartz , and the tourmalines . [ 13 ] : 7–8 He was forced to exclude these substances because their crystals did not exhibit holohedry (all of the edges and faces behaving in an equivalent manner), as required by his law of symmetry, but rather hemihedry (half of the edges and faces being equivalent while the other half act differently). In the figure, a cube is transformed into an octahedron when all the faces are decremented by an identical amount at each vertex (holohedry), but into a tetrahedron when only alternate faces are decremented (hemihedry), an example being boracite. Haüy knew about the pyroelectric effect and the polarity induced in tourmaline by a change of temperature; he thought that the hemihedry these crystals exhibited might be accounted for by different electric forces acting on the two extremities of the axis of the crystal during growth. [ 11 ] : 328–329
Haüy discovered that in some quartz crystals, certain faces were inclined more towards one side than the other. He called this type of quartz crystal 'plagihedral' and differentiated right from left plagihedra, depending on which direction the face was inclined. [ 14 ] : 138–139 In practice, Haüy knew that there were counter-examples to his law of symmetry, such as plagihedral quartz, but as he did not have an explanation for them, he dismissed them merely as rare anomalies. In summary, hemimorphic forms, such as quartz and tourmaline, caused Haüy's law of symmetry great difficulties. [ 15 ] : 180 In 1819, Weiss demonstrated the generality of this phenomenon and gave it the name of hemihedry, thus challenging Haüy's atomistic approach. [ 16 ] The modern definition of hemihedry is: "The point group of a crystal is called hemihedry if it is a subgroup of index 2 of the point group of its lattice." [ 17 ] The point group T d ( tetrahedral symmetry ) is a subgroup of index 2 of point group O h ( octahedral symmetry ).
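The index-2 relationship between T d and O h can be verified by brute force. The sketch below (a hypothetical verification, not taken from the cited sources) represents O h as the 48 signed permutation matrices, then keeps the operations that preserve a regular tetrahedron inscribed in the cube, recovering T d and its index:

```python
import itertools
import numpy as np

# Build O_h as the 48 signed permutation matrices (symmetries of a cube).
o_h = []
for perm in itertools.permutations(range(3)):
    for signs in itertools.product((1, -1), repeat=3):
        m = np.zeros((3, 3), dtype=int)
        for row, col in enumerate(perm):
            m[row, col] = signs[row]
        o_h.append(m)

# T_d: the subgroup preserving a tetrahedron inscribed in the cube.
tetra = {(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)}
t_d = [m for m in o_h if {tuple(m @ np.array(v)) for v in tetra} == tetra]

print(len(o_h), len(t_d), len(o_h) // len(t_d))  # 48 24 2: index-2 subgroup
```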
In his 1815 law of symmetry papers Haüy postulated the idea of rotational symmetry in crystals but he considered only a single (vertical) axis of rotation, which made it difficult to explain the observed crystal forms with additional (horizontal) axes of rotation. [ 18 ] As an example, Haüy did not recognize the existence of a horizontal axis of two-fold symmetry in cobaltite , and so could not include this mineral in his law of symmetry. [ 19 ]
The German mineralogists led by Weiss were interested in the optical properties of minerals and the systematic description of crystals. Their approach led to the first two determinations of all 32 point groups: by Frankenheim in 1826, and by Hessel in 1830, who used a different approach based on combining symmetry elements. [ 20 ] [ 11 ] : 367 Their work was not influential at the time, and, in 1867, Axel Gadolin independently rediscovered their results. [ 21 ]
Gabriel Delafosse continued Haüy's work in France. He was the first to use the terms lattice ( réseau ) and unit cell ( maille ). He stated that the orientation of the molecular axes in a substance is constant, which implies symmetry of translation (a defining feature of a lattice), and that the external symmetry of a crystal reflects its inner symmetry, namely the symmetry of the constituent atoms and their arrangement. In other words, the law of symmetry applies to both the inside and the outside of a crystal. [ 11 ] : 370–371
French scientists did not adopt the dynamistic crystallographic theory, but they did attempt to learn from it. Delafosse built on Haüy's crystallographic approach by stating that the structure and physical properties of crystals should exhibit the same symmetry. He aimed to resolve the apparent counter-examples to Haüy's law of symmetry by explaining that the symmetry of the physical phenomena revealed the inner structure of crystals, a structure sometimes more complex than the external morphology. Crystals, in these cases, are of lower symmetry than the lattice. This substructure explained the behaviour of hemihedral crystals, which were not adequately accounted for by Haüy. [ 22 ] : 40
Later work by Auguste Bravais in 1851, in which he defined the Bravais lattices , can be seen as drawing on a combination of the approaches of Haüy and Weiss. [ 23 ] [ 24 ] : 11–12 [ 25 ] | https://en.wikipedia.org/wiki/Law_of_symmetry_(crystallography) |
The law of the handicap of a head start (original Dutch : Wet van de remmende voorsprong ) , also known as the first-mover disadvantage or dialectics of lead , is a theory in economic history and technological development that proposes that an initial head start in a particular domain may paradoxically become a handicap over time.
The theory suggests that early adopters or pioneers may become entrenched in their initial infrastructure , technology , or methodologies , making them resistant to change and vulnerable to being overtaken by late adopters who can implement newer, more efficient systems without the burden of sunk costs or path dependence . The complementary principle—the law of the stimulative arrears ( wet van de stimulerende achterstand )—proposes that regions or organizations initially lagging behind may benefit from a form of leapfrogging , bypassing intermediate stages of development to implement the most current solutions.
This concept directly challenges the better-known first-mover advantage theory in business strategy and competitive market economics, which emphasizes the benefits of being the first to enter a market. Historical examples supporting the theory include Britain's early industrialization becoming a handicap when newer manufacturing methods emerged, and established urban infrastructures becoming obstacles to modernization compared to newly developed cities. The theory has applications across domains including economic development , technological innovation , institutional change , and international relations .
This concept was introduced in 1937 by Jan Romein , a Dutch historian and journalist , in his essay "The Dialectics of Progress" ("De dialectiek van de vooruitgang"), which appeared in his collection "The Unfinished Past" (Het onvoltooid verleden). [ 1 ]
The law of the handicap of a head start describes a phenomenon that is applicable in numerous settings. The law suggests that making progress in a particular area often creates circumstances in which stimuli are lacking to strive for further progress. This results in the individual or group that started out ahead eventually being overtaken by others. In the terminology of the law, the head start , initially an advantage, subsequently becomes a handicap .
An explanation for why the phenomenon occurs is that when a society dedicates itself to certain standards, and those standards change, it is harder for it to adapt . Conversely, a society that has not yet committed itself will not have this problem. Thus, a society that at one point has a head start over other societies may, at a later time, be stuck with obsolete technology or ideas that get in the way of further progress. One consequence of this is that what is considered the state of the art in a certain field can be seen as "jumping" from place to place, as each leader soon becomes a victim of the handicap.
In common terms, societies, companies, and individuals are often confronted with the decision to either invest now and get a fast return, or put off the investment until a new technology has emerged and possibly make a bigger profit then. For example, a regular problem for individuals is the decision of when to buy a new computer . Since computer speed develops at a steady pace, delaying the investment for a year may mean having to make do with a slower (or no) computer for the first year, but after that the individual will be able to buy a better computer for the same price. In many cases, however, the technological development is not as predictable as this, so it is harder to make an informed decision.
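As a rough illustration of that trade-off, here is a toy calculation; the growth rate, comparison window, and units are invented for the example, not taken from the text.

```python
# Toy model (all numbers assumed) of the buy-now-or-wait decision: performance
# per dollar grows steadily, and we compare total "performance-years" enjoyed
# over a fixed window.
growth = 0.40   # assumed annual improvement in performance per dollar
window = 6      # years over which the two choices are compared

buy_now = 1.0 * window                        # a machine of speed 1.0, every year
wait = 0.0 + (1.0 + growth) * (window - 1)    # no machine in year 1, faster one after

print(f"buy now: {buy_now:.1f} performance-years, wait a year: {wait:.1f}")
# With these assumptions waiting wins (7.0 > 6.0), but shrink the window to
# 3 years or slow the growth and buying now wins -- which is why the decision
# is hard to call when progress is less predictable.
```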
A related law, which can be considered the converse of this one, is the law of the stimulative arrears ( Wet van de stimulerende achterstand ), published by Erik van der Hoeven in 1980. [ 2 ]
Romein gives an example of the law in his original essay. During a trip to London , he wondered why, at that time, the city was still lit by gas lamps , rather than the electric lights that were by then common in other European capitals such as Amsterdam . His explanation was that London's head start (its possession of street lighting before most other cities) was now holding it back from replacing the gas lamps with more modern electric lights. As the streets were already lit, there was no pressing need to replace the gas lamps, despite the other advantages of electric lighting.
Secondary or late-movers to an industry or market have the opportunity to study first-movers and their techniques and strategies. "Late movers may be able to 'free-ride' on a pioneering firm's investments in a number of areas including R&D, buyer education, and infrastructure development." [ 3 ] The basic principle of this effect is that competitors are allowed to benefit without incurring the costs which the first-mover had to sustain. These "imitation costs" are much lower than the "innovation costs" the first-mover had to incur, and can also cut into the profits the pioneering firm would otherwise enjoy.
Studies of free-rider effects say the biggest benefit is riding the coattails of a company's research and development [ 4 ] and learning-based productivity improvements. [ 5 ] Other studies [ 6 ] have looked at free-rider effects in relation to labor costs, as first-movers may have to hire and train personnel to succeed, only to have the competition hire them away. [ 3 ] For example, Craigslist was the first and biggest website for finding short-term rentals; Airbnb came in a few years later and built a massive business at Craigslist's expense. [ 7 ]
First-movers must deal with the entire risk associated with developing a new technology and creating a new market for it. Late-movers have the advantage of not sustaining those risks to the same extent. While first-movers have nothing to draw upon when deciding potential revenues and firm sizes, late-movers are able to follow industry standards and adjust accordingly. [ 3 ] The first-mover must take on all the risk as these standards are set, and in some cases they do not last long enough to operate under the new standards.
"New entrants exploit technological discontinuities to displace existing incumbents." [ 3 ] Late entrants are sometimes able to assess a market need that will cause an initial product to be seen as inferior. This can occur when the first-mover does not adapt or see the change in customer needs, or when a competitor develops a better, more efficient, and sometimes less-expensive product. Often this new technology is introduced while the older technology is still growing, and the new technology may not be seen as an immediate threat. [ 3 ]
An example of this is the steam locomotive industry's failure to respond to the invention and commercialization of the diesel locomotive (Cooper and Schendel, 1976). This disadvantage is closely related to incumbent inertia, and occurs if the firm is unable to recognize a change in the market, or when a ground-breaking technology is introduced. In either case, first-movers are at a disadvantage: although they created the market, they have to sustain it, and can miss opportunities to advance while trying to preserve what they already have.
New products and services that require significant R&D or development also require significant up-front investment, so firms need to have funds available to cover it. If they do not have the cash on hand, this can lead to heavy borrowing and debt, which puts increased pressure on the products to do well. [ 7 ]
Though the name "first-mover advantage" hints that pioneering firms will remain more profitable than their competitors, this is not always the case. Certainly a pioneering firm will reap the benefits of early profits, but sometimes profits fall close to zero as a patent expires. This commonly leads to the sale of the patent, or exit from the market, which shows that the first-mover is not guaranteed longevity. This commonly accepted fact has led to the concept known as "second-mover advantage".
While firms enjoy the success of being the first entrant into the market, they can also become complacent and not fully capitalize on their opportunity. According to Lieberman and Montgomery:
Vulnerability of the first-mover is often enhanced by 'incumbent inertia'. Such inertia can have several root causes:
Firms that have heavily invested in fixed assets cannot readily adjust to the new challenges of the market, as they have less financial ability to change. Firms that simply do not wish to change their strategy or products and incur sunk costs from "cannibalizing" or changing the core of their business, fall victim to this inertia. [ 3 ] Such firms are less likely to be able to operate in a changing and competitive environment. They may pour too much of their assets into what works in the beginning, and not project what will be needed long term.
Studies that investigated why incumbent organizations fail to sustain themselves in the face of new challenges and technology have pinpointed other aspects of incumbents' failures. These included: "the development of organizational routines and standards, internal political dynamics, and the development of stable exchange relations with other organizations" (Hannan and Freeman, 1984).
All in all, some firms are too rigid and too invested in the "now": they are unable to look ahead, and so fail to maintain their current hold on the market. | https://en.wikipedia.org/wiki/Law_of_the_handicap_of_a_head_start |