text | source |
|---|---|
This page provides supplementary chemical data on lithium tantalate .
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet ( MSDS ) for this chemical from a reliable source such as SIRI , and follow its directions. | https://en.wikipedia.org/wiki/Lithium_tantalate_(data_page) |
Lithium tert -butoxide is the metalorganic compound with the formula LiOC(CH 3 ) 3 . A white solid, it is used as a strong base in organic synthesis . The compound is often depicted as a salt, and it often behaves as such, but it is not ionized in solution. Both octameric [ 1 ] and hexameric forms have been characterized by X-ray crystallography . [ 2 ]
Lithium tert -butoxide is commercially available as a solution and as a solid, but it is often generated in situ for laboratory use because samples are so sensitive and older samples are often of poor quality. It can be obtained by treating tert -butanol with butyllithium . [ 3 ]
As a strong base, lithium tert -butoxide is easily protonated.
Lithium tert -butoxide is used to prepare other tert -butoxide compounds such as copper(I) t-butoxide and hexa(tert-butoxy)dimolybdenum(III) : [ 4 ] | https://en.wikipedia.org/wiki/Lithium_tert-butoxide |
Lithium tetramethylpiperidide (often abbreviated LiTMP or LTMP ) is a chemical compound with the molecular formula LiC 9 H 18 N . It is used as a non-nucleophilic base , being comparable to LiHMDS in terms of steric hindrance .
It is synthesised by the deprotonation of 2,2,6,6-tetramethylpiperidine with n -butyllithium at −78 °C. Recent reports show that this reaction can also be performed at 0 °C. [ 1 ] The compound is stable in a THF / ethylbenzene solvent mixture and is commercially available as such.
Like many lithium reagents it has a tendency to aggregate, forming a tetramer in the solid state. [ 2 ] | https://en.wikipedia.org/wiki/Lithium_tetramethylpiperidide |
Lithium toxicity , also known as lithium overdose , is the condition of having too much lithium . Symptoms may include a tremor, increased reflexes, trouble walking, kidney problems, and an altered level of consciousness . Some symptoms may last for a year after levels return to normal. Complications may include serotonin syndrome . [ 1 ]
Lithium toxicity can occur due to excessive intake or decreased excretion. [ 1 ] Excessive intake may be either a suicide attempt or accidental. [ 1 ] Decreased excretion may occur as a result of dehydration such as from vomiting or diarrhea , a low sodium diet , or from kidney problems . [ 1 ] The diagnosis is generally based on symptoms and supported by a lithium level in blood serum of greater than 1.2 mEq/L. [ 1 ] [ 2 ]
Gastric lavage and whole bowel irrigation may be useful if done early. [ 1 ] Activated charcoal is not effective. [ 1 ] For severe toxicity hemodialysis is recommended. [ 1 ] The risk of death is generally low. [ 3 ] Acute toxicity generally has better outcomes than chronic toxicity. [ 4 ] In the United States about 5,000 cases are reported to poison control centers a year. [ 2 ] Lithium toxicity was first described in 1898. [ 1 ]
Symptoms of lithium toxicity can be mild, moderate, or severe. [ 1 ]
Mild symptoms, including nausea, feeling tired, and tremor, occur at a level of 1.5 to 2.5 mEq/L in blood serum. Moderate symptoms, including confusion, an increased heart rate, and low muscle tone, occur at a level of 2.5 to 3.5 mEq/L. [ 1 ] Severe symptoms, including coma, seizures, low blood pressure and increased body temperature, occur at a lithium concentration greater than 3.5 mEq/L. [ 1 ] When lithium overdoses produce neurological deficits or cardiac toxicity, the symptoms are considered serious and can be fatal. [ 5 ]
In acute toxicity, people have primarily gastrointestinal symptoms such as vomiting and diarrhea , which may result in volume depletion . During acute toxicity, lithium distributes later into the central nervous system causing dizziness and other mild neurological symptoms. [ 6 ]
In chronic toxicity, people have primarily neurological symptoms which include nystagmus , tremor , hyperreflexia , ataxia , and change in mental status . During chronic toxicity, the gastrointestinal symptoms seen in acute toxicity are less prominent. The symptoms are often vague and nonspecific. [ 7 ]
In acute on chronic toxicity, in which a person on long-term lithium therapy takes an acute overdose, people have symptoms of both acute and chronic toxicity.
People who survive an intoxication episode may develop persistent health problems. [ 8 ] This group of persistent symptoms is called the syndrome of irreversible lithium-effectuated neurotoxicity (SILENT). [ 9 ] The syndrome presents with irreversible neurological and neuro-psychiatric effects. [ 10 ] The neurological signs are cerebellar dysfunction, extrapyramidal symptoms , and brainstem dysfunction. [ 11 ] The neuro-psychiatric findings include memory deficits, cognitive deficits, and sub-cortical dementia . For a diagnosis, the syndrome requires the absence of prior symptoms and persistence of symptoms for greater than 2 months after cessation of lithium. [ 12 ]
Lithium is readily absorbed from the gastrointestinal tract . [ 5 ] It is distributed to the body with higher levels in the kidney, thyroid , and bone as compared to other tissues. Since lithium is almost exclusively excreted by the kidneys , people with preexisting chronic kidney disease are at high risk of developing lithium intoxication. [ 13 ] The drug itself is also known to be nephrotoxic , opening up the possibility of spontaneous emergence of toxicity at doses that were previously well-tolerated. Lithium toxicity can be mistaken for other syndromes associated with antipsychotic use, such as serotonin syndrome because lithium increases serotonin metabolites in the cerebrospinal fluid . [ 14 ]
There are several drug interactions with lithium. Interactions can occur from typical antipsychotics or atypical antipsychotics . In particular, certain drugs enhance lithium levels by increasing renal re-absorption at the proximal tubule. These drugs are angiotensin-converting enzyme inhibitors , non-steroidal anti-inflammatory drugs and thiazide diuretics . [ 13 ]
The diagnosis is generally based on symptoms and supported by a blood lithium level. [ 1 ] [ 2 ] Blood levels are most useful six to twelve hours after the last dose. [ 2 ] The normal blood serum lithium level in those on treatment is between 0.6 and 1.2 mEq/L. [ 1 ] Some blood tubes contain lithium heparin which may result in falsely elevated results. [ 2 ]
When lithium toxicity is suspected, tests may include:
Imaging tests are not helpful.
If the person's lithium toxicity is mild or moderate, lithium dosage is reduced or stopped entirely. [ 13 ] If the toxicity is severe, lithium may need to be removed from the body. The removal of lithium is done in a hospital emergency department . It may involve: | https://en.wikipedia.org/wiki/Lithium_toxicity |
Lithium triethylborohydride is the organoboron compound with the formula LiEt 3 BH. Commonly referred to as LiTEBH or Superhydride , it is a powerful reducing agent used in organometallic and organic chemistry . It is a colorless or white liquid but is typically marketed and used as a THF solution. [ 2 ] The related reducing agent sodium triethylborohydride is commercially available as toluene solutions.
LiBHEt 3 is a stronger reducing agent than lithium borohydride and lithium aluminium hydride .
LiBHEt 3 is prepared by the reaction of lithium hydride (LiH) and triethylborane (Et 3 B) in tetrahydrofuran (THF):
LiH + Et 3 B → LiBHEt 3
The resulting THF complex is stable indefinitely in the absence of moisture and air.
Alkyl halides are reduced to the alkanes by LiBHEt 3 . [ 3 ] [ 4 ] [ 2 ]
LiBHEt 3 reduces a wide range of functional groups, but so do many other hydride reagents. Instead, LiBHEt 3 is reserved for difficult substrates, such as sterically hindered carbonyls, as illustrated by the reduction of 2,2,4,4-tetramethyl-3-pentanone. Otherwise, it reduces acid anhydrides to the alcohol and the carboxylic acid, not to the diol . Similarly, lactones are reduced to diols. α,β-Enones undergo 1,4-addition to give lithium enolates . Disulfides are reduced to thiols (via thiolates). LiBHEt 3 deprotonates carboxylic acids, but does not reduce the resulting lithium carboxylates. Epoxides undergo ring-opening upon treatment with LiBHEt 3 to give the alcohol. With unsymmetrical epoxides, the reaction can proceed with high regio- and stereoselectivity, favoring attack at the least hindered position:
Acetals and ketals are not reduced by LiBHEt 3 . It can be used in the reductive cleavage of mesylates and tosylates . [ 5 ] LiBHEt 3 can selectively deprotect tertiary N-acyl groups without affecting secondary amide functionality. [ 6 ] It has also been shown to reduce aromatic esters to the corresponding alcohols.
LiBHEt 3 also reduces pyridines and isoquinolines to piperidines and tetrahydroisoquinolines , respectively. [ 7 ] The reduction of β-hydroxysulfinyl imines with catecholborane and LiBHEt 3 produces anti -1,3-amino alcohols. [ 8 ]
LiBHEt 3 reacts exothermically, potentially violently, with water, alcohols, and acids, releasing hydrogen and the pyrophoric triethylborane . [ 2 ] | https://en.wikipedia.org/wiki/Lithium_triethylborohydride |
A lithoautotroph is an organism that derives energy from reactions of reduced compounds of mineral (inorganic) origin. [ 1 ] Two types of lithoautotrophs are distinguished by their energy source; photolithoautotrophs derive their energy from light, while chemolithoautotrophs (chemolithotrophs or chemoautotrophs) derive their energy from chemical reactions. [ 1 ] Chemolithoautotrophs are exclusively microbes . Photolithoautotrophs include macroflora such as plants; macroflora, however, do not possess the ability to use mineral sources of reduced compounds for energy. Most chemolithoautotrophs belong to the domain Bacteria , while some belong to the domain Archaea . [ 1 ] Lithoautotrophic bacteria can only use inorganic molecules as substrates in their energy-releasing reactions. The term "lithotroph" is from Greek lithos ( λίθος ) meaning "rock" and trophos ( τροφός ) meaning "consumer"; literally, it may be read "eaters of rock". The "lithotroph" part of the name refers to the fact that these organisms use inorganic elements/compounds as their electron source, while the "autotroph" part of the name refers to their carbon source being CO 2 . [ 1 ] Many lithoautotrophs are extremophiles , but this is not universally so; some lithoautotrophs are the cause of acid mine drainage .
Lithoautotrophs are extremely specific in their source of reduced compounds. Thus, despite the diversity of inorganic compounds that lithoautotrophs as a group can exploit, a particular lithoautotroph typically uses only one type of inorganic molecule to get its energy. A chemolithotrophic example is the anaerobic ammonia-oxidizing (anammox) bacteria , which use ammonia and nitrite to produce dinitrogen (N 2 ). [ 1 ] Additionally, in July 2020, researchers reported the discovery of chemolithoautotrophic bacterial cultures that feed on the metal manganese after performing unrelated experiments, and named the bacterial species Candidatus Manganitrophus noduliformans and Ramlibacter lithotrophicus . [ 3 ]
Some chemolithotrophs use redox half-reactions with low reduction potentials for their metabolisms, meaning that they do not harvest a lot of energy compared to organisms that use organotrophic pathways. [ 1 ] This leads some chemolithotrophs, such as Nitrosomonas , to be unable to reduce NAD + directly; therefore, these organisms rely on reverse electron transport to reduce NAD + and form NADH and H + . [ 1 ]
Lithoautotrophs participate in many geological processes, such as the weathering of parent material (bedrock) to form soil , as well as biogeochemical cycling of sulfur , potassium , and other elements. [ 1 ] The existence of undiscovered strains of microbial lithoautotrophs is theorized based on some of these cycles, as they are needed to explain phenomena like the conversion of ammonium in iron-reducing environments. [ 4 ] Lithoautotrophs may be present in the deep terrestrial subsurface (they have been found well over 3 km below the surface of the planet), in soils , and in endolith communities. As they are responsible for the liberation of many crucial nutrients, and participate in the formation of soil , lithoautotrophs play a crucial role in the maintenance of life on Earth. For example, the nitrogen cycle is influenced by the activity of ammonium-oxidizing archaea, anammox bacteria, and complete ammonium-oxidizing (comammox) bacteria of the genus Nitrospira . [ 4 ]
Several environmental hazards, such as ammonium (NH 4 + ), hydrogen sulfide (H 2 S), and the greenhouse gas methane (CH 4 ), may be converted by chemolithoautotrophs into forms that are less environmentally harmful, such as N 2 , SO 4 2- , and CO 2 . [ 4 ] Although it was long believed that these organisms required oxygen to make these conversions, recent literature suggests that anaerobic oxidation also exists for these systems. [ 4 ]
Lithoautotrophic microbial consortia are responsible for the phenomenon known as acid mine drainage , whereby pyrite present in mine tailing heaps and in exposed rock faces is metabolized, using oxygen , to produce sulfites , which form potentially corrosive sulfuric acid when dissolved in water and exposed to aerial oxygen . [ 5 ] Acid mine drainage drastically alters the acidity and chemistry of groundwater and streams and may endanger plant and animal populations. Activity similar to acid mine drainage, but on a much lower scale, is also found in natural conditions such as the rocky beds of glaciers , in soil and talus , and in the deep subsurface. | https://en.wikipedia.org/wiki/Lithoautotroph |
Lithol Rubine BK is a reddish synthetic azo dye . It has the appearance of a red powder and is magenta when printed. It is slightly soluble in hot water, insoluble in cold water, and insoluble in ethanol . When dissolved in dimethylformamide , its absorption maximum lies at about 442 nm. It is usually supplied as a calcium salt. [ 1 ] It is prepared by azo coupling of a diazonium salt with 3-hydroxy-2-naphthoic acid .
It is used to dye plastics , paints , printing inks , and for textile printing . It is normally used as a standard magenta in the three and four color printing processes.
When used as a food dye , it has E number E180. It is used to color cheese rind, and it is a component in some lip balms . | https://en.wikipedia.org/wiki/Lithol_Rubine_BK |
Lithophytes are plants that grow in or on rocks . They can be classified as either epilithic (or epipetric) or endolithic; epilithic lithophytes grow on the surfaces of rocks, while endolithic lithophytes grow in the crevices of rocks (and are also referred to as chasmophytes). [ 1 ] Lithophytes can also be classified as being either obligate or facultative. Obligate lithophytes grow solely on rocks, while facultative lithophytes will grow partially on a rock and on another substrate simultaneously. [ 2 ]
Lithophytes that grow on land feed off nutrients from rain water and nearby decaying plants, including their own dead tissue. It is easier for chasmophytes to acquire nutrients because they grow in fissures in rocks where soil or organic matter has accumulated.
For most lithophytes, nitrogen is only available through interactions with the atmosphere. The most readily available form of nitrogen in the atmosphere is the gaseous state of ammonia (NH 3 ). Lithophytes consume atmospheric ammonia through a concentration gradient that allows the compound to traverse the plants' apoplast . Once free in the apoplast, gaseous ammonia is absorbed into metabolic cells by the enzyme glutamine synthetase . [ 3 ]
To be able to absorb the few nutrients available on rocks or rocky substrates efficiently, lithophytes have evolved certain adaptations. They possess decreased numbers of root hairs and larger root diameters in comparison to other plant species. To add to this nutrient uptake efficiency, lithophytic plants have increased their relationship with arbuscular mycorrhizal fungi and dark septate endophyte fungi. These two types of fungi live inter- and intracellularly with the roots of lithophytes and a wide variety of other plant species. They increase the uptake of nutrients and water and have been found in greater concentrations in lithophytes. [ 2 ]
Walls, and other exposed stonework, are colonised by plants in a similar way to the colonisation of cliffs and scree . These natural features are uncommon, especially in the lowlands, so walls are important for the conservation of plants which might otherwise be very isolated. Some wall plants even have 'wall' or 'muralis' as part of their common or scientific name, such as wall-flower ( Erysimum cheiri ) or ivy-leaved toadflax ( Cymbalaria muralis ), which shows their long established relationship with these man-made structures. ( English Heritage Landscape Advice Note: Vegetation on Walls ) [ 4 ]
Examples of lithophytes include many orchids such as Dendrobium and Paphiopedilum , bromeliads such as Tillandsia , as well as many ferns , algae and liverworts . Lithophytes have also been found in many other plant families, such as, Liliaceae , Amaryllidaceae , Begoniaceae , Caprifoliaceae , Crassulaceae , Piperaceae and Selaginellaceae . [ 5 ]
As nutrients tend to be rarely available to lithophytes or chasmophytes, many species of carnivorous plants can be viewed as being pre-adapted to life on rocks. By consuming prey, these plants can gather more nutrients than non-carnivorous lithophytes. [ 6 ] Examples include the pitcher plants Nepenthes campanulata and Heliamphora exappendiculata , many Pinguicula and several Utricularia species.
In the year 1863, Alfred, Lord Tennyson was moved to write his short and pithy poem of metaphysical speculation Flower in the Crannied Wall upon contemplating an unnamed lithophyte growing out of the masonry of the wishing well at Waggoners Wells . [ 7 ]
Flower in the crannied wall,
I pluck you out of the crannies,
I hold you here, root and all, in my hand,
Little flower—but if I could understand
What you are, root and all, and all in all,
I should know what God and man is. [ 8 ] | https://en.wikipedia.org/wiki/Lithophyte |
Lithoprotection is a term introduced in 2001 by the Armenian biologist Tigran Tadevosyan for the phenomenon whereby the rock cover of a habitat diversifies local wildlife .
The word "lithoprotection" originates from the Greek root "lithos", meaning "stone", and the Latin root "protectus", meaning "to cover".
The existence of lithoprotection was originally proposed on the basis of observations that habitats with rock cover support greater plant and animal diversity than habitats without rock cover under the same conditions of arid climate and steep slopes [ 1 ] [ 2 ] (Safarian, 1960; Tadevosyan, 2001). To explain this phenomenon, the author compared the life forms of plants, and the body sizes and escape strategies of animals, inhabiting habitats with and without rock cover [ 2 ] [ 3 ] (Tadevosyan, 2001 - 2002). He concluded that many groups of vascular plants , including trees, shrubs, succulents , ferns , and moss species, are linked to rocky substrates because they require higher-than-average soil humidity , shading by solid rock, or strong attachment to the substrate. On the other hand, many relatively large animals (including larger lizards , snakes , some mammals and many bird species) need habitats with a dense network of shelters, created by crevices and the spaces between rocks, in order to survive overheating and predation . Thus, by preventing the evaporation of moisture from the soil, catching the seeds of plants, anchoring trees and shrubs to the ground, and serving as a network of efficient shelters for many animals and shade-loving plants, rock cover is considered a protective element of habitat biodiversity.
Managing lithoprotection can serve as a wildlife-management measure. Removing lithoprotection by clearing an area of stones and rocks is a common practice in urban landscape development . Because a lack of lithoprotection corresponds to decreased biodiversity, this practice can be considered one of the core mechanisms of biodiversity loss in urban landscapes. Adding lithoprotection, by contrast, typically attracts wildlife and helps diversify and maintain the biodiversity of a landscape [ 2 ] (Tadevosyan, 2001). Notable examples of the deliberate use of lithoprotection are rockeries, the Japanese rock garden and other elements of garden design . | https://en.wikipedia.org/wiki/Lithoprotection |
Lithotrophs are a diverse group of organisms using an inorganic substrate (usually of mineral origin) to obtain reducing equivalents for use in biosynthesis (e.g., carbon dioxide fixation ) or energy conservation (i.e., ATP production) via aerobic or anaerobic respiration . [ 1 ] While lithotrophs in the broader sense include photolithotrophs like plants, chemolithotrophs are exclusively microorganisms ; no known macrofauna possesses the ability to use inorganic compounds as electron sources. Macrofauna and lithotrophs can form symbiotic relationships, in which case the lithotrophs are called "prokaryotic symbionts". An example of this is chemolithotrophic bacteria in giant tube worms or plastids , which are organelles within plant cells that may have evolved from photolithotrophic cyanobacteria-like organisms. Chemolithotrophs belong to the domains Bacteria and Archaea . The term "lithotroph" was created from the Greek terms 'lithos' (rock) and 'troph' (consumer), meaning "eaters of rock". Many but not all lithoautotrophs are extremophiles .
The last universal common ancestor of life is thought to be a chemolithotroph (due to its presence in the prokaryotes). [ 2 ] Different from a lithotroph is an organotroph , an organism which obtains its reducing agents from the catabolism of organic compounds.
The term was suggested in 1946 by Lwoff and collaborators. [ 3 ]
Lithotrophs consume reduced inorganic compounds (electron donors).
A chemolithotroph is able to use inorganic reduced compounds in its energy-producing reactions. [ 4 ] : 155 [ 5 ] This process involves the oxidation of inorganic compounds coupled to ATP synthesis. The majority of chemolithotrophs are chemolithoautotrophs , able to fix carbon dioxide (CO 2 ) through the Calvin cycle , a metabolic pathway in which CO 2 is converted to glucose . [ 6 ] This group of organisms includes sulfur oxidizers, nitrifying bacteria , iron oxidizers, and hydrogen oxidizers.
The term "chemolithotrophy" refers to a cell's acquisition of energy from the oxidation of inorganic compounds, also known as electron donors. This form of metabolism is believed to occur only in prokaryotes and was first characterized by Ukrainian microbiologist Sergei Winogradsky . [ 7 ]
The survival of these bacteria is dependent on the physiochemical conditions of their environment. Although they are sensitive to certain factors such as the quality of the inorganic substrate, they are able to thrive under some of the most inhospitable conditions in the world, such as temperatures above 110 °C and pH below 2. [ 8 ] The most important requirement for chemolithotrophic life is an abundant source of inorganic compounds, [ 9 ] which provide a suitable electron donor with which to fix CO 2 and produce the energy the microorganism needs to survive. Since chemosynthesis can take place in the absence of sunlight, these organisms are found mostly around hydrothermal vents and other locations rich in inorganic substrate.
The energy obtained from inorganic oxidation varies depending on the substrate and the reaction. For example, the oxidation of hydrogen sulfide to elemental sulfur by 1/2 O 2 produces far less energy (50 kcal / mol or 210 kJ /mol) than the oxidation of elemental sulfur to sulfate by 3/2 O 2 (150 kcal/mol or 627 kJ/mol). [ 10 ] The majority of lithotrophs fix carbon dioxide through the Calvin cycle, an energetically expensive process. [ 6 ] For some low-energy substrates, such as ferrous iron , the cells must process large amounts of inorganic substrate to secure just a small amount of energy. This makes their metabolic process inefficient in many places and hinders them from thriving. [ 11 ]
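For reference, the kilocalorie-to-kilojoule figures quoted above can be checked with a two-line conversion (1 kcal = 4.184 kJ):

```python
# Check the unit conversions quoted above (1 kcal = 4.184 kJ).
KJ_PER_KCAL = 4.184
print(50 * KJ_PER_KCAL)    # 209.2 -> quoted as ~210 kJ/mol
print(150 * KJ_PER_KCAL)   # 627.6 -> quoted as ~627 kJ/mol
```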
There is a fairly large variation in the types of inorganic substrates that these microorganisms can use to produce energy. Sulfur is one of many inorganic substrates that can be used in different reduced forms depending on the specific biochemical process that a lithotroph uses. [ 12 ] The chemolithotrophs that are best documented are aerobic respirers, meaning that they use oxygen in their metabolic process. The list of microorganisms that employ anaerobic respiration , though, is growing. At the heart of this metabolic process is an electron transport system that is similar to that of chemoorganotrophs. The major difference between the two is that chemolithotrophs directly provide electrons to the electron transport chain, while chemoorganotrophs must generate their own cellular reducing power by oxidizing reduced organic compounds. Chemolithotrophs bypass this by obtaining their reducing power directly from the inorganic substrate or by the reverse electron transport reaction. [ 13 ] Certain specialized chemolithotrophic bacteria use different derivatives of the Sox system, a central pathway specific to sulfur oxidation. [ 12 ] This ancient and unique pathway illustrates the capacity that chemolithotrophs have evolved to extract energy from inorganic substrates such as sulfur.
In chemolithotrophs, the compounds – the electron donors – are oxidized in the cell , and the electrons are channeled into respiratory chains, ultimately producing ATP . The electron acceptor can be oxygen (in aerobic bacteria), but a variety of other electron acceptors, organic and inorganic, are also used by various species . Aerobic bacteria such as the nitrifying bacteria, Nitrobacter , use oxygen to oxidize nitrite to nitrate. [ 14 ] Some lithotrophs produce organic compounds from carbon dioxide in a process called chemosynthesis , much as plants do in photosynthesis . Plants use energy from sunlight to drive carbon dioxide fixation, but chemosynthesis can take place in the absence of sunlight (e.g., around a hydrothermal vent ). Ecosystems establish in and around hydrothermal vents as the abundance of inorganic substances, namely hydrogen, are constantly being supplied via magma in pockets below the sea floor. [ 15 ] Other lithotrophs are able to directly use inorganic substances, e.g., ferrous iron, hydrogen sulfide, elemental sulfur, thiosulfate, or ammonia, for some or all of their energy needs. [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ]
Here are a few examples of chemolithotrophic pathways, any of which may use oxygen or nitrate as electron acceptors:
Ammonia oxidation to nitrite (nitrosifying bacteria): NH 3 (ammonia) + 2H 2 O → NO 2 − (nitrite) + 7H + + 6e − [ 22 ]
Anammox, ammonium as electron donor: NH 4 + (ammonium) → 1/2 N 2 (nitrogen) + 4H + + 3e − [ 25 ]
Anammox, nitrite as electron acceptor: NO 2 − (nitrite) + 4H + + 3e − → 1/2 N 2 (nitrogen) + 2H 2 O [ 25 ]
Denitrification, nitrate as electron acceptor: NO 3 − (nitrate) + 6H + + 5e − → 1/2 N 2 (nitrogen) + 3H 2 O [ 26 ]
Phosphite oxidation: PO 3 3− (phosphite) + H 2 O → PO 4 3− (phosphate) + 2H + + 2e −
Sulfur oxidation to sulfate: S 0 (sulfur) + 4H 2 O → SO 4 2− (sulfate) + 8H + + 6e −
Photolithotrophs such as plants obtain energy from light and therefore use inorganic electron donors such as water only to fuel biosynthetic reactions (e. g., carbon dioxide fixation in lithoautotrophs).
Lithotrophic bacteria cannot, of course, use their inorganic energy source as a carbon source for the synthesis of their cells. They choose one of three options:
In addition to this division, lithotrophs differ in the initial energy source which initiates ATP production:
Lithotrophs participate in many geological processes, such as the formation of soil and the biogeochemical cycling of carbon , nitrogen , and other elements . Lithotrophs also associate with the modern-day issue of acid mine drainage . Lithotrophs may be present in a variety of environments, including deep terrestrial subsurfaces, soils, mines, and in endolith communities. [ 27 ]
A primary example of lithotrophs that contribute to soil formation is Cyanobacteria . This group of bacteria are nitrogen-fixing photolithotrophs that are capable of using energy from sunlight and inorganic nutrients from rocks as reductants . [ 27 ] This capability allows for their growth and development on native, oligotrophic rocks and aids in the subsequent deposition of their organic matter (nutrients) for other organisms to colonize. [ 28 ] Colonization can initiate the process of organic compound decomposition : a primary factor for soil genesis. Such a mechanism has been attributed as part of the early evolutionary processes that helped shape the biological Earth.
Biogeochemical cycling of elements is an essential component of lithotrophs within microbial environments. For example, in the carbon cycle , there are certain bacteria classified as photolithoautotrophs that generate organic carbon from atmospheric carbon dioxide. Certain chemolithoautotrophic bacteria can also produce organic carbon, some even in the absence of light. [ 28 ] Similar to plants, these microbes provide a usable form of energy for organisms to consume. By contrast, there are lithotrophs that have the ability to ferment , implying their ability to convert organic carbon into another usable form. [ 29 ] Lithotrophs play an important role in the biological aspect of the iron cycle . These organisms can use iron as either an electron donor, Fe(II) → Fe(III), or as an electron acceptor, Fe(III) → Fe(II). [ 30 ] Another example is the cycling of nitrogen . Many lithotrophic bacteria play a role in reducing inorganic nitrogen ( nitrogen gas ) to organic nitrogen ( ammonium ) in a process called nitrogen fixation . [ 28 ] Likewise, there are many lithotrophic bacteria that also convert ammonium into nitrogen gas in a process called denitrification . [ 27 ] Carbon and nitrogen are important nutrients, essential for metabolic processes, and can sometimes be the limiting factor that affects organismal growth and development. Thus, lithotrophs are key players in both providing and removing these important resources.
Lithotrophic microbes are responsible for the phenomenon known as acid mine drainage . Typically occurring in mining areas, this process concerns the active metabolism of pyrite and other reduced sulfur components to sulfate . One example is the acidophilic bacterium Acidithiobacillus ferrooxidans , which uses pyrite (FeS 2 ) to generate sulfuric acid . [ 29 ] The acidic product of these specific lithotrophs has the potential to drain from the mining area via water run-off and enter the environment.
Acid mine drainage drastically alters the acidity (pH values of 2–3) and chemistry of groundwater and streams, and may endanger plant and animal populations downstream of mining areas. [ 29 ] Activities similar to acid mine drainage, but on a much lower scale, are also found in natural conditions such as the rocky beds of glaciers, in soil and talus, on stone monuments and buildings and in the deep subsurface.
It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on the planet Mars . [ 5 ] Furthermore, organic components ( biosignatures ) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions. [ 31 ]
On January 24, 2014, NASA reported that current studies by the Curiosity and Opportunity rovers on Mars will now be searching for evidence of ancient life, including a biosphere based on autotrophic , chemotrophic and/or chemolithoautotrophic microorganisms , as well as ancient water, including fluvio-lacustrine environments ( plains related to ancient rivers or lakes ) that may have been habitable . [ 32 ] [ 33 ] [ 34 ] [ 35 ] The search for evidence of habitability , taphonomy (related to fossils ), and organic carbon on the planet Mars is now a primary NASA objective. [ 32 ] [ 33 ] | https://en.wikipedia.org/wiki/Lithotroph |
The Lithuanian Plants Genes Bank ( Lithuanian : Augalų genų bankas, AGB ) is an organization for the conservation and sustainable use of plant gene resources, governed by Lithuania's Ministry of Environment . Its headquarters are in Akademija, Kėdainiai , in central Lithuania . [ 1 ]
The plant seed storage started to operate in 1997. In 2004 the Lithuanian Plants Gene Bank was established and the seed storage became a part of it. [ 2 ] By 2017 it held 3,318 samples from 201 varieties of different kinds of plants. [ 3 ] | https://en.wikipedia.org/wiki/Lithuanian_Plants_Genes_Bank |
Litmus is a water-soluble mixture of different dyes extracted from lichens . It is often absorbed onto filter paper to produce one of the oldest forms of pH indicator , used to test materials for acidity . In an acidic medium, blue litmus paper turns red, while in a basic or alkaline medium, red litmus paper turns blue. In short, it is a dye and an indicator used to place substances on the pH scale.
The word "litmus" comes from the Old Norse word "litmosi" meaning "colour moss" or "colouring moss". The word is attested only in one Mediaeval source, a Norwegian law codex from 1316 in a chapter on customs and excise duties on pelts and furs. [ 1 ] About 1300, the Spanish physician Arnaldus de Villa Nova began using litmus to study acids and bases. [ 2 ] [ 3 ]
From the 16th century onwards, the blue dye was extracted from some lichens , especially in the Netherlands .
Litmus can be found in different species of lichens . The dyes are extracted from such species as Roccella tinctoria (South American), Roccella fuciformis (Angola and Madagascar), Roccella pygmaea (Algeria), Roccella phycopsis , Lecanora tartarea (Norway, Sweden), Variolaria dealbata , Ochrolechia parella , Parmotrema tinctorum , and Parmelia . Currently, the main sources are Roccella montagnei (Mozambique) and Dendrographa leucophoea (California). [ 2 ]
The main use of litmus is to test whether a solution is acidic or basic , as blue litmus paper turns red under acidic conditions, and red litmus paper turns blue under basic or alkaline conditions, with the color change occurring over the pH range 4.5–8.3 at 25 °C (77 °F). Neutral litmus paper is purple. [ 2 ] Wet litmus paper can also be used to test for water-soluble gases that affect acidity or basicity ; the gas dissolves in the water and the resulting solution colors the litmus paper. For instance, ammonia gas, which is alkaline, turns red litmus paper blue. While all litmus paper acts as pH paper, the opposite is not true.
Litmus can also be prepared as an aqueous solution that functions similarly. Under acidic conditions, the solution is red, and under alkaline conditions, the solution is blue.
Chemical reactions other than acid–base can also cause a color change to litmus paper. For instance, chlorine gas turns blue litmus paper white; the litmus dye is bleached [ 4 ] because hypochlorite ions are present. This reaction is irreversible, so the litmus is not acting as an indicator in this situation.
The litmus mixture has the CAS number 1393-92-6 and contains 10 to around 15 different dyes. All of the chemical components of litmus are likely to be the same as those of the related mixture known as orcein , but in different proportions. In contrast with orcein, the principal constituent of litmus has an average molecular mass of 3300. [ 5 ] Litmus owes its acid–base indicator properties to a 7-hydroxyphenoxazone chromophore . [ 6 ] Some fractions of litmus were given specific names, including erythrolitmin (or erythrolein), azolitmin, spaniolitmin, leucoorcein, and leucazolitmin. Azolitmin shows nearly the same effect as litmus. [ 7 ]
A recipe for making litmus from the lichens is outlined on a UC Santa Barbara website. [ 8 ]
Details are difficult to find because the processes were kept secret.
The following summary of a modern manufacturing procedure is from The Vanishing Lichens , D H S Richardson, London, 1975.
The lichens (preferably Lecanora tartarea and Roccella tinctoria ) are ground in a solution of sodium carbonate and ammonia .
Stir the lichens from time to time and the color changes from red to purple and finally blue after about four weeks. The lichens are then dried and powdered. At this stage the lichens contain partly litmus and partly orcein pigments . The orcein is removed by extraction with alcohol , leaving the pure blue litmus. It is marketed as blue lumps, masses, or tablets, after mixing with colorless compounds such as chalk and gypsum . Litmus paper is paper impregnated with this substance.
Red litmus contains a weak diprotic acid . When it is exposed to a basic compound, the hydrogen ions react with the added base. The conjugate base formed from the litmus acid has a blue color, so the wet red litmus paper turns blue in an alkaline solution. | https://en.wikipedia.org/wiki/Litmus |
A litter is the live birth of multiple offspring at one time in animals from the same mother and usually from one set of parents , particularly from three to eight offspring. The word is most often used for the offspring of mammals , but can be used for any animal that gives birth to multiple young. In comparison, a group of eggs and the offspring that hatch from them are frequently called a clutch , while young birds are often called a brood . Animals from the same litter are referred to as littermates.
In most female mammals the average litter size is about half the number of mammae. [ 1 ] Presumably this enables females to successfully nurse litters even if some mammae fail to produce milk. Naked mole-rats break this "one-half rule" – field caught and lab born litters averaged 11 to 12 pups, and numbers of mammae on wild and captive females were similarly 11 to 12. Maximum litter sizes were 28 in the field and 27 in captivity, whereas the maximum number of mammae was 15. Breeding female naked mole-rats can bear and successfully rear litters that are far more numerous than their mammae because young take turns nursing from the same mammary and breeding females and pups are fed and protected by colony mates, enabling queens to concentrate their reproductive efforts on gestation and lactation. [ 2 ] [ 3 ]
Animals frequently display grouping behavior in herds , swarms , flocks , or colonies , and these multiple births derive similar advantages. A litter offers some protection from predation , not particularly to the individual young but to the parents' investment in breeding. With multiple young, predators could eat several and others could still survive to reach maturity, but with only one offspring, its loss could mean a wasted breeding season. The other significant advantage is the chance for the healthiest young animals to be favored from a group. Rather than it being a conscious decision on the part of the parents, the fittest and strongest baby competes most successfully for food and space, leaving the weakest young, or runts , to die through lack of care.
In the wild, only a small percentage, if any, of the litter may survive to maturity, whereas for domesticated animals and those in captivity with human care the whole litter almost always survives. Kittens and puppies are in this group. Carnivorans , rodents , and pigs usually have litters, while primates and larger herbivores usually have singletons. | https://en.wikipedia.org/wiki/Litter_(zoology) |
In mathematical queueing theory , Little's law (also result , theorem , lemma , or formula [ 1 ] [ 2 ] ) is a theorem by John Little which states that the long-term average number L of customers in a stationary system is equal to the long-term average effective arrival rate λ multiplied by the average time W that a customer spends in the system. Expressed algebraically the law is
L = λ W . {\displaystyle L=\lambda W.}
The relationship is not influenced by the arrival process distribution, the service distribution, the service order, or practically anything else. In most queuing systems, service time is the bottleneck that creates the queue. [ 3 ]
The result applies to any system, and particularly, it applies to systems within systems. [ 4 ] For example, in a bank branch, the customer line might be one subsystem, and each of the tellers another subsystem, and Little's result could be applied to each one, as well as to the branch as a whole. The only requirements are that the system be stable and non-preemptive; this rules out transition states such as initial startup or shutdown.
In some cases it is possible not only to mathematically relate the average number in the system to the average wait but even to relate the entire probability distribution (and moments) of the number in the system to the wait. [ 5 ]
In a 1954 paper, Little's law was assumed true and used without proof. [ 6 ] [ 7 ] The form L = λW was first published by Philip M. Morse where he challenged readers to find a situation where the relationship did not hold. [ 6 ] [ 8 ] Little published in 1961 his proof of the law, showing that no such situation existed. [ 9 ] Little's proof was followed by a simpler version by Jewell [ 10 ] and another by Eilon. [ 11 ] Shaler Stidham published a different and more intuitive proof in 1972. [ 12 ] [ 13 ]
Imagine an application that had no easy way to measure response time . If the mean number in the system and the throughput are known, the average response time can be found using Little’s Law:
For example: A queue depth meter shows an average of nine jobs waiting to be serviced. Add one for the job being serviced, so there is an average of ten jobs in the system. Another meter shows a mean throughput of 50 per second. The mean response time is then 10 jobs / (50 jobs per second) = 0.2 seconds.
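As a sketch, the arithmetic above can be written out in Python; the meter readings (ten jobs in the system, a throughput of 50 jobs per second) are the ones from the example:

```python
# Little's law rearranged to W = L / lambda.
def mean_response_time(jobs_in_system: float, throughput: float) -> float:
    """Average time a job spends in the system."""
    return jobs_in_system / throughput

L = 9 + 1     # nine jobs queued plus one being serviced
lam = 50.0    # measured throughput, jobs per second
print(mean_response_time(L, lam))   # 0.2 (seconds)
```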
Imagine a small store with a single counter and an area for browsing, where only one person can be at the counter at a time, and no one leaves without buying something. So the system is:
If the rate at which people enter the store (called the arrival rate) is the rate at which they exit (called the exit rate), the system is stable. By contrast, an arrival rate exceeding an exit rate would represent an unstable system, where the number of waiting customers in the store would gradually increase towards infinity.
Little's Law tells us that the average number of customers in the store L , is the effective arrival rate λ , times the average time that a customer spends in the store W , or simply:
Assume customers arrive at the rate of 10 per hour and stay an average of 0.5 hour. This means we should find the average number of customers in the store at any time to be 5.
Now suppose the store is considering doing more advertising to raise the arrival rate to 20 per hour. The store must either be prepared to host an average of 10 occupants or must reduce the time each customer spends in the store to 0.25 hour. The store might achieve the latter by ringing up the bill faster or by adding more counters.
We can apply Little's Law to systems within the store. For example, consider the counter and its queue. Assume we notice that there are on average 2 customers in the queue and at the counter. We know the arrival rate is 10 per hour, so customers must be spending 0.2 hours on average checking out.
We can even apply Little's Law to the counter itself. The average number of people at the counter would be in the range (0, 1) since no more than one person can be at the counter at a time. In that case, the average number of people at the counter is also known as the utilisation of the counter.
However, because a store in reality generally has a limited amount of space, it can eventually become unstable. If the arrival rate is much greater than the exit rate, the store will eventually start to overflow, and thus any new arriving customers will simply be rejected (and forced to go somewhere else or try again later) until there is once again free space available in the store. This is also the difference between the arrival rate and the effective arrival rate , where the arrival rate roughly corresponds to the rate at which customers arrive at the store, whereas the effective arrival rate corresponds to the rate at which customers enter the store. However, in a system with an infinite size and no loss, the two are equal.
To use Little's law on data, formulas must be used to estimate the parameters, as the result does not necessarily directly apply over finite time intervals, due to problems like how to log customers already present at the start of the logging interval and those who have not yet departed when logging stops. [ 14 ]
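As an illustration (a simulation sketch, not part of the article; the Poisson arrivals and uniform service times are arbitrary choices, since the law is distribution-free), one can log arrivals and departures in a simulated single-counter queue, estimate L , λ, and W independently from the log, and observe that L ≈ λ W :

```python
# Sketch: measure L, lambda, and W independently in a single-server FIFO queue
# and check that L is approximately lambda * W.
import bisect
import random

random.seed(1)
T = 10_000.0          # hours of simulated operation
arrival_rate = 10.0   # customers per hour, as in the store example

# Poisson arrival times on [0, T).
arrivals, t = [], random.expovariate(arrival_rate)
while t < T:
    arrivals.append(t)
    t += random.expovariate(arrival_rate)

# One counter, first come first served; service takes 0-6 minutes.
free_at, departures = 0.0, []
for a in arrivals:
    start = max(a, free_at)
    free_at = start + random.uniform(0.0, 0.1)
    departures.append(free_at)          # non-decreasing, hence sorted

W = sum(d - a for a, d in zip(arrivals, departures)) / len(arrivals)
lam = len(arrivals) / T
# Estimate L by counting customers present at many random probe times.
probes = [random.uniform(0.0, T) for _ in range(100_000)]
L = sum(bisect.bisect(arrivals, p) - bisect.bisect(departures, p)
        for p in probes) / len(probes)
print(f"L = {L:.3f}, lambda * W = {lam * W:.3f}")   # approximately equal
```

The small discrepancy that remains comes exactly from the boundary effects described above: customers still in the system when the logging window closes.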
Little's law is widely used in manufacturing to predict lead time based on the production rate and the amount of work-in-process. [ 15 ]
Software-performance testers have used Little's law to ensure that the observed performance results are not due to bottlenecks imposed by the testing apparatus. [ 16 ] [ 17 ]
Other applications include staffing emergency departments in hospitals. [ 18 ] [ 19 ]
Lastly, an equivalent version of Little's law also applies in the fields of demography and population biology , although not referred to as "Little's Law". [ 20 ] [ 21 ] For example, Cohen (2008) [ 22 ] explains that in a homogeneous stationary population without migration, P = B × e {\displaystyle P=B\times e} , where P {\displaystyle P} is the total population size, B {\displaystyle B} is the number of births per year, and e {\displaystyle e} is the life expectancy from birth. The formula P = B × e {\displaystyle P=B\times e} is thus directly equivalent to Little's law ( L = λ × W {\displaystyle L=\lambda \times W} ). However, biological populations tend to be dynamic and therefore more complicated to model accurately. [ 23 ]
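As a sketch with hypothetical figures (not from the source), the demographic form is a one-line computation:

```python
# Demographic form of Little's law: P = B * e.
births_per_year = 4_000_000      # B, the "arrival rate"
life_expectancy = 80.0           # e, the time "in the system", in years
population = births_per_year * life_expectancy   # P, the number "in the system"
print(f"{population:,.0f}")      # 320,000,000
```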
An extension of Little's law provides a relationship between the steady state distribution of number of customers in the system and time spent in the system under a first come, first served service discipline. [ 24 ] | https://en.wikipedia.org/wiki/Little's_lemma |
Little gastrin I is a form of gastrin commonly called gastrin-17. [ 1 ] It is a peptide hormone secreted by the intestine.
Gastrin II has an amino acid composition identical to that of Gastrin I ; the only difference is that the single tyrosine residue is sulfated in Gastrin II. [ 2 ] | https://en.wikipedia.org/wiki/Little_gastrin_I |
The little pocket mouse ( Perognathus longimembris ) is a species of rodent in the family Heteromyidae . It is found in Baja California and Sonora in Mexico and in Arizona , California , Idaho , Nevada , Oregon and Utah in the United States . [ 1 ] Its natural habitat is subtropical or tropical dry lowland grassland . It is a common species and faces no particular threats and the IUCN has listed it as being of " least concern ".
Five mice of this species travelled to and orbited the Moon 75 times in an experiment on board the Apollo 17 command module in December 1972. Four of the mice survived the trip. [ 2 ] Six other little pocket mice were sent into orbit with Skylab 3 in July 1973, though these animals died only 30 hours into the mission due to a power failure. [ 3 ] [ 4 ]
This small mouse, with a long tail, inhabits arid and semiarid habitats with grasses, sagebrush and other scrubby vegetation. It is nocturnal and has a short period of activity for the first two hours after sunset, and then sporadic activity through the rest of the night. It sleeps in winter and is only active between April and November with numbers building up rapidly in the spring, peaking in June and July. It forages for seeds, plant material and small invertebrates which it carries back to its burrow in its cheek pouches . [ 5 ]
The little pocket mouse is common within most of its range although it is scarce in Baja California. The population appears to be steady and no particular threats have been identified for this species so the International Union for Conservation of Nature has assessed it as being of " least concern ". [ 1 ] | https://en.wikipedia.org/wiki/Little_pocket_mouse |
In mathematical analysis , Littlewood's 4/3 inequality , named after John Edensor Littlewood , [ 1 ] is an inequality that holds for every complex-valued bilinear form defined on c 0 {\displaystyle c_{0}} , the Banach space of scalar sequences that converge to zero.
Precisely, let B : c 0 × c 0 → C {\displaystyle B:c_{0}\times c_{0}\to \mathbb {C} } or R {\displaystyle \mathbb {R} } be a bilinear form. Then the following holds:
( ∑ j , k = 1 ∞ | B ( e j , e k ) | 4 / 3 ) 3 / 4 ≤ 2 ‖ B ‖ , {\displaystyle \left(\sum _{j,k=1}^{\infty }|B(e_{j},e_{k})|^{4/3}\right)^{3/4}\leq {\sqrt {2}}\,\|B\|,}
where ‖ B ‖ = sup { | B ( x , y ) | : ‖ x ‖ ∞ ≤ 1 , ‖ y ‖ ∞ ≤ 1 } {\displaystyle \|B\|=\sup\{|B(x,y)|:\|x\|_{\infty }\leq 1,\|y\|_{\infty }\leq 1\}} and ( e n ) {\displaystyle (e_{n})} is the canonical basis of c 0 {\displaystyle c_{0}} .
The exponent 4/3 is optimal, i.e., cannot be improved by a smaller exponent. [ 2 ] It is also known that for real scalars the aforementioned constant is sharp. [ 3 ]
Bohnenblust–Hille inequality [ 4 ] is a multilinear extension of Littlewood's inequality which states that for every m {\displaystyle m} -linear mapping M : c 0 × ⋯ × c 0 → C {\displaystyle M:c_{0}\times \cdots \times c_{0}\to \mathbb {C} } the following holds:
( ∑ j 1 , … , j m = 1 ∞ | M ( e j 1 , … , e j m ) | 2 m m + 1 ) m + 1 2 m ≤ C m ‖ M ‖ {\displaystyle \left(\sum _{j_{1},\dots ,j_{m}=1}^{\infty }|M(e_{j_{1}},\dots ,e_{j_{m}})|^{\frac {2m}{m+1}}\right)^{\frac {m+1}{2m}}\leq C_{m}\|M\|}
for some constant C m {\displaystyle C_{m}} depending only on m {\displaystyle m} . | https://en.wikipedia.org/wiki/Littlewood's_4/3_inequality |
Littlewood's three principles of real analysis are heuristics of J. E. Littlewood to help teach the essentials of measure theory in mathematical analysis .
Littlewood stated the principles in his 1944 Lectures on the Theory of Functions [ 1 ] as:
There are three principles, roughly expressible in the following terms: Every ( measurable ) set is nearly a finite sum of intervals; every function (of class L p ) is nearly continuous ; every convergent sequence of functions is nearly uniformly convergent .
The first principle is based on the fact that the inner measure and outer measure are equal for measurable sets, the second is based on Lusin's theorem , and the third is based on Egorov's theorem .
Littlewood's three principles are quoted in several real analysis texts, for example Royden, [ 2 ] Bressoud, [ 3 ] and Stein & Shakarchi. [ 4 ]
Royden [ 5 ] gives the bounded convergence theorem as an application of the third principle. The theorem states that if a uniformly bounded sequence of functions converges pointwise, then their integrals on a set of finite measure converge to the integral of the limit function. If the convergence were uniform this would be a trivial result, and Littlewood's third principle tells us that the convergence is almost uniform, that is, uniform outside of a set of arbitrarily small measure. Because the sequence is bounded, the contribution to the integrals of the small set can be made arbitrarily small, and the integrals on the remainder converge because the functions are uniformly convergent there. | https://en.wikipedia.org/wiki/Littlewood's_three_principles_of_real_analysis |
In mathematics , a Littlewood polynomial is a polynomial all of whose coefficients are +1 or −1. Littlewood's problem asks how large the values of such a polynomial must be on the unit circle in the complex plane . The answer to this would yield information about the autocorrelation of binary sequences. They are named for J. E. Littlewood who studied them in the 1950s.
A polynomial
p ( x ) = ∑ i = 0 n a i x i {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}}
is a Littlewood polynomial if all the a i = ±1 . Littlewood's problem asks for constants c 1 and c 2 such that there are infinitely many Littlewood polynomials p n , of increasing degree n , satisfying
c 1 n + 1 ≤ | p n ( z ) | ≤ c 2 n + 1 {\displaystyle c_{1}{\sqrt {n+1}}\leq |p_{n}(z)|\leq c_{2}{\sqrt {n+1}}}
for all z on the unit circle. The Rudin–Shapiro polynomials provide a sequence satisfying the upper bound with c 2 = √ 2 .
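A short numerical sketch (assuming NumPy; an illustration, not part of the article) builds the Rudin–Shapiro coefficients from their defining recursion and checks this upper bound at sampled points of the unit circle:

```python
# Verify |p(z)| <= sqrt(2) * sqrt(n + 1) at sampled points of |z| = 1 for a
# Rudin-Shapiro polynomial, whose coefficients are all +1 or -1.
import numpy as np

def rudin_shapiro(k):
    """Coefficients (ascending degree) of P_k, where
    P_{k+1} = P_k + x^(2^k) Q_k and Q_{k+1} = P_k - x^(2^k) Q_k."""
    p = q = np.array([1])
    for _ in range(k):
        p, q = np.concatenate([p, q]), np.concatenate([p, -q])
    return p

coeffs = rudin_shapiro(10)               # 2^10 = 1024 coefficients, each +/-1
n = len(coeffs) - 1                      # degree of the polynomial
z = np.exp(2j * np.pi * np.arange(8192) / 8192)   # sample points on |z| = 1
values = np.polyval(coeffs[::-1], z)     # polyval wants highest degree first
print(np.abs(values).max() <= np.sqrt(2) * np.sqrt(n + 1))   # True
```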
In 2019, an infinite family of Littlewood polynomials satisfying both the upper and lower bound was constructed by Paul Balister, Béla Bollobás , Robert Morris , Julian Sahasrabudhe , and Marius Tiba. | https://en.wikipedia.org/wiki/Littlewood_polynomial |
In the mathematical field of combinatorial geometry , the Littlewood–Offord problem is the problem of determining the number of subsums of a set of vectors that fall in a given convex set . More formally, if V is a vector space of dimension d , the problem is to determine, given a finite subset of vectors S and a convex subset A , the number of subsets of S whose summation is in A .
The first upper bound for this problem was proven (for d = 1 and d = 2) in 1938 by John Edensor Littlewood and A. Cyril Offord . [ 1 ] This Littlewood–Offord lemma states that if S is a set of n real or complex numbers of absolute value at least one and A is any disc of radius one, then not more than ( c log n / n ) 2 n {\displaystyle {\Big (}c\,\log n/{\sqrt {n}}{\Big )}\,2^{n}} of the 2 n possible subsums of S fall into the disc.
In 1945 Paul Erdős improved the upper bound for d = 1 to
( n ⌊ n / 2 ⌋ ) ≈ 2 n 2 π n {\displaystyle {\binom {n}{\lfloor n/2\rfloor }}\approx 2^{n}{\sqrt {\frac {2}{\pi n}}}}
using Sperner's theorem . [ 2 ] This bound is sharp; equality is attained when all vectors in S are equal. In 1966, Kleitman showed that the same bound held for complex numbers. In 1970, he extended this to the setting when V is a normed space . [ 2 ]
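A brute-force sketch in Python (an illustration, exponential in n , so only for small n ) shows the bound being attained in the extremal case where all elements of S are equal:

```python
# Count subset sums falling in an open interval of radius 1 (the d = 1 case).
# With all elements equal to 1, the count attains Erdos's bound C(n, n // 2).
from itertools import combinations
from math import comb

def subsums_in_ball(S, center, radius=1.0):
    n = len(S)
    return sum(1
               for r in range(n + 1)
               for subset in combinations(S, r)
               if abs(sum(subset) - center) < radius)

n = 12
S = [1.0] * n                              # the extremal configuration
print(subsums_in_ball(S, center=n // 2))   # 924
print(comb(n, n // 2))                     # 924, the Erdos upper bound
```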
Suppose S = { v 1 , …, v n }. By subtracting
1 2 ∑ i = 1 n v i {\displaystyle {\frac {1}{2}}\sum _{i=1}^{n}v_{i}}
from each possible subsum (that is, by changing the origin and then scaling by a factor of 2), the Littlewood–Offord problem is equivalent to the problem of determining the number of sums of the form
∑ i = 1 n ε i v i {\displaystyle \sum _{i=1}^{n}\varepsilon _{i}v_{i}}
that fall in the target set A , where ε i {\displaystyle \varepsilon _{i}} takes the value 1 or −1. This makes the problem into a probabilistic one, in which the question is of the distribution of these random vectors , and what can be said knowing nothing more about the v i . | https://en.wikipedia.org/wiki/Littlewood–Offord_problem |
In mathematics , the Littlewood–Richardson rule is a combinatorial description of the coefficients that arise when decomposing a product of two Schur functions as a linear combination of other Schur functions. These coefficients are natural numbers, which the Littlewood–Richardson rule describes as counting certain skew tableaux . They occur in many other mathematical contexts, for instance as multiplicity in the decomposition of tensor products of finite-dimensional representations of general linear groups , or in the decomposition of certain induced representations in the representation theory of the symmetric group , or in the area of algebraic combinatorics dealing with Young tableaux and symmetric polynomials .
Littlewood–Richardson coefficients depend on three partitions , say λ , μ , ν {\displaystyle \lambda ,\mu ,\nu } , of which λ {\displaystyle \lambda } and μ {\displaystyle \mu } describe the Schur functions being multiplied, and ν {\displaystyle \nu } gives the Schur function of which this is the coefficient in the linear combination; in other words they are the coefficients c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} such that
s λ s μ = ∑ ν c λ , μ ν s ν . {\displaystyle s_{\lambda }s_{\mu }=\sum _{\nu }c_{\lambda ,\mu }^{\nu }s_{\nu }.}
The Littlewood–Richardson rule states that c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} is equal to the number of Littlewood–Richardson tableaux of skew shape ν / λ {\displaystyle \nu /\lambda } and of weight μ {\displaystyle \mu } .
Unfortunately the Littlewood–Richardson rule is much harder to prove than was at first suspected. The author was once told that the Littlewood–Richardson rule helped to get men on the moon but was not proved until after they got there.
The Littlewood–Richardson rule was first stated by D. E. Littlewood and A. R. Richardson ( 1934 , theorem III p. 119) but though they claimed it as a theorem they only proved it in some fairly simple special cases. Robinson ( 1938 ) claimed to complete their proof, but his argument had gaps, though it was so obscurely written that these gaps were not noticed for some time, and his argument is reproduced in the book ( Littlewood 1950 ). Some of the gaps were later filled by Macdonald (1995) . The first rigorous proofs of the rule were given four decades after it was found, by Schützenberger ( 1977 ) and Thomas (1974) , after the necessary combinatorial theory was developed by C. Schensted ( 1961 ), Schützenberger ( 1963 ), and Knuth ( 1970 ) in their work on the Robinson–Schensted correspondence .
There are now several short proofs of the rule, such as ( Gasharov 1998 ), and ( Stembridge 2002 ) using Bender-Knuth involutions . Littelmann (1994) used the Littelmann path model to generalize the Littlewood–Richardson rule to other semisimple Lie groups.
The Littlewood–Richardson rule is notorious for the number of errors that appeared prior to its complete, published proof. Several published attempts to prove it are incomplete, and it is particularly difficult to avoid errors when doing hand calculations with it: even the original example in D. E. Littlewood and A. R. Richardson ( 1934 ) contains an error.
A Littlewood–Richardson tableau is a skew semistandard tableau with the additional property that the sequence obtained by concatenating its reversed rows is a lattice word (or lattice permutation), which means that in every initial part of the sequence any number $i$ occurs at least as often as the number $i+1$. Another equivalent (though not quite obviously so) characterization is that the tableau itself, and any tableau obtained from it by removing some number of its leftmost columns, has a weakly decreasing weight. Many other combinatorial notions have been found that turn out to be in bijection with Littlewood–Richardson tableaux, and can therefore also be used to define the Littlewood–Richardson coefficients.
Consider the case that $\lambda = (2,1)$, $\mu = (3,2,1)$ and $\nu = (4,3,2)$. Then the fact that $c_{\lambda,\mu}^{\nu} = 2$ can be deduced from the fact that the two tableaux shown at the right are the only two Littlewood–Richardson tableaux of shape $\nu/\lambda$ and weight $\mu$. Indeed, since the last box on the first nonempty line of the skew diagram can only contain an entry 1, the entire first line must be filled with entries 1 (this is true for any Littlewood–Richardson tableau); in the last box of the second row we can only place a 2, by column strictness and the fact that our lattice word cannot contain any larger entry before it contains a 2. For the first box of the second row we can now either use a 1 or a 2. Once that entry is chosen, the third row must contain the remaining entries to make the weight (3,2,1), in a weakly increasing order, so we have no choice left any more; in both cases it turns out that we do find a Littlewood–Richardson tableau.
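The rule can be checked on this example by direct enumeration. Below is a minimal brute-force sketch in Python (illustrative only; it tries every filling, which is practical only for small partitions):

    from itertools import product

    def lr_coefficient(lam, mu, nu):
        """Count Littlewood-Richardson tableaux of skew shape nu/lam
        and weight mu by checking every possible filling."""
        lam = tuple(lam) + (0,) * (len(nu) - len(lam))
        cells = [(r, c) for r in range(len(nu)) for c in range(lam[r], nu[r])]
        count = 0
        for filling in product(range(1, len(mu) + 1), repeat=len(cells)):
            T = dict(zip(cells, filling))
            # weight: the entry i must occur exactly mu[i-1] times
            if any(filling.count(i) != mu[i - 1] for i in range(1, len(mu) + 1)):
                continue
            # semistandard: rows weakly increase, columns strictly increase
            if any(T[r, c] > T[r, c + 1] for (r, c) in T if (r, c + 1) in T):
                continue
            if any(T[r, c] >= T[r + 1, c] for (r, c) in T if (r + 1, c) in T):
                continue
            # lattice word: concatenate reversed rows, check prefix counts
            word = [T[r, c] for r in range(len(nu))
                    for c in reversed(range(lam[r], nu[r]))]
            counts = [0] * (len(mu) + 1)
            ok = True
            for e in word:
                counts[e] += 1
                if e > 1 and counts[e] > counts[e - 1]:
                    ok = False
                    break
            if ok:
                count += 1
        return count

    print(lr_coefficient((2, 1), (3, 2, 1), (4, 3, 2)))  # -> 2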
The condition that the sequence of entries read from the tableau in a somewhat peculiar order form a lattice word can be replaced by a more local and geometrical condition. Since in a semistandard tableau equal entries never occur in the same column, one can number the copies of any value from right to left, which is their order of occurrence in the sequence that should be a lattice word. Call the number so associated to each entry its index, and write an entry i with index j as i[j]. Now if some Littlewood–Richardson tableau contains an entry $i > 1$ with index j, then that entry i[j] should occur in a row strictly below that of $(i-1)[j]$ (which certainly also occurs, since the entry i − 1 occurs at least as often as the entry i does). In fact the entry i[j] should also occur in a column no further to the right than that same entry $(i-1)[j]$ (which at first sight appears to be a stricter condition). If the weight of the Littlewood–Richardson tableau is fixed beforehand, then one can form a fixed collection of indexed entries, and if these are placed in a way respecting those geometric restrictions, in addition to those of semistandard tableaux and the condition that indexed copies of the same entries should respect right-to-left ordering of the indexes, then the resulting tableaux are guaranteed to be Littlewood–Richardson tableaux.
The Littlewood–Richardson rule as stated above gives a combinatorial expression for individual Littlewood–Richardson coefficients, but gives no indication of a practical method to enumerate the Littlewood–Richardson tableaux in order to find the values of these coefficients. Indeed, for given $\lambda, \mu, \nu$ there is no simple criterion to determine whether any Littlewood–Richardson tableaux of shape $\nu/\lambda$ and of weight $\mu$ exist at all (although there are a number of necessary conditions, the simplest of which is $|\lambda| + |\mu| = |\nu|$); therefore it seems inevitable that in some cases one has to go through an elaborate search, only to find that no solutions exist.
Nevertheless, the rule leads to a quite efficient procedure to determine the full decomposition of a product of Schur functions, in other words to determine all coefficients $c_{\lambda,\mu}^{\nu}$ for fixed λ and μ, but varying ν. This fixes the weight of the Littlewood–Richardson tableaux to be constructed and the "inner part" λ of their shape, but leaves the "outer part" ν free. Since the weight is known, the set of indexed entries in the geometric description is fixed. Now for successive indexed entries, all possible positions allowed by the geometric restrictions can be tried in a backtracking search. The entries can be tried in increasing order, while among equal entries they can be tried by decreasing index. The latter point is the key to efficiency of the search procedure: the entry i[j] is then restricted to be in a column to the right of $i[j+1]$, but no further to the right than $(i-1)[j]$ (if such entries are present). This strongly restricts the set of possible positions, but always leaves at least one valid position for $i[j]$; thus every placement of an entry will give rise to at least one complete Littlewood–Richardson tableau, and the search tree contains no dead ends.
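For small partitions, the brute-force counter above can stand in for this backtracking search to produce a full product decomposition (a sketch; the partitions helper is illustrative, and this does not implement the efficient search just described):

    def partitions(n, max_part=None):
        """Generate all partitions of n as weakly decreasing tuples."""
        if max_part is None or max_part > n:
            max_part = n
        if n == 0:
            yield ()
            return
        for k in range(max_part, 0, -1):
            for rest in partitions(n - k, k):
                yield (k,) + rest

    # Decompose s_(21) * s_(21): try every nu with |nu| = 3 + 3 = 6.
    for nu in partitions(6):
        c = lr_coefficient((2, 1), (2, 1), nu)
        if c:
            print(nu, c)
    # -> (4,2) 1, (4,1,1) 1, (3,3) 1, (3,2,1) 2,
    #    (3,1,1,1) 1, (2,2,2) 1, (2,2,1,1) 1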
A similar method can be used to find all coefficients $c_{\lambda,\mu}^{\nu}$ for fixed λ and ν, but varying μ.
The Littlewood–Richardson coefficients $c_{\lambda\mu}^{\nu}$ appear in the following interrelated ways:
Pieri's formula, which is the special case of the Littlewood–Richardson rule in the case when one of the partitions has only one part, states that

$S_\mu \cdot S_n = \sum_\lambda S_\lambda,$

where $S_n$ is the Schur function of a partition with one row and the sum is over all partitions λ obtained from μ by adding n elements to its Ferrers diagram, no two in the same column.
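Pieri's formula can be verified in small cases with the brute-force counter from the example above (illustrative; reusing lr_coefficient and partitions):

    # s_(21) * s_(2): Pieri predicts exactly the lambda obtained from (2,1)
    # by adding two boxes, no two in the same column.
    for lam in partitions(5):
        if lr_coefficient((2, 1), (2,), lam):
            print(lam)
    # -> (4, 1), (3, 2), (3, 1, 1), (2, 2, 1)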
If both partitions are rectangular in shape, the sum is also multiplicity free (Okada 1998). Fix a, b, p, and q positive integers with p ≥ q. Denote by $(a^p)$ the partition with p parts of length a. The partitions indexing nontrivial components of $s_{(a^p)} s_{(b^q)}$ are those partitions $\lambda$ with length $\leq p + q$ such that
For example,
The reduced Kronecker coefficient of the symmetric group $\bar{C}_{\lambda,\mu,\nu}$ is a generalization of $c_{\lambda,\mu}^{\nu}$ to three arbitrary Young diagrams $\lambda, \mu, \nu$, which is symmetric under permutations of the three diagrams.
Zelevinsky (1981) extended the Littlewood–Richardson rule to skew Schur functions as follows:
where the sum is over all tableaux T on μ/ν such that for all j , the sequence of integers λ+ω( T ≥ j ) is non-increasing, and ω is the weight.
Newell–Littlewood numbers are defined from Littlewood–Richardson coefficients by the cubic expression [1]

$N_{\mu,\nu,\lambda} = \sum_{\alpha,\beta,\gamma} c_{\alpha,\beta}^{\mu}\, c_{\alpha,\gamma}^{\nu}\, c_{\beta,\gamma}^{\lambda}.$
Newell–Littlewood numbers give some of the tensor product multiplicities of finite-dimensional representations of classical Lie groups of the types $B, C, D$.
The non-vanishing condition on Young diagram sizes $c_{\lambda,\mu}^{\nu} \neq 0 \implies |\lambda| + |\mu| = |\nu|$ leads to

$N_{\mu,\nu,\lambda} \neq 0 \implies |\mu| + |\nu| + |\lambda| \equiv 0 \pmod 2.$
Newell–Littlewood numbers are generalizations of Littlewood–Richardson coefficients in the sense that

$N_{\mu,\nu,\lambda} = c_{\mu,\nu}^{\lambda} \quad \text{whenever} \quad |\mu| + |\nu| = |\lambda|.$
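The defining cubic sum can be evaluated directly for small partitions, and this reduction checked, reusing the helpers from the Littlewood–Richardson sketches above (newell_littlewood is an illustrative name; the index placement follows the formula as given here):

    def newell_littlewood(mu, nu, lam):
        """N_{mu,nu,lam} = sum over alpha, beta, gamma of
        c^mu_{alpha,beta} * c^nu_{alpha,gamma} * c^lam_{beta,gamma}."""
        sm, sn, sl = sum(mu), sum(nu), sum(lam)
        if (sm + sn + sl) % 2:
            return 0  # the parity condition above
        # sizes are forced by |alpha|+|beta| = |mu| and its two companions
        a, b, g = (sm + sn - sl) // 2, (sm + sl - sn) // 2, (sn + sl - sm) // 2
        if min(a, b, g) < 0:
            return 0
        return sum(lr_coefficient(alpha, beta, mu)
                   * lr_coefficient(alpha, gamma, nu)
                   * lr_coefficient(beta, gamma, lam)
                   for alpha in partitions(a)
                   for beta in partitions(b)
                   for gamma in partitions(g))

    # |mu| + |nu| = |lam| forces alpha to be empty, so N reduces to c:
    print(newell_littlewood((2, 1), (2, 1), (3, 2, 1)))  # -> 2
    print(lr_coefficient((2, 1), (2, 1), (3, 2, 1)))     # -> 2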
Newell–Littlewood numbers that involve a Young diagram with only one row obey a Pieri-type rule: $N_{(k),\mu,\nu}$ is the number of ways to remove $\frac{k + |\mu| - |\nu|}{2}$ boxes from $\mu$ (from different columns), then add $\frac{k - |\mu| + |\nu|}{2}$ boxes (to different columns) to make $\nu$. [1]
Newell–Littlewood numbers are the structure constants of an associative and commutative algebra whose basis elements are partitions, with the product $\mu \times \nu = \sum_\lambda N_{\mu,\nu,\lambda}\, \lambda$. For example,
The examples of Littlewood–Richardson coefficients below are given in terms of products of Schur polynomials $S_\pi$, indexed by partitions π, using the formula

$S_\lambda S_\mu = \sum_\nu c_{\lambda\mu}^{\nu} S_\nu.$
All coefficients with $|\nu|$ at most 4 are given by:
Most of the coefficients for small partitions are 0 or 1, which happens in particular whenever one of the factors is of the form $S_n$ or $S_{11\cdots1}$, because of Pieri's formula and its transposed counterpart. The simplest example with a coefficient larger than 1 happens when neither of the factors has this form:

$S_{21} S_{21} = S_{42} + S_{411} + S_{33} + 2S_{321} + S_{3111} + S_{222} + S_{2211}.$
For larger partitions the coefficients become more complicated. For example,
The original example given by Littlewood & Richardson (1934 , p. 122-124) was (after correcting for 3 tableaux they found but forgot to include in the final sum)
with 26 terms coming from the following 34 tableaux:
Calculating skew Schur functions is similar.
For example, the 15 Littlewood–Richardson tableaux for ν=5432 and λ=331 are
so $S_{5432/331} = \sum c_{\lambda\mu}^{\nu} S_\mu = S_{52} + S_{511} + S_{4111} + S_{2221} + 2S_{43} + 2S_{3211} + 2S_{322} + 2S_{331} + 3S_{421}$ (Fulton 1997, p. 64). | https://en.wikipedia.org/wiki/Littlewood–Richardson_rule |
In condensed matter physics, the Little–Parks effect was discovered in 1962 by William A. Little and Ronald D. Parks in experiments with empty and thin-walled superconducting cylinders subjected to a parallel magnetic field. [1] It was one of the first experiments to indicate the importance of the Cooper-pairing principle in BCS theory. [2]
The essence of the Little–Parks effect is the slight suppression of the cylinder's superconductivity by a persistent current.
The electrical resistance of such cylinders shows a periodic oscillation with the magnetic flux piercing the cylinder, the period being
$\frac{h}{2e} \approx 2.07 \times 10^{-15}\ \mathrm{T \cdot m^2}$
where h is the Planck constant and e is the elementary charge. The explanation provided by Little and Parks is that the resistance oscillation reflects a more fundamental phenomenon, i.e. the periodic oscillation of the superconducting critical temperature T_c.
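The quoted value follows directly from the defining constants (a trivial check in Python; the constants are the exact values fixed by the 2019 SI redefinition):

    h = 6.62607015e-34   # Planck constant, J*s (exact since 2019)
    e = 1.602176634e-19  # elementary charge, C (exact since 2019)
    print(h / (2 * e))   # -> 2.0678...e-15 Wb, i.e. T*m^2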
The Little–Parks effect consists in a periodic variation of T_c with the magnetic flux, which is the product of the magnetic field (coaxial) and the cross-sectional area of the cylinder. T_c depends on the kinetic energy of the superconducting electrons. More precisely, T_c is the temperature at which the free energies of normal and superconducting electrons are equal, for a given magnetic field. To understand the periodic oscillation of T_c, which constitutes the Little–Parks effect, one needs to understand the periodic variation of the kinetic energy. The kinetic energy oscillates because the applied magnetic flux increases the kinetic energy while superconducting vortices, periodically entering the cylinder, compensate for the flux effect and reduce the kinetic energy. [1] Thus, the periodic oscillation of the kinetic energy and the related periodic oscillation of the critical temperature occur together.
The Little–Parks effect is a result of collective quantum behavior of superconducting electrons. It reflects the general fact that it is the fluxoid rather than the flux which is quantized in superconductors. [ 3 ]
The Little–Parks effect can be seen as a result of the requirement that quantum physics be invariant with respect to the gauge choice for the electromagnetic potential , of which the magnetic vector potential A forms part.
Electromagnetic theory implies that a particle with electric charge q travelling along some path P in a region with zero magnetic field B, but non-zero A (by $\mathbf{B} = 0 = \nabla \times \mathbf{A}$), acquires a phase shift φ, given in SI units by

$\varphi = \frac{q}{\hbar} \int_P \mathbf{A} \cdot d\mathbf{x}.$
In a superconductor, the electrons form a quantum superconducting condensate, called a Bardeen–Cooper–Schrieffer (BCS) condensate. In the BCS condensate all electrons behave coherently, i.e. as one particle. Thus the phase of the collective BCS wavefunction behaves under the influence of the vector potential A in the same way as the phase of a single electron. Therefore, the BCS condensate flowing around a closed path in a multiply connected superconducting sample acquires a phase difference Δφ determined by the magnetic flux $\Phi_B$ through the area enclosed by the path (via Stokes' theorem and $\nabla \times \mathbf{A} = \mathbf{B}$), and given by:
$\Delta\varphi = \frac{q \Phi_B}{\hbar}.$
This phase effect is responsible for the quantized-flux requirement and the Little–Parks effect in superconducting loops and empty cylinders. The quantization occurs because the superconducting wave function must be single valued in a loop or an empty superconducting cylinder: its phase difference Δ φ around a closed loop must be an integer multiple of 2 π , with the charge q = 2 e for the BCS electronic superconducting pairs.
If the period of the Little–Parks oscillations is 2π with respect to the superconducting phase variable, from the formula above it follows that the period with respect to the magnetic flux is the same as the magnetic flux quantum , namely
$\Delta\Phi_B = \frac{2\pi\hbar}{2e} = \frac{h}{2e}.$
Little–Parks oscillations are a widely used proof mechanism of Cooper pairing. A good example is the study of the superconductor–insulator transition. [4] [5] [2]
The challenge here is to separate Little–Parks oscillations from weak (anti-)localization, as in the results of Altshuler et al., who observed the Aharonov–Bohm effect in a dirty metallic film.
Fritz London predicted that the fluxoid is quantized in a multiply connected superconductor. It was shown experimentally [6] that the trapped magnetic flux exists only in discrete quantum units h/2e. Deaver and Fairbank were able to achieve an accuracy of 20–30% because of the wall thickness of the cylinder.
Little and Parks examined a "thin-walled" (Materials: Al, In, Pb, Sn and Sn–In alloys) cylinder (diameter was about 1 micron) at T very close to the transition temperature in an applied magnetic field in the axial direction. They found magnetoresistance oscillations with the period consistent with h /2 e .
What they actually measured was infinitesimally small changes of resistance versus temperature for (different) constant magnetic fields. The figure to the right shows instead measurements of the resistance for varying applied magnetic field, which corresponds to varying magnetic flux, with the different colors (probably) representing different temperatures. | https://en.wikipedia.org/wiki/Little–Parks_effect |
The littoral zone, also called litoral or nearshore, is the part of a sea, lake, or river that is close to the shore. [1] In coastal ecology, the littoral zone includes the intertidal zone (known as the foreshore), extending from the high water mark, which is rarely inundated, to coastal areas that are permanently submerged, and the terms are often used interchangeably. However, the geographical meaning of littoral zone extends well beyond the intertidal zone to include all neritic waters within the bounds of continental shelves.
The word littoral may be used both as a noun and as an adjective . It derives from the Latin noun litus, litoris , meaning "shore". (The doubled t is a late-medieval innovation, and the word is sometimes seen in the more classical-looking spelling litoral .) [ 2 ]
The term has no single definition. What is regarded as the full extent of the littoral zone, and the way the littoral zone is divided into subregions, varies in different contexts. For lakes, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. [ 1 ] The use of the term also varies from one part of the world to another, and between different disciplines. For example, military commanders speak of the littoral in ways that are quite different from the definition used by marine biologists .
The adjacency of water gives a number of distinctive characteristics to littoral regions. The erosive power of water results in particular types of landforms, such as sand dunes and estuaries. The natural movement of the littoral along the coast is called the littoral drift. Biologically, the ready availability of water enables a greater variety of plant and animal life, and particularly the formation of extensive wetlands. In addition, the additional local humidity due to evaporation usually creates a microclimate supporting unique types of organisms.
In oceanography and marine biology , the idea of the littoral zone is extended roughly to the edge of the continental shelf . Starting from the shoreline, the littoral zone begins at the spray region just above the high tide mark. From here, it moves to the intertidal region between the high and low water marks, and then out as far as the edge of the continental shelf . These three subregions are called, in order, the supralittoral zone , the eulittoral zone , and the sublittoral zone .
The supralittoral zone (also called the splash , spray or supratidal zone ) is the area above the spring high tide line that is regularly splashed, but not submerged by ocean water. Seawater penetrates these elevated areas only during storms with high tides. Organisms that live here must cope with exposure to fresh water from rain, cold, heat, dryness and predation by land animals and seabirds. At the top of this area, patches of dark lichens can appear as crusts on rocks. Some types of periwinkles , Neritidae and detritus feeding Isopoda commonly inhabit the lower supralittoral. [ 3 ]
The eulittoral zone (also called the midlittoral or mediolittoral zone ) is the intertidal zone , known also as the foreshore . It extends from the spring high tide line, which is rarely inundated, to the spring low tide line, which is rarely not inundated. It is alternately exposed and submerged once or twice daily. Organisms living here must be able to withstand the varying conditions of temperature, light, and salinity. Despite this, productivity is high in this zone. The wave action and turbulence of recurring tides shape and reform cliffs, gaps and caves, offering a huge range of habitats for sedentary organisms. Protected rocky shorelines usually show a narrow, almost homogenous, eulittoral strip, often marked by the presence of barnacles . Exposed sites show a wider extension and are often divided into further zones. For more on this, see intertidal ecology .
The sublittoral zone starts immediately below the eulittoral zone. This zone is permanently covered with seawater and is approximately equivalent to the neritic zone .
In physical oceanography , the sublittoral zone refers to coastal regions with significant tidal flows and energy dissipation, including non-linear flows, internal waves , river outflows and oceanic fronts. In practice, this typically extends to the edge of the continental shelf , with depths around 200 meters.
In marine biology, the sublittoral zone refers to the areas where sunlight reaches the ocean floor, that is, where the water is never so deep as to take it out of the photic zone . This results in high primary production and makes the sublittoral zone the location of the majority of sea life. As in physical oceanography, this zone typically extends to the edge of the continental shelf . The benthic zone in the sublittoral is much more stable than in the intertidal zone; temperature, water pressure, and the amount of sunlight remain fairly constant. Sublittoral corals do not have to deal with as much change as intertidal corals. Corals can live in both zones, but they are more common in the sublittoral zone.
Within the sublittoral, marine biologists also identify the following:
Shallower regions of the sublittoral zone, extending not far from the shore, are sometimes referred to as the subtidal zone .
Many vertebrates (e.g., mammals, waterfowl, reptiles) and invertebrates (insects, etc.) use both the littoral zone as well as the terrestrial ecosystem for food and habitat. Biota that are commonly assumed to reside in the pelagic zone often rely heavily on resources from the littoral zone. [ 4 ] Littoral areas of ponds and lakes are typically better oxygenated, structurally more complex, and afford more abundant and diverse food resources than do profundal sediments. All these factors lead to a high diversity of insects and very complex trophic interactions. [ 4 ]
The great lakes of the world represent a global heritage of surface freshwater and aquatic biodiversity. Species lists for 14 of the world's largest lakes reveal that 15% of the global diversity (the total number of species) of freshwater fishes, 9% of non-insect freshwater invertebrate diversity, and 2% of aquatic insect diversity live in this handful of lakes. The vast majority (more than 93%) of species inhabit the shallow, nearshore littoral zone, and 72% are completely restricted to the littoral zone, even though littoral habitats are a small fraction of total lake areas. [ 5 ]
Because the littoral zone is important for many recreational and industrial purposes, it is often severely affected by many human activities that increase nutrient loading, spread invasive species, cause acidification and climate change , and produce increased fluctuations in water level. [ 4 ] Littoral zones are both more negatively affected by human activity and less intensively studied than offshore waters. Conservation of the remarkable biodiversity and biotic integrity of large lakes will require better integration of littoral zones into our understanding of lake ecosystem functioning and focused efforts to alleviate human impacts along the shoreline. [ 5 ]
In freshwater situations, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. [ 1 ] Sometimes other definitions are used. For example, the Minnesota Department of Natural Resources defines littoral as that portion of the lake that is less than 15 feet in depth. [ 6 ] Such fixed-depth definitions often do not accurately represent the true ecological zonation, but are sometimes used because they are simple measurements to make bathymetric maps or when there are no measurements of light penetration. The littoral zone comprises an estimated 78% of Earth's total lake area. [ 1 ]
The littoral zone may form a narrow or broad fringing wetland, with extensive areas of aquatic plants sorted by their tolerance to different water depths. Typically, four zones are recognized, from higher to lower on the shore: wooded wetland, wet meadow , marsh and aquatic vegetation . [ 7 ] The relative areas of these four types depends not only on the profile of the shoreline, but upon past water levels. The area of wet meadow is particularly dependent upon past water levels; [ 8 ] in general, the area of wet meadows along lakes and rivers increases with natural water level fluctuations. [ 9 ] [ 10 ] Many of the animals in lakes and rivers are dependent upon the wetlands of littoral zones, since the rooted plants provide habitat and food. Hence, a large and productive littoral zone is considered an important characteristic of a healthy lake or river. [ 8 ]
Littoral zones are at particular risk for two reasons. First, human settlement is often attracted to shorelines, and settlement often disrupts breeding habitats for littoral zone species. For example, many turtles are killed on roads when they leave the water to lay their eggs in upland sites. Fish can be negatively affected by docks and retaining walls which remove breeding habitat in shallow water. Some shoreline communities even deliberately try to remove wetlands since they may interfere with activities like swimming. Overall, the presence of human settlement has a demonstrated negative impact upon adjoining wetlands. [ 11 ] An equally serious problem is the tendency to stabilize lake or river levels with dams. Dams removed the spring flood, which carries nutrients into littoral zones and reduces the natural fluctuation of water levels upon which many wetland plants and animals depend. [ 12 ] [ 13 ] Hence, over time, dams can reduce the area of wetland from a broad littoral zone to a narrow band of vegetation. Marshes and wet meadows are at particular risk.
For the purposes of naval operations, the US Navy divides the littoral zone in the ways shown on the diagram at the top of this article. The US Army Corps of Engineers and the US Environmental Protection Agency have their own definitions, which have legal implications.
The UK Ministry of Defence defines the littoral as those land areas (and their adjacent areas and associated air space) that are susceptible to engagement and influence from the sea . [ 14 ] | https://en.wikipedia.org/wiki/Littoral_zone |
Littrow expansion and its counterpart Littrow compression are optical effects associated with slitless imaging spectrographs . These effects are named after Austrian physicist Otto von Littrow . [ 1 ]
In a slitless imaging spectrograph, light is focused with a conventional optical system, which includes a transmission or reflection grating as in a conventional spectrograph. This disperses the light, according to wavelength, in one direction; but no slit is interposed into the beam. For pointlike objects (such as distant stars) this results in a spectrum on the focal plane of the instrument for each imaged object. For distributed objects with emission-line spectra (such as the Sun in extreme ultraviolet ), it results in an image of the object at each wavelength of interest, overlapping on the focal plane, as in a spectroheliograph .
The Littrow expansion/compression effect is an anamorphic distortion of the single-wavelength image on the focal plane of the instrument, due to a geometric effect surrounding reflection or transmission at the grating. In particular, the angle of incidence $\theta_i$ and reflection $\theta_r$ from a flat mirror, measured from the direction normal to the mirror, have the relation

$\theta_r = \theta_i,$

which implies

$\frac{d\theta_r}{d\theta_i} = 1,$

so that an image encoded in the angle of collimated light is reversed but not distorted by the reflection.
In a spectrograph, the angle of reflection in the dispersed direction depends in a more complicated way on the angle of incidence:

$\sin\theta_r = \frac{n\lambda}{D} - \sin\theta_i,$

where $n$ is an integer and represents spectral order, $\lambda$ is the wavelength of interest, and $D$ is the line spacing of the grating. Because the sine function (and its inverse) are nonlinear, this in general means that

$\left|\frac{d\theta_r}{d\theta_i}\right| = \frac{\cos\theta_i}{\cos\theta_r} \neq 1$

for most values of $n$ and $\lambda/D$, yielding anamorphic distortion of the spectral image at each wavelength. When the magnitude is larger, images are expanded in the spectral direction; when the magnitude is smaller, they are compressed. [2]
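Numerically, the expansion or compression factor is easy to evaluate from the grating equation in the sign convention used above (a Python sketch; the sample value n*lambda/D = 0.8 is arbitrary, and the Littrow configuration is discussed below):

    import numpy as np

    def expansion_factor(theta_i, n_lam_over_D):
        """|d(theta_r)/d(theta_i)| for sin(theta_r) = n*lam/D - sin(theta_i)."""
        theta_r = np.arcsin(n_lam_over_D - np.sin(theta_i))
        return np.cos(theta_i) / np.cos(theta_r)

    n_lam_over_D = 0.8
    littrow = np.arcsin(n_lam_over_D / 2)                 # the Littrow angle
    print(expansion_factor(littrow, n_lam_over_D))        # -> 1.0
    print(expansion_factor(littrow + 0.1, n_lam_over_D))  # < 1: compression
    print(expansion_factor(littrow - 0.1, n_lam_over_D))  # > 1: expansion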
For the special case where

$\sin\theta_i = \frac{n\lambda}{2D},$

the reflected ray exits the grating exactly back along the incident ray, and $|d\theta_r/d\theta_i| = 1$; this is the Littrow configuration, [3] and the specific angle for which this configuration holds is the Littrow angle. This configuration preserves the image aspect ratio in the reflected beam. All other incidence angles yield either Littrow expansion or Littrow compression of the collimated image. | https://en.wikipedia.org/wiki/Littrow_expansion |
The lituus spiral (/ˈlɪtju.əs/) is a spiral in which the angle θ is inversely proportional to the square of the radius r.
This spiral, which has two branches depending on the sign of r, is asymptotic to the x axis. Its points of inflexion are at $\theta = \tfrac{1}{2},\ r = \pm\sqrt{2k}$.
The curve was named for the ancient Roman lituus by Roger Cotes in a collection of papers entitled Harmonia Mensurarum (1722), which was published six years after his death.
The representation of the lituus spiral in polar coordinates (r, θ) is given by the equation

$r^2\theta = k,$

where θ ≥ 0 and k ≠ 0.
The lituus spiral with the polar coordinates $r = a/\sqrt{\theta}$ can be converted to Cartesian coordinates like any other spiral with the relationships $x = r\cos\theta$ and $y = r\sin\theta$. With this conversion we get the parametric representations of the curve:

$x = \frac{a\cos\theta}{\sqrt{\theta}}, \qquad y = \frac{a\sin\theta}{\sqrt{\theta}}.$
These equations can in turn be rearranged to an equation in x and y:

$y = x \tan\left(\frac{a^2}{x^2 + y^2}\right).$
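The conversion is straightforward to compute, and the Cartesian form can be sanity-checked against it (a Python sketch; a = 1 and the theta range are arbitrary choices):

    import numpy as np

    a = 1.0
    theta = np.linspace(0.05, 10.0, 500)  # avoid theta = 0, where r diverges
    r = a / np.sqrt(theta)                # lituus: r**2 * theta = a**2
    x, y = r * np.cos(theta), r * np.sin(theta)
    # check the rearranged equation y = x * tan(a**2 / (x**2 + y**2)):
    assert np.allclose(y, x * np.tan(a**2 / (x**2 + y**2)))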
The curvature of the lituus spiral can be determined using the formula [1]

$\kappa = \frac{2\theta^{3/2}\,(4\theta^2 - 1)}{a\,(4\theta^2 + 1)^{3/2}}.$
In general, the arc length of the lituus spiral cannot be expressed as a closed-form expression , but the arc length of the lituus spiral can be represented as a formula using the Gaussian hypergeometric function :
where the arc length is measured from θ = θ 0 . [ 1 ]
The tangential angle of the lituus spiral can be determined using the formula [1]

$\varphi = \theta - \arctan(2\theta).$
| https://en.wikipedia.org/wiki/Lituus_(mathematics) |
Liu Boli ( Chinese : 刘伯里 ; 17 March 1931 – 2 July 2018) was a Chinese nuclear chemist and expert in radiopharmaceuticals , considered a founder of the field in China. He was a professor at Beijing Normal University and an academician of the Chinese Academy of Engineering .
Liu was born on 17 March 1931 in Changzhou , Jiangsu Province. [ 1 ] After graduating from Changzhou Senior High School , he studied at the Department of Chemistry of East China Normal University in Shanghai , and earned his bachelor's degree in 1953. [ 2 ] He was assigned to Beijing Normal University , where he worked and studied under Hu Zhibin ( 胡志彬 ). In 1958, he was transferred to the Institute of Nuclear Energy of the Chinese Academy of Sciences to study nuclear chemistry under Feng Xizhang ( 冯锡璋 ). It was a turning point in his career. [ 2 ]
In the 1960s, Liu and his colleagues were tasked with recycling nuclear fuels from China's nuclear reactors. [ 2 ] He worked under primitive conditions and was exposed to radiation for more than a decade, which caused his hair to turn gray before he was 40. [ 2 ]
Starting in 1974, Liu focused on the application of nuclear science in medical fields and the research and development of radiopharmaceuticals . [ 2 ] [ 3 ] He became a professor at Beijing Normal University and served as deputy chair of its chemistry department and director of its Institute of Applied Chemistry. [ 1 ] He made important discoveries in the properties of technetium-99m (99mTc), a radioactive isotope of technetium , and developed several medicines using 99mTc. He also researched radioactive isotopes of halogens , including bromine-82 , iodine-131 , and astatine-211 . [ 2 ]
Liu's research won many awards, including the National Science and Technology Conference Award (1979), the State Education Commission Science and Technology Progress Award, Second Class (1993 and 1998), and the State Science and Technology Progress Award , Second Class (1999). He was elected as an academician of the Chinese Academy of Engineering in 1997. [ 1 ] [ 2 ]
Liu died on 2 July 2018 in Beijing, at the age of 87. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Liu_Boli |
Liu Yuanfang (Chinese: 刘元方; born February 1931) is a Chinese nuclear chemist. He is a chemist of the Chinese Academy of Sciences (CAS) who is now Professor of Chemistry at Shanghai University. [citation needed] He has studied nuclear chemistry and radiochemistry for forty years and pioneered education in that field in China. [1]
| https://en.wikipedia.org/wiki/Liu_Yuanfang |
Liu Yunbin ( Chinese : 刘允斌 ; pinyin : Liú Yǔnbīn ) (1925 – 21 November 1967) was a Chinese nuclear chemist and the son of former president of China Liu Shaoqi .
Liu was born in 1925 in Anyuan District of Pingxiang to Liu Shaoqi and He Baozhen. When he was two years old, he was sent back to Liu Shaoqi's hometown in Ningxiang County, Hunan for foster care. In 1934, his mother was executed while in captivity by the Kuomintang. [1] [2]
In July 1938, the Chinese Communist Party brought Liu to Yan'an to reunite with his father. In the autumn of the same year, at the age of 13, Liu started studying at Yan'an Education Primary School. [ 1 ] [ 2 ]
In 1939, the Central Committee of the Chinese Communist Party decided to send a group of children of party revolutionaries to study in the Soviet Union. In November 1939 Liu and his sister Liu Aiqin arrived at the children's home in Monino, where Mao Zedong's sons Mao Anying and Mao Anqing were also living. While in the Soviet Union, Liu was known by the Russian name 'Klim' (Клим). [1] [2]
After one semester of study, Liu Yunbin moved to Interdom, 300 kilometers away from Moscow in the city of Ivanovo, and was sponsored by the International Red Aid of the Soviet Union. During his time at the school, he studied very hard, and in June 1941, following the start of the German invasion of the Soviet Union, Liu actively participated in the labors organized by the International Children's Institute, such as land reclamation, logging, and moving firewood. He also took the initiative to donate blood for the Soviet Red Army soldiers fighting on the frontline. He was selected as one of the leaders of the International Children's Institute student union, and soon joined the Komsomol and served as the head of its organization in the institute. [1] [2]
After graduating from high school in 1945, Liu was admitted to the Moscow Iron and Steel Institute, where he majored in smelting. During his time there, he joined the Communist Party of the Soviet Union, and after graduation he was admitted with honors to the Department of Chemistry at Moscow State University as a graduate student in radiochemistry. [1] He graduated in 1955 with an associate doctorate degree and entered the Moscow University Institute of Chemistry as a senior researcher. [2] In a 1955 letter to Liu Yunbin, Liu Shaoqi stated: [3]
"When your personal interest is in conflict with the interest of the Party, I believe you can sacrifice your own interest for the sake of the Party and the country."
In 1957, he returned to China, where he stayed at his father's residence in Zhongnanhai for a few days before moving to Fangshan County, about 50 kilometers from Beijing, to work at the China Institute of Atomic Energy (Institute 401), one of the earliest nuclear weapons research institutes in China. He made outstanding contributions to nuclear energy research and was awarded the title of associate researcher. [1] [2]
In 1959, when Sino-Soviet relations deteriorated , the Soviet Union refused to provide China with technical materials required to develop the atomic bomb . In 1961, researchers from the First Institute of the China Institute of Atomic Energy were transferred to the China Nuclear Fuel Component Factory (Factory 202) at Baotou , Inner Mongolia Autonomous Region , where they set up a second laboratory, which was responsible for the research into thermonuclear materials. [ 1 ] [ 2 ]
In the winter of 1962, Liu Yunbin arrived at the 202 Factory, where he was appointed by his superiors as the director of the Second Research Office. The office under his leadership began research and organizational work for the atomic bomb project and, on 16 October 1964, China's first atomic bomb was successfully detonated at the Lop Nur test site, which resulted in China becoming the fifth nuclear power in the world and the first Asian nation to possess nuclear capability. [ 1 ] [ 2 ] [ 4 ]
In 1966 the Cultural Revolution began, and Liu was sent to perform manual labor, tasked with cleaning, digging sewage ditches and other unskilled work. In July 1966, Liu Shaoqi was denounced as a "capitalist roader" and "traitor", and was removed from his position as Party deputy chairman, being replaced by Lin Biao. [1] [2]
As a consequence of the downfall of his father, Liu Yunbin was also condemned as a "spy" and "capitalist roader". He was beaten and abused by the Red Guards , who took him to an urban area in Baotou, where he was publicly humiliated at a denunciation rally . [ 1 ] On 21 November 1967, Liu took his own life by lying on the railroad tracks north of the residential area where his family lived. [ 2 ]
Liu's father died on 12 November 1969, in Kaifeng , due in part to maltreatment and torture in custody. Following the end of the Cultural Revolution in 1976 with the death of Mao Zedong , Liu Yunbin was posthumously rehabilitated and his reputation restored in 1978. In the same year, a solemn memorial service was conducted for him at the 202 Factory Club. Liu Shaoqi was posthumously rehabilitated in 1980. [ 1 ] [ 2 ]
On 16 April 2015, Russian Ambassador to China Andrey Denisov awarded the Jubilee Medal "70 Years of Victory in the Great Patriotic War 1941–1945" to 32 Chinese citizens, including Liu Yunbin, who was posthumously awarded the medal. [ 5 ]
During his time in the Soviet Union, Liu married a Russian woman named Mara Fedotova. The couple had two children: a son named Alexei (Russian: Алексей Климович Федотов, Alexei Klimovich Fedotov; Chinese: 刘维宁, Liu Weining) and a daughter named Sonya. [3]
After Liu returned to China in 1957, Mara moved to China with the children in 1959. Due to the tensions between China and the Soviet Union, Mara divorced Liu and returned to Moscow with the two children; the couple never saw each other again. Liu later married Li Miaoxiu, with whom he had two sons, Liu Weidong and Liu Weize. [1] [3]
Alexei, also known by the nickname Alyosha, did not publicly reveal himself as a grandson of Liu Shaoqi for fear of being spied on by the KGB when Sino-Soviet relations further deteriorated. After graduating from the Moscow Aviation Institute, he worked at the national aviation space center of the Soviet Union and Roscosmos for a number of years, his identity unknown until he was invited by the Government of China to attend the commemoration of the 100th anniversary of Liu Shaoqi's birth in 1998. [6]
His request to travel to China was rejected because his occupation involved military secrets. The denial made him all the more eager to travel to China, and he therefore retired from the Russian military early. When his request to travel to China was refused again, he filed a lawsuit, and in 2003 he managed to visit China for the first time, where he met Liu Shaoqi's living family members, including Liu Shaoqi's widow Wang Guangmei. [7] [8]
He decided to settle in China, where he presently manages an organization called 'Russian-Asian Union of Industrialists and Entrepreneurs' that facilitates trade between China and Russia in Guangzhou . Alexei and his wife have two children, including a daughter named Margarita who is currently serving as the vice chairman of the Russian-Asian Industrial Entrepreneurs Association and Russia-Philippine Business Council. [ 9 ] [ 10 ] [ 11 ]
Liu's daughter Sonya married a Russian American and is currently settled in the United States. [ 7 ] | https://en.wikipedia.org/wiki/Liu_Yunbin |
Liv Hornekær (born 1972 in Copenhagen) [1] is a Danish experimental physicist who works in nanotechnology and astrochemical research.
She is a professor at the Department of Physics and Astronomy at Aarhus University and head of the surface dynamics group at the department. [2] Her research mainly covers the interaction between hydrogen atoms and carbon-based surfaces. [3]
In 2016, she won the prestigious EliteForsk Prize, awarded by the Danish Ministry of Higher Education and Science. [1] In 2017 she was appointed Professor of Physics at Aarhus University, the first woman ever to hold the post, and in 2020 she was elected a member of the Royal Danish Academy of Sciences and Letters. [4]
| https://en.wikipedia.org/wiki/Liv_Hornekær |
Live-cell imaging is the study of living cells using time-lapse microscopy. It is used by scientists to obtain a better understanding of biological function through the study of cellular dynamics. [1] Live-cell imaging was pioneered in the first decade of the 20th century. One of the first time-lapse microcinematographic films of cells ever made was made by Julius Ries, showing the fertilization and development of the sea urchin egg. [2] Since then, several microscopy methods have been developed to study living cells in greater detail with less effort. A newer type of imaging using quantum dots has been used, as they are shown to be more stable. [3] The development of holotomographic microscopy has sidestepped phototoxicity and other staining-derived disadvantages by implementing digital staining based on cells' refractive index. [4] [5]
Biological systems exist as a complex interplay of countless cellular components interacting across four dimensions to produce the phenomenon called life. While it is common to reduce living organisms to non-living samples to accommodate traditional static imaging tools, the further the sample deviates from the native conditions, the more likely the delicate processes in question will exhibit perturbations. [ 6 ] The onerous task of capturing the true physiological identity of living tissue, therefore, requires high-resolution visualization across both space and time within the parent organism. [ 7 ] The technological advances of live-cell imaging, designed to provide spatiotemporal images of subcellular events in real time, serves an important role for corroborating the biological relevance of physiological changes observed during experimentation. Due to their contiguous relationship with physiological conditions, live-cell assays are considered the standard for probing complex and dynamic cellular events. [ 8 ] As dynamic processes such as migration , cell development , and intracellular trafficking increasingly become the focus of biological research, techniques capable of capturing 3-dimensional data in real time for cellular networks ( in situ ) and entire organisms ( in vivo ) will become indispensable tools in understanding biological systems. The general acceptance of live-cell imaging has led to a rapid expansion in the number of practitioners and established a need for increased spatial and temporal resolution without compromising the health of the cell. [ 9 ]
Before the introduction of the phase-contrast microscope , it was difficult to observe living cells. As living cells are translucent, they must be stained to be visible in a traditional light microscope . Unfortunately, the process of staining cells generally kills them. With the invention of the phase-contrast microscopy it became possible to observe unstained living cells in detail. After its introduction in the 1940s, live-cell imaging rapidly became popular using phase-contrast microscopy. [ 11 ] The phase-contrast microscope was popularized through a series of time-lapse movies (see video), recorded using a photographic film camera. [ 12 ] Its inventor, Frits Zernike , was awarded the Nobel Prize in 1953. [ 13 ] Other later phase-contrast techniques used to observe unstained cells are Hoffman modulation and differential interference contrast microscopy.
Phase-contrast microscopy does not have the capacity to observe specific proteins or other organic chemical compounds which form the complex machinery of a cell. Synthetic and organic fluorescent stains have therefore been developed to label such compounds, making them observable by fluorescent microscopy (see video). [15] Fluorescent stains are, however, phototoxic, invasive and bleach when observed. This limits their use when observing living cells over extended periods of time. Non-invasive phase-contrast techniques are therefore often used as a vital complement to fluorescent microscopy in live-cell imaging applications. [16] [17] Deep learning-assisted fluorescence microscopy methods, however, help to reduce the light burden and phototoxicity, and allow even repeated high-resolution live imaging. [18]
As a result of the rapid increase in pixel density of digital image sensors , quantitative phase-contrast microscopy has emerged as an alternative microscopy method for live-cell imaging. [ 20 ] [ 21 ] Quantitative phase-contrast microscopy has an advantage over fluorescent and phase-contrast microscopy in that it is both non-invasive and quantitative in its nature.
Due to the narrow focal depth of conventional microscopy, live-cell imaging is to a large extent currently limited to observing cells on a single plane. Most implementations of quantitative phase-contrast microscopy allow creating and focusing images at different focal planes from a single exposure. This opens up the future possibility of 3-dimensional live-cell imaging by means of fluorescence techniques. [ 22 ] Quantitative phase-contrast microscopy with rotational scanning allow 3D time-lapse images of living cells to be acquired at high resolution. [ 23 ] [ 24 ] [ 4 ]
Holotomography (HT) is a laser technique to measure three-dimensional refractive index (RI) tomogram of a microscopic sample such as biological cells and tissues. Because the RI can serve as an intrinsic imaging contrast for transparent or phase objects, measurements of RI tomograms can provide label-free quantitative imaging of microscopic phase objects. In order to measure 3D RI tomogram of samples, HT employs the principle of holographic imaging and inverse scattering . Typically, multiple 2D holographic images of a sample are measured at various illumination angles, employing the principle of interferometric imaging. Then, a 3D RI tomogram of the sample is reconstructed from these multiple 2D holographic images by inversely solving light scattering in the sample.
The principle of HT is very similar to X-ray computed tomography (CT), or CT scan . CT scan measures multiple 2D X-ray images of a human body at various illumination angles, and a 3D tomogram (X-ray absorptivity) is then retrieved using the inverse scattering theory. Both the X-ray CT and laser HT shares the same governing equation – Helmholtz equation , the wave equation for a monochromatic wavelength. HT is also known as optical diffraction tomography.
The combination of holography and rotational scanning allows long-term, label-free, live-cell recordings.
Non-invasive optical nanoscopy can achieve such a lateral resolution by using a quasi-2π-holographic detection scheme and complex deconvolution. The scattered spatial frequencies of the imaged cell are not directly interpretable by the human eye; they are converted into a hologram, which synthesizes a bandpass with double the normally available resolution. Holograms are recorded from different illumination directions on the sample plane and reveal sub-wavelength tomographic variations of the specimen. Nanoscale apertures serve to calibrate the tomographic reconstruction and to characterize the imaging system by means of the coherent transfer function. This gives rise to realistic inverse filtering and guarantees true complex field reconstruction. [24]
In conclusion, the two terminologies of (i) optical resolution (the physical one) and (ii) sampling resolution (the one on the screen) are kept separate in 3D holotomographic microscopy.
Live-cell imaging represents a careful compromise between acquiring the highest-resolution image and keeping the cells alive for as long as possible. [25] As a result, live-cell microscopists face a unique set of challenges that are often overlooked when working with fixed specimens. Moreover, live-cell imaging often employs special optical system and detector specifications. For example, ideally the microscopes used in live-cell imaging would have high signal-to-noise ratios and fast image acquisition rates to capture time-lapse video of extracellular events, while maintaining the long-term viability of the cells. [26] However, optimizing even a single facet of image acquisition can be resource-intensive and should be considered on a case-by-case basis.
In cases where extra space between the objective and the specimen is required to work with the sample, a dry lens can be used, potentially requiring additional adjustments of the correction collar, which changes the location of the lens in the objective, to account for differences in imaging chambers. Special objective lenses are designed with correction collars that correct for spherical aberrations while accounting for the cover-slip thickness. In high-numerical-aperture (NA) dry objective lenses, the correction collar adjustment ring changes the position of a movable lens group to account for differences in the way the outside of the lens focuses light relative to the center. Although lens aberrations are inherent in all lens designs, they become more problematic in dry lenses, where resolution retention is key. [ 27 ]
Oil immersion is a technique that can increase image resolution by immersing the lens and the specimen in oil with a high refractive index . Since light bends when it passes between media with different refractive indexes, by placing oil with the same refractive index as glass between the lens and the slide, two transitions between refractive indices can be avoided. [ 28 ] However, for most applications it is recommended that oil immersion be used with fixed (dead) specimens because live cells require an aqueous environment, and the mixing of oil and water can cause severe spherical aberrations. For some applications silicone oil can be used to produce more accurate image reconstructions. Silicone oil is an attractive medium because it has a refractive index that is close to that of living cells, allowing to produce high-resolution images while minimizing spherical aberrations. [ 27 ]
Live-cell imaging requires a sample in an aqueous environment that is often 50 to 200 micrometers away from the cover glass. Therefore, water-immersion lenses can help achieve a higher resolving power due to the fact that both the environment and the cells themselves will be close to the refractive index of water. Water-immersion lenses are designed to be compatible with the refractive index of water and usually have a corrective collar that allows adjustment of the objective. Additionally, because of the higher refractive index of water, water-immersion lenses have a high numerical aperture and can produce images superior to an oil-immersion lens when resolving planes deeper in the aqueous sample.
Another solution for live-cell imaging is the dipping lens. These lenses are a subset of water-immersion lenses that do not require a cover slip and can be dipped directly into the aqueous environment of the sample. One of the main advantages of the dipping lens is that it has a long effective working distance. [ 29 ] Since a cover slip is not required, this type of lens can approach the surface of the specimen, and as a result, the resolution is limited by the restraints imposed by spherical aberration rather than the physical limitations of the cover slip. Although dipping lenses can be very useful, they are not ideal for all experiments, since the act of "dipping" the lens can disturb the cells in the sample. Additionally, since the incubation chamber must be open to the lens, changes in the sample environment due to evaporation must be closely monitored. [ 27 ]
Today, most live imaging techniques rely on either high-illumination regimes or fluorescent labelling, both inducing phototoxicity and compromising the ability to keep cells unperturbed and alive over time. Since our knowledge of biology is driven by observation, it is key to minimize the perturbations induced by the imaging technique.
The rise of confocal microscopy is closely correlated with accessibility of high-power lasers, which are able to achieve high intensities of light excitation. However, the high-power output can damage sensitive fluorophores , so the lasers usually run significantly below their full power output. [ 30 ] Overexposure to light can result in photodamage due to photobleaching or phototoxicity . The effects of photobleaching can significantly reduce the quality of fluorescent images, and in recent years there has been a significant demand for longer-lasting commercial fluorophores. One solution, the Alexa Fluor series, show little to no fading even at high laser intensities. [ 31 ]
Under physiological conditions, many cells and tissue types are exposed to only low levels of light. [ 32 ] As a result, it is important to minimize the exposure of live cells to high doses of ultraviolet (UV), infrared (IR), or fluorescence exciting wavelengths of light, which can damage DNA , raise cellular temperatures, and cause photobleaching respectively. [ 33 ] High-energy photons absorbed by the fluorophores and the sample are emitted at longer wavelengths proportional to the Stokes shift . [ 34 ] However, cellular organelles can be damaged when the photon energy produces chemical and molecular changes rather than being re-emitted. [ 35 ] It is believed that the primary culprit in the light-induced toxicity experienced by live cells is a result of free radicals produced by the excitation of fluorescent molecules. [ 32 ] These free radicals are highly reactive and cause the destruction of cellular components, which can result in non-physiological behavior.
One method of minimizing photo-damage is to lower the oxygen concentration in the sample to avoid the formation of reactive oxygen species. [36] However, this method is not always possible in live-cell imaging and may require additional intervention. Another method for reducing the effects of free radicals in the sample is the use of antifade reagents. Unfortunately, most commercial antifade reagents cannot be used in live-cell imaging because of their toxicity. [37] Instead, natural free-radical scavengers such as vitamin C or vitamin E can be used without substantially altering physiological behavior on shorter time scales. [38] Phototoxicity-free live-cell imaging has recently been developed and commercialised. Holotomographic microscopy avoids phototoxicity thanks to its low-power laser (laser class 1: 0.2 mW/mm²). [4] [5] [39] | https://en.wikipedia.org/wiki/Live-cell_imaging |
LiveBench is a continuously running benchmark project for assessing the quality of protein structure prediction and secondary structure prediction methods. LiveBench focuses mainly on homology modeling and protein threading but also includes secondary structure prediction , comparing publicly available webserver output to newly deposited protein structures in the Protein Data Bank . Like the EVA project and unlike the related CASP and CAFASP experiments, LiveBench is intended to study the accuracy of predictions that would be obtained by non-expert users of publicly available prediction methods. A major advantage of LiveBench and EVA over CASP projects, which run once every two years, is their comparatively large data set.
| https://en.wikipedia.org/wiki/LiveBench |
LiveCode (formerly Revolution and MetaCard [ 3 ] ) is a cross-platform [ 4 ] rapid application development runtime system inspired by HyperCard . It features the LiveCode Script (formerly MetaTalk) programming language which belongs to the family of xTalk scripting languages like HyperCard 's HyperTalk . [ 5 ] [ 6 ]
The environment was introduced in 2001. [7] The "Revolution" development system was based on the MetaCard engine technology which Runtime Revolution later acquired from MetaCard Corporation in 2003. [8] [9] The platform won the Macworld Annual Editor's Choice Award for "Best Development Software" in 2004. [10] "Revolution" was renamed "LiveCode" in the fall of 2010. "LiveCode" is developed and sold by Runtime Revolution Ltd., based in Edinburgh, Scotland. In March 2015, the company was renamed "LiveCode Ltd." to unify the company name with the product. In April 2013, a free/open-source version, 'LiveCode Community Edition 6.0', was published after a successful crowdfunding campaign at Kickstarter, and the code base was re-licensed and made available as free and open-source software. [11]
LiveCode runs on iOS , Android , OS X , Windows 95 through Windows 10 , Raspberry Pi and several variations of Unix, including Linux, Solaris, and BSD. It can be used for mobile, desktop and server/CGI applications. The iOS (iPhone and iPad) version was released in December 2010. [ 12 ] [ 13 ] The first version to deploy to the Web was released in 2009. [ 14 ] It is the most widely used HyperCard/HyperTalk clone, [ citation needed ] and the only one that runs on all major operating systems.
A developer release of v.8 was announced in New York on March 12, 2015. This major enhancement to the product includes a new, separate development language, known as "LiveCode Builder", which is capable of creating new object classes called "widgets". In earlier versions, the set of object classes was fixed, and could be enhanced only via the use of ordinary procedural languages such as C. The new language, which runs in its own IDE , is a departure from the traditional x-talk paradigm in that it permits typing of variables. But the two environments are fully integrated, and apart from the ability to create new objects, development in LiveCode proceeds in the normal way, within the established IDE.
A second crowdfunding campaign to Bring HTML5 to LiveCode reached funding goals of nearly US$400,000 on July 31, 2014. LiveCode developer release 8.0 DP4 (August 31, 2015) was the first to include a standalone deployment option to HTML5 .
On 31 August 2021, starting with version 9.6.4, LiveCode Community edition, licensed under GPL , was discontinued. [ 2 ]
The LiveCode software creates applications that run in many supported environments, using a compile-free workflow. The same LiveCode code can run across multiple devices and platforms. LiveCode uses a high-level, English-like programming language called Transcript that is dynamically typed. The Transcript language and compile-free workflow produce code that is self-documenting and easy for casual programmers to comprehend. For example, if the following script were executed when the system clock was at 9:00:00 AM:
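The example script itself did not survive extraction; the following is a plausible reconstruction in LiveCode Script that is consistent with the described result (the mouseUp handler, the use of "the long time", and the one-second wait are assumptions, not the article's original code):

    on mouseUp
       repeat with i = 1 to 10
          -- append the current time as a new line of the first field
          put the long time & return after field 1
          wait 1 second with messages
       end repeat
    end mouseUp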
Ten lines will be loaded into the first text field (denoted as "field 1"), and seen as:
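Assuming the reconstructed script above, the field would read approximately:

    9:00:00 AM
    9:00:01 AM
    9:00:02 AM
    9:00:03 AM
    9:00:04 AM
    9:00:05 AM
    9:00:06 AM
    9:00:07 AM
    9:00:08 AM
    9:00:09 AM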
LiveCode's natural English-like syntax is easy for beginners to learn. Variables are typeless, and are typed at compile time based purely on context. This makes the language simple to read and maintain, with relatively minimal loss of speed. The language contains advanced features including associative arrays , [ 15 ] regular expressions , multimedia, support for a variety of SQL databases, and TCP/IP libraries. The LiveCode engine supports several common image formats (including BMP, PNG, GIF, and JPEG), anti-aliased vector graphics, HTML-style text hyperlinks, chained behaviors and embedded web browsers. Accessing these higher-level functions is designed to be straightforward.
LiveCode has around 2,950 built-in language terms and keywords, which may be extended by external libraries written in C and other lower level languages. [ 16 ] [ 17 ]
LiveCode project files are binary-compatible across platforms. They inherit each platform's look-and-feel and behaviors. Buttons, scroll bars, progress bars and menus behave as expected on the target platform without any intervention by the author of a LiveCode application.
Compiling a LiveCode "standalone" produces a single, executable file (minimum size ~1.5MB) for each platform targeted. There is no separate runtime necessary.
The Wikipedia article on HyperCard contains a more detailed discussion about the basics of a similar development environment and scripting language. Modern LiveCode is a vast superset of the former HyperCard yet retains its simplicity. LiveCode includes a number of features missing from the original HyperCard program, including multiple platform deployment, communication with external devices and many fundamental language extensions. The LiveCode toolkit, as compared to HyperCard, has the ability to access internet-based text and media resources, which allows the creation of internet-enabled desktop applications. [ 18 ]
iOS and Android targets are available in some versions.
Note: Complete Linux requirements for 4.5.x-6.x are the following: | https://en.wikipedia.org/wiki/LiveCode |
LivePerson is a global technology company that develops conversational commerce and AI software. [ 2 ]
Headquartered in New York City , LivePerson is best known as the developer of the Conversational Cloud, a software platform that allows consumers to message with brands.
In 2018, the company announced its AI offering, allowing customers to create AI-powered chatbots to answer consumer messages, alongside human customer service staff. [ 3 ]
LivePerson was founded in 1995 by Robert LoCascio . [ 4 ] In April 2000, the company completed an initial public offering on the NASDAQ . [ 5 ] In March 2011, its shares began trading on the Tel Aviv Stock Exchange and were included in the TA-100 Index and the TA BlueTech Index . [ 6 ] | https://en.wikipedia.org/wiki/LivePerson
Live blood analysis ( LBA ), live cell analysis , Hemaview or nutritional blood analysis is the use of high-resolution dark field microscopy to observe live blood cells. Live blood analysis is promoted by some alternative medicine practitioners, who assert that it can diagnose a range of diseases. There is no scientific evidence that live blood analysis is reliable or effective, and it has been described as a fraudulent means of convincing people that they are ill and should purchase dietary supplements . [ 1 ] [ 2 ] [ 3 ]
Live blood analysis is not accepted in laboratory practice and its validity as a laboratory test has not been established. [ 4 ] There is no scientific evidence for the validity of live blood analysis, [ 4 ] it has been described as a pseudoscientific, bogus and fraudulent medical test, [ 5 ] [ 6 ] and its practice has been dismissed by the medical profession as quackery . [ 7 ] The field of live blood microscopy is unregulated: there is no training requirement for practitioners, no recognised qualification, and no recognised medical validity to the results. Proponents have made false claims about both medical blood pathology testing and their own services, which some have refused to amend when instructed by the Advertising Standards Authority . [ 8 ]
It has its origins in the now-discarded theories of pleomorphism promoted by Günther Enderlein , notably in his 1925 book Bakterien-Cyklogenie .
In January 2014 prominent live blood proponent and teacher Robert O. Young was arrested and charged with practising medicine without a license, [ 9 ] and in March 2014 Errol Denton, a UK live blood practitioner and former student of his, was convicted on nine counts in a rare prosecution under the Cancer Act 1939 , [ 7 ] followed in May 2014 by another former student, Stephen Ferguson. [ citation needed ]
Proponents claim that live blood analysis provides information "about the state of the immune system, possible vitamin deficiencies, amount of toxicity, pH and mineral imbalance, areas of concern and weaknesses, fungus and yeast." Some even claim it can "spot cancer and other degenerative immune system diseases up to two years before they would otherwise be detectable" or say they can diagnose "lack of oxygen in the blood, low trace minerals, lack of exercise, too much alcohol or yeast, weak kidneys, bladder or spleen." [ 1 ] Practitioners include alternative medicine providers such as nutritionists, herbologists, naturopaths, and chiropractors. [ 4 ]
Dark field microscopy is useful to enhance contrast in unstained samples, but live blood analysis is not proven to be useful for any of its claimed indications. Two journal articles published in the alternative medical literature found that darkfield microscopy seemed unable to detect cancer, and that live blood analysis lacked reliability, reproducibility , and sensitivity and specificity . [ 10 ] [ 11 ] Edzard Ernst , professor of complementary medicine at the University of Exeter and University of Plymouth , notes: "No credible scientific studies have demonstrated the reliability of LBA for detecting any of the above conditions." Ernst describes live blood analysis as a "fraudulent" means of convincing patients to buy dietary supplements . [ 1 ] Quackwatch has been critical of live blood analysis, noting dishonesty in the claims brought forward by its proponents. [ 12 ] The alternative medicine popularizer Andrew Weil dismissed live blood analysis as "completely bogus", writing: "Dark-field microscopy combined with live blood analysis may sound like cutting-edge science, but it's old-fashioned hokum. Don't buy into it." [ 3 ]
There are several common diagnoses by the LBA practitioners that are actually based on observation of artifacts normally found in microscopy, and ignorance of basic biological science: [ 13 ] [ 14 ]
Acid in the blood: When the red blood cells stack on top of one another and appear like stacks of coins, it is called ' rouleaux ' formation. By observation of the rouleaux, the LBA practitioners diagnose 'acid in the blood', while other practitioners suggest a weak pancreas. Rouleaux of red blood cells under the microscope is an artifact that occurs when the blood sample at the edge of the coverslip [ 15 ] starts to dry out, when large numbers of red blood cells clump together, or when the blood starts to clot on contact with the glass. These artifacts are observed in only small, selected areas on the slide, while near the center of the slide the red blood cells are free floating. Blood acidosis is a severe illness and cannot be diagnosed by observation of blood, nor treated by dietary supplements. [ 13 ] [ 14 ]
In 1996, the Pennsylvania Department of Laboratories informed three Pennsylvania chiropractors that Infinity2's "Nutritional Blood Analysis" could not be used for diagnostic purposes unless they maintain a laboratory that has both state and federal certification for complex testing. [ 16 ]
In 2001, the Health and Human Services Office of the Inspector General issued a report on regulation of "unestablished laboratory tests" that focused on live blood cell analysis and the difficulty of regulating unestablished tests and laboratories. [ 4 ]
In 2002, an Australian naturopath was convicted and fined for falsely claiming that he could diagnose illness using live blood analysis [ 17 ] after the death of a patient. He was acquitted of manslaughter . He subsequently changed his name and was later banned from practice for life. [ 18 ]
In 2005, the Rhode Island Department of Health ordered a chiropractor to stop performing live blood analysis. An attorney for the State Board of Examiners in Chiropractic Medicine described the test as "useless" and a "money-making scheme... The point of it all is apparently to sell nutritional supplements." A state medical board official said that live blood analysis has no discernible value, and that the public "should be very suspicious of any practitioner who offers this test." [ 2 ]
In 2011, the UK General Medical Council suspended a doctor's licence to practise after he used live blood analysis to diagnose patients with Lyme disease . The doctor accepted he had been practising "bad medicine". [ 19 ]
In 2013, following several Advertising Standards Authority adjudications [ 20 ] against claims made by LBA practitioners, the Committee of Advertising Practice added new guidelines to their AdviceOnline database advising what LBA marketers may claim in their advertising material. These state that "CAP is yet to see any evidence for the efficacy of this therapy which, without rigorous evidence to support it, should be advertised on an availability-only platform." [ 21 ]
One of these practitioners, Errol Denton, who practised out of a serviced office in Harley Street, was prosecuted in December 2013 under the Cancer Act 1939 , and chose to use a Freeman on the Land defence. [ 22 ] On March 20, 2014, he was convicted on nine counts under the Cancer Act 1939 and fined £9,000 plus around £10,000 in costs. [ 7 ] [ 23 ] In April 2018, Denton was further convicted of two counts of "engaging in unfair commercial practice" and one of "selling food not of the quality demanded", for selling a bottle of colloidal silver drink to an undercover trading standards officer in February 2016, after examining a drop of her blood and claiming from it that she had dislocated her shoulder. [ 24 ] [ 25 ] He was made the subject of a Criminal Behaviour Order , fined £2,250, and ordered to pay £15,000 in costs. [ 26 ] | https://en.wikipedia.org/wiki/Live_blood_analysis
A live bottom trailer is a semi-trailer used for hauling loose material such as asphalt , grain , potatoes , sand and gravel . A live bottom trailer is the alternative to a dump truck or an end dump trailer. The typical live bottom trailer has a conveyor belt on the bottom of the trailer tub that pushes the material out of the back of the trailer at a controlled pace. Unlike the conventional dump truck, the tub does not have to be raised to deposit the materials.
The live bottom trailer is powered by a hydraulic system . When the operator engages the truck's hydraulic system, it activates the conveyor belt, moving the load horizontally out of the back of the trailer.
Live bottom trailers can haul a variety of products including gravel, potatoes, top soil, grain, carrots, sand, lime, peat moss, asphalt, compost, rip-rap, heavy rocks, biowaste, etc.
Those who work in industries such as agriculture and construction benefit from the speed of unloading, the versatility of the trailer, and the chassis mount.
The live bottom trailer eliminates trailer rollover because the tub does not have to be raised in the air to unload the materials. The trailer has a lower centre of gravity, which makes it easier to unload on uneven ground, compared to dump trailers, which have to be on level ground to unload.
Overhead electrical wires are a danger for the conventional dump trailer during unloading, but with a live bottom, wires are not a problem. The trailer can work anywhere that it can drive into because the tub does not have to be raised for unloading. In addition, the truck cannot be accidentally driven with the trailer raised, which has been a cause of a number of accidents, often involving collision with bridges, overpasses, or overhead/suspended traffic signs/lights.
The tub empties clean, making it easier for different materials to be transported without having to get inside the tub to clean it out. The conveyor belt allows the material to be dumped at a controlled pace so that the material can be partially unloaded where it is needed.
The rounded tub results in a lower centre of gravity which means a smoother ride and better handling than other trailers. Working under bridges and in confined areas is easier with a live bottom as opposed to a dump trailer because it can fit anywhere it can drive.
Wet or dry materials can be hauled in a live bottom trailer.
In a dump truck, wet materials stick to the top of the tub during unloading, which can cause trailer rollover. Insurance costs are lower for a live bottom trailer because it does not have to be raised in the air and there are few cases of trailer rollover.
Some live bottom trailers are not well suited for heavy rock and demolition. However, rip-rap, heavy rock, and asphalt can be hauled if the trailer is built with appropriately strong steels. | https://en.wikipedia.org/wiki/Live_bottom_trailer
The live crown is the top part of a tree , the part that has green leaves (as opposed to the bare trunk , bare branches , and dead leaves). The ratio of the length of a tree's live crown to the tree's total height is used in estimating its health and its level of competition with neighboring trees. [ 1 ] This is referred to as the Live Crown Ratio (LCR). [ 1 ]
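As an illustrative worked example (hypothetical figures, not from the cited source): a 30 m tall tree whose lowest live branches begin at 18 m has a live crown 12 m long, giving a live crown ratio of 12/30 = 0.40, or 40%.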
| https://en.wikipedia.org/wiki/Live_crown
In systems biology , live single-cell imaging is a live-cell imaging technique that combines traditional live-cell imaging and time-lapse microscopy techniques with automated cell tracking and feature extraction, drawing many techniques from high-content screening . It is used to study signalling dynamics and behaviour in populations of individual living cells. [ 1 ] [ 2 ] Live single-cell studies can reveal key behaviours that would otherwise be masked in population averaging experiments such as western blots . [ 3 ]
In a live single-cell imaging experiment, a fluorescent reporter is introduced into a cell line to measure the levels, localisation or activity of a signalling molecule. Subsequently, a population of cells is imaged over time with careful atmospheric control to maintain viability and reduce stress upon the cells. Automated cell tracking is then performed upon these time series images, following which filtering and quality control may be performed. Analysis of features describing the fluorescent reporter over time can then lead to modelling and the generation of biological conclusions, from which further experimentation can be guided.
The field of live single-cell imaging began with work demonstrating that green fluorescent protein (GFP), found in the jellyfish Aequorea victoria , could be expressed in living organisms. [ 4 ] This discovery allowed researchers to study the localisation and levels of proteins in living single cells, for example the activity of kinases , [ 5 ] and calcium levels, through the use of FRET reporters, [ 6 ] as well as numerous other phenotypes. [ 7 ]
Generally, these early studies focused on the localisation and behaviour of these fluorescently labelled proteins at the subcellular level over short periods of time. However, this changed with pioneering studies looking at the tumour suppressor p53 [ 8 ] and the stress and inflammation related protein NF-κB , [ 9 ] revealing their levels and localisation, respectively, to oscillate over periods of several hours. Live single-cell approaches were also applied around this time to understand signalling in single-cell organisms, including bacteria, where live studies allowed the dynamics of competence to be modelled, [ 10 ] and yeast, revealing the mechanism underpinning coherent cell cycle entry. [ 11 ]
In any live single-cell study, the first step is to introduce a reporter for the protein or molecule of interest into a suitable cell line. Much of the growth in the field has come from improved gene editing tools such as CRISPR , which have led to the development of a wide variety of fluorescent reporters. [ 12 ]
Fluorescent tagging uses a gene encoding a fluorescent protein that is inserted into the coding frame of the protein to be tagged. Texture and intensity features can be extracted from images of the tagged protein.
Molecules can also be tagged in vitro and introduced into the cell by electroporation . This enables the use of smaller and more photostable fluorophores but requires additional washing steps. [ 13 ]
By engineering expression of a FRET reporter such that donor and acceptor fluorophores are only in close proximity when an upstream signalling molecule is either active or inactive, the donor-to-acceptor fluorescence intensity ratio can be used as a measure of signalling activity. For example, in key early work, FRET reporters of Rho GTPase activity were engineered for live single-cell studies. [ 14 ]
Nuclear translocation reporters use engineered nuclear import and nuclear export signals, which can be inhibited by signalling molecules, to record signalling activity via the ratio of nuclear reporter to cytoplasmic reporter. [ 15 ]
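A minimal sketch of how both kinds of ratio readouts can be computed from segmented images, assuming registered channel images and per-cell boolean masks stored as NumPy arrays (the function and variable names are illustrative, not drawn from any particular package):

    import numpy as np

    def fret_ratio(donor_img, acceptor_img, cell_mask):
        """Mean acceptor-to-donor intensity ratio within one cell mask;
        tracks FRET reporter activity (the ratio direction is a convention)."""
        return acceptor_img[cell_mask].mean() / donor_img[cell_mask].mean()

    def nuclear_cytoplasmic_ratio(reporter_img, nuclear_mask, cytoplasm_mask):
        """Nuclear-to-cytoplasmic mean intensity; reads out translocation."""
        return reporter_img[nuclear_mask].mean() / reporter_img[cytoplasm_mask].mean()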
Live-cell imaging of fluorescently labelled cells must then be performed. This requires simultaneous incubation of cells in stress-free conditions whilst imaging is being performed. Several factors must be taken into account when choosing imaging conditions, such as phototoxicity, photobleaching , tracking ease, the rate of change of signalling activity, and the signal-to-noise ratio. These all relate to imaging frequency and illumination intensity.
Phototoxicity can result from exposure to large amounts of light over long periods of time. Cells become stressed, which can lead to apoptosis. High-frequency, high-intensity imaging can cause the fluorophore signal to decrease through photobleaching. Higher-frequency imaging generally makes automated cell tracking easier. Imaging frequencies should be able to capture the relevant changes in signalling activity. Low-intensity imaging or poor reporters may prevent low levels of signalling activity within the cell from being detected.
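The frequency/intensity trade-off can be made concrete with a toy first-order bleaching model (the 1% per-exposure loss is an arbitrary illustrative figure, not a measured constant):

    def remaining_signal(n_exposures, loss_per_exposure=0.01):
        """Fraction of initial fluorescence left after repeated exposures,
        assuming a fixed fractional loss per exposure."""
        return (1.0 - loss_per_exposure) ** n_exposures

    # Ten hours of imaging at 5-minute vs 30-second intervals:
    print(remaining_signal(120))   # ~0.30 of the signal remains
    print(remaining_signal(1200))  # ~6e-6: the reporter is essentially bleached away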
Following live-cell imaging, automated tracking software is then employed to extract time series data from videos of cells. Live-cell tracking is generally split into two steps: image segmentation of cells or their nuclei, and cell/nuclei tracking based on these segments. Many challenges still exist in this stage of a live single-cell imaging study. [ 16 ] However, recent progress has been highlighted by the field's first objective comparison of single-cell tracking techniques. [ 17 ]
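A minimal sketch of the two steps named above (segmentation, then frame-to-frame linking), assuming grayscale frames as NumPy arrays; real trackers handle cell division, occlusion and segmentation errors, which this toy greedy linker does not:

    import numpy as np
    from scipy import ndimage

    def segment_centroids(frame, threshold):
        """Label connected bright regions and return their centroids."""
        labels, n = ndimage.label(frame > threshold)
        return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

    def link_frames(prev_centroids, next_centroids, max_dist=20.0):
        """Greedy nearest-neighbour assignment between consecutive frames."""
        links = {}
        if len(next_centroids) == 0:
            return links
        for i, c in enumerate(prev_centroids):
            dists = np.linalg.norm(next_centroids - c, axis=1)
            j = int(dists.argmin())
            if dists[j] <= max_dist:
                links[i] = j  # cell i in frame t -> cell j in frame t+1
        return links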
Quantitative phase imaging (QPI) is particularly useful for live-cell tracking. As QPI is label-free, it does not induce phototoxicity, nor does it suffer from the photobleaching associated with fluorescence imaging. [ 18 ] QPI offers a significantly higher contrast than conventional phase imaging techniques, such as phase-contrast microscopy . The higher contrast facilitates more robust cell segmentation and tracking than achievable with conventional phase imaging. [ 19 ]
New techniques that use a combination of traditional image segmentation techniques and deep learning to segment cells are also becoming more widely used. [ 20 ]
In the final stage of a live single-cell imaging study, modelling and analysis of time series data extracted from tracked cells is performed. Pedigree tree profiles can be constructed to reveal heterogeneity in individual cell response and downstream signalling. [ 21 ] [ 22 ] Refining and compressing data from video-based single-cell tracking can provide relevant inputs for big data analysis, contributing to the identification of biomarkers for enhanced diagnosis and prognosis. [ 23 ] There is a large overlap between the analysis of live single-cell data and the modelling of biological systems using ordinary differential equations. Results from this key data analysis step will drive further experimentation, for example by perturbing aspects of the system being studied and then comparing signalling dynamics with those of the control population.
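As an indication of what such ODE models look like in practice, here is a minimal sketch (species names and parameter values are illustrative, not a published model) of a two-variable negative-feedback loop of the kind fitted to single-cell reporter traces; this minimal version relaxes to equilibrium through damped oscillations, and published models add delays or extra nonlinearities to sustain oscillation:

    from scipy.integrate import solve_ivp

    def negative_feedback(t, y, k_prod=1.0, k_fb=2.0, k_deg=0.5):
        a, b = y
        da = k_prod / (1.0 + b ** 2) - k_deg * a  # b represses production of a
        db = k_fb * a - k_deg * b                 # a activates production of b
        return [da, db]

    # Integrate from an arbitrary initial state and inspect the trajectory.
    sol = solve_ivp(negative_feedback, (0.0, 50.0), [0.1, 0.1], max_step=0.1)
    print(sol.y[0][-5:])  # tail of the simulated reporter level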
By analysing the signalling dynamics of single cells across entire populations, live single-cell studies are now letting us understand how these dynamics affect key cellular decision making processes. For example, live single-cell studies of the growth factor-activated kinase ERK revealed it to possess digital, all-or-nothing activation. [ 24 ] Moreover, this all-or-nothing activation was pulsatile, and the frequency of pulses in turn determined whether mammalian cells would commit to cell cycle entry or not. In another key example, live single-cell studies of CDK2 activity in mammalian cells demonstrated that a bifurcation in CDK2 activity following mitosis determined whether cells would continue to proliferate or enter a state of quiescence; [ 25 ] this has since been shown, using live single-cell methods, to be caused by stochastic DNA damage inducing upregulation of p21 , which inhibits CDK2 activity. [ 26 ] Moving forward, live single-cell studies will likely incorporate multiple reporters into single cell lines to allow complex decision making processes to be understood; however, challenges remain in scaling up live single-cell studies. | https://en.wikipedia.org/wiki/Live_single-cell_imaging
Live virus reference strain (LVRS) refers to a common strain of a virus that is selected for the manufacture of a preventative vaccine . It is most commonly used in reference to the seasonal Influenza vaccines developed by the Centers for Disease Control every year. However, it can also refer to other virus strains. [ 1 ]
Each year, with the assistance of the World Health Organization , the Centers for Disease Control in Atlanta, Georgia, select strains of Influenza virus that are most likely to provide effective immunization to a broad spectrum of individuals. [ 1 ]
Vaccine viruses are chosen to maximize the likelihood that the vaccine will protect against the viruses most likely to spread and cause illness among people during the upcoming flu season. WHO recommends specific vaccine viruses for influenza vaccine production, but then individual countries make their own decisions for licensing of vaccines in their country. In the United States, the Food and Drug Administration determines what viruses will be used in U.S.-licensed vaccines. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Live_virus_reference_strain |
Liver cytology is the branch of cytology that studies liver cells and their functions. The liver is a vital organ, central to much of the body’s metabolism. The main liver cells are hepatocytes , Kupffer cells , and hepatic stellate cells , each with a specific function.
Cytology is the name given to the branch of biology that deals with the formation, structure and functionality of cells. [ 1 ] Liver cytology specializes in the study of liver cells. The main liver cells are called hepatocytes ; however, other cells can be observed in a liver sample, such as Kupffer cells (macrophages). [ 2 ] The liver is the largest gland in the body. It has a wide variety of functions that range from the destruction of old blood cells to the control of the whole metabolism of macromolecules . [ 3 ] In the fetus , the liver works as a principal center for hematopoiesis , a function later taken over by the bone marrow . This hematopoietic function is not normally seen after birth; however, in certain pathological conditions it may reappear. It is important to note that the liver is an essential organ and the only one in the body with the ability to regenerate itself after surgery or damage. [ 4 ]
Since cytology deals with tissues, which are composed of cells, samples of tissues must be obtained in order to analyze the cells. There are several ways of obtaining a sample, the first is through dissecting a corpse, with a sample of tissue taken during an autopsy. The second is performing an aspirate (bone marrow, cerebrospinal fluid , etc.). To perform an aspirate in liquid tissues, a needle is inserted inside the body and a sample is extracted. Another common method is surgery, with a piece being removed during the procedure for later analysis. Finally, another common method is biopsy . In a biopsy, a needle is inserted into the skin and a solid sample of tissue is obtained. [ 2 ]
After the sample is obtained, it has to be processed by different methods depending on the nature of the sample. Liquid samples, such as blood, are extracted and dried out, while solid samples must be dehydrated using a series of alcohol solutions. [ 2 ] The tissue must also be stained , usually with haematoxylin and eosin , a pair of colorants that reveal the acidic or basic nature of cell components. [ 1 ] After this treatment the samples are analyzed under a microscope, which can be optical or electron-based, to determine whether the sample is normal or pathological. [ 2 ]
The hepatocytes are the parenchymal cells of the liver, which form the lobules . They are intimately associated with the sinusoids , which are a network of capillaries. Since they are metabolically active cells, their cytoplasm has many organelles . [ 5 ]
Hepatocytes are the main cells of the liver. They are large polyhedral cells, with six surfaces, three of which have a relevant function. The three relevant types of surfaces are sinusoidal , canalicular and intercellular . These surfaces are involved in the exchange of substances between the hepatocyte, the vessels and the bile canaliculi. The sinusoidal surfaces are separated from the sinusoids by the perisinusoidal space . They represent 70% of the total hepatocyte surface. They are coated by microvilli which project into the perisinusoidal space. These surfaces are where the exchange of substances between the hepatocytes and the sinusoids occurs. The canalicular surfaces are the ones through which bile drains from the hepatocytes to the canaliculi. They represent 15% of the surface of the cell. The cytoplasm of the hepatocyte near the canaliculi is rich in actin filaments, and these may be capable of modifying the canaliculi's diameter, thus influencing the flow; however, this is not yet proven. The intercellular surfaces are those between two adjacent hepatocytes that are not in contact with sinusoids or canaliculi (roughly the remaining 15% of the surface). These are simple surfaces specialized in cell adherence and in communication between hepatocytes through gap junctions. [ 6 ]
Hepatocytes measure between 20 and 30 μm in each dimension. They carry out the main functions of the liver, such as the metabolism of lipids , carbohydrates and proteins , as well as the processing of hormones and drugs. Hepatocytes constitute about 80% of the cell population of the liver, with the other 20% consisting of Kupffer cells, hepatic stellate cells , endothelial cells and mesothelial cells, which are not exclusive to the liver but are present in liver samples. [ 2 ]
Histologically speaking, hepatocytes have specific characteristics. Their nuclei are large and spheroidal, occupying the center of the cell. There is at least one nucleolus in each nucleus. In the adult liver, many of the cells are binucleated , and most of the hepatocytes are tetraploid , which means that they have four times the normal amount of DNA. Their average lifespan is approximately five months, and hepatocytes have a significant capacity to regenerate after parenchymal loss caused by toxic processes, diseases or surgery. Their cytoplasm is mostly acidophilic . Basophilic regions correspond to the RER and free ribosomes . Mitochondria are abundant in hepatocytes, from 800 to 1000 per cell. They can be detected using Janus green B or enzyme histochemistry. Hepatocytes possess multiple Golgi complexes , and have large numbers of peroxisomes , which can be detected with immunohistochemistry . The smooth endoplasmic reticulum can be extensive and may contain enzymes involved in the degradation and conjugation of toxins and drugs, and other enzymes involved in the synthesis of cholesterol and lipoproteins. [ 2 ]
Liver sinusoids are different from the rest of the body’s sinusoids since they have macrophage cells intercalated between their endothelial cells. Kupffer cells have a different embryological origin, coming from the myeloid line in the reticuloendothelial system (also called the mononuclear phagocyte system) and are related to the immune system. They first develop in the bone marrow and then migrate to the liver where they differentiate into Kupffer cells. In fact, they are the macrophages of the liver and are located in the sinusoids. Sinusoids are vascular channels that receive blood from terminal branches of the hepatic artery and portal vein and carry it to the central veins. The space located between the endothelium and the hepatocytes is known as the space of Disse. Histologically speaking, Kupffer cells are difficult to identify; however, they are easily seen if they contain stained particles that were phagocytosed. The main function of the Kupffer cells is the destruction of old blood cells that pass through the liver. [ 2 ]
In the perisinusoidal space, a different type of cell can be found. These cells are characteristic of the liver, since they are not found in any other tissue. These hepatic stellate cells, also named lipocytes, have lipid droplets in their cytosol . It is thought that these droplets store a fraction of the body’s vitamin A supply. Hepatic stellate cells rest on the trabeculae of Remak, and they send extensions to the sinusoids. [ 6 ] | https://en.wikipedia.org/wiki/Liver_cytology
A liver support system or diachysis is a type of therapeutic device to assist in performing the functions of the liver. Such systems focus either on removing the accumulating toxins ( liver dialysis ), or on providing additional replacement of the metabolic functions of the liver through the inclusion of hepatocytes in the device ( bioartificial liver device ). A diachysis machine is used for acute care, i.e. emergency care, as opposed to a dialysis machine, which is typically used over the longer term. These systems are being trialed to help people with acute liver failure (ALF) or acute-on-chronic liver failure. [ 1 ]
The primary functions of the liver include removing toxic substances from the blood, manufacturing blood proteins , storing energy in the form of glycogen , and secreting bile . The hepatocytes that perform these tasks can be killed or impaired by disease, resulting in acute liver failure (ALF), which can occur in a person with a previously diseased liver or in a previously healthy one.
In hyperacute and acute liver failure, the clinical picture develops rapidly, with progressive encephalopathy and multiorgan dysfunction such as hyperdynamic circulation , coagulopathy , acute kidney injury and respiratory insufficiency , severe metabolic alterations, and cerebral edema that can lead to brain death. [ 2 ] [ 3 ] In these cases the mortality without liver transplantation (LTx) ranges between 40% and 80%. [ 4 ] [ 5 ] LTx is the only effective treatment for these patients, although it requires a precise indication and timing to achieve good results. Nevertheless, due to the scarcity of organs for liver transplantation, it is estimated that one third of patients with ALF die while waiting to be transplanted. [ 6 ]
On the other hand, a patient with chronic hepatic disease can suffer acute decompensation of liver function following a precipitating event, such as variceal bleeding, sepsis or excessive alcohol intake, leading to a condition referred to as acute-on-chronic liver failure (ACLF).
Both types of hepatic insufficiency, ALF and ACLF, can potentially be reversible and liver functionality can return to a level similar to that prior to the insult or precipitating event.
LTx improves prognosis and survival in severe cases of ALF. Nevertheless, cost and donor scarcity have prompted researchers to look for new supportive treatments that can act as a “bridge” to the transplant procedure. By stabilizing the patient's clinical state, or by creating conditions that allow the recovery of native liver function, both detoxification and synthesis can improve after an episode of ALF or ACLF. [ 7 ]
Three different types of supportive therapy have been developed: bioartificial, artificial and hybrid liver support systems, as summarized below.
Artificial systems: Molecular adsorbent recirculating system (MARS); Single-pass albumin dialysis (SPAD); Fractionated plasma separation and adsorption system (Prometheus); Selective plasma filtration therapy.
Bioartificial systems: Extracorporeal liver assist device (ELAD); Bioartificial Liver (BAL, e.g. HepatAssist); Bioartificial Liver Support System (BLSS); Radial Flow Bioreactor (RFB).
Hybrid systems: TECA-Hybrid Artificial Liver Support System; Modular Extracorporeal Liver Support (MELS).
Bioartificial liver devices are experimental extracorporeal devices that use living cell lines to provide detoxification and synthesis support to the failing liver. The bioartificial liver (BAL) HepatAssist 2000 uses porcine hepatocytes, whereas the ELAD system employs hepatocytes derived from the human hepatoblastoma C3A cell line. [ 19 ] [ 20 ] In fulminant hepatic failure (FHF), both techniques can improve hepatic encephalopathy grade and biochemical parameters. Potential side effects that have been documented include immunological issues (porcine endogenous retrovirus transmission), infectious complications, and tumor transmigration.
Other biological hepatic systems are the Bioartificial Liver Support System (BLSS) and the Radial Flow Bioreactor (RFB). The detoxification capacity of these systems is poor and they must therefore be combined with other systems to mitigate this deficiency. Today, their use is limited to centers with extensive experience in their application. [ 21 ]
A bioartificial liver device (BAL) is an artificial extracorporeal liver support (ELS) system for an individual who is suffering from acute liver failure (ALF) or acute-on-chronic liver failure (ACLF). The fundamental difference between artificial and BAL systems lies in the inclusion of hepatocytes in the reactor, often operating alongside the purification circuits used in artificial ELS systems. The overall design varies between different BAL systems, but they largely follow the same basic structure, with patient blood or plasma flowing through an artificial matrix housing hepatocytes. Plasma is often separated from the patient’s blood to improve the efficiency of the system, and the device can be connected to artificial liver dialysis devices in order to further increase its effectiveness in the filtration of toxins. The inclusion of functioning hepatocytes in the reactor allows the restoration of some of the synthetic functions that the patient’s liver is lacking. [ 22 ]
The first bioartificial liver device was developed in 1993 by Dr. Achilles A. Demetriou at Cedars-Sinai Medical Center. The device, a 20-inch-long, 4-inch-wide plastic cylinder filled with cellulose fibers and pig liver cells, helped an 18-year-old southern California woman survive without her own liver for 14 hours until she received a human liver. Blood was routed outside the patient's body and through the artificial liver before being returned to the body. [ 23 ] [ 24 ]
Dr. Kenneth Matsumura's work on the BAL led it to be named an invention of the year by Time magazine in 2001. [ 25 ] Liver cells obtained from an animal were used instead of developing a piece of equipment for each function of the liver . The structure and function of the first device also resemble those of today's BALs. Animal liver cells are suspended in a solution and a patient's blood is processed through a semipermeable membrane that allows toxins and blood proteins to pass but restricts an immunological response. [ 25 ]
Advancements in bioengineering techniques in the decade after Matsumura's work have led to improved membranes and hepatocyte attachment systems. [ 26 ] Cell sources now include primary porcine hepatocytes, primary human hepatocytes, human hepatoblastoma (C3A), immortalized human cell lines and stem cells . [ 26 ]
The purpose of BAL-type devices is not to permanently replace liver functions , but to serve as a supportive device, [ 27 ] either allowing the liver to regenerate properly upon acute liver failure , or to bridge the individual's liver functions until a transplant is possible.
BALs are essentially bioreactors , with embedded hepatocytes (liver cells ) that perform the functions of a normal liver . They process oxygenated blood plasma , which is separated from the other blood constituents. [ 28 ] Several types of BALs are being developed, including hollow fiber systems and flat membrane sheet systems. [ 29 ]
Various types of hepatocytes are used in these devices. Porcine hepatocytes are often used due to ease of acquisition and cost; however, they are relatively unstable and carry the risk of cross-species disease transmission. [ 30 ] Primary human hepatocytes sourced from donor organs present several problems in their cost and difficulty to obtain, especially with the current lack of transplantable tissue. [ 30 ] In addition, questions have been raised about tissue collected from patients transmitting malignancy or infection via the BAL device. Several lines of human hepatocytes are also used in BAL devices, including the C3A and HepG2 tumour cell lines, but due to their origin from hepatomas, they possess the potential to pass on malignancy to the patient. [ 31 ] There is ongoing research into the cultivation of new types of human hepatocytes capable of improved longevity and efficacy in a bioreactor over currently used cell types, that do not pose the risk of transfer of malignancy or infection, such as the HepZ cell line created by Werner et al. [ 32 ]
Similar to kidney dialysis , hollow fiber systems employ a hollow fiber cartridge. Hepatocytes are suspended in a gel solution such as collagen , which is injected into a series of hollow fibers. In the case of collagen, the suspension is then gelled within the fibers, usually by a temperature change. The hepatocytes then contract the gel by their attachment to the collagen matrix, reducing the volume of the suspension and creating a flow space within the fibers. Nutrient media is circulated through the fibers to sustain the cells. During use, plasma is removed from the patient's blood. The patient's plasma is fed into the space surrounding the fibers. The fibers, which are composed of a semi-permeable membrane, facilitate transfer of toxins, nutrients and other chemicals between the blood and the suspended cells. The membrane also keeps immune bodies, such as immunoglobulins , from passing to the cells, to prevent immune system rejection. [ 33 ]
Currently, hollow-fibre bioreactors are the most commonly accepted design for clinical use due to their capillary-like network allowing easy perfusion of plasma across cell populations. [ 34 ] However, these structures have their limitations, with convective transport issues, nutritional gradients, non-uniform seeding, inefficient immobilisation of cells, and reduced hepatocyte growth restricting their effectiveness in BAL designs. [ 35 ] Researchers are now investigating the use of cryogels to replace hollow fibres as the cell carrier components in BAL systems.
Cryogels are super-macroporous three-dimensional polymers prepared at sub-zero temperatures by the freezing of a solution of cryogel precursors and solvent. The pores develop during this freezing process: as the cryogel solution cools, the solvent begins to form crystals. This causes the concentration of the cryogel precursors in the solution to increase, initiating the cryogelation process and forming the polymer walls. As the cryogel warms, the solvent crystals thaw, leaving cavities that form the pores. [ 36 ] Cryogel pores range in size from 10 to 100 μm, forming an interconnected network that mimics a capillary system with a very large surface area to volume ratio, supporting large numbers of immobilised cells. Convection-mediated transport is also supported by cryogels, enabling even distribution of nutrients and metabolite elimination, overcoming some of the shortcomings of hollow-fibre systems. [ 35 ] Cryogel scaffolds demonstrate good mechanical strength and biocompatibility without triggering an immune response, improving their potential for long-term inclusion in BAL devices or in-vitro use. [ 37 ] Another advantage of cryogels is their flexibility for use in a variety of tasks, including separation and purification of substances, along with acting as an extracellular matrix for cell growth and proliferation. Immobilisation of specific ligands onto cryogels enables adsorption of specific substances, supporting their use as treatment options for toxins, [ 38 ] for separation of haemoglobin from blood, [ 39 ] and as a localised and sustained method for drug delivery. [ 40 ]
Developing an effective bioartificial liver (BAL) remains a formidable challenge, as it necessitates the intricate optimization of cell colonization, biomaterial scaffold design, and BAL fluid dynamics. Expanding upon prior research indicating its potential as a blood perfusion device for detoxification, some studies have explored the application of Arg-Gly-Asp (RGD)-containing poly(2-hydroxyethyl methacrylate) (pHEMA)-alginate cryogels as scaffolds for BAL. These cryogels, incorporating alginate to mitigate protein fouling and functionalized with an RGD-containing peptide to enhance hepatocyte adhesion, represent a promising avenue for BAL scaffold development. Methods for characterizing internal flow within the porous cryogel matrix, such as particle image velocimetry (PIV), enable visualization of flow dynamics. PIV analysis revealed the laminar flow characteristics within cryogel pores, prompting the design of a multi-layered bioreactor consisting of spaced cryogel discs to optimize blood/hepatocyte mass exchange. Compared to the column configuration, the stacked bioreactor demonstrated significantly elevated production of albumin and urea, alongside enhanced cell colonization and proliferation over time. [ 35 ]
Recent developments in bioartificial liver (BAL) using living liver cells have shown promising advancements in the field of liver support and regeneration. These developments focus on utilizing various cell sources, scaffold materials, and bioreactor designs to enhance the functionality and viability of BAL systems. Key advancements include:
Cell Sources: Researchers have explored different cell sources for BAL, including primary hepatocytes, stem cell-derived hepatocyte-like cells, and immortalized liver cell lines. Efforts have been made to optimize cell culture conditions to maintain cell viability and functionality within BAL systems.
Scaffold Materials: Biomaterial scaffolds play a critical role in providing structural support and facilitating cell attachment and proliferation in BAL systems. Recent studies have investigated the use of natural and synthetic materials, such as hydrogels, alginate, and decellularized liver scaffolds, to create biomimetic environments conducive to liver cell growth and function.
Bioreactor Designs: Innovative bioreactor designs have been developed to enhance the performance of BAL systems by optimizing mass transfer, fluid dynamics, and cell-matrix interactions. These designs include perfusion-based bioreactors, microfluidic devices, and three-dimensional (3D) bioprinted constructs, which aim to mimic the physiological microenvironment of the liver and promote liver cell function and survival.
Functional Assessment: Advances in bioanalytical techniques have enabled researchers to assess the functionality of liver cells within BAL systems more accurately. These techniques include measuring the secretion of liver-specific biomarkers, such as albumin, urea, and bile acids, as well as evaluating metabolic activity, drug metabolism, and detoxification capacity.
There have been numerous clinical studies involving hollow-fibre bioreactors. Overall, they show promise but do not provide statistically significant evidence supporting their effectiveness. This is generally due to inherent design limitations, causing convective transport issues, nutritional gradients, non-uniform seeding, inefficient immobilisation of cells, and reduced hepatocyte growth. [ 35 ] As of writing, no cryogel-based devices have entered clinical trials, although laboratory results have been promising [ 35 ] [ 41 ] and clinical trials may follow.
The HepatAssist , developed at the Cedars-Sinai Medical Center, is a BAL device containing porcine hepatocytes within a hollow-fibre bioreactor. These semi-permeable fibres act as capillaries, allowing the perfusion of plasma through the device, and across the hepatocytes surrounding the fibres. The system incorporates a charcoal column to act as a filter, removing additional toxins from the plasma. [ 42 ]
Demetriou et al. [ 42 ] carried out a large, randomised, multicentre, controlled trial on the safety and efficacy of the HepatAssist device. 171 patients with ALF stemming from viral hepatitis, paracetamol overdose or other drug complications, primary non-function (PNF), or of indeterminate aetiology were involved in the study and were randomly assigned to either the experimental or control groups. The study found that at the primary end-point of 30 days post admission, there was an increased survival rate in BAL patients over control patients (71% vs 62%), but the difference was not significant. However, when patients with PNF are excluded from the results there is a 44% reduction in mortality for BAL-treated patients, a statistically significant advantage. The investigators noted that exclusion of PNF patients is justifiable due to early retransplantation and lack of intracranial hypertension, so HepatAssist would give little benefit to this group. For the secondary end-point of time-to-death, in patients with ALF of known aetiology there was a significant difference between BAL and control groups, with BAL patients surviving for longer. There was no significant difference for patients of unknown aetiology, however.
The conclusions of the study suggest that such a device is potentially significant as a treatment measure. While the overall findings were not statistically significant, when the aetiology of the patients was taken into account the BAL group gained a statistically significant reduction in mortality over the control group. This suggests that while the device may not be applicable to patients as an overall treatment for liver dysfunction, it can provide an advantage when the heterogeneity of patients is considered and it is used with patients of specific aetiology.
The Extracorporeal Liver Assist Device (ELAD) is a human-cell based treatment system. A catheter removes blood from the patient, and an ultrafiltrate generator separates the plasma from the rest of the blood. This plasma is then run through a separate circuit containing cartridges filled with C3A cells, before being returned to the main circuit and re-entering the patient. [ 43 ]
Thompson et al. [ 43 ] performed a large open-label trial measuring the effectiveness of ELAD on patients with severe alcoholic hepatitis resulting in ACLF. Their study involved patients screened at 40 sites across the US, UK, and Australia, and enrolled a total of 203 patients. Patients were then randomised into either ELAD (n=96) or standard medical care (n=107) groups, with even distribution of patients in terms of sex, MELD score, and bilirubin levels. Of the 96 patients in the ELAD group, 45 completed the full 120 hours of treatment – the rest were unable to complete the full regimen for a variety of reasons, including withdrawal of consent or severe adverse events, though 37 completed >72 hours of treatment, with results showing minimal difference in mortality between those receiving either >72 hours or the full 120 hours of treatment. The study did not meet its primary goal, finding no statistically significant improvement in mortality rates for patients who received ELAD treatment over those receiving standard care at 28 and 91 days (76.0% versus 80.4% and 59.4% versus 61.7%, respectively). Biomarker measurements showed significantly reduced levels of bilirubin and alkaline phosphatase in ELAD patients, though neither improvement translated into increased survival rates. Outcomes for patients with MELD score <28 showed trends towards improved survival on ELAD, whereas those with MELD >28 had decreased survival on ELAD. These patients presented with raised creatinine from kidney failure, suggesting a reason why ELAD decreased survival relative to standard care. Unlike artificial ELS devices and HepatAssist, ELAD does not incorporate any filtration devices, such as charcoal columns and exchange resins. Therefore, it cannot replace the filtration capability of the kidneys and cannot compensate for multi-organ failure in more severe presentations of ACLF, resulting in increased mortality rates.
While the results of the study cannot provide conclusive evidence to suggest that a BAL device like ELAD improves the outcome of severe ACLF, it does suggest that it can aid the survival of patients with a less severe form of the disease. In those patients with a MELD <28, beneficial effects were seen 2–3 weeks post treatment, suggesting that while C3A incorporating BAL devices are unable to provide short-term aid like artificial albumin filtration devices, they instead provide more long-term aid in recovery of the patient’s liver. [ 43 ]
A randomized, phase 3 trial of the ELAD device in patients with severe alcoholic hepatitis failed to show benefit on overall survival and development was discontinued. [ 44 ]
Artificial liver support systems aim to temporarily replace native liver detoxification functions, using albumin as a scavenger molecule to clear the toxins involved in the pathophysiology of the failing liver. Most of the toxins that accumulate in the plasma of patients with liver insufficiency are protein bound, and therefore conventional renal dialysis techniques, such as hemofiltration , hemodialysis or hemodiafiltration , are not able to eliminate them adequately.
Liver dialysis has shown promise for patients with hepatorenal syndrome . It is similar to hemodialysis and based on the same principles, but hemodialysis does not remove toxins bound to albumin that accumulate in liver failure. Like a bioartificial liver device , it is a form of artificial extracorporeal liver support. [ 45 ] [ 46 ]
A critical issue of the clinical syndrome in liver failure is the accumulation of toxins not cleared by the failing liver . Based on this hypothesis, the removal of lipophilic , albumin-bound substances such as bilirubin , bile acids , metabolites of aromatic amino acids , medium-chain fatty acids and cytokines should be beneficial to the clinical course of a patient in liver failure. This led to the development of artificial filtration and absorption devices.
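A back-of-the-envelope sketch of why albumin binding defeats conventional dialysis (all numbers are illustrative, not clinical parameters): only the free fraction of a toxin equilibrates across an ordinary dialysis membrane, so effective clearance collapses for highly bound solutes, whereas an albumin-containing dialysate keeps accepting the bound species and restores the concentration gradient.

    def effective_clearance(membrane_clearance, free_fraction):
        """Crude model: the clearance seen by a protein-bound toxin scales
        with its free (unbound) fraction."""
        return membrane_clearance * free_fraction

    # A water-soluble solute vs a ~99%-albumin-bound toxin, same membrane:
    print(effective_clearance(150.0, 1.00))  # 150 mL/min
    print(effective_clearance(150.0, 0.01))  # 1.5 mL/min: barely removed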
Liver dialysis is performed by physicians and surgeons and specialized nurses with training in gastroenterological medicine and surgery, namely, in hepatology , alongside their colleagues in the intensive or critical care unit and the transplantation department, which is responsible for procuring and implanting a new liver, or a part (lobe) of one, if and when it becomes available in time and the patient is eligible. Because of the need for these experts, as well as the relative newness of the procedure in certain areas, it is usually available only in larger hospitals, such as level I trauma center teaching hospitals connected with medical schools.
Among the different albumin dialysis modalities, single-pass albumin dialysis (SPAD) has shown some positive results, at a very high cost; [ 47 ] lowering the concentration of albumin in the dialysate does not appear to affect the detoxification capability of the procedure. [ 48 ] Nevertheless, the most widely used systems today are based on hemodialysis and adsorption. These systems use conventional dialysis methods with an albumin-containing dialysate that is later regenerated by means of adsorption columns filled with activated charcoal and ion exchange resins.
At present, there are two artificial extracorporeal liver support systems: the Molecular Adsorbents Recirculating System (MARS) from Gambro and Fractionated Plasma Separation and Adsorption (FPSA), commercialised as Prometheus (PROM) from Fresenius Medical Care . Of the two therapies, MARS is the most frequently studied, and clinically used system to date.
While the technique is in its infancy, the prognosis of patients with liver failure remains guarded. Liver dialysis is currently only considered to be a bridge to transplantation or liver regeneration (in the case of acute liver failure ) [ 49 ] [ 50 ] [ 51 ] and, unlike kidney dialysis (for kidney failure ), cannot support a patient for an extended period of time (months to years).
Artificial detoxification devices currently under clinical evaluation include the Single Pass Albumin Dialysis (SPAD), Molecular Adsorbent Recirculating System (MARS), Prometheus system, and Dialive.
Single pass albumin dialysis (SPAD) is a simple method of albumin dialysis using standard renal replacement therapy machines without an additional perfusion pump system: The patient's blood flows through a circuit with a high-flux hollow fiber hemodiafilter, identical to that used in the MARS system. The other side of this membrane is cleansed with an albumin solution in counter-directional flow, which is discarded after passing the filter. Hemodialysis can be performed in the first circuit via the same high-flux hollow fibers.
The Molecular Adsorbent Recirculating System (MARS) is the best-known extracorporeal liver dialysis system. It consists of two separate dialysis circuits. The first circuit contains human serum albumin , is in contact with the patient's blood through a semipermeable membrane, and has two filters to clean the albumin after it has absorbed toxins from the patient's blood. The second circuit consists of a hemodialysis machine and is used to clean the albumin in the first circuit before it is recirculated to the semipermeable membrane in contact with the patient's blood.
SPAD, MARS and continuous veno-venous haemodiafiltration (CVVHDF) have been compared in vitro with regard to detoxification capacity. [ 52 ] SPAD and CVVHDF showed a significantly greater reduction of ammonia compared with MARS. No significant differences could be observed between SPAD, MARS and CVVHDF concerning other water-soluble substances. However, SPAD enabled a significantly greater bilirubin reduction than MARS; bilirubin serves as an important marker substance for albumin-bound (non-water-soluble) substances. Concerning the reduction of bile acids, no significant differences between SPAD and MARS were seen. It was concluded that the detoxification capacity of SPAD is similar to, or even greater than, that of the more sophisticated, more complex and hence more expensive MARS.
Albumin dialysis is a costly procedure: a seven-hour treatment with MARS requires approximately €300 for 600 mL of human serum albumin solution (20%), €1740 for a MARS treatment kit, and €125 for disposables used by the dialysis machine, for a total of approximately €2165. Performing SPAD according to the protocol by Sauer et al., however, requires 1000 mL of human albumin solution (20%) at a cost of €500, plus a high-flux dialyzer costing approximately €40 and tubing (€125). The overall cost of a SPAD treatment is approximately €656, about 30% of the cost of an equally efficient MARS therapy session. The expenditure for the MARS monitor needed to operate the MARS disposables is not included in this calculation.
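To make the comparison easy to check, the per-session figures quoted above can be tallied directly. A minimal sketch (prices are the approximate euro figures from the text; note that the SPAD components as listed sum to €665, marginally above the €656 the source quotes):

```python
# Tally of the per-session consumable costs quoted in the text above.
# All figures are the text's approximate euro prices, not current ones.

mars_costs = {
    "600 mL human serum albumin solution (20%)": 300,
    "MARS treatment kit": 1740,
    "dialysis machine disposables": 125,
}

spad_costs = {
    "1000 mL human albumin solution (20%)": 500,
    "high-flux dialyzer": 40,
    "tubing": 125,
}

mars_total = sum(mars_costs.values())  # 2165, matching the text
spad_total = sum(spad_costs.values())  # 665; the source quotes ~656

print(f"MARS per session: ~EUR {mars_total}")
print(f"SPAD per session: ~EUR {spad_total} "
      f"({spad_total / mars_total:.0%} of the MARS figure)")
```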
The Prometheus system ( Fresenius Medical Care, Bad Homburg , Germany) is a device based on the combination of albumin adsorption with high-flux hemodialysis after selective filtration of the albumin fraction through a specific polysulfone filter (AlbuFlow). It has been studied [ 53 ] in a group of eleven patients with hepatorenal syndrome (acute-on-chronic liver failure with accompanying kidney failure). Treatment for more than four hours on two consecutive days significantly improved serum levels of conjugated bilirubin, bile acids, ammonia, cholinesterase, creatinine and urea, as well as blood pH. Prometheus was shown to be a safe supportive therapy for patients with liver failure.
Dialive (Yaqrit Limited, London, UK) incorporates albumin removal and replacement, and endotoxin removal. It is at " Technology readiness level " (TRL) 5, which means it has been validated in the disease environment. [ 54 ] [ 55 ]
MARS was developed by a group of researchers at the University of Rostock (Germany) in 1993 and commercialized for clinical use in 1999. [ 56 ] The system is able to replace the detoxification function of the liver while minimizing the inconvenience and drawbacks of previously used devices. [ 57 ] [ 58 ] [ 59 ]
Preliminary in vivo investigations indicated the ability of the system to effectively remove bilirubin, bile salts, free fatty acids and tryptophan, while important physiological proteins such as albumin, alpha-1-glycoprotein, alpha-1-antitrypsin, alpha-2-macroglobulin, transferrin, globulin tyrosine, and hormonal systems are unaffected. [ 60 ] Also, MARS therapy in conjunction with CRRT/HDF can help clear cytokines acting as inflammatory and immunological mediators in hepatocellular damage, and can therefore create the right environment to favour hepatocellular regeneration and recovery of native liver function.
MARS is an extracorporeal hemodialysis system composed of three different circuits: blood, albumin and low-flux dialysis. The blood circuit uses a double-lumen catheter and a conventional hemodialysis device to pump the patient's blood into the MARS FLUX, a biocompatible polysulfone high-flux dialyser. With a membrane surface area of 2.1 m 2 , a wall thickness of 100 nm and a cut-off of 50 kDa, the MARS FLUX is essential to retaining the albumin in the dialysate . Blood is dialysed against a human serum albumin (HSA) dialysate solution, which allows detoxification of both water-soluble and protein-bound toxins (albumin dialysis). The albumin dialysate is then regenerated in a closed loop in the MARS circuit: it first passes through the fibres of the low-flux diaFLUX filter, where a standard dialysis fluid clears water-soluble toxins and provides electrolyte/acid-base balance, and then through two different adsorption columns, the diaMARS AC250, containing activated charcoal, which removes protein-bound substances, and the diaMARS IE250, filled with cholestyramine, an anion-exchange resin, which removes anionic substances. The albumin solution is then ready to initiate another detoxifying cycle of the patient's blood, which can be sustained until both adsorption columns are saturated, eliminating the need to continuously infuse albumin into the system during treatment.
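As a reading aid, the regeneration loop just described can be laid out as an ordered sequence of stages. The following is a schematic sketch only (component names follow the text; it models flow order, not the device's actual control software):

```python
# Schematic of the MARS albumin-dialysate regeneration loop described
# above; each entry names a stage in the order the dialysate passes it.

ALBUMIN_LOOP_STAGES = [
    "MARS FLUX dialyser: water-soluble and albumin-bound toxins cross "
    "from blood to the albumin dialysate",
    "diaFLUX low-flux filter: standard dialysis fluid clears "
    "water-soluble toxins and balances electrolytes/acid-base",
    "diaMARS AC250: activated charcoal adsorbs protein-bound substances",
    "diaMARS IE250: cholestyramine anion-exchange resin removes "
    "anionic substances",
]

def print_cycles(n_cycles: int) -> None:
    """Print the sequence the dialysate repeats each detoxifying cycle,
    sustained until both adsorption columns are saturated."""
    for cycle in range(1, n_cycles + 1):
        print(f"Cycle {cycle}:")
        for stage in ALBUMIN_LOOP_STAGES:
            print(f"  -> {stage}")

print_cycles(2)
```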
A systematic review of the literature from 1999 to June 2011 was performed across several biomedical databases.
Hepatic encephalopathy (HE) represents one of the more serious extrahepatic complications associated with liver dysfunction. [ 61 ] [ 62 ] Neuro-psychiatric manifestations of HE affect consciousness and behaviour.
Evidence suggests that HE develops as neurotoxins and neuroactive substances, produced after hepatocellular breakdown, accumulate in the brain as a consequence of portosystemic shunting and the limited detoxification capability of the liver. The substances involved include ammonia, manganese, aromatic amino acids, mercaptans, phenols, medium-chain fatty acids, bilirubin and endogenous benzodiazepines. The relationship between ammonia neurotoxicity and HE was first described in animal studies by Pavlov et al. [ 63 ] Subsequently, several studies in animals and humans have confirmed that a brain-to-blood ammonia concentration difference greater than 2 mM causes HE, and even a comatose state when the value is greater than 5 mM. Some investigators have also reported a decrease in serum ammonia following a MARS treatment (Table 3).
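Restated schematically, and only as an illustration of the thresholds quoted above (actual HE grading is clinical; the 2 mM and 5 mM figures are the brain-to-blood differences reported in those studies):

```python
def he_risk_from_ammonia(brain_mM: float, blood_mM: float) -> str:
    """Classify risk using the brain-to-blood ammonia difference
    thresholds quoted above (>2 mM: HE; >5 mM: comatose state).
    Illustrative only; not a clinical grading tool."""
    difference = brain_mM - blood_mM
    if difference > 5:
        return "comatose state reported at this level"
    if difference > 2:
        return "hepatic encephalopathy reported at this level"
    return "below the reported HE threshold"

print(he_risk_from_ammonia(brain_mM=6.0, blood_mM=0.5))
# -> comatose state reported at this level
```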
Manganese and copper serum levels are increased in patients with either acute or acute-on-chronic liver failure. Nevertheless, only in patients with chronic hepatic dysfunction is a bilateral magnetic resonance signal alteration of the globus pallidus observed, [ 68 ] probably because these patients selectively show higher cerebral membrane permeability.
An imbalance between aromatic and branched-chain amino acids (the Fischer index), traditionally implicated in the genesis of HE, [ 69 ] [ 70 ] [ 71 ] can be normalized following a MARS treatment. The effect is noticeable even after 3 hours of treatment, and this normalization of the Fischer index is accompanied by an improvement in the HE. [ 72 ]
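The Fischer index referred to here is conventionally computed as the molar ratio of branched-chain to aromatic amino acids. A minimal calculation, assuming plasma concentrations are available in the same molar units (the example values are hypothetical):

```python
def fischer_ratio(valine: float, leucine: float, isoleucine: float,
                  phenylalanine: float, tyrosine: float) -> float:
    """Molar ratio of branched-chain to aromatic amino acids.
    The ratio falls in advanced liver disease; all inputs must be
    expressed in the same molar units."""
    return (valine + leucine + isoleucine) / (phenylalanine + tyrosine)

# Hypothetical plasma concentrations in micromol/L:
print(round(fischer_ratio(220, 120, 60, 55, 60), 2))  # ~3.48
```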
Novelli et al. [ 73 ] published their three-year experience with MARS, analyzing the cerebral impact of the treatment in 63 patients and reporting an improvement in Glasgow Coma Score (GCS) in all of them. In the last 22 patients, cerebral perfusion was monitored by Doppler (mean flow velocity in the middle cerebral artery), establishing a clear relationship between clinical improvement (especially neurological) and improvement in cerebral arterial perfusion. This study confirms other results showing similar increments in cerebral perfusion in patients treated with MARS. [ 66 ]
More recently, several studies have shown a significant improvement of HE in patients treated with MARS. In the studies by Heemann et al. [ 74 ] and Sen et al., [ 75 ] an improvement in HE was considered to have occurred when the encephalopathy grade fell by one or more grades versus baseline; for Hassanein et al., in their randomized controlled trial, improvement required a decrease of two grades. [ 76 ] In the latter, 70 patients with acute-on-chronic liver failure and encephalopathy grade III and IV were included. Likewise, Kramer et al. [ 77 ] judged HE improvement by an improvement in peak N70 latency in electroencephalograms.
Sen et al. [ 75 ] observed a significant reduction in Child-Pugh score (p<0.01) at 7 days following MARS treatment, without any significant change in the controls. Nevertheless, when they looked at the Model for End-Stage Liver Disease (MELD) score, a significant reduction was recorded in both the MARS and control groups (p<0.01 and p<0.05, respectively).
An improvement in HE grade with MARS therapy has likewise been reported in several case series. [ 78 ] [ 79 ] [ 80 ] [ 81 ] [ 82 ] [ 83 ] [ 84 ] [ 85 ] [ 86 ]
Hemodynamic instability is often associated with acute liver insufficiency, as a consequence of the endogenous accumulation of vasoactive agents in the blood. This is characterized by systemic vasodilatation, a decrease of systemic vascular resistance, arterial hypotension, and an increase of cardiac output that gives rise to a hyperdynamic circulation. During MARS therapy, the systemic vascular resistance index and mean arterial pressure have been shown to increase. [ 78 ] [ 80 ] [ 82 ] [ 87 ] [ 88 ] Schmidt et al. [ 89 ] reported on 8 patients diagnosed with acute hepatic failure who were treated with MARS for 6 hours and compared with a control group of 5 patients to whom ice pads were applied to match the heat loss produced in the treatment group during the extracorporeal therapy. Hemodynamic parameters were analyzed hourly in both groups. In the MARS group, a statistically significant increase of 46% in the systemic vascular resistance index was observed (1215 ± 437 to 1778 ± 710 dyn·s·cm −5 ·m −2 ), compared with a 6% increase in the controls. Mean arterial pressure also increased (69 ± 5 to 83 ± 11 mmHg, p<0.0001) in the MARS group, whereas no difference was observed in the controls. Cardiac output and heart rate also decreased in the MARS group as a consequence of an improvement in the hyperdynamic circulation. A statistically significant improvement was therefore obtained with MARS compared with standard medical therapy (SMT).
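The percentage changes reported by Schmidt et al. can be reproduced from the mean values quoted, a simple check using only the figures in the text:

```python
def percent_change(before: float, after: float) -> float:
    """Relative change from baseline, in percent."""
    return (after - before) / before * 100

# Systemic vascular resistance index (dyn*s*cm^-5*m^-2), MARS group:
print(round(percent_change(1215, 1778)))  # 46, as reported
# Mean arterial pressure (mmHg), MARS group:
print(round(percent_change(69, 83)))      # ~20
```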
Catalina et al. [ 90 ] have also evaluated the systemic and hepatic hemodynamic changes produced by MARS therapy. In 4 patients with acute decompensation of chronic liver disease, they observed an attenuation of the hyperdynamic circulation and a reduction in the portal pressure gradient after MARS therapy. Results are summarized in Table 4.
There are other studies worth mentioning with similar results, including Heemann et al . [ 74 ] and Parés et al . [ 91 ] Dethloff et al . [ 92 ] concluded that there is a statistically significant improvement favouring MARS in comparison with the Prometheus system (Table 5).
Hepatorenal syndrome is one of the more serious complications in patients with acute decompensation of cirrhosis and increased portal hypertension. It is characterized by hemodynamic changes in the splanchnic, systemic and renal circulations. Splanchnic vasodilatation triggers the production of endogenous vasoactive substances that produce renal vasoconstriction and a low glomerular filtration rate, leading to oliguria with a concomitant reduction in creatinine clearance. The renal insufficiency is always progressive and carries a poor prognosis, [ 93 ] [ 94 ] with survival at 1 and 2 months of 20 and 10%, respectively.
Pierre Versin [ 95 ] is one of the pioneers in the study of hepatorenal syndrome in patients with liver impairment.
Great efforts have been made to improve the prognosis of this type of patient; however, few have solved the problem. Orthotopic liver transplantation is the only treatment that has been shown to improve the acute and chronic complications of severe liver insufficiency. Today it is possible to combine albumin dialysis with continuous veno-venous hemodiafiltration, which offers these patients a greater expectation [ 96 ] by optimizing their clinical status.
MARS treatment lowers serum urea and creatinine levels, improving their clearance, [ 88 ] [ 89 ] [ 90 ] [ 92 ] and even favors resolution of hepatorenal syndrome. [ 74 ] [ 81 ] [ 82 ] [ 87 ] [ 97 ] These results are confirmed in a randomized controlled trial published by Mitzner et al., [ 93 ] in which 13 patients diagnosed with hepatorenal syndrome type I were treated with MARS therapy. Mean survival was 25.2 ± 34.6 days in the MARS group compared to 4.6 ± 1.8 days in the controls, who received hemodiafiltration and SMT. This resulted in a statistically significant difference in survival at 7 and 30 days (p<0.05). The authors concluded that MARS therapy, applied to liver failure patients (Child-Pugh C and UNOS 2A scores) who develop hepatorenal syndrome type I, prolonged survival compared to patients treated with SMT.
Although the mechanisms explaining these findings are not yet fully understood, a decrease in plasma renin concentration has been reported in patients with acute-on-chronic liver failure and renal impairment treated with MARS.
Likewise, other studies have suggested some efficacy for MARS in the treatment of hepatorenal syndrome. [ 98 ] [ 99 ] [ 100 ] However, other publications do not show efficacy of MARS therapy in these patients. Khuroo et al. [ 101 ] published a meta-analysis based on 4 small RCTs and 2 non-RCTs in patients diagnosed with ACLF, concluding that MARS therapy does not bring any significant increment in survival compared with SMT.
Another observational study, in 6 patients with cirrhosis, refractory ascites and hepatorenal syndrome type I not responding to vasoconstrictor therapy, showed no impact on hemodynamics following MARS therapy; however, the authors concluded that MARS therapy could effectively serve as a bridge to liver transplantation. [ 83 ] [ 102 ]
Total bilirubin was the only parameter analyzed in all trials that was consistently reduced in the groups of patients treated with MARS; El Banayosy et al. [ 103 ] measured bilirubin levels 14 days after MARS therapy was terminated and observed a consistent, significant decrease not only in bilirubin but also in creatinine and urea (Table 6).
The impact of MARS therapy on plasma bile acid levels was evaluated in 3 studies. The study by Stadlbauer et al., [ 105 ] which specifically addressed the topic, reported that the MARS and Prometheus systems lower plasma bile acid concentrations to the same extent. Heemann et al. [ 74 ] and Laleman et al. [ 88 ] have also published a significant improvement for these organic ions.
Pruritus is one of the most common clinical manifestations of cholestatic liver diseases and one of the most distressing symptoms in patients with chronic liver disease caused by viral hepatitis C . Many hypotheses have been formulated to explain the pathophysiology of this manifestation, including increased plasma concentrations of bile acids, abnormalities in the bile ducts , [ 106 ] and increased central neurotransmitters coupling opioid receptors . [ 107 ] [ 108 ] Despite the number of drugs historically used, individually or combined (exchange resins, hydrophilic bile acids, antihistamines, antibiotics, anticonvulsants, opioid antagonists), there are reported cases of intractable or refractory pruritus with a dramatic reduction in patients' quality of life (e.g. sleep disorders, depression, suicide attempts). [ 109 ] [ 110 ] Intractable pruritus can be an indication for liver transplantation.
MARS is a therapeutic option for intractable pruritus that has been shown to benefit patients in desperate cases, although at high cost. [ 111 ] [ 112 ] [ 113 ] [ 114 ] Several studies have confirmed that after MARS treatments, patients remain free of pruritus for a period ranging from 6 to 9 months. [ 114 ] Nevertheless, some authors have concluded that, despite the good results in the literature, the application of MARS therapy to refractory pruritus requires larger evidence. [ 112 ]
The pharmacokinetics and pharmacodynamics of most drugs can be significantly altered by liver failure, affecting the therapeutic approach and the potential toxicity of the drugs. In these patients, the Child-Pugh score is a poor indicator of the metabolic capacity of the failing liver.
In patients with hepatic failure, drugs that are metabolized only in the liver accumulate in the plasma soon after they are administered, and drug dosing must therefore be modified in both concentration and dosing interval to lower the risk of toxicity. Dosing must also be adjusted for drugs that are exclusively metabolized by the liver and have low protein affinity and a high volume of distribution, such as the fluoroquinolones ( levofloxacin and ciprofloxacin ). [ 115 ] [ 116 ] [ 117 ] [ 118 ]
Extracorporeal detoxification with albumin dialysis increases the clearance of drugs that are bound to plasma proteins (Table 7).
In the meta-analysis published by Khuroo et al., [ 101 ] which included 4 randomized trials, [ 74 ] [ 89 ] [ 93 ] [ 103 ] no improvement in survival was observed for patients with liver failure treated with MARS compared with SMT.
Likewise, neither the 2004 Cochrane review of extracorporeal liver support systems [ 119 ] nor the meta-analysis by Kjaergard et al. [ 120 ] found a significant difference in survival for patients diagnosed with ALF treated with extracorporeal liver support systems.
Nevertheless, these reviews included all kinds of liver support systems and drew on heterogeneous publication types (abstracts, clinical trials, cohort studies, etc.).
There is literature showing favorable survival results for patients diagnosed with ALF and treated with MARS. In a randomized controlled trial, Saliba et al. [ 121 ] studied the impact of MARS therapy on survival in patients with ALF on the liver transplant waiting list. Forty-nine patients received SMT and 53 were treated with MARS. Patients who received 3 or more MARS sessions showed a statistically significant increase in transplant-free survival compared with the other patients in the study. Notably, 75% of the patients underwent liver transplantation in the first 24 hours after inclusion on the waiting list, and despite the short exposure to MARS therapy, some patients treated with MARS prior to the transplant showed a better survival trend compared to controls.
In a case-controlled study by Montejo et al., [ 104 ] MARS treatment was reported not to decrease mortality directly; however, it contributed to significantly improved survival in patients who underwent transplantation. Mitzner et al. [ 93 ] and Heemann et al. [ 74 ] were able to show a statistically significant difference in 30-day survival for patients in the MARS group. However, El Banayosy et al. [ 103 ] and Hassanein et al. [ 76 ] observed a non-significant improvement in survival, probably because of the small number of patients included in the trials. In the majority of published MARS studies in patients diagnosed with ALF, whether transplanted or not, survival was greater in the MARS group, with variation according to the type of trial, ranging from 20-30% [ 122 ] [ 123 ] to 60-80%. [ 83 ] [ 124 ] [ 125 ] [ 126 ] Data are summarized in Tables 8, 9 and 10.
For patients diagnosed with acute-on-chronic liver failure and treated with MARS therapy, clinical trial results showed a non-statistically-significant reduction in mortality (odds ratio [OR] = 0.78; 95% confidence interval [CI]: 0.58–1.03; p = 0.1059) (Figure 3).
A non-statistically-significant reduction in mortality was also shown in patients with ALF treated with MARS (OR = 0.75; 95% CI: 0.42–1.35; p = 0.3427) (Figure 4).
Combined results yielded a non-significant reduction in mortality in patients treated with MARS therapy. However, the low number of patients included in each of the studies may explain the failure to achieve enough statistical power to show differences between the two treatment groups. Moreover, heterogeneity in the number of MARS sessions and in the severity of liver disease of the patients included makes it very difficult to evaluate the impact of MARS on survival.
Recently, a meta-analysis of survival in patients treated with extracorporeal liver support has been published. [ 131 ] The search strategy yielded 74 clinical trials: 17 randomized controlled trials, 5 case-control and 52 cohort studies. Eight studies were included in the meta-analysis: three addressing acute liver failure, one of them with MARS therapy, [ 103 ] and five addressing acute-on-chronic liver failure, four of them MARS-related. [ 74 ] [ 75 ] [ 76 ] [ 93 ] The authors concluded that extracorporeal detoxification systems improve survival in acute liver insufficiency, whereas the results for acute decompensation of chronic liver disease suggested a non-significant survival benefit. They also argued that, given the increasing demand for liver transplantation together with the increased risk of liver failure following large resections, the development of such detoxification systems is necessary.
Safety, defined as the presence of adverse events, has been evaluated in only a few trials. Adverse events in patients receiving MARS therapy are similar to those in controls, with the exception of thrombocytopenia and hemorrhage, which seem to occur more frequently with the MARS system. [ 132 ]
Heemann et al. [ 74 ] reported two adverse events most probably related to MARS: fever and sepsis, presumably originating from the catheter.
In the study by Hassanein et al. , [ 76 ] two patients in the MARS group left the study owing to hemodynamic instability, three patients required larger-than-average platelet transfusions, and three more presented with gastrointestinal bleeding.
Laleman et al. [ 88 ] detected one patient with thrombocytopenia in each of the MARS and Prometheus treatment groups, and an additional patient with clotting of the dialysis circuit and hypotension in the Prometheus group only.
Kramer et al. (Biologic-DT) [ 77 ] reported 3 cases of disseminated intravascular coagulation in the intervention group, two of them with fatal outcomes.
Mitzner et al. [ 93 ] described, among patients treated with MARS, one case of thrombocytopenia and a second patient, with chronic hepatitis B, who underwent TIPS placement on day 44 after randomization and died on day 105 of multiorgan failure as a consequence of complications related to the TIPS procedure.
Montejo et al. [ 104 ] found MARS to be an easy technique, without serious adverse events related to the procedure, and easy to implement in ICU settings accustomed to renal extracorporeal therapies.
The MARS International Registry, with data from more than 500 patients (although sponsored by the manufacturer), shows adverse effects similar to those in the control group. However, in these severely ill patients it is difficult to distinguish between complications of the disease itself and side effects attributable to the technique.
Only three studies addressing the cost-effectiveness of MARS therapy have been found.
Hassanein et al. [ 133 ] analysed the costs of randomized patients with ACLF receiving MARS therapy or standard medical care, using the study published in 2001 by Kim et al. [ 134 ] describing the impact of complications on hospitalization costs in patients diagnosed with alcoholic liver failure. The costs of 11 patients treated with SMT were compared with those of 12 patients who received MARS in addition to SMT. In the MARS group, there was lower in-hospital mortality and fewer complications related to the disease, with a remarkable reduction in cost that offset the MARS-related expenditure (Table 11).
There were 5 survivors in the control group, at a cost per patient of $35,904, whereas in the MARS group 11 of 12 patients survived, at a cost per patient of $32,036, representing a saving of approximately $4,000 per patient in favor of the MARS group.
Hessel et al. [ 135 ] published a 3-year follow-up of a cohort of 79 patients with ACLF, of whom 33 received MARS treatments and 46 received SMT. Survival was 67% for the MARS group and 63% for the controls, falling to 58% and 35%, respectively, at one-year follow-up, and to 52% and 17% at three years.
Hospitalization costs for the MARS-treated group were greater than those for the controls (€31,539 vs. €7,543), as were direct costs at the 3-year follow-up (€8,493 vs. €5,194). Nevertheless, after adjusting for mortality, the annual cost per patient was €12,092 for controls and €5,827 for the MARS group; the same analysis found an incremental cost-effectiveness ratio of €31,448 per life-year gained (LYG) and an incremental cost of €47,171 per QALY gained.
Two years later, the same authors published results for 149 patients diagnosed with ACLF. [ 136 ] There were 67 patients (44.9%) treated with MARS, and 82 patients (55.1%) were allocated to receive SMT. Mean survival time was 692 days in the MARS group (33% at 3 years) and 453 days in the controls (15% at 3 years); the difference was significant (p = 0.022). The difference in average cost was €19,853 (95% CI: €13,308–25,429): €35,639 for MARS patients and €15,804 for the control group. The incremental cost was €29,985 (95% CI: €9,441–321,761) per LYG and €43,040 (95% CI: €13,551–461,856) per quality-adjusted life year (QALY).
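The incremental cost-effectiveness ratios (ICERs) above follow the standard definition: the difference in cost divided by the difference in effectiveness. A back-of-the-envelope sketch using only the mean figures quoted (the published ICERs come from the full patient-level analysis, so this rough figure differs slightly):

```python
def icer(cost_new: float, cost_old: float,
         eff_new: float, eff_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effectiveness (here, per life-year gained)."""
    return (cost_new - cost_old) / (eff_new - eff_old)

# Figures quoted above for the 149-patient ACLF cohort:
mars_cost, smt_cost = 35_639, 15_804      # average cost per patient (EUR)
mars_days, smt_days = 692, 453            # mean survival (days)
mars_lyg = mars_days / 365.25             # convert to life-years
smt_lyg = smt_days / 365.25

print(f"~EUR {icer(mars_cost, smt_cost, mars_lyg, smt_lyg):,.0f} per LYG")
# ~EUR 30,300, close to the reported EUR 29,985 per LYG
```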
Liver support systems such as MARS are very important for stabilizing patients with acute or acute-on-chronic liver failure and avoiding organ dysfunction, as well as serving as a bridge to transplantation. Although initial in-hospital costs are high, they are justified by the favorable outcomes.
For each of the main indications, the source tabulates the etiology, the goals of MARS therapy, the criteria for indicating MARS, and the treatment schedule. [ 97 ] [ 137 ] [ 140 ] [ 147 ] [ 148 ] [ 149 ] [ 150 ]
The same contraindications as for any other extracorporeal treatment apply to MARS therapy.
Blood Flow
The trend is to use high blood flow rates, although the rate is determined by the technical specifications of the combined machine and the catheter size.
Specific flow-rate settings (blood, dialysate and replacement flows) for intermittent and continuous treatments are tabulated in the source.
Heparin Anticoagulation
As with CVVHD, the need for anticoagulation depends on the patient's baseline coagulation status. In many cases it will not be needed, unless the patient presents a PTT below 160 seconds. In patients with normal values, a bolus of 5000 to 10000 IU of heparin may be administered at the start of the treatment, followed by a continuous infusion, to keep the PTT at a ratio of 1.5 to 2.5, or 160 to 180 seconds.
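Restated as schematic logic only (this mirrors the thresholds and doses stated above; it is an illustration, not clinical guidance):

```python
def heparin_plan(ptt_seconds: float) -> str:
    """Schematic MARS anticoagulation logic as described above:
    no heparin if the PTT is already at or above 160 s; otherwise an
    initial bolus of 5,000-10,000 IU followed by continuous infusion
    titrated to a PTT of 160-180 s (ratio 1.5-2.5). Illustration only,
    not clinical guidance."""
    if ptt_seconds >= 160:
        return "no heparin needed; recheck coagulation before the next session"
    return ("bolus of 5,000-10,000 IU at the start of treatment, then "
            "continuous infusion titrated to a PTT of 160-180 s")

print(heparin_plan(35))   # normal baseline PTT -> bolus plus infusion
print(heparin_plan(170))  # already prolonged -> no heparin
```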
Monitoring
A biochemical analysis (liver and kidney profiles, electrolytes, glucose) together with a complete blood count is recommended at the end of the first session and before starting the following one.
A coagulation analysis must also be performed before starting each session, to adjust the heparin dose.
If medication susceptible to elimination by MARS is being administered, monitoring of its blood levels is also recommended.
End of the Session
Both catheter lumens are heparinized.
The United States Food and Drug Administration (FDA) cleared MARS therapy for the treatment of drug overdose and poisoning in a document dated May 27, 2005. The only requirement is that the drug or poison must be dialysable and removable by activated charcoal or anion-exchange resins.
More recently, on December 17, 2012, MARS therapy was cleared by the FDA for the treatment of hepatic encephalopathy due to decompensation of chronic liver disease. Clinical trials of MARS in HE patients with decompensated chronic liver disease demonstrated a transient effect of MARS treatments in significantly decreasing hepatic encephalopathy scores by at least 2 grades compared to standard medical therapy (SMT).
MARS is not indicated as a bridge to liver transplantation; safety and efficacy for this use have not been demonstrated in controlled, randomized clinical trials.
The effectiveness of the MARS device in sedated patients could not be established in clinical studies and therefore cannot be predicted.
The LiverNet is a database dedicated to liver diseases treated with the support of extracorporeal therapies. To date, the most widely used system is the Molecular Adsorbent Recirculating System (MARS), which is based on the selective removal of albumin-bound molecules and toxins from the blood of patients with acute and acute-on-chronic liver failure. The purpose of the registry is to record prospectively all patients treated worldwide with the MARS system.
LiverNet is an eCRF database (www.livernet.net) built on a SAS platform that offers major advantages to participating centres, including automatic calculation of most liver and ICU scoring systems, instant online queries, instant export of each centre's patients to an Excel file for direct statistical analysis, and instant online statistical analysis of data selected by the scientific committee. LiverNet is therefore an important tool for advancing knowledge of liver support therapies.
Liver of sulfur is a loosely defined mixture of potassium sulfide , potassium polysulfide , potassium thiosulfate , and likely potassium bisulfide . Synonyms include hepar sulfuris , sulfur , sulfurated potash and sulfurated potassa . There are two distinct varieties: "potassic liver of sulfur" and "ammoniacal liver of sulfur". [ 1 ]
Liver of sulfur is mainly used in metalworking to form a brown or black patina on copper and silver as well as many (though not all) copper alloys and silver alloys ( brass , for example, a copper alloy, does not react with sulfur compounds). It is sold as a brittle yellow solid (a "lump" which must be mixed with water before use) as well as in pre-mixed liquid and gel forms. The solid is believed to have the longest shelf life, though all liver of sulfur tends to decompose with time. Modern gel forms contain stabilizers that allow the reactivity to last much longer. Liver of sulfur that is kept dry, sealed from air, out of the light, and in a freezer will last many times longer than that kept in any other condition.
The highest quality liver of sulfur in solid form is a dark yellow, almost "liver" colored substance. As it ages and is exposed to air, its potency decreases, it will turn lighter yellow and finally white, at which point its reactivity is negligible. [ 2 ] Liver of sulfur decomposes to sulfate of potash and carbonate of potash , neither of which has any value as an oxidizer of metal. [ 2 ]
The reactivity of liver of sulfur with silver and copper quickly creates a dark or colored patina on the metal. This is done by immersing the metal object in a solution of liver of sulfur and water. When treating silver, the solution must be hot, though if the bath is brought to its boiling point the liver of sulfur will quickly decompose and become ineffective. Also, if the concentration of the solution is too strong, the oxidation process will proceed too quickly and the layer of patina thus created will tend to flake away. The best results are usually obtained by using more dilute solutions and allowing the patina to build more slowly but more securely, and, for silver, keeping the solution just under its boiling point. Lastly, it is critical that the metal surface be extremely clean, as clean as would be necessary to electroplate the same surface. Even small amounts of oil on the metal such as that produced by handling without gloves will be sufficient to protect the metal surface from oxidation. [ 2 ]
Liver of sulfur was once used to counteract poisoning with several metals, including arsenic , copper, lead, and antimony . A lump was dissolved in warm water and the patient was instructed to drink the solution three or four times over the course of an hour. [ 3 ] At one time sulfurated potash was used to combat arthritis . It eventually fell into disfavor for medical purposes because sulfides and polysulfides were discovered to be toxic in their own right. | https://en.wikipedia.org/wiki/Liver_of_sulfur |
A liver support system , or diachysis, is a type of therapeutic device that assists in performing the functions of the liver. Such systems focus either on removing the accumulating toxins ( liver dialysis ) or on additionally replacing the metabolic functions of the liver through the inclusion of hepatocytes in the device ( bioartificial liver device ). A diachysis machine is used for acute care, i.e. emergency care, as opposed to a dialysis machine, which is typically used over the longer term. These systems are being trialed to help people with acute liver failure (ALF) or acute-on-chronic liver failure. [ 1 ]
The primary functions of the liver include removing toxic substances from the blood, manufacturing blood proteins , storing energy in the form of glycogen , and secreting bile . The hepatocytes that perform these tasks can be killed or impaired by disease, resulting in acute liver failure (ALF), which can occur in a person with a previously diseased liver or a healthy one.
In hyperacute and acute liver failure, the clinical picture develops rapidly, with progressive encephalopathy and multiorgan dysfunction such as hyperdynamic circulation , coagulopathy , acute kidney injury and respiratory insufficiency , severe metabolic alterations, and cerebral edema that can lead to brain death. [ 2 ] [ 3 ] In these cases mortality without liver transplantation (LTx) ranges between 40 and 80%. [ 4 ] [ 5 ] LTx is the only effective treatment for these patients, although it requires precise indication and timing to achieve good results. Nevertheless, owing to the scarcity of organs for liver transplantation, it is estimated that one third of patients with ALF die while waiting to be transplanted. [ 6 ]
On the other hand, a patient with chronic hepatic disease can suffer acute decompensation of liver function following a precipitating event, such as variceal bleeding, sepsis or excessive alcohol intake, leading to a condition referred to as acute-on-chronic liver failure (ACLF).
Both types of hepatic insufficiency, ALF and ACLF, are potentially reversible, and liver function can return to a level similar to that before the insult or precipitating event.
LTx has been shown to improve prognosis and survival in severe cases of ALF. Nevertheless, cost and donor scarcity have prompted researchers to look for new supportive treatments that can act as a "bridge" to the transplant procedure. By stabilizing the patient's clinical state, or by creating the right conditions to allow recovery of native liver function, both detoxification and synthesis can improve after an episode of ALF or ACLF. [ 7 ]
Three different types of supportive therapies have been developed: bio-artificial, artificial and hybrid liver support systems (Table 2).
Extracorporeal liver assist device
Molecular adsorbent recirculating system
Bioartificial Liver Support System
Fractionated plasma separation and adsorption system
TECA-Hybrid Artificial Liver Support System
Radial Flow Bioreactor
Single-pass albumin dialysis
Modular Extracorporeal Liver Support
Bioartificial Liver
Selective plasma filtration therapy
Bioartificial liver devices are experimental extracorporeal devices that use living cell lines to provide detoxification and synthesis support to the failing liver. The bio-artificial liver (BAL) HepatAssist 2000 uses porcine hepatocytes, whereas the ELAD system employs hepatocytes derived from the human hepatoblastoma C3A cell line. [ 19 ] [ 20 ] In fulminant hepatic failure (FHF), both techniques can improve hepatic encephalopathy grade and biochemical parameters. Documented potential side effects include immunological issues (porcine endogenous retrovirus transmission), infectious complications, and tumor transmigration.
Other biological hepatic systems are the Bioartificial Liver Support System (BLSS) and the Radial Flow Bioreactor (RFB). The detoxification capacity of these systems is poor, and they must therefore be combined with other systems to mitigate this deficiency. Today, their use is limited to centers with extensive experience in their application. [ 21 ]
A bioartificial liver device (BAL) is an artificial extracorporeal liver support (ELS) system for an individual who is suffering from acute liver failure (ALF) or acute-on-chronic liver failure (ACLF). The fundamental difference between artificial and BAL systems lies in the inclusion of hepatocytes in the reactor, often operating alongside the purification circuits used in artificial ELS systems. The overall design varies between BAL systems, but they largely follow the same basic structure, with the patient's blood or plasma flowing through an artificial matrix housing hepatocytes. Plasma is often separated from the patient's blood to improve the efficiency of the system, and the device can be connected to artificial liver dialysis devices to further increase its effectiveness in filtering toxins. The inclusion of functioning hepatocytes in the reactor allows the restoration of some of the synthetic functions that the patient's liver is lacking. [ 22 ]
The first bioartificial liver device was developed in 1993 by Dr. Achilles A. Demetriou at Cedars-Sinai Medical Center. Using a 20-inch-long, 4-inch-wide plastic cylinder filled with cellulose fibers and pig liver cells, the bioartificial liver helped an 18-year-old southern California woman survive without her own liver for 14 hours until she received a human liver. Blood was routed outside the patient's body and through the artificial liver before being returned to the body. [ 23 ] [ 24 ]
Dr. Kenneth Matsumura's work on the BAL led it to be named an invention of the year by Time magazine in 2001. [ 25 ] Liver cells obtained from an animal were used instead of developing a piece of equipment for each function of the liver . The structure and function of the first device also resemble those of today's BALs. Animal liver cells are suspended in a solution, and a patient's blood is processed across a semipermeable membrane that allows toxins and blood proteins to pass but restricts an immunological response. [ 25 ]
Advancements in bioengineering techniques in the decade after Matsumura's work have led to improved membranes and hepatocyte attachment systems. [ 26 ] Cell sources now include primary porcine hepatocytes, primary human hepatocytes, human hepatoblastoma (C3A), immortalized human cell lines and stem cells . [ 26 ]
The purpose of BAL-type devices is not to permanently replace liver functions , but to serve as a supportive device, [ 27 ] either allowing the liver to regenerate properly after acute liver failure , or bridging the individual's liver functions until a transplant is possible.
BALs are essentially bioreactors , with embedded hepatocytes (liver cells ) that perform the functions of a normal liver . They process oxygenated blood plasma , which is separated from the other blood constituents. [ 28 ] Several types of BALs are being developed, including hollow fiber systems and flat membrane sheet systems. [ 29 ]
Various types of hepatocytes are used in these devices. Porcine hepatocytes are often used because of their ease of acquisition and low cost; however, they are relatively unstable and carry the risk of cross-species disease transmission. [ 30 ] Primary human hepatocytes sourced from donor organs are costly and difficult to obtain, especially given the current shortage of transplantable tissue. [ 30 ] In addition, questions have been raised about tissue collected from patients transmitting malignancy or infection via the BAL device. Several human hepatocyte lines are also used in BAL devices, including the C3A and HepG2 tumour cell lines, but because they originate from hepatomas, they carry the potential to pass malignancy on to the patient. [ 31 ] There is ongoing research into the cultivation of new types of human hepatocytes capable of improved longevity and efficacy in a bioreactor over currently used cell types, without the risk of transferring malignancy or infection, such as the HepZ cell line created by Werner et al. [ 32 ]
Similar to kidney dialysis , hollow fiber systems employ a hollow fiber cartridge. Hepatocytes are suspended in a gel solution such as collagen , which is injected into a series of hollow fibers. In the case of collagen, the suspension is then gelled within the fibers, usually by a temperature change. The hepatocytes then contract the gel through their attachment to the collagen matrix, reducing the volume of the suspension and creating a flow space within the fibers. Nutrient media is circulated through the fibers to sustain the cells. During use, plasma is removed from the patient's blood and fed into the space surrounding the fibers. The fibers, which are composed of a semi-permeable membrane, facilitate the transfer of toxins, nutrients and other chemicals between the blood and the suspended cells. The membrane also keeps immune bodies, such as immunoglobulins , from reaching the cells, preventing immune system rejection. [ 33 ]
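The membrane exchange just described can be pictured as first-order transfer between a plasma compartment and a cell compartment, with the hepatocytes clearing what reaches them. A toy model for illustration only (the rate constants are arbitrary, not device parameters):

```python
# Toy two-compartment picture of the hollow-fibre exchange described
# above: toxin diffuses from plasma across the membrane to the cell
# side, where hepatocytes metabolize it. All parameters are arbitrary.

def simulate(c_plasma: float = 1.0, c_cells: float = 0.0,
             k_membrane: float = 0.3, k_clearance: float = 0.5,
             dt: float = 0.1, steps: int = 50) -> tuple[float, float]:
    for _ in range(steps):
        flux = k_membrane * (c_plasma - c_cells)        # diffusive transfer
        c_plasma -= flux * dt
        c_cells += (flux - k_clearance * c_cells) * dt  # cells clear toxin
    return c_plasma, c_cells

plasma, cells = simulate()
print(f"plasma toxin after the run: {plasma:.2f} (started at 1.00)")
print(f"cell-side toxin: {cells:.2f}")
```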
Currently, hollow-fibre bioreactors are the most widely accepted design for clinical use, because their capillary-like network allows easy perfusion of plasma across cell populations. [ 34 ] However, these structures have limitations: convective transport issues, nutrient gradients, non-uniform seeding, inefficient immobilisation of cells, and reduced hepatocyte growth restrict their effectiveness in BAL designs. [ 35 ] Researchers are now investigating the use of cryogels to replace hollow fibres as the cell-carrier components in BAL systems.
Cryogels are super-macroporous three-dimensional polymers prepared at sub-zero temperatures by freezing a solution of cryogel precursors and solvent. The pores develop during this freezing process: as the cryogel solution cools, the solvent begins to form crystals, which increases the concentration of the cryogel precursors in the remaining solution, initiating the cryogelation process and forming the polymer walls. As the cryogel warms, the solvent crystals thaw, leaving cavities that form the pores. [ 36 ] Cryogel pores range from 10 to 100 μm in size, forming an interconnected network that mimics a capillary system with a very large surface-area-to-volume ratio, supporting large numbers of immobilised cells. Cryogels also support convection-mediated transport, enabling even distribution of nutrients and elimination of metabolites and overcoming some of the shortcomings of hollow-fibre systems. [ 35 ] Cryogel scaffolds demonstrate good mechanical strength and biocompatibility without triggering an immune response, improving their potential for long-term inclusion in BAL devices or in-vitro use. [ 37 ] Another advantage of cryogels is their flexibility for use in a variety of tasks, including separation and purification of substances, along with acting as an extracellular matrix for cell growth and proliferation. Immobilisation of specific ligands onto cryogels enables adsorption of specific substances, supporting their use as treatment options for toxins, [ 38 ] for separation of haemoglobin from blood, [ 39 ] and as a localised and sustained method of drug delivery. [ 40 ]
Developing an effective bioartificial liver (BAL) remains a formidable challenge, as it requires the intricate optimization of cell colonization, biomaterial scaffold design, and BAL fluid dynamics. Expanding upon prior research indicating their potential as blood perfusion devices for detoxification, some studies have explored Arg-Gly-Asp (RGD)-containing poly(2-hydroxyethyl methacrylate) (pHEMA)-alginate cryogels as scaffolds for BAL. These cryogels, incorporating alginate to mitigate protein fouling and functionalized with an RGD-containing peptide to enhance hepatocyte adhesion, represent a promising avenue for BAL scaffold development. Methods for characterizing internal flow within the porous cryogel matrix, such as particle image velocimetry (PIV), enable visualization of flow dynamics. PIV analysis revealed laminar flow within the cryogel pores, prompting the design of a multi-layered bioreactor consisting of spaced cryogel discs to optimize blood/hepatocyte mass exchange. Compared with the column configuration, the stacked bioreactor demonstrated significantly elevated production of albumin and urea, alongside enhanced cell colonization and proliferation over time. [ 35 ]
Recent developments in bioartificial livers (BALs) using living liver cells have shown promising advances in liver support and regeneration. These developments focus on utilizing various cell sources, scaffold materials, and bioreactor designs to enhance the functionality and viability of BAL systems. Key advancements include:
Cell Sources: Researchers have explored different cell sources for BAL, including primary hepatocytes, stem cell-derived hepatocyte-like cells, and immortalized liver cell lines. Efforts have been made to optimize cell culture conditions to maintain cell viability and functionality within BAL systems.
Scaffold Materials: Biomaterial scaffolds play a critical role in providing structural support and facilitating cell attachment and proliferation in BAL systems. Recent studies have investigated the use of natural and synthetic materials, such as hydrogels, alginate, and decellularized liver scaffolds, to create biomimetic environments conducive to liver cell growth and function.
Bioreactor Designs: Innovative bioreactor designs have been developed to enhance the performance of BAL systems by optimizing mass transfer, fluid dynamics, and cell-matrix interactions. These designs include perfusion-based bioreactors, microfluidic devices, and three-dimensional (3D) bioprinted constructs, which aim to mimic the physiological microenvironment of the liver and promote liver cell function and survival.
Functional Assessment: Advances in bioanalytical techniques have enabled researchers to assess the functionality of liver cells within BAL systems more accurately. These techniques include measuring the secretion of liver-specific biomarkers, such as albumin, urea, and bile acids, as well as evaluating metabolic activity, drug metabolism, and detoxification capacity.
There have been numerous clinical studies involving hollow-fibre bioreactors. Overall, they show promise but do not provide statistically significant evidence supporting their effectiveness, generally because of inherent design limitations: convective transport issues, nutrient gradients, non-uniform seeding, inefficient immobilisation of cells, and reduced hepatocyte growth. [ 35 ] As of this writing, no cryogel-based devices have entered clinical trials, although laboratory results have been promising [ 35 ] [ 41 ] and clinical trials may follow.
The HepatAssist , developed at the Cedars-Sinai Medical Center, is a BAL device containing porcine hepatocytes within a hollow-fibre bioreactor. These semi-permeable fibres act as capillaries, allowing the perfusion of plasma through the device, and across the hepatocytes surrounding the fibres. The system incorporates a charcoal column to act as a filter, removing additional toxins from the plasma. [ 42 ]
Demetriou et al. [ 42 ] carried out a large, randomised, multicentre, controlled trial of the safety and efficacy of the HepatAssist device. 171 patients with ALF stemming from viral hepatitis, paracetamol overdose or other drug complications, primary non-function (PNF), or of indeterminate aetiology were involved in the study and were randomly assigned to either the experimental or the control group. At the primary end-point of 30 days post admission, the survival rate was higher in BAL patients than in control patients (71% vs 62%), but the difference was not significant. However, when patients with PNF are excluded from the results, there is a 44% reduction in mortality for BAL-treated patients, a statistically significant advantage. The investigators noted that excluding PNF patients is justifiable owing to early retransplantation and the lack of intracranial hypertension, so HepatAssist would give little benefit to this group. For the secondary end-point of time-to-death, in patients with ALF of known aetiology there was a significant difference between the BAL and control groups, with BAL patients surviving longer. There was, however, no significant difference for patients of unknown aetiology.
The conclusions of the study suggest that such a device is potentially important as a treatment measure. While the overall findings were not statistically significant, when the aetiology of the patients was taken into account, the BAL group gained a statistically significant reduction in mortality over the control group. This suggests that while the device may not be applicable as a blanket treatment for liver dysfunction, it can provide an advantage when patient heterogeneity is considered and it is used in patients of specific aetiology.
The Extracorporeal Liver Assist Device (ELAD) is a human-cell based treatment system. A catheter removes blood from the patient, and an ultrafiltrate generator separates the plasma from the rest of the blood. This plasma is then run through a separate circuit containing cartridges filled with C3A cells, before being returned to the main circuit and re-entering the patient. [ 43 ]
Thompson et al. [ 43 ] performed a large open-label trial measuring the effectiveness of ELAD in patients with severe alcoholic hepatitis resulting in ACLF. The study screened patients at 40 sites across the US, UK, and Australia and enrolled a total of 203 patients, randomised to either ELAD (n=96) or standard medical care (n=107), with even distribution in terms of sex, MELD score, and bilirubin levels. Of the 96 patients in the ELAD group, 45 completed the full 120 hours of treatment; the rest were unable to complete the full regimen for a variety of reasons, including withdrawal of consent or severe adverse events, though 37 completed more than 72 hours of treatment, and results showed minimal difference in mortality between those receiving more than 72 hours and those receiving the full 120 hours. The study did not meet its goal, finding no statistically significant improvement in mortality for patients receiving ELAD treatment over those receiving standard care at 28 and 91 days (76.0% versus 80.4% and 59.4% versus 61.7%, respectively). Biomarker measurements showed significantly reduced levels of bilirubin and alkaline phosphatase in ELAD patients, though neither improvement translated into increased survival. Outcomes for patients with a MELD score below 28 showed trends towards improved survival on ELAD, whereas those with MELD above 28 had decreased survival on ELAD. These patients presented with raised creatinine from kidney failure, suggesting a reason why ELAD decreased survival relative to standard care: unlike artificial ELS devices and HepatAssist, ELAD does not incorporate any filtration devices, such as charcoal columns and exchange resins, and therefore cannot replace the filtration capability of the kidneys or compensate for the multi-organ failure of more severe presentations of ACLF, resulting in increased mortality.
While the study cannot provide conclusive evidence that a BAL device like ELAD improves the outcome of severe ACLF, it does suggest that such a device can aid the survival of patients with a less severe form of the disease. In patients with a MELD score below 28, beneficial effects were seen 2–3 weeks post treatment, suggesting that while C3A-incorporating BAL devices cannot provide the short-term aid of artificial albumin filtration devices, they instead provide longer-term aid in the recovery of the patient's liver. [ 43 ]
A randomized, phase 3 trial of the ELAD device in patients with severe alcoholic hepatitis failed to show benefit on overall survival and development was discontinued. [ 44 ]
Artificial liver support systems aim to temporarily replace native liver detoxification functions, using albumin as a scavenger molecule to clear the toxins involved in the pathophysiology of the failing liver. Most of the toxins that accumulate in the plasma of patients with liver insufficiency are protein-bound, and conventional renal dialysis techniques, such as hemofiltration , hemodialysis or hemodiafiltration , are therefore unable to eliminate them adequately.
Liver dialysis has shown promise for patients with hepatorenal syndrome . It is similar to hemodialysis and based on the same principles, but hemodialysis does not remove toxins bound to albumin that accumulate in liver failure. Like a bioartificial liver device , it is a form of artificial extracorporeal liver support. [ 45 ] [ 46 ]
A critical issue of the clinical syndrome in liver failure is the accumulation of toxins not cleared by the failing liver . Based on this hypothesis, the removal of lipophilic , albumin-bound substances such as bilirubin , bile acids , metabolites of aromatic amino acids , medium-chain fatty acids and cytokines should be beneficial to the clinical course of a patient in liver failure. This led to the development of artificial filtration and absorption devices.
Liver dialysis is performed by physicians and surgeons and specialized nurses with training in gastroenterological medicine and surgery, namely, in hepatology , alongside their colleagues in the intensive or critical care unit and the transplantation department, which is responsible for procuring and implanting a new liver, or a part (lobe) of one, if and when it becomes available in time and the patient is eligible. Because of the need for these experts, as well as the relative newness of the procedure in certain areas, it is usually available only in larger hospitals, such as level I trauma center teaching hospitals connected with medical schools.
Between the different albumin dialysis modalities, single pass albumin dialysis (SPAD) has shown some positive results at a very high cost; [ 47 ] it has been proposed that lowering the concentration of albumin in the dialysate does not seem to affect the detoxification capability of the procedure. [ 48 ] Nevertheless, the most widely used systems today are based on hemodialysis and adsorption. These systems use conventional dialysis methods with an albumin containing dialysate that is later regenerated by means of adsorption columns, filled with activated charcoal and ion exchange resins.
At present, there are two artificial extracorporeal liver support systems: the Molecular Adsorbents Recirculating System (MARS) from Gambro and Fractionated Plasma Separation and Adsorption (FPSA), commercialised as Prometheus (PROM) from Fresenius Medical Care . Of the two therapies, MARS is the most frequently studied, and clinically used system to date.
While the technique is in its infancy, the prognosis of patients with liver failure remains guarded. Liver dialysis is currently only considered to be a bridge to transplantation or liver regeneration (in the case of acute liver failure ) [ 49 ] [ 50 ] [ 51 ] and, unlike kidney dialysis (for kidney failure ), cannot support a patient for an extended period of time (months to years).
Artificial detoxification devices currently under clinical evaluation include the Single Pass Albumin Dialysis (SPAD), Molecular Adsorbent Recirculating System (MARS), Prometheus system, and Dialive.
Single pass albumin dialysis (SPAD) is a simple method of albumin dialysis using standard renal replacement therapy machines without an additional perfusion pump system: The patient's blood flows through a circuit with a high-flux hollow fiber hemodiafilter, identical to that used in the MARS system. The other side of this membrane is cleansed with an albumin solution in counter-directional flow, which is discarded after passing the filter. Hemodialysis can be performed in the first circuit via the same high-flux hollow fibers.
The Molecular Adsorbents Recirculation System (MARS) is the best known extracorporeal liver dialysis system. It consists of two separate dialysis circuits. The first circuit contains human serum albumin , is in contact with the patient's blood through a semipermeable membrane, and has two filters to clean the albumin after it has absorbed toxins from the patient's blood. The second circuit consists of a hemodialysis machine and is used to clean the albumin in the first circuit before it is recirculated to the semipermeable membrane in contact with the patient's blood.
SPAD, MARS and continuous veno-venous haemodiafiltration (CVVHDF) were compared in vitro with regard to detoxification capacity. [ 52 ] SPAD and CVVHDF showed a significantly greater reduction of ammonia compared with MARS. No significant differences could be observed between SPAD, MARS and CVVHDF concerning other water-soluble substances. However, SPAD enabled a significantly greater bilirubin reduction than MARS. Bilirubin serves as an important marker substance for albumin-bound (non-water-soluble) substances. Concerning the reduction of bile acids, no significant differences between SPAD and MARS were seen. It was concluded that the detoxification capacity of SPAD is similar to, or even higher than, that of the more sophisticated, more complex and hence more expensive MARS.
Albumin dialysis is a costly procedure: for a seven-hour treatment with MARS, approximately €300 for 600 mL of human serum albumin solution (20%), €1,740 for a MARS treatment kit, and €125 for disposables used by the dialysis machine have to be spent, so the cost of this therapy adds up to approximately €2,165. Performing SPAD according to the protocol by Sauer et al., however, requires 1,000 mL of human albumin solution (20%) at a cost of €500; a high-flux dialyzer costing approximately €40 and the tubings (€125) must also be purchased. The overall cost of a SPAD treatment is approximately €656, about 30% of the cost of an equally efficient MARS therapy session. The expenditure for the MARS monitor necessary to operate the MARS disposables is not included in this calculation.
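As a quick check, the quoted per-session figures can be tallied directly. The following minimal sketch (Python, with the article's prices hard-coded) reproduces the comparison; note that the itemized SPAD figures sum to €665, marginally above the quoted total, and that neither side includes the MARS monitor itself.

```python
# Minimal sketch of the per-session cost arithmetic quoted above.
# Figures are the article's quoted prices, not current list prices.

MARS_COSTS = {"albumin, 600 mL (20%)": 300, "MARS treatment kit": 1740, "dialysis disposables": 125}
SPAD_COSTS = {"albumin, 1000 mL (20%)": 500, "high-flux dialyzer": 40, "tubings": 125}

mars_total = sum(MARS_COSTS.values())  # 2165, matching the quoted ~EUR 2,165
spad_total = sum(SPAD_COSTS.values())  # 665; the text quotes ~EUR 656

print(f"MARS session: EUR {mars_total}")
print(f"SPAD session: EUR {spad_total} ({spad_total / mars_total:.0%} of a MARS session)")
```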
The Prometheus system ( Fresenius Medical Care, Bad Homburg , Germany) is a device based on the combination of albumin adsorption with high-flux hemodialysis after selective filtration of the albumin fraction through a specific polysulfone filter (AlbuFlow). It has been studied [ 53 ] in a group of eleven patients with hepatorenal syndrome (acute-on-chronic liver failure and accompanying kidney failure). Treatment for more than four hours on each of two consecutive days significantly improved serum levels of conjugated bilirubin, bile acids, ammonia, cholinesterase, creatinine, urea and blood pH. Prometheus was shown to be a safe supportive therapy for patients with liver failure.
Dialive (Yaqrit Limited, London, UK) incorporates albumin removal and replacement, and endotoxin removal. It is at " Technology readiness level " (TRL) 5, which means it has been validated in the disease environment. [ 54 ] [ 55 ]
MARS was developed by a group of researchers at the University of Rostock (Germany) in 1993 and was later commercialized for clinical use in 1999. [ 56 ] The system is able to replace the detoxification function of the liver while minimizing the inconvenience and drawbacks of previously used devices. [ 57 ] [ 58 ] [ 59 ]
In vivo preliminary investigations indicated the ability of the system to effectively remove bilirubin, biliary salts, free fatty acids and tryptophan, while important physiological proteins such as albumin, alpha-1-glycoprotein, alpha-1-antitrypsin, alpha-2-macroglobulin, transferrin and thyroxine-binding globulin, as well as hormonal systems, are unaffected. [ 60 ] Also, MARS therapy in conjunction with CRRT/HDF can help clear cytokines acting as inflammatory and immunological mediators in hepatocellular damage, and can therefore create the right environment to favour hepatocellular regeneration and recovery of native liver function.
MARS is an extracorporeal hemodialysis system composed of three different circuits: blood, albumin and low-flux dialysis. The blood circuit uses a double-lumen catheter and a conventional hemodialysis device to pump the patient's blood into the MARS FLUX, a biocompatible polysulfone high-flux dialyser. With a membrane surface area of 2.1 m 2 , a membrane thickness of 100 nm and a cut-off of 50 kDa, the MARS FLUX is essential to retaining the albumin in the dialysate . Blood is dialysed against a human serum albumin (HSA) dialysate solution, which allows detoxification of both water-soluble and protein-bound toxins by virtue of the albumin present in the dialysate (albumin dialysis). The albumin dialysate is then regenerated in a closed loop in the MARS circuit: it first passes through the fibres of the low-flux diaFLUX filter, where a standard dialysis fluid clears water-soluble toxins and provides electrolyte/acid-base balance. Next, the albumin dialysate passes through two different adsorption columns: protein-bound substances are removed by the diaMARS AC250, containing activated charcoal, and anionic substances are removed by the diaMARS IE250, filled with cholestyramine, an anion-exchange resin. The albumin solution is then ready to initiate another detoxifying cycle of the patient's blood, which can be sustained until both adsorption columns are saturated, eliminating the need to continuously infuse albumin into the system during treatment.
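The regeneration loop just described is essentially a fixed pipeline of stages. The sketch below models it as a simple ordered list; the stage names follow the text, but the encoding is purely illustrative and is not part of any MARS control software.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    clears: tuple  # solute classes handled at this stage, per the text

# Path of the albumin dialysate through one regeneration cycle.
ALBUMIN_LOOP = (
    Stage("MARS FLUX dialyser", ("loads water-soluble and protein-bound toxins from blood",)),
    Stage("diaFLUX low-flux filter", ("water-soluble toxins", "electrolyte/acid-base balance")),
    Stage("diaMARS AC250 (activated charcoal)", ("protein-bound substances",)),
    Stage("diaMARS IE250 (cholestyramine resin)", ("anionic substances",)),
)

for stage in ALBUMIN_LOOP:
    print(f"{stage.name}: {'; '.join(stage.clears)}")
```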
A systematic review of the literature from 1999 to June 2011 was performed in the following databases:
Hepatic encephalopathy (HE) represents one of the more serious extrahepatic complications associated with liver dysfunction. [ 61 ] [ 62 ] Neuro-psychiatric manifestations of HE affect consciousness and behaviour.
Evidence suggests that HE develops as neurotoxins and neuroactive substances, produced after hepatocellular breakdown, accumulate in the brain as a consequence of portosystemic shunting and the limited detoxification capability of the liver. The substances involved include ammonia, manganese, aromatic amino acids, mercaptans, phenols, medium-chain fatty acids, bilirubin, endogenous benzodiazepines, etc. The relationship between ammonia neurotoxicity and HE was first described in animal studies by Pavlov et al. [ 63 ] Subsequently, several studies in animals and humans have confirmed that an ammonia concentration gradient between the brain and the bloodstream greater than 2 mM causes HE, and even a comatose state when the value is greater than 5 mM. Some investigators have also reported a decrease in serum ammonia following a MARS treatment (Table 3).
Manganese and copper serum levels are increased in patients with either acute or acute-on-chronic liver failure. Nevertheless, only in patients with chronic hepatic dysfunction is a bilateral alteration of the globus pallidus observed on magnetic resonance imaging, [ 68 ] probably because these patients selectively show higher cerebral membrane permeability.
The imbalance between aromatic and branched-chain amino acids (Fischer index), traditionally implicated in the genesis of HE, [ 69 ] [ 70 ] [ 71 ] can be normalized following a MARS treatment. The effects are noticeable even after 3 hours of treatment, and this reduction in the Fischer index is accompanied by an improvement in the HE. [ 72 ]
Novelli G et al. [ 73 ] published their three-year experience with MARS, analyzing the impact of the treatment at the cerebral level in 63 patients and reporting an improvement in Glasgow Coma Score (GCS) in all patients. In the last 22 patients, cerebral perfusion pressure was monitored by Doppler (mean flow velocity in the middle cerebral artery), establishing a clear relationship between clinical (especially neurological) improvement and an improvement in arterial cerebral perfusion. This study confirms other results showing similar increments in cerebral perfusion in patients treated with MARS. [ 66 ]
More recently, several studies have shown a significant improvement of HE in patients treated with MARS. In the studies by Heemann et al. [ 74 ] and Sen et al. [ 75 ] an improvement in HE was considered present when the encephalopathy grade was reduced by one or more grades vs. baseline values; for Hassanein et al., in their randomized controlled trial, improvement was considered present when a decrease of two grades was observed. [ 76 ] In the latter, 70 patients with acute-on-chronic liver failure and encephalopathy grade III or IV were included. Likewise, Kramer et al. [ 77 ] assessed HE improvement as an improvement in peak N70 latency on electroencephalograms.
Sen et al. [ 75 ] observed a significant reduction in Child-Pugh score (p < 0.01) at 7 days following a MARS treatment, without any significant change in the controls. Nevertheless, when they looked at the Model for End-Stage Liver Disease (MELD) score, a significant reduction was recorded in both the MARS and control groups (p < 0.01 and p < 0.05, respectively).
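For context, the MELD score mentioned above is a logarithmic function of serum bilirubin, INR and creatinine. The sketch below implements the widely published classic (pre-2016) formula as a minimal reference; the official UNOS calculation includes additional policy details not shown here.

```python
import math

def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float,
               on_dialysis: bool = False) -> int:
    """Classic (pre-2016) MELD score. Inputs are clamped per the usual rules:
    values below 1.0 are set to 1.0, and creatinine is capped at 4.0 mg/dL
    (set to 4.0 if the patient is on dialysis)."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = 3.78 * math.log(bili) + 11.20 * math.log(inr) + 9.57 * math.log(crea) + 6.43
    return min(round(score), 40)

print(meld_score(bilirubin_mg_dl=3.2, inr=1.8, creatinine_mg_dl=1.4))  # prints 21
```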
Likewise, in several case series, an improvement in HE grade with MARS therapy is also reported. [ 78 ] [ 79 ] [ 80 ] [ 81 ] [ 82 ] [ 83 ] [ 84 ] [ 85 ] [ 86 ]
Hemodynamic instability is often associated with acute liver insufficiency, as a consequence of the endogenous accumulation of vasoactive agents in the blood. It is characterized by systemic vasodilatation, a decrease in systemic vascular resistance, arterial hypotension, and an increase in cardiac output that gives rise to a hyperdynamic circulation. During MARS therapy, the systemic vascular resistance index and mean arterial pressure have been shown to increase and improve. [ 78 ] [ 80 ] [ 82 ] [ 87 ] [ 88 ] Schmidt et al. [ 89 ] reported the treatment of 8 patients diagnosed with acute hepatic failure who were treated with MARS for 6 hours, compared with a control group of 5 patients to whom ice pads were applied to match the heat loss produced in the treatment group during the extracorporeal therapy. Hemodynamic parameters were analyzed hourly in both groups. In the MARS group, a statistically significant increase of 46% in systemic vascular resistance was observed (1215 ± 437 to 1778 ± 710 dyn·s·cm −5 ·m −2 ), compared with a 6% increase in the controls. Mean arterial pressure also increased (69 ± 5 to 83 ± 11 mmHg, p < 0.0001) in the MARS group, whereas no difference was observed in the controls. Cardiac output and heart rate also decreased in the MARS group as a consequence of an improvement in the hyperdynamic circulation. A statistically significant improvement was therefore obtained with MARS when compared with standard medical treatment (SMT).
Catalina et al. [ 90 ] have also evaluated systemic and hepatic hemodynamic changes produced by MARS therapy. In 4 patients with acute decompensation of chronic liver disease, they observed, after MARS therapy, an attenuation of the hyperdynamic circulation and a reduction in the portal pressure gradient. Results are summarized in Table 4.
Other studies with similar results are also worth mentioning: Heemann et al. [ 74 ] and Parés et al., [ 91 ] among others. Dethloff T et al. [ 92 ] concluded that there is a statistically significant improvement favouring MARS in comparison with the Prometheus system (Table 5).
Hepatorenal syndrome is one of the more serious complications in patients with acute decompensation of cirrhosis and increased portal hypertension. It is characterized by hemodynamic changes in the splanchnic, systemic and renal circulations. Splanchnic vasodilatation triggers the production of endogenous vasoactive substances that produce renal vasoconstriction and a low glomerular filtration rate, leading to oliguria with a concomitant reduction in creatinine clearance. The renal insufficiency is always progressive and carries a poor prognosis, [ 93 ] [ 94 ] with survival at 1 and 2 months of 20% and 10%, respectively.
Pierre Versin [ 95 ] is one of the pioneers in the study of hepatorenal syndrome in patients with liver impairment.
Great efforts have been made to improve the prognosis of this type of patient; however, few have solved the problem. Orthotopic liver transplantation is the only treatment that has been shown to improve the acute and chronic complications derived from severe liver insufficiency. Today it is possible to combine albumin dialysis with continuous veno-venous hemodiafiltration, which provides a greater expectation for these patients [ 96 ] through optimization of their clinical status.
MARS treatment lowers serum urea and creatinine levels, improving their clearance, [ 88 ] [ 89 ] [ 90 ] [ 92 ] and even favors resolution of hepatorenal syndrome. [ 74 ] [ 81 ] [ 82 ] [ 87 ] [ 97 ] These results are confirmed in a randomized controlled trial published by Mitzner et al., [ 93 ] in which 13 patients diagnosed with hepatorenal syndrome type I were treated with MARS therapy. Mean survival was 25.2 ± 34.6 days in the MARS group compared to 4.6 ± 1.8 days in the controls, who received hemodiafiltration and standard medical care. This resulted in a statistically significant difference in survival at 7 and 30 days (p < 0.05). The authors concluded that MARS therapy, applied to liver failure patients (Child-Pugh C and UNOS 2A scores) who develop hepatorenal syndrome type I, prolonged survival compared to patients treated with SMT.
Although the mechanisms explaining these findings are not yet fully understood, a decrease in plasma renin concentration has been reported in patients diagnosed with acute-on-chronic liver failure and renal impairment who were treated with MARS.
Likewise, other studies have suggested some efficacy for MARS in the treatment of hepatorenal syndrome. [ 98 ] [ 99 ] [ 100 ] However, other references have been published that do not show efficacy of MARS therapy in these types of patients. Khuroo et al. [ 101 ] published a meta-analysis based on 4 small RCTs and 2 non-RCTs in patients diagnosed with ACLF, concluding that MARS therapy did not bring any significant increment in survival compared with SMT.
Another observational study in 6 patients with cirrhosis, refractory ascites and hepatorenal syndrome type I not responding to vasoconstrictor therapy showed no impact on hemodynamics following MARS therapy; however, the authors concluded that MARS therapy could effectively serve as a bridge to liver transplantation. [ 83 ] [ 102 ]
Total bilirubin was the only parameter analyzed in all trials that was consistently reduced in the groups of patients treated with MARS; Banayosy et al. [ 103 ] measured bilirubin levels 14 days after MARS therapy was terminated and observed a consistent, significant decrease not only in bilirubin but also in creatinine and urea (Table 6).
The impact of MARS therapy on plasma bile acid levels was evaluated in 3 studies. The study by Stadbauer et al., [ 105 ] which specifically addressed the topic, reported that the MARS and Prometheus systems lower plasma bile acid concentrations to the same extent. Heemann et al. [ 74 ] and Laleman et al. [ 88 ] have also published a significant improvement for these organic ions.
Pruritus is one of the most common clinical manifestations in cholestatic liver diseases and one of the most distressing symptoms in patients with chronic liver disease caused by viral hepatitis C . Many hypotheses have been formulated to explain the physiopathogenesis of this manifestation, including increased plasma concentrations of bile acids, abnormalities in the bile ducts , [ 106 ] increased central neurotransmitters coupling to opioid receptors , [ 107 ] [ 108 ] etc. Despite the number of drugs historically used, individually or combined (exchange resins, hydrophilic bile acids, antihistamines, antibiotics, anticonvulsants, opioid antagonists), there are reported cases of intractable or refractory pruritus with a dramatic reduction in patients' quality of life (i.e. sleep disorders, depression, suicide attempts). [ 109 ] [ 110 ] Intractable pruritus can be an indication for liver transplantation.
The MARS indication for intractable pruritus is a therapeutic option that has been shown to be beneficial for patients in desperate cases, although at high cost. [ 111 ] [ 112 ] [ 113 ] [ 114 ] Several studies confirmed that after MARS treatments, patients remain free from pruritus for a period ranging from 6 to 9 months. [ 114 ] Nevertheless, some authors have concluded that, despite the good results found in the literature, the application of MARS therapy in refractory pruritus requires stronger evidence. [ 112 ]
The pharmacokinetics and pharmacodynamics of the majority of drugs can be significantly modified by liver failure, affecting the therapeutic approach and the potential toxicity of the drugs. In this type of patient, the Child-Pugh score is a poor prognostic factor for assessing the metabolic capacity of the failing liver.
In patients with hepatic failure, drugs that are metabolized only in the liver accumulate in the plasma soon after they are administered, and drug dosing therefore needs to be modified in both dose and dosing interval to lower the risk of toxicity. It is also necessary to adjust the dosing of those drugs that are exclusively metabolized by the liver and have low affinity for proteins and a high volume of distribution, such as the fluoroquinolones ( Levofloxacin and Ciprofloxacin ). [ 115 ] [ 116 ] [ 117 ] [ 118 ]
Extracorporeal detoxification with albumin dialysis increases the clearance of drugs that are bound to plasma proteins (Table 7).
In the meta-analysis published by Khuroo et al., [ 101 ] which included 4 randomized trials, [ 74 ] [ 89 ] [ 93 ] [ 103 ] no improvement in survival was observed for patients with liver failure treated with MARS compared with SMT.
However, neither the Cochrane review of extracorporeal liver support systems [ 119 ] (published in 2004) nor the meta-analysis by Kjaergard et al. [ 120 ] found a significant difference in survival for patients diagnosed with ALF treated with extracorporeal liver support systems.
Nevertheless, these reviews included all kinds of liver support systems and drew on heterogeneous types of publication (abstracts, clinical trials, cohort studies, etc.).
There is literature showing favorable survival results for patients diagnosed with ALF and treated with MARS. In a randomized controlled trial, Salibà et al. [ 121 ] studied the impact of MARS therapy on survival in patients with ALF waiting on the liver transplant list. Forty-nine patients received SMT and 53 were treated with MARS. They observed that patients who received 3 or more MARS sessions showed a statistically significant increase in transplant-free survival compared with the other patients in the study. Notably, 75% of the patients underwent liver transplantation within the first 24 hours after inclusion on the waiting list, and despite the short exposure to MARS therapy, patients treated with MARS prior to the transplant showed a better survival trend compared to controls.
In a case-controlled study by Montejo et al. [ 104 ] it was reported that MARS treatment does not decrease mortality directly; however, the treatment contributed to significantly improved survival in patients who were transplanted. In the studies by Mitzner et al. [ 93 ] and Heemann et al. [ 74 ] a statistically significant difference in 30-day survival was shown for patients in the MARS group. However, El Banayosy et al. [ 103 ] and Hassanein et al. [ 76 ] noticed a non-significant improvement in survival, probably because of the small number of patients included in the trials. In the majority of available MARS studies published on patients diagnosed with ALF, either transplanted or not, survival was greater in the MARS group, with some variation according to the type of trial, ranging from 20-30% [ 122 ] [ 123 ] to 60-80%. [ 83 ] [ 124 ] [ 125 ] [ 126 ] Data are summarized in Tables 8, 9 and 10.
For patients diagnosed with acute-on-chronic liver failure and treated with MARS therapy, clinical trial results showed a statistically non-significant reduction in mortality (odds ratio [OR] = 0.78; 95% confidence interval [CI]: 0.58–1.03; p = 0.1059) (Figure 3).
A non-statistically significant reduction of mortality was also shown in patients with ALF treated with MARS (OR = 0.75; 95% CI: 0.42–1.35; p = 0.3427) (Figure 4).
Combined results yielded a non-significant reduction in mortality in patients treated with MARS therapy. However, the low number of patients included in each of the studies may explain the failure to achieve enough statistical power to show differences between the two treatment groups. Moreover, heterogeneity in the number of MARS sessions and in the severity of liver disease of the patients included makes it very difficult to evaluate the impact of MARS on survival.
Recently, a meta-analysis on survival in patients treated with an extra-hepatic therapy has been published. [ 131 ] The search strategy yielded 74 clinical trials: 17 randomized controlled trials, 5 case-control studies and 52 cohort studies. Eight studies were included in the meta-analysis: three addressing acute liver failure, one of them with MARS therapy, [ 103 ] and five addressing acute-on-chronic liver failure, four of them MARS-related. [ 74 ] [ 75 ] [ 76 ] [ 93 ] The authors concluded that extra-hepatic detoxifying systems improve survival in acute liver insufficiency, whereas the results for acute decompensation of chronic liver disease suggested a non-significant survival benefit. Also, given the increased demand for liver transplantation together with the augmented risk of liver failure following large resections, the development of detoxifying extrahepatic systems is necessary.
Safety, defined as the presence of adverse events, has been evaluated in few trials. Adverse events in patients receiving MARS therapy are similar to those in the controls, with the exception of thrombocytopenia and hemorrhage, which seem to occur more frequently with the MARS system. [ 132 ]
Heemann et al. [ 74 ] reported two adverse events most probably related to MARS: fever and sepsis, presumably originating at the catheter.
In the study by Hassanein et al. , [ 76 ] two patients in the MARS group abandoned the study owing to hemodynamic instability, three patients required larger-than-average platelet transfusions and three more patients presented gastrointestinal bleeding.
Laleman et al. [ 88 ] detected one patient with thrombocytopenia in each of the MARS and Prometheus treatment groups, and an additional patient with clotting of the dialysis circuit and hypotension in the Prometheus group only.
Kramer et al. (Biologic-DT) [ 77 ] reported 3 cases of disseminated intravascular coagulation in the interventional group, two of them with fatal outcomes.
Mitzner et al. [ 93 ] described, among patients treated with MARS, one case of thrombocytopenia and a second patient, with chronic hepatitis B, who underwent TIPS placement on day 44 after randomization and died on day 105 of multiorgan failure, as a consequence of complications related to the TIPS procedure.
Montejo et al. [ 104 ] showed that MARS is an easy technique, without serious adverse events related to the procedure, and easy to implement in ICU settings accustomed to renal extracorporeal therapies.
The MARS International Registry, with data from more than 500 patients (although sponsored by the manufacturer), shows that the adverse effects observed are similar to those in the control group. However, in these severely ill patients it is difficult to distinguish between complications of the disease itself and side effects attributable to the technique.
Only three studies addressing the cost-effectiveness of MARS therapy have been found.
Hassanein et al. [ 133 ] analysed the costs of randomized patients with ACLF receiving MARS therapy or standard medical care. They used the study published in 2001 by Kim et al. [ 134 ] describing the impact of complications on hospitalization costs in patients diagnosed with alcoholic liver failure. The costs of 11 patients treated with standard medical care (SMT) were compared with those of 12 patients who received MARS in addition to SMT. In the MARS group, there was less in-hospital mortality and fewer complications related to the disease, with a remarkable reduction in cost which offset the MARS-related expenditure (Table 11).
There were 5 survivors in the control group, with a cost per patient of $35,904, whereas in the MARS group, 11 patients out of 12 survived, with a cost per patient of $32,036, representing a saving of approximately $4,000 per patient in favor of the MARS group.
Hessel et al. [ 135 ] published a 3-year follow-up of a cohort of 79 patients with ACLF, of whom 33 received MARS treatments and 46 received SMT. Survival was 67% for the MARS group and 63% for the controls, falling to 58% and 35% respectively at one-year follow-up, and to 52% and 17% at three years.
Hospitalization costs for the MARS-treated group were greater than those for the controls (€31,539 vs. €7,543), as were direct costs at 3-year follow-up (€8,493 vs. €5,194). Nevertheless, after adjusting for the mortality rate, the annual cost per patient was €12,092 for controls and €5,827 for the MARS group; for the latter, they also found an incremental cost-effectiveness ratio of €31,448 per life-year gained (LYG) and an incremental cost per QALY gained of €47,171.
Two years later, the same authors published the results of 149 patients diagnosed with ACLF. [ 136 ] There were 67 patients (44.9%) treated with MARS, and 82 patients (55.1%) were allocated to receive SMT. Mean survival time was 692 days in the MARS group (33% at 3 years) and 453 days in the controls (15% at 3 years); the difference was significant (p = 0.022). The difference in average cost was €19,853 (95% CI: €13,308–€25,429): €35,639 for MARS patients and €15,804 for the control group. The incremental cost per LYG was €29,985 (95% CI: €9,441–€321,761) and €43,040 (95% CI: €13,551–€461,856) per quality-adjusted life year (QALY).
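An incremental cost-effectiveness ratio (ICER) of this kind is simply the cost difference divided by the difference in life expectancy between the two groups. A back-of-the-envelope sketch using the figures above follows; it will not exactly reproduce the published €29,985/LYG, since the published analysis includes adjustments not detailed here.

```python
# Rough ICER check using the reported cohort figures (Hessel et al.).
delta_cost_eur = 19_853                       # reported mean cost difference, MARS vs. SMT
delta_survival_years = (692 - 453) / 365.25   # ~0.65 life-years gained on average

icer = delta_cost_eur / delta_survival_years
print(f"ICER ~ EUR {icer:,.0f} per life-year gained")  # roughly EUR 30,340 vs. the published EUR 29,985
```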
Liver support systems such as MARS are very important for stabilizing patients with acute or acute-on-chronic liver failure and avoiding organ dysfunction, as well as serving as a bridge to transplant. Although the initial in-hospital costs are high, they are justified by the favorable outcomes.
(Tables summarizing, for five clinical indications, the etiology, goals of MARS therapy, therapy indication and treatment schedule. [ 147 ] [ 148 ] [ 149 ] [ 97 ] [ 137 ] [ 140 ] [ 150 ])
The same contraindications as for any other extracorporeal treatment apply to MARS therapy.
Blood Flow
The trend is to use high flow rates, although the rate is determined by the technical specifications of the combined machine and the size of the catheters; separate settings apply to intermittent and continuous treatments.
Dialysate Flow Rate
Separate settings apply to intermittent and continuous treatments.
Replacement Flow Rate
Heparin Anticoagulation
Similarly to CVVHD, anticoagulation depends on the patient's prior coagulation status. In many cases it will not be needed, unless the patient presents a PTT below 160 seconds. In patients with normal values, a bolus of 5,000 to 10,000 IU of heparin may be administered at the start of the treatment, followed by a continuous infusion, to keep the PTT at ratios of 1.5 to 2.5, or 160 to 180 seconds.
Monitoring
A biochemical analysis (liver and kidney profiles, electrolytes, glucose) together with a hemogram is recommended at the end of the first session and before starting the following one.
A coagulation analysis must also be performed before starting the session in order to adjust the heparin dose.
If medication susceptible to elimination by MARS is being administered, monitoring of its blood levels is also recommended.
End of the Session
and both catheter lumens heparinized
The US Food and Drug Administration (FDA) cleared MARS therapy for the treatment of drug overdose and poisoning in a decision dated May 27, 2005. The only requirement is that the drug or poison must be amenable to dialysis and to removal by activated charcoal or anion-exchange resins.
More recently, on December 17, 2012, MARS therapy was cleared by the FDA for the treatment of hepatic encephalopathy due to decompensation of chronic liver disease. Clinical trials of MARS treatment in HE patients with decompensation of chronic liver disease demonstrated a transient effect, with hepatic encephalopathy scores decreasing significantly, by at least 2 grades, compared to standard medical therapy (SMT).
The MARS is not indicated as a bridge to liver transplant; its safety and efficacy for this use have not been demonstrated in controlled, randomized clinical trials.
The effectiveness of the MARS device in sedated patients could not be established in clinical studies and therefore cannot be predicted.
The LiverNet is a database dedicated to liver diseases treated with the support of extracorporeal therapies. To date, the most widely used system is the Molecular Adsorbent Recirculating System (MARS), which is based on the selective removal of albumin-bound molecules and toxins from the blood of patients with acute and acute-on-chronic liver failure. The purpose is to prospectively register all patients treated worldwide with the MARS system in order to:
The LiverNet is an eCRF database (www.livernet.net) using a SAS platform that offers major advantages for participating centres, including automatic calculation of most liver and ICU scoring systems, instant online queries, instant export of all patients included in each centre's database to an Excel file for direct statistical analysis, and instant online statistical analysis of selected data decided by the scientific committee. The LiverNet is therefore an important tool for advancing knowledge of liver support therapies.
Livestock branding is a technique for marking livestock so as to identify the owner. Originally, livestock branding only referred to hot branding large stock with a branding iron , though the term now includes alternative techniques. Other forms of livestock identification include freeze branding , inner lip or ear tattoos , earmarking , ear tagging , and radio-frequency identification (RFID), which is tagging with a microchip implant . The semi-permanent paint markings used to identify sheep are called a paint or color brand. In the American West , branding evolved into a complex marking system still in use today.
The act of marking livestock with fire-heated marks to identify ownership has origins in ancient times, with use dating back to the ancient Egyptians around 2,700 BCE. [ 1 ] Among the ancient Romans , the symbols used for brands were sometimes chosen as part of a magic spell aimed at protecting animals from harm. [ 2 ]
In English lexicon, the word "brand", common to most Germanic languages (from which root also comes "burn", cf. German Brand "burning, fire"), originally meant anything hot or burning, such as a "firebrand", a burning stick. By the European Middle Ages , it commonly identified the process of burning a mark into stock animals with thick hides, such as cattle , so as to identify ownership under animus revertendi . The practice became particularly widespread in nations with large cattle grazing regions, such as Spain .
These European customs were imported to the Americas and were further refined by the vaquero tradition in what today is the southwestern United States and northern Mexico . In the American West , a "branding iron" consisted of an iron rod with a simple symbol or mark, which cowboys heated in a fire. After the branding iron turned red hot, the cowboy pressed the branding iron against the hide of the cow. The unique brand meant that cattle owned by multiple ranches could then graze freely together on the open range. Cowboys could then separate the cattle at "roundup" time for driving to market . Cattle rustlers using running irons were ingenious in changing brands. [ 3 ] The most famous brand change involved the making of the X I T brand into the Star-Cross brand, a star with a cross inside. [ 4 ] [ 5 ] Brands became so numerous that it became necessary to record them in books that the ranchers could carry in their pockets. Laws were passed requiring the registration of brands, and the inspection of cattle driven through various territories. Penalties were imposed on those who failed to obtain a bill of sale with a list of brands on the animals purchased. [ 6 ]
From the Americas, many cattle branding traditions and techniques spread to Australia , where a distinct set of traditions and techniques developed. Livestock branding has been practiced in Australia since 1866, but after 1897 owners had to register their brands. These fire and paint brands could not then be duplicated legally.
Free-range or open-range grazing is less common today than in the past. However, branding still has its uses. The main purpose is in proving ownership of lost or stolen animals. Many western US states have strict laws regarding brands, including brand registration, and require brand inspections. In many cases, a brand on an animal is considered prima facie proof of ownership. (See Brand Book )
In the hides and leather industry, brands are treated as a defect, and can diminish the value of hides. This industry has a number of traditional terms relating to the type of brand on a hide. "Colorado branded" (slang "Collie") refers to placement of a brand on the side of an animal, although this does not necessarily indicate the animal is from Colorado . "Butt branded" refers to a hide which has had a brand placed on the portion of the skin covering the rump area of the animal. A native hide is one without a brand. [ 7 ]
Outside of the livestock industry, hot branding was used in 2003 by tortoise researchers to provide a permanent means of unique identification of individual Galapagos tortoises being studied. In this case, the brand was applied to the rear of the tortoises' shells. This technique has since been superseded by implanted PIT microchips (combined with ID numbers painted on the shell). [ 8 ]
The traditional cowboy or stockman captured and secured an animal for branding by roping it, laying it over on the ground, tying its legs together, and applying a branding iron that had been heated in a fire. Modern ranch practice has moved toward use of chutes where animals can be run into a confined area and safely secured while the brand is applied. Two types of restraint are the cattle crush or squeeze chute (for larger cattle), which may close on either side of a standing animal, or a branding cradle, where calves are caught in a cradle which is rotated so that the animal is lying on its side.
Bronco branding is an old method of catching cleanskin (unbranded) cattle on Top End cattle stations for branding in Australia. A heavy horse, usually with some draught horse bloodlines and typically fitted with a harness horse collar , is used to rope the selected calf. The calf is then pulled up to several sloping topped panels and a post constructed for the purpose in the centre of the yard. The unmounted stockmen then apply leg ropes and pull it to the ground to be branded, earmarked and castrated (if a bull) there. With the advent of portable cradles, this method of branding has been mostly phased out on stations. However, there are now quite a few bronco branding competitions at rodeos and campdrafting days, etc. [ 9 ]
Some ranches still heat branding irons in a wood or coal fire; others use an electric branding iron or electric sources to heat a traditional iron. Gas-fired branding iron heaters are quite popular in Australia, as iron temperatures can be regulated and there is not the heat of a nearby fire. Regardless of heating method, the iron is only applied for the amount of time needed to remove all hair and create a permanent mark. Branding irons are applied for a longer time to cattle than to horses, due to the differing thicknesses of their skins. If a brand is applied too long, it can damage the skin too deeply, thus requiring treatment for potential infection and longer-term healing. Branding wet stock may result in the smudging of the brand. Brand identification may be difficult on long-haired animals, and may necessitate clipping of the area to view the brand.
Horses may also be branded on their hooves , [ 10 ] but this is not a permanent mark, so needs to be redone about every six months. In the military, some brands indicated the horses' army and squadron numbers. These identification numbers were used on British army horses so dead horses on the battlefield could be identified. The hooves of the dead horses were then removed and returned to the Horse Guards with a request for replacements. This method was used to prevent fraudulent requests for horses. [ 11 ] Merino rams and bulls are sometimes firebranded on their horns for permanent individual identification.
Some types of identification are not permanent. Temporary branding may be achieved by heat branding so that the hair is burned but the skin is not damaged. Because this persists only until the animal sheds its hair, it is not considered a properly applied brand. [ 12 ] Other temporary, but for a time persistent, marking methods include tagging and nose printing. Tagging usually uses a numbering system to identify animals in a herd: a letter and a number are combined to represent the year born and the birth order, and the tag is then attached to the animal's ear or to some form of neck collar. Nose printing, or the use of indelible ink elsewhere on the skin and hair, is used at some farms, sales and exhibitions. This method is like fingerprinting: it uses ink and cannot be modified. As hair or skin cells shed, the mark eventually fades.
Microchip identification and lip or ear tattooing are generally permanent, though microchips can be removed and tattoos sometimes fade over many years. [ 13 ] Microchips are used on many animals, and are particularly popular with horses, as the chip leaves no external marks. Tattooing the inside of the upper lip of horses is required for many racehorses , though in some localities, microchips are beginning to replace tattoos.
Temporary branding is particularly common for sheep and goats. Ear marking or tattooing is usually used on goats under eight weeks of age because regular branding would harm them. Similar techniques are also used on sheep. [ 14 ] Temporary branding on sheep is done with paint, crayons, spray markers, chalk and similar materials, and can last for up to several months at a time. The sheep's identification number is painted or sprayed onto their sides or back with an indelible but non-toxic paint designed for the purpose. [ 15 ]
In stark contrast to traditional hot-iron branding, freeze branding uses an iron that has been chilled with a coolant such as dry ice or liquid nitrogen . Instead of burning a scar into the animal's skin, a freeze brand damages the pigment-producing hair cells, causing the animal's hair to grow back white within the branded area. This white-on-dark pattern is prized by cattle ranchers as its contrast allows some range work to be conducted with binoculars rather than individual visits to every animal. Scientists also value the technique for keeping tabs on studied wildlife without having to approach to read, for example, an ear tag.
To apply a freeze brand, the hair coat of the animal is first shaved very closely so that bare skin is exposed. The frozen iron is then pressed to the animal's bare skin for a period of time that varies with both the species of animal and the color of its hair coat. Shorter times are used on dark-colored animals, as this causes follicle melanocyte death and hence permanent pigment loss when the hair regrows. Longer times (sometimes as little as five additional seconds) are needed for animals with white hair coats. In these cases the brand is applied for long enough to kill the cells of the growth follicle outright, preventing them from regrowing new hair filaments and leaving the animal permanently bald in the branded area. The somewhat darker epidermis then contrasts well with a pale animal's coat.
Horses are frequently freeze-branded. Neither hogs nor birds can presently be freeze branded successfully, as their hair pigment cells are better protected. Other downsides of freeze branding include its time consuming preparation, greater expense in material and time, low tolerance for sloppy application, long wait until success (sometimes as much as five months) and absence of legal grounding in some American states. [ 16 ] When an animal grows a long hair coat the freeze brand is still visible, but its details are not always legible. Thus it is sometimes necessary to shave or closely trim the hair to obtain a sharper view of the freeze brand.
Besides livestock, freeze branding can also be used on wild, hairless animals such as dolphins for purposes of tracking individuals. [ 17 ] [ 18 ] [ 19 ] The brand appears as a white mark on their bare skin and can last for decades. [ 20 ]
Immediately after the freeze branding iron is removed from the skin, an indented outline of the brand will be visible. Within seconds, however, the outline will disappear and within several minutes after that, the brand outline will reappear as swollen, puffy skin. Once the swelling subsides, for a short time, the brand will be difficult or impossible to see, but in a few days, the branded skin will begin to flake, and within three to four weeks, the brand will begin to take on its permanent appearance.
In Australia, all Arabians , Part Bred Arabians, Australian Stock Horses , [ 21 ] Quarter Horses [ 22 ] and Thoroughbreds [ 23 ] must be branded with an owner brand on the near (left) shoulder and an individual foaling drop number (in relation to the other foals) over the foaling year number on the off shoulder. In Queensland , these three brands may be placed on the near shoulder in the above order. Stock Horse and Quarter Horse classification brands are placed on the hindquarters by the classifiers.
Thoroughbreds and Standardbreds in Australia and New Zealand are freeze branded. Standardbred brands are in the form of the Alpha Angle Branding System (AABS), [ 24 ] which the United States also uses. [ 25 ] [ 26 ]
In the United States, branding of horses is not generally mandated by the government; however, there are a few exceptions: captured Mustangs made available for adoption by the BLM are freeze branded on the neck, usually with the AABS or with numbers, for identification. Horses that test positive for equine infectious anemia , that are quarantined for life rather than euthanized , will be freeze branded for permanent identification. Race horses of any breed are usually required by state racing commissions to have a lip tattoo , to be identified at the track. Some breed associations have, at times, offered freeze branding as either a requirement for registration or simply as an optional benefit to members, and individual horse owners may choose branding as a means by which to permanently identify their animals. As of 2011, the issue of whether to mandate horses be implanted with RFID microchips under the National Animal Identification System generated considerable controversy in the United States.
Most brands in the United States include capital letters or numerals , often combined with other symbols such as a slash, circle, half circle, cross, or bar. Brands of this type have a specialized language for "calling" the brand; reading a brand aloud is referred to as "calling the brand". Some owners prefer to use simple pictures ; these brands are called using a short description of the picture (e.g., "rising sun"). Brands are called from left to right, top to bottom, and when one character encloses another, from outside to inside. [ 27 ] The reading of complex brands and picture brands depends at times upon the owner's interpretation, may vary depending upon location, and may require an expert to identify some of the more complex marks.
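The calling order just described (left to right, top to bottom, and an enclosing symbol before the one it encloses) can be thought of as a simple sort over the brand's elements. The toy sketch below illustrates the idea; the positional encoding and the sample brand are invented for illustration and are not a standard brand notation.

```python
# Toy illustration of brand "calling" order. Each element is encoded as an
# invented tuple (row, column, nesting_depth, name); sorting by that tuple
# reproduces the left-to-right, top-to-bottom, outside-to-inside reading.

elements = [
    (0, 1, 0, "Bar"),     # a bar on the top row
    (1, 0, 0, "Circle"),  # a circle on the lower row, enclosing...
    (1, 0, 1, "K"),       # ...the letter K
]

called = sorted(elements, key=lambda e: (e[0], e[1], e[2]))
print(" ".join(name for _, _, _, name in called))  # prints "Bar Circle K"
```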
In general, the following usage of the term "symbol" usually means a capital letter. Uncapitalized letters are not used. Brands are usually “read” top to bottom and left to right. There are regional variations in how brands are read, and deference is given to the terminology preferred by the owner of the brand. Terms used include:
Combinations of symbols can be made with each symbol distinct, or:
Livestock branding causes pain to the animals being branded, as seen in behavioural and physiological indicators. Both hot and freeze branding produce thermal injury to the skin, but hot-iron branding creates more inflammation and pain than freeze branding does. [ 28 ] Although alternative methods of identification such as ear tags have been suggested, the practice of branding is still common worldwide. [ 29 ]
A standard hot-iron brand can take about eight weeks to heal. The use of analgesics helps reduce discomfort. [ 30 ] Topical treatments such as cooling gels help speed healing in pigs, but the results are less clear for cattle. [ 31 ] Common concerns include how long the animal is restrained, the size and location of the brand, and whether analgesics are applied for pain relief. A 2018 study in Sri Lanka , where hot-iron branding is illegal but still widely practiced, concluded that it impairs animal welfare and that there is no real way to improve the procedure. [ 29 ] However, this particular study looked at four small dairy farms that used a technique in which multiple applications of irons ("drawing") created large brands extending across the ribs, took at least a full minute to apply and required 10 weeks to heal. In contrast, in nations such as the United States and Australia, pre-shaped brands are used to stamp the brand on an animal, applied for 1–5 seconds. Although branding is painful, from a welfare perspective stamping is preferable to drawing, as less time is needed to apply the brand. [ 29 ]
Liveware was used in the computer industry as early as 1966 to refer to computer users, often in humorous contexts, [ 1 ] by analogy with hardware and software . [ 2 ]
It is a slang term used to denote people using (attached to) computers, and is based on the need for a human, or liveware, to operate the system using hardware and software. Other words meaning the same or similar to liveware include wetware , meatware and jellyware. Meatware and jellyware are most often used by internal customer support personnel as slang terms when referencing human operating errors.
The term liveware is found in the Culture novels by Iain M. Banks . A Culture Ship is named "Liveware Problem".
Livewire is an audio-over-IP system created by Axia Audio, a division of Telos Alliance . Its primary purpose is routing and distributing broadcast-quality audio in radio stations .
The original Livewire standard was introduced in 2003 and has since been superseded by a second version, Livewire+. Livewire+ includes compatibility with the AES67 and Ravenna standards to allow interoperability with equipment from other manufacturers. Designed as a superset of Livewire functionality utilizing common protocols and formats, Livewire+ is available as an open standard through Axia's Livewire+ Partner Program.
Livewire+ provides flexible routing and transport of audio streams using multicast networking, with the ability to connect any input to any output (known as "anywhere-to-anywhere routing"). [ citation needed ] Distribution utilises standard IP and Ethernet over twisted pair cabling.
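At the network level, receiving a multicast audio stream of this kind amounts to joining a multicast group on the appropriate UDP port. The sketch below shows a generic multicast receiver in Python; the group address and port are placeholders for illustration, not documented Livewire assignments, which are managed by the Axia equipment itself.

```python
import socket
import struct

# Placeholder values: real Livewire channel addresses and ports are assigned
# by the routing equipment and are not hard-coded here.
GROUP = "239.192.0.1"
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on the default interface.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, addr = sock.recvfrom(2048)  # one audio packet from the stream
print(f"received {len(packet)} bytes from {addr}")
```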
Livewire systems use a number of dedicated network ports and protocols, including a channel for source allocation state announcements and responses. [ 3 ] [ 4 ] [ 5 ]
The Living Building Challenge is an international sustainable building certification program created in 2006. It is managed by the non-profit International Living Future Institute. [ 1 ] It is described by the Institute as a philosophy, advocacy tool and certification program that promotes the measurement of sustainability in the built environment . [ 2 ] It can be applied to development at all scales, from buildings—both in new constructions and renovations—to infrastructure, landscapes, neighborhoods, both urban and rural communities, and differs from other green certification schemes such as LEED or BREEAM . [ 2 ] [ 3 ]
The Living Building Challenge was launched in 2006 by the Cascadia Green Building Council (a chapter of both the U.S. Green Building Council and Canada Green Building Council ). [ 4 ] It was created by Jason F. McLennan and Bob Berkebile of BNIM , an architecture and design firm. McLennan brought the program to Cascadia when he became its CEO in 2006. The International Living Building Institute was created by Cascadia in May 2009 to oversee the Living Building Challenge and its auxiliary programs. [ 5 ]
The International Living Future Institute is a non-governmental organization (NGO) committed to catalyzing a global transformation toward true sustainability. The Institute seeks partnerships with leaders in the public, private and not-for-profit sectors in pursuit of a future that is socially just, culturally rich and ecologically restorative.
The Institute is the umbrella organization for the Living Building Challenge and the Cascadia Green Building Council, along with The Natural Step US and Ecotone Publishing.
The end goal of the Living Building Challenge is to encourage the creation of a regenerative built environment. [ 4 ] The challenge is an attempt "to raise the bar for building standards from doing less harm to contributing positively to the environment." It "acts to rapidly diminish the gap between current limits and the end-game positive solutions we seek" by challenging architects, contractors, and building owners. [ 13 ]
The Living Building Challenge employs the use of a flower metaphor for the framework. According to founder Jason F. McLennan , flowers are an accurate representation of a truly regenerative building which receives all of its energy from the sun, nutrients from the soil, and water from the sky. Similar to a flower, they simultaneously shelter other organisms and support the surrounding ecosystem. They also serve as beauty and inspiration and adapt to their surroundings. [ 4 ] Meanwhile, the petals of the flower represent each performance area in the framework. These petals include Materials, Place, Water, Energy, Health and Happiness, Equity, and Beauty. [ 14 ]
Living Building Challenge has seven performance areas: site, water, energy, health and happiness, materials, equity and beauty.
Certification is based on actual, rather than modeled or anticipated, performance. Therefore, projects must be operational for at least 12 consecutive months prior to evaluation. Types of projects which can be certified include, but are not limited to, existing or new buildings, single-family residential, multi-family residential, institutional buildings (government, education, research, or religious), commercial (offices, hospitality, retail), and medical or laboratory buildings. [ 13 ] A project can pursue one of three certification pathways: Living Building Certification, Petal Certification, and Zero Energy Certification, all of which are awarded based on performance.
The Living Human Project ( LHP ) is a project that began in 2002 to develop a distributed repository of anatomo-functional data and simulation algorithms for the human musculoskeletal apparatus, used to create the physiome of the human musculoskeletal system. In 2006 the BEL was merged with Biomed Town, an Internet community for those who have a professional interest in biomedical research.
The LHDL project ended in January 2009, and soon after, the LHDL consortium released a biomedical data management and sharing service called Physiome Space . Physiome Space lets individual researchers as well as large consortia share large collections of biomedical data, including medical imaging and computer simulations, with their peers.
The Living Indus is an umbrella initiative by Ministry of Climate Change , Government of Pakistan and United Nations in Pakistan. [ 1 ]
The original Living Indus Initiative document was developed by a team led by Dr. Adil Najam as its Lead Author. The initiative serves as an overarching program and rallying call to action, seeking to spearhead and unify various efforts aimed at revitalizing the ecological well-being of the Indus River within Pakistan's borders. It emerges as a direct response to Pakistan's heightened susceptibility to the adverse effects of climate change. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
The Indus River flows down from the Himalayas, through Indian- and Pakistani-administered Kashmir, Gilgit-Baltistan and Khyber Pakhtunkhwa, running south-by-southwest through the length of Pakistan before emptying into the Arabian Sea near Karachi.
Ninety percent of Pakistan's people and more than three-quarters of its economy reside in the Indus basin. More than 80% of Pakistan's arable land is irrigated by its waters. [ 1 ]
The Indus Basin is facing devastating challenges due to environmental degradation, unsustainable population growth, rapid urbanization and industrialization, the unregulated utilization of resources, inefficient water use, and poverty. The Indus and its ecosystems are under pressure both from the seemingly inexorable changing climate, temperature fluctuations, disruption of rainfall patterns, and early-stage efforts to adapt to and mitigate these effects.
The Indus has supported a civilization for thousands of years, but with the current state of the management of the basin and the impact of climate change on the monsoon and the glacial melt, it might not be able to sustain Pakistan for another 100 years. [ 6 ] [ 7 ]
Living Indus is an umbrella initiative and a call to action to lead and consolidate initiatives to restore the ecological health of the Indus within the boundaries of Pakistan. The initiatives have been incorporated into a 'Living Indus' prospectus jointly developed by the Government of Pakistan and the United Nations . Initiated in 2021 and endorsed by all governments, the initiative is expected to continue receiving support.
The scale of the initiatives requires the adoption of collective and innovative approaches by all stakeholders, including the government, the private sector, and the UN, toward mobilizing resources. The response of Living Indus is one of building resilience and adaptation to the threats the Indus faces from the impacts of both human use and climate change over the next few decades.
A number of specific interventions under the Initiative are now operational, including the 'Recharge Pakistan' project led by the Ministry of Climate Change , Government of Pakistan and WWF-Pakistan .
Extensive consultations with the government, led by the Chief Ministers of all the provinces, the public sector, private sector, experts, and civil society led to a ‘living’ menu of 25 preliminary interventions. These interventions are in line with global best practices, focusing on green infrastructure and nature-based approaches driven by the community.
The Ministry of Climate Change and Environmental Coordination (MoCC&EC), Government of Pakistan has highlighted eight priority interventions out of the 25. Implementation plans are being prepared for these.
Designated as a World Restoration Flagship by the UN Environment Programme , the Living Indus Initiative embodies the principles of the UN Decade on Ecosystem Restoration. This accolade acknowledges its exemplary contributions to large-scale ecosystem restoration and its alignment with global restoration objectives. [ 8 ] [ 9 ]
Inger Andersen , executive director of UN Environment Programme stated:
Pakistan's climate induced disasters in recent years have been heart-breaking, causing destruction on a scale that no nation can, or should have to, accept. It is therefore important to recognize and support projects like the Living Indus initiative for the hope and resilience it can offer Pakistan and the region. [ 8 ] | https://en.wikipedia.org/wiki/Living_Indus_Initiative |
A Living Machine [ 1 ] is a form of ecological sewage treatment based on fixed-film ecology. [ 2 ] [ 3 ] [ 4 ]
The Living Machine system was commercialized and is marketed by Living Machine Systems, L3C , a corporation based in Charlottesville, Virginia , United States. [ 5 ]
Examples of Living Machines are mechanical composters for industrial kitchens, effective microorganisms as fertilizer for agricultural purposes, and Integrated Biotectural systems in landscaping and architecture like Earthships or the IBTS Greenhouse .
Components like tomato plants (for additional water purification ) and fish (for food) have been part of the living, ecosystem -like designs. The theory limits neither the size of the system nor the number of species. One design optimum is a natural ecosystem designed for a specific purpose, such as a sewage-treating wetland suited to the local ecosystem. Another optimum is an economically viable system returning a profit for the investor. The practice of permaculture is one example of a compromise between these two design optima.
The scale of Living Machine systems ranges from the individual building to community-scale public works . Some of the earliest Living Machines were used to treat domestic wastewater in small, ecologically-conscious villages , such as Findhorn Community in Scotland . [ 6 ] The latest-generation Tidal Flow Wetland Living Machines are being used in major urban office buildings, military bases, housing developments, resorts and institutional campuses. [ 7 ] | https://en.wikipedia.org/wiki/Living_Machine |
The Living Planet Index ( LPI ) is an indicator of the state of global biological diversity , based on trends in vertebrate populations of species from around the world. The Zoological Society of London (ZSL) manages the index in cooperation with the World Wide Fund for Nature (WWF).
As of 2022, the index is statistically created from journal studies, online databases and government reports for 31,821 populations of 5,230 species of mammal, bird, reptile, amphibian and fish. [ 4 ]
According to the 2022 report, monitored wildlife populations show an average decline of 69% between 1970 and 2018, [ 5 ] suggesting that natural ecosystems are degrading at a rate unprecedented in human history. [ 6 ] The extent of declines varies with geographic region, with monitored vertebrate populations in Latin America and the Caribbean experiencing average declines of 94%. [ 4 ] One of the key drivers of declines is land-use change and the associated habitat loss and degradation, often linked to unsustainable agriculture, logging, or other development. [ 4 ]
The Living Planet Database ( LPD ) has been available online since 2013, and has been maintained by ZSL since 2016. The LPD contains more than 30,000 population trends for more than 5,200 species of fish, amphibians, reptiles, birds and mammals. [ 4 ]
The global LPI is calculated using these population time-series, which are gathered from a variety of sources such as journals, online databases and government reports. [ 4 ]
A generalized additive modelling framework is used to determine the underlying trend in each population time-series. Average rates of change are calculated and aggregated to the species level. [ 7 ] [ 8 ]
Each species trend is aggregated to produce an index for the terrestrial, marine and freshwater systems. This process uses a weighted geometric mean [ 9 ] method which places most weight on the largest (most species-rich) groups within a biogeographic realm. This is done to counteract the uneven spatial and taxonomic distribution of data in the LPD. The three system indices are then averaged to produce the global LPI. [ 10 ]
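The chained aggregation described above can be illustrated with a simplified sketch. The real index fits a generalized additive model to each time-series and applies species-richness weights within realms; the sketch below skips both, the population counts are invented for illustration, and the helper name (population_dt) is hypothetical.

```python
import numpy as np

def population_dt(counts):
    """Year-on-year log10 rates of change for one population time series."""
    counts = np.asarray(counts, dtype=float)
    return np.log10(counts[1:] / counts[:-1])

# Invented counts: two populations of species A, one of species B.
pop_a1 = [100, 95, 90, 88]
pop_a2 = [200, 190, 185, 180]
pop_b1 = [50, 55, 60, 66]

# Species-level trend: average the annual log10 rates across populations.
species_a = np.mean([population_dt(pop_a1), population_dt(pop_a2)], axis=0)
species_b = population_dt(pop_b1)

# Index-level trend: geometric mean across species, i.e. the arithmetic
# mean of log10 rates (the real LPI applies diversity weights here).
mean_dt = np.mean([species_a, species_b], axis=0)

# Chain the index from a baseline value of 1 in the first year.
index = np.concatenate(([1.0], 10 ** np.cumsum(mean_dt)))
print(np.round(index, 3))
```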
The fact that "all decreases in population size, regardless of whether they bring a population close to extinction, are equally accounted for" has been noted as a limitation. [ 11 ]
In 2005, WWF authors identified that the population data was potentially unrepresentative. [ 12 ] As of 2009, the database was found to contain too much bird data and gaps in the population coverage of tropical species, although it showed "little evidence of bias toward threatened species". [ 7 ] The 2016 report was criticized by a professor at Duke University for over-representing western Europe, where more data were available. [ 13 ] Talking to National Geographic , he criticised the attempt to combine data from different regions and ecosystems into a single figure, arguing that such reports are likely motivated by a desire to grab attention and raise money. [ 14 ]
A 2017 investigation of the index by members of the ZSL team published in PLOS One found higher declines than had been estimated, and indications that in areas where less data is available, species might be declining more quickly. [ 10 ]
In 2020, a re-analysis of the baseline data by McGill University showed that the overall estimated trend of a decline by 60% since 1970 was driven by less than 3% of the studied populations; when some outliers of extreme decline are removed, the decline still exists but is considerably less catastrophic, and when more outliers (roughly amounting to 2.4% of the populations) are removed, the trend shifts to that of a decline between the 1980s and 2000s, but a roughly positive trend after 2000. This extreme sensitivity to outliers indicates that the present approach of the Living Planet Index may be flawed. [ 15 ]
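The sensitivity highlighted by this re-analysis can be illustrated with a toy calculation; the rates below are invented and are not Living Planet Database values. Because the index averages log-scale rates, a handful of extreme declines can dominate the overall trend.

```python
import numpy as np

# Invented annual log10 rates for 100 monitored populations:
# 97 roughly stable populations and 3 severe crashes.
rng = np.random.default_rng(0)
stable = rng.normal(loc=0.0, scale=0.01, size=97)  # ~0% change per year
crashing = np.full(3, -0.30)                       # ~50% loss per year
rates = np.concatenate([stable, crashing])

def index_after(years, log_rates):
    """Chained geometric-mean index after `years` at the mean annual rate."""
    return 10 ** (years * np.mean(log_rates))

print(index_after(48, rates))   # with the 3 outliers: a steep overall decline
print(index_after(48, stable))  # without them: a nearly flat index
```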
A 2024 study by Charles University, focused on the calculation method, found that the Living Planet Index is biased by several mathematical issues, leading to an overestimation of vertebrate population declines. When those issues are fixed, the majority of studied vertebrate populations show balanced decline and growth (the only exception is amphibian populations, whose numbers show steady decline). [ 16 ]
The index was originally developed in 1997 by the World Wide Fund for Nature (WWF) in collaboration with the World Conservation Monitoring Centre (UNEP-WCMC), the biodiversity assessment and policy implementation arm of the United Nations Environment Programme . [ 12 ] WWF first published the index in 1998. [ 12 ] Since 2006, the Zoological Society of London (ZSL) manages the index in cooperation with WWF. [ 17 ]
Results are presented biennially in the WWF Living Planet Report and in publications such as the Millennium Ecosystem Assessment and the UN Global Biodiversity Outlook. National and regional reports are now being produced to focus on relevant issues at a smaller scale. The latest edition of the Living Planet Report was released in October 2022. [ 18 ]
The index is often misinterpreted in the media, [ 19 ] with incorrect suggestions that it shows we have lost 69% of all animals or species since 1970. [ 20 ] This widespread misinterpretation has led to several articles being published which detail what the LPI does and doesn't show, and how to correctly interpret the trend. [ 21 ] [ 22 ] [ 23 ]
In April 2002, and again in 2006, at the Convention on Biological Diversity (CBD), 188 nations committed themselves to actions to: "… achieve, by 2010, a significant reduction of the current rate of biodiversity loss at the global, regional and national levels…" [ 24 ]
The LPI played a pivotal role in measuring progress towards the CBD's 2010 target. [ 25 ] [ 26 ] It has also been adopted by the CBD as an indicator of progress towards its Nagoya Protocol 2011-2020 targets 5, 6, and 12 (part of the Aichi Biodiversity Targets ). [ 27 ]
Informing the CBD 2020 strategic plan, the Indicators and Assessments Unit at ZSL is concerned with ensuring the most rigorous and robust methods are implemented for the measurement of population trends , expanding the coverage of the LPI to more broadly represent biodiversity, and disaggregating the index in meaningful ways (such as assessing the changes in exploited or invasive species ). [ 28 ] | https://en.wikipedia.org/wiki/Living_Planet_Index |
A living building material (LBM) is a material used in construction or industrial design that behaves in a way resembling a living organism . Examples include: self-mending biocement, [ 1 ] self-replicating concrete replacement, [ 2 ] and mycelium -based composites for construction and packaging . [ 3 ] [ 4 ] Artistic projects include building components and household items. [ 5 ] [ 6 ] [ 7 ] [ 8 ]
The development of living building materials began with research into methods for mineralizing concrete that were inspired by coral mineralization . The use of microbiologically induced calcite precipitation (MICP) in concrete was pioneered by Adolphe et al. in 1990, as a method of applying a protective coating to building façades . [ 9 ]
In 2007, "Greensulate", a mycelium -based building insulation material was introduced by Ecovative Design , a spin off of research conducted at the Rensselaer Polytechnic Institute . [ 10 ] [ 11 ] Mycelium composites were later developed for packaging , sound absorption , and structural building materials such as bricks . [ 12 ] [ 13 ] [ 14 ]
In the United Kingdom , the Materials for Life (M4L) project was founded at Cardiff University in 2013 to "create a built environment and infrastructure which is a sustainable and resilient system comprising materials and structures that continually monitor, regulate, adapt and repair themselves without the need for external intervention." [ 15 ] M4L led to the UK's first self-healing concrete trials. [ 16 ] In 2017 the project expanded into a consortium led by the universities of Cardiff, Cambridge , Bath and Bradford , changing its name to Resilient Materials 4 Life (RM4L) and receiving funding from the Engineering and Physical Sciences Research Council . [ 16 ] This consortium focuses on four aspects of material engineering: self-healing of cracks at multiple scales; self-healing of time-dependent and cycling loading damage; self-diagnosis and healing of chemical damage; and self-diagnosis and immunization against physical damage. [ 17 ]
In 2016 the United States Department of Defense 's Defense Advanced Research Projects Agency (DARPA) launched the Engineered Living Materials (ELM) program. [ 18 ] The goal of this program is to "develop design tools and methods that enable the engineering of structural features into cellular systems that function as living materials, thereby opening up a new design space for building technology... [and] to validate these new methods through the production of living materials that can reproduce, self-organize, and self-heal." [ 19 ] In 2017 the ELM program contracted Ecovative Design to produce "a living hybrid composite building material... [to] genetically re-program that living material with responsive functionality [such as] wound repair... [and to] rapidly reuse and redeploy [the] material into new shapes, forms, and applications." [ 20 ] In 2020 a research group at the University of Colorado , funded by an ELM grant, published a paper after successfully creating exponentially regenerating concrete. [ 2 ] [ 21 ] [ 22 ]
Self-replicating concrete is produced using a mixture of sand and hydrogel , which are used as a growth medium for synechococcus bacteria to grow on. [ 2 ]
The sand-hydrogel mixture from which self-replicating concrete is made has a lower pH , lower ionic strength , and lower curing temperatures than a typical concrete mix , allowing it to serve as a growth medium for the bacteria. As the bacteria reproduce they spread through the medium, and biomineralize it with calcium carbonate , which is the main contributor to the overall strength and durability of the material. After mineralization the sand-hydrogel compound is strong enough to be used in construction, as concrete or mortar . [ 2 ]
The bacteria in self-replicating concrete react to humidity changes: they are most active - and reproduce the fastest - in an environment with 100% humidity, though a drop to 50% does not have a large impact on the cellular activity. Lower humidity does result in a stronger material than high humidity. [ 2 ]
As the bacteria reproduce, their biomineralization activity increases; this allows production capacity to scale exponentially. [ 2 ]
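A toy growth model (a sketch under assumed parameters, not taken from the cited study) shows why coupling mineralization to a reproducing population makes cumulative output grow exponentially rather than linearly; the doubling time, per-cell deposition rate, and initial cell count below are invented for illustration.

```python
import numpy as np

# Assumed parameters, chosen only to illustrate the scaling behaviour.
t_d = 8.0    # doubling time of the bacteria, hours
k = 1e-9     # CaCO3 deposited per cell per hour, grams
n0 = 1e6     # initial viable cell count

t = np.linspace(0, 72, 10)            # hours of growth
cells = n0 * 2 ** (t / t_d)           # exponentially growing population
# Cumulative mineral is the integral of k * cells(t), itself exponential:
mineral = k * n0 * t_d / np.log(2) * (2 ** (t / t_d) - 1)
print(f"cells: {cells[-1]:.2e}, mineral: {mineral[-1]:.2e} g")
```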
The structural properties of this material are similar to those of Portland cement -based mortars: it has an elastic modulus of 293.9 MPa and a tensile strength of 3.6 MPa (the minimum required value for Portland cement-based concrete is approximately 3.5 MPa); [ 2 ] however, its reported fracture energy of 170 N is much lower than that of most standard concrete formulations, which can reach several kN.
Self-replicating concrete can be used in a variety of applications and environments, but the effect of humidity on the properties of the end material (see above ) means that its application must be tailored to the environment. In humid environments the material can be used to fill cracks in roads , walls, and sidewalks, seeping into cavities and growing into a solid mass as it sets; [ 23 ] in drier environments it can be used structurally, owing to its increased strength at low humidity.
Unlike traditional concrete, the production of which releases massive amounts of carbon dioxide to the atmosphere, the bacteria used in self-replicating concrete absorb carbon dioxide, resulting in a lower carbon footprint . [ 24 ]
This self-replicating concrete is not meant to replace standard concrete, but to create a new class of materials, with a mixture of strength, ecological benefits, and biological functionality. [ 25 ]
Biocement is a sand aggregate material produced through the process of microbiologically induced calcite precipitation (MICP). [ 27 ] [ 26 ] It is an environmentally friendly material which can be produced from a variety of feedstocks , from agricultural waste to mine tailings . [ 28 ]
Microscopic organisms are the key component in the formation of bioconcrete, as they provide the nucleation site for CaCO 3 to precipitate on the surface. [ 26 ] Microorganisms such as Sporosarcina pasteurii are useful in this process, as they create highly alkaline environments where dissolved inorganic carbon (DIC) is present at high amounts. [ 29 ] [ failed verification ] These factors are essential for microbiologically induced calcite precipitation (MICP), which is the main mechanism in which bioconcrete is formed. [ 27 ] [ 26 ] [ 29 ] Other organisms that can be used to induce this process include photosynthesizing microorganisms such as microalgae , cyanobacteria , and sulphate reducing bacteria (SRB) such as Desulfovibrio desulfuricans . [ 27 ] [ 30 ]
Calcium carbonate nucleation depends on four major factors: the calcium ion concentration, the concentration of dissolved inorganic carbon, the pH, and the availability of nucleation sites.
As long as calcium ion concentrations are high enough, microorganisms can create such an environment through processes such as ureolysis. [ 27 ] [ 31 ]
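For the ureolysis pathway mentioned above, the overall chemistry is conventionally summarized by the following reactions, with precipitation proceeding once the solution becomes supersaturated with respect to calcium carbonate:

```latex
% Urease-catalysed hydrolysis of urea raises pH and supplies carbonate:
\mathrm{CO(NH_2)_2 + 2\,H_2O \;\xrightarrow{\text{urease}}\; 2\,NH_4^{+} + CO_3^{2-}}
% Carbonate then combines with available calcium ions:
\mathrm{Ca^{2+} + CO_3^{2-} \;\longrightarrow\; CaCO_3\,(s)}
% Precipitation is favoured when the saturation state exceeds unity:
\Omega \;=\; \frac{a(\mathrm{Ca^{2+}})\,a(\mathrm{CO_3^{2-}})}{K_{sp}} \;>\; 1
```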
Advancements in optimizing methods to use microorganisms to facilitate carbonate precipitation are rapidly developing. [ 27 ]
Biocement is able to "self-heal" thanks to bacteria, calcium lactate, nitrogen, and phosphorus components mixed into the material. [ 28 ] These components can remain active in biocement for up to 200 years. Like any other concrete, biocement can crack under external forces and stresses; unlike normal concrete, however, the microorganisms in biocement can germinate when exposed to water. [ 32 ] Rain can supply this water in the environments where biocement is used. Once exposed to water, the bacteria activate and feed on the calcium lactate in the mixture. [ 32 ] This feeding process consumes oxygen and converts the originally water-soluble calcium lactate into insoluble limestone, which then solidifies on the surface it is lying on, in this case the cracked area, thereby sealing the crack. [ 32 ]
Oxygen is one of the main drivers of corrosion in materials such as metals. When biocement is used in steel-reinforced concrete structures, the microorganisms consume oxygen, thereby increasing corrosion resistance. This property also provides water resistance, since exposure to water induces healing and reduces overall corrosion. [ 32 ] Water-resistant concrete aggregates are used to prevent corrosion, and these can also be recycled. [ 32 ] There are different methods of forming them, such as crushing or grinding the biocement. [ 27 ]
The permeability of biocement is also higher than that of normal cement, [ 26 ] due to its higher porosity, which can lead to larger crack propagation under sufficiently strong forces. Roughly 20% of biocement is now composed of a self-healing agent, which decreases its mechanical strength. [ 26 ] [ 28 ] The mechanical strength of bioconcrete is about 25% lower than that of normal concrete, giving it a lower compressive strength. [ 28 ] Organisms such as Pseudomonas aeruginosa are effective in creating biocement, but they are unsafe for humans to be near, so their use must be avoided. [ 33 ]
Heterogeneous nucleation on microbial cell surfaces is common in MICP. Bacterial cell walls and extracellular polymers present negatively charged sites that selectively bind Ca²⁺ ions, effectively lowering the nucleation energy barrier. [ 34 ] In essence, each bound cation–carbonate encounter forms a tiny crystalline embryo. Thus, microbes provide numerous nucleation templates, yielding calcite platelets or needles rather than uniform glassy films. For example, SEM studies show that calcite often precipitates as clustered platelets or needle-like aggregates on bacterial films. [ 34 ] At high local supersaturation, unstable precursors such as amorphous calcium carbonate and vaterite can initially form and later transform into calcite. In microbial consortia or in seawater, mixed metabolic pathways, such as the hydrolysis of urea or photosynthesis , further modulate local pH and ion activities, affecting nucleation thresholds.
Microscopy of microbially induced calcite often shows characteristic morphologies. Bacterial surfaces and exopolymeric sheaths concentrate Ca²⁺ and CO₃²⁻ ions and act as charged nucleation sites. [ 34 ] The result is often aggregated "rafts" or needle-like clusters of calcite rather than smooth single crystals. Such textures are consistent with heterogeneous nucleation: crystals grow epitaxially on cell templates that locally elevate supersaturation. When supersaturation is relieved by rapid precipitation, calcium ions diffuse in from the surrounding fluid, sustaining continued nucleation and growth around the microbe.
Extracellular polymeric substances (EPS) secreted by bacteria also play a crucial role in CaCO₃ nucleation. EPS are complex biopolymers composed of polysaccharides, proteins, and nucleic acids that form a hydrated matrix around microbial colonies. These matrices can bind divalent cations such as Ca²⁺ and localize carbonate ions, thereby increasing ion activity at the cell-fluid interface. EPS mediates heterogeneous nucleation by concentrating reactants and lowering the interfacial energy barrier for crystal formation. Additionally, specific functional groups in the EPS such as carboxyl and hydroxyl moieties can template crystal orientation or polymorph selection. This microenvironmental control over supersaturation and binding energy is a fundamental example of biologically controlled mineralization.
Once nucleated, calcite crystals grow by incorporating ions at their surfaces. Two limiting regimes are often distinguished in materials science: spiral growth fed by dislocation sources and two-dimensional (2D) layer nucleation on crystal terraces. Kinetic studies show that crystal size and ion transport determine which mechanism dominates. For instance, it has been found that calcite crystals larger than roughly 1 µm preferentially grow by spiral steps, while smaller crystallites rely more on surface 2D nucleation. [ 35 ] This size dependence arises because ion transport through the fluid boundary layer is finite: slow diffusion at larger sizes lowers the effective supersaturation at the crystal face, favoring steady spiral growth. [ 35 ]
Diffusion of ions to the growth front is described by Fick's laws. The simplest statement is Fick's first law, J = − D d C d x {\displaystyle J=-D{\frac {dC}{dx}}} , which states that the diffusive flux J (in mol·m⁻²·s⁻¹) is proportional to the concentration gradient of the ion, with the diffusivity D as the proportionality factor. In MICP, bacterial ureolysis or photosynthesis creates zones of high [Ca²⁺] and [HCO₃⁻]. As calcite precipitates, local depletion zones form and ions diffuse in to replenish them. When precipitation is very fast, due to high enzyme activity or high supersaturation, diffusion can become rate-limiting: growth slows as D and the concentration gradient define the flux. [ 35 ] Conversely, at lower supersaturation or with plentiful mixing, surface reaction steps involving attachment kinetics may control growth rates.
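As a rough illustration of the diffusion-limited regime, the sketch below applies Fick's first law across an assumed boundary layer; the diffusivity, concentrations, and layer thickness are typical order-of-magnitude values chosen for illustration, not measurements from the cited studies.

```python
# Estimate the diffusive flux of Ca2+ toward a calcite surface using
# Fick's first law, J = -D * dC/dx, linearized across a boundary layer.
D = 7.9e-10        # m^2/s, typical aqueous Ca2+ diffusivity at 25 C (assumed)
c_bulk = 10.0      # mol/m^3 bulk Ca2+ concentration (10 mM, assumed)
c_surface = 2.0    # mol/m^3 at the crystal face during fast growth (assumed)
delta = 50e-6      # m, assumed boundary-layer thickness

J = D * (c_bulk - c_surface) / delta   # mol m^-2 s^-1 toward the surface
print(f"Ca2+ flux ~ {J:.2e} mol m^-2 s^-1")

# If the attachment kinetics at the crystal face demand ions faster than
# this supply, growth is diffusion-limited; otherwise surface reaction
# (attachment) kinetics control the growth rate.
```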
At the microscale, crystal growth morphologies reflect these kinetics. In SEM images, calcite crystals can show elongated needle-like features, indicative of stepwise layer growth. Experimental kinetics reveal that, beyond a critical size, a continued supply of ions by diffusion supports a steady spiral growth front; in contrast, sub-micron calcite particles may grow by frequent nucleation of new layers across the face. Overall calcite growth can thus be modeled as a combination of spiral growth and new-layer nucleation.
Calcium carbonate has multiple crystal polymorphs and amorphous precursors. In MICP, the most stable end product is usually calcite with a rhombohedral habit, but metastable forms appear transiently. Initially, a hydrated amorphous CaCO₃ (ACC) phase or transient vaterite may form; ACC is highly soluble and rapidly recrystallizes. The classic Ostwald rule of stages applies: the system often precipitates the least stable form first, which then transforms into more stable polymorphs. [ 34 ] For example, under many conditions ACC precipitates first, then crystallizes into vaterite , and finally reorders into calcite. [ 34 ] In microbial settings, organic molecules (such as proteins or polysaccharides) and solution chemistry (involving Mg²⁺, phosphate, etc.) can stabilize vaterite or even aragonite . In neutral-pH soils, however, the dominant phase transformation is usually vaterite-to-calcite.
The availability of nucleation templates also influences polymorphism. Bacteria and algae can selectively induce aragonite or calcite by producing specific organic matrices. In laboratory MICP studies, adding magnesium or certain biopolymers tends to favor aragonite or inhibit calcite nucleation. Conversely, in many ureolytic MICP experiments calcite is observed as sharp-edged rhombs since Ca²⁺ binds more strongly to cell surfaces than Mg²⁺. [ 34 ] In all cases, the multistep crystallization path (ACC → vaterite → calcite) is governed by the interplay of kinetics and thermodynamics, where higher supersaturation and rapid urea hydrolysis often push the system through these transformations quickly. [ 34 ]
Biocement is currently used in applications such as sidewalks and pavements. [ 36 ] Biological building construction has also been proposed. Uses of biocement are still not widespread, because there is currently no feasible method of mass-producing it at the required scale, [ 37 ] and much more definitive testing is needed before biocement can be used confidently in large-scale applications where mechanical strength cannot be compromised. The cost of biocement is also twice that of normal concrete. [ 38 ] Smaller applications include spray bars, hoses, drop lines, and bee nesting. Biocement is still in its developmental stages, but its potential appears promising for future uses.
Mycelium composites are materials based on mycelium – the mass of branching, thread-like hyphae produced by fungi . There are several ways to synthesize and fabricate mycelium composites, yielding different properties and use cases for the finished product. Mycelium composites are economical and sustainable .
Mycelium-based composites are usually synthesised using different kinds of fungi , especially mushrooms . [ 40 ] An individual fungal strain is introduced to different types of organic substances to form a composite. [ 41 ] The selection of fungal species is important for creating a product with specific properties. Some of the fungal species used to make composites are G. lucidum, Ganoderma sp., P. ostretus, Pleurotus sp., T. versicolor, Trametes sp ., etc. [ 42 ] A dense network forms as the fungal mycelium degrades and colonises the organic substance. Plant waste is a common organic substrate used in mycelium-based composites: fungal mycelium is incubated with a plant waste product to produce sustainable alternatives, mostly to petroleum -based materials. [ 42 ] [ 3 ] The mycelium and organic substrate need time to incubate properly; this period is crucial, as it is when the particles interact, bind into a dense network, and form a composite. During incubation, the mycelium draws essential nutrients such as carbon, minerals, and water from the waste plant product. [ 41 ] Organic substrate components include cotton, wheat grains, rice husks, sorghum fibres, agricultural waste, sawdust, bread particles, banana peel, coffee residue, etc. [ 42 ]

The composites are synthesised and fabricated using techniques such as adding carbohydrates, altering fermentation conditions, using different fabrication technology, altering post-processing stages, and modifying genetics or biochemistry to obtain products with particular properties. [ 40 ] Most mycelium composites are fabricated in plastic molds, so the mycelium can be grown directly into the desired shape. [ 41 ] [ 42 ] Other fabrication methods include laminate skin molds, vacuum skin molds, glass molds, plywood molds, wooden molds, petri dish molds, tile molds, etc. [ 42 ] During fabrication it is essential to have a sterilised, controlled environment, with suitable light, a temperature of 25–35 °C, and humidity around 60–65%, for the best results. [ 41 ] One way to synthesise a mycelium-based composite is to mix fibers, water, and mycelium in various composition ratios, place the mixture into PVC molds in compressed layers, and let it incubate for a couple of days. [ 43 ] Mycelium-based composites can be processed into foam, laminate, and mycelium sheets using techniques such as laser cutting and cold or heat compression. [ 41 ] [ 42 ] Newly fabricated mycelium composites tend to absorb water; this property can be changed by drying the product. [ 42 ]
One of the advantages of mycelium-based composites is that their properties can be altered depending on the fabrication process and the fungus used. Properties depend on the type of fungus and where it is grown. [ 42 ] Additionally, fungi have the ability to degrade the cellulose component of the plant to form the composite in a preferable manner. [ 3 ] Important mechanical properties such as compressive strength, morphology, tensile strength, hydrophobicity, and flexural strength can be tuned for different uses of the composite. [ 42 ] To increase tensile strength, the composite can be heat-pressed. [ 40 ] The properties of a mycelium composite are affected by its substrate; for example, a mycelium composite made from 75 wt% rice hulls has a density of 193 kg/m 3 , while one made from 75 wt% wheat grains has 359 kg/m 3 . [ 3 ] Another way to increase the density of the composite is to delete a hydrophobin gene. [ 42 ] These composites can also self-fuse, which increases their strength. [ 42 ] Mycelium-based composites are usually compact, porous, lightweight, and good insulators. Their defining property is that they are entirely natural, and therefore sustainable. They also act as insulators and are fireproof, nontoxic, water-resistant, rapidly grown, and able to bond with neighboring mycelium products. [ 44 ] Mycelium-based foams (MBFs) and sandwich components are two common types of composite. [ 3 ] MBFs are the most efficient type because of their low density, high quality, and sustainability. [ 39 ] The density of MBFs can be decreased by using substrates smaller than 2 mm in diameter. [ 39 ] These composites also have higher thermal conductivity. [ 39 ]
One of the most common uses of mycelium-based composites is as an alternative to petroleum- and polystyrene-based materials. [ 42 ] These synthetic foams are usually used for sustainable design and architecture products. The uses of mycelium-based composites follow from their properties. There are several bio-sustainable companies
Beyond the use of living building materials, the application of microbially induced calcium carbonate precipitation (MICP) has the potential to help remove pollutants from wastewater, soil, and the air. Currently, heavy metals and radionuclei are challenging to remove from water sources and soil. Radionuclei in groundwater do not respond to traditional methods of pumping and treating the water, and for heavy metals contaminating soil, removal methods such as phytoremediation and chemical leaching do work; however, these treatments are expensive, lack long-lasting effectiveness, and can destroy the productivity of the soil for future uses. [ 45 ] By using ureolytic bacteria capable of CaCO 3 precipitation, the pollutants can be incorporated into the calcite structure, thereby removing them from the soil or water. This works through the substitution of calcium ions by pollutants, which then form solid particles that can be removed. [ 45 ] It is reported that 95% of these solid particles can be removed using ureolytic bacteria. [ 45 ] However, where calcium scaling occurs in pipelines, MICP cannot be used, as it is calcium-based. Instead of calcium, a low concentration of urea can be added to remove up to 90% of the calcium ions. [ 45 ]
A further application involves a self-constructing foundation that forms in response to pressure through the use of engineered bacteria. The engineered bacteria could be used to detect increased pressure in soil and then cement the soil particles in place, effectively solidifying the soil. [ 1 ] Within soil, pore pressure depends on two factors: the amount of applied stress, and how quickly water in the soil is able to drain. By analyzing the biological behavior of the bacteria in response to a load and the mechanical behavior of the soil, a computational model can be created. [ 1 ] With this model, certain genes within the bacteria can be identified and modified to respond in a certain way to a certain pressure. However, the bacteria analyzed in this study were grown in a highly controlled lab, so real soil environments may not be as ideal. [ 1 ] This is a limitation of the model and the study it originated from, but it remains a possible application of living building materials.
A living fossil is a deprecated term for an extant taxon that phenotypically resembles related species known only from the fossil record. To be considered a living fossil, the fossil species must be old relative to the time of origin of the extant clade . Living fossils commonly are of species-poor lineages, but they need not be. While the body plan of a living fossil remains superficially similar, it is never the same species as the remote relatives it resembles, because genetic drift would inevitably change its chromosomal structure.
Living fossils exhibit stasis (also called "bradytely") over geologically long time scales. Popular literature may wrongly claim that a "living fossil" has undergone no significant evolution since fossil times, with practically no molecular evolution or morphological changes. Scientific investigations have repeatedly discredited such claims. [ 1 ] [ 2 ] [ 3 ]
The minimal superficial changes to living fossils are mistakenly declared as an absence of evolution, but they are examples of stabilizing selection , which is an evolutionary process —and perhaps the dominant process of morphological evolution . [ 4 ]
The term is currently deprecated among paleontologists and evolutionary biologists.
Living fossils have two main characteristics, although some have a third:
The first two are required for recognition as a living fossil; some authors also require the third, others merely note it as a frequent trait.
Such criteria are neither well-defined nor clearly quantifiable, but modern methods for analyzing evolutionary dynamics can document the distinctive tempo of stasis. [ 6 ] [ 7 ] [ 8 ] Lineages that exhibit stasis over very short time scales are not considered living fossils; what is poorly-defined is the time scale over which the morphology must persist for that lineage to be recognized as a living fossil.
The term living fossil is much misunderstood in popular media in particular, in which it often is used meaninglessly. In professional literature the expression seldom appears and must be used with far more caution, although it has been used inconsistently. [ 9 ] [ 10 ]
One example of a concept that could be confused with "living fossil" is that of a " Lazarus taxon ", but the two are not equivalent; a Lazarus taxon (whether a single species or a group of related species ) is one that suddenly reappears, either in the fossil record or in nature, as if the fossil had "come to life again". [ 11 ] In contrast to "Lazarus taxa", a living fossil in most senses is a species or lineage that has undergone exceptionally little change throughout a long fossil record, giving the impression that the extant taxon had remained identical through the entire fossil and modern period. Because of the mathematical inevitability of genetic drift , though, the DNA of the modern species is necessarily different from that of its distant, similar-looking ancestor. They almost certainly would not be able to cross-reproduce, and are not the same species. [ 12 ]
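The "mathematical inevitability" of drift can be illustrated with a minimal Wright-Fisher simulation (the population size, starting frequency, and generation cap below are arbitrary choices for illustration): with no selection at all, a neutral allele still wanders until it is fixed or lost purely by sampling.

```python
import numpy as np

# Arbitrary parameters for illustration only.
rng = np.random.default_rng(1)
N = 500    # diploid population size
p = 0.5    # starting frequency of a selectively neutral allele

for generation in range(100_000):
    # Each generation resamples 2N gene copies binomially: pure chance.
    p = rng.binomial(2 * N, p) / (2 * N)
    if p in (0.0, 1.0):
        break

print(f"allele {'fixed' if p == 1.0 else 'lost'} by drift alone "
      f"after {generation + 1} generations")
```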
The average species turnover time, meaning the time between when a species first is established and when it finally disappears, varies widely among phyla , but averages about 2–3 million years. [ 13 ] A living taxon that had long been thought to be extinct could be called a Lazarus taxon once it was discovered to be still extant. A dramatic example was the order Coelacanthiformes , of which the genus Latimeria was found to be extant in 1938. About that there is little debate – however, whether Latimeria resembles early members of its lineage sufficiently closely to be considered a living fossil as well as a Lazarus taxon has been denied by some authors in recent years. [ 1 ]
Coelacanths disappeared from the fossil record some 80 million years ago (in the upper Cretaceous period) and, to the extent that they exhibit low rates of morphological evolution, extant species qualify as living fossils. It must be emphasised that this criterion reflects fossil evidence only; it is entirely independent of whether the taxa have been subject to selection, which all living populations continuously are, whether or not they remain genetically unchanged. [ 14 ]
This apparent stasis, in turn, gives rise to a great deal of confusion. For one thing, the fossil record seldom preserves much more than the general morphology of a specimen, so determining much about its physiology is seldom possible; not even the most dramatic examples of living fossils can be expected to be without changes, however constant their fossils and the extant specimens might seem. Determining much about noncoding DNA is hardly ever possible, but even if a species were hypothetically unchanged in its physiology, it is to be expected, from the very nature of reproductive processes, that its non-functional genomic changes would continue at more-or-less standard rates. Hence, a fossil lineage with apparently constant morphology need not imply equally constant physiology, and it certainly implies neither any cessation of basic evolutionary processes such as natural selection nor any reduction in the usual rate of change of noncoding DNA. [ 14 ]
Some living fossils are taxa that were known from palaeontological fossils before living representatives were discovered. The most famous examples of this are:
All the above include taxa that originally were described as fossils but now are known to include still-extant species.
Other examples of living fossils are single living species that have no close living relatives, but are survivors of large and widespread groups in the fossil record. For example:
All of these were described from fossils before later being found alive. [ 15 ] [ 16 ] [ 17 ]
The fact that a living fossil is a surviving representative of an archaic lineage does not imply that it must retain all the "primitive" features ( plesiomorphies ) of its ancestral lineage. Although it is common to say that living fossils exhibit "morphological stasis", stasis, in the scientific literature, does not mean that any species is strictly identical to its ancestor, much less remote ancestors.
Some living fossils are relicts of formerly diverse and morphologically varied lineages, but not all survivors of ancient lineages necessarily are regarded as living fossils. See for example the uniquely and highly autapomorphic oxpeckers , which appear to be the only survivors of an ancient lineage related to starlings and mockingbirds . [ 18 ]
The term living fossil is usually reserved for species or larger clades that are exceptional for their lack of morphological diversity and their exceptional conservatism, and several hypotheses could explain morphological stasis on a geologically long time-scale. Early analyses of evolutionary rates emphasized the persistence of a taxon rather than rates of evolutionary change. [ 19 ] Contemporary studies instead analyze rates and modes of phenotypic evolution, but most have focused on clades thought to be adaptive radiations rather than on those thought to be living fossils. Thus, very little is presently known about the evolutionary mechanisms that produce living fossils or how common they might be. Some recent studies have documented exceptionally low rates of ecological and phenotypic evolution despite rapid speciation. [ 20 ] This has been termed a "non-adaptive radiation", referring to diversification not accompanied by adaptation into significantly different niches. [ 21 ] Such radiations are one explanation for groups that are morphologically conservative. Persistent adaptation within an adaptive zone is a common explanation for morphological stasis. [ 22 ] The subject of very low evolutionary rates, however, has received much less attention in the recent literature than that of high rates.
Living fossils are not expected to exhibit exceptionally low rates of molecular evolution, and some studies have shown that they do not. [ 23 ] [ 24 ] For example, regarding tadpole shrimp ( Triops ), one article notes, "Our work shows that organisms with conservative body plans are constantly radiating, and presumably, adapting to novel conditions... I would favor retiring the term 'living fossil' altogether, as it is generally misleading." [ 24 ] Some scientists instead prefer the new term stabilomorph, defined as "an effect of a specific formula of adaptative strategy among organisms whose taxonomic status does not exceed genus-level. A high effectiveness of adaptation significantly reduces the need for differentiated phenotypic variants in response to environmental changes and provides for long-term evolutionary success." [ 25 ]
Several recent studies have pointed out that the morphological conservatism of coelacanths is not supported by paleontological data. [ 26 ] [ 27 ] In addition, it was recently shown that studies concluding that a slow rate of molecular evolution is linked to morphological conservatism in coelacanths were biased by the a priori hypothesis that these species are 'living fossils'. [ 1 ] Accordingly, the genome stasis hypothesis is challenged by the recent finding that the genomes of the two extant coelacanth species, L. chalumnae and L. menadoensis , contain multiple species-specific insertions, indicating recent transposable element activity and contribution to post-speciation genome divergence. [ 28 ] Such studies, however, challenge only the genome stasis hypothesis, not the hypothesis of exceptionally low rates of phenotypic evolution.
The term was coined by Charles Darwin in his On the Origin of Species from 1859, when discussing Ornithorhynchus (the platypus) and Lepidosiren (the South American lungfish):
All fresh-water basins, taken together, make a small area compared with that of the sea or of the land; and, consequently, the competition between fresh-water productions will have been less severe than elsewhere; new forms will have been more slowly formed, and old forms more slowly exterminated. And it is in fresh water that we find seven genera of Ganoid fishes, remnants of a once preponderant order: and in fresh water we find some of the most anomalous forms now known in the world, as the Ornithorhynchus and Lepidosiren , which, like fossils, connect to a certain extent orders now widely separated in the natural scale. These anomalous forms may almost be called living fossils; they have endured to the present day, from having inhabited a confined area, and from having thus been exposed to less severe competition.
A living taxon that lived through a large portion of geologic time .
The Australian lungfish ( Neoceratodus forsteri ), also known as the Queensland lungfish, is an example of an organism that meets this criterion. Fossils identical to modern specimens have been dated at over 100 million years old. Modern Queensland lungfish have existed as a species for almost 30 million years. [ 30 ] The contemporary nurse shark has existed for more than 112 million years, making it one of the oldest, if not the oldest, extant vertebrate species.
A living taxon morphologically and/or physiologically resembling a fossil taxon through a large portion of geologic time (morphological stasis). [ 31 ]
A living taxon with many characteristics believed to be primitive. This is a more neutral definition. However, it does not make clear whether the taxon is truly old, or whether it simply has many plesiomorphies. Note that, as mentioned above, the converse may hold for true living fossil taxa; that is, they may possess a great many derived features ( autapomorphies ), and not be particularly "primitive" in appearance.
Any one of the above three definitions, but also with a relict distribution in refuges .
Some paleontologists believe that living fossils with large distributions (such as Triops cancriformis ) are not real living fossils. In the case of Triops cancriformis (living from the Triassic until now), the Triassic specimens lost most of their appendages (mostly only carapaces remain), and they have not been thoroughly examined since 1938.
Any of the first three definitions, but the clade also has a low taxonomic diversity (low diversity lineages).
Oxpeckers are morphologically somewhat similar to starlings due to shared plesiomorphies, but are uniquely adapted to feed on parasites and blood of large land mammals, which has always obscured their relationships. This lineage forms part of a radiation that includes Sturnidae and Mimidae , but appears to be the most ancient of these groups. Biogeography strongly suggests that oxpeckers originated in eastern Asia and only later arrived in Africa, where they now have a relict distribution. [ 18 ]
The two living species thus seem to represent an otherwise entirely extinct and (as Passerida go) rather ancient lineage, as certainly as this can be said in the absence of actual fossils. The absence of fossils is probably because the oxpecker lineage never occurred in areas where conditions were good for the fossilization of small bird bones, but fossils of ancestral oxpeckers may one day turn up, enabling this theory to be tested.
An operational definition was proposed in 2017, under which a 'living fossil' lineage has a slow rate of evolution and occurs close to the middle of morphological variation (the centroid of morphospace) among related taxa (i.e. the species is morphologically conservative among its relatives). [ 32 ] The scientific accuracy of the morphometric analyses used to classify the tuatara as a living fossil under this definition has been criticised, however, [ 33 ] which prompted a rebuttal from the original authors. [ 34 ]
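A minimal sketch of this operational definition might score taxa by combining two ranks: distance to the morphospace centroid and evolutionary rate. The trait matrix and rates below are invented for illustration, and the ranking scheme is a simplification of the published morphometric analysis.

```python
import numpy as np

# Hypothetical trait matrix: rows = taxa, columns = morphometric measurements
# (e.g. principal component scores). All values are illustrative.
traits = np.array([
    [0.1, -0.2, 0.0],
    [1.5,  2.0, -1.0],
    [-0.1, 0.1, 0.2],
    [2.2, -1.8, 1.4],
])
rates = np.array([0.02, 0.90, 0.03, 0.75])  # assumed lineage evolutionary rates

centroid = traits.mean(axis=0)                      # middle of morphospace
dist = np.linalg.norm(traits - centroid, axis=1)    # conservatism proxy

# 'Living fossil' candidates combine a slow evolutionary rate with a
# position near the centroid: sum the two ranks, lower is stronger.
score = dist.argsort().argsort() + rates.argsort().argsort()
print(score.argmin())  # index of the strongest candidate in this toy example
```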
Some of these are informally known as "living fossils".
The baiji is not officially classified as extinct, but rather as critically endangered (possibly extinct), and it carries the unofficial status of functional extinction . [ 56 ]
The concept of the Living Lab has been defined in multiple ways. A definition from the European Network of Living Labs (ENoLL) is used most widely, describing them as "user-centred open innovation ecosystems" that integrate research and innovation through co-creation in real-world environments. [ 1 ]
Emerging at the intersection of ambient intelligence research and user experience methodologies in the late 1990s, the concept was pioneered at the Massachusetts Institute of Technology (MIT) as a way to study human interaction with new technologies in natural settings. Over time, living labs have evolved beyond their origins as controlled research environments, becoming dynamic platforms for participatory design, collaborative experimentation, and iterative innovation across various domains, including urban development, healthcare, sustainability, and digital technology. Characterized by principles such as real-world experimentation, active user involvement, and multi-stakeholder collaboration, living labs enable the continuous adaptation and validation of solutions in everyday contexts. Today, they are implemented globally, supported by networks like the European Network of Living Labs (ENoLL), and increasingly recognized as vital tools for addressing local and global transformation agendas.
The term "living lab" has emerged in parallel from the ambient intelligence (AmI) research communities [ 1 ] context and from the discussion on experience and application research (EAR). [ 2 ] The emergence of the term is based on the concept of user experience [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] and ambient intelligence. [ 9 ] [ 10 ] [ 11 ]
The term dates back to the late 1990s when Professor William J. Mitchell , Kent Larson , and Alex (Sandy) Pentland at the Massachusetts Institute of Technology were credited with first exploring the concept of a Living Laboratory. It was first associated with MIT's Media Lab as a concept for studying real-life contexts, where they described a living lab as a controlled environment designed to test new information and communication technology (ICT) innovations in a simulated home setting. [ 12 ] This was also when some of the key characteristics often assigned to living labs today began to take shape. They argued that a living lab represents a user-centric research methodology for sensing, prototyping , validating and refining complex solutions in multiple and evolving real-life contexts.
Research on living labs has expanded since the 1990s, especially in the 2010s, with growing interest in co-creation and participatory design. Particularly in Europe, the living lab evolved into a model that focused on studying user interactions with technology in real-world environments. This shift was influenced by earlier experiences in participatory design and social experiments with ICT. [ 12 ] As interest grew, the term began to encompass a broader array of initiatives and projects, leading to variations in its interpretation and implementation. [ 12 ] [ 13 ] Today, living labs are used in various fields, such as technology, healthcare, and urban sustainability, showing a transition from a narrow focus on their role as controlled environments to a more wide-ranging understanding of collaborative innovation addressing real societal challenges, [ 13 ] while also being referred to with various descriptions and definitions available from different sources. [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ]
The ENoLL definition that refers to living labs as "user-centred open innovation ecosystems" that integrate research and innovation through co-creation in real-world environments [ 20 ] is the most widely accepted description of living labs in academic literature. [ 13 ] In simple terms, living labs can be described as organizations or experimental spaces, located virtually or physically, that bring stakeholders from research, business, government, and the citizenry together to design and test solutions to be implemented in a real-world environment. A common definition of the living lab still does not exist, because living labs are interpreted and implemented across different contexts and can cover a wide range of activities and organizations, leading to different understandings of how they should function. [ 21 ] Living labs also often operate in various territorial contexts (e.g. city, agglomeration, region, campus), and can vary in their methodological approach to integrating concurrent research and innovation processes [ 22 ] within a public-private-people partnership. [ 23 ]
Despite these variations, common characteristics include user-centricity, real-world experimentation, multi-stakeholder collaboration, and iterative innovation processes. [ 24 ]
The systematic user co-creation approach integrates research and innovation processes through the co-creation, exploration, experimentation and evaluation of innovative ideas, scenarios, concepts and related technological artefacts in real-life use cases. Such use cases involve user communities not only as observed subjects but also as a source of creation. This approach allows all involved stakeholders to concurrently consider both the global performance of a product or service and its potential adoption by users. This consideration may be made at the earlier stages of research and development and through all elements of the product life-cycle, from design up to recycling. [ 25 ] User-centred research methods, [ 26 ] such as action research , community informatics , contextual design , [ 27 ] user-centered design , participatory design , [ 28 ] empathic design , emotional design , [ 29 ] [ 30 ] [ 31 ] and other usability methods, already exist but fail to sufficiently empower users to co-create within open development environments. More recently, the Web 2.0 has demonstrated the positive impact of involving user communities in new product development (NPD), as in mass collaboration projects (e.g. crowdsourcing , Wisdom of Crowds ) that collectively create new contents and applications.
Real-world experimentation emphasizes conducting activities in real-life settings to ensure that the results of projects and solutions are applicable to actual market conditions. [ 24 ] Multi-stakeholder collaboration refers to an approach in which various stakeholders, such as users, businesses, researchers, and government entities, work together towards a common goal. This is an important characteristic of living labs because the collaboration of these diverse groups allows for an exchange of ideas and perspectives that is thought to enhance innovation processes. [ 24 ] [ 32 ] Iterative innovation processes involve a cyclical method of developing products or services, in which stages such as research, development, testing, and implementation are revisited multiple times based on feedback and evaluation. This allows for continuous improvement of the innovation, product, or service being developed. In particular, the ongoing involvement of the user creates feedback mechanisms that are ultimately key to the successful development and implementation of products and services. [ 33 ]
A living lab is not the same as a testbed : its philosophy is to turn users from observed subjects, traditionally testing modules against requirements, into value creators contributing to the co-creation and exploration of emerging ideas, breakthrough scenarios, innovative concepts and related artefacts. Hence, a living lab constitutes an experiential environment, comparable to the concept of experiential learning , in which users are immersed in a creative social space for designing and experiencing their own future. Living labs could also be used by policy makers and users/citizens for designing, exploring, experiencing and refining new policies and regulations in real-life scenarios, evaluating their potential impacts before implementation. [ 13 ]
The European Network of Living Labs (ENoLL ) is an international, non-profit, independent association of certified living labs, which popularized the living lab concept with the aim of increasing user involvement in innovation. Formed in November 2006 under the guidance of the Finnish European Presidency, ENoLL is composed of a variety of stakeholders, including municipalities, research institutes, businesses, and users. [ 34 ] Its primary role is to support collaboration among living labs across Europe, and it includes many living labs focused on user-driven innovation across sectors.
ENoLL focuses on facilitating knowledge exchange , joint actions and project partnerships among its roughly 500 historically labelled members, influencing EU policies, promoting living labs and enabling their implementation worldwide. ENoLL serves as a platform for linking living labs around the globe, enabling knowledge sharing and collaborative learning among diverse cultural environments. [ 34 ] Membership is open to organizations worldwide, and ENoLL has expanded beyond Europe to include global members. ENoLL follows an application and accreditation process, in which aspiring living labs must demonstrate adherence to the core principles of user-centered, open innovation and real-life experimentation. Successful applicants are officially recognized as accredited living labs and become part of the ENoLL network. [ 35 ]
In practice, living labs place the citizen at the core of innovation, ensuring that new information and communication technology (ICT) solutions align with local needs. Living labs bring together multiple stakeholders - typically from the quadruple helix , which includes government, industry, academia, and civil society - to create a shared vision, mission and strategic goals.
Living labs emphasize active user participation , meaning participants, especially end-users such as citizens, are engaged not only as testers but also as co-creators who provide insights and feedback during various development phases. [ 21 ] Users in living labs can therefore take on multiple roles, including informants, testers, contributors, and co-creators, bringing their knowledge, experience, and needs to the forefront of development activities. [ 36 ] The utilizers , often private or public organizations, gain from the outcomes of the innovation activities and also play an important role in initiating the setup of the lab and promoting living lab initiatives to advance their own agendas. [ 13 ] Researchers are often involved in facilitating the innovation process by conducting studies, disseminating findings, and collaborating with users and businesses. [ 13 ] Government entities participate in the living lab ecosystem by providing the regulatory frameworks and infrastructure necessary for innovation, and they also support projects that align with public interests. [ 13 ]
A framework for understanding the functioning of living labs was introduced by Dr. Dimitri Schuurman in his 2015 PhD dissertation, Bridging the gap between open and user innovation. [ 37 ] This three-layered model is commonly used within the European Network of Living Labs (ENoLL), and describes living labs as operating on three levels: the macro level of the living lab constellation of partners, the meso level of individual innovation projects, and the micro level of the specific methodological research steps within those projects.
Despite the wide variety of methodological approaches and contexts within which living labs operate, all living labs use the same six building blocks. ENoLL, the European Network of Living Labs, [ 38 ] describes them as active user involvement, a real-life setting, a multi-method approach, multi-stakeholder participation, co-creation, and orchestration.
From a conceptual perspective, there are many 'types' of living labs, yet four broad categories can be established as follows:
Most living labs are a combination of the above-mentioned categories, but their primary focus is centered on one of these types.
Urban living labs (ULLs) are increasingly recognized as a way to address a wide range of sustainability issues in cities, ranging from environmental degradation to social inequality. [ 39 ] For instance, in the context of circular economy initiatives, ULLs have been instrumental in bringing together citizens, businesses, and public institutions to design and implement projects that reduce waste and promote resource efficiency. [ 40 ] [ 41 ] ULLs further leverage the involvement of citizens in data collection and experimentation, raising awareness about environmental issues such as air pollution and climate change. Initiatives like the I-CHANGE project promote citizen science by empowering citizens to take an active role in driving change. [ 42 ] Finally, ULLs are thought to be important mechanisms for promoting sustainability transitions. [ 33 ] [ 43 ] By providing a space for experimentation and learning, ULLs enable cities to test and refine innovative solutions before scaling them up. The meta-lab approach, which connects multiple ULLs across different urban contexts, has been proposed as a way to accelerate learning and the diffusion of successful practices, thereby supporting system-wide sustainability transformation. [ 44 ]
MIT Living Labs/City Science/Media Lab
From 2004 to 2007, the MIT House_n Consortium (now City Science), directed by Kent Larson, created and operated the PlaceLab, [ 45 ] a residential living laboratory located in a multi-family apartment building in Cambridge, Massachusetts. The PlaceLab was, at the time, the most highly instrumented living environment ever created. Hundreds of sensors and semi-automated activity recognition allowed researchers to determine where occupants were, what they were doing, the systems they interacted with, and the state of the environment. Volunteer occupants lived in the facility for weeks at a time to test the effectiveness of proactive health systems related to diet, exercise, medication adherence, and other interventions. Kent Larson , Stephen Intille, Emmanuel Munguia Tapia, and other PlaceLab researchers twice received the "10-Year Impact Award" from Ubicomp: a "test of time" award for work that, with the benefit of hindsight, has had the greatest impact. This work was followed by BoxLab, a home furniture object that captured and processed sensor data in the home, and CityHome, which integrated architectural robotics into furniture to effortlessly transform space from sleeping to socializing to working to dining (now launched commercially as ORI Living).
In 2010, Mitchell, Larson and Pentland formed the first US-based living labs research consortium. According to the consortium website: [ 46 ]
The convergence of globalization, changing demographics, and urbanization is transforming almost every aspect of our lives. We face new choices about where and how we work, live, travel, communicate, and maintain health. Ultimately, our societies are being transformed. MIT Living Labs brings together interdisciplinary experts to develop, deploy, and test - in actual living environments - new technologies and strategies for design that respond to this changing world. Our work spans in scale from the personal to the urban, and addresses challenges related to health, energy, and creativity.
The consortium has since been reorganized as the City Science Initiative at the MIT Media Lab, within the School of Architecture + Planning. There is now an international network of City Science Labs at Tongji University ( Shanghai ), Taipei Tech ( Taipei ), HafenCity University ( Hamburg ), Aalto University ( Helsinki ), ActuaTech ( Andorra ), and Toronto Metropolitan University ( Toronto ). [ 47 ]
As of August 2019, Larson is Director of the City Science Initiative at the MIT Media Lab, [ 48 ] and Pentland is Professor of Media Arts and Sciences and MIT Media Lab Entrepreneurship Program Director (also within the School of Architecture + Planning). [ 49 ] He has recently formed a partnership with the South Australian Government to set up a living lab in the Lot Fourteen hub, similar to the MIT Living Labs in New York City , Beijing and Istanbul . [ 50 ]
Cité-ID Living Lab
The Cité-ID Living Lab, situated in Montréal, Canada, focuses on the governance of urban resilience . By engaging multiple actors in the innovation process, this living lab aims to address crises such as climate change, recovery from the impacts of the COVID-19 pandemic , and socio-economic stressors that are particularly present within cities, functioning as an incubator for the development of cross-sectoral approaches to the sharing of knowledge and ideas.
The approach of this living lab is largely research-focused, with several publications on topics such as intra- and inter-organizational capacities, social and technological innovation, governance of climate-induced risks and impacts, and recovery and social connections. | https://en.wikipedia.org/wiki/Living_lab |
A living medicine is a type of biologic that consists of a living organism used to treat a disease. This usually takes the form of a cell (animal, bacterial, or fungal) or a virus that has been genetically engineered to possess therapeutic properties and is injected into a patient. [ 2 ] [ 3 ] Perhaps the oldest use of a living medicine is the use of leeches for bloodletting , though living medicines have advanced tremendously since that time.
Examples of living medicines include cellular therapeutics (including immunotherapeutics ), phage therapeutics , and bacterial therapeutics , a subset of the latter being probiotics .
Development of living medicines is an extremely active research area in the fields of synthetic biology and microbiology . [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Currently, there is a large focus on: 1) identifying microbes that naturally produce therapeutic effects (for example, probiotic bacteria), and 2) genetically programming organisms to produce therapeutic effects. [ 15 ] [ 16 ] [ 17 ]
There is tremendous interest in using bacteria as a therapy to treat tumors. Tumor-homing bacteria that thrive in hypoxic environments are particularly attractive for this purpose, as they tend to migrate to, invade (through the leaky vasculature in the tumor microenvironment ) and colonize tumors. This property tends to increase their residence time in the tumor, giving them longer to exert their therapeutic effects, in contrast to other bacteria, which would be quickly cleared by the immune system. [ 19 ] [ 20 ] [ 21 ] [ 22 ]
Living systems are life forms (or, more colloquially, living things ) treated as a system . They are said to be open, self-organizing, and to interact with their environment. These systems are maintained by flows of information , energy and matter . Multiple theories of living systems have been proposed. Such theories attempt to map general principles for how all living systems work.
Some scientists have proposed in the last few decades that a general theory of living systems is required to explain the nature of life. [ 1 ] Such a general theory would arise out of the ecological and biological sciences and attempt to map general principles for how all living systems work. Instead of examining phenomena by attempting to break things down into components, a general living systems theory explores phenomena in terms of dynamic patterns of the relationships of organisms with their environment. [ 2 ]
James Grier Miller 's living systems theory is a general theory about the existence of all living systems, their structure , interaction , behavior and development , intended to formalize the concept of life. According to Miller's 1978 book Living Systems , such a system must contain each of twenty "critical subsystems" defined by their functions. Miller considers living systems as a type of system . Below the level of living systems, he defines space and time , matter and energy , information and entropy , levels of organization , and physical and conceptual factors, and above living systems ecological, planetary and solar systems, galaxies, etc. [ 3 ] [ 4 ] [ 5 ] Miller's central thesis is that the multiple levels of living systems (cells, organs, organisms, groups, organizations, societies, supranational systems) are open systems composed of critical and mutually dependent subsystems that process inputs, throughputs, and outputs of energy and information. [ 6 ] [ 7 ] [ 8 ] Seppänen (1998) says that Miller applied general systems theory on a broad scale to describe all aspects of living systems. [ 9 ] Bailey states that Miller's living systems theory (LST) is perhaps the "most integrative" social systems theory, [ 10 ] clearly distinguishing between matter–energy processing and information processing, and showing how social systems are linked to biological systems. LST analyzes the irregularities or "organizational pathologies" of systems functioning (e.g., system stress and strain, feedback irregularities, information-input overload). It explicates the role of entropy in social research while equating negentropy with information and order. It emphasizes both structure and process, as well as their interrelations. [ 11 ]
The idea that Earth is alive is found in philosophy and religion, but the first scientific discussion of it was by the Scottish geologist James Hutton . In 1785, he stated that Earth was a superorganism and that its proper study should be physiology . [ 12 ] : 10 The Gaia hypothesis, proposed in the 1960s by James Lovelock , suggests that life on Earth functions as a single organism that defines and maintains environmental conditions necessary for its survival. [ 13 ] [ 14 ]
A systems view of life treats environmental fluxes and biological fluxes together as a "reciprocity of influence," [ 15 ] and a reciprocal relation with environment is arguably as important for understanding life as it is for understanding ecosystems. As Harold J. Morowitz (1992) explains it, life is a property of an ecological system rather than a single organism or species. [ 16 ] He argues that an ecosystemic definition of life is preferable to a strictly biochemical or physical one. Robert Ulanowicz (2009) highlights mutualism as the key to understand the systemic, order-generating behaviour of life and ecosystems. [ 17 ]
Robert Rosen devoted a large part of his career, from 1958 [ 18 ] onwards, to developing a comprehensive theory of life as a self-organizing complex system, "closed to efficient causation". He defined a system component as "a unit of organization; a part with a function, i.e., a definite relation between part and whole." He identified the "nonfractionability of components in an organism" as the fundamental difference between living systems and "biological machines." He summarised his views in his book Life Itself . [ 19 ]
Complex systems biology is a field of science that studies the emergence of complexity in functional organisms from the viewpoint of dynamic systems theory. [ 20 ] The latter is also often called systems biology and aims to understand the most fundamental aspects of life. A closely related approach, relational biology, is concerned mainly with understanding life processes in terms of the most important relations, and categories of such relations among the essential functional components of organisms; for multicellular organisms, this has been defined as "categorical biology", or a model representation of organisms as a category theory of biological relations, as well as an algebraic topology of the functional organisation of living organisms in terms of their dynamic, complex networks of metabolic, genetic, and epigenetic processes and signalling pathways . [ 21 ] [ 22 ] Related approaches focus on the interdependence of constraints, where constraints can be either molecular, such as enzymes, or macroscopic, such as the geometry of a bone or of the vascular system. [ 23 ]
Harris Bernstein and colleagues argued in 1983 that the evolution of order in living systems and certain physical systems obeys a common fundamental principle termed the Darwinian dynamic. This was formulated by first considering how macroscopic order is generated in a simple non-biological system far from thermodynamic equilibrium, and then extending consideration to short, replicating RNA molecules. The underlying order-generating process was concluded to be basically similar for both types of systems. [ 24 ] [ 25 ]
Gerard Jagers' operator theory proposes that life is a general term for the presence of the typical closures found in organisms (a membrane and an autocatalytic set in the cell [ 26 ] ), and that an organism is any system whose organisation complies with an operator type that is at least as complex as the cell. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Life can be modelled as a network of inferior negative feedbacks of regulatory mechanisms subordinated to a superior positive feedback formed by the potential for expansion and reproduction. [ 31 ]
Stuart Kauffman defines a living system as an autonomous agent or a multi-agent system capable of reproducing itself or themselves, and of completing at least one thermodynamic work cycle . [ 32 ] This definition is extended by the evolution of novel functions over time. [ 33 ]
Budisa , Kubyshkin and Schmidt defined cellular life as an organizational unit resting on four pillars/cornerstones: (i) energy , (ii) metabolism , (iii) information and (iv) form . This system is able to regulate and control metabolism and energy supply and contains at least one subsystem that functions as an information carrier ( genetic information ). Cells as self-sustaining units are parts of different populations that are involved in the unidirectional and irreversible open-ended process known as evolution . [ 34 ] | https://en.wikipedia.org/wiki/Living_systems |
A lixiviant is a chemical used in hydrometallurgy to extract elements from their ores . [ 1 ] [ 2 ] One of the most famous lixiviants is cyanide , which is used in extracting 90% of mined gold . The combination of cyanide and air converts gold particles into a soluble salt. Once separated from the bulk gangue , the solution is processed in a series of steps to give the metal. [ 3 ]
The origin is the word lixiviate , meaning to leach or to dissolve out, deriving from the Latin lixivium . [ 4 ] A lixiviant assists in rapid and complete leaching , for example during in situ leaching . The metal can be recovered from it in a concentrated form after leaching.
| https://en.wikipedia.org/wiki/Lixiviant |
In mathematics , more specifically in the study of dynamical systems and differential equations , a Liénard equation [ 1 ] is a type of second-order ordinary differential equation named after the French physicist Alfred-Marie Liénard .
During the development of radio and vacuum tube technology, Liénard equations were intensively studied, as they can be used to model oscillating circuits . Under certain additional assumptions, Liénard's theorem guarantees the existence and uniqueness of a limit cycle for such a system. A Liénard system with piecewise-linear functions can also contain homoclinic orbits . [ 2 ]
Let $f$ and $g$ be two continuously differentiable functions on $\mathbb{R}$, with $f$ an even function and $g$ an odd function. Then the second-order ordinary differential equation of the form
$$\frac{d^{2}x}{dt^{2}} + f(x)\frac{dx}{dt} + g(x) = 0$$
is called a Liénard equation .
The equation can be transformed into an equivalent two-dimensional system of ordinary differential equations . We define
$$F(x) := \int_{0}^{x} f(\xi)\,d\xi, \qquad x_{1} := x, \qquad x_{2} := \frac{dx}{dt} + F(x);$$
then the system
$$\dot{x}_{1} = x_{2} - F(x_{1}), \qquad \dot{x}_{2} = -g(x_{1})$$
is called a Liénard system .
Alternatively, since the Liénard equation itself is also an autonomous differential equation , the substitution $v = \frac{dx}{dt}$ turns the Liénard equation into a first-order differential equation:
$$v\frac{dv}{dx} + f(x)\,v + g(x) = 0,$$
which is an Abel equation of the second kind . [ 3 ] [ 4 ]
The Van der Pol oscillator
$$\frac{d^{2}x}{dt^{2}} - \mu(1-x^{2})\frac{dx}{dt} + x = 0$$
is a Liénard equation, with $f(x) = -\mu(1-x^{2})$ and $g(x) = x$. Its solution approaches a limit cycle; such a cycle arises for a Liénard equation with $f(x)$ negative at small $|x|$ and positive otherwise. The Van der Pol equation has no exact, analytic solution, but an exact solution for the limit cycle does exist if $f(x)$ is a piecewise-constant function. [ 5 ]
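As an illustration (not part of the original article), here is a minimal numerical sketch in Python: the Van der Pol oscillator integrated in its Liénard-system form, with trajectories from two different initial conditions approaching the same limit cycle. The value of μ, the step size, and the initial conditions are arbitrary choices.

```python
# Minimal sketch: the Van der Pol oscillator in its Lienard-system form,
#   x1' = x2 - F(x1),  x2' = -g(x1),
# with f(x) = -mu*(1 - x**2), g(x) = x, and F the antiderivative of f.
# mu, step size, and initial conditions are illustrative assumptions.
import numpy as np

mu = 1.0
F = lambda x: -mu * (x - x**3 / 3.0)  # antiderivative of f(x) = -mu*(1 - x^2)
g = lambda x: x

def integrate(x1, x2, dt=0.01, steps=20_000):
    """Fixed-step RK4 integration of the Lienard system."""
    rhs = lambda s: np.array([s[1] - F(s[0]), -g(s[0])])
    s = np.array([x1, x2], dtype=float)
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return s

# Trajectories started well inside and well outside the cycle end up at
# states of similar amplitude, consistent with a unique attracting limit cycle.
print(integrate(0.1, 0.0))
print(integrate(4.0, 0.0))
```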
A Liénard system has a unique and stable limit cycle surrounding the origin if it satisfies the following additional properties: [ 6 ] $g(x) > 0$ for all $x > 0$; $F(x) \to \infty$ as $x \to \infty$, where $F(x) := \int_{0}^{x} f(\xi)\,d\xi$; and $F(x)$ has exactly one positive root at some value $p$, with $F(x) < 0$ for $0 < x < p$ and $F(x) > 0$ and monotonic for $x > p$. | https://en.wikipedia.org/wiki/Liénard_equation |
Liñán diffusion flame theory is a theory developed by Amable Liñán in 1974 to explain the diffusion flame structure using activation energy asymptotics and Damköhler number asymptotics. [ 1 ] [ 2 ] [ 3 ] Liñán used counterflowing jets of fuel and oxidizer to study the diffusion flame structure, analyzing the entire range of Damköhler number . His theory predicted four different types of flame structure, as follows: the nearly frozen ignition regime, the partial burning regime, the premixed flame regime, and the near-equilibrium diffusion controlled regime.
The theory is well explained in the simplest possible model. Thus, assuming a one-step irreversible Arrhenius law for the combustion chemistry with constant density and transport properties and with unity Lewis number reactants, the governing equation for the non-dimensional temperature field $T(y)$ in the stagnation point flow reduces to
where $Z$ is the mixture fraction, $\mathrm{Da}$ is the Damköhler number , $T_a = E/R$ is the activation temperature, and the fuel and oxidizer mass fractions are scaled with their respective feed stream values, given by
with boundary conditions $T(-\infty) = T(\infty) = T_o$. Here, $T_o$ is the unburnt temperature profile (frozen solution) and $S$ is the stoichiometric parameter (the mass of the oxidizer stream required to burn a unit mass of the fuel stream). The four regimes are analyzed by solving the above equations using activation energy asymptotics and Damköhler number asymptotics. The solution to the above problem is multi-valued. Treating the mixture fraction $Z$ as the independent variable reduces the equation to
with boundary conditions $T(0) = T(1) = T_o$ and $y = \sqrt{2}\,\mathrm{erfc}^{-1}(2Z)$.
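As an aside (not part of the original article), the coordinate relation quoted above is straightforward to evaluate numerically; a minimal sketch using SciPy, with arbitrary sample values of Z:

```python
# Minimal sketch: evaluate y = sqrt(2) * erfcinv(2 Z), the mapping between
# the mixture fraction Z in (0, 1) and the similarity coordinate y used above.
# The sample values of Z are illustrative only.
import numpy as np
from scipy.special import erfcinv

Z = np.array([0.05, 0.25, 0.5, 0.75, 0.95])
y = np.sqrt(2.0) * erfcinv(2.0 * Z)
for zi, yi in zip(Z, y):
    print(f"Z = {zi:4.2f}  ->  y = {yi:+.4f}")
# Z = 0.5 maps to y = 0, and y -> +inf as Z -> 0.
```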
The reduced Damköhler number is defined as follows
where $y_s = \sqrt{2}\,\mathrm{erfc}^{-1}(2Z_s)$, $Z_s = 1/(S+1)$ and $T_s = T_o + Z_s$. The theory predicted an expression for the reduced Damköhler number at which the flame will extinguish, given by
where $\gamma = 1 - 2(1-\alpha)(1-Z_s)$. | https://en.wikipedia.org/wiki/Liñán's_diffusion_flame_theory |
In the study of diffusion flame , Liñán's equation is a second-order nonlinear ordinary differential equation which describes the inner structure of the diffusion flame, first derived by Amable Liñán in 1974. [ 1 ] The equation reads as
subject to the boundary conditions
where $\delta$ is the reduced or rescaled Damköhler number and $\gamma$ is the ratio of the excess heat conducted to one side of the reaction sheet to the total heat generated in the reaction zone. If $\gamma > 0$, more heat is transported to the oxidizer side, thereby reducing the reaction rate on the oxidizer side (since the reaction rate depends on the temperature), and consequently a greater amount of fuel will leak into the oxidizer side. Whereas if $\gamma < 0$, more heat is transported to the fuel side of the diffusion flame, thereby reducing the reaction rate on the fuel side of the flame and increasing the oxidizer leakage into the fuel side. When $\gamma \rightarrow 1$ ($\gamma \rightarrow -1$), all the heat is transported to the oxidizer (fuel) side and the flame therefore sustains an extremely large amount of fuel (oxidizer) leakage. [ 2 ]
The equation is, in some aspects, universal (it is also called the canonical equation of the diffusion flame): although Liñán derived the equation for stagnation point flow , assuming unity Lewis numbers for the reactants, the same equation is found to represent the inner structure of general laminar flamelets, [ 3 ] [ 4 ] [ 5 ] having arbitrary Lewis numbers. [ 6 ] [ 7 ] [ 8 ]
Near the extinction of the diffusion flame, $\delta$ is of order unity. The equation has no solution for $\delta < \delta_E$, where $\delta_E$ is the extinction Damköhler number. For $\delta > \delta_E$ with $|\gamma| < 1$, the equation possesses two solutions, one of which is unstable. A unique solution exists if $|\gamma| > 1$ and $\delta > \delta_E$. The solution is also unique for $\delta > \delta_I$, where $\delta_I$ is the ignition Damköhler number.
Liñán also gave a correlation formula for the extinction Damköhler number, which is increasingly accurate for $1 - \gamma \ll 1$:
$$\delta_E = e\left[(1 - |\gamma|) - (1 - |\gamma|)^2 + 0.26\,(1 - |\gamma|)^3 + 0.055\,(1 - |\gamma|)^4\right].$$
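For illustration (not part of the original article), a minimal sketch evaluating the extinction correlation quoted above over a few sample values of γ; the sampled values are arbitrary:

```python
# Minimal sketch: evaluate the extinction Damkohler correlation quoted above,
# delta_E = e * [ u - u^2 + 0.26 u^3 + 0.055 u^4 ] with u = 1 - |gamma|.
# Sample gamma values are illustrative.
import math

def delta_E(gamma):
    u = 1.0 - abs(gamma)
    return math.e * (u - u**2 + 0.26 * u**3 + 0.055 * u**4)

for gamma in (0.0, 0.25, 0.5, 0.75, 0.95):
    print(f"gamma = {gamma:4.2f}  ->  delta_E ~ {delta_E(gamma):.4f}")
# In this correlation, delta_E decreases toward 0 as |gamma| -> 1.
```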
The generalized Liñán equation is given by
where $m$ and $n$ are constant reaction orders of the fuel and oxidizer, respectively.
In the Burke–Schumann limit , $\delta \rightarrow \infty$, and the equation reduces to
An approximate solution to this equation was developed by Liñán himself using an integral method in 1963, for his thesis: [ 9 ]
where $\mathrm{erf}$ is the error function and
Here $\zeta = \zeta_m$ is the location where $y(\zeta)$ reaches its minimum value $y(\zeta_m) = y_m$. When $m = n = 1$, $\zeta_m = 0$, $y_m = 0.8702$ and $k = 0.6711$. | https://en.wikipedia.org/wiki/Liñán's_equation |
In combustion , Liñán's flame speed provides an estimate of the upper limit for the edge-flame propagation velocity when the flame curvature is small. The formula is named after Amable Liñán . [ 1 ] When the flame thickness is much smaller than the mixing-layer thickness through which the edge flame is propagating, a flame speed can be defined as the propagation speed of the flame front with respect to a region far ahead of the flame. For small flame curvatures ( flame stretch ), each point of the flame front propagates at a laminar planar premixed speed $S_L$ that depends on the local equivalence ratio $\phi$ just ahead of the flame. However, the flame front as a whole does not propagate at the speed $S_L$, since the mixture ahead of the flame front undergoes thermal expansion due to heating by the flame front, which aids the flame front in propagating faster with respect to the region far ahead of it. Liñán estimated the edge flame speed to be
$$U_{\text{edge}} \approx S_L^0 \sqrt{\frac{\rho_u}{\rho_b}},$$
where $\rho_u$ and $\rho_b$ are the densities of the fluid far upstream and far downstream of the flame front, and $S_L^0$ is the stoichiometric value ($\phi = 1$) of the planar flame speed. Due to the thermal expansion, streamlines diverge as they approach the flame and a pressure buildup forms just ahead of the flame.
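As a quick numerical illustration (not in the original article), a minimal sketch evaluating the density-ratio scaling quoted above; the planar flame speed and densities are typical-order assumptions, not data from the source:

```python
# Minimal sketch: upper-limit edge-flame speed from the density-ratio scaling
# U_edge ~ S_L0 * sqrt(rho_u / rho_b). Input values are illustrative
# assumptions (roughly hydrocarbon/air orders of magnitude), not sourced.
import math

S_L0 = 0.4      # m/s, stoichiometric planar flame speed (assumed)
rho_u = 1.20    # kg/m^3, unburnt gas density (assumed)
rho_b = 0.17    # kg/m^3, burnt gas density (assumed, ~7x thermal expansion)

U_edge = S_L0 * math.sqrt(rho_u / rho_b)
print(f"U_edge ~ {U_edge:.2f} m/s  (vs planar S_L0 = {S_L0} m/s)")
```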
The scaling law for the flame speed was verified experimentally. [ 2 ] [ 3 ] In the constant-density approximation, this influence of density variations disappears and the upper limit of the edge flame speed is given by the maximum value of $S_L$. [ 4 ] | https://en.wikipedia.org/wiki/Liñán's_flame_speed |
Ljungström air preheater is an air preheater invented by the Swedish engineer Fredrik Ljungström (1875–1964).
The patent was granted in 1930. [ 1 ] The factory and workshop were in Lidingö throughout the 1920s, with about 70 employees. In the 1930s, the facilities were used as a film studio, and they were demolished in the 1970s to make space for new development.
In 1995, the Ljungström air preheater was distinguished as the 185th International Historic Mechanical Engineering Landmark by the American Society of Mechanical Engineers . [ 2 ] Ljungström's air preheater technology is implemented in a vast number of modern power stations around the world, with total attributed worldwide fuel savings estimated at 4,960,000,000 tons of oil ; it has been said that "few inventions have been as successful in saving fuel as the Ljungström Air Preheater". In modern boilers , the preheater can provide up to 20% of the total heat transfer in the boiler process, while representing only 2% of the investment. [ 3 ]
| https://en.wikipedia.org/wiki/Ljungström_air_preheater |
The Llewellyn separator is a membrane molecular separator , a device for interfacing the effluent from a gas chromatograph to the ion source input of a mass spectrometer by changing a rather large (e.g. 10^4 Torr·litre), dilute (e.g. 1 part of vapour in 10^5 parts of carrier gas) gas flow into a small (e.g. <10^-3 Torr·litre), concentrated flow that can be admitted to the vacuum of the mass spectrometer. It is typically based on the substance passing through a few silicone rubber membranes in series. In order for the molecules of interest to be enriched relative to the small, light carrier gas molecules, the former must be captured by (dissolved in) the membrane polymer. The selection properties may be augmented by a liquid stationary phase on the membrane.
| https://en.wikipedia.org/wiki/Llewellyn_separator |
Llinás's law , or law of no interchangeability of neurons , is a statement in neuroscience made by Rodolfo Llinás in 1989, during his Luigi Galvani Award Lecture at the Fidia Research Foundation Neuroscience Award Lectures. [ 1 ]
A neuron of a given kind (e.g. a thalamic cell) cannot be functionally replaced by one of another type (e.g. an inferior olivary cell) even if their synaptic connectivity and the type of neurotransmitter outputs are identical. (The difference is that the intrinsic electrophysiological properties of thalamic cells are extraordinarily different from those of inferior olivary neurons.) [ 2 ] [ 3 ]
The statement of this law is a consequence of an article written by Rodolfo Llinás himself in 1988 and published in Science under the title "The Intrinsic Electrophysiological Properties of Mammalian Neurons: Insights into Central Nervous System Function", [ 4 ] which is considered a watershed paper, with more than 2000 citations in the scientific literature, marking a major shift in neuroscience's view of neuronal function. Until then, the prevailing belief in neuroscience was that the connections and neurotransmitters released by neurons were enough to determine their function. Research by Llinás and colleagues during the 1980s with vertebrates revealed that this previously held dogma was wrong.
The Lloyd Loom process was patented in 1917 by the American Marshall B. Lloyd , who twisted kraft paper around a metal wire, placed the paper threads on a loom and wove them into what was to become the traditional Lloyd Loom fabric. [ 1 ] Lloyd Loom chairs quickly became very popular in the United States and in 1921, Marshall B. Lloyd sold the British rights to W (William) Lusty & Sons, who used the Lloyd Loom fabric to create a range of furniture simpler in design than the American originals. [ 1 ]
At the height of its popularity, in the 1930s, Lusty Lloyd Loom furniture could be found in hotels, restaurants and tea rooms, as well as aboard a Zeppelin , cruise ships and ocean-going liners, becoming a household name. The Lusty family developed over one thousand designs, and over ten million pieces of Lusty Lloyd Loom were made in America and Great Britain before 1940.
William Lusty began in business in 1872 with a hardware shop in London's East End . Specialising in timber products the business benefitted from the demand for ammunition packing cases during the First World War. In May 1920 Frank Lusty sought the British rights to the new Lloyd Loom fabric, having been tipped off by Lusty's New York agent. Full patent rights were acquired in 1921 and the Lustys were in production the following year.
A few years later the Lusty Lloyd Loom factory covered seventeen acres at Bromley-by-Bow in East London and employed over 500 people making a range of products from baby carriages to kitchen cupboards. By 1933 over four hundred designs were featured in the Lusty Lloyd Loom catalogue. The factory was completely destroyed by bombing during the afternoon of 7 September 1940, together with over twenty thousand items of stock; there were no fatalities. [ 1 ] The Lustys relaunched their business with a new catalogue in 1951, though post-war austerity prevented them from achieving the pre-war sales level. The London factory site was eventually sold and the business moved to Martley near Worcester . Production ceased in 1968. Whilst the Lusty family found other interests and pursuits, Lloyd Loom production continued in the United States at Menominee until 1982; after a brief hiatus, production resumed in Menominee that same year when Flanders Industries purchased the Lloyd Manufacturing works there, forming the Lloyd Flanders company. Lloyd Flanders continues to make Lloyd Loom furniture in Menominee today.
Geoffrey Lusty, a grandson of William Lusty who founded the business, rekindled production in the last decade of the twentieth century by moving production to the Far East . Whilst the production standards were as good as the original, and all items were made to the original W Lusty & Sons designs, the sales could not be developed in the United Kingdom and in 2008 the business called in advisors to find a new owner. W Lusty & Sons became The Lusty Furniture Company in July 2008, backed by private investors interested in preserving the legacy of Marshall B Lloyd, the inventor of Lloyd Loom. Reinstating the original design book the new owners maintained production in Indonesia and now provide the original designs in any colour, as W Lusty & Sons had offered in 1922.
During the fallow period between 1951 and the late 1990s a raft of commercial furniture producers entered the Lloyd Loom marketplace, such as the now defunct Lloyd Loom of Spalding in the United Kingdom and Vincent Sheppard in Belgium . A number of Lloyd Loom manufacturers and retailers, both in the UK and abroad, have emerged, designing, producing and selling various indoor and outdoor Lloyd Loom product lines. Some manufacturers and retailers have developed synthetic fibres based on the original paper loom for use in outdoor furniture.
The majority of current worldwide Lloyd Loom production takes place in Indonesia , China and Vietnam . Whilst Lloyd Loom is made on a small scale in the UK the larger number of Lloyd Loom companies and retailers operate as importers with the vast majority of Lloyd Loom produced in the Far East. | https://en.wikipedia.org/wiki/Lloyd_Loom |
lnfs is a Plan 9 file system enabling the use of long filenames on filesystems that do not support them. It is similar to the UMSDOS file system for Linux . [ 1 ]
| https://en.wikipedia.org/wiki/Lnfs |
LoJack is a stolen-vehicle recovery and IoT -connected car system that utilizes GPS and cellular technology to locate users' vehicles, view trip history, see battery levels, track speeding, and maintain vehicle health via a native app. Prior to selling a vehicle, LoJack dealers can use the system to manage and locate inventory, view and manage battery health, and recover stolen inventory.
Previous generations of the system utilized radio-tracking signals. The system used a hidden, mounted transceiver and a tracking computer installed in police cars and aircraft.
The original LoJack system was created and patented in 1979 by William Reagan, a former Medfield, Massachusetts police commissioner, who went on to establish LoJack Corporation in Medfield. Reagan served as the company's first CEO and chairman. [ 1 ] The name "LoJack" was coined as the "antithesis of hijack ", wherein "hijack" refers to the theft of a vehicle through force.
The original LoJack was a radio-based, hardware system designed to prevent theft of a vehicle and aid in the vehicle’s recovery by transmitting vehicle-location data to the LoJack receiver.
It was installed in the vehicle and connected to the starting mechanism such that only the original key would start the vehicle. It could also incorporate a scheme whereby an additional step was required to activate the ignition: prior to starting, the driver had to activate one of the usual vehicle features, such as the radio, headlight switch, or another switched device. [ 2 ]
The core of the legacy LoJack system is a small, silent radio transceiver that is discreetly installed in a vehicle. Once installed, the unit and the vehicle's VIN are registered in a database that interfaces with the National Crime Information Center (NCIC) system used by federal, state, and local law-enforcement agencies throughout the United States . In the event of a theft, a customer reports the incident to the police, who make a routine entry into the state police crime computer, including the stolen vehicle's VIN. This theft report is automatically processed by LoJack network computers, triggering a remote command to the specific LoJack unit in the stolen vehicle. [ 3 ]
The command activates the LoJack unit, which starts sending out signals to LoJack police tracking computers onboard some police cars. Every police car so equipped within a 3–5 mile radius of the signal source will be alerted. The tracking units display an alphanumeric reply code and an indication of the approximate direction and distance to the stolen vehicle. Based on the reply code, the police can obtain a physical description of the vehicle, including make ( brand ), model, color, VIN, and license plate number. Police aircraft can also be equipped with tracking computers; airborne units can receive the ( line-of-sight ) signals from further away than ground-based units. The signal is received in equipped police vehicles using a phased-array antenna system, hence the four distinctive antennae on the roof. This provides the directional location-tracking capabilities of the system. [ 3 ]
In addition to automobile theft recovery, LoJack systems are used to recover stolen construction equipment and motorcycles. [ 4 ] [ 5 ]
By 2013, the LoJack system was reportedly operating in 28 states and the District of Columbia and in more than 30 countries. The company reported that more than 1,800 U.S. law-enforcement agencies had LoJack tracking computers in their police vehicles. [ 1 ] In November 2013, the company announced it was expanding tracking capabilities to parents, auto-makers, and insurance companies. [ 6 ]
In March 2016, the company was acquired for $134 million by CalAmp , an Irvine, California -based provider of Internet of things (IoT) software applications, cloud services, data intelligence, and telematics products and services. [ 7 ]
In 2024, CalAmp filed for Chapter 11 bankruptcy , allowing for a secured deal with its lenders to swap its $229 million in bonds for equity. The company stated that its financial state had been bleak for many years, blaming its acquisition of LoJack and an ill-fated program that stretched customers' payment terms. [ 8 ]
LoJack transmits on a radio ( RF ) carrier frequency of 173.075 MHz . Vehicles with the system installed send a 200 millisecond (ms) chirp every fifteen seconds on this frequency. When being tracked after having been reported stolen, the devices send out a 200 ms signal once per second. [ 9 ] [ 10 ] The radio frequency transmitted by LoJack is near the VHF spectrum used in North America by digital television channel 7, [ 11 ] although there is said to be minimal interference due to the low power of radiation, brief chirp duration, and long interval between chirps. [ 12 ]
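For illustration (not from the source), the chirp timings above imply easily computed transmitter duty cycles; a minimal sketch:

```python
# Minimal sketch: transmitter duty cycle implied by the chirp timings above.
chirp_s = 0.200   # 200 ms chirp duration

for label, period_s in (("idle (every 15 s)", 15.0), ("tracking (every 1 s)", 1.0)):
    duty = chirp_s / period_s
    print(f"{label}: duty cycle = {duty:.2%}")
# idle: 1.33%; tracking: 20.00% -- consistent with the low average
# radiated power noted above.
```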
Modern transponder key -based systems made the original LoJack starting system obsolete. The system marketed under the LoJack brand since 2021 is a cell phone/GPS-based stolen-vehicle tracking and recovery system.
In March 2021, the vehicle intelligence company Spireon announced it had acquired the LoJack U.S. Stolen Vehicle Recovery business from CalAmp, joining LoJack users with "nearly 4 million active subscribers from over 20,000 current Spireon customers". [ 13 ] CalAmp would still retain and continue to expand LoJack International, which operates as a subscription-based SaaS business, while also retaining ownership of the LoJack patents and trademarks.
In 2023, a group of security researchers announced the discovery of multiple software bugs affecting vehicles from nearly all major car brands, potentially enabling hackers to take full control of the affected cars. The most serious vulnerabilities were found in Spireon's fleet management software, which spans 15 million connected vehicles, and could have allowed remote control over a wide range of fleet vehicles, including those used by law enforcement. All identified bugs have since been fixed. [ 14 ]
A load-bearing wall or bearing wall is a wall that is an active structural element of a building , which holds the weight of the elements above it by conducting their weight to a foundation structure below it.
Load-bearing walls are one of the earliest forms of construction. The development of the flying buttress in Gothic architecture allowed structures to maintain an open interior space, transferring more weight to the buttresses instead of to central bearing walls. In housing, load-bearing walls are most common in the light construction method known as " platform framing ". At the birth of the skyscraper era, the concurrent rise of steel as a more suitable framing system, first designed by William Le Baron Jenney , and the limitations of load-bearing construction in large buildings led to a decline in the use of load-bearing walls in large-scale commercial structures.
A load-bearing wall or bearing wall is a wall that is an active structural element of a building — that is, it bears the weight of the elements above said wall, resting upon it by conducting its weight to a foundation structure. [ 1 ] The materials most often used to construct load-bearing walls in large buildings are concrete , block , or brick . By contrast, a curtain wall provides no significant structural support beyond what is necessary to bear its own materials or conduct such loads to a bearing wall. [ 2 ]
Load-bearing walls are one of the earliest forms of construction. [ 3 ] The development of the flying buttress in Gothic architecture allowed structures to maintain an open interior space, transferring more weight to the buttresses instead of to central bearing walls. The Notre Dame Cathedral is an example of a load-bearing wall structure with flying buttresses. [ 4 ]
Depending on the type of building and the number of floors, load-bearing walls are gauged to the appropriate thickness to carry the weight above them. Without doing so, it is possible that an outer wall could become unstable if the load exceeds the strength of the material used, potentially leading to the collapse of the structure. The primary function of this wall is to enclose or divide space of the building to make it more functional and useful. It provides privacy, affords security, and gives protection against heat, cold, sun or rain. [ 5 ]
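To make the sizing idea concrete, here is a minimal sketch (an illustration, not a method from this article) that checks the compressive stress at the base of a masonry wall against an assumed allowable stress; all numerical values and the per-metre-run simplification are assumptions:

```python
# Minimal sketch: compressive stress at the base of a load-bearing wall,
# sigma = load / bearing area, compared with an assumed allowable stress.
# All numerical values are illustrative assumptions, not sourced.
load_kN = 300.0          # vertical load carried per metre run of wall (kN/m)
thickness_m = 0.20       # trial wall thickness
allowable_MPa = 2.0      # assumed allowable compressive stress of the masonry

area_m2 = thickness_m * 1.0                  # bearing area per metre run
sigma_MPa = (load_kN * 1e3) / area_m2 / 1e6  # N/m^2 converted to MPa

print(f"stress = {sigma_MPa:.2f} MPa vs allowable {allowable_MPa:.2f} MPa")
print("OK" if sigma_MPa <= allowable_MPa else "increase thickness or strength")
```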
In housing, load-bearing walls are most common in the light construction method known as " platform framing ", and each load-bearing wall sits on a wall sill plate which is mated to the lowest base plate . The sills are bolted to the masonry or concrete foundation . [ 6 ]
The top plate or ceiling plate is the top of the wall, which sits just below the platform of the next floor (at the ceiling). The base plate or floor plate is the bottom attachment point for the wall studs . Using a top plate and a bottom plate, a wall can be constructed while it lies on its side, allowing for end-nailing of the studs between two plates, and then the finished wall can be tipped up vertically into place atop the wall sill; this not only improves accuracy and shortens construction time, but also produces a stronger wall.
Due to the immense weight of skyscrapers , the base and walls of the lower floors must be extremely strong. Pilings are used to anchor the building to the bedrock underground. For example, the Burj Khalifa , the world's tallest building as well as the world's tallest structure, uses specially treated and mixed reinforced concrete . Over 45,000 cubic metres (59,000 cu yd) of concrete, weighing more than 110,000 t (120,000 short tons) were used to construct the concrete and steel foundation, which features 192 piles, with each pile being 1.5 m diameter × 43 m long (4.9 ft × 141 ft) and buried more than 50 m (160 ft) deep. [ 7 ] | https://en.wikipedia.org/wiki/Load-bearing_wall |
A load cell converts a force such as tension, compression, pressure, or torque into a signal (electrical, pneumatic or hydraulic pressure, or mechanical displacement indicator) that can be measured and standardized. It is a force transducer . As the force applied to the load cell increases, the signal changes proportionally. The most common types of load cells are pneumatic, hydraulic, and strain gauge types for industrial applications. Typical non-electronic bathroom scales are a widespread example of a mechanical displacement indicator where the applied weight (force) is indicated by measuring the deflection of springs supporting the load platform, technically a "load cell".
Strain gauge load cells are the kind most often found in industrial settings. It is ideal as it is highly accurate, versatile, and cost-effective. Structurally, a load cell has a metal body to which strain gauges have been secured. The body is usually made of aluminum, alloy steel, or stainless steel which makes it very sturdy but also minimally elastic. This elasticity gives rise to the term "spring element", referring to the body of the load cell. When force is exerted on the load cell, the spring element is slightly deformed, and unless overloaded, always returns to its original shape. As the spring element deforms, the strain gauges also change shape. The resulting alteration to the resistance in the strain gauges can be measured as voltage. The change in voltage is proportional to the amount of force applied to the cell, so the amount of force can be calculated from the load cell's output.
A strain gauge is constructed of very fine wire, or foil, set up in a grid pattern and attached to a flexible backing. When the shape of the strain gauge is altered, a change in its electrical resistance occurs. The wire or foil in the strain gauge is arranged in a way that, when force is applied in one direction, a linear change in resistance results. Tension force stretches a strain gauge, causing it to get thinner and longer, resulting in an increase in resistance. Compression force does the opposite. The strain gauge compresses, becomes thicker and shorter, and resistance decreases. The strain gauge is attached to a flexible backing enabling it to be easily applied to a load cell, mirroring the minute changes to be measured.
Since the change in resistance measured by a single strain gauge is extremely small, it is difficult to accurately measure changes. Increasing the number of strain gauges applied collectively magnifies these small changes into something more measurable. A set of 4 strain gauges set in a specific circuit is an application of a Wheatstone bridge .
A Wheatstone bridge is a configuration of four balanced resistors with a known excitation voltage applied as shown below:
Excitation voltage $V_{\text{EX}}$ is a known constant and output voltage $V_o$ is variable, depending on the shape of the strain gauges. If all resistors are balanced, meaning $\frac{R_1}{R_2} = \frac{R_4}{R_3}$, then $V_o$ is zero. If the resistance of even one of the resistors changes, then $V_o$ will likewise change. The change in $V_o$ can be measured and interpreted using Ohm's law. Ohm's law states that the current ($I$, measured in amperes) running through a conductor between two points is directly proportional to the voltage $V$ across the two points. Resistance ($R$, measured in ohms) is introduced as the constant in this relationship, independent of the current. Ohm's law is expressed in the equation $I = V/R$.
When applied to the 4 legs of the Wheatstone bridge circuit, the resulting equation is:
$$V_o = \left( \frac{R_3}{R_3+R_4} - \frac{R_2}{R_1+R_2} \right) V_{\text{EX}}$$
In a load cell, the resistors are replaced with strain gauges arranged in alternating tension and compression formation. When force is exerted on the load cell, the structure and resistance of the strain gauges change, and $V_o$ is measured. From the measured $V_o$ and the equation above, the change in resistance, and hence the applied force, can be determined. [ 1 ]
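As a numerical illustration (not part of the original article), the following minimal sketch evaluates the bridge equation above for a full bridge with gauges in alternating tension and compression; the gauge factor, strain level, and excitation voltage are assumed values:

```python
# Minimal sketch: Wheatstone bridge output for a full-bridge load cell,
# V_o = (R3/(R3+R4) - R2/(R1+R2)) * V_EX, with gauges in alternating
# tension/compression. Gauge factor, strain, and V_EX are assumptions.
R0 = 350.0        # ohms, nominal gauge resistance
GF = 2.0          # assumed gauge factor (dR/R = GF * strain)
strain = 500e-6   # assumed strain at the applied load (500 microstrain)
V_EX = 10.0       # volts, excitation

dR = R0 * GF * strain
# Alternating arrangement: R1, R3 in tension (+dR); R2, R4 in compression (-dR)
R1, R3 = R0 + dR, R0 + dR
R2, R4 = R0 - dR, R0 - dR

V_o = (R3 / (R3 + R4) - R2 / (R1 + R2)) * V_EX
print(f"bridge output: {V_o * 1e3:.3f} mV")   # ~10 mV here, i.e. ~1 mV/V
```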
There are several types of strain gauge load cells: [ 2 ]
The digital capacitive technology is based on a non-contacting ceramic sensor mounted inside the load cell body. As the load cell contains no moving parts and the ceramic sensor is not in contact with the load cell body, the load cell tolerates very high overloads (up to 1000%), sideloads, torsion, and stray welding voltages. [ 3 ] This allows for simple installation of the load cells without expensive and complicated mounting kits, stay rods, or overload protection devices, which in turn eliminates the need for maintenance.
Capacitive and strain gauge load cells both rely on an elastic element that is deformed by the load to be measured. The material used for the elastic element is normally aluminum or stainless steel for load cells used in corrosive industrial applications. A strain gauge sensor measures the deformation of the elastic element, and the output of the sensor is converted by an electronic circuit to a signal that represents the load. Capacitive strain gauges measure the deformation of the elastic material using the change in capacitance of two plates as the plates move closer to each other.
Capacitive sensors have a high sensitivity compared to strain gauges. Because of this much higher sensitivity, a much lower deformation of the elastic element is needed, and the elastic element of a capacitive load cell is therefore strained around 5 to 10 times less than the elastic element of a strain gauge load cell. The lightly strained element, combined with the fact that a capacitive sensor is non-contacting, provides the very high shock resistance and overload capability of the capacitive load cell compared to the strain gauge load cell. This is an obvious advantage in industrial environments, and especially for lower-capacity load cells, where the risk of damage from shocks and overloads is high.
In a standard analog strain gauge load cell, the power supply and the low-level analog signal are normally conducted through a rather expensive 6-wire cable to the instrumentation where the analog signal is converted to a digital signal. Instead, digital capacitive load cells transmit the digital signal back to the instrumentation which may be placed several hundred meters away without influencing the reading.
The pneumatic load cell is designed to automatically regulate the balancing pressure. Air pressure is applied to one end of the diaphragm and it escapes through the nozzle placed at the bottom of the load cell. A pressure gauge is attached to the load cell to measure the pressure inside the cell. The deflection of the diaphragm affects the airflow through the nozzle as well as the pressure inside the chamber.
The hydraulic load cell uses a conventional piston and cylinder arrangement with the piston placed in a thin elastic diaphragm. The piston doesn't actually come in contact with the load cell. Mechanical stops are placed to prevent overstrain of the diaphragm when loads exceed a certain limit. The load cell is completely filled with oil. When the load is applied to the piston, the movement of the piston and the diaphragm results in an increase of oil pressure. This pressure is then transmitted to a hydraulic pressure gauge via a high-pressure hose. [ 4 ] The gauge's Bourdon tube senses the pressure and registers it on the dial. Because this sensor has no electrical components, it is ideal for use in hazardous areas. [ 5 ] Typical hydraulic load cell applications include tank, bin, and hopper weighing. [ 6 ] For example, a hydraulic load cell is immune to transient voltages (lightning), so this type of load cell might be more effective in outdoor environments. However, the technology is more expensive than other types of load cells and thus cannot effectively compete on a purchase-cost basis. [ 7 ]
Vibrating wire load cells are useful in geomechanical applications due to their low amounts of drift .
Piezoelectric load cells work on the same principle of deformation as strain gauge load cells, but a voltage output is generated by the piezoelectric material itself, proportional to the deformation of the load cell. They are useful for dynamic/frequent measurements of force. Most applications for piezo-based load cells are in dynamic loading conditions, where strain gauge load cells can fail under high dynamic loading cycles. The piezoelectric effect is dynamic; that is, the electrical output of a gauge is an impulse function and is not static. The voltage output is only useful when the strain is changing, and it does not measure static values.
However, depending on the conditioning system used, "quasi-static" operation is possible. Using a charge amplifier with a long time constant allows accurate measurements lasting many minutes for small loads and up to many hours for large loads. Another advantage of piezoelectric load cells conditioned with a charge amplifier is the wide measuring range that can be achieved. Users can choose a load cell with a range of hundreds of kilonewtons and use it to measure a few newtons of force with the same signal-to-noise ratio; again, this is possible only with the use of a charge amplifier for conditioning.
The bridge is excited with a stabilized voltage (usually 10 V, but it can be 20 V, 5 V, or less for battery-powered instrumentation). The difference voltage, proportional to the load, then appears on the signal outputs. The cell output is rated in millivolts per volt (mV/V) of the difference voltage at full rated mechanical load. So a 2.96 mV/V load cell will provide a 29.6 millivolt signal at full load when excited with 10 volts.
Typical sensitivity values are 1 to 3 mV/V. Typical maximum excitation voltage is around 15 volts.
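As a worked example (not from the source), this minimal sketch converts a measured bridge signal into load using the mV/V rating described above; the cell capacity, sensitivity, and readings are assumed values, and perfect linearity is assumed:

```python
# Minimal sketch: convert a measured bridge signal to load using the
# rated sensitivity (mV/V). Capacity, sensitivity, and readings are
# illustrative assumptions; linearity over the range is assumed.
capacity_kg = 100.0      # rated capacity of the (hypothetical) cell
sensitivity_mV_V = 2.0   # rated output at full load
V_EX = 10.0              # excitation voltage, volts

full_scale_mV = sensitivity_mV_V * V_EX   # 20 mV at 100 kg here

def signal_to_load(signal_mV):
    return capacity_kg * signal_mV / full_scale_mV

for mv in (0.0, 5.0, 10.0, 20.0):
    print(f"{mv:5.1f} mV -> {signal_to_load(mv):6.1f} kg")
```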
The full-bridge cells typically come in a four-wire configuration. The wires to the top and bottom ends of the bridge are the excitation (often labelled E+ and E−, or Ex+ and Ex−), and the wires to its sides are the signal (labelled S+ and S−). Ideally, the voltage difference between S+ and S− is zero under zero load, and it grows proportionally to the load cell's mechanical load.
Sometimes a six-wire configuration is used. The two additional wires are "sense" (Sen+ and Sen−), and are connected to the bridge with the Ex+ and Ex- wires, in a fashion similar to four-terminal sensing . With these additional signals, the controller can compensate for the change in wire resistance due to external factors, e.g. temperature fluctuations.
The individual resistors on the bridge usually have a resistance of 350 Ω . Sometimes other values (typically 120 Ω, 1,000 Ω) can be encountered.
The bridge is typically electrically insulated from the substrate. The sensing elements are in close proximity and in good mutual thermal contact, to avoid differential signals caused by temperature differences.
One or more load cells can be used for sensing a single load.
If the force can be concentrated to a single point (small scale sensing, ropes, tensile loads, point loads), a single cell can be used. For long beams, two cells at the end are used. Vertical cylinders can be measured at three points, rectangular objects usually require four sensors. More sensors are used for large containers or platforms, or very high loads.
If the loads are guaranteed to be symmetrical, some of the load cells can be substituted with pivots. This saves the cost of the load cell but can significantly decrease accuracy.
Load cells can be connected in parallel; in that case, all the corresponding signals are connected together (Ex+ to Ex+, S+ to S+, ...), and the resulting signal is the average of the signals from all the sensing elements. This is often used in e.g. personal scales, or other multipoint weight sensors.
The most common color assignment is red for Ex+, black for Ex−, green for S+, and white for S−.
Less common assignments are red for Ex+, white for Ex−, green for S+, and blue for S−, or red for Ex+, blue for Ex−, green for S+, and yellow for S−. [ 9 ] Other values are also possible, e.g. red for Ex+, green for Ex−, yellow for S+ and blue for S−. [ 10 ]
Every load cell is subject to "ringing" when subjected to abrupt load changes. This stems from the spring-like behavior of load cells. In order to measure the loads, they have to deform. As such, a load cell of finite stiffness must have spring-like behavior, exhibiting vibrations at its natural frequency . An oscillating data pattern can be the result of ringing. Ringing can be suppressed in a limited fashion by passive means. Alternatively, a control system can use an actuator to actively damp out the ringing of a load cell. This method offers better performance at a cost of significant increase in complexity.
Load cells are used in several types of measuring instruments such as laboratory balances, industrial scales, platform scales [ 11 ] and universal testing machines . [ 12 ] From 1993 the British Antarctic Survey installed load cells in glass fibre nests to weigh albatross chicks. [ 13 ] Load cells are used in a wide variety of items such as the seven-post shaker which is often used to set up race cars. Another common use is within sim racing , where the advantage of a load cell over a potentiometer is that simulated braking force is based on the user’s force on the pedal, rather than the position of the pedal.
Load cells are commonly used to measure weight in an industrial environment. They can be installed on hoppers, reactors, etc., to control their weight capacity, which is often of critical importance for an industrial process. Some performance characteristics of the load cells must be defined and specified to make sure they will cope with the expected service. Among those design characteristics are:
The electrical, physical, and environmental specifications of a load cell help to determine which applications it is appropriate for. Common specifications include: [ 14 ]
Load cells are an integral part of most weighing systems in the industrial, aerospace and automotive industries, enduring rigorous daily use. Over time, load cells drift, age and misalign; therefore, they need to be calibrated regularly to ensure that accurate results are maintained. [ 15 ] ISO 9000 and most other standards specify a maximum period of around 18 months to 2 years between re-calibration procedures, depending on the level of load cell deterioration. Annual re-calibration is considered best practice by many load cell users for ensuring the most accurate measurements.
Standard calibration tests use linearity and repeatability as the calibration guideline, as these are both used to determine accuracy. Calibration is conducted incrementally, working in either ascending or descending order. For example, in the case of a 60 tonne load cell, test weights of 5, 10, 20, 40 and 60 tonnes may be used; a five-step calibration process is usually sufficient for ensuring a device is accurately calibrated. Repeating this five-step calibration procedure two to three times is recommended for consistent results. [ 16 ] | https://en.wikipedia.org/wiki/Load_cell |
In graphical analysis of nonlinear electronic circuits , a load line is a line drawn on the current–voltage characteristic graph for a nonlinear device like a diode or transistor . It represents the constraint put on the voltage and current in the nonlinear device by the external circuit. The load line, usually a straight line, represents the response of the linear part of the circuit, connected to the nonlinear device in question. The points where the characteristic curve and the load line intersect are the possible operating point (s) ( Q points ) of the circuit; at these points the current and voltage parameters of both parts of the circuit match. [ 1 ]
The example at right shows how a load line is used to determine the current and voltage in a simple diode circuit. The diode, a nonlinear device, is in series with a linear circuit consisting of a resistor , R, and a voltage source, V DD . The characteristic curve (curved line) , representing the current I through the diode for any given voltage across the diode V D , is an exponential curve. The load line (diagonal line) , representing the relationship between current and voltage due to Kirchhoff's voltage law applied to the resistor and voltage source, is $I = (V_\mathrm{DD} - V_\mathrm{D})/R$ .
Since the same current flows through each of the three elements in series, and the voltage remaining from the source after the drop across the resistor appears across the terminals of the diode, the operating point of the circuit will be at the intersection of the characteristic curve with the load line.
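The intersection can be found numerically by solving the diode's exponential characteristic against the load line. The following is a minimal sketch; the supply voltage, resistance, saturation current, and ideality factor are assumed illustrative values.

```python
# Illustrative sketch: find the operating point of a diode in series with a
# resistor and voltage source, i.e. the intersection of the Shockley diode
# curve with the load line I = (VDD - VD) / R. Component values are assumed.
import math

VDD, R = 5.0, 1000.0                 # supply (V) and series resistance (ohm)
I_S, n, V_T = 1e-12, 1.0, 0.02585    # saturation current, ideality, thermal voltage

def mismatch(vd):
    """Diode current minus load-line current; zero at the operating point."""
    return I_S * (math.exp(vd / (n * V_T)) - 1.0) - (VDD - vd) / R

lo, hi = 0.0, VDD                    # the operating point lies between these
for _ in range(100):                 # bisection: mismatch increases with vd
    mid = 0.5 * (lo + hi)
    if mismatch(mid) > 0.0:
        hi = mid
    else:
        lo = mid

vd = 0.5 * (lo + hi)
print(f"V_D = {vd:.3f} V, I = {(VDD - vd) / R * 1000:.3f} mA")
```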
In a circuit with a three terminal device, such as a transistor , the current–voltage curve of the collector-emitter current depends on the base current. This is depicted on graphs by a series of (I C –V CE ) curves at different base currents. A load line drawn on this graph shows how the base current will affect the operating point of the circuit.
The load line diagram at right is for a resistive load in a common emitter circuit. The load line shows how the collector load resistor (R L ) constrains the circuit voltage and current. The diagram also plots the transistor's collector current I C versus collector voltage V CE for different values of base current I base . The intersections of the load line with the transistor characteristic curves represent the circuit-constrained values of I C and V CE at different base currents. [ 2 ]
If the transistor could pass all the current available, with no voltage dropped across it, the collector current would be the supply voltage V CC over R L . This is the point where the load line crosses the vertical axis. Even at saturation, however, there will always be some voltage from collector to emitter.
Where the load line crosses the horizontal axis, the transistor current is minimum (approximately zero). The transistor is said to be cut off, passing only a very small leakage current, and so very nearly the entire supply voltage appears as V CE .
The operating point of the circuit in this configuration (labelled Q) is generally designed to be in the active region , approximately in the middle of the load line's active region for amplifier applications. Adjusting the base current so that the circuit is at this operating point with no signal applied is called biasing the transistor . Several techniques are used to stabilize the operating point against minor changes in temperature or transistor operating characteristics. When a signal is applied, the base current varies, and the collector-emitter voltage in turn varies, following the load line; the result is an amplifier stage with gain.
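As an illustration of placing the quiescent point near the middle of the load line, the sketch below computes the load-line endpoints and a mid-line Q point for a common-emitter stage; the supply voltage, load resistor, and current gain are assumed values, not a worked example from the article.

```python
# Illustrative sketch: endpoints of the DC load line for a common-emitter
# stage, and a Q point chosen mid-line for symmetric signal swing.
VCC = 12.0      # supply voltage (V), assumed
RL = 2.2e3      # collector load resistor (ohm), assumed
beta = 100.0    # transistor current gain, assumed

I_sat = VCC / RL          # load line meets the vertical axis (VCE near 0)
V_cutoff = VCC            # load line meets the horizontal axis (IC near 0)

VCE_q = VCC / 2.0         # mid-line quiescent collector-emitter voltage
IC_q = (VCC - VCE_q) / RL # quiescent collector current on the load line
IB_q = IC_q / beta        # base bias current needed to sit at Q

print(f"saturation end: IC = {I_sat * 1e3:.2f} mA at VCE = 0 V")
print(f"cutoff end:     VCE = {V_cutoff:.1f} V at IC = 0")
print(f"Q point: VCE = {VCE_q:.1f} V, IC = {IC_q * 1e3:.2f} mA, IB = {IB_q * 1e6:.1f} uA")
```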
A load line is normally drawn on the I C –V CE characteristic curves for the transistor used in an amplifier circuit. The same technique is applied to other types of non-linear elements such as vacuum tubes or field effect transistors .
Semiconductor circuits typically have both DC and AC currents in them, with a source of DC current to bias the nonlinear semiconductor to the correct operating point, and the AC signal superimposed on the DC. Load lines can be used separately for both DC and AC analysis. The DC load line is the load line of the DC equivalent circuit , defined by reducing the reactive components to zero (replacing capacitors by open circuits and inductors by short circuits). It is used to determine the correct DC operating point, often called the Q point .
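As a numerical illustration of how the DC load line differs from the limiting AC load line discussed next, the sketch below compares the two slopes for a common-emitter stage with a bypassed emitter resistor and a capacitively coupled external load; all component values are assumed.

```python
# Illustrative sketch: slopes of the DC load line and the limiting AC load
# line for a common-emitter stage. At DC the capacitors are open circuits;
# at "infinite frequency" the bypass and coupling capacitors are shorts.
def parallel(a, b):
    return a * b / (a + b)

RC, RE, Rload = 2.2e3, 470.0, 10e3   # collector, emitter, external load (ohm)

dc_resistance = RC + RE              # seen by the collector circuit at DC
ac_resistance = parallel(RC, Rload)  # seen by the signal at high frequency

print(f"DC load line slope: {-1.0 / dc_resistance * 1e3:.3f} mA/V")
print(f"AC load line slope: {-1.0 / ac_resistance * 1e3:.3f} mA/V")
```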
Once a DC operating point is defined by the DC load line, an AC load line can be drawn through the Q point. The AC load line is a straight line with a slope equal to the AC impedance facing the nonlinear device, which is in general different from the DC resistance. The ratio of AC voltage to current in the device is defined by this line. Because the impedance of the reactive components varies with frequency, the slope of the AC load line depends on the frequency of the applied signal. So there are many AC load lines, varying from the DC load line (at low frequency) to a limiting AC load line, all having a common intersection at the DC operating point. This limiting load line, generally referred to as the AC load line , is the load line of the circuit at "infinite frequency", and can be found by replacing capacitors with short circuits and inductors with open circuits. | https://en.wikipedia.org/wiki/Load_line_(electronics) |
Load path analysis is a technique of mechanical and structural engineering used to determine the path of maximum stress in a non-uniform load -bearing member in response to an applied load. Load path analysis can be used to minimize the material needed in the load-bearing member to support the design load.
Load path analysis may be performed using the concept of a load transfer index, U*. [ 1 ] [ 2 ] [ 3 ] [ 4 ] In a structure, the main portion of the load is transferred through the stiffest route. The U* index represents the internal stiffness of every point within the structure. Consequently, the line connecting the highest U* values is the main load path; in other words, the main load path is the ridge line of the U* distribution (contour). [ 1 ] The U* index theory has been validated through two different physical experiments. [ 3 ]
Since the U* index predicts load paths based on structural stiffness, it is not affected by stress concentration problems. Load transfer analysis using the U* index is a new design paradigm for vehicle structural design. [ 4 ] It has been applied in design analysis and optimization by automotive manufacturers such as Honda and Nissan .
In the image to the right, a structural member with a central hole is placed under a bearing load. Figure (a) shows the U* distribution and the resulting load paths, while figure (b) shows the von Mises stress distribution . As can be seen from figure (b), higher stresses occur in the vicinity of the hole. However, it would be unreasonable to conclude that the main load passes through the area of stress concentration, because the hole (which contains no material) is unimportant for carrying the load. Stress concentrations caused by structural singularities such as a hole or a notch make load transfer analysis more difficult. | https://en.wikipedia.org/wiki/Load_path_analysis |
In electrical engineering , a load profile is a graph of the variation in the electrical load versus time. A load profile will vary according to customer type (typical examples include residential, commercial and industrial), temperature and holiday seasons. Power producers use this information to plan how much electricity they will need to make available at any given time. Teletraffic engineering uses a similar load curve.
In a power system, a load curve or load profile is a chart illustrating the variation in demand/electrical load over a specific time. Generation companies use this information to plan how much power they will need to generate at any given time. A load duration curve is similar to a load curve. The information is the same but is presented in a different form. These curves are useful in the selection of generator units for supplying electricity.
In an electricity distribution grid , the load profile of electricity usage is important to the efficiency and reliability of power transmission. Power transformers and battery-to-grid systems are critical aspects of power distribution, and the sizing and modelling of batteries or transformers depends on the load profile. [ 1 ] The factory specification of transformers for the optimization of load losses versus no-load losses depends directly on the characteristics of the load profile that the transformer is expected to be subjected to. [ 2 ] This includes such characteristics as average load factor , diversity factor , utilization factor , and demand factor , which can all be calculated from a given load profile.
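These characteristics can be computed directly from a sampled profile. Below is a minimal sketch using a hypothetical 24-hour profile; the connected-load, rated-capacity, and per-customer peak figures are likewise assumed for illustration.

```python
# Illustrative sketch: load factor, demand factor, utilization factor, and
# diversity factor from a hypothetical hourly load profile in kW.
profile = [32, 30, 29, 28, 28, 31, 40, 55, 62, 60, 58, 57,
           56, 55, 57, 60, 66, 72, 70, 64, 55, 47, 40, 35]

peak_demand = max(profile)
average_load = sum(profile) / len(profile)

connected_load = 120.0                    # total connected load (kW), assumed
rated_capacity = 100.0                    # transformer rating (kW), assumed
individual_peaks = [20, 25, 18, 30, 15]   # per-customer peaks (kW), assumed

load_factor = average_load / peak_demand           # how evenly load is spread
demand_factor = peak_demand / connected_load       # peak vs. connected load
utilization_factor = peak_demand / rated_capacity  # peak vs. system rating
diversity_factor = sum(individual_peaks) / peak_demand  # non-coincidence >= 1

print(f"load factor:        {load_factor:.2f}")
print(f"demand factor:      {demand_factor:.2f}")
print(f"utilization factor: {utilization_factor:.2f}")
print(f"diversity factor:   {diversity_factor:.2f}")
```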
In the power market, so-called EFA blocks are used to specify traded forward contracts for the delivery of a certain amount of electrical energy during a certain period.
In retail energy markets, supplier obligations are settled on an hourly or subhourly basis. For most customers, consumption is measured on a monthly basis, based on meter reading schedules. Load profiles are used to convert the monthly consumption data into estimates of hourly or subhourly consumption in order to determine the supplier obligation. For each hour, these estimates are aggregated for all customers of an energy supplier, and the aggregate amount is used in market settlement calculations as the total demand that must be covered by the supplier.
Load profiles can be determined by direct metering but on smaller devices such as distribution network transformers this is not routinely done. Instead a load profile can be inferred from customer billing or other data. An example of a practical calculation used by utilities is using a transformer's maximum demand reading and taking into account the known number of each customer type supplied by these transformers. This process is called load research .
Actual demand can be collected at strategic locations to perform more detailed load analysis; this is beneficial to both distribution and end-user customers looking for peak consumption. Smart grid meters , utility meter load profilers, data logging sub-meters and portable data loggers are designed to accomplish this task by recording readings at a set interval. | https://en.wikipedia.org/wiki/Load_profile |
Load-pull is the colloquial term applied to the process of systematically varying the impedance presented to a device under test (DUT), most often a transistor , to assess its performance and the associated conditions to deliver that performance in a network. [ 1 ] While load-pull itself implies impedance variation at the load port, impedance can also be varied at any of the ports of the DUT, most often at the source.
Load-pull is required when superposition is no longer applicable, which occurs under large-signal operating conditions that make linear approximations unusable. The term load-pull derives from classical oscillator characterization whereupon variation of the load impedance pulls the oscillation center frequency away from nominal. Source-pull is also used for noise characterization, which, although linear, requires multiple impedances to be presented at the source to enable simultaneous solution of an over-determined system that yields the four noise parameters.
Load-pull is the most common method globally for RF and microwave power amplifier (PA) design, transistor characterization, semiconductor process development , and ruggedness analysis. A central theme of load-pull is management of nonlinearity rather than analysis of nonlinearity, the latter being the domain of advanced mathematics that often yields little physical insight into nonlinear phenomena and suffers from an inability to accurately render actual behavior embedded in a network with significant parasitic and distributed effects. With automated load-pull, it is possible to fully optimize and design a final stage for GSM applications in less than a day, providing a dramatic reduction in design cycle-time while assuring that the best possible performance trade-off has been achieved.
While there are in theory no physical limits on the frequency of which load-pull can be performed, most load-pull systems are based on passive distributed networks using either the slab transmission line in its TEM mode or the rectangular waveguide in its TE01 mode. Lumped tuners can be made for HF and VHF frequencies, whereas active load-pull is ideal for on-wafer mm-wave environments, where substantial loss between the tuner and DUT reference-plane limits maximum VSWR.
| https://en.wikipedia.org/wiki/Load_pull |
Load rejection in an electric power system is the condition in which there is a sudden loss of load in the system, causing the generating equipment to become over-frequency.
A load rejection test is part of commissioning for power systems to confirm that the system can withstand a sudden loss of load and return to normal operating conditions using its governor . [ 1 ] Load banks are normally used for these tests.
| https://en.wikipedia.org/wiki/Load_rejection |
A loader is a heavy equipment machine used in construction to move or load materials such as soil , rock , sand , demolition debris, etc. into or onto another type of machinery (such as a dump truck , conveyor belt , feed-hopper , or railroad car ).
There are many types of loader, which, depending on design and application, are variously called a bucket loader , end loader , front loader , front-end loader , payloader , high lift , scoop , shovel dozer , skid-steer , skip loader , tractor loader or wheel loader .
A loader is a type of tractor , usually wheeled, sometimes on tracks , that has a front-mounted wide bucket connected to the end of two booms (arms) to scoop up loose material from the ground, such as dirt, sand or gravel, and move it from one place to another without pushing the material across the ground. A loader is commonly used to move a stockpiled material from ground level and deposit it into an awaiting dump truck or into an open trench excavation.
The loader assembly may be a removable attachment or permanently mounted. Often the bucket can be replaced with other devices or tools—for example, many can mount forks to lift heavy pallets or shipping containers , and a hydraulically opening "clamshell" bucket allows a loader to act as a light dozer or scraper. The bucket can also be augmented with devices like a bale grappler for handling large bales of hay or straw .
Large loaders, such as the Kawasaki 95ZV-2, John Deere 844K, ACR 700K Compact Wheel Loader, Caterpillar 950H, Volvo L120E, Case 921E, or Hitachi ZW310 usually have only a front bucket and are called front loaders, whereas small loader tractors are often also equipped with a small backhoe and are called backhoe loaders or loader backhoes or JCBs , after the company that first claimed to have invented them. Other companies like CASE in America and Whitlock in the UK had been manufacturing excavator loaders well before JCB.
The largest loader in the world is the LeTourneau L-2350 . These large loaders are currently produced at the Longview, Texas facility. The L-2350 uses a diesel-electric propulsion system similar to that used in a locomotive. [ 1 ] Each rubber-tired wheel is driven by its own independent electric motor.
Loaders are used mainly for loading materials into trucks , laying pipe, clearing rubble, and digging. A loader is not the most efficient machine for digging as it cannot dig very deep below the level of its wheels, as a backhoe or an excavator can. The capacity of a loader bucket can be anywhere from 0.5 to 36 m 3 [ 2 ] depending upon the size of the machine and its application. A front loader's bucket capacity is generally much larger than the bucket capacity of a backhoe loader .
Unlike most bulldozers , most loaders are wheeled rather than tracked , although track loaders are common. Tracks are successful where sharp-edged materials in construction debris would damage rubber wheels, or where the ground is soft and muddy. Wheels provide better mobility and speed and do not damage paved roads as much as tracks, but provide less traction.
In construction areas loaders are also used to transport building materials such as bricks, pipe, metal bars, and digging tools over short distances.
Front-loaders are commonly used to remove snow especially from sidewalks, parking lots, and other areas too small for using snowplows and other heavy equipment. They are sometimes used as snowplows with a snowplow attachment but commonly have a bucket or snow basket, which can also be used to load snow into the rear compartment of a snowplow or dump truck.
High-tip buckets are suitable for light materials such as chip, peat and light gravel and when the bucket is emptied from a height.
Unlike backhoes or standard tractors fitted with a front bucket, many large loaders do not use automotive steering mechanisms. Instead, they steer by a hydraulically actuated pivot point set exactly between the front and rear axles . This is referred to as "articulated steering" and allows the front axle to be solid, allowing it to carry greater weight. Articulated steering provides better maneuverability for a given wheelbase. Since the front wheels and attachment rotate on the same axis, the operator is able to "steer" his load in an arc after positioning the machine, which can be useful. The tradeoff is that when the machine is "twisted" to one side and a heavy load is lifted high, it has a greater risk of turning over to the "wide" side.
Front loaders gained popularity during the last two decades, especially in urban engineering projects and small earthmoving works. Heavy equipment manufacturers offer a wide range of loader sizes and duties.
The term "loader" is also used in the debris removal field to describe the boom on a grapple truck .
The major components included in a loader are the engine (diesel in almost all cases), the hydraulic components (such as pumps, motors and valves) and the transmission components (gearbox, axles, wheels/tracks, pumps, motors, etc.). The engine runs both the hydraulics and the transmission, and these in turn move the front attachment (a bucket, forks, sweeper, etc.) to manipulate the material being handled, and the wheels or tracks to move the machine around the jobsite.
The first wheel loader was invented by Frank G. Hough in 1939; it was called the Payloader. [ 3 ] This machine consisted of a vertical mast affixed to the front of a tractor, with a pair of loader arms running from the back of the machine and ending in a forward bucket; the main lifting mechanism was driven by a cable tensioned by a vertically lifting hydraulic cylinder located inside the mast. [ 4 ] Today wheel loaders are articulated, a design introduced in 1953 with Mixermobile's Scoopmobile series of wheel loaders. [ 5 ] This articulation gives them both a superior turning radius and the ability to move the bucket in a small horizontal arc without having to move forward as a conventionally steering chassis would.
The Israeli Combat Engineering Corps uses armored Caterpillar 966 wheel loaders for construction and combat engineering missions in militarily occupied territories such as the West Bank . They are often seen building or removing road blocks and building bases and fortifications . Since 2005, they have also been used to demolish small houses. The Israel Defense Forces added armor plating to the loader to protect it against rocks, stones, molotov cocktails , and light gunfire.
Rio de Janeiro 's police elite squad Batalhão de Operações Policiais Especiais (BOPE) has acquired one wheel loader designed for military use to open routes and make way for the police in Rio de Janeiro's slums, which are controlled, and blocked, by drug dealers. [ 6 ]
Several if not most countries have similar equipment. The Dutch armed forces, for instance, use models like the Werklust WG18Edef, which weighs 15 tons, 2 more than the corresponding unarmored civilian model. In addition, the Dutch military previously used extra armor modules covering most of the window surface with steel for extra protection. These were, however, not popular with the crews due to the low visibility they caused.
The Turkish Army and Turkish Police used the remote-controlled armored wheel loader Tosun during the building of the Syria–Turkey barrier and during Operation Euphrates Shield , Operation Idlib Shield and Operation Olive Branch . [ 7 ]
People's Armed Police Transportation Detachments operate wheel loaders for search and rescue. [ 8 ]
These loaders are a popular addition to tractors from 40 to 150 kW (50 to 200 hp). The current 'drive-in' form was originally designed and developed in 1958 by a Swedish company named Ålö when they launched their Quicke loader. [ 9 ] Tractor loaders were developed to perform a multitude of farming tasks, and are popular due to their relatively low cost (compared to a telehandler ) and high versatility. Tractor loaders can be fitted with many attachments such as hydraulic grabs and spikes to assist with bale and silage handling, forks for pallet work, and buckets for more general farm activities. Industrial tractor loaders equipped with box graders are marketed to contractors as skip loaders . [ 10 ]
Abram Dietrich Thiessen of Eyebrow, Saskatchewan, built the first quick-attach front end loader in the 1940s.
Front-end loaders (FELs) are popular additions to compact utility tractors and farm tractors. Compact utility tractors, also called CUTs, are small tractors, typically with 10 to 40 kW (18 to 50 hp), used primarily for grounds maintenance and landscape chores. [ citation needed ] There are two primary designs of compact tractor FELs: the traditional dogleg style and the curved-arm style.
John Deere manufactures a semi-curved loader design that does not feature the one-piece curved arm, but is also not of the traditional two-piece design. New Holland introduced a compact loader with a one-piece curved arm on its compact utility tractors; similar one-piece curved-arm loaders are now available on compact tractors of many brands, including Case IH/Farmall , and some Montana and Kioti tractors. Kubota markets traditional loader designs on most of its compact tractors but now offers a semi-curved loader design, similar to the John Deere design, on several of its small tractors.
While the front-end loaders on CUT size tractors are capable of many tasks, given their relatively small size and low capacities when compared to commercial loaders, the compact loaders can be made more useful with some simple options. A toothbar is commonly added to the front edge of a loader bucket to aid with digging. Some loaders are equipped with a quick coupler , otherwise known as a quick attach (QA) system. The QA system allows the bucket to be removed easily and other tools to be added in its place. Common additions include a set of pallet forks for lifting pallets of goods or a bale spear for lifting hay bales.
An LHD (load-haul-dump machine) is also a front end loader, but one designed for the confined conditions of mines; it can handle a wide range of loads with buckets of varying size, and can be driven by electric motors as well as diesel engines. [ 11 ]
A skid loader is a small loader utilizing four wheels with hydraulic drive that directs power to either, or both, sides of the vehicle. Very similar in appearance and design is the track loader, which utilizes a continuous track on either side of the vehicle instead of the wheels. Since the expiration of Bobcat's patent on its quick-connect system, newer tractor models are standardizing that popular format for front end attachments. [ citation needed ]
A swingloader is a rigid frame loader with a swinging boom. The Swingloader was invented in 1953 by German manufacturer Ahlmann with the AR1 model. The boom can swing 180 degrees or more. The loader is able to lift on all sides and dump off on all sides. Swingloaders are often used by the railroad industry to lay rail. Like other loaders many attachments can be attached to the boom such as magnets, forks, and buckets. Smaller swingloaders can be used in farming applications for loading out. A swinging boom is advantageous where space is limited as stability, mobility and space management are greatly increased over their articulated counterparts.
Notable loader manufacturers are based in China, France, Germany, India, Iran, Italy, the United States, the Netherlands, Japan, Korea, Serbia, Sweden, Switzerland, Turkey, and the United Kingdom. | https://en.wikipedia.org/wiki/Loader_(equipment) |
A loading coil or load coil is an inductor that is inserted into an electronic circuit to increase its inductance . The term originated in the 19th century for inductors used to prevent signal distortion in long-distance telegraph transmission cables. The term is also used for inductors in radio antennas , or between the antenna and its feedline , to make an electrically short antenna resonant at its operating frequency.
The concept of loading coils was discovered by Oliver Heaviside in studying the problem of slow signalling speed of the first transatlantic telegraph cable in the 1860s. He concluded additional inductance was required to prevent amplitude and time delay distortion of the transmitted signal. The mathematical condition for distortion-free transmission is known as the Heaviside condition . Previous telegraph lines were overland or shorter and hence had less delay, and the need for extra inductance was not as great. Submarine communications cables are particularly subject to the problem, but early 20th century installations using balanced pairs were often continuously loaded with iron wire or tape rather than discretely with loading coils, which avoided the sealing problem.
Loading coils are historically also known as Pupin coils after Mihajlo Pupin , especially when used for the Heaviside condition and the process of inserting them is sometimes called pupinization .
A common application of loading coils is to improve the voice-frequency amplitude response characteristics of the twisted balanced pairs in a telephone cable. Because twisted pair is a balanced format, half the loading coil must be inserted in each leg of the pair to maintain the balance. It is common for both these windings to be formed on the same core. This increases the flux linkages, without which the number of turns on the coil would need to be increased. Despite the use of common cores, such loading coils are not transformers, as they do not provide coupling to other circuits.
Loading coils inserted periodically in series with a pair of wires reduce the attenuation at the higher voice frequencies up to the cutoff frequency of the low-pass filter formed by the inductance of the coils (plus the distributed inductance of the wires) and the distributed capacitance between the wires. Above the cutoff frequency, attenuation increases rapidly. The shorter the distance between the coils, the higher the cut-off frequency. The cutoff effect is an artifact of using lumped inductors. With loading methods using continuous distributed inductance there is no cutoff.
Without loading coils, the line response is dominated by the resistance and capacitance of the line with the attenuation gently increasing with frequency. With loading coils of exactly the right inductance, neither capacitance nor inductance dominate: the response is flat, waveforms are undistorted and the characteristic impedance is resistive up to the cutoff frequency. The coincidental formation of an audio frequency filter is also beneficial in that noise is reduced.
With loading coils, signal attenuation of a circuit remains low for signals within the passband of the transmission line but increases rapidly for frequencies above the audio cutoff frequency. If the telephone line is subsequently reused to support applications that require higher frequencies, such as analog or digital carrier systems or digital subscriber line (DSL), loading coils must be removed or replaced. Using coils with parallel capacitors forms a filter with the topology of an m-derived filter , which also passes a band of frequencies above the cut-off. Unless the coils are removed, subscribers at an extended distance, e.g., over 4 miles (6.4 km) from the central office, cannot be served by DSL.
American early and middle 20th century telephone cables had load coils at intervals of a mile (1.61 km), usually in coil cases holding many. The coils had to be removed to pass higher frequencies, but the coil cases provided convenient places for repeaters of digital T-carrier systems, which could then transmit a 1.5 Mbit/s signal that distance. Due to narrower streets and higher cost of copper, European cables had thinner wires and used closer spacing. Intervals of a kilometer allowed European systems to carry 2 Mbit/s.
Another type of loading coil is used in radio antennas . Monopole and dipole radio antennas are designed to act as resonators for radio waves; the power from the transmitter, applied to the antenna through the antenna's transmission line , excites standing waves of voltage and current in the antenna element. To be "naturally" resonant, the antenna must have a physical length of one quarter of the wavelength of the radio waves used (or a multiple of that length, with odd multiples usually preferred). At resonance, the antenna acts electrically as a pure resistance , absorbing all the power applied to it from the transmitter.
In many cases, for practical reasons, it is necessary to make the antenna shorter than the resonant length; this is called an electrically short antenna. An antenna shorter than a quarter wavelength presents capacitive reactance to the transmission line. [ 1 ] Some of the applied power is reflected back into the transmission line and travels back toward the transmitter [ citation needed ] . The two currents at the same frequency running in opposite directions cause standing waves on the transmission line [ citation needed ] , measured as a standing wave ratio (SWR) greater than one. The elevated currents waste energy by heating the wire, and can even overheat the transmitter.
To make an electrically short antenna resonant, a loading coil is inserted in series with the antenna. The coil is built to have an inductive reactance equal and opposite to the capacitive reactance of the short antenna, so the combination of reactances cancels. When so loaded the antenna presents a pure resistance to the transmission line, preventing energy from being reflected. The loading coil is often placed at the base of the antenna, between it and the transmission line ( base loading ), but for more efficient radiation, it is sometimes inserted near the midpoint of the antenna element ( center loading ). [ citation needed ]
Loading coils for powerful transmitters can have challenging design requirements, especially at low frequencies. The radiation resistance of short antennas can be very low, as low as a few ohms in the LF or VLF bands, where antennas are commonly short and inductive loading is most needed. Because the resistance in the coil winding is comparable to, or exceeds, the radiation resistance, loading coils for extremely electrically short antennas must have extremely low AC resistance at the operating frequency. To reduce skin effect losses, the coil is often made of tubing or Litz wire , with single-layer windings whose turns are spaced apart to reduce proximity effect resistance. The coils must often handle high voltages. To reduce power lost in dielectric losses , the coil is often suspended in air supported on thin ceramic strips. The capacitively loaded antennas used at low frequencies have extremely narrow bandwidths, so if the frequency is changed the loading coil must be adjusted to tune the antenna to resonance with the new transmitter frequency. Variometers are often used for this.
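The required loading inductance follows from cancelling the antenna's capacitive reactance at the operating frequency. The sketch below uses an assumed antenna reactance and frequency; it is illustrative only, not a design procedure.

```python
# Illustrative sketch: size a base loading coil so its inductive reactance
# cancels the capacitive reactance of an electrically short antenna.
import math

f = 1.8e6     # operating frequency (Hz), assumed
X_C = 600.0   # capacitive reactance of the short antenna (ohm), assumed

# At resonance X_L = 2*pi*f*L must equal X_C, so:
L = X_C / (2 * math.pi * f)
print(f"required loading inductance: {L * 1e6:.1f} uH")  # about 53 uH here
```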
To reduce losses due to high capacitance on long-distance bulk power transmission lines , inductance can be introduced to the circuit with a flexible AC transmission system (FACTS), a static VAR compensator , or a static synchronous series compensator . Series compensation acts like an inductor connected in series with the circuit when it is supplying inductance to the circuit.
The Campbell equation is a relationship due to George Ashley Campbell for predicting the propagation constant of a loaded line. It is stated as [ 2 ]

$\cosh(\gamma' d) = \cosh(\gamma d) + \frac{Z_\mathrm{L}}{2 Z_0} \sinh(\gamma d)$

where γ is the propagation constant of the unloaded line, γ′ is the propagation constant of the loaded line, d is the spacing between loading coils, Z L is the impedance of a loading coil, and Z 0 is the characteristic impedance of the unloaded line.
A more engineer-friendly rule of thumb is that the approximate requirement for spacing loading coils is ten coils per wavelength of the maximum frequency being transmitted. [ 3 ] This approximation can be arrived at by treating the loaded line as a constant k filter and applying image filter theory to it. From basic image filter theory, the angular cutoff frequency and the characteristic impedance of a low-pass constant k filter are given by

$\omega_c = \frac{1}{\sqrt{L_{1/2} C_{1/2}}} \quad \text{and} \quad Z_0 = \sqrt{\frac{L_{1/2}}{C_{1/2}}}$

where $L_{1/2}$ and $C_{1/2}$ are the half section element values.
From these basic equations the necessary loading coil inductance L and coil spacing d can be found:

$L = \frac{2 Z_0}{\omega_c} \quad \text{and} \quad d = \frac{2}{\omega_c Z_0 C}$

where C is the capacitance per unit length of the line.
Expressing this in terms of the number of coils n per cutoff wavelength yields

$n = \frac{\lambda_c}{d} = \frac{v}{f_c d}$

where v is the velocity of propagation of the cable in question. Since $v = \frac{1}{Z_0 C}$ , this reduces to

$n = \frac{\omega_c}{2 f_c} = \pi$

coils per cutoff wavelength. Keeping the maximum transmitted frequency comfortably below the cutoff frequency then leads to the rule of thumb of roughly ten coils per wavelength of the maximum frequency.
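These relations are straightforward to evaluate numerically. The sketch below sizes loading coils for a hypothetical cable; the characteristic impedance, capacitance per kilometre, and target cutoff frequency are assumed values.

```python
# Illustrative sketch: loading coil inductance and spacing for a lumped-loaded
# line treated as a constant-k low-pass filter. All cable figures are assumed.
import math

Z0 = 600.0    # characteristic impedance (ohm), assumed
C = 50e-9     # cable capacitance per unit length (F/km), assumed
f_c = 4000.0  # desired cutoff frequency (Hz), assumed
w_c = 2 * math.pi * f_c

L_coil = 2 * Z0 / w_c          # full coil inductance (H)
spacing = 2 / (w_c * Z0 * C)   # coil spacing, same length unit as C (km)

v = 1 / (Z0 * C)               # propagation velocity (km/s)
coils_per_cutoff_wavelength = (v / f_c) / spacing   # evaluates to pi

print(f"coil inductance: {L_coil * 1e3:.1f} mH")
print(f"coil spacing:    {spacing:.2f} km")
print(f"coils per cutoff wavelength: {coils_per_cutoff_wavelength:.3f}")
```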
Campbell arrived at this expression by analogy with a mechanical line periodically loaded with weights, described by Charles Godfrey in 1898, who obtained a similar result. Mechanical loaded lines of this sort were first studied by Joseph-Louis Lagrange (1736–1813). [ 4 ]
The phenomenon of cutoff whereby frequencies above the cutoff frequency are not transmitted is an undesirable side effect of loading coils (although it proved highly useful in the development of filters ). Cutoff is avoided by the use of continuous loading since it arises from the lumped nature of the loading coils. [ 5 ]
The origin of the loading coil can be found in the work of Oliver Heaviside on the theory of transmission lines . Heaviside (1881) represented the line as a network of infinitesimally small circuit elements. By applying his operational calculus to the analysis of this network he discovered (1887) what has become known as the Heaviside condition . [ 6 ] [ 7 ] This is the condition that must be fulfilled in order for a transmission line to be free from distortion . The Heaviside condition is that the series impedance , Z, must be proportional to the shunt admittance , Y, at all frequencies. In terms of the primary line coefficients the condition is

$\frac{G}{C} = \frac{R}{L}$

where R is the series resistance per unit length of the line, L is the series inductance per unit length, G is the shunt leakage conductance per unit length, and C is the shunt capacitance per unit length.
Heaviside was aware that this condition was not met in the practical telegraph cables in use in his day. In general, a real cable would have

$\frac{G}{C} \ll \frac{R}{L}$
This is mainly due to the low value of leakage through the cable insulator, which is even more pronounced in modern cables which have better insulators than in Heaviside's day. In order to meet the condition, the choices are therefore to try to increase G or L or to decrease R or C. Decreasing R requires larger conductors. Copper was already in use in telegraph cables and this is the very best conductor available short of using silver. Decreasing R means using more copper and a more expensive cable. Decreasing C would also mean a larger cable (although not necessarily more copper). Increasing G is highly undesirable; while it would reduce distortion, it would at the same time increase the signal loss. Heaviside considered, but rejected, this possibility which left him with the strategy of increasing L as the way to reduce distortion. [ 8 ]
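To see why increasing L was the practical choice, one can compute the inductance the condition demands for typical primary constants. The figures below are assumed illustrative values, not historical data for any particular cable.

```python
# Illustrative sketch: inductance per unit length required by the Heaviside
# condition G/C = R/L, for assumed primary line constants (per km).
R = 55.0    # series resistance (ohm/km), assumed
G = 1e-6    # shunt leakage conductance (S/km), assumed
C = 50e-9   # shunt capacitance (F/km), assumed

L_needed = R * C / G   # rearranging R/L = G/C
print(f"inductance required: {L_needed * 1e3:.0f} mH/km")
# Far above the roughly 1 mH/km of an unloaded pair, hence the appeal of
# deliberately adding inductance rather than altering R, C or G.
```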
Heaviside immediately (1887) proposed several methods of increasing the inductance, including spacing the conductors further apart and loading the insulator with iron dust. Finally, Heaviside made the proposal (1893) to use discrete inductors at intervals along the line. [ 9 ] However, he never succeeded in persuading the British GPO to take up the idea. Brittain attributes this to Heaviside's failure to provide engineering details on the size and spacing of the coils for particular cable parameters. Heaviside's eccentric character and his setting himself apart from the establishment may also have played a part in his proposal being ignored. [ 10 ]
John S. Stone worked for the American Telephone & Telegraph Company (AT&T) and was the first to attempt to apply Heaviside's ideas to real telecommunications. Stone's idea (1896) was to use a bimetallic iron-copper cable which he had patented. [ 11 ] This cable of Stone's would increase the line inductance due to the iron content and had the potential to meet the Heaviside condition. However, Stone left the company in 1899 and the idea was never implemented. [ 12 ] Stone's cable was an example of continuous loading, a principle that was eventually put into practice in other forms, see for instance Krarup cable later in this article.
George Campbell was another AT&T engineer working in their Boston facility. Campbell was tasked with continuing the investigation into Stone's bimetallic cable, but soon abandoned it in favour of the loading coil. His was an independent discovery: Campbell was aware of Heaviside's work in discovering the Heaviside condition, but unaware of Heaviside's suggestion of using loading coils to enable a line to meet it. The motivation for the change of direction was Campbell's limited budget.
Campbell was struggling to set up a practical demonstration over a real telephone route with the budget he had been allocated. After considering that his artificial line simulators used lumped components rather than the distributed quantities found in a real line, he wondered if he could not insert the inductance with lumped components instead of using Stone's distributed line. When his calculations showed that the manholes on telephone routes were sufficiently close together to be able to insert the loading coils without the expense of either having to dig up the route or lay in new cables he changed to this new plan. [ 13 ] The very first demonstration of loading coils on a telephone cable was on a 46-mile length of the so-called Pittsburgh cable (the test was actually in Boston, the cable had previously been used for testing in Pittsburgh) on 6 September 1899 carried out by Campbell himself and his assistant. [ 14 ] The first telephone cable using loaded lines put into public service was between Jamaica Plain and West Newton in Boston on 18 May 1900. [ 15 ]
Campbell's work on loading coils provided the theoretical basis for his subsequent work on filters which proved to be so important for frequency-division multiplexing . The cut-off phenomena of loading coils, an undesirable side-effect, can be exploited to produce a desirable filter frequency response. [ 16 ] [ 17 ]
Michael Pupin , inventor and Serbian immigrant to the US, also played a part in the story of loading coils. Pupin filed a rival patent to the one of Campbell's. [ 18 ] This patent of Pupin's dates from 1899. There is an earlier patent [ 19 ] (1894, filed December 1893) which is sometimes cited as Pupin's loading coil patent but is, in fact, something different. The confusion is easy to understand, Pupin himself claims that he first thought of the idea of loading coils while climbing a mountain in 1894, [ 20 ] although there is nothing from him published at that time. [ 21 ]
Pupin's 1894 patent "loads" the line with capacitors rather than inductors, a scheme that has been criticised as being theoretically flawed [ 22 ] and never put into practice. To add to the confusion, one variant of the capacitor scheme proposed by Pupin does indeed have coils. However, these are not intended to compensate the line in any way. They are there merely to restore DC continuity to the line so that it may be tested with standard equipment. Pupin states that the inductance is to be so large that it blocks all AC signals above 50 Hz. [ 23 ] Consequently, only the capacitor is adding any significant impedance to the line and "the coils will not exercise any material influence on the results before noted". [ 24 ]
Heaviside never patented his idea; indeed, he took no commercial advantage of any of his work. [ 25 ] Despite the legal disputes surrounding this invention, it is unquestionable that Campbell was the first to actually construct a telephone circuit using loading coils. [ 26 ] There also can be little doubt that Heaviside was the first to publish and many would dispute Pupin's priority. [ 27 ]
AT&T fought a legal battle with Pupin over his claim. Pupin was first to patent but Campbell had already conducted practical demonstrations before Pupin had even filed his patent (December 1899). [ 28 ] Campbell's delay in filing was due to the slow internal machinations of AT&T. [ 29 ]
However, AT&T foolishly deleted from Campbell's proposed patent application all the tables and graphs detailing the exact value of inductance that would be required before the patent was submitted. [ 30 ] Since Pupin's patent contained a (less accurate) formula, AT&T was open to claims of incomplete disclosure. Fearing that there was a risk that the battle would end with the invention being declared unpatentable due to Heaviside's prior publication, they decided to desist from the challenge and buy an option on Pupin's patent for a yearly fee so that AT&T would control both patents. By January 1901 Pupin had been paid $200,000 ($13 million in 2011 [ 31 ] ) and by 1917, when the AT&T monopoly ended and payments ceased, he had received a total of $455,000 ($25 million in 2011 [ 31 ] ). [ 32 ]
The invention was of enormous value to AT&T. Telephone cables could now be used to twice the distance previously possible, or alternatively, a cable of half the previous quality (and cost) could be used over the same distance. When considering whether to allow Campbell to go ahead with the demonstration, their engineers had estimated that they stood to save $700,000 in new installation costs in New York and New Jersey alone. [ 33 ] It has been estimated that AT&T saved $100 million in the first quarter of the 20th century. [ 34 ] [ 35 ] Heaviside, who began it all, came away with nothing. He was offered a token payment but would not accept, wanting the credit for his work. He remarked ironically that if his prior publication had been admitted it would "interfere ... with the flow of dollars in the proper direction ...". [ 36 ]
Signal distortion is a particular problem for submarine communication cables , partly because their great length allows more distortion to build up, but also because they are more susceptible to distortion than open wires on poles due to the characteristics of the insulating material. Different wavelengths of the signal travel at different velocities in the material causing dispersion . It was this problem on the first transatlantic telegraph cable that motivated Heaviside to study the problem and find the solution. [ 37 ] Loading coils solve the dispersion problem, and the first use of them on a submarine cable was in 1906 by Siemens and Halske in a cable across Lake Constance . [ 38 ]
There are a number of difficulties using loading coils with heavy submarine cables. The bulge of the loading coils could not easily pass through the cable laying apparatus of cable ships and the ship had to slow down during the laying of a loading coil. [ 39 ] Discontinuities where the coils were installed caused stresses in the cable during laying. Without great care, the cable might part and would be difficult to repair. A further problem was that the material science of the time had difficulties sealing the joint between coil and cable against ingress of seawater. When this occurred the cable was ruined. [ 40 ] Continuous loading was developed to overcome these problems, which also has the benefit of not having a cutoff frequency. [ 39 ]
A Danish engineer, Carl Emil Krarup , invented a form of continuously loaded cable which solved the problems of discrete loading coils. Krarup cable has iron wires continuously wound around the central copper conductor with adjacent turns in contact with each other. This cable was the first use of continuous loading on any telecommunication cable. [ 41 ] In 1902, Krarup both wrote his paper on this subject and saw the installation of the first cable between Helsingør (Denmark) and Helsingborg (Sweden). [ 42 ]
Even though the Krarup cable added inductance to the line, this was insufficient to meet the Heaviside condition. AT&T searched for a better material with higher magnetic permeability . In 1914, Gustav Elmen discovered permalloy , a magnetic nickel-iron annealed alloy. In c. 1915, Oliver E. Buckley , H. D. Arnold , and Elmen, all at Bell Labs , greatly improved transmission speeds by suggesting a method of constructing submarine communications cable using permalloy tape wrapped around the copper conductors. [ 43 ]
The cable was tested in a trial in Bermuda in 1923. The first permalloy cable placed in service connected New York City and Horta (Azores) in September 1924. [ 43 ] Permalloy cable enabled signalling speed on submarine telegraph cables to be increased to 400 words/min at a time when 40 words/min was considered good. [ 44 ] The first transatlantic cable achieved only two words/min. [ 45 ]
Mu-metal has similar magnetic properties to permalloy but the addition of copper to the alloy increases the ductility and allows the metal to be drawn into wire. Mu-metal cable is easier to construct than permalloy cable, the mu-metal being wound around the core copper conductor in much the same way as the iron wire in Krarup cable. A further advantage with mu-metal cable is that the construction lends itself to a variable loading profile whereby the loading is tapered towards the ends.
Mu-metal was invented in 1923 by the Telegraph Construction and Maintenance Company , London, [ 46 ] who made the cable, initially, for the Western Union Telegraph Co . Western Union were in competition with AT&T and the Western Electric Company who were using permalloy. The patent for permalloy was held by Western Electric which prevented Western Union from using it. [ 47 ]
Continuous loading of cables is expensive and hence is only done when absolutely necessary. Lumped loading with coils is cheaper but has the disadvantages of difficult seals and a definite cutoff frequency. A compromise scheme is patch loading whereby the cable is continuously loaded in repeated sections. The intervening sections are left unloaded. [ 48 ]
Loaded cable is no longer a useful technology for submarine communication cables, having first been superseded by co-axial cable using electrically powered in-line repeaters and then by fibre-optic cable . Manufacture of loaded cable declined in the 1930s and was then superseded by other technologies post- World War II . [ 47 ] Loading coils can still be found in some telephone landlines today but new installations use more modern technology. | https://en.wikipedia.org/wiki/Loading_coil |
A loading control is a protein used as a control in a Western blotting experiment. Typically, loading controls are proteins with high and ubiquitous expression, such as beta-actin or GAPDH . They are used to make sure that protein samples have been loaded equally across all wells. [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Loading_control |
Loam molding was formerly used for making cast iron or bronze cannon and is still used for casting large bells.
Loam (pronounced 'low-m') is a mixture of sand and clay with water, sometimes with horse dung (valuable for its straw content), [ 1 ] [ 2 ] animal hair or coke. The object of including dung or hair was to make the mould permeable and allow gas (such as steam) to escape during casting. [ 3 ]
The mould for a cylindrically symmetrical object, such as a cannon , is built up in stages around a spindle, to which is fixed a strickle board with the shape of the eventual casting. The mould also has provision for the casting of a gunhead, beyond the muzzle of the cannon, into which slag can float during casting. If the object is to be hollow, a straw rope is wound around the spindle and covered in a friable material to the dimensions of the exterior of the cannon, the strickle board being turned on the spindle to ensure it is cylindrical. Decorative elements and models of the trunnions are then attached. This is then covered in a thick layer of loam, and the mould is fired. After firing, the straw rope is pulled out together with the rest of the material used to form the shape of the cannon.
The mould is then mounted vertically in a casting pit in front of the furnace. If the cannon is to be cast hollow, a core is mounted in the mould. The furnace is then tapped and the metal run into the mould. After cooling, the mould is broken off the casting, the gunhead is cut off, and the bore of the cannon is reamed out using a boring mill.
The process for the cylinder for a steam engine would be similar. The process for casting a bell is of the same nature, but the procedure is necessarily different. [ 4 ] | https://en.wikipedia.org/wiki/Loam_molding |
In anatomy , a lobe is a clear anatomical division or extension [ 1 ] of an organ (as seen for example in the brain , lung , liver , or kidney ) that can be determined without the use of a microscope at the gross anatomy level. This is in contrast to the much smaller lobule , which is a clear division only visible under the microscope . [ 2 ]
Interlobar ducts connect lobes and interlobular ducts connect lobules.
| https://en.wikipedia.org/wiki/Lobe_(anatomy) |
Lobelanine is a chemical precursor in the biosynthesis of lobeline . [ 1 ]
| https://en.wikipedia.org/wiki/Lobelanine |
In carbohydrate chemistry , the Lobry de Bruyn–Van Ekenstein transformation also known as the Lobry de Bruyn–Alberda van Ekenstein transformation is the base or acid catalyzed transformation of an aldose into the ketose isomer or vice versa, with a tautomeric enediol as reaction intermediate . Ketoses may be transformed into 3-ketoses, etcetera. The enediol is also an intermediate for the epimerization of an aldose or ketose . [ 1 ] [ 2 ]
The reactions are usually base catalyzed, but can also take place under acid or neutral conditions. [ 1 ] A typical rearrangement reaction is that between the aldose glyceraldehyde and the ketose dihydroxyacetone in a chemical equilibrium .
The Lobry de Bruyn–Van Ekenstein transformation is relevant for the industrial production of certain ketoses and was discovered in 1885 by Cornelis Adriaan Lobry van Troostenburg de Bruyn and Willem Alberda van Ekenstein . [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
The following scheme describes the interconversion between an aldose and a ketose, where R is any organic residue.
The equilibrium, or reactant-to-product ratio, depends on concentration , solvent , pH and temperature . At equilibrium the aldose and ketose form a mixture, which in the case of glyceraldehyde and dihydroxyacetone is also called glycerose .
A related reaction is the alpha-ketol rearrangement .
The carbon atom at which the initial deprotonation takes place is a stereocenter . If, for example, D- glucose (an aldose) rearranges to D- fructose (the ketose), the stereochemical configuration at that carbon is lost in the enediol form. The enediol can be protonated from either face, resulting in the re-formation of D-glucose or in the formation of its epimer, D-mannose. The final product is therefore a mixture of D-glucose, D-fructose and D-mannose. | https://en.wikipedia.org/wiki/Lobry_de_Bruyn–Van_Ekenstein_transformation |
Lobster-eye optics are a biomimetic design, based on the structure of the eyes of a lobster with an ultra wide field of view , used in X-ray optics . This configuration allows X-ray light to enter from multiple angles, capturing more X-rays from a larger area than other X-ray telescopes . The idea was originally proposed for use in X-ray astronomy by Roger Angel in 1979, with a similar idea presented earlier by W. K. H. Schmidt in 1975. It was first used by NASA on a sub-orbital sounding rocket experiment in 2012. The Lobster Eye Imager for Astronomy , a Chinese technology demonstrator satellite, was launched in 2022. The Chinese Einstein Probe , launched in 2024, is the first major space telescope to use lobster-eye optics. Several other such space telescopes are currently under development or consideration.
While most animals have refractive eyes, lobsters and other crustaceans have reflective eyes. [ 2 ] The eyes of a crustacean contain clusters of cells , each reflecting a small amount of light from a particular direction. Lobster-eye optics technology mimics this reflective structure. This arrangement allows the light from a wide viewing area to be focused into a single image. The optics are made of microchannel plates . X-ray light can enter small tubes within these plates from multiple angles, and is focused through grazing-incidence reflection that gives a wide field of view . That, in turn, makes it possible to locate and image transient astronomical events that could not have been predicted in advance. [ 3 ]
The field of view (FoV) of a lobster-eye optic, which is the solid angle subtended by the optic plate to the curvature center, is limited only by the optic size for a given curvature radius. Since the micropore optics are spherically symmetric in essentially all directions, theoretically, an idealized lobster-eye optic is almost free from vignetting except near the edge of the FoV. [ 4 ] Micropore imagers are created from several layers of lobster-eye optics that creates an approximation of Wolter type-I optical design. [ 2 ]
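Since the field of view is simply the solid angle the optic plate subtends at its centre of curvature, it can be estimated directly from the plate size and curvature radius. The sketch below uses assumed dimensions purely for illustration.

```python
# Illustrative sketch: approximate field of view of a single lobster-eye
# optic plate as the solid angle it subtends at the curvature centre,
# converted to square degrees. Plate size and radius are assumed values.
import math

side = 0.1   # optic plate side length (m), assumed
R = 0.75     # radius of curvature (m), assumed

solid_angle_sr = (side * side) / (R * R)   # small-angle approximation
sq_deg_per_sr = (180.0 / math.pi) ** 2     # ~3282.8 square degrees per steradian
print(f"FoV: {solid_angle_sr * sq_deg_per_sr:.0f} square degrees")
```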
Only three geometries that use grazing incidence reflection of X-rays to produce X-ray images are known: the Wolter system , the Kirkpatrick-Baez system , and the lobster-eye geometry. [ 5 ]
The lobster-eye X-ray optics design was first proposed in 1979 by Roger Angel . [ 6 ] [ 7 ] His design is based on Kirkpatrick-Baez optics , but requires pores with a square cross-section, and is referred to as the "Angel multi-channel lens". [ 5 ] This design was inspired directly by the reflective properties of lobster eyes. [ 1 ] [ 4 ] Before Angel, an alternative design involving a one-dimensional arrangement consisting of a set of flat reflecting surfaces had been proposed by W. K. H. Schmidt in 1975, known as the "Schmidt focusing collimator objective". [ 5 ] [ 8 ] [ 9 ] In 1989, physicists Keith Nugent and Stephen W. Wilkins collaborated to develop lobster-eye optics independently of Angel. Their key contribution was to open up an approach to manufacturing these devices using microchannel plate technology. This lobster-eye approach paved the way for X-ray telescopes with a 360-degree view of the sky. [ 10 ]
In 1992, Philip E. Kaaret and Phillip Geissbuehler proposed a new method for creating lobster-eye optics with microchannel plates. [ 11 ] Micropores required for lobster-eye optics are difficult to manufacture and have strict requirements. The pores must have widths between 0.01 and 0.5 mm and length-to-width ratios of 20–200 (depending on the X-ray energy range), and they need to be coated with a dense material for optimal X-ray reflection. The pores' inner walls must be flat, and the pores should be organized in a dense array on a spherical surface with a radius of curvature of 2F, ensuring an open fraction greater than 50% and pore alignment accuracy between 0.1 and 5 arc minutes towards a common center. [ 5 ]
Similar optics designs include honeycomb collimators (used in NEAR Shoemaker 's XGRS detectors and MESSENGER 's XRS) and silicon pore imagers (developed by ESA for its planned ATHENA mission). [ 2 ]
NASA launched the first lobster-eye imager on a Black Brant IX sub-orbital sounding rocket in 2012. The STORM/DXL instrument (Sheath Transport Observer for the Redistribution of Mass/Diffuse X-ray emission from the Local galaxy) had micropore reflectors arranged in an array to form a Kirkpatrick-Baez system. [ 12 ] [ 13 ] BepiColombo , a joint ESA and JAXA Mercury mission launched in 2018, has a non-imaging collimator MIXS-C, with a microchannel geometry similar to the lobster-eye micropore design. [ 8 ] [ 14 ]
CNSA launched the Lobster-Eye X-ray Satellite in 2020, the first in-orbit lobster-eye telescope. [ 15 ] In 2022, the Chinese Academy of Sciences built and launched the Lobster Eye Imager for Astronomy (LEIA), a wide-field X-ray imaging space telescope. It is a technology demonstrator mission that tests the sensor design for the Einstein Probe . [ 16 ] LEIA has a sensor module that gives it a field of view of 340 square degrees . [ 16 ] In August and September 2022, LEIA conducted measurements to verify its functionality. A number of preselected sky regions and targets were observed, including the Galactic Center , the Magellanic Clouds , Sco X-1 , Cas A , the Cygnus Loop , and a few extragalactic sources. To eliminate interference from sunlight, the observations were obtained in Earth's shadow, starting 2 minutes after the satellite entered the shadow and ending 10 minutes before leaving it, resulting in an observational duration of ~23 minutes in each orbit. The CMOS detectors operated in event mode. [ 4 ]
The Einstein Probe , a joint mission by the Chinese Academy of Sciences (CAS) in partnership with the European Space Agency (ESA) and the Max Planck Institute for Extraterrestrial Physics , was launched on 9 January 2024. [ 17 ] It uses a 12-sensor module wide-field X-ray telescope for a 3600 square degree field of view, first tested by the Lobster Eye Imager for Astronomy mission. [ 16 ]
The joint French-Chinese SVOM was launched on 22 June 2024. [ 18 ]
NASA's Goddard Space Flight Center proposed an instrument that uses the lobster-eye design for the ISS-TAO mission (Transient Astrophysics Observatory on the International Space Station ), called the X-ray Wide-Field Imager. [ 3 ] ISS-Lobster is a similar concept by ESA. [ 19 ]
Several space telescopes that use lobster-eye optics are under construction. SMILE , a space telescope project by ESA and CAS, is planned to be launched in 2025. [ 20 ] ESA's THESEUS is now under consideration. [ 21 ]
Lobster-eye optics can also be used for backscattering imaging for homeland security , detection of improvised explosive devices , nondestructive testing , and medical imaging . [ 1 ] | https://en.wikipedia.org/wiki/Lobster-eye_optics |
LocDB [ 1 ] is an expert-curated database that collects experimental annotations for the subcellular localization of proteins in Homo sapiens (human) and Arabidopsis thaliana (thale cress). The database also contains predictions of subcellular localization from a variety of state-of-the-art prediction methods for all proteins with experimental information.
Proteins are the fundamental functional components of cells. They are responsible for transforming genetic information into physical reality. These macromolecules mediate gene regulation, enzymatic catalysis, cellular metabolism, DNA replication, nutrient transport, and the recognition and transmission of signals. The interpretation of this wealth of data to elucidate protein function in the post-genomic era is a fundamental challenge. To date, even for the most well-studied organisms such as yeast, about one-fourth of the proteins remain uncharacterized. A major obstacle in experimentally determining protein function is that such studies require enormous resources. Hence, the gap between the number of sequences deposited in databases and the experimental characterization of the corresponding proteins is ever-growing. Bioinformatics plays a central role in bridging this sequence-function gap through the development of tools for faster and more effective prediction of protein function. This repository effectively fills the gap between experimental annotations and predictions and provides a bigger and more reliable dataset for the testing of new prediction methods. [ 1 ]
This Biological database -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/LocDB |
The local-area augmentation system ( LAAS ) is an all-weather aircraft landing system based on real-time differential correction of the GPS signal. Local reference receivers located around the airport send data to a central location at the airport. This data is used to formulate a correction message, which is then transmitted to users via a VHF Data Link . A receiver on an aircraft uses this information to correct GPS signals, which then provides a standard instrument landing system (ILS)-style display to use while flying a precision approach . The FAA has stopped using the term LAAS and has transitioned to the International Civil Aviation Organization (ICAO) terminology of ground-based augmentation system (GBAS). [ 1 ] While the FAA has indefinitely delayed plans for federal GBAS acquisition, the system can be purchased by airports and installed as a Non-Federal navigation aid. [ 2 ]
The ground-based augmentation system (GBAS), with aviation standards identified in International Civil Aviation Organization (ICAO) Standards and Recommended Practices (SARPS), Annex 10 on radio-frequency navigation, provides international standards for augmentation of GPS to support precision landing. The history of these standards can be traced back to efforts in the United States by the Federal Aviation Administration to develop a local area augmentation system (LAAS). Many references still refer to LAAS, although the current international terminology is GBAS and GBAS landing system (GLS).
GBAS monitors GNSS satellites and provides correction messages to users in the vicinity of the GBAS station. The monitoring enables the GBAS to detect anomalous GPS satellite behavior and alert users in a time frame appropriate for aviation uses. The GBAS provides corrections to the GPS signals with a resulting improvement in accuracy sufficient to support aircraft precision approach operations. For more information on how GBAS works, see GBAS-How It Works publication. [ 3 ]
Current GBAS standards only augment a single GNSS frequency and support landings to Category-1 minima. These GBAS systems are identified as GBAS Approach Service Type C (GAST-C). Draft requirements for a GAST-D system are under review by ICAO. A GAST-D system will support operations to Category-III minima. Many organizations are conducting research in multi-frequency GBAS. Other efforts are exploring the addition of Galileo corrections for GBAS.
Honeywell has developed a non-federal CAT-1 GBAS which received system design approval from the Federal Aviation Administration (FAA) in September 2009. The GBAS installation at Newark Liberty International Airport achieved operational approval on September 28, 2012. [ 4 ] A second GBAS installed at Houston Intercontinental Airport received operational approval on April 23, 2013. [ 5 ] Honeywell systems are also installed internationally, with an operational system in Bremen, Germany. Additional systems are installed or in the process of being installed. Operational approval of several more systems is expected shortly. [ when? ]
Local reference receivers are located around an airport at precisely surveyed locations. The signal received from the GPS constellation is used to calculate the position of the LAAS ground station, which is then compared to its precisely surveyed position. This data is used to formulate a correction message which is transmitted to users via a VHF data link. A receiver on the aircraft uses this information to correct the GPS signals it receives. This information is used to create an ILS-type display for aircraft approach and landing purposes. Honeywell's CAT I system provides precision approach service within a radius of 23 NM surrounding a single airport. LAAS mitigates GPS errors in the local area to a much greater accuracy than WAAS and therefore provides a level of service not attainable by WAAS. LAAS's VHF uplink signal is currently slated to share the frequency band from 108 MHz to 118 MHz with existing ILS localizer and VOR navigational aids. LAAS utilizes time-division multiple access (TDMA) technology to service an entire airport with a single frequency allocation. As LAAS replaces ILS installations in the future, it will relieve congestion in the VHF navigation band.
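The differential principle described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration only (positions and pseudoranges in meters, one correction per satellite); a real GBAS broadcast also carries integrity and approach-path data, which are omitted here.

    import math

    def geometric_range(receiver_xyz, sat_xyz):
        """Straight-line distance from a receiver to a satellite, in meters."""
        return math.dist(receiver_xyz, sat_xyz)

    def reference_corrections(surveyed_xyz, measured_pr, sat_positions):
        """Ground-station step: difference between the range implied by the
        precisely surveyed antenna position and each measured pseudorange."""
        return {sat: geometric_range(surveyed_xyz, sat_positions[sat]) - pr
                for sat, pr in measured_pr.items()}

    def apply_corrections(airborne_pr, corrections):
        """Airborne step: add the broadcast correction to each raw
        pseudorange before solving for position."""
        return {sat: pr + corrections[sat] for sat, pr in airborne_pr.items()}

    # Errors common to the nearby ground and air receivers (satellite clock,
    # ionosphere, troposphere) largely cancel in the corrected pseudoranges.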
The current Category-1 (GAST-C) GBAS achieves a Category I precision approach accuracy of 16 m laterally and 4 m vertically. [ 6 ] The goal of a to-be-developed GAST-D GBAS is to provide Category III precision approach capability. The minimum accuracy for lateral and vertical errors of a Category III system is specified in RTCA DO-245A, Minimum Aviation System Performance Standards for Local Area Augmentation System (LAAS) . The GAST-D GBAS will allow aircraft to land with zero visibility using 'autoland' systems.
One of the primary benefits of LAAS is that a single installation at a major airport can be used for multiple precision approaches within the local area. For example, if Chicago O'Hare has twelve runway ends, each with a separate ILS, all twelve ILS facilities can be replaced with a single LAAS system. This represents a significant cost savings in maintenance and upkeep of the existing ILS equipment.
Another benefit is the potential for approaches that are not straight-in. Aircraft equipped with LAAS technology can fly curved or complex approaches, for example to avoid obstacles or to decrease noise levels in areas surrounding an airport. This technology shares similar characteristics with the older microwave landing system (MLS) approaches, commonly seen in Europe. Both systems allow lower visibility requirements on complex approaches that traditional wide area augmentation systems (WAAS) and instrument landing systems (ILS) could not allow. [ citation needed ]
The FAA also contends that only a single set of navigational equipment will be needed on an aircraft for both LAAS and WAAS capability. This lowers initial cost and maintenance per aircraft since only one receiver is required instead of multiple receivers for NDB , DME , VOR , ILS , MLS , and GPS . The FAA hopes this will result in decreased cost to the airlines and passengers as well as general aviation .
LAAS shares the drawbacks of all RF-based landing systems, namely susceptibility to jamming, whether intentional or accidental.
The joint precision approach and landing system (JPALS) is a similar system for military usage. Honeywell has developed the Honeywell International Satellite Landing System (SLS) 4000 series (SLS-4000) which received system design approval from the FAA on September 3, 2009, with a follow-on approval of an enhanced SLS-4000 (SLS-4000 Block 1) in September 2012. [ 2 ] [ 7 ]
The FAA's National Airspace System (NAS) enterprise architecture is the blueprint for transforming the current NAS to the Next Generation Air Transportation System (NextGen). The NAS service roadmaps lay out the strategic activities for service delivery to improve NAS operations and move towards the NextGen vision. They show the evolution of major FAA investments/programs in today's NAS services to meet future demand. GBAS precision approach is one of the investment programs that provides a solution to "increase flexibility in the terminal environment" in the NextGen implementation plan.
The FAA expected to replace legacy navigation systems with satellite-based navigation technology; however, the FAA has indefinitely delayed plans for federal GBAS acquisition. The system can still be purchased by airports and installed as a non-federal navigation aid. The FAA continues to develop GBAS systems and to seek international standardization. [ 2 ] | https://en.wikipedia.org/wiki/Local-area_augmentation_system
Local Area Transport (LAT) [ 1 ] [ 2 ] is a non-routable ( data link layer ) networking technology developed by Digital Equipment Corporation [ 3 ] to provide connection between the DECserver terminal servers and Digital's VAX and Alpha and MIPS host computers via Ethernet , giving communication between those hosts and serial devices such as video terminals and printers. The protocol itself was designed in such a manner as to maximize packet efficiency over Ethernet by bundling multiple characters from multiple ports into a single packet for Ethernet transport. [ 4 ]
One LAT strength was efficiently handling time-sensitive data transmission. [ 1 ] [ 5 ] Over time, other host implementations of the LAT protocol appeared allowing communications to a wide range of Unix and other non-Digital operating systems using the LAT protocol.
In 1984, the first implementation of the LAT protocol connected a terminal server to a VMS VAX-Cluster in Spit Brook Road, Nashua, NH. By "virtualizing" the terminal port at the host end, a very large number of plug-and-play VT100-class terminals could connect to each host computer system. Additionally, a single physical terminal could connect via multiple sessions to multiple hosts simultaneously. Later generations of terminal servers supported both LAT and TELNET, one of the earliest protocols created to run on the burgeoning TCP/IP-based Internet. Additionally, the ability to create reverse-direction pathways from users to non-traditional RS232 devices (e.g. UNIX host TTYS1 operator ports) created an entirely new market for terminal servers, which from the mid-to-late 1990s onward became known as console servers, a market that persists today.
LAT and VMS drove the initial surge of adoption of thick Ethernet by the computer industry. By 1986, terminal server networks accounted for 10% of Digital's $10 billion revenue. These early Ethernet LANs scaled using Ethernet bridges (another DEC invention) as well as DECnet routers. Subsequently, Cisco routers, which implemented TCP/IP and DECnet, emerged as a global connection between these packet-based Ethernet LANs.
Over time, as hardware terminals became less popular, terminal emulators commonly included a built-in LAT client.
If a computer communicating via LAT does not receive an acknowledgment within 80 milliseconds for a packet it transmitted, it resends that packet; this can clog a network. [ 1 ] No packet is sent if no data is offered, and under heavy loads LAT limits the number of packets sent per second to twenty-four: twelve transmitted and twelve received. As more characters are sent, the packets get bigger but not more numerous. Above 80 milliseconds of latency, a touch typist will notice sluggish character echo. The 80 millisecond bundling delay reduces load on the network, by sending fewer, larger packets, and on the hosts, by reducing interrupts at each system.
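The bundling behaviour can be illustrated with a toy simulation (hypothetical code, not DEC's implementation): all characters arriving within one 80 ms circuit-timer interval travel in a single packet, so heavier typing makes packets larger rather than more frequent.

    import queue

    BUNDLE_INTERVAL_MS = 80   # circuit timer: one send opportunity per 80 ms
    MAX_PACKETS_PER_SEC = 12  # per direction under heavy load, as noted above

    def bundle_characters(pending):
        """Drain everything typed during one timer interval into a single
        packet payload; an idle interval produces no packet at all."""
        payload = bytearray()
        while not pending.empty():
            payload.append(pending.get_nowait())
        return bytes(payload) if payload else None

    # One simulated tick: three keystrokes arrive within the same interval.
    pending = queue.Queue()
    for ch in b"abc":
        pending.put(ch)
    print(bundle_characters(pending))  # b'abc': one packet instead of three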
Most Linux distributions offer a client and server lat package that can easily be installed via a package manager .
This allows one, for example, to access a server on the local network while connected to a corporate VPN that would otherwise block local TCP/IP traffic. | https://en.wikipedia.org/wiki/Local_Area_Transport
In the United Kingdom a local biodiversity action plan ( LBAP , pronounced 'ell-bap') is a plan aimed at conserving the fauna, flora and habitats – collectively referred to as biodiversity – of a defined area, usually along local authority boundary lines. [ 1 ] [ 2 ]
The development of such plans at a local level is guided by the publication of a biodiversity action plan at national i.e. UK level and executed locally by 'biodiversity partnerships' which include key stakeholders from different sectors. [ 3 ]
Progress on local plans is overseen by different agencies in each of the four administrations.
As of January 2012, there were many dozens of LBAPs in England, overseen by the statutory agency Natural England . [ 4 ]
The Northern Ireland Environment Agency oversees the production of LBAPs in the province. [ 5 ]
As of January 2012 there were 25 LBAPs in place covering each of Scotland's local authority areas and both the Cairngorms and Loch Lomond and The Trossachs national parks. [ 6 ]
By January 2012 there were LBAPs published or in draft form for each of 24 areas identified across Wales including the three national parks. [ 7 ] The Wales Biodiversity Partnership brings together most of the main players across Wales to assist in the process of assembling and delivering LBAPs. | https://en.wikipedia.org/wiki/Local_Biodiversity_Action_Plan |
The Local Bubble , or Local Cavity , [ 3 ] is a relative cavity in the interstellar medium (ISM) of the Orion Arm in the Milky Way . It contains the nearest stars and brown dwarfs and, among others, the Local Interstellar Cloud (which contains the Solar System ), the neighboring G-Cloud , the Ursa Major moving group (the nearest stellar moving group ), and the Hyades (the nearest open cluster ). It is estimated to be at least 1000 light years in size, [ 4 ] and is defined by its neutral-hydrogen density of about 0.05 atoms /cm 3 , or approximately one tenth of the average for the ISM in the Milky Way (0.5 atoms/cm 3 ), and one sixth that of the Local Interstellar Cloud (0.3 atoms/cm 3 ). [ 5 ]
The exceptionally sparse gas of the Local Bubble is the result of supernovae that exploded within the past ten to twenty million years. Geminga , a pulsar in the constellation Gemini , was once thought to be the remnant of a single supernova that created the Local Bubble, but now multiple supernovae in subgroup B1 of the Pleiades moving group are thought to have been responsible, [ 6 ] becoming a remnant supershell . [ 7 ] Other research suggests that the subgroups Lower Centaurus–Crux (LCC) and Upper Centaurus–Lupus (UCL), of the Scorpius–Centaurus association created both the Local Bubble and the Loop I Bubble, with LCC being responsible for the Local Bubble and UCL being responsible for the Loop I Bubble. [ 8 ] It was found that 14 to 20 supernovae originated from LCC and UCL, which could have formed these bubbles. [ 9 ]
The Solar System has been traveling through the region currently occupied by the Local Bubble for the last five to ten million years. [ 10 ] Its current location lies in the Local Interstellar Cloud (LIC), a minor region of denser material within the Bubble. The LIC formed where the Local Bubble and the Loop I Bubble met. The gas within the LIC has a density of approximately 0.3 atoms per cubic centimeter.
The Local Bubble is not spherical, but appears to be narrower in the galactic plane , becoming somewhat egg-shaped or elliptical, and may widen above and below the galactic plane, becoming shaped like an hourglass. It abuts other bubbles of less dense interstellar medium (ISM), including, in particular, the Loop I Bubble. The Loop I Bubble was cleared, heated, and maintained by supernovae and stellar winds in the Scorpius–Centaurus association , some 500 light years from the Sun . The Loop I Bubble contains the star Antares (also known as α Sco, or Alpha Scorpii). The cavities of the Local Bubble and the Loop I Bubble are connected by tunnels, one of which is known as the "Lupus Tunnel". [ 11 ] Other bubbles adjacent to the Local Bubble are the Loop II Bubble and the Loop III Bubble . In 2019, researchers found interstellar iron in Antarctica, which they attribute to the Local Interstellar Cloud ; this cloud might in turn be related to the formation of the Local Bubble. [ 12 ]
Launched in February 2003 and active until April 2008, a small space observatory called Cosmic Hot Interstellar Plasma Spectrometer (CHIPSat) examined the hot gas within the Local Bubble. [ 13 ] The Local Bubble was also the region of interest for the Extreme Ultraviolet Explorer mission (1992–2001), which examined hot EUV sources within the bubble. Sources beyond the edge of the bubble were identified but attenuated by the denser interstellar medium. In 2019, the first 3D map of the Local Bubble was reported using observations of diffuse interstellar bands. [ 14 ] In 2020, the shape of the dusty envelope surrounding the Local Bubble was retrieved and modeled from 3D maps of the dust density obtained from stellar extinction data. [ 15 ]
In January 2022, a paper in the journal Nature reported that observations and modeling had determined that the expanding surface of the Local Bubble swept up gas and debris and was responsible for the formation of all nearby young stars. [ 18 ]
These new stars are typically found in molecular clouds such as the Taurus molecular cloud and in young clusters such as the Pleiades open star cluster.
Several radioactive isotopes on Earth have been connected to supernovae occurring relatively nearby to the solar system. The most common source is found in deep sea ferromanganese crusts , which are constantly growing, aggregating iron, manganese, and other elements. Samples are divided into layers which are dated, for example, with Beryllium-10 . Some of these layers have higher concentrations of radioactive isotopes. [ 19 ] The isotope most commonly associated with supernovae on Earth is Iron-60 from deep sea sediments , [ 20 ] Antarctic snow, [ 21 ] and lunar soil . [ 22 ] Other isotopes are Manganese-53 [ 23 ] and Plutonium-244 [ 19 ] from deep sea materials. Supernova-originated Aluminium-26 , which was expected from cosmic ray studies, was not confirmed. [ 24 ] Iron-60 and Manganese-53 have a peak 1.7–3.2 million years ago, and Iron-60 has a second peak 6.5–8.7 million years ago. The older peak likely originated when the solar system moved through the Orion–Eridanus Superbubble and the younger peak was generated when the solar system entered the Local Bubble 4.5 million years ago. [ 25 ] One of the supernovae creating the younger peak might have created the pulsar PSR B1706-16 and turned Zeta Ophiuchi into a runaway star . Both originated from UCL and were released by a supernova 1.78 ± 0.21 million years ago. [ 26 ] Another explanation for the older peak is that it was produced by one supernova in the Tucana-Horologium association 7-9 million years ago. [ 27 ]
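The dating of these layers rests on ordinary exponential decay. As a rough illustration, the snippet below computes the surviving fraction of iron-60, assuming a half-life of about 2.6 million years (a literature value; treat it here as an assumption).

    def surviving_fraction(age_myr, half_life_myr=2.6):
        """Fraction of the original isotope remaining after age_myr million
        years, using N/N0 = 2 ** (-t / half_life)."""
        return 0.5 ** (age_myr / half_life_myr)

    # Iron-60 from the younger peak is still largely intact, while material
    # as old as the older peak has mostly decayed away.
    for age in (2.0, 3.0, 8.0):
        print(f"{age} Myr old layer: {surviving_fraction(age):.0%} remaining")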
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Local Hole → Observable universe → Universe Each arrow ( → ) may be read as "within" or "part of". | https://en.wikipedia.org/wiki/Local_Bubble |
The Local Government Category List (LGCL) is a UK metadata standard, a controlled vocabulary of subject metadata terms related to local government. It has been superseded by the Integrated Public Sector Vocabulary (IPSV) but remains available for reference.
The LGCL was developed by the Local Authority Websites National Project (LAWs), an initiative that aimed to meet the "requirement for a structured approach to information handling, publication and navigation". [ 1 ] The LGCL was created as part of the Information Architecture & Standards project strand, the responsibility for which was delegated to the London Borough of Camden . [ 2 ] Version 1.0 was released in October 2003 and version 1.01 in the following month. [ 3 ] [ 4 ] Version 1.02 was released in January 2004 and officially remains the current full version. [ 5 ] Version 1.03 was published in March 2004 but is still considered to be a draft. [ 6 ] The LGCL was merged with the Government Category List , and the seamlessUK taxonomy to form the Integrated Public Sector Vocabulary (IPSV) as part of the UK Government e-GMS initiative. [ 7 ] [ 8 ] The LGCL has since been mapped back to the IPSV as well as having been mapped to the Government Category List and the Local Government Services List (the 'PID List') and the previous APLAWS Category List. [ 9 ] The LGCL and mappings are published in XML , PDF and Word formats. [ 10 ] LGCL is formally deprecated in favour of the IPSV. [ 11 ]
The LGCL comprises a poly-hierarchy of terms: thirteen broad top-level terms divided into increasingly detailed sub-levels.
These top-level terms are each divided into second-level terms, some of which may be further subdivided into as many as four further sub-levels of detail. Thus, for example, "Wheelchairs" appears as a sixth-level subdivision of "Health and social care".
Terms may be repeated at different levels but have consistent meanings. For example, "Cycling" appears as a type of transport route under "Transport and streets" > "Cycling, pedestrian and other pathways" > "Cycling", as well as an outdoor pursuit under "Leisure and culture" > "Sports" > "Types of sports" > "Outdoor pursuits" > "Cycling". Synonyms are provided for many of the preferred terms. The standard also includes a commitment that terms will never be removed from the list, but may change status or move position in the hierarchy between versions. [ 12 ]
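The poly-hierarchy and synonym behaviour can be modelled with a small data structure, as in the sketch below. This is purely illustrative: the real list contains far more terms, and the synonym shown is invented for the example.

    # Heavily abridged, hypothetical model of the poly-hierarchy: the same
    # preferred term may appear under more than one parent path.
    HIERARCHY = [
        ("Transport and streets", "Cycling, pedestrian and other pathways",
         "Cycling"),
        ("Leisure and culture", "Sports", "Types of sports",
         "Outdoor pursuits", "Cycling"),
    ]
    SYNONYMS = {"Bikes": "Cycling"}  # invented synonym for illustration

    def paths_for(term):
        """Return all hierarchy paths ending in the preferred form of term."""
        preferred = SYNONYMS.get(term, term)
        return [path for path in HIERARCHY if path[-1] == preferred]

    print(paths_for("Bikes"))  # both placements of "Cycling" are returned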
One of the intended uses of the standard was to provide a default local authority Web site navigation hierarchy that could be used flexibly with an appropriate level of detail to meet local requirements. [ 12 ] The standard was also recommended for use as the basis of records management system file plans. [ 13 ]
The LGCL remains freely available "without guarantees and without licensing costs" and may be used and reproduced free of charge provided specific restrictions and appropriate attribution are respected. [ 10 ] | https://en.wikipedia.org/wiki/Local_Government_Category_List |
In Galois cohomology , local Tate duality (or simply local duality ) is a duality for Galois modules for the absolute Galois group of a non-archimedean local field . It is named after John Tate who first proved it. It shows that the dual of such a Galois module is the Tate twist of the usual linear dual. This new dual is called the ( local ) Tate dual .
Local duality combined with Tate's local Euler characteristic formula provide a versatile set of tools for computing the Galois cohomology of local fields.
Let K be a non-archimedean local field, let K s denote a separable closure of K , and let G K = Gal( K s / K ) be the absolute Galois group of K .
Denote by μ the Galois module of all roots of unity in K s . Given a finite G K -module A of order prime to the characteristic of K , the Tate dual of A is defined as A ′ = Hom ( A , μ ) {\displaystyle A'=\operatorname {Hom} (A,\mu )}
(i.e. it is the Tate twist of the usual dual A ∗ ). Let H i ( K , A ) denote the group cohomology of G K with coefficients in A . The theorem states that the pairing H i ( K , A ) × H 2− i ( K , A ′ ) → H 2 ( K , μ ) = Q / Z {\displaystyle H^{i}(K,A)\times H^{2-i}(K,A')\to H^{2}(K,\mu )=\mathbf {Q} /\mathbf {Z} }
given by the cup product sets up a duality between H i ( K , A ) and H 2− i ( K , A ′ ) for i = 0, 1, 2. [ 1 ] Since G K has cohomological dimension equal to two, the higher cohomology groups vanish. [ 2 ]
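As a standard illustration of the duality, one may take A = Z / n Z {\displaystyle A=\mathbb {Z} /n\mathbb {Z} } with trivial Galois action, n prime to the characteristic of K . Then A ′ = Hom ( Z / n Z , μ ) = μ n {\displaystyle A'=\operatorname {Hom} (\mathbb {Z} /n\mathbb {Z} ,\mu )=\mu _{n}} , and in degree i = 0 the theorem pairs H 0 ( K , Z / n Z ) ≅ Z / n Z {\displaystyle H^{0}(K,\mathbb {Z} /n\mathbb {Z} )\cong \mathbb {Z} /n\mathbb {Z} } with H 2 ( K , μ n ) ≅ 1 n Z / Z {\displaystyle H^{2}(K,\mu _{n})\cong {\tfrac {1}{n}}\mathbb {Z} /\mathbb {Z} } , the n -torsion of the Brauer group of K identified by the invariant map of local class field theory.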
Let p be a prime number . Let Q p (1) denote the p -adic cyclotomic character of G K (i.e. the Tate module of μ). A p -adic representation of G K is a continuous representation ρ : G K → GL ( V ) {\displaystyle \rho :G_{K}\to \operatorname {GL} (V)}
where V is a finite-dimensional vector space over the p-adic numbers Q p and GL( V ) denotes the group of invertible linear maps from V to itself. [ 3 ] The Tate dual of V is defined as V ′ = Hom ( V , Q p ( 1 ) ) {\displaystyle V'=\operatorname {Hom} (V,\mathbf {Q} _{p}(1))}
(i.e. it is the Tate twist of the usual dual V ∗ = Hom( V , Q p )). In this case, H i ( K , V ) denotes the continuous group cohomology of G K with coefficients in V . Local Tate duality applied to V says that the cup product induces a pairing H i ( K , V ) × H 2− i ( K , V ′ ) → H 2 ( K , Q p ( 1 ) ) = Q p {\displaystyle H^{i}(K,V)\times H^{2-i}(K,V')\to H^{2}(K,\mathbf {Q} _{p}(1))=\mathbf {Q} _{p}}
which is a duality between H i ( K , V ) and H 2− i ( K , V ′) for i = 0, 1, 2. [ 4 ] Again, the higher cohomology groups vanish. | https://en.wikipedia.org/wiki/Local_Tate_duality |
Local adaptation is a mechanism in evolutionary biology whereby a population of organisms evolves to be more well-suited to its local environment than other members of the same species that live elsewhere. Local adaptation requires that different populations of the same species experience different natural selection . For example, if a species lives across a wide range of temperatures, populations from warm areas may have better heat tolerance than populations of the same species that live in the cold part of its geographic range.
More formally, a population is said to be locally adapted [ 1 ] if organisms in that population have evolved different phenotypes than other populations of the same species, and local phenotypes have higher fitness in their home environment compared to individuals that originate from other locations in the species range. [ 2 ] [ 3 ] This is sometimes called 'home site advantage'. [ 4 ] A stricter definition of local adaptation requires 'reciprocal home site advantage', where for a pair of populations each outperforms the other in its home site. [ 5 ] [ 2 ] This definition requires that local adaptation result in a fitness trade-off, such that adapting to one environment comes at the cost of poorer performance in a different environment. [ 3 ] Before 2004, studies using reciprocal transplants sometimes considered a population locally adapted if it experienced its highest fitness in its home site versus foreign sites (i.e. comparing the same population at multiple sites, rather than multiple populations at the same site). This definition of local adaptation has been largely abandoned after Kawecki and Ebert argued convincingly that populations could be adapted to poor-quality sites but still experience higher fitness if moved to a more benign site. [ 3 ]
Testing for local adaptation requires measuring the fitness of organisms from one population in both their local environment and in foreign environments. This is often done using transplant experiments. Using the stricter definition of reciprocal home site advantage, local adaptation is often tested via reciprocal transplant experiments . In reciprocal transplants, organisms from one population are transplanted into another population, and vice versa, and their fitness is measured. [ 3 ] If local transplants outperform (i.e. have higher fitness than) the foreign transplants at both sites, the local populations are said to be locally adapted. [ 3 ] If local adaptation is defined simply as a home site advantage of one population (local sources outperform foreign sources at a common site), it can be tested for using common garden experiments, where multiple source populations are grown in a common site, as long as one of the source populations is local to that site.
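The two criteria can be made concrete with a toy analysis of mean fitness values from a reciprocal transplant; the population names and numbers below are hypothetical.

    # fitness[source population][test site] = mean fitness (hypothetical data)
    FITNESS = {"A": {"site_A": 5.0, "site_B": 2.0},
               "B": {"site_A": 3.0, "site_B": 4.0}}

    def local_beats_foreign(fitness, pop, site):
        """'Local vs. foreign' test: does the local population outperform
        every transplanted population at its home site?"""
        rivals = [f[site] for src, f in fitness.items() if src != pop]
        return all(fitness[pop][site] > r for r in rivals)

    # The strict criterion (reciprocal home-site advantage) requires the
    # local-vs-foreign test to pass at both home sites.
    strict = (local_beats_foreign(FITNESS, "A", "site_A")
              and local_beats_foreign(FITNESS, "B", "site_B"))
    print(strict)  # True for these illustrative numbers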
Transplant experiments have most often been done with plants or other organisms that do not move. [ 5 ] However, evidence for rapid local adaptation in mobile animals has been gathered through transplant experiments with Trinidadian guppies . [ 6 ]
Several meta-analyses have attempted to quantify how common local adaptation is, and they generally reach similar conclusions. Roughly 75% of transplant experiments (mostly with plants) find that local populations outcompete foreign populations at a common site, but less than 50% find the reciprocal home site advantage that defines classic local adaptation. [ 5 ] [ 7 ] Exotic plants are locally adapted to their invasive range as often and as strongly as native plants are locally adapted, suggesting that local adaptation can evolve relatively rapidly. [ 8 ] [ 9 ] However, biologists likely test for local adaptation where they expect to find it. Thus these numbers likely reflect local adaptation between obviously differing sites, rather than the probability that any two randomly selected populations within a species are locally adapted.
Any component of the environment can drive local adaptation, as long as it affects fitness differently at different sites (creating divergent selection among sites), and does so consistently enough for populations to evolve in response. Seminal examples of local adaptation come from plants that adapted to different elevations [ 10 ] or to tolerate heavy metals in soils. [ 11 ] Interactions among species (e.g. herbivore-plant interactions) can also drive local adaptation, though they do not seem to be as important as abiotic factors, at least for plants in temperate ecosystems. [ 12 ] Many examples of local adaptation exist in host-parasite systems as well. For instance, a host may be resistant to a locally abundant pathogen or parasite, while conspecific hosts from elsewhere, where that pathogen is not abundant, may have evolved no such adaptation. [ 13 ]
Gene flow can completely prevent local adaptation in populations by increasing the amount of genetic material exchanged, which can then lower the frequency of alleles associated with the specific local adaptation. [ 14 ] However, gene flow can also introduce beneficial alleles to a population , which increases the amount of genetic variation and can therefore strengthen the likelihood of local adaptation. [ 15 ] Gene flow is the transfer of genetic information from one population to another, mainly through the migration of organisms or their genetic material. [ 16 ] Genetic material such as pollen or spores can travel via wind or water, or be carried by an animal, and so reach even an isolated population. [ 15 ]
The role gene flow plays in local adaptation is complex, because gene flow can reduce the likelihood of local adaptation in a population: when genetic material from different populations mixes frequently, the populations become genetically more similar, which is the opposite of local adaptation. [ 17 ] The level of gene flow shapes its effects on local adaptation; high gene flow tends to reduce local adaptation, whereas low gene flow can increase it. [ 17 ] High gene flow means a lot of new genetic material enters the population often, while low gene flow means a population only occasionally receives new genetic material. Populations with extensive local adaptations are the most impacted by high gene flow; in such cases, high gene flow has negative effects such as reducing or removing the adaptation. [ 14 ]
Populations with local adaptation can be isolated from other populations, but complete isolation is not necessary, and gene flow can play a role in populations developing local adaptations. Gene flow allows for the introduction of new beneficial alleles into populations where they were not previously present; if these turn out to be extremely beneficial to the population they were introduced to, they may allow organisms to adapt locally. [ 14 ] Further, local adaptation can happen under gene flow if recombination at genes connected to or controlling the adapted trait is reduced. [ 14 ]
The effect of high gene flow on local adaptation in populations co-evolving with a parasite is of particular interest because parasites are known to specialize on a given host. [ 14 ] One study examined coevolving populations of a paper wasp ( Polistes biglumis ) and the parasitic wasp ( Polistes atrimandibularis ) that preys on it; the parasite enters the host's nest and begins to reproduce, eventually taking over the nest. [ 14 ] The specific type of parasitism taking place between these two wasp species is social parasitism , meaning one species gets another species to raise its young; social parasitism is known to impact the genetic diversity of host populations. [ 18 ] A specific local adaptation of P. biglumis is producing a small number of offspring and putting more energy towards defenses against potential intruders, which helps prevent the parasitic wasp from entering the nest. [ 14 ]
Looking at different local populations with similar levels of gene flow is particularly important because the presence of local adaptations in some populations but not others could suggest that factors other than gene flow and selective pressure from parasites are causing the differences. Further, regional populations with varying levels of gene flow give a better idea of how gene flow among local populations within these regions contributes to local adaptations at the regional level. The Alps were chosen as the area for the wasp study because the elevation of the mountains separates regional and local populations , resulting in multiple local populations of both host and parasite at different elevations and in different regions. [ 14 ] For example, wasps on the same mountain but at different elevations do interbreed, so gene flow occurs between local populations. In addition, there are more isolated regional populations of both the host wasp and the parasitic wasp on completely different mountains that do not interbreed with other regional populations. [ 14 ] DNA microsatellites , a type of genetic marker, were used to study the differences between local populations, and to compare them to regional populations, in an attempt to see how gene flow was affecting their genetics. [ 14 ] Importantly, gene flow took place between wasp populations to the same degree: all local populations in the same region experienced the same amount of gene flow, [ 14 ] meaning that one host population did not have more exposure to additional genetic material than another host population at a different elevation.
The wasp study found that significant local adaptation only took place between different regional populations, rather than between different local populations; for instance, higher- and lower-elevation populations on the same side of a mountain did not differ significantly, [ 14 ] but populations in different regions, on the other side of the mountain or on a completely different mountain, did differ significantly. [ 14 ] Results from the DNA microsatellites showed that, of the regional wasp populations, the most isolated regional population was the most different from the others. [ 14 ] This evidence supports the idea that some level of isolation is needed for local adaptations to occur within populations, further supporting the idea that high levels of gene flow do not produce local adaptations.
Experimental data suggest that limited gene flow produces the most local adaptation, while high gene flow causes populations to hybridize. A study was done on fruit flies ( Drosophila melanogaster ) to see whether adaptive potential was increased in populations that were previously isolated and then experienced different levels of gene flow, or complete hybridization between two populations of previously isolated fruit flies. [ 17 ] Experiments introducing different levels of gene flow and complete hybridization of D. melanogaster populations showed that limited gene flow (in comparison to high gene flow or full hybridization) was what produced the greatest number of beneficial alleles within the fruit fly population. [ 17 ] | https://en.wikipedia.org/wiki/Local_adaptation
A local area emergency ( SAME code: LAE) is an advisory issued by local authorities through the Emergency Alert System (EAS) in the United States to notify the public of an event that does not pose a significant threat to public safety and/or property by itself, but could escalate, contribute to other more serious events, or disrupt critical public safety services. Instructions, other than public protective actions, may be provided. Examples include: a disruption in water, electric or natural gas service, road closures due to excessive snowfall, or a potential terrorist threat where the public is asked to remain alert. [ 1 ]
This article incorporates public domain material from Non-weather Related Emergency Message Description Guidelines (PDF) . United States government .
This article about disaster management or a disaster is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Local_area_emergency |
While compass surveying , the magnetic needle is sometimes disturbed from its normal position under the influence of external attractive forces. Such a disturbing influence is called local attraction . [ 1 ] The external forces are produced by sources of local attraction, which may be current-carrying wires, magnetic materials, or metal objects. [ 2 ] The term is also used to denote the amount of deviation of the needle from its normal position. Local attraction causes errors in observations while surveying, and suitable methods are therefore employed to negate these errors. [ 3 ]
The sources of local attraction may be natural or artificial. Natural sources include iron ores and magnetic rocks, while artificial sources include steel structures, iron pipes, and current-carrying conductors . Iron surveying instruments such as metric chains, ranging rods, and arrows should also be kept at a safe distance from the compass . [ 3 ]
Local attraction at a place can be detected by observing bearings from both ends of a line in the area. If the fore bearing and back bearing of a line differ by exactly 180°, there is no local attraction at either station. But if this difference is not equal to 180°, then local attraction exists at one or both ends of the line. [ 3 ]
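The 180° check can be applied mechanically to every line of a traverse, as in the sketch below; the function names and observations are hypothetical.

    def bearing_difference(fore, back):
        """Angular difference between fore and back bearing, in [0, 360)."""
        return (back - fore) % 360

    def unaffected_lines(observations, tol=1e-6):
        """Lines whose fore and back bearings differ by exactly 180 degrees;
        both end stations of such a line are free from local attraction."""
        return [line for line, (fb, bb) in observations.items()
                if abs(bearing_difference(fb, bb) - 180.0) < tol]

    # Hypothetical observations: line -> (fore bearing, back bearing) degrees
    observations = {"AB": (45.0, 225.0),   # differs by 180: A and B are clean
                    "BC": (120.5, 301.5)}  # differs by 181: attraction exists
    print(unaffected_lines(observations))  # ['AB']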
There are two common methods of correcting observed bearings of lines taken in an area affected by local attraction. The first method corrects the bearings with the help of corrected included angles; the second corrects the bearings of the traverse starting from one correct bearing (one for which the difference between fore bearing and back bearing is exactly 180°), distributing the error to the other bearings. [ 4 ] | https://en.wikipedia.org/wiki/Local_attraction
In mathematics , a function is locally bounded if it is bounded around every point. A family of functions is locally bounded if for any point in their domain all the functions are bounded around that point and by the same number.
A real-valued or complex-valued function f {\displaystyle f} defined on some topological space X {\displaystyle X} is called a locally bounded functional if for any x 0 ∈ X {\displaystyle x_{0}\in X} there exists a neighborhood A {\displaystyle A} of x 0 {\displaystyle x_{0}} such that f ( A ) {\displaystyle f(A)} is a bounded set . That is, for some number M > 0 {\displaystyle M>0} one has | f ( x ) | ≤ M for all x ∈ A . {\displaystyle |f(x)|\leq M\quad {\text{ for all }}x\in A.}
In other words, for each x {\displaystyle x} one can find a constant, depending on x , {\displaystyle x,} which is larger than all the values of the function in the neighborhood of x . {\displaystyle x.} Compare this with a bounded function , for which the constant does not depend on x . {\displaystyle x.} Obviously, if a function is bounded then it is locally bounded. The converse is not true in general (see below).
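For instance, the function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} on R {\displaystyle \mathbb {R} } is locally bounded but not bounded: around any point x 0 {\displaystyle x_{0}} one may take the neighborhood A = ( x 0 − 1 , x 0 + 1 ) {\displaystyle A=(x_{0}-1,x_{0}+1)} and the constant M = ( | x 0 | + 1 ) 2 {\displaystyle M=(|x_{0}|+1)^{2}} , but no single constant bounds it on all of R {\displaystyle \mathbb {R} } .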
This definition can be extended to the case when f : X → Y {\displaystyle f:X\to Y} takes values in some metric space ( Y , d ) . {\displaystyle (Y,d).} Then the inequality above needs to be replaced with d ( f ( x ) , y ) ≤ M for all x ∈ A , {\displaystyle d(f(x),y)\leq M\quad {\text{ for all }}x\in A,} where y ∈ Y {\displaystyle y\in Y} is some point in the metric space. The choice of y {\displaystyle y} does not affect the definition; choosing a different y {\displaystyle y} will at most increase the constant M {\displaystyle M} for which this inequality is true.
A set (also called a family ) U of real-valued or complex-valued functions defined on some topological space X {\displaystyle X} is called locally bounded if for any x 0 ∈ X {\displaystyle x_{0}\in X} there exists a neighborhood A {\displaystyle A} of x 0 {\displaystyle x_{0}} and a positive number M > 0 {\displaystyle M>0} such that | f ( x ) | ≤ M {\displaystyle |f(x)|\leq M} for all x ∈ A {\displaystyle x\in A} and f ∈ U . {\displaystyle f\in U.} In other words, all the functions in the family must be locally bounded, and around each point they need to be bounded by the same constant.
This definition can also be extended to the case when the functions in the family U take values in some metric space, by again replacing the absolute value with the distance function.
Local boundedness may also refer to a property of topological vector spaces , or of functions from a topological space into a topological vector space (TVS).
A subset B ⊆ X {\displaystyle B\subseteq X} of a topological vector space (TVS) X {\displaystyle X} is called bounded if for each neighborhood U {\displaystyle U} of the origin in X {\displaystyle X} there exists a real number s > 0 {\displaystyle s>0} such that B ⊆ t U for all t > s . {\displaystyle B\subseteq tU\quad {\text{ for all }}t>s.} A locally bounded TVS is a TVS that possesses a bounded neighborhood of the origin.
By Kolmogorov's normability criterion , this is true of a locally convex space if and only if the topology of the TVS is induced by some seminorm .
In particular, every locally bounded TVS is pseudometrizable .
A function f : X → Y {\displaystyle f:X\to Y} between topological vector spaces is said to be a locally bounded function if every point of X {\displaystyle X} has a neighborhood whose image under f {\displaystyle f} is bounded.
The following theorem relates local boundedness of functions with the local boundedness of topological vector spaces: | https://en.wikipedia.org/wiki/Local_boundedness |
In algebraic geometry , local cohomology is an algebraic analogue of relative cohomology . Alexander Grothendieck introduced it in seminars in Harvard in 1961 written up by Hartshorne (1967) , and in 1961-2 at IHES written up as SGA2 - Grothendieck (1968) , republished as Grothendieck (2005) . Given a function (more generally, a section of a quasicoherent sheaf ) defined on an open subset of an algebraic variety (or scheme ), local cohomology measures the obstruction to extending that function to a larger domain . The rational function 1 / x {\displaystyle 1/x} , for example, is defined only on the complement of 0 {\displaystyle 0} on the affine line A K 1 {\displaystyle \mathbb {A} _{K}^{1}} over a field K {\displaystyle K} , and cannot be extended to a function on the entire space. The local cohomology module H ( x ) 1 ( K [ x ] ) {\displaystyle H_{(x)}^{1}(K[x])} (where K [ x ] {\displaystyle K[x]} is the coordinate ring of A K 1 {\displaystyle \mathbb {A} _{K}^{1}} ) detects this in the nonvanishing of a cohomology class [ 1 / x ] {\displaystyle [1/x]} . In a similar manner, 1 / x y {\displaystyle 1/xy} is defined away from the x {\displaystyle x} and y {\displaystyle y} axes in the affine plane , but cannot be extended to either the complement of the x {\displaystyle x} -axis or the complement of the y {\displaystyle y} -axis alone (nor can it be expressed as a sum of such functions); this obstruction corresponds precisely to a nonzero class [ 1 / x y ] {\displaystyle [1/xy]} in the local cohomology module H ( x , y ) 2 ( K [ x , y ] ) {\displaystyle H_{(x,y)}^{2}(K[x,y])} . [ 1 ]
Outside of algebraic geometry, local cohomology has found applications in commutative algebra , [ 2 ] [ 3 ] [ 4 ] combinatorics , [ 5 ] [ 6 ] [ 7 ] and certain kinds of partial differential equations . [ 8 ]
In the most general geometric form of the theory, one considers the functor Γ Y {\displaystyle \Gamma _{Y}} of sections of a sheaf F {\displaystyle F} of abelian groups on a topological space X {\displaystyle X} , with support in a closed subset Y {\displaystyle Y} . The derived functors of Γ Y {\displaystyle \Gamma _{Y}} form the local cohomology groups H Y i ( X , F ) . {\displaystyle H_{Y}^{i}(X,F).}
In the theory's algebraic form, the space X is the spectrum Spec( R ) of a commutative ring R (assumed to be Noetherian throughout this article) and the sheaf F is the quasicoherent sheaf associated to an R - module M , denoted by M ~ {\displaystyle {\tilde {M}}} . The closed subscheme Y is defined by an ideal I . In this situation, the functor Γ Y ( F ) corresponds to the I -torsion functor, a union of annihilators Γ I ( M ) := ⋃ n ≥ 1 ( 0 : M I n ) , {\displaystyle \Gamma _{I}(M):=\bigcup _{n\geq 1}(0:_{M}I^{n}),}
i.e., the elements of M which are annihilated by some power of I . As a right derived functor , the i th local cohomology module with respect to I is the i th cohomology group H i ( Γ I ( E ∙ ) ) {\displaystyle H^{i}(\Gamma _{I}(E^{\bullet }))} of the chain complex Γ I ( E ∙ ) {\displaystyle \Gamma _{I}(E^{\bullet })} obtained from taking the I -torsion part Γ I ( − ) {\displaystyle \Gamma _{I}(-)} of an injective resolution E ∙ {\displaystyle E^{\bullet }} of the module M {\displaystyle M} . [ 9 ] Because E ∙ {\displaystyle E^{\bullet }} consists of R -modules and R -module homomorphisms , the local cohomology groups each have the natural structure of an R -module.
The I -torsion part Γ I ( M ) {\displaystyle \Gamma _{I}(M)} may alternatively be described as Γ I ( M ) = lim → n Hom R ( R / I n , M ) , {\displaystyle \Gamma _{I}(M)=\varinjlim _{n}\operatorname {Hom} _{R}(R/I^{n},M),}
and for this reason, the local cohomology of an R -module M agrees [ 10 ] with a direct limit of Ext modules , H I i ( M ) = lim → n Ext R i ( R / I n , M ) . {\displaystyle H_{I}^{i}(M)=\varinjlim _{n}\operatorname {Ext} _{R}^{i}(R/I^{n},M).}
It follows from either of these definitions that H I i ( M ) {\displaystyle H_{I}^{i}(M)} would be unchanged if I {\displaystyle I} were replaced by another ideal having the same radical . [ 11 ] It also follows that local cohomology does not depend on any choice of generators for I , a fact which becomes relevant in the following definition involving the Čech complex.
The derived functor definition of local cohomology requires an injective resolution of the module M {\displaystyle M} , which can make it inaccessible for use in explicit computations. The Čech complex is seen as more practical in certain contexts. Iyengar et al. (2007) , for example, state that they "essentially ignore" the "problem of actually producing any one of these [injective] kinds of resolutions for a given module" [ 12 ] prior to presenting the Čech complex definition of local cohomology, and Hartshorne (1977) describes Čech cohomology as "giv[ing] a practical method for computing cohomology of quasi-coherent sheaves on a scheme." [ 13 ] and as being "well suited for computations." [ 14 ]
The Čech complex can be defined as a colimit of Koszul complexes K ∙ ( f 1 , … , f n ) {\displaystyle K^{\bullet }(f_{1},\ldots ,f_{n})} where f 1 , … , f n {\displaystyle f_{1},\ldots ,f_{n}} generate I {\displaystyle I} . The local cohomology modules can be described as: H I i ( M ) ≅ lim → t H i ( K ∙ ( f 1 t , … , f n t ; M ) ) . {\displaystyle H_{I}^{i}(M)\cong \varinjlim _{t}H^{i}(K^{\bullet }(f_{1}^{t},\ldots ,f_{n}^{t};M)).}
Koszul complexes have the property that multiplication by f i {\displaystyle f_{i}} induces a chain complex morphism ⋅ f i : K ∙ ( f 1 , … , f n ) → K ∙ ( f 1 , … , f n ) {\displaystyle \cdot f_{i}:K^{\bullet }(f_{1},\ldots ,f_{n})\to K^{\bullet }(f_{1},\ldots ,f_{n})} that is homotopic to zero, [ 16 ] meaning H i ( K ∙ ( f 1 , … , f n ) ) {\displaystyle H^{i}(K^{\bullet }(f_{1},\ldots ,f_{n}))} is annihilated by the f i {\displaystyle f_{i}} . A non-zero map in the colimit of the Hom {\displaystyle \operatorname {Hom} } sets contains maps from all but finitely many Koszul complexes, which are not annihilated by some element in the ideal.
This colimit of Koszul complexes is isomorphic to [ 17 ] the Čech complex , denoted C ˇ ∙ ( f 1 , … , f n ; M ) {\displaystyle {\check {C}}^{\bullet }(f_{1},\ldots ,f_{n};M)} , below.
0 → M → ⨁ i 0 M f i → ⨁ i 0 < i 1 M f i 0 f i 1 → ⋯ → M f 1 ⋯ f n → 0 {\displaystyle 0\to M\to \bigoplus _{i_{0}}M_{f_{i}}\to \bigoplus _{i_{0}<i_{1}}M_{f_{i_{0}}f_{i_{1}}}\to \cdots \to M_{f_{1}\cdots f_{n}}\to 0}
where the i th local cohomology module of M {\displaystyle M} with respect to I = ( f 1 , … , f n ) {\displaystyle I=(f_{1},\ldots ,f_{n})} is isomorphic to [ 18 ] the i th cohomology group of the above chain complex , H I i ( M ) ≅ H i ( C ˇ ∙ ( f 1 , … , f n ; M ) ) . {\displaystyle H_{I}^{i}(M)\cong H^{i}({\check {C}}^{\bullet }(f_{1},\ldots ,f_{n};M)).}
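As a quick worked example, for R = K [ x ] {\displaystyle R=K[x]} and I = ( x ) {\displaystyle I=(x)} the Čech complex has only two terms, 0 → K [ x ] → K [ x ] x → 0 {\displaystyle 0\to K[x]\to K[x]_{x}\to 0} , and one recovers the class [ 1 / x ] {\displaystyle [1/x]} from the introduction: H ( x ) 0 ( K [ x ] ) = 0 {\displaystyle H_{(x)}^{0}(K[x])=0} since K [ x ] {\displaystyle K[x]} has no x {\displaystyle x} -torsion, while H ( x ) 1 ( K [ x ] ) ≅ K [ x ] x / K [ x ] {\displaystyle H_{(x)}^{1}(K[x])\cong K[x]_{x}/K[x]} , a K {\displaystyle K} -vector space with basis { x − 1 , x − 2 , … } {\displaystyle \{x^{-1},x^{-2},\ldots \}} .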
The broader issue of computing local cohomology modules (in characteristic zero ) is discussed in Leykin (2002) and Iyengar et al. (2007 , Lecture 23).
Since local cohomology is defined as a derived functor , for any short exact sequence of R -modules 0 → M 1 → M 2 → M 3 → 0 {\displaystyle 0\to M_{1}\to M_{2}\to M_{3}\to 0} , there is, by definition, a natural long exact sequence in local cohomology 0 → H I 0 ( M 1 ) → H I 0 ( M 2 ) → H I 0 ( M 3 ) → H I 1 ( M 1 ) → ⋯ {\displaystyle 0\to H_{I}^{0}(M_{1})\to H_{I}^{0}(M_{2})\to H_{I}^{0}(M_{3})\to H_{I}^{1}(M_{1})\to \cdots }
There is also a long exact sequence of sheaf cohomology linking the ordinary sheaf cohomology of X and of the open set U = X \ Y , with the local cohomology modules. For a quasicoherent sheaf F defined on X , this has the form 0 → H Y 0 ( X , F ) → H 0 ( X , F ) → H 0 ( U , F ) → H Y 1 ( X , F ) → H 1 ( X , F ) → ⋯ {\displaystyle 0\to H_{Y}^{0}(X,F)\to H^{0}(X,F)\to H^{0}(U,F)\to H_{Y}^{1}(X,F)\to H^{1}(X,F)\to \cdots }
In the setting where X is an affine scheme Spec ( R ) {\displaystyle {\text{Spec}}(R)} and Y is the vanishing set of an ideal I , the cohomology groups H i ( X , F ) {\displaystyle H^{i}(X,F)} vanish for i > 0 {\displaystyle i>0} . [ 19 ] If F = M ~ {\displaystyle F={\tilde {M}}} , this leads to an exact sequence 0 → H I 0 ( M ) → M → H 0 ( U , M ~ ) → H I 1 ( M ) → 0 , {\displaystyle 0\to H_{I}^{0}(M)\to M\to H^{0}(U,{\tilde {M}})\to H_{I}^{1}(M)\to 0,}
where the middle map is the restriction of sections. The target of this restriction map is also referred to as the ideal transform . For n ≥ 1, there are isomorphisms H n ( U , M ~ ) ≅ H I n + 1 ( M ) . {\displaystyle H^{n}(U,{\tilde {M}})\cong H_{I}^{n+1}(M).}
Because of the above isomorphism with sheaf cohomology , local cohomology can be used to express a number of meaningful topological constructions on the scheme X = Spec ( R ) {\displaystyle X=\operatorname {Spec} (R)} in purely algebraic terms. For example, there is a natural analogue in local cohomology of the Mayer–Vietoris sequence with respect to a pair of open sets U and V in X , given by the complements of the closed subschemes corresponding to a pair of ideals I and J , respectively. [ 20 ] This sequence has the form ⋯ → H I + J i ( M ) → H I i ( M ) ⊕ H J i ( M ) → H I ∩ J i ( M ) → H I + J i + 1 ( M ) → ⋯ {\displaystyle \cdots \to H_{I+J}^{i}(M)\to H_{I}^{i}(M)\oplus H_{J}^{i}(M)\to H_{I\cap J}^{i}(M)\to H_{I+J}^{i+1}(M)\to \cdots }
for any R {\displaystyle R} -module M {\displaystyle M} .
The vanishing of local cohomology can be used to bound the least number of equations (referred to as the arithmetic rank ) needed to (set theoretically) define the algebraic set V ( I ) {\displaystyle V(I)} in Spec ( R ) {\displaystyle \operatorname {Spec} (R)} . If J {\displaystyle J} has the same radical as I {\displaystyle I} , and is generated by n {\displaystyle n} elements, then the Čech complex on the generators of J {\displaystyle J} has no terms in degree i > n {\displaystyle i>n} . The least number of generators among all ideals J {\displaystyle J} such that J = I {\displaystyle {\sqrt {J}}={\sqrt {I}}} is the arithmetic rank of I {\displaystyle I} , denoted ara ( I ) {\displaystyle \operatorname {ara} (I)} . [ 21 ] Since the local cohomology with respect to I {\displaystyle I} may be computed using any such ideal, it follows that H I i ( M ) = 0 {\displaystyle H_{I}^{i}(M)=0} for i > ara ( I ) {\displaystyle i>\operatorname {ara} (I)} . [ 22 ]
When R {\displaystyle R} is graded by N {\displaystyle \mathbb {N} } , I {\displaystyle I} is generated by homogeneous elements, and M {\displaystyle M} is a graded module, there is a natural grading on the local cohomology module H I i ( M ) {\displaystyle H_{I}^{i}(M)} that is compatible with the gradings of M {\displaystyle M} and R {\displaystyle R} . [ 23 ] All of the basic properties of local cohomology expressed in this article are compatible with the graded structure. [ 24 ] If M {\displaystyle M} is finitely generated and I = m {\displaystyle I={\mathfrak {m}}} is the ideal generated by the elements of R {\displaystyle R} having positive degree, then the graded components H m i ( M ) n {\displaystyle H_{\mathfrak {m}}^{i}(M)_{n}} are finitely generated over R {\displaystyle R} and vanish for sufficiently large n {\displaystyle n} . [ 25 ]
The case where I = m {\displaystyle I={\mathfrak {m}}} is the ideal generated by all elements of positive degree (sometimes called the irrelevant ideal ) is particularly special, due to its relationship with projective geometry. [ 26 ] In this case, there is an isomorphism (for i ≥ 1 {\displaystyle i\geq 1} ) H m i + 1 ( M ) ≅ ⨁ k ∈ Z H i ( Proj ( R ) , M ~ ( k ) ) , {\displaystyle H_{\mathfrak {m}}^{i+1}(M)\cong \bigoplus _{k\in \mathbb {Z} }H^{i}(\operatorname {Proj} (R),{\tilde {M}}(k)),}
where Proj ( R ) {\displaystyle {\text{Proj}}(R)} is the projective scheme associated to R {\displaystyle R} , and ( k ) {\displaystyle (k)} denotes the Serre twist . This isomorphism is graded, giving H m i + 1 ( M ) n ≅ H i ( Proj ( R ) , M ~ ( n ) ) {\displaystyle H_{\mathfrak {m}}^{i+1}(M)_{n}\cong H^{i}(\operatorname {Proj} (R),{\tilde {M}}(n))}
in all degrees n {\displaystyle n} . [ 27 ]
This isomorphism relates local cohomology with the global cohomology of projective schemes . For example, the Castelnuovo–Mumford regularity can be formulated using local cohomology [ 28 ] as reg ( M ) = max i { end ( H m i ( M ) ) + i } , {\displaystyle \operatorname {reg} (M)=\max _{i}\left\{\operatorname {end} (H_{\mathfrak {m}}^{i}(M))+i\right\},}
where end ( N ) {\displaystyle {\text{end}}(N)} denotes the highest degree t {\displaystyle t} such that N t ≠ 0 {\displaystyle N_{t}\neq 0} . Local cohomology can be used to prove certain upper bound results concerning the regularity. [ 29 ]
Using the Čech complex, if I = ( f 1 , … , f n ) R {\displaystyle I=(f_{1},\ldots ,f_{n})R} the local cohomology module H I n ( M ) {\displaystyle H_{I}^{n}(M)} is generated over R {\displaystyle R} by the images of the formal fractions [ m f 1 t 1 ⋯ f n t n ] {\displaystyle \left[{\frac {m}{f_{1}^{t_{1}}\cdots f_{n}^{t_{n}}}}\right]}
for m ∈ M {\displaystyle m\in M} and t 1 , … , t n ≥ 1 {\displaystyle t_{1},\ldots ,t_{n}\geq 1} . [ 30 ] This fraction corresponds to a nonzero element of H I n ( M ) {\displaystyle H_{I}^{n}(M)} if and only if there is no k ≥ 0 {\displaystyle k\geq 0} such that ( f 1 ⋯ f n ) k m ∈ ( f 1 t 1 + k , … , f n t n + k ) M {\displaystyle (f_{1}\cdots f_{n})^{k}m\in (f_{1}^{t_{1}+k},\ldots ,f_{n}^{t_{n}+k})M} . [ 31 ] For example, if t i = 1 {\displaystyle t_{i}=1} , then the fraction m / ( f 1 ⋯ f n ) {\displaystyle m/(f_{1}\cdots f_{n})} represents a nonzero class if and only if ( f 1 ⋯ f n ) k m ∉ ( f 1 k + 1 , … , f n k + 1 ) M {\displaystyle (f_{1}\cdots f_{n})^{k}m\notin (f_{1}^{k+1},\ldots ,f_{n}^{k+1})M} for every k ≥ 0 {\displaystyle k\geq 0} .
If H 0 ( U , R ~ ) {\displaystyle H^{0}(U,{\tilde {R}})} is known (where U = Spec ( R ) − V ( I ) {\displaystyle U=\operatorname {Spec} (R)-V(I)} ), the module H I 1 ( R ) {\displaystyle H_{I}^{1}(R)} can sometimes be computed explicitly using the sequence 0 → H I 0 ( R ) → R → H 0 ( U , R ~ ) → H I 1 ( R ) → 0. {\displaystyle 0\to H_{I}^{0}(R)\to R\to H^{0}(U,{\tilde {R}})\to H_{I}^{1}(R)\to 0.}
In the following examples, K {\displaystyle K} is any field .
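For instance, here is a standard first example, sketched using the Čech complex on $x,y$ for $R = K[x,y]$ and $I = \mathfrak{m} = (x,y)$:

```latex
% H_m^2(K[x,y]) is the cokernel of R_x (+) R_y -> R_{xy}; a K-basis is
% given by the "inverse monomials" x^{-a} y^{-b} with a, b >= 1. Each
% graded piece (in degree -n, n >= 2) is finite-dimensional of dimension
% n - 1, and the components vanish in all sufficiently large positive
% degrees, as the graded theory above predicts.
\[
H_{\mathfrak m}^{2}\bigl(K[x,y]\bigr)
  \;\cong\; \bigoplus_{a,\,b \,\ge\, 1} K \cdot x^{-a} y^{-b}
\]
```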
The dimension $\dim_{R}(M)$ of a module (defined as the Krull dimension of its support) provides an upper bound for local cohomology modules: [ 35 ]

$H_{I}^{n}(M) = 0 \quad \text{for } n > \dim_{R}(M).$
If $R$ is local and $M$ finitely generated , then this bound is sharp, i.e., $H_{\mathfrak{m}}^{\dim M}(M) \neq 0$.
The depth (defined as the maximal length of a regular $M$-sequence ; also referred to as the grade of $M$) provides a sharp lower bound, i.e., it is the smallest integer $n$ such that [ 36 ]

$H_{\mathfrak{m}}^{n}(M) \neq 0.$
These two bounds together yield a characterisation of Cohen–Macaulay modules over local rings: they are precisely those modules where H m n ( M ) {\displaystyle H_{\mathfrak {m}}^{n}(M)} vanishes for all but one n .
The local duality theorem is a local analogue of Serre duality . For a Cohen–Macaulay local ring $R$ of dimension $d$ that is a homomorphic image of a Gorenstein local ring [ 37 ] (for example, if $R$ is complete [ 38 ] ), it states that the natural pairing

$H_{\mathfrak{m}}^{i}(M) \times \operatorname{Ext}_{R}^{d-i}(M,\omega_{R}) \longrightarrow H_{\mathfrak{m}}^{d}(\omega_{R})$
is a perfect pairing , where $\omega_{R}$ is a dualizing module for $R$. [ 39 ] In terms of the Matlis duality functor $D(-)$, the local duality theorem may be expressed as the isomorphism [ 40 ]

$D\!\left(H_{\mathfrak{m}}^{i}(M)\right) \cong \operatorname{Ext}_{R}^{d-i}(M,\omega_{R}).$
The statement is simpler when ω R ≅ R {\displaystyle \omega _{R}\cong R} , which is equivalent [ 41 ] to the hypothesis that R {\displaystyle R} is Gorenstein . This is the case, for example, if R {\displaystyle R} is regular .
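In the Gorenstein case the statement specializes as follows (a standard reformulation, sketched with the notation above):

```latex
% With omega_R isomorphic to R (R Gorenstein local of dimension d, e.g.
% R regular), local duality reads
\[
D\!\left(H_{\mathfrak m}^{i}(M)\right)
  \;\cong\; \operatorname{Ext}_{R}^{\,d-i}(M,\,R).
\]
% Taking M = R recovers H_m^i(R) = 0 for i != d, while H_m^d(R) is the
% injective hull E(k) of the residue field.
```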
The initial applications were to analogues of the Lefschetz hyperplane theorems . In general such theorems state that homology or cohomology is supported on a hyperplane section of an algebraic variety , except for some 'loss' that can be controlled. These results applied to the algebraic fundamental group and to the Picard group .
Another type of application is connectedness theorems such as Grothendieck's connectedness theorem (a local analogue of the Bertini theorem ) or the Fulton–Hansen connectedness theorem due to Fulton & Hansen (1979) and Faltings (1979) . The latter asserts that for two projective varieties $V$ and $W$ in $\mathbf{P}^{r}$ over an algebraically closed field , the connectedness dimension of $Z = V \cap W$ (i.e., the minimal dimension of a closed subset $T$ of $Z$ that has to be removed from $Z$ so that the complement $Z \setminus T$ is disconnected ) is bounded below by

$c(Z) \geq \dim V + \dim W - r - 1.$
For example, Z is connected if dim V + dim W > r . [ 42 ]
In polyhedral geometry, a key ingredient of Stanley's 1975 proof of the simplicial form of McMullen's upper bound theorem involves showing that the Stanley–Reisner ring of the corresponding simplicial complex is Cohen–Macaulay , and local cohomology is an important tool in this computation, via Hochster's formula. [ 43 ] [ 6 ] [ 44 ] | https://en.wikipedia.org/wiki/Local_cohomology
Local coordinates are coordinates used in a local coordinate system or a local coordinate space . Several simple examples follow.
Local systems exist for convenience. In ancient times, all work was done on a relative basis, as there was no conception of global systems. In practice, local systems remain better suited to small-scale work such as houses and buildings: for most applications, what is wanted is the position of an element relative to a building or location or, more locally still, relative to a piece of furniture or a person. One does not normally give one's position in geographical coordinates, but rather as something like "I am 15 meters away from the entrance to the building", so this is a very common way to locate things.
It is possible to determine latitude and longitude for any terrestrial location, but unless one has a highly precise GPS device or makes astronomical observations, this is impractical; it is much simpler to use a tape, a rope, or a chain. The (global) position information must then be transformed into a location. Position refers to a numeric or symbolic description within a spatial reference system, whereas location refers to information about surrounding objects and their interrelationships. [ 1 ] ( Topological space )
In computer graphics and computer animation , local coordinate spaces are also useful for their ability to model independently transformable aspects of geometrical scene graphs . When modeling a car, for example, it is desirable to describe the center of each wheel with respect to the car's coordinate system, but then specify the shape of each wheel in separate local spaces centered about these points. This way, the information describing each wheel can be simply duplicated four times, and independent transformations (e.g., steering rotation) can be similarly effected. Bounding volumes of objects may be described more accurately using extents in the local coordinates, (i.e. an object oriented bounding box , contrasted with the simpler axis aligned bounding box ). The trade-off for this flexibility is additional computational cost: the rendering system must access the higher-level coordinate system of the car and combine it with the space of each wheel in order to draw everything in its proper place.
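To make the wheel example concrete, here is a minimal sketch in Python with NumPy (hypothetical names and numbers, not any particular engine's API): each wheel's geometry lives in its own local space, and rendering composes the car-to-world transform with each wheel-to-car transform.

```python
import numpy as np

def translation(tx, ty):
    """Homogeneous 2-D translation matrix."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    """Homogeneous 2-D rotation matrix (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# One wheel shape, defined once in its own local space (origin at the hub).
wheel_pts = np.array([[0.3, 0.0, 1.0],
                      [0.0, 0.3, 1.0],
                      [-0.3, 0.0, 1.0],
                      [0.0, -0.3, 1.0]]).T   # columns are homogeneous points

# Wheel-to-car transforms: the same geometry duplicated at four hub
# positions, with an independent steering rotation on the front wheels.
steer = rotation(np.deg2rad(15))
wheel_to_car = [
    translation( 1.2,  0.8) @ steer,   # front-left
    translation( 1.2, -0.8) @ steer,   # front-right
    translation(-1.2,  0.8),           # rear-left
    translation(-1.2, -0.8),           # rear-right
]

# Car-to-world transform: place and orient the whole car in the scene.
car_to_world = translation(10.0, 5.0) @ rotation(np.deg2rad(30))

# Rendering combines the higher-level (car) space with each wheel's space.
for i, w2c in enumerate(wheel_to_car):
    world_pts = car_to_world @ w2c @ wheel_pts
    print(f"wheel {i}: first point in world coords =", world_pts[:2, 0])
```

Changing `car_to_world` moves all four wheels at once, while changing a single entry of `wheel_to_car` (for example the steering rotation) transforms one wheel independently, which is exactly the independence the scene-graph structure is meant to provide.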
Local coordinates also afford digital designers a means around the finite limits of numerical representation. The tread marks on a tire, for example, can be described using millimeters by allowing the whole tire to occupy the entire range of numeric precision available. The larger aspects of the car, such as its frame, might be described in centimeters, and the terrain that the car travels on could be specified in meters.
In differential topology , local coordinates on a manifold are defined by means of an atlas of charts . The basic idea behind coordinate charts is that each small patch of a manifold can be endowed with a set of local coordinates. These are collected together into an atlas, and stitched together in such a way that they are self-consistent on the manifold.
In cartography and mapping, the traditional way of working is with a local datum . With a local datum, [ 2 ] land can be mapped over a relatively small area such as a country. With the need for global systems, transformations between datums became a problem, so global geodetic datums were created. More than 150 local datums have been used around the world. | https://en.wikipedia.org/wiki/Local_coordinates
Local elevation is a technique used in computational chemistry or physics , mainly in the field of molecular simulation (including molecular dynamics ( MD ) and Monte Carlo ( MC ) simulations). It was developed in 1994 by Huber, Torda and van Gunsteren [ 1 ] to enhance the searching of conformational space in molecular dynamics simulations and is available in the GROMOS software for molecular dynamics simulation (since GROMOS96). The method was, together with the conformational flooding method, [ 2 ] the first to introduce memory dependence into molecular simulations. Many recent methods build on the principles of the local elevation technique, including the Engkvist–Karlström, [ 3 ] adaptive biasing force, [ 4 ] Wang–Landau , metadynamics , adaptively biased molecular dynamics, [ 5 ] adaptive reaction coordinate forces, [ 6 ] and local elevation umbrella sampling [ 7 ] methods.
The basic principle of the method is to add a memory-dependent potential energy term to the simulation so as to prevent the simulation from revisiting already-sampled configurations, which increases the probability of discovering new configurations. The method can be seen as a continuous variant of the Tabu search method.
The basic step of the algorithm is to add a small, repulsive potential energy function at the current configuration of the molecule, so as to penalize this configuration and increase the likelihood of discovering other configurations. This requires the selection of a subset $\mathbf{Q}(\mathbf{r})$ of the degrees of freedom, which defines the relevant conformational variables. These are typically a set of conformationally relevant dihedral angles, but can in principle be any differentiable functions of the Cartesian coordinates $\mathbf{r}$.
The algorithm deforms the physical potential energy surface by introducing a bias energy, such that the total potential energy is defined as

$U_{tot}(\mathbf{r};t) = U_{phys}(\mathbf{r}) + U_{bias}^{LE}(\mathbf{Q}(\mathbf{r});t).$
The local elevation bias $U_{bias}^{LE}(\mathbf{Q};t)$ depends on the simulation time $t$, is set to zero at the start of the simulation ($U_{bias}^{LE}(\mathbf{Q};t=0)=0$), and is gradually built up as a sum of small, repulsive functions, giving

$U_{bias}^{LE}(\mathbf{Q};t_{n+1}) = U_{bias}^{LE}(\mathbf{Q};t_{n}) + k_{LE}\,F(\mathbf{Q}-\mathbf{Q}_{n+1}),$
where k L E {\displaystyle k_{LE}} is a scaling constant and F ( Q − Q n + 1 ) {\displaystyle F(\mathbf {Q} -\mathbf {Q} _{n+1})} is a multidimensional, repulsive function with F ( 0 ) = 1 {\displaystyle F(0)=1} .
The resulting bias potential will be the sum of all the added functions,

$U_{bias}^{LE}(\mathbf{Q};t_{N}) = k_{LE} \sum_{i=1}^{N} F(\mathbf{Q}-\mathbf{Q}_{i}),$

where $N$ is the number of configurations visited so far.
To reduce the number of added repulsive functions, a common approach is to add the functions at grid points. The original choice of $F(\mathbf{Q}-\mathbf{Q}_{i})$ is a multidimensional Gaussian function . However, due to the infinite range of the Gaussian, as well as the artifacts that can occur with a sum of gridded Gaussians, a better choice is to apply multidimensional truncated polynomial functions. [ 8 ] [ 9 ]
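As an illustration, here is a minimal, self-contained Python sketch of the build-up of a one-dimensional local elevation bias along a single dihedral angle, using Gaussian repulsive functions deposited on a grid (all names and parameter values are hypothetical choices for the sketch, not GROMOS settings):

```python
import numpy as np

# Grid over one periodic dihedral angle Q (in radians).
n_bins = 72
grid = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)
bias = np.zeros(n_bins)          # U_bias^LE on the grid, zero at t = 0

k_LE = 0.05                      # scaling constant (energy units), hypothetical
width = np.deg2rad(10.0)         # Gaussian width, hypothetical

def periodic_diff(a, b):
    """Smallest signed angular difference a - b, wrapped to (-pi, pi]."""
    d = a - b
    return (d + np.pi) % (2.0 * np.pi) - np.pi

def deposit(q_visited):
    """Add one small repulsive Gaussian (F(0) = 1) centered at the
    currently visited value of Q, accumulated on the grid points."""
    d = periodic_diff(grid, q_visited)
    bias[:] += k_LE * np.exp(-0.5 * (d / width) ** 2)

# Fake trajectory of visited Q values standing in for an MD simulation:
rng = np.random.default_rng(0)
for _ in range(5000):
    deposit(rng.normal(loc=1.0, scale=0.3))   # simulation stuck near Q ~ 1 rad

# In the long run the negative of the accumulated bias approximates the
# free-energy profile along Q (the Engkvist-Karlström observation
# mentioned in the text), up to an additive constant.
free_energy_estimate = -(bias - bias.max())
print("most-penalized (most-visited) Q =", grid[np.argmax(bias)])
```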
The local elevation method can be applied to free energy calculations as well as to conformational searching problems. In free energy calculations the local elevation technique is applied to level out the free energy surface along the selected set of variables. It has been shown by Engkvist and Karlström [ 3 ] that the bias potential built by the local elevation method will approximate the negative of the free energy surface. The free energy surface can therefore be approximated directly from the bias potential (as done in the metadynamics method) or the bias potential can be used for umbrella sampling (as done in metadynamics with umbrella sampling corrections [ 10 ] and local elevation umbrella sampling [ 7 ] methods) to obtain more accurate free energies. | https://en.wikipedia.org/wiki/Local_elevation |
In topology , a branch of mathematics , local flatness is a smoothness condition that can be imposed on topological submanifolds . In the category of topological manifolds, locally flat submanifolds play a role similar to that of embedded submanifolds in the category of smooth manifolds . Violations of local flatness describe ridge networks and crumpled structures , with applications to materials processing and mechanical engineering .
Suppose a $d$-dimensional manifold $N$ is embedded into an $n$-dimensional manifold $M$ (where $d < n$). If $x \in N$, we say $N$ is locally flat at $x$ if there is a neighborhood $U \subset M$ of $x$ such that the topological pair $(U, U \cap N)$ is homeomorphic to the pair $(\mathbb{R}^{n},\mathbb{R}^{d})$, with the standard inclusion of $\mathbb{R}^{d} \to \mathbb{R}^{n}$. That is, there exists a homeomorphism $U \to \mathbb{R}^{n}$ such that the image of $U \cap N$ coincides with $\mathbb{R}^{d}$. In diagrammatic terms, the following square must commute , where the horizontal maps are the inclusions and the vertical maps are the homeomorphism and its restriction:

$\begin{array}{ccc} U \cap N & \hookrightarrow & U \\ \downarrow & & \downarrow \\ \mathbb{R}^{d} & \hookrightarrow & \mathbb{R}^{n} \end{array}$
We call N locally flat in M if N is locally flat at every point. Similarly, a map χ : N → M {\displaystyle \chi \colon N\to M} is called locally flat , even if it is not an embedding, if every x in N has a neighborhood U whose image χ ( U ) {\displaystyle \chi (U)} is locally flat in M .
The above definition assumes that, if M has a boundary , x is not a boundary point of M . If x is a point on the boundary of M then the definition is modified as follows. We say that N is locally flat at a boundary point x of M if there is a neighborhood U ⊂ M {\displaystyle U\subset M} of x such that the topological pair ( U , U ∩ N ) {\displaystyle (U,U\cap N)} is homeomorphic to the pair ( R + n , R d ) {\displaystyle (\mathbb {R} _{+}^{n},\mathbb {R} ^{d})} , where R + n {\displaystyle \mathbb {R} _{+}^{n}} is a standard half-space and R d {\displaystyle \mathbb {R} ^{d}} is included as a standard subspace of its boundary.
Local flatness of an embedding implies strong properties not shared by all embeddings. Brown (1962) proved that if d = n − 1, then N is collared; that is, it has a neighborhood which is homeomorphic to N × [0,1] with N itself corresponding to N × 1/2 (if N is in the interior of M ) or N × 0 (if N is in the boundary of M ).
Let K {\displaystyle K} be a non-trivial knot in S 3 {\displaystyle S^{3}} ; that is, a connected, locally flat one-dimensional submanifold of S 3 {\displaystyle S^{3}} such that the pair ( S 3 , K ) {\displaystyle (S^{3},K)} is not homeomorphic to ( S 3 , S 1 ) {\displaystyle (S^{3},S^{1})} . Then the cone on K {\displaystyle K} from the center 0 _ {\displaystyle {\underline {0}}} of D 4 {\displaystyle D^{4}} is a submanifold of D 4 {\displaystyle D^{4}} , but it is not locally flat at 0 _ {\displaystyle {\underline {0}}} . [ 1 ] | https://en.wikipedia.org/wiki/Local_flatness |
Local food is food that is produced within a short distance of where it is consumed, often accompanied by a social structure and supply chain different from the large-scale supermarket system . [ 1 ]
Local food (or locavore ) movements aim to connect food producers and consumers in the same geographic region, to develop more self-reliant and resilient food networks , improve local economies, or to affect the health , environment, community, or society of a particular place. [ 2 ] The term has also been extended to include not only the geographic location of supplier and consumer but can also be "defined in terms of social and supply chain characteristics." [ 3 ] For example, local food initiatives often promote sustainable and organic farming practices, although these are not explicitly related to the geographic proximity of producer and consumer.
Local food represents an alternative to the global food model, which often sees food traveling long distances before it reaches the consumer. [ 4 ]
In the US, the local food movement has been traced to the Agricultural Adjustment Act of 1933, which spawned agricultural subsidies and price supports. [ 5 ] The contemporary American movement can be traced back to proposed resolutions to the Society for Nutrition Education 's 1981 guidelines. These largely unsuccessful resolutions encouraged increased local production to slow farmland loss. The program described "sustainable diets" - a term then new to the American public. At the time, the resolutions were met with strong criticism from pro-business institutions, but have had a strong resurgence of backing since 2000. [ 6 ] In 1994, Chicago pop culture made local food a trend in the Midwest.
In 2008, the United States farm bill was revised to emphasise nutrition: "it provides low-income seniors with vouchers for use at local produce markets, and it added more than $1 billion to the fresh fruit and vegetable program, which serves healthy snacks to 3 million low-income children in schools". [ 7 ]
No single definition of local food systems exists. [ 8 ] The geographic distance between production and consumption varies within the movement. However, the general public recognizes that "local" describes the marketing arrangement (e.g. farmers selling directly to consumers at regional farmers' markets or to schools). [ 3 ] Definitions can be based on political or geographic boundaries, or on food miles . [ 4 ] The American Food, Conservation, and Energy Act of 2008 states that:
(I) the locality or region in which the final product is marketed, so that the total distance that the product is transported is less than 400 miles from the origin of the product; or (II) the State in which the product is produced.
In May 2010 the USDA acknowledged this definition in an informational leaflet. [ 3 ]
State definitions of "local" can be included in laws, statutes, regulations, or program materials, however few state laws explicitly define "local" food. Most states use "local" (or similar words like "native") in food procurement and marketing policies to mean that the food was produced within that state. [ 8 ]
The concept of "local" is also seen in terms of ecology , where food production is considered from the perspective of a basic ecological unit defined by its climate, soil, watershed , species and local agrisystems , a unit also called an ecoregion or a foodshed . Similar to watersheds, foodsheds follow the process of where food comes from and where it ends up. [ 10 ]
In America, local food sales were worth $1.2 billion in 2007, more than double the $551 million of 1997. There were 5,274 farmers' markets in 2009, compared to 2,756 in 1998. In 2005, there were 1,144 community-supported agriculture organizations (CSAs). There were 2,095 farm to school programs in 2009. [ 3 ] Using metrics such as these, a Vermont-based farm and food advocacy organization, Strolling of the Heifers , publishes the annual Locavore Index, a ranking of the 50 U.S. states plus Puerto Rico and the District of Columbia . In the 2016 Index, the three top-ranking states were Vermont, Maine and Oregon, while the three lowest-ranking states were Nevada, Texas and Florida. [ 11 ]
Websites now exist that aim to connect people to local food growers. [ 12 ] They often include a map where fruit and vegetable growers can pinpoint their location and advertise their produce.
Supermarket chains also participate in the local food scene. In 2008 Walmart announced plans to invest $400 million in locally grown produce. [ 13 ] Other chains, like Wegmans (a 71-store chain across the Northeast), have long cooperated with the local food movement. [ 13 ] A recent study led by economist Miguel Gomez found that the supermarket supply chain often did much better in terms of food miles and fuel consumption per pound than farmers' markets. [ 14 ]
Local food campaigns have been successful in supporting small local farmers. After declining for more than a century, the number of small farms increased 20% in the six years to 2008, to 1.2 million, according to the Agriculture Department (USDA). [ 15 ]
Launched in 2009, North Carolina's 10% local food campaign is aimed at stimulating economic development, creating jobs and promoting the state's agricultural offerings. [ 16 ] [ 17 ] The campaign is a partnership between The Center for Environmental Farming Systems (CEFS), with support from N.C. Cooperative Extension and the Golden LEAF Foundation . [ 18 ]
In 2017, a campaign was started in Virginia by the Common Grains Alliance mirroring many of the efforts of the North Carolina campaign. [ 19 ]
Motivations for eating local food include healthier food, environmental benefits, and economic or community benefits. Many local farmers, whom locavores turn to for their source of food, use the crop rotation method when producing their organic crops. This method not only aids in reducing the use of pesticides and pollutants, but also keeps the soil in good condition rather than depleting it. [ 20 ] Locavores seek out farmers close to where they live, and this significantly reduces the amount of travel time required for food to get from farm to table. Reducing the travel time makes it possible to transport the crops while they are still fresh, without using chemical preservatives. [ 21 ] The combination of local farming techniques and short travel distances makes the food consumed more likely to be fresh, an added benefit.
Local eating can support public objectives. It can promote community interaction by fostering relationships between farmers and consumers. Farmers' markets can inspire more sociable behavior, encouraging shoppers to visit in groups. 75% of shoppers at farmers' markets arrived in groups compared to 16% of shoppers at supermarkets. At farmers' markets, 63% had an interaction with a fellow shopper, and 42% had an interaction with an employee or farmer. [ 22 ] More affluent areas tend to have at least some access to local, organic food , whereas low-income communities, which in America often have African American and Hispanic populations, may have little or none, and "are often replete with calorie-dense, low-quality food options", adding to the obesity crisis . [ 7 ] [ 23 ]
Local foods require less energy to store and transport, possibly reducing greenhouse gas emissions . [ 24 ] In local or regional food systems it can be easier to trace resource flows and recycle nutrients in that specific region. [ 25 ] It can also be a way to preserve open landscapes and support biodiversity locally. [ 26 ] [ 27 ] [ 28 ]
Farmers' markets create local jobs . In a study in Iowa (Hood 2010), the introduction of 152 farmers' markets created 576 jobs, a $59.4 million increase in output, and a $17.8 million increase in income. [ 22 ] Promoting local food can support local food actors in the food supply chain and create job opportunities. [ 26 ] [ 28 ] [ 27 ]
Since local foods travel a shorter distance and are often sold directly from producer to consumer, they may not require as much processing or packaging as other foods that need to be transported over long distances. If they are not processed, they may contain fewer added sugars or preservatives. The term "local" is sometimes synonymous with sustainable or organic practices , which can also arguably provide added health benefits. [ 8 ]
Critics of the local foods movement question its fundamental principles. For example, the concept that fewer "food miles" translate into a more sustainable meal has not been supported by major scientific studies. According to a study conducted at Lincoln University in New Zealand : "As a concept, food miles has gained some traction with the popular press and certain groups overseas. However, this debate which only includes the distance food travels is spurious as it does not consider total energy use especially in the production of the product." [ 29 ] The locavore movement has been criticized by Vasile Stănescu, the co-senior editor of the Critical Animal Studies book series, as idealistic and as failing to deliver the claimed environmental benefit of reduced emissions from shorter food miles. [ 30 ] Studies have shown that the emissions saved by shorter transport distances, while real, are not large enough to count as a significant benefit. The food-miles concept also ignores agricultural production itself, which contributes the largest share of food-related greenhouse gas emissions; the season and the mode of transport make a difference as well. [ 31 ]
The only [ citation needed ] study to date that directly focuses on whether a local diet helps reduce greenhouse gas emissions was conducted by Christopher L. Weber and H. Scott Matthews at Carnegie Mellon . They concluded that "dietary shift can be a more effective means of lowering an average household's food-related climate footprint than 'buying local'". [ 32 ] An Our World In Data post makes the same point: food choice is overwhelmingly more important than emissions from transport. [ 33 ] However, a 2022 study suggests global food-miles CO 2 emissions are 3.5–7.5 times higher than previously estimated, with transport accounting for about 19% of total food-system emissions, [ 34 ] [ 35 ] though shifting towards plant-based diets would still remain substantially more important. [ 36 ] The study concludes that "a shift towards plant-based foods must be coupled with more locally produced items, mainly in affluent countries". [ 35 ]
Numerous studies have shown that locally and sustainably grown foods can release more greenhouse gases than food produced in intensive, factory-farm systems. The "Land Degradation" section of the United Nations report Livestock's Long Shadow concludes that " Intensification - in terms of increased productivity both in livestock production and in feed crop agriculture - can reduce greenhouse gas emissions from deforestation ". [ 37 ] Nathan Pelletier of Dalhousie University in Halifax, Nova Scotia found that cattle raised on open pastures release 50% more greenhouse gas emissions than cattle raised in factory farms. [ 38 ] Adrian Williams of Cranfield University in England found that free-range and organically raised chickens have a 20% greater impact on global warming than chickens raised in factory farm conditions, and that organic egg production has a 14% higher impact on the climate than factory farm egg production. [ citation needed ] Studies such as Christopher Weber's report on food miles have shown that total greenhouse gas emissions from production far outweigh those from transportation, which undercuts the assumption that locally grown food is necessarily better for the environment than food from factory farms.
While locavorism has been promoted as a feasible alternative to modern food production, some believe it might reduce the efficiency of production. [ 39 ] Technological advances have driven a surge in farm output, and the productivity of farmers has skyrocketed over the last 70 years. These criticisms combine with deeper concerns about food safety, citing the historical pattern of economic inefficiency and food insecurity under subsistence farming, which form the topic of the book The Locavore's Dilemma by geographer Pierre Desrochers and public policy scholar Hiroko Shimizu . [ 39 ] | https://en.wikipedia.org/wiki/Local_food
Local hormones are a large group of signaling molecules that do not circulate within the blood. Local hormones are produced by nerve and gland cells and bind either to neighboring cells or to the same type of cell that produced them. Local hormones are activated and inactivated quickly. [ 1 ] They are released during physical work and exercise. They mainly control smooth and vascular muscle dilation. [ 2 ] The strength of the response depends upon the concentration of receptors on the target cell and the amount of ligand (the specific local hormone). [ 3 ]
Eicosanoids (ī′kō-să-noydz; eicosa = twenty, eidos = formed) are a primary type of local hormone. They are polyunsaturated fatty acid derivatives containing 20 carbon atoms, derived from phospholipids in the cell membrane or from the diet. Eicosanoids initiate either autocrine stimulation or paracrine stimulation . There are two main types of eicosanoids: prostaglandins and leukotrienes. Eicosanoids are the product of a ubiquitous pathway that first produces arachidonic acid , and then the eicosanoid product.
Prostaglandins are the most diverse category of eicosanoids and are thought to be synthesized in most tissues of the body. This type of local hormone stimulates pain receptors and increases the inflammatory response . Nonsteroidal anti-inflammatory drugs stop the formation of prostaglandins, thus inhibiting these responses.
Leukotrienes are a type of eicosanoids that are produced in leukocytes and function in inflammatory mediation. [ 4 ]
Paracrines (para- = beside or near) are local hormones that act on neighboring cells. [ 1 ] This type of signaling involves the secretion of paracrine factors, which travel a short distance in the extracellular environment to affect nearby cells. These factors can be excitatory or inhibitory. A few families of factors are very important in embryonic development, including the fibroblast growth factors . [ 1 ]
Juxtacrines (juxta = near) are local hormones that require close contact and act on either the cell which emitted them or on adjacent cells. [ 5 ]
According to structural and functional similarity, many local hormones fall into either the gastrin or the secretin family. [ 6 ]
The gastrin family is a group of peptides evolutionarily similar in structure and function. They are commonly synthesized in antroduodenal G-cells and regulate gastric function, including gastric acid secretion and mucosal growth. [ 7 ]
The secretin family consists of peptides that act as local hormones regulating the activity of G-protein coupled receptors . They are most often found in the pancreas and the intestines. Secretin was discovered in 1902 by E. H. Starling. It was later linked to chemical regulation and was the first substance to be deemed a hormone. [ 8 ] | https://en.wikipedia.org/wiki/Local_hormone
A local information system ( LIS ) is a form of information system built with business intelligence tools , designed primarily to support geographic reporting . They overlap with some capabilities of geographic information systems (GIS), although their primary function is the reporting of statistical data rather than the analysis of geospatial data . LIS also tend to offer some common knowledge management functionality for storage and retrieval of unstructured data such as documents. They deliver functionality to load, store, analyse and present statistical data that has a strong geographic reference. In most cases the data is structured as indicators and is linked to discrete geographic areas, for example population figures for US counties or numbers claiming unemployment benefit across wards in England . The ability to present this data using data visualization tools like charts and maps is also a core feature of these systems.
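As an illustration of the kind of data model such systems are built around, here is a minimal Python sketch (all names and figures are hypothetical, not taken from any particular LIS product): indicators are stored as values linked to discrete geographic areas, from which simple geographic reports can be produced.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    indicator: str   # e.g. "population" or "unemployment_claimants"
    area_code: str   # code of a discrete geographic area (e.g. a ward)
    period: int      # time period, here a year
    value: float

# Hypothetical observations: indicator values linked to ward-level areas.
store = [
    Observation("unemployment_claimants", "WARD-001", 2023, 412),
    Observation("unemployment_claimants", "WARD-002", 2023, 198),
    Observation("population", "WARD-001", 2023, 10250),
    Observation("population", "WARD-002", 2023, 8800),
]

def report(indicator: str, period: int) -> dict[str, float]:
    """A geographic report: indicator values keyed by area code."""
    return {o.area_code: o.value
            for o in store
            if o.indicator == indicator and o.period == period}

# Derived intelligence: combine two indicators into a rate per 1,000 people,
# the sort of output a LIS would then present as a chart or thematic map.
claims = report("unemployment_claimants", 2023)
pop = report("population", 2023)
rate = {area: 1000 * claims[area] / pop[area] for area in claims}
print(rate)   # e.g. {'WARD-001': 40.19..., 'WARD-002': 22.5}
```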
The term "LIS" has emerged since 2004, primarily in the UK public sector . To date it is not widely used elsewhere although other terms like community information systems apply to solutions, primarily in North America, that have a great deal of overlap. Another widely used and largely synonymous term is data observatory . Data observatory is a more widely used term internationally particularly within the area of public health where sites which often include this type of statistical reporting application are often termed a health observatory . [ 1 ]
The primary application for LIS is to provide a place-focused evidence base that is easily accessible to a wide range of users including data experts, managers, policy makers, front-line staff and citizens. They provide a wide range of statistics and reports allowing users to review the current evidence base and build a picture of localities and neighbourhoods for their area of interest. LIS are commonly used by partnerships where they need to come together to provide joined-up services for a common area. The ability to have a common evidence base and a platform to share sensitive and non-sensitive data is critical in this situation. LIS enable partners to publish a wide range of indicators in the form of defined outputs which combine locally and nationally available data into more meaningful intelligence aimed at specific user groups.
In the UK, like many other countries, there has been a rapid growth in the availability of small area statistics. National Neighbourhood Statistics projects across the UK, set up as a result of the PAT 18 report, [ 2 ] have opened up access to a wide range of government small area based statistics. This has been accompanied by a gradual shift across the public sector, a shift that remains very much ongoing, towards the recognition that policy and decisions should be influenced to a greater degree by evidence. There also continues to be a growing acceptance that some services can be more effectively delivered by targeting resources at specific areas of need − the idea of high-demand 'hotspots'. This relies on having reliable and detailed data about the needs of customers, in this case citizens, and where they live, work and take leisure time.
In England in particular this has led to a rapid increase in the number of Local Information Systems particularly within local Authorities and Local Strategic Partnerships. This development has been actively supported by the Department of Communities and Local Government (CLG) under their ‘Neighbourhood Renewal’ agenda. A national research project was funded to identify examples and disseminate best practice – this reported publicly in 2004 and led to a more formal report [ 3 ] being published in 2006. CLG's role as a catalyst in this area is further re-enforced through its provision of Neighbourhood Renewal Funds (NRF) - this funding was used by a number of authorities to pump-prime their initial LIS developments.
An initiative is currently ongoing through the CLG Information Management Programme to coordinate all LIS activity across local government and partnerships. [ 4 ] This has led to regular national LIS meetings and a dedicated LIS forum. To date it is estimated that approximately 50 per cent of top tier authorities in England now have some form of LIS. In some cases these have been built as bespoke solutions, in other cases they are based on off-the-shelf products. Elsewhere within the UK this figure is lower, although an initiative launched in Scotland in 2009 resulted in a Scottish LIS Toolkit [ 5 ] to complement the English version. [ 6 ]
The range of data managed within a LIS can be wide and classified in many different ways. Most common is some form of domain specific classification where indicators are grouped into top level categories like ‘Demography’, ‘Health and Welfare’, ‘Crime and Community Safety’, ‘Education and Children's Services’, ‘Environment’ and ‘Economy’. There may also be cross-cutting themes such as ‘Performance’ and ‘Social Disadvantage’. In the UK key government data sources include ONS Neighbourhood Statistics, CLG, Dept for Work and Pensions, NOMIS, Audit Commission and several areas of NHS information services. However the real value of LIS is their ability to combine national data with local data available from a wide range of internal business systems including those of partners. This local data is often not provided to central government and, even when it is, it tends to be in a form that limits its value. | https://en.wikipedia.org/wiki/Local_information_systems |
The murine local lymph node assay (LLNA) is an in vivo test for skin sensitisation .
LLNA has largely superseded the guinea pig maximisation test and the Buehler test . It is considered more scientific and less cruel (lower number of animals; less suffering) and has found broad scientific and regulatory acceptance.
The principle underlying the LLNA is that skin sensitizers induce proliferation of lymphocytes in the lymph nodes draining the site of application. Lymphocyte proliferation can be measured by radiolabeling (quantifying tritiated thymidine ), bioluminescence (quantifying ATP content in lymphocytes) or immunoassay ( ELISA utilizing an antibody specific for BrdU ). [ 1 ]
The test material is applied to the ears of mice. Optionally, a tracer substance such as 3H-methyl-thymidine or BrdU is injected intraperitoneally for incorporation into lymphocytes. The animals are euthanized and their lymph node cells are removed and analyzed. The ratio of tracer incorporation in lymph nodes from dosed animals to that in control animals gives a stimulation index (SI). When the stimulation index exceeds 3 (SI > 3), a relevant sensitizing potential is assumed. In contrast to the classical guinea pig tests, the LLNA provides a quantitative measurement of the sensitizing potency of a tested chemical. [ 2 ]
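As a sketch of the quantitative readout described above (the SI > 3 threshold is from the text; all counts and dose levels are hypothetical, illustrative numbers):

```python
def stimulation_index(dosed_counts, control_counts):
    """Stimulation index: mean tracer incorporation in lymph nodes of
    dosed animals divided by the mean in concurrent controls."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(dosed_counts) / mean(control_counts)

# Hypothetical tracer counts (e.g. disintegrations per minute) per group.
control = [950, 1020, 980, 1050]
dose_groups = {
    "1%":  [1400, 1350, 1500, 1450],
    "5%":  [2900, 3100, 3000, 3200],
    "10%": [4100, 3900, 4300, 4200],
}

for dose, counts in dose_groups.items():
    si = stimulation_index(counts, control)
    verdict = "potential sensitizer" if si > 3 else "below threshold"
    print(f"{dose}: SI = {si:.2f} -> {verdict}")
```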
The LLNA may not be appropriate for certain metallic compounds, surfactants, high-molecular-weight proteins, strong dermal irritants, and materials that do not sufficiently adhere to the ear for an acceptable period of time during treatment. There is no absolute conformity in the sensitizing potential of a substance in mouse, guinea pig and human.
The LLNA is the subject of the OECD Guidelines for the Testing of Chemicals guideline No. 429 of 23 July 2010. [ 3 ]
The REACH Regulation , Annex VII, paragraph 8.3 states "The Murine Local Lymph Node Assay (LLNA) is the first-choice method for in vivo testing. Only in exceptional circumstances should another test be used. Justification for the use of another test shall be provided."
According to European Guideline OECD 406 Skin Sensitization, the LLNA or the MEST ( Mouse ear swelling test ) can be used as a first stage in the assessment of skin sensitization potential. If a positive result is seen in either assay, a test substance may be designated as a potential sensitizer, and it may not be necessary to conduct a further guinea pig test. However, if a negative result is seen in the LLNA or MEST, a guinea pig test (preferably a GPMT or BT) must be conducted. | https://en.wikipedia.org/wiki/Local_lymph_node_assay |