Louis Schwabe (1798–1845) was a manufacturer of silk and artificial silk fabrics in Manchester. He was noted for his pioneering work in the use of spinnerets for the production of an artificial glass-based yarn. [1] [2] [3] | https://en.wikipedia.org/wiki/Louis_Schwabe |
Louis Thomas Jérôme Auzoux (7 April 1797, Saint-Aubin-d'Écrosville – 7 March 1880, Paris) was a French anatomist and naturalist.
Louis Auzoux obtained a medical degree in 1818 and was appointed to the surgical department of the Hôtel-Dieu with Guillaume Dupuytren.
In 1820 he visited the papier-mâché workshop of Jean-François Ameline and later (1827) set up a workshop making very accurate human and veterinary anatomical models in his Normandy birthplace Saint-Aubin-d'Écrosville. This traded as Maison Auzoux. Auzoux also made large-scale zoological and botanical models for educational use. The models were called "anatomie clastique" (from Greek klastos, "broken in pieces"), because they could be taken apart to show the full structure. The company also traded in other natural history material.
The process invented by Dr Auzoux consisted of a mix of paper pulp, glue and cork powder, pressed in paper-lined molds, following the papier-mâché method. [1]
For simpler, less articulated pieces, such as large-scale representations of organs, he used plaster molds lined with several layers of colored paper soaked in glue. The glue-moistened paper followed every detail of the mold; the layers, starting with thin paper and progressing to heavier paper, gave the future piece its strength. The use of papers of different colors helped keep track of the layers. As many as twelve layers of paper could be used. The model thus obtained was hollow, light and resistant.
For articulated pieces, he designed a paste that dried into a material dense enough for fasteners and hinges to be fixed in it, or to incorporate metal frames for larger models.
The molds used in that case were made of a metal alloy. Workers would line them with a paper and cardboard shell of only 3 to 4 layers, then fill them with the paste, called "terre" (earth), made of flour glue, shredded paper, shredded oakum, chalk and cork powder, the latter said to be the crucial ingredient in the mix. The mold was then closed and placed under a press to compact the paste and spread it into the finest details.
The dried pieces were then completed with details such as veins, arteries and nerves made of fabric-covered metal wires, equipped with closing hooks, painted, labelled, varnished and assembled.
The success met in France and abroad by his anatomical reproductions, owing to their technical sophistication and accuracy, led Dr Auzoux to found a workshop in his native village of Saint-Aubin-d'Écrosville (Eure, France).
The number of workers grew, and by 1868 more than eighty men and women were employed, producing hundreds of pieces sold worldwide each year. A shop was also opened at 8 rue du Paon in Paris in 1833, which shipped orders within France and to other countries.
When Louis Auzoux died in 1880, his clastic anatomical models were internationally known and his company was prosperous.
Competition from the multiplication of other kinds of teaching aids for the study of anatomy (photography, video, digital models, plastination...) led the Auzoux firm to turn to other materials, such as cheaper resin models in the 1980s, before closing in the early 2000s.
The Musée de l'Écorché opened in nearby Le Neubourg (Eure, France) with a collection of production tools and models salvaged from the closing factory, and the input of former workers on the techniques used.
The personal archives of Dr Louis Auzoux are kept in the French national archives (Archives nationales) under shelf number 242AP. The origins of Auzoux's teaching-model empire, with a special emphasis on his smaller botanical collection, are discussed in the essay "Dr Louis Auzoux and his collection of papier-mâché flowers, fruits and seeds". [2]
The Powerhouse Museum (Sydney, Australia) holds a number of his models, [3] including a model of Atropa belladonna [4] [5] and of a dock flower. [6]
Fabric, Plaster, Rubber and Plastic Anatomical Models: Praiseworthy Precursors of Plastinated Specimens. J Int Soc Plastination 15(1): 30–35, 2000 | https://en.wikipedia.org/wiki/Louis_Thomas_Jérôme_Auzoux |
The Louis and Beatrice Laufer Center for Physical and Quantitative Biology [1] (Laufer Center) is a multidisciplinary venue where research from fields such as biology, biochemistry, chemistry, computer science, engineering, genetics, mathematics, and physics comes together to target medical and biological problems using both computations and experiments. The Laufer Center is part of Stony Brook University. The Center's current director is Dr. Ivet Bahar, Louis & Beatrice Laufer Endowed Chair and Professor at the Department of Biochemistry and Cell Biology of Stony Brook University. Other faculty members include Founding Director and Laufer Family Endowed Chair Dr. Ken A. Dill; Associate Director Dr. Carlos Simmerling; Henry Laufer Endowed Professor Dr. Gábor Balázsi; Assistant Professor Dr. Eugene Serebryany; and affiliated faculty from the Departments of Chemistry, Physics, Applied Mathematics, Pharmacology, Biomedical Engineering, Microbiology & Immunology, Ecology & Evolution and Computer Science at Stony Brook University, as well as from Brookhaven National Laboratory and Cold Spring Harbor Laboratory. Among the Laufer Center's goals is to enhance interdisciplinary education at Stony Brook University. Dr. Gábor Balázsi coordinates the flagship course of the Center, Physical and Quantitative Biology, which is offered each Fall through the Departments of Physics, Chemistry and Biomedical Engineering.
The center was founded in 2008 by a gift from Drs. Henry Laufer , Marsha Laufer and their family in memory of Louis and Beatrice Laufer. On May 7, 2012 the Laufer Center opened with a ribbon-cutting ceremony. [ 2 ] | https://en.wikipedia.org/wiki/Louis_and_Beatrice_Laufer_Center_for_Physical_and_Quantitative_Biology |
The Louisa Gross Horwitz Prize for Biology or Biochemistry is an annual prize awarded by Columbia University to a researcher or group of researchers who have made an outstanding contribution in basic research in the fields of biology or biochemistry .
The prize was established at the bequest of S. Gross Horwitz and is named to honor his mother, Louisa Gross Horwitz, the daughter of trauma surgeon Samuel D. Gross . The prize was first awarded in 1967. [ 1 ]
As of October 2024, 55 (47%) of the 117 prize recipients have subsequently been awarded the Nobel Prize in Physiology or Medicine (44) or Chemistry (11). The prize is therefore regarded as an important predictor of a future Nobel Prize. | https://en.wikipedia.org/wiki/Louisa_Gross_Horwitz_Prize |
A louse-feeder was a job in interwar and Nazi-occupied Poland, at the Lviv Institute for Study of Typhus and Virology and the associated Institute in Kraków, Poland . Louse-feeders were human sources of blood for lice infected with typhus , which were then used to research possible vaccines against the disease.
Research into a typhus vaccine was started in 1920 by parasitologist Rudolf Weigl . Weigl and his wife Zofia Weigl were some of the earliest lice feeders. During the Nazi occupation of the city, louse-feeding became the primary means of support and protection for many of the city's Polish intellectuals, including the mathematician Stefan Banach and the poet Zbigniew Herbert . While the profession carried a significant risk of infection, louse-feeders were given additional food rations, were protected from being shipped to slave labour camps and German concentration camps , and were permitted to move around the occupied city.
Typhus research involving human subjects, who were purposely infected with the disease, was also carried out in various Nazi concentration camps , in particular at Buchenwald and Sachsenhausen and to a lesser extent at Auschwitz .
French bacteriologist Charles Nicolle showed in 1909 that lice (Pediculus humanus corporis) were the primary means by which the typhus bacteria (Rickettsia prowazekii) were spread. [1] In his experiments Nicolle infected a chimpanzee with typhus, retrieved the lice from it, and placed them on a healthy chimpanzee, which developed the disease shortly thereafter. [2] Further work established that it was lice excrement rather than bites which spread the disease. [2] Nicolle received the Nobel Prize in Physiology or Medicine for his work on typhus in 1928. [2]
During World War I , beginning in 1914, Rudolf Weigl, a Polish parasitologist of Austrian background was drafted into the Austrian army and given the task of studying typhus and its causes. [ 1 ] [ 3 ] Weigl worked at a military hospital in Przemyśl , where he supervised the newly established Laboratory for the Study of Spotted Typhus. [ 3 ]
After Poland regained its independence, Weigl was hired in 1920 as a professor of biology at the Jan Kazimierz University in Lwów, at the Institute for Study of Typhus and Virology. [3] While there, he developed a vaccine against typhus made from lice that were grown in the laboratory and then crushed into a paste. Initially the lice were grown on the blood of guinea pigs, but the effectiveness of the vaccine depended on the blood being as similar to human blood as possible. As a consequence, by 1933, Weigl began using human volunteers as feeders. While the volunteers fed healthy lice, there was still the danger of accidental exposure to some of the typhus-carrying lice in the institute. Additionally, once the lice were infected with typhus, they required additional feeding, which carried the risk of the human feeder becoming infected with the disease. Weigl protected the donors by vaccinating them beforehand, and although some of them (including Weigl himself) developed the disease, none died. However, the production of the vaccine was still a potentially dangerous activity, and it remained difficult to produce the vaccine on a large scale. [1] [4]
At the time Weigl's vaccine was the only one in existence which could be employed in practical applications outside of controlled settings. The first widespread use of his vaccine was carried out in China by Belgian missionaries between 1936 and 1943. [ 1 ] [ 3 ]
The development of the typhus vaccine involved several stages. First, the lice larvae had to be bred and then fed on human blood. Once they matured, they were removed from the feeders, held down in a clamp machine especially designed by Weigl, and anally injected with a strain of the typhus bacteria. At that point the infected louse had to be fed human blood for about five more days. This stage of the production process carried the greatest risk to the human feeder of contracting the disease. Weigl and his staff tried to prevent the danger by heavily vaccinating the feeders beforehand. Once the louse was sufficiently infected, it was removed from the human feeder, killed in a solution of phenol, and then dissected. The contents of the louse abdomen (its feces) were removed and then ground up into a paste. The paste was then made into the typhus vaccine. [3]
The feeding was done through the use of specially-constructed small wooden boxes, 4 by 7 cm (1½ by 2¾ in), developed by Weigl. The boxes were sealed with paraffin on the top, which prevented the lice from escaping, and the underside consisted of a screen made of a fabric sieve, adapted by Weigl from sieves that were used by local peasants to separate wheat husks from the seeds. A typical box contained 400 to 800 lice larvae which would mature as the feeding took place. The sieve bottom allowed the lice to stick out their heads and feed on the human flesh. A standard feeding period took thirty to forty-five minutes, and was repeated with the same lice colony for twelve days. Usually, an individual feeder would accommodate from 7 to 11 boxes (of 400 to 800 lice each) on his or her leg per feeding session. Typically men would place the boxes on their calves, to minimize the discomfort of the bites, while women feeders placed them on their thighs, so that the bite marks could be covered up by a skirt. A nurse had to watch over the feeding process as the lice would feed beyond the point of being gorged on the blood and could burst if left on the human flesh for too long. [3]
Beyond the contraction of typhus, other dangers of employment at the institute included allergic reactions to the vaccine and asthma attacks caused by dust from louse feces. [3]
After the invasion of Poland by Nazi Germany and the Soviet Union in 1939, Lwów initially came under Soviet occupation. During this period Weigl's institute continued to function, although Poles, particularly those escaping from the German-controlled areas, were banned from being employed there. The Soviet authorities deported ethnic Poles from the seized territories, sending them to Kazakhstan , Siberia and other areas deep within the Soviet Union. Nevertheless, despite the official prohibition on employment, Weigl used his prestige and influence (during this time Nikita Khrushchev visited the institute) to secure the release of several Polish would-be deportees and in some cases managed to obtain permission for those who had already been exiled to return. [ 3 ] These individuals were then given work in the institute as either nurses, interpreters (Weigl himself did not speak Russian) [ 3 ] or as some of the first lice feeders; people who were given the job as a means of protecting them from persecution by the Soviet authorities. [ 3 ]
The vaccine produced by the institute during this time was earmarked for the Red Army, aside from a small quantity used in the civilian sector. [ 5 ]
In June 1941, after the Nazi attack on the Soviet Union , Lwów was taken over by the Germans. Weigl's institute, now renamed Institut für Fleckfieber und Virusforschung des OKH , was kept open because, much like the Soviets before them, the Germans were interested in the applications of the typhus vaccine among their front line soldiers. The institute was made directly subordinate to the German military, which, as it turned out, ended up giving its workers significant protection against the Gestapo . The Nazis converted a building of the former Queen Jadwiga Grammar School into Weigl's new laboratory and ordered that the production of the vaccine be stepped up, with the whole output being shipped to the German armed forces. [ 5 ]
In light of the Sonderaktion Krakau , a German operation in which many distinguished professors from Jagiellonian University in Kraków were arrested and sent to German concentration camps , the danger that a similar fate would befall Lwów intellectuals was very real. As a result, in July 1941, Weigl began hiring prominent Polish intellectuals of the city for his institute, many of whom had lost work as a result of the closure of all Polish institutions of higher learning by the Nazis. In fact, soon after, the Nazis carried out a massacre of Lwów professors . [ 6 ] Weigl managed to convince the occupation authorities to give him full discretion as to whom he hired for his experiments, even as he himself refused to sign the so-called Volksliste which would have identified him as an ethnic German (since he was of Austrian background) with access to privileges and opportunities unavailable to Poles. Similarly, he refused an offer to move to Berlin, direct a dedicated institute and become a Reichsdeutscher . [ 3 ] The group of scholars hired by Weigl were often brought in by Wacław Szybalski , an oncologist , who was also put in charge of supervising the lice feeding. [ 5 ]
Association with the institute offered a measure of protection. Weigl was able to continue his research, and even hire more people, some as research assistants, others as lice feeders, often among those threatened by Nazi authorities with deportation, or even resistance members. [ 1 ] [ 3 ] The feeders of lice who were employed at the institute were issued a special version of the Kennkarte , the "Ausweis" , which noted both that they might be infected with typhus and that they worked for an institution of the German military, the "Oberkommando des Heeres" (Office of the Commander-in-Chief of the German Army). As a result, the workers of the institute, unlike other Poles in the city, could move freely about and, if stopped by the police or the Gestapo, were quickly released. [ 3 ]
In autumn of 1941, the mathematician Stefan Banach began working at the institute as a lice feeder, [ 6 ] as did his son, Stefan Jr. [ 5 ] Banach continued to work at the institute feeding lice until March 1944, and managed to survive the war as a result, unlike many other Polish mathematicians who were killed by the Nazis (although he died of lung cancer shortly after the war's conclusion). Banach's employment at the institute also gave protection to his wife, Łucja (it was she who purchased the notebook that eventually became the Scottish Book ), who was in particular danger because of her Jewish background. [ 5 ] [ 7 ] The poet Zbigniew Herbert also spent the occupation as a lice feeder in Weigl's institute. [ 8 ] According to Alfred Jahn , a geographer and future rector of the University of Wrocław , "Almost the entire University of Lwów worked at Weigl's". Two other future rectors of the University of Wrocław, Kazimierz Szarski and Stanisław Kulczyński , also survived the war as feeders of lice. [ 9 ]
With numerous academics gathering in one place under the pretense of lice feeding and research, underground education and research often took place. The actual feeding time took only about an hour a day, which left the remainder of the day free for conspiratorial activity and scientific discourse. [ 3 ]
Additionally, Weigl began employing members of the Polish anti-Nazi resistance, the Home Army , in his institute, which provided them with sufficient cover to carry out their underground activities. Aleksander Szczęścikiewicz and Zygmunt Kleszczyński, two leaders of the underground scout movement, the Grey Ranks ( Szare Szeregi ), also worked at the institute. Due to his special position, Weigl was allowed to have a radio at the institute – otherwise ownership of a radio by Poles was punishable by death – which was used by him and members of the Polish resistance to gather up-to-date news of the war otherwise censored by German propaganda. [ 3 ]
When the Germans began the systematic murder of the Lwów Jews , Weigl tried to save as many as he could by hiring them as well. Among others, work at the institute saved the life of the bacteriologist Henryk Meisel . Weigl also tried to protect the bacteriologist Filip Eisenberg , from Jagiellonian University, by offering him a position. However, Eisenberg believed that he could survive the war by hiding in Kraków, turned down Weigl's offer, and in 1942 was caught by the Nazis and sent to the Belzec extermination camp where he was murdered. In the end, about 4000 people (feeders, technicians and nurses) passed through Weigl's institute, of whom about 500 are known by name. [ 9 ]
While all of the vaccines produced by the institute during this time were supposed to go to the German army, some portion was smuggled out by the employees associated with the Polish resistance and shipped to partisan units of the Home Army, as well as underground movements in the Lwów and Warsaw ghettos, and even to sick individuals in the Auschwitz and Majdanek concentration camps. [ 3 ] [ 9 ] According to the famous Polish-Jewish pianist and diarist , Władysław Szpilman (the protagonist of the 2002 movie The Pianist ), because of his vaccine, Weigl became "as famous as Hitler in the Warsaw ghetto", with "Weigl as a symbol of Goodness and Hitler as a symbol of Evil". [ 3 ]
After the Red Army, along with the Home Army ( Operation Tempest ) recaptured Lwów in July 1944, Weigl's institute was disbanded and moved to central Poland, along with most other Polish inhabitants of Lwów . [ 3 ] Weigl would continue his research in Kraków at Jagiellonian University . [ 1 ]
Human lice feeders were also used in America in the 1940s. The Wilmington Morning Star reported that the U.S. government's researchers paid around 60 lice feeders $60 a month (equivalent to $1,150 in 2024), rising to $120 (equivalent to $2,310 in 2024) due to the lack of people willing to participate. Humans were used because lice failed to thrive on animals, until it was discovered that some could live on an "Easter bunny" called Samson. Samson and his descendants were used to conduct hundreds of experiments. [ 10 ]
Weigl continued his research on typhus after the war. After his death, his studies were picked up by his friends, students, and his second wife, Anna Herzig Weigl. [3]
Rudolf Weigl was posthumously awarded the medal of Righteous Among the Nations by Yad Vashem in 2003. [12] His contributions to saving lives during the Nazi German occupation of Poland have been compared to those of Oskar Schindler. [3] [13] | https://en.wikipedia.org/wiki/Louse-feeder |
Lovastatin, sold under the brand name Mevacor among others, is a statin medication used to treat high blood cholesterol and reduce the risk of cardiovascular disease. [2] Its use is recommended together with lifestyle changes. [2] It is taken by mouth. [2]
Common side effects include diarrhea, constipation, headache, muscle pains, rash, and trouble sleeping. [2] Serious side effects may include liver problems, muscle breakdown, and kidney failure. [2] Use during pregnancy may harm the baby and use during breastfeeding is not recommended. [3] It works by decreasing the liver's ability to produce cholesterol by blocking the enzyme HMG-CoA reductase. [2]
Lovastatin was patented in 1979 and approved for medical use in 1987. [ 4 ] It is on the World Health Organization's List of Essential Medicines . [ 5 ] It is available as a generic medication . [ 2 ] In 2022, it was the 111th most commonly prescribed medication in the United States, with more than 5 million prescriptions. [ 6 ] [ 7 ]
The primary uses of lovastatin are the treatment of dyslipidemia and the prevention of cardiovascular disease. [8] It is recommended to be used only after other measures, such as diet, exercise, and weight reduction, have not improved cholesterol levels. [8]
Lovastatin is usually well tolerated, with the most common side effects being, in approximately descending order of frequency: creatine phosphokinase elevation, flatulence , abdominal pain, constipation, diarrhoea , muscle aches or pains , nausea, indigestion , weakness, blurred vision, rash, dizziness and muscle cramps. [ 9 ] As with all statin drugs, it can occasionally cause myopathy , hepatotoxicity (liver damage), dermatomyositis or rhabdomyolysis . [ 9 ] This can be life-threatening if not recognised and treated in time, so any unexplained muscle pain or weakness whilst on lovastatin should be promptly mentioned to the prescribing doctor. Other uncommon side effects that should be promptly mentioned to either the prescribing doctor or an emergency medical service include: [ 10 ]
These less serious side effects should still be reported if they persist or increase in severity: [ 10 ]
Contraindications , conditions that warrant withholding treatment with lovastatin, include pregnancy, breast feeding, and liver disease. Lovastatin is contraindicated during pregnancy (Pregnancy Category X); it may cause birth defects such as skeletal deformities or learning disabilities. Owing to its potential to disrupt infant lipid metabolism, lovastatin should not be taken while breastfeeding. [ 11 ] Patients with liver disease should not take lovastatin. [ 12 ]
As with atorvastatin, simvastatin, and other statin drugs metabolized via CYP3A4, drinking grapefruit juice during lovastatin therapy may increase the risk of side effects. Components of grapefruit juice such as the flavonoid naringin and the furanocoumarin bergamottin inhibit CYP3A4 in vitro, [13] and may account for the in vivo effect of grapefruit juice concentrate decreasing the metabolic clearance of lovastatin and increasing its plasma concentrations. [14]
Lovastatin is an inhibitor of 3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMG-CoA reductase), an enzyme that catalyzes the conversion of HMG-CoA to mevalonate. [15] Mevalonate is a required building block for cholesterol biosynthesis, and lovastatin interferes with its production by acting as a reversible competitive inhibitor, competing with HMG-CoA for binding to HMG-CoA reductase. Lovastatin is a prodrug: it is an inactive lactone in its native form. The gamma-lactone closed-ring form, in which it is administered, is hydrolysed in vivo to the β-hydroxy acid open-ring form, which is the active form. [citation needed]
Lovastatin and other statins have been studied for their chemopreventive and chemotherapeutic effects. No such effects were seen in the early studies. [16] More recent investigations revealed some chemopreventive and therapeutic effects for certain types of cancer, especially in combination of statins with other anticancer drugs. [17] It is likely that these effects are mediated by the property of statins to reduce proteasome activity, leading to an accumulation of the cyclin-dependent kinase inhibitors p21 and p27 and to subsequent G1-phase arrest, as seen in cells of different cancer lines. [18] [19]
Compactin and lovastatin, natural products with a powerful inhibitory effect on HMG-CoA reductase , were discovered in the 1970s, and taken into clinical development as potential drugs for lowering LDL cholesterol. [ 21 ] [ 22 ]
In 1982, some small-scale clinical investigations of lovastatin, a polyketide-derived natural product isolated from Aspergillus terreus , in very high-risk patients were undertaken, in which dramatic reductions in LDL cholesterol were observed, with very few adverse effects. After the additional animal safety studies with lovastatin revealed no toxicity of the type thought to be associated with compactin, clinical studies continued. [ citation needed ]
Large-scale trials confirmed the effectiveness of lovastatin. Observed tolerability continued to be excellent, and lovastatin was approved by the US FDA in 1987. [ 23 ] It was the first statin approved by the FDA. [ 24 ]
Lovastatin is also naturally produced by certain higher fungi, such as Pleurotus ostreatus (oyster mushroom) and closely related Pleurotus spp. [25] Research into the effect of oyster mushroom and its extracts on the cholesterol levels of laboratory animals has been extensive, [26] [27] [25] [28] [29] [30] [31] [32] [33] [34] [35] [36] although the effect has been demonstrated in a very limited number of human subjects. [37]
In 1998, the FDA placed a ban on the sale of dietary supplements derived from red yeast rice, which naturally contains lovastatin, arguing that products containing prescription agents require drug approval. [38] Judge Dale A. Kimball of the United States District Court for the District of Utah granted a motion by Cholestin's manufacturer, Pharmanex, finding that the agency's ban was illegal under the 1994 Dietary Supplement Health and Education Act because the product was marketed as a dietary supplement, not a drug. [39]
The objective is to decrease excess levels of cholesterol to an amount consistent with maintenance of normal body function. Cholesterol is biosynthesized in a series of more than 25 separate enzymatic reactions that initially involve three successive condensations of acetyl-CoA units to form the six-carbon compound 3-hydroxy-3-methylglutaryl coenzyme A (HMG CoA). This is reduced to mevalonate and then converted in a series of reactions to the isoprenes that are the building blocks of squalene, the immediate precursor to sterols, which cyclizes to lanosterol (a methylated sterol) and is further metabolized to cholesterol. A number of early attempts to block the synthesis of cholesterol resulted in agents that inhibited late in the biosynthetic pathway, between lanosterol and cholesterol. A major rate-limiting step in the pathway occurs at the level of the microsomal enzyme that catalyzes the conversion of HMG CoA to mevalonic acid, and this has been considered a prime target for pharmacologic intervention for several years. [15]
HMG CoA reductase acts early in the biosynthetic pathway and catalyzes one of the first committed steps of cholesterol formation. Inhibition of this enzyme leads to accumulation of HMG CoA, a water-soluble intermediate that is then capable of being readily metabolized to simpler molecules; by contrast, inhibition of enzymes acting later in the pathway would lead to accumulation of lipophilic intermediates bearing a formal sterol ring. [citation needed]
Lovastatin was the first specific inhibitor of HMG CoA reductase to receive approval for the treatment of hypercholesterolemia. The first breakthrough in efforts to find a potent, specific, competitive inhibitor of HMG CoA reductase occurred in 1976, when Endo et al. reported the discovery of mevastatin, a highly functionalized fungal metabolite isolated from cultures of Penicillium citrinum. [40]
The biosynthesis of lovastatin occurs via an iterative type I polyketide synthase (PKS) pathway. The six genes that encode enzymes essential for the biosynthesis of lovastatin are lovB, lovC, lovA, lovD, lovG, and lovF. [41] [42] The synthesis of dihydromonacolin L requires a total of nine malonyl-CoA units. [41] It proceeds in the PKS pathway until it reaches (E), a hexaketide, where it undergoes a Diels–Alder cycloaddition to form the fused rings. After cyclization it continues through the PKS pathway until it reaches (I), a nonaketide, which then undergoes release from LovB through the thioesterase encoded by LovG. Dihydromonacolin L, (J), then undergoes oxidation and dehydration via a cytochrome P450 oxygenase encoded by LovA to obtain monacolin J, (L).
The MT domain of LovB is active in the conversion of (B) to (C), when it transfers a methyl group from S-adenosyl-L-methionine (SAM) to the tetraketide (C). [41] Because LovB contains an inactive ER domain, LovC is required at specific steps to obtain fully reduced products. The domain organization of LovB, LovC, LovG and LovF is shown in Figure 2, where the inactive ER domain of LovB is indicated with an oval and the point at which LovC acts in trans on LovB with a red box.
In a parallel pathway, the diketide side chain of lovastatin is synthesized by another highly reducing type I polyketide synthase enzyme encoded by LovF. Lastly, the side chain, 2-methylbutyrate (M), is covalently attached to the C-8 hydroxy group of monacolin J (L) by a transesterase encoded by LovD to form lovastatin.
A major bulk of the work on the synthesis of lovastatin was done by M. Hirama in the 1980s. [43] [44] Hirama synthesized compactin and used one of the intermediates to follow a different path to get to lovastatin. The synthetic sequence is shown in the schemes below. The γ-lactone was synthesized using Yamada methodology starting with glutamic acid. Lactone opening was done using lithium methoxide in methanol, followed by silylation to give a separable mixture of the starting lactone and the silyl ether. Hydrogenolysis of the silyl ether followed by Collins oxidation gave the aldehyde. Stereoselective preparation of the (E,E)-diene was accomplished by addition of the trans-crotyl phenyl sulfone anion, followed by quenching with Ac2O and subsequent reductive elimination of the sulfone acetate. Condensation of this with the lithium anion of dimethyl methylphosphonate gave compound 1. Compound 2 was synthesized as shown in the scheme. Compounds 1 and 2 were then combined using 1.3 equivalents of sodium hydride in THF, followed by reflux in chlorobenzene for 82 hours under nitrogen, to give the enone 3. [citation needed]
Simple organic reactions were used to get to lovastatin as shown in the scheme.
Lovastatin is a naturally occurring compound found in low concentrations in food such as oyster mushrooms , [ 45 ] red yeast rice , [ 46 ] and Pu-erh . [ 47 ]
Brand names include Mevacor, Advicor (in combination with niacin), Altocor, and Altoprev. [citation needed]
In plant physiology, lovastatin has occasionally been used as inhibitor of cytokinin biosynthesis. [ 48 ] | https://en.wikipedia.org/wiki/Lovastatin |
A love lock or love padlock is a padlock that couples lock to a bridge , fence , gate , monument , or similar public fixture to symbolize their love. [ 2 ] Typically the sweethearts' names or initials, and perhaps the date, are inscribed on the padlock, and its key is thrown away (often into a nearby river) to symbolize unbreakable love.
Since the 2000s, love locks have proliferated at an increasing number of locations worldwide. They are treated by some municipal authorities as litter or vandalism , and there is some cost to their removal. However, there are other authorities who embrace them, and who use them as fundraising projects or tourist attractions.
In 2014, the New York Times reported that the history of love padlocks dates back at least 100 years to a melancholic Serbian tale of World War I, associated with the bridge Most Ljubavi (lit. the Bridge of Love) in the spa town of Vrnjačka Banja. [3] A local schoolmistress named Nada fell in love with a Serbian officer named Relja. After they committed to each other, Relja went to war in Greece, where he fell in love with a local woman from Corfu. As a consequence, Relja and Nada broke off their engagement. Nada never recovered from that devastating blow, and after some time she died of heartbreak over her unfortunate love. [4] [5]
As young women from Vrnjačka Banja wanted to protect their own loves, they started writing down their names, with the names of their loved ones, on padlocks and affixing them to the railings of the bridge where Nada and Relja used to meet. [ 6 ] [ 7 ]
Because the locks are heavy and risk damaging the bridge (the original Bridge of Love is especially small compared to larger bridges elsewhere, and no bridge can bear the weight of ever-accumulating metal indefinitely), local authorities remove them regularly and have used them, among other things, as material for smelting monuments in the spa town. [8]
In the rest of Europe, love padlocks started appearing in the early 2000s as a ritual. [9] The reasons love padlocks started to appear vary between locations and in many instances are unclear. However, in Rome, the ritual of affixing love padlocks to the bridge Ponte Milvio can be attributed to the 2006 book I Want You by Italian author Federico Moccia, which was adapted into a film in 2007. [10] [11]
The German lock company ABUS manufactures a corrosion-resistant aluminium padlock called the "Love Lock", which is decorated with romantic imagery on the lock body. [12]
The oceanic divergent plate boundary between the North American Plate and the Eurasian Plate divides Iceland. The western part of Iceland sits on the North American Plate and the eastern part sits on the Eurasian Plate. The Reykjanes Ridge of the Mid-Atlantic ridge system in this region crosses the island from the southwest and connects to the Kolbeinsey Ridge in the northeast. There are love locks on the footbridge at the rift. [13]: 39, 40, 49
In many instances, love locking has been classified as an act of vandalism, [14] and local authorities and site owners have had the padlocks removed.
At some locations the padlocks have been given an almost legendary or superstitious character: | https://en.wikipedia.org/wiki/Love_lock |
In theoretical physics , Lovelock's theory of gravity (often referred to as Lovelock gravity ) is a generalization of Einstein's theory of general relativity introduced by David Lovelock in 1971. [ 1 ] It is the most general metric theory of gravity yielding conserved second order equations of motion in an arbitrary number of spacetime dimensions D . In this sense, Lovelock's theory is the natural generalization of Einstein's general relativity to higher dimensions. In three and four dimensions ( D = 3, 4), Lovelock's theory coincides with Einstein's theory, but in higher dimensions the theories are different. In fact, for D > 4 Einstein gravity can be thought of as a particular case of Lovelock gravity since the Einstein–Hilbert action is one of several terms that constitute the Lovelock action.
The Lagrangian of the theory is given by a sum of dimensionally extended Euler densities, and it can be written as follows:

$$\mathcal{L} = \sqrt{-g}\,\sum_{n=0}^{t}\alpha_{n}\,\mathcal{R}^{n}, \qquad \mathcal{R}^{n} = \frac{1}{2^{n}}\,\delta^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}_{\alpha_{1}\beta_{1}\cdots\alpha_{n}\beta_{n}}\prod_{r=1}^{n}R^{\alpha_{r}\beta_{r}}{}_{\mu_{r}\nu_{r}},$$
where $R^{\mu\nu}{}_{\alpha\beta}$ represents the Riemann tensor, and where the generalized Kronecker delta $\delta$ is defined as the antisymmetric product

$$\delta^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}_{\alpha_{1}\beta_{1}\cdots\alpha_{n}\beta_{n}} = (2n)!\;\delta^{[\mu_{1}}_{\alpha_{1}}\delta^{\nu_{1}}_{\beta_{1}}\cdots\delta^{\mu_{n}}_{\alpha_{n}}\delta^{\nu_{n}]}_{\beta_{n}}.$$
Each term $\mathcal{R}^{n}$ in $\mathcal{L}$ corresponds to the dimensional extension of the Euler density in $2n$ dimensions, so that these only contribute to the equations of motion for $n < D/2$. Consequently, without loss of generality, $t$ in the equation above can be taken to satisfy $D = 2t + 2$ for even dimensions and $D = 2t + 1$ for odd dimensions.
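As a worked illustration of this counting (up to normalization conventions, which vary in the literature): in four dimensions one has $t = 1$, so only the $n = 0$ and $n = 1$ terms contribute to the equations of motion, and the sum reduces to the Einstein–Hilbert Lagrangian with a cosmological constant,

$$\mathcal{L}_{D=4} = \sqrt{-g}\,\bigl(\alpha_{0} + \alpha_{1}R\bigr), \qquad \alpha_{0} \sim -2\Lambda,$$

in line with the statement above that Lovelock's theory coincides with Einstein's for $D = 3, 4$.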
The coupling constants $\alpha_{n}$ in the Lagrangian $\mathcal{L}$ have dimensions of [length]$^{2n-D}$, although it is usual to normalize the Lagrangian density in units of the Planck scale.
Expanding the product in $\mathcal{L}$, the Lovelock Lagrangian takes the form

$$\mathcal{L} = \sqrt{-g}\,\Bigl(\alpha_{0} + \alpha_{1}R + \alpha_{2}\bigl(R^{2} - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}\bigr) + \alpha_{3}\,\mathcal{O}(R^{3}) + \cdots\Bigr),$$

where one sees that the coupling $\alpha_{0}$ corresponds to the cosmological constant $\Lambda$, while the $\alpha_{n}$ with $n \geq 2$ are coupling constants of additional terms that represent ultraviolet corrections to Einstein theory, involving higher order contractions of the Riemann tensor $R^{\mu\nu}{}_{\alpha\beta}$. In particular, the second order term

$$\mathcal{R}^{2} = R^{2} - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$$
is precisely the quadratic Gauss–Bonnet term , which is the dimensionally extended version of the four-dimensional Euler density.
By noting that in four dimensions the quadratic Gauss–Bonnet term integrates to a topological constant, we can eliminate the Riemann tensor term there, putting the four-dimensional Lovelock Lagrangian into the Einstein–Hilbert form with a cosmological constant, whose equations of motion are Einstein's field equations with $\Lambda$.
Because the Lovelock action contains, among others, the quadratic Gauss–Bonnet term (i.e. the four-dimensional Euler characteristic extended to D dimensions), it is usually said that Lovelock theory resembles string-theory-inspired models of gravity. This is because a quadratic term is present in the low energy effective action of heterotic string theory, and it also appears in six-dimensional Calabi–Yau compactifications of M-theory. In the mid-1980s, a decade after Lovelock proposed his generalization of the Einstein tensor, physicists began to discuss the quadratic Gauss–Bonnet term within the context of string theory, with particular attention to its property of being ghost-free in Minkowski space. The theory is known to be free of ghosts about other exact backgrounds as well, e.g. about one of the branches of the spherically symmetric solution found by Boulware and Deser in 1985. In general, Lovelock's theory represents a very interesting scenario to study how the physics of gravity is corrected at short distance due to the presence of higher order curvature terms in the action, and in the mid-2000s the theory was considered as a testing ground to investigate the effects of introducing higher-curvature terms in the context of the AdS/CFT correspondence. | https://en.wikipedia.org/wiki/Lovelock_theory_of_gravity |
In graph theory, the Lovász conjecture (1969) is a classical problem on Hamiltonian paths in graphs. It says:

Every finite connected vertex-transitive graph contains a Hamiltonian path.

Originally László Lovász stated the problem in the opposite way, but this version became standard. In 1996, László Babai published a conjecture sharply contradicting this conjecture, [1] but both conjectures remain wide open. It is not even known if a single counterexample would necessarily lead to a series of counterexamples.
The problem of finding Hamiltonian paths in highly symmetric graphs is quite old. As Donald Knuth describes it in volume 4 of The Art of Computer Programming , [ 2 ] the problem originated in British campanology (bell-ringing). Such Hamiltonian paths and cycles are also closely connected to Gray codes . In each case the constructions are explicit.
Another version of the Lovász conjecture states that every finite connected vertex-transitive graph contains a Hamiltonian cycle, apart from five known counterexamples.
There are 5 known examples of vertex-transitive graphs with no Hamiltonian cycles (but with Hamiltonian paths): the complete graph $K_2$, the Petersen graph, the Coxeter graph, and two graphs derived from the Petersen and Coxeter graphs by replacing each vertex with a triangle; [3] a brute-force check for the Petersen case is sketched below.
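Since the Petersen graph has only ten vertices, the claim that it admits a Hamiltonian path but no Hamiltonian cycle can be confirmed by exhaustive search. The following is a minimal, deliberately naive Python sketch (not an efficient algorithm; the vertex numbering 0–9 is our own convention):

```python
from itertools import permutations

# Petersen graph on vertices 0..9: outer 5-cycle, inner pentagram, five spokes.
edges = {frozenset(e) for e in (
    [(i, (i + 1) % 5) for i in range(5)] +           # outer cycle 0..4
    [(5 + i, 5 + (i + 2) % 5) for i in range(5)] +   # inner pentagram 5..9
    [(i, i + 5) for i in range(5)])}                 # spokes

def adjacent(u, v):
    return frozenset((u, v)) in edges

def has_hamiltonian_cycle():
    # Fixing vertex 0 as the start quotients out rotations (9! candidates).
    for rest in permutations(range(1, 10)):
        cycle = (0,) + rest
        if all(adjacent(cycle[i], cycle[(i + 1) % 10]) for i in range(10)):
            return True
    return False

def has_hamiltonian_path():
    # Exhaustive over 10! orderings; slow (tens of seconds) but fine once.
    return any(all(adjacent(p[i], p[i + 1]) for i in range(9))
               for p in permutations(range(10)))

print(has_hamiltonian_cycle())  # False
print(has_hamiltonian_path())   # True
```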
None of the 5 vertex-transitive graphs with no Hamiltonian cycles is a Cayley graph. This observation leads to a weaker version of the conjecture: every finite connected Cayley graph contains a Hamiltonian cycle.
The advantage of the Cayley graph formulation is that such graphs correspond to a finite group $G$ and a generating set $S$. Thus one can ask for which $G$ and $S$ the conjecture holds rather than attack it in full generality.
For directed Cayley graphs (digraphs) the Lovász conjecture is false. Various counterexamples were obtained by Robert Alexander Rankin . Still, many of the below results hold in this restrictive setting.
Every directed Cayley graph of an abelian group has a Hamiltonian path; however, every cyclic group whose order is not a prime power has a directed Cayley graph that does not have a Hamiltonian cycle. [ 4 ] In 1986, D. Witte proved that the Lovász conjecture holds for the Cayley graphs of p -groups . It is open even for dihedral groups , although for special sets of generators some progress has been made.
For the symmetric group $S_n$, there are many attractive generating sets. For example, the Lovász conjecture holds in the following cases of generating sets:
Stong has shown that the conjecture holds for the Cayley graph of the wreath product $\mathbb{Z}_m \wr \mathbb{Z}_n$ with the natural minimal generating set when $m$ is either even or three. In particular this holds for the cube-connected cycles, which can be generated as the Cayley graph of the wreath product $\mathbb{Z}_2 \wr \mathbb{Z}_n$. [5]
For general finite groups , only a few results are known:
Finally, it is known that for every finite group $G$ there exists a generating set of size at most $\log_2 |G|$ such that the corresponding Cayley graph is Hamiltonian (Pak–Radoičić). This result is based on the classification of finite simple groups.
The Lovász conjecture was also established for random generating sets of size $\Omega(\log^5 |G|)$. [8] | https://en.wikipedia.org/wiki/Lovász_conjecture |
In probability theory , if a large number of events are all independent of one another and each has probability less than 1, then there is a positive (possibly small) probability that none of the events will occur. The Lovász local lemma allows a slight relaxation of the independence condition: As long as the events are "mostly" independent from one another and aren't individually too likely, then there will still be a positive probability that none of them occurs. This lemma is most commonly used in the probabilistic method , in particular to give existence proofs .
There are several different versions of the lemma. The simplest and most frequently used is the symmetric version given below. A weaker version was proved in 1975 by László Lovász and Paul Erdős in the article Problems and results on 3-chromatic hypergraphs and some related questions . For other versions, see Alon & Spencer (2000) . In 2020, Robin Moser and Gábor Tardos received the Gödel Prize for their algorithmic version of the Lovász Local Lemma, which uses entropy compression to provide an efficient randomized algorithm for finding an outcome in which none of the events occurs. [ 1 ] [ 2 ]
Let $A_1, A_2, \dots, A_k$ be a sequence of events such that each event occurs with probability at most $p$ and such that each event is independent of all the other events except for at most $d$ of them.
Lemma I (Lovász and Erdős 1973; published 1975). If

$$4pd \leq 1,$$

then there is a nonzero probability that none of the events occurs.
Lemma II (Lovász 1977; published by Joel Spencer [3]). If

$$e\,p\,(d+1) \leq 1,$$

where $e = 2.718\ldots$ is the base of natural logarithms, then there is a nonzero probability that none of the events occurs.
Lemma II today is usually referred to as the "Lovász local lemma".
Lemma III (Shearer 1985 [4]). If

$$p < \frac{(d-1)^{d-1}}{d^{d}},$$

then there is a nonzero probability that none of the events occurs.
The threshold in Lemma III is optimal, and it implies that the bound

$$e\,p\,d \leq 1$$

is also sufficient.
A statement of the asymmetric version (which allows for events with different probability bounds) is as follows:
Lemma (asymmetric version). Let $\mathcal{A} = \{A_1, \ldots, A_n\}$ be a finite set of events in the probability space Ω. For $A \in \mathcal{A}$ let $\Gamma(A)$ denote the neighbours of $A$ in the dependency graph (in the dependency graph, event $A$ is not adjacent to events which are mutually independent of it). If there exists an assignment of reals $x : \mathcal{A} \to [0,1)$ to the events such that

$$\forall A \in \mathcal{A} : \quad \Pr(A) \leq x(A) \prod_{B \in \Gamma(A)} \bigl(1 - x(B)\bigr),$$
then the probability of avoiding all events in $\mathcal{A}$ is positive; in particular,

$$\Pr\left(\bigwedge_{A \in \mathcal{A}} \overline{A}\right) \geq \prod_{A \in \mathcal{A}} \bigl(1 - x(A)\bigr).$$
The symmetric version follows immediately from the asymmetric version by setting

$$x(A) = \frac{1}{d+1} \quad \text{for all } A \in \mathcal{A}$$

to get the sufficient condition

$$p \leq \frac{1}{d+1}\left(1 - \frac{1}{d+1}\right)^{d},$$

since

$$\left(1 - \frac{1}{d+1}\right)^{d} \geq \frac{1}{e}.$$
As is often the case with probabilistic arguments, this theorem is nonconstructive and gives no method of determining an explicit element of the probability space in which no event occurs. However, algorithmic versions of the local lemma with stronger preconditions are also known (Beck 1991; Czumaj and Scheideler 2000). More recently, a constructive version of the local lemma was given by Robin Moser and Gábor Tardos requiring no stronger preconditions.
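To make the constructive version concrete, here is a minimal sketch of the Moser–Tardos resampling scheme in the special case of boolean CNF formulas, where the bad event for a clause is that all of its literals are falsified by the current assignment (the function and variable names are our own; this is an illustration under those assumptions, not the authors' code):

```python
import random

def moser_tardos(n_vars, clauses, rng=random.Random(0)):
    """Resample the variables of violated clauses until no bad event occurs.

    A clause is a list of nonzero ints: +i means variable i should be True,
    -i means it should be False; the clause's bad event is that every one
    of its literals is falsified by the current assignment.
    """
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated(clause):
        return all(assign[abs(lit) - 1] != (lit > 0) for lit in clause)

    while True:
        bad = next((c for c in clauses if violated(c)), None)
        if bad is None:
            return assign
        for lit in bad:                    # resample the event's variables
            assign[abs(lit) - 1] = rng.random() < 0.5

# A small satisfiable 3-CNF: each clause is violated with probability 1/8
# under a uniform random assignment, so the loop terminates quickly here.
clauses = [[1, 2, 3], [-1, 4, 5], [-2, -4, 6], [3, -5, -6]]
print(moser_tardos(6, clauses))
```

The Moser–Tardos theorem guarantees that whenever the asymmetric local lemma condition holds, the expected total number of resampling steps is bounded by $\sum_{A} x(A)/(1 - x(A))$.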
We prove the asymmetric version of the lemma, from which the symmetric version can be derived. By using the principle of mathematical induction we prove that for all $A$ in $\mathcal{A}$ and all subsets $S$ of $\mathcal{A}$ that do not include $A$,

$$\Pr\left(A \mid \bigwedge_{B \in S} \overline{B}\right) \leq x(A).$$

The induction here is applied on the size (cardinality) of the set $S$. For the base case $S = \emptyset$ the statement obviously holds since $\Pr(A_i) \leq x(A_i)$. We need to show that the inequality holds for any subset of $\mathcal{A}$ of a certain cardinality given that it holds for all subsets of a lower cardinality.
Let $S_1 = S \cap \Gamma(A)$ and $S_2 = S \setminus S_1$. We have from Bayes' theorem

$$\Pr\left(A \mid \bigwedge_{B \in S} \overline{B}\right) = \frac{\Pr\left(A \wedge \bigwedge_{B \in S_1} \overline{B} \;\middle|\; \bigwedge_{B \in S_2} \overline{B}\right)}{\Pr\left(\bigwedge_{B \in S_1} \overline{B} \;\middle|\; \bigwedge_{B \in S_2} \overline{B}\right)}.$$
We bound the numerator and denominator of the above expression separately. For this, let $S_1 = \{B_{j_1}, B_{j_2}, \ldots, B_{j_l}\}$. First, exploiting the fact that $A$ does not depend upon any event in $S_2$,

$$\Pr\left(A \wedge \bigwedge_{B \in S_1} \overline{B} \;\middle|\; \bigwedge_{B \in S_2} \overline{B}\right) \leq \Pr\left(A \;\middle|\; \bigwedge_{B \in S_2} \overline{B}\right) = \Pr(A) \leq x(A) \prod_{B \in \Gamma(A)} \bigl(1 - x(B)\bigr). \qquad (1)$$
Expanding the denominator by using Bayes' theorem and then using the inductive assumption, we get

$$\Pr\left(\bigwedge_{B \in S_1} \overline{B} \;\middle|\; \bigwedge_{B \in S_2} \overline{B}\right) = \prod_{i=1}^{l} \left(1 - \Pr\left(B_{j_i} \;\middle|\; \overline{B_{j_1}} \wedge \cdots \wedge \overline{B_{j_{i-1}}} \wedge \bigwedge_{B \in S_2} \overline{B}\right)\right) \geq \prod_{i=1}^{l} \bigl(1 - x(B_{j_i})\bigr) \geq \prod_{B \in \Gamma(A)} \bigl(1 - x(B)\bigr). \qquad (2)$$
The inductive assumption can be applied here since each event is conditioned on a smaller number of other events, i.e. on a subset of cardinality less than $|S|$. From (1) and (2), we get

$$\Pr\left(A \mid \bigwedge_{B \in S} \overline{B}\right) \leq \frac{x(A) \prod_{B \in \Gamma(A)} \bigl(1 - x(B)\bigr)}{\prod_{B \in \Gamma(A)} \bigl(1 - x(B)\bigr)} = x(A),$$
where the denominator is nonzero since the value of $x$ is always in $[0,1)$. Note that we have essentially proved $\Pr\left(\overline{A} \mid \bigwedge_{B \in S} \overline{B}\right) \geq 1 - x(A)$. To get the desired probability, we write it in terms of conditional probabilities, applying Bayes' theorem repeatedly. Hence,

$$\Pr\left(\bigwedge_{A \in \mathcal{A}} \overline{A}\right) = \prod_{i=1}^{n} \Pr\left(\overline{A_i} \;\middle|\; \bigwedge_{j=1}^{i-1} \overline{A_j}\right) \geq \prod_{i=1}^{n} \bigl(1 - x(A_i)\bigr),$$
which is what we had intended to prove.
Suppose 11 n points are placed around a circle and colored with n different colors in such a way that each color is applied to exactly 11 points. In any such coloring, there must be a set of n points containing one point of each color but not containing any pair of adjacent points.
To see this, imagine picking a point of each color randomly, with all points equally likely (i.e., having probability 1/11) to be chosen. The 11 n different events we want to avoid correspond to the 11 n pairs of adjacent points on the circle. For each pair our chance of picking both points in that pair is at most 1/121 (exactly 1/121 if the two points are of different colors, otherwise 0), so we will take p = 1/121 .
Whether a given pair ( a , b ) of points is chosen depends only on what happens in the colors of a and b , and not at all on whether any other collection of points in the other n − 2 colors are chosen. This implies the event " a and b are both chosen" is dependent only on those pairs of adjacent points which share a color either with a or with b .
There are 11 points on the circle sharing a color with a (including a itself), each of which is involved in 2 pairs. This means there are 21 pairs other than (a, b) which include the same color as a, and the same holds true for b. The worst that can happen is that these two sets are disjoint, so we can take d = 42 in the lemma. This gives

$$e\,p\,(d+1) = e \cdot \frac{1}{121} \cdot 43 \approx 0.966 \leq 1.$$
By the local lemma, there is a positive probability that none of the bad events occur, meaning that our set contains no pair of adjacent points. This implies that a set satisfying our conditions must exist. | https://en.wikipedia.org/wiki/Lovász_local_lemma |
In graph theory, the Lovász number of a graph is a real number that is an upper bound on the Shannon capacity of the graph. It is also known as the Lovász theta function and is commonly denoted by $\vartheta(G)$, using a script form of the Greek letter theta to contrast with the upright theta used for Shannon capacity. This quantity was first introduced by László Lovász in his 1979 paper On the Shannon Capacity of a Graph. [1]
Accurate numerical approximations to this number can be computed in polynomial time by semidefinite programming and the ellipsoid method .
The Lovász number of the complement of any graph is sandwiched between the chromatic number and clique number of the graph, and can be used to compute these numbers on graphs for which they are equal, including perfect graphs .
Let $G = (V, E)$ be a graph on $n$ vertices. An ordered set of $n$ unit vectors $U = (u_i \mid i \in V) \subset \mathbb{R}^N$ is called an orthonormal representation of $G$ in $\mathbb{R}^N$ if $u_i$ and $u_j$ are orthogonal whenever vertices $i$ and $j$ are not adjacent in $G$:

$$u_i^{\mathrm{T}} u_j = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } ij \notin E. \end{cases}$$

Clearly, every graph admits an orthonormal representation with $N = n$: just represent vertices by distinct vectors from the standard basis of $\mathbb{R}^N$. [2] Depending on the graph it might be possible to take $N$ considerably smaller than the number of vertices $n$.
The Lovász number $\vartheta$ of graph $G$ is defined as follows:

$$\vartheta(G) = \min_{c, U} \max_{i \in V} \frac{1}{(c^{\mathrm{T}} u_i)^2},$$

where $c$ is a unit vector in $\mathbb{R}^N$ and $U$ is an orthonormal representation of $G$ in $\mathbb{R}^N$. Here minimization implicitly is performed also over the dimension $N$; however, without loss of generality it suffices to consider $N = n$. [3] Intuitively, this corresponds to minimizing the half-angle of a rotational cone containing all representing vectors of an orthonormal representation of $G$. If the optimal angle is $\phi$, then $\vartheta(G) = 1/\cos^2 \phi$ and $c$ corresponds to the symmetry axis of the cone. [4]
Let $G = (V, E)$ be a graph on $n$ vertices. Let $A$ range over all $n \times n$ symmetric matrices such that $a_{ij} = 1$ whenever $i = j$ or vertices $i$ and $j$ are not adjacent, and let $\lambda_{\max}(A)$ denote the largest eigenvalue of $A$. Then an alternative way of computing the Lovász number of $G$ is as follows: [5]

$$\vartheta(G) = \min_{A} \lambda_{\max}(A).$$
The following method is dual to the previous one. Let $B$ range over all $n \times n$ symmetric positive semidefinite matrices such that $b_{ij} = 0$ whenever vertices $i$ and $j$ are adjacent, and such that the trace (sum of diagonal entries) of $B$ is $\operatorname{Tr}(B) = 1$. Let $J$ be the $n \times n$ matrix of ones. Then [6]

$$\vartheta(G) = \max_{B} \operatorname{Tr}(BJ).$$

Here, $\operatorname{Tr}(BJ)$ is just the sum of all entries of $B$.
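Because this dual formulation is itself a semidefinite program, $\vartheta(G)$ can be approximated numerically with a few lines of code. The sketch below assumes the CVXPY convex-optimization library (with its default SDP solver) is available; the function and variable names are our own:

```python
import cvxpy as cp
import numpy as np

def lovasz_number(n, edges):
    """Lovász theta via the dual SDP: maximize the sum of all entries of B
    subject to B being PSD with unit trace and vanishing on edges."""
    B = cp.Variable((n, n), symmetric=True)
    constraints = [B >> 0, cp.trace(B) == 1]
    constraints += [B[i, j] == 0 for (i, j) in edges]
    problem = cp.Problem(cp.Maximize(cp.sum(B)), constraints)
    problem.solve()
    return problem.value

# Sanity check on the 5-cycle: theta(C5) should be close to sqrt(5).
c5_edges = [(i, (i + 1) % 5) for i in range(5)]
print(lovasz_number(5, c5_edges), np.sqrt(5))  # both approx 2.236
```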
The Lovász number can be computed also in terms of the complement graph $\bar{G}$. Let $d$ be a unit vector and $U = (u_i \mid i \in V)$ be an orthonormal representation of $\bar{G}$. Then [7]

$$\vartheta(G) = \max_{d, U} \sum_{i \in V} (d^{\mathrm{T}} u_i)^2.$$
The Lovász number has been computed for the following graphs: [ 8 ]
If $G \boxtimes H$ denotes the strong graph product of graphs $G$ and $H$, then [9]

$$\vartheta(G \boxtimes H) = \vartheta(G)\,\vartheta(H).$$
If $\bar{G}$ is the complement of $G$, then [10]

$$\vartheta(G)\,\vartheta(\bar{G}) \geq n,$$

with equality if $G$ is vertex-transitive.
The Lovász "sandwich theorem" states that the Lovász number always lies between two other numbers that are NP-complete to compute. [ 11 ] More precisely, ω ( G ) ≤ ϑ ( G ¯ ) ≤ χ ( G ) , {\displaystyle \omega (G)\leq \vartheta ({\bar {G}})\leq \chi (G),} where ω ( G ) {\displaystyle \omega (G)} is the clique number of G {\displaystyle G} (the size of the largest clique ) and χ ( G ) {\displaystyle \chi (G)} is the chromatic number of G {\displaystyle G} (the smallest number of colors needed to color the vertices of G {\displaystyle G} so that no two adjacent vertices receive the same color).
The value of $\vartheta(G)$ can be formulated as a semidefinite program and numerically approximated by the ellipsoid method in time bounded by a polynomial in the number of vertices of $G$. [12] For perfect graphs, the chromatic number and clique number are equal, and therefore are both equal to $\vartheta(\bar{G})$. By computing an approximation of $\vartheta(\bar{G})$ and then rounding to the nearest integer value, the chromatic number and clique number of these graphs can be computed in polynomial time.
The Shannon capacity of graph $G$ is defined as follows:

$$\Theta(G) = \sup_{k} \sqrt[k]{\alpha(G^k)} = \lim_{k \to \infty} \sqrt[k]{\alpha(G^k)},$$

where $\alpha(G)$ is the independence number of graph $G$ (the size of a largest independent set of $G$) and $G^k$ is the strong graph product of $G$ with itself $k$ times. Clearly, $\Theta(G) \geq \alpha(G)$. However, the Lovász number provides an upper bound on the Shannon capacity of graph, [13] hence

$$\alpha(G) \leq \Theta(G) \leq \vartheta(G).$$
For example, let the confusability graph of the channel be C 5 {\displaystyle C_{5}} , a pentagon . Since the original paper of Shannon (1956) it was an open problem to determine the value of Θ ( C 5 ) {\displaystyle \Theta (C_{5})} . It was first established by Lovász (1979) that Θ ( C 5 ) = 5 {\displaystyle \Theta (C_{5})={\sqrt {5}}} .
Clearly, Θ ( C 5 ) ≥ α ( C 5 ) = 2 {\displaystyle \Theta (C_{5})\geq \alpha (C_{5})=2} . However, α ( C 5 2 ) ≥ 5 {\displaystyle \alpha (C_{5}^{2})\geq 5} , since "11", "23", "35", "54", "42" are five mutually non-confusable messages (forming a five-vertex independent set in the strong square of C 5 {\displaystyle C_{5}} ), thus Θ ( C 5 ) ≥ 5 {\displaystyle \Theta (C_{5})\geq {\sqrt {5}}} .
To show that this bound is tight, let U = ( u 1 , … , u 5 ) {\displaystyle U=(u_{1},\dots ,u_{5})} be the following orthonormal representation of the pentagon: u k = ( cos θ sin θ cos φ k sin θ sin φ k ) , cos θ = 1 5 4 , φ k = 2 π k 5 {\displaystyle u_{k}={\begin{pmatrix}\cos {\theta }\\\sin {\theta }\cos {\varphi _{k}}\\\sin {\theta }\sin {\varphi _{k}}\end{pmatrix}},\quad \cos {\theta }={\frac {1}{\sqrt[{4}]{5}}},\quad \varphi _{k}={\frac {2\pi k}{5}}} and let c = ( 1 , 0 , 0 ) {\displaystyle c=(1,0,0)} . By using this choice in the initial definition of Lovász number, we get ϑ ( C 5 ) ≤ 5 {\displaystyle \vartheta (C_{5})\leq {\sqrt {5}}} . Hence, Θ ( C 5 ) = 5 {\displaystyle \Theta (C_{5})={\sqrt {5}}} .
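The umbrella construction above can be checked numerically. A short sketch, assuming only numpy, verifies that vectors of non-adjacent vertices are orthogonal and that every ( c T u k ) 2 {\displaystyle (c^{\mathrm {T} }u_{k})^{2}} equals 1 / 5 {\displaystyle 1/{\sqrt {5}}} , so the bound from the initial definition is 5 {\displaystyle {\sqrt {5}}} :

```python
# Numeric check of the pentagon "umbrella" orthonormal representation.
import numpy as np

cos_t = 1 / 5 ** 0.25                       # cos(theta) = 5^(-1/4)
sin_t = np.sqrt(1 - cos_t ** 2)
phi = 2 * np.pi * np.arange(1, 6) / 5
U = np.column_stack([np.full(5, cos_t),
                     sin_t * np.cos(phi),
                     sin_t * np.sin(phi)])   # rows are the u_k
c = np.array([1.0, 0.0, 0.0])
print(np.round(U @ U.T, 6))                  # non-adjacent pairs give 0
print(max(1 / (c @ u) ** 2 for u in U))      # 2.2360... = sqrt(5)
```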
However, there exist graphs for which the Lovász number and Shannon capacity differ, so the Lovász number cannot in general be used to compute exact Shannon capacities. [ 14 ]
The Lovász number has been generalized for "non-commutative graphs" in the context of quantum communication . [ 15 ] The Lovász number also arises in quantum contextuality [ 16 ] in an attempt to explain the power of quantum computers . [ 17 ] | https://en.wikipedia.org/wiki/Lovász_number
A low-FODMAP diet is a diet that globally restricts consumption of all fermentable carbohydrates ( FODMAPs ) [ 1 ] and is recommended only for a short time. It is recommended for managing patients with irritable bowel syndrome (IBS) and can reduce digestive symptoms of IBS including bloating and flatulence . [ 2 ]
If the problem lies with indigestible fiber instead, the patient may be directed to a low-residue diet .
Below are low-FODMAP foods categorized by group according to the Monash University "Low-FODMAP Diet". [ 3 ] [ 4 ]
Other sources confirm the suitability of these and suggest some additional foods. [ 5 ]
The basis of many functional gastrointestinal disorders is distension of the intestinal lumen . Such luminal distension may induce pain, a sensation of bloating , abdominal distension and motility disorders. Therapeutic approaches seek to reduce factors that lead to distension, particularly of the distal small and proximal large intestine . Food substances that can induce distension are those that are poorly absorbed in the proximal small intestine, osmotically active, and fermented by intestinal bacteria with hydrogen (as opposed to methane ) production. The small molecule FODMAPs exhibit these characteristics. [ 1 ]
Ingestion of certain short-chain carbohydrates, including lactose, fructose and sorbitol, fructans and galactooligosaccharides , can induce gastrointestinal discomfort similar to that seen in IBS. Dietary restriction of short-chain carbohydrates is associated with improvement of symptoms. [ 6 ]
These short-chain carbohydrates (lactose, fructose and sorbitol, fructans and GOS) behave similarly in the intestine. Firstly, being small molecules and either poorly absorbed or not absorbed at all, they drag water into the intestine via osmosis. [ 7 ] Secondly, these molecules are readily fermented by colonic bacteria, so upon malabsorption in the small intestine they enter the large intestine where they generate gases (hydrogen, carbon dioxide and methane). [ 1 ] The dual actions of these carbohydrates cause an expansion in volume of intestinal contents, which stretches the intestinal wall and stimulates nerves in the gut. It is this 'stretching' that triggers the sensations of pain and discomfort that are commonly experienced by people with IBS. [ 8 ]
The low-FODMAP diet is sometimes used for: irritable bowel syndrome, functional gastrointestinal symptoms in inflammatory bowel disease , and small intestinal bacterial overgrowth (each discussed below).
The low-FODMAP diet is intended to be used only after a full medical evaluation. This ensures correct diagnosis and treatment. [ 12 ] Use of a low-FODMAP diet without medical advice can lead to serious health risks, including nutritional deficiencies and misdiagnosis of celiac disease.
Sometimes the diet is used in phases. Firstly, foods high in FODMAPs are eliminated so that intake falls below a threshold value (restriction phase). [ 13 ] The restriction phase does not usually last more than 6 weeks. [ 11 ] After this stage, products that were eliminated are re-introduced into the diet one at a time. [ 13 ] This allows for assessment of the effects of different types of FODMAP on the individual. The final stage involves creation of a long-term diet based on the evidence collected from the previous stage. [ 13 ]
There is evidence that a dietician-supervised low-FODMAP diet is the best available way to control IBS symptoms, though there is a lack of evidence on possible adverse effects . [ 14 ]
The beneficial effect of low-FODMAP for people with IBS may be related to reduced osmotic load in the gut or changes in gut-brain axis signaling. [ 9 ] The low-FODMAP diet does not cause any significant change in the gut microbiota in people with IBS. [ 9 ] The effectiveness of low-FODMAP diet in children with IBS is unclear. [ 15 ]
Because the consumption of gluten is suppressed or reduced with a low-FODMAP diet, the improvement of digestive symptoms with this diet may not be related to the withdrawal of FODMAPs but to that of gluten, indicating the presence of unrecognized celiac disease . A low-FODMAP diet can thereby prevent its diagnosis and correct treatment, with the consequent risk of several serious health complications, including various types of cancer. [ 12 ] [ 16 ]
There is only limited evidence of its effectiveness in treating functional symptoms in inflammatory bowel disease , from small studies that are susceptible to bias. [ 17 ] [ 18 ] The low-FODMAP diet is not recommended for ulcerative colitis due to the risk of disruption of nutritional status and insufficient evidence of beneficial effects. [ 13 ]
The low-FODMAP diet may reduce symptoms in people with small intestinal bacterial overgrowth. [ 11 ] However, it is not recommended as a long term diet for people with small intestinal bacterial overgrowth. [ 11 ]
The effect of the low-FODMAP diet on the gut microbiota is not fully understood. [ 9 ] It is thought that reduction of fermentable carbohydrates affects the composition and abundance of gut bacteria. FODMAPs are a main food source ( prebiotic ) for many gut bacteria. Deprived of this food source, there is less bacterial fermentation in the gut and less production of intestinal gas, which may also create conditions which favor certain species of bacteria and disfavor others. [ 9 ]
There is some evidence for negative effects of the low-FODMAP diet, such as reduction in the numbers of beneficial bacteria (e.g., Bifidobacteria ). [ 9 ] Such changes are comparable to dysbiosis . [ 13 ] Other studies report no significant change in gut microbiota from the low-FODMAP diet. [ 9 ] There is also some evidence for positive effects on the gut microbiota, such as improved microbial diversity and increased numbers of potentially beneficial bacterial species. [ 9 ] The effect of the low-FODMAP diet on gut microbiota also seems to depend on the medical condition, with more profound changes in microbiota occurring in celiac disease or inflammatory bowel disease, but no significant microbiota changes occurring in IBS. [ 9 ]
Overall, the low-FODMAP diet may have a positive effect on the gut microbiota compared to normal diets. [ 9 ] However, the evidence is mixed and there is significant study heterogeneity , probably because of variation in the methodology and length of the studies, and also differences in the studied populations such as genetics and baseline diet. [ 9 ]
Most of the research studies on the effects of the low-FODMAP diet are short term, usually lasting about 28 days. [ 9 ] When the diet is suddenly changed, the gut microbiota may undergo rapid changes in the short term. The long-term stability of the changes in microbiota caused by the low-FODMAP diet, and its effects on health, are unclear. [ 9 ] It is not known if the changes in microbiota are irreversible. [ 13 ] Long-term use of a low-FODMAP diet may have negative effects, owing to a detrimental impact on the gut microbiota and metabolome . [ 8 ] [ 19 ] [ 20 ] [ 21 ] It should only be used for short periods of time and under the advice of a specialist. [ 22 ] The true impact of this diet on health is not fully understood. [ 19 ] [ 20 ] The restriction phase should not last for more than 6 weeks. [ 11 ]
A low-FODMAP diet is highly restrictive in various groups of nutrients, can be impractical to follow in the long-term, and may add an unnecessary financial burden. [ 18 ]
The FODMAP concept was first published in 2005. [ 23 ] In this paper, it was proposed that a collective reduction in the dietary intake of all indigestible or slowly absorbed, short-chain carbohydrates would minimize stretching of the intestinal wall. This was proposed to reduce stimulation of the gut's nervous system and provide the best chance of reducing symptom generation in people with IBS (see above). At the time, there was no collective term for indigestible or slowly absorbed, short-chain carbohydrates, so the term 'FODMAP' was created to improve understanding and facilitate communication of the concept. [ 23 ]
The low FODMAP diet was originally developed by a research team at Monash University in Melbourne, Australia. [ 3 ] The Monash team undertook the first research to investigate whether a low FODMAP diet improved symptom control in patients with IBS and established the mechanism by which the diet exerted its effect. [ 8 ] [ 24 ] Monash University also established a rigorous food analysis program to measure the FODMAP content of a wide selection of Australian and international foods. [ 25 ] [ 26 ] [ 27 ] The FODMAP composition data generated by Monash University updated previous data that was based on limited literature, with guesses (sometimes wrong) made where there was little information. [ 28 ] | https://en.wikipedia.org/wiki/Low-FODMAP_diet |
Identifiers: PDB 3EWV , 2N80 , 2N97 , 2N83 ; Entrez 4804 (human), 18053 (mouse); Ensembl ENSG00000064300 (human), ENSMUSG00000000120 (mouse); UniProt P08138 (human), Q9Z0W1 (mouse); RefSeq (mRNA) NM_002507 , NM_033217 ; RefSeq (protein) NP_002498 , NP_150086 .
The p75 neurotrophin receptor (p75NTR) was first identified in 1973 as the low-affinity nerve growth factor receptor (LNGFR), [ 5 ] [ 6 ] before it was discovered that p75NTR binds other neurotrophins as well as it binds nerve growth factor . [ 7 ] [ 8 ] p75NTR is a neurotrophic factor receptor . Neurotrophic factor receptors bind neurotrophins including nerve growth factor , neurotrophin-3 , brain-derived neurotrophic factor , and neurotrophin-4 . All neurotrophins bind to p75NTR, including their immature pro-neurotrophin forms. [ 9 ] [ 10 ] Neurotrophic factor receptors, including p75NTR, are responsible for ensuring a proper density-to-target ratio of developing neurons, refining broader maps in development into precise connections. p75NTR is involved in pathways that promote both neuronal survival and neuronal death. [ 7 ]
p75NTR is a member of the tumor necrosis factor receptor superfamily . p75NTR/LNGFR was the first member of this large family of receptors to be characterized; [ 5 ] [ 6 ] [ 11 ] the family now contains about 25 receptors, including tumor necrosis factor receptor 1 (TNFR1) and TNFR2, Fas, RANK, and CD40.
All members of the TNFR superfamily contain structurally related cysteine-rich modules in their ECDs. p75NTR is an unusual member of this family due to its propensity to dimerize rather than trimerize, its ability to act as a tyrosine kinase co-receptor, and the fact that the neurotrophins are structurally unrelated to the ligands that typically bind TNFR family members. Indeed, with the exception of p75NTR, essentially all members of the TNFR family preferentially bind structurally related trimeric Type II transmembrane ligands, members of the TNF ligand superfamily. [ 12 ]
p75NTR is a type I transmembrane protein , with a molecular weight of 75 kDa, determined by glycosylation through both N- and O-linkages in the extracellular domain. [ 13 ] It consists of an extracellular domain, a transmembrane domain and an intracellular domain. The extracellular domain consists of a stalk domain connecting the transmembrane domain and four cysteine-rich repeat domains, CRD1, CRD2, CRD3, and CRD4, which are negatively charged, a property that facilitates neurotrophin binding. The intracellular part is a globular domain, known as a death domain, which consists of two sets of perpendicular helices arranged in sets of three. It connects to the transmembrane domain through a flexible linker region at its N-terminus. [ 14 ] Notably, in contrast to the type I death domain found in other TNFR proteins, the type II intracellular death domain of p75NTR does not self-associate. This was an early indication that p75NTR does not signal death through the same mechanism as the TNFR death domains, although the ability of the p75NTR death domain to activate other second messengers is conserved. [ 13 ]
The p75ECD-binding interface to NT-3 can be divided into three main contact sites (two in the case of NGF) that are stabilized by hydrophobic interactions, salt bridges, and hydrogen bonds. The junction region between CRD1 and CRD2 forms site 1, which contains five hydrogen bonds and one salt bridge. Site 2 is formed by equal contributions from CRD3 and CRD4 and involves two salt bridges and two hydrogen bonds. Site 3, in CRD4, includes only one salt bridge. [ 15 ]
Neurotrophins that interact with p75NTR include NGF , NT-3 , BDNF , and NT-4/5 . [ 7 ] Neurotrophins activating p75NTR may initiate apoptosis (for example, via c-Jun N-terminal kinase signaling and subsequent activation of p53, Bax-like proteins and caspases). [ 13 ] This effect can be counteracted by anti-apoptotic signaling by TrkA . [ 16 ] Neurotrophin binding to p75NTR, in addition to apoptotic signaling, can also promote neuronal survival (for example, via NF-kB activation). [ 17 ] There are multiple targets of Akt that could play a role in mediating p75NTR-dependent survival, but one of the more intriguing possibilities is that Akt-induced phosphorylation of IkB kinase 1 (IKK1) plays a role in the induction of NF-kB. [ 12 ]
Proforms of NGF and BDNF (proNGF and proBDNF) are precursors to NGF and BDNF. proNGF and proBDNF interact with p75NTR and cause p75NTR-mediated apoptosis without activating TrkA-mediated survival mechanisms. Cleavage of proforms into mature Neurotrophins allows the mature NGF and BDNF to activate TrkA-mediated survival mechanisms. [ 18 ] [ 19 ]
Recent research has suggested a number of roles for the LNGFR, including in development of the eyes and sensory neurons, [ 20 ] [ 21 ] and in repair of muscle and nerve damage in adults. [ 22 ] [ 23 ] [ 24 ] Two distinct subpopulations of olfactory ensheathing glia have been identified [ 25 ] with high or low cell-surface expression of the low-affinity nerve growth factor receptor (p75).
Sortilin is required for many apoptosis-promoting p75NTR reactions, functioning as a co-receptor for the binding of neurotrophins such as BDNF . Pro-neurotrophins (such as proBDNF) bind especially well to p75NTR when sortilin is present. [ 26 ]
When p75NTR initiates apoptosis, NGF binding to Tropomyosin receptor kinase A (TrkA) can negate p75NTR apoptotic effects. p75NTR c-Jun kinase pathway activation (which causes apoptosis) is suppressed when NGF binds to TrkA. p75NTR activation of NF-kB , which promotes survival, is unaffected by NGF binding to TrkA. [ 26 ]
p75NTR functions in a complex with Nogo-66 receptor (NgR1) to mediate RhoA-dependent inhibition of growth of regenerating axons exposed to inhibitory proteins of CNS myelin, such as Nogo , MAG or OMgP . Without p75NTR, OMgP can activate RhoA and inhibit CNS axon regeneration. Coexpression of p75NTR and OMgP suppress RhoA activation. A complex of NgR1, p75NTR and LINGO1 can activate RhoA. [ 27 ]
NF-kB is a transcription factor that can be activated by p75NTR. Nerve growth factor (NGF) is a neurotrophin that promotes neuronal growth, and, in the absence of NGF, neurons die. Neuronal death in the absence of NGF can be prevented by NF-kB activation. Phosphorylated IκB kinase binds to and activates NF-kB before separating from NF-kB. After separation, IκB degrades and NF-kB continues to the nucleus to initiate pro-survival transcription. NF-kB also promotes neuronal survival in conjunction with NGF. [ 17 ]
NF-kB activity is activated by p75NTR, not via Trk receptors . NF-kB activity does not affect brain-derived neurotrophic factor promotion of neuronal survival. [ 17 ]
p75NTR serves as a regulator for actin assembly. Ras homolog family member A ( RhoA ) causes the actin cytoskeleton to become rigid which limits growth cone mobility and inhibits neuronal elongation in the developing nervous system. p75NTR without a ligand bound activates RhoA and limits actin assembly, but neurotrophin binding to p75NTR can inactivate RhoA and promote actin assembly. [ 28 ] p75NTR associates with the Rho GDP dissociation inhibitor (RhoGDI) , and RhoGDI associates with RhoA . Interactions with Nogo can strengthen the association between p75NTR and RhoGDI. Neurotrophin binding to p75NTR inhibits the association of RhoGDI and p75NTR, thereby suppressing RhoA release and promoting growth cone elongation (inhibiting RhoA actin suppression). [ 29 ]
Neurotrophin binding to p75NTR activates the c-Jun N-terminal kinases (JNK) signaling pathway causing apoptosis of developing neurons. JNK, through a series of intermediates, activates p53 and p53 activates Bax which initiates apoptosis. TrkA can prevent p75NTR-mediated JNK pathway apoptosis. [ 30 ]
JNK can directly phosphorylate Bim-EL, a splicing isoform of Bcl-2 interacting mediator of cell death (Bim) , which activates Bim-EL apoptotic activity. JNK activation is required for apoptosis but c-jun , a protein in the JNK signaling pathway, is not always required. [ 16 ]
LNGFR also activates a caspase -dependent signaling pathway that promotes developmental axon pruning, and axon degeneration in neurodegenerative disease. [ 31 ]
In the apoptosis pathway, members of the TNF receptor superfamily assemble a death-inducing signaling complex (DISC) in which TRADD or FADD bind directly to the receptor's death domain, thereby allowing aggregation and activation of Caspase 8 and subsequent activation of the Caspase cascade. However, Caspase 8 induction does not appear to be involved in p75NTR-mediated apoptosis, but Caspase 9 is activated during p75NTR-mediated killing. [ 12 ]
Huntington's disease is characterized by cognitive impairments. There is increased expression of p75NTR in the hippocampus of Huntington's disease patients (including mice models and humans). Over expression of p75NTR in mice causes cognitive impairments similar to Huntington's disease. p75NTR is linked to reduced numbers of dendritic spines in the hippocampus, likely through p75NTR interactions with Transforming protein RhoA . Modulating p75NTR function could be a future direction in treating Huntington's disease. [ 32 ]
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease characterized by progressive muscular paralysis reflecting degeneration of motor neurons in the primary motor cortex, corticospinal tracts, brainstem and spinal cord. In one study using the superoxide dismutase 1 (SOD1) mutant mouse, an ALS model which develops severe neurodegeneration, the expression of p75NTR correlated with the extent of degeneration, and p75NTR knockdown delayed disease progression. [ 33 ] [ 34 ] [ 35 ]
Alzheimer's disease (AD) is the most common cause of dementia in the elderly. AD is a neurodegenerative disease characterized by the loss of cognitive functioning (thinking, remembering and reasoning) and behavioral abilities to such an extent that it interferes with a person's daily life and activities. The neuropathological hallmarks of AD include amyloid plaques and neurofibrillary tangles, which lead to neuronal death. Studies in animal models of AD have shown that p75NTR contributes to amyloid β-induced neuronal damage. [ 36 ] In humans with AD, increases in p75NTR expression relative to TrkA have been suggested to be responsible for the loss of cholinergic neurons. [ 37 ] [ 38 ] Increases in proNGF in AD [ 39 ] indicate that the neurotrophin environment is favorable for p75NTR/sortilin signaling and support the theory that age-related neural damage is facilitated by a shift toward proNGF-mediated signaling. [ 35 ] A recent study found that activation of Ngfr signaling in astroglia of an Alzheimer's disease mouse model enhanced neurogenesis and reduced two hallmarks of Alzheimer's disease. [ 40 ] This study also found that NGFR signaling in humans is age-related and correlates with the proliferative potential of neural progenitors.
p75NTR has been implicated as a marker for cancer stem cells in melanoma and other cancers. Melanoma cells transplanted into an immunodeficient mouse model were shown to require expression of CD271 in order to grow a melanoma. [ 41 ] Gene knockdown of CD271 has also been shown to abolish neural crest stem cell properties of melanoma cells and decrease genomic stability, leading to reduced migration, tumorigenicity and proliferation, and induction of apoptosis. [ 42 ] [ 43 ] [ 44 ] Furthermore, increased levels of CD271 were observed in brain-metastatic melanoma cells, whereas resistance to the BRAF inhibitor vemurafenib supposedly selects for highly malignant brain- and lung-metastasizing melanoma cells. [ 45 ] [ 44 ] [ 46 ] [ 47 ] Recently, expression of p75NTR (NGFR) was associated with progressive intracranial disease in melanoma patients. [ 48 ]
Low-affinity nerve growth factor receptor has been shown to interact with: | https://en.wikipedia.org/wiki/Low-affinity_nerve_growth_factor_receptor |
Low-angle laser light scattering or LALLS is an application of light scattering that is particularly useful in conjunction with the technique of size-exclusion chromatography (SEC), [ 1 ] one of the most powerful and widely used techniques to study the molecular mass distribution of a polymer.
Typically the eluent of the SEC column is allowed to pass through both a refractive index detector (which measures the concentration in the solution as a function of time) and a laser scattering cell. The scattered intensity is measured as a function of time at a small angle with respect to the laser beam. The low-angle light-scattering data can be analyzed if one assumes that the low-angle data is the same as the scattering at zero angle. For the relevant equations, see the article on static light scattering . Under these conditions the laser signal together with the concentration data can be translated into a curve that yields both M n and M w , the number-average and weight-average molar masses respectively. The combination of these two values gives information on the linearity of the polymer.
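A minimal numerical sketch of that final step, assuming numpy and invented slice data, shows how the two detector signals combine into M n and M w:

```python
# Hedged sketch: combining RI-detector concentrations with LALLS-derived
# molar masses, slice by slice, into Mn and Mw.  All numbers are invented.
import numpy as np

c = np.array([0.1, 0.4, 0.3, 0.2])   # slice concentrations (arbitrary units)
M = np.array([2e4, 5e4, 1e5, 3e5])   # slice molar masses (g/mol)

Mn = c.sum() / (c / M).sum()         # number-average molar mass
Mw = (c * M).sum() / c.sum()         # weight-average molar mass
print(Mn, Mw, Mw / Mn)               # Mw/Mn = dispersity
```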
The technique is sometimes complemented or combined with viscometry and polystyrene standards are available [ 2 ] for validation of the results.
| https://en.wikipedia.org/wiki/Low-angle_laser_light_scattering
A low-barrier hydrogen bond ( LBHB ) is a special type of hydrogen bond . LBHBs can occur when the pKa values of the two heteroatoms are closely matched, which allows the hydrogen to be shared more equally between them. This hydrogen-sharing causes the formation of especially short, strong hydrogen bonds. [ 1 ]
Standard hydrogen bonds are longer (e.g. 2.8 Å for an O···O h-bond), and the hydrogen ion clearly belongs to one of the heteroatoms . When the pKa values of the heteroatoms are closely matched, an LBHB becomes possible at a shorter distance (~2.55 Å). When the distance decreases further (< 2.29 Å) the bond is characterized as a single-well or short-strong hydrogen bond. [ 3 ]
Low barrier hydrogen bonds occur in the water-excluding environments of proteins. [ 4 ] Multiple residues act together in a charge-relay system to control the pKa values of the residues involved. LBHBs also occur on the surfaces of proteins, but are unstable due to their proximity to bulk water, and the conflicting requirements of strong salt-bridges in protein-protein interfaces. [ 4 ]
Low-barrier hydrogen bonds have been proposed to be relevant to enzyme catalysis in two types of circumstance. [ 5 ] Firstly, a low-barrier hydrogen bond in a charge relay network within an active site could activate a catalytic residue (e.g. between acid and base within a catalytic triad ). Secondly, an LBHB could form during catalysis to stabilise a transition state (e.g. with the substrate transition state in an oxyanion hole ). Both of these mechanisms are contentious, with theoretical and experimental evidence split on whether they occur. [ 6 ] [ 7 ] Since the 2000s, the general consensus has been that LBHBs are not used by enzymes to aid catalysis. [ 7 ] [ 8 ] However, in 2012 a low-barrier hydrogen bond was proposed to be involved in phosphate-arsenate discrimination for a phosphate transport protein. [ 9 ] This finding might indicate the possibility of low-barrier hydrogen bonds playing a catalytic role in ion size selection in some very rare cases. | https://en.wikipedia.org/wiki/Low-barrier_hydrogen_bond
Low cycle fatigue (LCF) has two fundamental characteristics: plastic deformation in each cycle, and a low-cycle phenomenon in which the material has finite endurance for this type of load. The term cycle refers to repeated applications of stress that lead to eventual fatigue and failure; low-cycle pertains to the relatively small number of cycles a component endures before failure.
Research on fatigue has focused mainly on two fields: size design in aeronautics and energy production, using advanced calculation methods. LCF results allow the behavior of the material to be studied in greater depth, to better understand the complex mechanical and metallurgical phenomena involved ( crack propagation , work softening, strain concentration, work hardening , etc.). [ 1 ]
Common factors that have been attributed to low-cycle fatigue (LCF) are high stress levels and a low number of cycles to failure. Many studies have been carried out, particularly in the last 50 years on metals, on the relationship between temperature , stress, and number of cycles to failure. Tests are used to plot an S-N curve , and it has been shown that the number of cycles to failure decreases with increasing temperature. However, extensive testing is costly, so researchers mainly resort to finite element analysis using computer software. [ 2 ]
Through many experiments, it has been found that characteristics of a material can change as a result of LCF. Fracture ductility tends to decrease, with the magnitude depending on the presence of small cracks to begin with. To perform these tests, an electro-hydraulic servo-controlled testing machine is generally used, as it is capable of holding the stress amplitude constant. It was also discovered that specimens with holes already drilled in them were more susceptible to crack propagation during low-cycle fatigue tests, and hence showed a greater decrease in fracture ductility. This was true despite the small hole sizes, ranging from 40 to 200 μm. [ 3 ]
When a component is subject to low cycle fatigue, it is repeatedly plastically deformed. For example, if a part were to be loaded in tension until it was permanently deformed (plastically deformed), that would be considered one quarter cycle of low cycle fatigue, or LCF. In order to complete a full cycle the part would need to be deformed back into its original shape. The number of LCF cycles that a part can withstand before failing is much lower than that of regular fatigue. [ 4 ]
This condition of high cyclic strain is often the result of extreme operating conditions, such as high changes in temperature. Thermal stresses originating from an expansion or contraction of materials can exacerbate the loading conditions on a part and LCF characteristics can come into play.
A commonly used equation that describes the behavior of low-cycle fatigue is the Coffin-Manson relation (published by L. F. Coffin in 1954 and S. S. Manson in 1953): Δ ε 2 = ε ´ f ( 2 N ) c + σ ´ f E ( 2 N ) b {\displaystyle {\frac {\Delta \varepsilon }{2}}={\acute {\varepsilon }}_{f}(2N)^{c}+{\frac {{\acute {\sigma }}_{f}}{E}}(2N)^{b}}
where Δ ε / 2 {\displaystyle \Delta \varepsilon /2} is the total strain amplitude, ε ´ f {\displaystyle {\acute {\varepsilon }}_{f}} is the fatigue ductility coefficient, σ ´ f {\displaystyle {\acute {\sigma }}_{f}} is the fatigue strength coefficient, E {\displaystyle E} is the Young's modulus, 2 N {\displaystyle 2N} is the number of load reversals to failure ( N {\displaystyle N} cycles), and c {\displaystyle c} and b {\displaystyle b} are the fatigue ductility and fatigue strength exponents, respectively.
The first half of the equation indicates the plastic region, and the second half indicates the elastic region. [ 5 ]
In the above Coffin-Manson relation, the constants b and c are determined by the following equations, in which n ´ {\displaystyle {\acute {n}}} is the cyclic strain-hardening exponent:
c = − 1 1 + 5 n ´ {\displaystyle c={\frac {-1}{1+5{\acute {n}}}}}
b = − n ´ 1 + 5 n ´ {\displaystyle b={\frac {-{\acute {n}}}{1+5{\acute {n}}}}}
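A worked numeric sketch of these relations follows; the material constants are illustrative assumptions, not data for any specific alloy:

```python
# Coffin-Manson strain-life sketch with assumed material constants.
import numpy as np

n_c     = 0.2                            # cyclic strain-hardening exponent (assumed)
b       = -n_c / (1 + 5 * n_c)           # = -0.1, from the relation above
c       = -1.0 / (1 + 5 * n_c)           # = -0.5, from the relation above
sigma_f = 900e6                          # fatigue strength coefficient, Pa (assumed)
E       = 200e9                          # Young's modulus, Pa (assumed)
eps_f   = 0.6                            # fatigue ductility coefficient (assumed)

def strain_amplitude(N):
    """Total strain amplitude for failure after N cycles (2N reversals)."""
    return eps_f * (2 * N) ** c + (sigma_f / E) * (2 * N) ** b

for N in (10, 100, 1000, 10000):
    print(N, strain_amplitude(N))        # amplitude falls as endurance grows
```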
One noteworthy event in which the failure was a result of LCF was the 1994 Northridge earthquake . Many buildings and bridges collapsed, and as a result over 9,000 people were injured. [ 6 ] Researchers at the University of Southern California analyzed the main areas of a ten-story building that were subjected to low-cycle fatigue. Unfortunately, there was limited experimental data available to directly construct an S-N curve for low-cycle fatigue, so most of the analysis consisted of plotting the high-cycle fatigue behavior on an S-N curve and extending the line of that graph to create the low-cycle portion of the curve using the Palmgren-Miner method. Ultimately, this data was used to more accurately predict and analyze similar types of damage that the ten-story steel building in Northridge faced. [ 7 ]
Another more recent event was the 2010 Chile earthquake , in which several researchers from the University of Chile made reports of multiple reinforced concrete structures damaged throughout the country by the seismic event. Many structural elements such as beams, walls and columns failed due to fatigue, exposing the steel reinforcements used in the design with clear signs of longitudinal buckling . [ 9 ] [ 10 ] This event caused Chilean seismic design standards to be updated based on observations on damaged structures caused by the earthquake. [ 11 ] | https://en.wikipedia.org/wiki/Low-cycle_fatigue |
The low-density lipoprotein receptor gene family codes for a class of structurally related cell surface receptors that fulfill diverse biological functions in different organs, tissues, and cell types. [ 3 ] The role that is most commonly associated with this evolutionarily ancient family is cholesterol homeostasis (maintenance of appropriate concentration of cholesterol). In humans, excess cholesterol in the blood is captured by low-density lipoprotein (LDL) and removed by the liver via endocytosis of the LDL receptor . [ 4 ] Recent evidence indicates that the members of the LDL receptor gene family are active in the cell signalling pathways between specialized cells in many, if not all, multicellular organisms. [ 5 ] [ 6 ]
There are seven members of the LDLR family in mammals, namely: the LDL receptor ( LDLR ), the VLDL receptor ( VLDLR ), ApoER2 ( LRP8 ), LRP4 , LRP1 , LRP1B , and megalin ( LRP2 ).
Listed below are human proteins containing low-density lipoprotein receptor domains:
C6 ; C7 ; 8A ; 8B ; C9 ; CD320 ; CFI; CORIN ; DGCR2 ; HSPG2 ; LDLR ; LDLRAD2 ; LDLRAD3 ; LRP1 ; LRP10 ; LRP11 ; LRP12 ; LRP1B ; LRP2 ; LRP3 ; LRP4 ; LRP5 ; LRP6 ; LRP8 ; MAMDC4 ; MFRP ; PRSS7 ; RXFP1 ; RXFP2 ; SORL1 ; SPINT1 ; SSPO ; ST14 ; TMPRSS4 ; TMPRSS6 ; TMPRSS7 ; TMPRSS9 ( serase-1B ); VLDLR ;
EGF ; LDLR ; LRP1 ; LRP10 ; LRP1B ; LRP2 ; LRP4 ; LRP5 ; LRP5L ; LRP6 ; LRP8 ; NID1 ; NID2 ; SORL1 ; VLDLR ;
The members of the LDLR family are characterized by distinct functional domains present in characteristic numbers. These modules are: cysteine-rich complement-type repeats (type A repeats), which form the ligand-binding region; epidermal growth factor (EGF)-like repeats (type B repeats); YWTD motifs forming a β-propeller domain; a single transmembrane segment; and a cytoplasmic tail harboring signaling motifs such as NPxY.
In addition to these domains which can be found in all receptors of the gene family, LDL receptor and certain isoforms of ApoER2 and VLDLR contain a short region which can undergo O-linked glycosylation , known as O-linked sugar domain. ApoER2 moreover, can harbour a cleavage site for the protease furin between type A and type B repeats which enables production of a soluble receptor fragment by furin-mediated processing. | https://en.wikipedia.org/wiki/Low-density_lipoprotein_receptor_gene_family |
Low-energy electron diffraction ( LEED ) is a technique for the determination of the surface structure of single-crystalline materials by bombardment with a collimated beam of low-energy electrons (30–200 eV) [ 1 ] and observation of diffracted electrons as spots on a fluorescent screen.
LEED may be used in one of two ways: qualitatively, where the diffraction pattern is recorded and analysis of the spot positions gives information on the symmetry and periodicity of the surface structure; or quantitatively, where the intensities of the diffracted beams are recorded as a function of the incident electron-beam energy to generate so-called I–V curves, which, by comparison with theoretical curves, may provide accurate information on atomic positions.
An electron-diffraction experiment similar to modern LEED was the first to observe the wavelike properties of electrons, but LEED was established as a ubiquitous tool in surface science only with the advances in vacuum generation and electron detection techniques. [ 2 ] [ 3 ]
The theoretical possibility of the occurrence of electron diffraction first emerged in 1924, when Louis de Broglie introduced wave mechanics and proposed the wavelike nature of all particles. In the work for which he was later awarded the Nobel Prize, de Broglie postulated that the wavelength of a particle with linear momentum p is given by h / p , where h is the Planck constant .
The de Broglie hypothesis was confirmed experimentally at Bell Labs in 1927, when Clinton Davisson and Lester Germer fired low-energy electrons at a crystalline nickel target and observed that the angular dependence of the intensity of backscattered electrons showed diffraction patterns. These observations were consistent with the diffraction theory for X-rays developed by Bragg and Laue earlier. Before the acceptance of the de Broglie hypothesis, diffraction was believed to be an exclusive property of waves.
Davisson and Germer published notes of their electron-diffraction experiment result in Nature and in Physical Review in 1927. One month after Davisson and Germer's work appeared, Thomson and Reid published their electron-diffraction work with higher kinetic energy (a thousand times higher than the energy used by Davisson and Germer) in the same journal. Those experiments revealed the wave property of electrons and opened up an era of electron-diffraction study.
Though discovered in 1927, low-energy electron diffraction did not become a popular tool for surface analysis until the early 1960s. The main reasons were that monitoring directions and intensities of diffracted beams was a difficult experimental process due to inadequate vacuum techniques and slow detection methods such as a Faraday cup . Also, since LEED is a surface-sensitive method, it required well-ordered surface structures. Techniques for the preparation of clean metal surfaces first became available much later.
Nonetheless, H. E. Farnsworth and coworkers at Brown University pioneered the use of LEED as a method for characterizing the adsorption of gases onto clean metal surfaces and the associated regular adsorption phases, from shortly after the Davisson and Germer discovery into the 1970s.
In the early 1960s LEED experienced a renaissance, as ultra-high vacuum became widely available, and the post acceleration detection method was introduced by Germer and his coworkers at Bell Labs using a flat phosphor screen. [ 4 ] [ 5 ] Using this technique, diffracted electrons were accelerated to high energies to produce clear and visible diffraction patterns on the screen. Ironically the post-acceleration method had already been proposed by Ehrenberg in 1934. [ 6 ] In 1962 Lander and colleagues introduced the modern hemispherical screen with associated hemispherical grids. [ 7 ] In the mid-1960s, modern LEED systems became commercially available as part of the ultra-high-vacuum instrumentation suite by Varian Associates and triggered an enormous boost of activities in surface science. Notably, future Nobel prize winner Gerhard Ertl started his studies of surface chemistry and catalysis on such a Varian system. [ 8 ]
It soon became clear that the kinematic (single-scattering) theory, which had been successfully used to explain X-ray diffraction experiments, was inadequate for the quantitative interpretation of experimental data obtained from LEED. At this stage a detailed determination of surface structures, including adsorption sites, bond angles and bond lengths was not possible.
A dynamical electron-diffraction theory, which took into account the possibility of multiple scattering, was established in the late 1960s. With this theory, it later became possible to reproduce experimental data with high precision.
In order to keep the studied sample clean and free from unwanted adsorbates, LEED experiments are performed in an ultra-high vacuum environment (residual gas pressure <10 −7 Pa).
The main components of a LEED instrument are: [ 2 ] an electron gun, from which monochromatic electrons are emitted by a cathode filament and accelerated toward the sample; a display system consisting of a set of hemispherical retarding grids and a fluorescent screen on which the diffraction pattern is observed; and a sample holder that places the sample at the center of curvature of the hemispherical screen.
The sample of the desired surface crystallographic orientation is initially cut and prepared outside the vacuum chamber. The correct alignment of the crystal can be achieved with the help of X-ray diffraction methods such as Laue diffraction . [ 10 ] After being mounted in the UHV chamber the sample is cleaned and flattened. Unwanted surface contaminants are removed by ion sputtering or by chemical processes such as oxidation and reduction cycles. The surface is flattened by annealing at high temperatures.
Once a clean and well-defined surface is prepared, monolayers can be adsorbed on the surface by exposing it to a gas consisting of the desired adsorbate atoms or molecules.
Often the annealing process will let bulk impurities diffuse to the surface and therefore give rise to a re-contamination after each cleaning cycle. The problem is that impurities that adsorb without changing the basic symmetry of the surface, cannot easily be identified in the diffraction pattern. Therefore, in many LEED experiments Auger electron spectroscopy is used to accurately determine the purity of the sample. [ 11 ]
In some instruments, the LEED optics are also used for Auger electron spectroscopy . To improve the measured signal, the gate voltage is scanned in a linear ramp. An RC circuit serves to derive the second derivative , which is then amplified and digitized. To reduce the noise, multiple passes are summed. The first derivative is very large due to the residual capacitive coupling between the gate and the anode and may degrade the performance of the circuit. This can be compensated by applying a negative ramp to the screen. It is also possible to add a small sine wave to the gate; a high-Q RLC circuit tuned to the second harmonic then detects the second derivative.
A modern data acquisition system usually contains a CCD/CMOS camera pointed to the screen for diffraction pattern visualization and a computer for data recording and further analysis. More expensive instruments have in-vacuum position sensitive electron detectors that measure the current directly, which helps in the quantitative I–V analysis of the diffraction spots.
The basic reason for the high surface sensitivity of LEED is that for low-energy electrons the interaction between the solid and electrons is especially strong. Upon penetrating the crystal, primary electrons will lose kinetic energy due to inelastic scattering processes such as plasmon and phonon excitations, as well as electron–electron interactions.
In cases where the detailed nature of the inelastic processes is unimportant, they are commonly treated by assuming an exponential decay of the primary electron-beam intensity I 0 {\displaystyle I_{0}} in the direction of propagation: I ( d ) = I 0 e − d / Λ ( E ) {\displaystyle I(d)=I_{0}e^{-d/\Lambda (E)}}
Here d is the penetration depth, and Λ ( E ) {\displaystyle \Lambda (E)} denotes the inelastic mean free path , defined as the distance an electron can travel before its intensity has decreased by the factor 1/ e . While the inelastic scattering processes, and consequently the inelastic mean free path, depend on the energy, the mean free path is relatively independent of the material. It turns out to be minimal (5–10 Å) in the energy range of low-energy electrons (20–200 eV). [ 1 ] This effective attenuation means that only a few atomic layers are sampled by the electron beam, and, as a consequence, the contribution of deeper atoms to the diffraction progressively decreases.
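A short numeric sketch of this attenuation, with an assumed mean free path and interlayer spacing (numpy assumed), shows why only the top few layers contribute:

```python
# Surviving fraction of the primary beam at depth d, I(d)/I0 = exp(-d/Lambda).
import numpy as np

Lambda = 6.0                         # assumed IMFP in angstroms (typical 5-10 A here)
d = 2.0 * np.arange(1, 6)            # assumed interlayer spacing of 2 A
for depth, frac in zip(d, np.exp(-d / Lambda)):
    print(depth, round(frac, 3))     # layer contributions decay rapidly with depth
```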
Kinematic diffraction is defined as the situation where electrons impinging on a well-ordered crystal surface are elastically scattered only once by that surface. In the theory the electron beam is represented by a plane wave with a wavelength given by the de Broglie hypothesis : λ = h p = h 2 m e E {\displaystyle \lambda ={\frac {h}{p}}={\frac {h}{\sqrt {2m_{e}E}}}}
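For orientation, the wavelength at typical LEED energies can be evaluated directly; in convenient units it reads λ[Å] ≈ √(150.4 / E[eV]). A short sketch assuming numpy:

```python
# de Broglie wavelength of electrons at typical LEED energies.
import numpy as np

for E in (30, 50, 100, 200):                  # energy in eV
    print(E, round(np.sqrt(150.4 / E), 3))    # ~0.9-2.2 A, on the order of lattice spacings
```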
The interaction between the scatterers present in the surface and the incident electrons is most conveniently described in reciprocal space. In three dimensions the primitive reciprocal lattice vectors are related to the real space lattice { a , b , c } in the following way: [ 12 ] a ∗ = 2 π b × c a ⋅ ( b × c ) , b ∗ = 2 π c × a a ⋅ ( b × c ) , c ∗ = 2 π a × b a ⋅ ( b × c ) {\displaystyle \mathbf {a} ^{*}=2\pi {\frac {\mathbf {b} \times \mathbf {c} }{\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )}},\quad \mathbf {b} ^{*}=2\pi {\frac {\mathbf {c} \times \mathbf {a} }{\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )}},\quad \mathbf {c} ^{*}=2\pi {\frac {\mathbf {a} \times \mathbf {b} }{\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )}}}
For an incident electron with wave vector k i {\displaystyle \mathbf {k} _{i}} ( | k i | = 2 π / λ i {\displaystyle |\mathbf {k} _{i}|=2\pi /\lambda _{i}} ) and scattered wave vector k f {\displaystyle \mathbf {k} _{f}} ( | k f | = 2 π / λ f {\displaystyle |\mathbf {k} _{f}|=2\pi /\lambda _{f}} ), the condition for constructive interference and hence diffraction of scattered electron waves is given by the Laue condition : k f − k i = G h k l {\displaystyle \mathbf {k} _{f}-\mathbf {k} _{i}=\mathbf {G} _{hkl}}
where ( h , k , l ) is a set of integers, and G h k l = h a ∗ + k b ∗ + l c ∗ {\displaystyle \mathbf {G} _{hkl}=h\mathbf {a} ^{*}+k\mathbf {b} ^{*}+l\mathbf {c} ^{*}}
is a vector of the reciprocal lattice. Note that these vectors specify the Fourier components of charge density in the reciprocal (momentum) space, and that the incoming electrons scatter at these density modulations within the crystal lattice. The magnitudes of the wave vectors are unchanged, i.e. | k f | = | k i | {\displaystyle |\mathbf {k} _{f}|=|\mathbf {k} _{i}|} , because only elastic scattering is considered.
Since the mean free path of low-energy electrons in a crystal is only a few angstroms, only the first few atomic layers contribute to the diffraction. This means that there are no diffraction conditions in the direction perpendicular to the sample surface. As a consequence, the reciprocal lattice of a surface is a 2D lattice with rods extending perpendicular from each lattice point. The rods can be pictured as regions where the reciprocal lattice points are infinitely dense.
Therefore, in the case of diffraction from a surface the Laue condition reduces to the 2D form: [ 2 ] k f ∥ − k i ∥ = G h k = h a ∗ + k b ∗ {\displaystyle \mathbf {k} _{f}^{\parallel }-\mathbf {k} _{i}^{\parallel }=\mathbf {G} _{hk}=h\mathbf {a} ^{*}+k\mathbf {b} ^{*}}
where a ∗ {\displaystyle \mathbf {a} ^{*}} and b ∗ {\displaystyle \mathbf {b} ^{*}} are the primitive translation vectors of the 2D reciprocal lattice of the surface and k f ∥ {\displaystyle {\textbf {k}}_{f}^{\parallel }} , k i ∥ {\displaystyle {\textbf {k}}_{i}^{\parallel }} denote the components of the reflected and incident wave vectors parallel to the sample surface. a ∗ {\displaystyle {\textbf {a}}^{*}} and b ∗ {\displaystyle {\textbf {b}}^{*}} are related to the real space surface lattice, with n ^ {\displaystyle {\hat {\mathbf {n} }}} as the surface normal, in the following way: a ∗ = 2 π b × n ^ n ^ ⋅ ( a × b ) , b ∗ = 2 π n ^ × a n ^ ⋅ ( a × b ) {\displaystyle \mathbf {a} ^{*}=2\pi {\frac {\mathbf {b} \times {\hat {\mathbf {n} }}}{{\hat {\mathbf {n} }}\cdot (\mathbf {a} \times \mathbf {b} )}},\quad \mathbf {b} ^{*}=2\pi {\frac {{\hat {\mathbf {n} }}\times \mathbf {a} }{{\hat {\mathbf {n} }}\cdot (\mathbf {a} \times \mathbf {b} )}}}
The Laue-condition equation can readily be visualized using the Ewald's sphere construction.
Figures 3 and 4 show a simple illustration of this principle: The wave vector k i {\displaystyle \mathbf {k} _{i}} of the incident electron beam is drawn such that it terminates at a reciprocal lattice point. The Ewald's sphere is then the sphere with radius | k i | {\displaystyle |\mathbf {k} _{i}|} and origin at the center of the incident wave vector. By construction, every wave vector centered at the origin and terminating at an intersection between a rod and the sphere will then satisfy the 2D Laue condition and thus represent an allowed diffracted beam.
Figure 4 shows the Ewald's sphere for the case of normal incidence of the primary electron beam, as would be the case in an actual LEED setup. It is apparent that the pattern observed on the fluorescent screen is a direct picture of the reciprocal lattice of the surface. The spots are indexed according to the values of h and k . The size of the Ewald's sphere, and hence the number of diffraction spots on the screen, is controlled by the incident electron energy. From knowledge of the reciprocal lattice, models for the real-space lattice can be constructed, and the surface can be characterized at least qualitatively in terms of its periodicity and point group. Figure 7 shows a model of an unreconstructed (100) face of a simple cubic crystal and the expected LEED pattern. Since these patterns can be inferred from the crystal structure of the bulk crystal, known from other more quantitative diffraction techniques, LEED is more interesting in the cases where the surface layers of a material reconstruct, or where surface adsorbates form their own superstructures.
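The Ewald construction also fixes how many beams can leave the surface: at normal incidence a spot (h, k) is observable only if the corresponding reciprocal-lattice vector is no longer than the incident wave vector. A hedged sketch for a square lattice, with an arbitrarily chosen energy and lattice constant (numpy assumed):

```python
# Beams (h, l) that can leave a square surface lattice at normal incidence:
# the 2D Laue condition plus |k_f| = |k_i| requires |g_hl| <= |k_i|.
import numpy as np

E, a = 100.0, 3.0                        # eV and angstrom (assumed values)
k = 2 * np.pi / np.sqrt(150.4 / E)       # |k_i| in 1/angstrom
g = 2 * np.pi / a                        # spacing of the reciprocal net
nmax = int(k / g)
spots = [(h, l) for h in range(-nmax, nmax + 1)
                for l in range(-nmax, nmax + 1)
                if g * np.hypot(h, l) <= k]
print(len(spots))                        # the spot count grows with energy
```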
Overlaying superstructures on a substrate surface may introduce additional spots in the known (1×1) arrangement. These are known as extra spots or super spots . Figure 6 shows many such spots appearing after a simple hexagonal surface of a metal has been covered with a layer of graphene . Figure 7 shows a schematic of real and reciprocal space lattices for a simple (1×2) superstructure on a square lattice.
For a commensurate superstructure the symmetry and the rotational alignment with respect to the adsorbent surface can be determined from the LEED pattern. This is most easily shown using matrix notation, [ 1 ] in which the primitive translation vectors of the superlattice { a s , b s } are linked to the primitive translation vectors of the underlying (1×1) lattice { a , b } in the following way: a s = G 11 a + G 12 b , b s = G 21 a + G 22 b {\displaystyle \mathbf {a} _{s}=G_{11}\mathbf {a} +G_{12}\mathbf {b} ,\quad \mathbf {b} _{s}=G_{21}\mathbf {a} +G_{22}\mathbf {b} }
The matrix for the superstructure then is G = ( G 11 G 12 G 21 G 22 ) {\displaystyle G={\begin{pmatrix}G_{11}&G_{12}\\G_{21}&G_{22}\end{pmatrix}}}
Similarly, the primitive translation vectors of the lattice describing the extra spots { a ∗ s , b ∗ s } are linked to the primitive translation vectors of the reciprocal lattice { a ∗ , b ∗ } : a s ∗ = G 11 ∗ a ∗ + G 12 ∗ b ∗ , b s ∗ = G 21 ∗ a ∗ + G 22 ∗ b ∗ {\displaystyle \mathbf {a} _{s}^{*}=G_{11}^{*}\mathbf {a} ^{*}+G_{12}^{*}\mathbf {b} ^{*},\quad \mathbf {b} _{s}^{*}=G_{21}^{*}\mathbf {a} ^{*}+G_{22}^{*}\mathbf {b} ^{*}}
G ∗ is related to G in the following way: G ∗ = ( G − 1 ) T {\displaystyle G^{*}=\left(G^{-1}\right)^{T}}
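A minimal sketch of this relation for the (1×2) superstructure on a square lattice discussed below (numpy assumed):

```python
# Reciprocal superstructure matrix G* = (G^-1)^T for a (1x2) overlayer.
import numpy as np

G = np.array([[1, 0],
              [0, 2]])              # a_s = a, b_s = 2b
G_star = np.linalg.inv(G).T         # G* = (G^-1)^T
print(G_star)                       # [[1, 0], [0, 0.5]]: extra spots at half order
```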
An essential problem when considering LEED patterns is the existence of symmetrically equivalent domains. Domains may lead to diffraction patterns that have higher symmetry than the actual surface at hand. The reason is that usually the cross sectional area of the primary electron beam (~1 mm 2 ) is large compared to the average domain size on the surface and hence the LEED pattern might be a superposition of diffraction beams from domains oriented along different axes of the substrate lattice.
However, since the average domain size is generally larger than the coherence length of the probing electrons, interference between electrons scattered from different domains can be neglected. Therefore, the total LEED pattern emerges as the incoherent sum of the diffraction patterns associated with the individual domains.
Figure 8 shows the superposition of the diffraction patterns for the two orthogonal domains (2×1) and (1×2) on a square lattice, i.e. for the case where one structure is just rotated by 90° with respect to the other. The (1×2) structure and the respective LEED pattern are shown in Figure 7. It is apparent that the local symmetry of the surface structure is twofold while the LEED pattern exhibits a fourfold symmetry.
Figure 1 shows a real diffraction pattern of the same situation for the case of a Si(100) surface. However, here the (2×1) structure is formed due to surface reconstruction .
Inspection of the LEED pattern gives a qualitative picture of the surface periodicity, i.e. the size of the surface unit cell and, to a certain degree, the surface symmetries. However, it gives no information about the atomic arrangement within a surface unit cell or the sites of adsorbed atoms. For instance, when the whole superstructure in Figure 7 is shifted such that the atoms adsorb in bridge sites instead of on-top sites, the LEED pattern stays the same, although the individual spot intensities may differ somewhat.
A more quantitative analysis of LEED experimental data can be achieved by analysis of so-called I–V curves, which are measurements of the intensity versus incident electron energy. The I–V curves can be recorded by using a camera connected to computer-controlled data handling or by direct measurement with a movable Faraday cup. The experimental curves are then compared to computer calculations based on the assumption of a particular model system. The model is changed in an iterative process until a satisfactory agreement between experimental and theoretical curves is achieved. A quantitative measure for this agreement is the so-called reliability factor or R-factor. A commonly used reliability factor is the one proposed by Pendry. [ 13 ] It is expressed in terms of the logarithmic derivative of the intensity: L ( E ) = I ′ ( E ) / I ( E ) {\displaystyle L(E)=I'(E)/I(E)}
The R-factor is then given by: R P = ∫ ( Y e x p − Y t h ) 2 d E ∫ ( Y e x p 2 + Y t h 2 ) d E {\displaystyle R_{P}={\frac {\int (Y_{\mathrm {exp} }-Y_{\mathrm {th} })^{2}\,dE}{\int (Y_{\mathrm {exp} }^{2}+Y_{\mathrm {th} }^{2})\,dE}}}
where Y ( E ) = L − 1 / ( L − 2 + V o i 2 ) {\displaystyle Y(E)=L^{-1}/(L^{-2}+V_{oi}^{2})} and V o i {\displaystyle V_{oi}} is the imaginary part of the electron self-energy. In general, R p ≤ 0.2 {\displaystyle R_{p}\leq 0.2} is considered as a good agreement, R p ≃ 0.3 {\displaystyle R_{p}\simeq 0.3} is considered mediocre and R p ≃ 0.5 {\displaystyle R_{p}\simeq 0.5} is considered a bad agreement. Figure 9 shows examples of the comparison between experimental I–V spectra and theoretical calculations.
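A minimal computational sketch of the Pendry R-factor for two I–V curves sampled on the same energy grid; the value of the imaginary self-energy V0i (a few eV in practice) and the test curves are assumptions (numpy assumed):

```python
# Pendry R-factor sketch: Y = L/(1 + (L*V0i)^2), algebraically the same
# as L^-1/(L^-2 + V0i^2) but safe where L = 0.
import numpy as np

def pendry_r_factor(E, I_exp, I_th, V0i=4.0):
    def Y(I):
        L = np.gradient(I, E) / I              # logarithmic derivative I'/I
        return L / (1.0 + (L * V0i) ** 2)
    Ye, Yt = Y(I_exp), Y(I_th)
    return np.sum((Ye - Yt) ** 2) / np.sum(Ye ** 2 + Yt ** 2)

E  = np.linspace(50, 300, 500)                 # eV grid (assumed)
I1 = 1 + np.sin(E / 20) ** 2                   # invented "experimental" curve
I2 = 1 + np.sin((E - 5) / 20) ** 2             # invented "theoretical" curve
print(pendry_r_factor(E, I1, I2))              # small shift gives a modest R_P
```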
The term dynamical stems from the studies of X-ray diffraction and describes the situation where the response of the crystal to an incident wave is included self-consistently and multiple scattering can occur. The aim of any dynamical LEED theory is to calculate the intensities of diffraction of an electron beam impinging on a surface as accurately as possible.
A common method to achieve this is the self-consistent multiple scattering approach. [ 14 ] One essential point in this approach is the assumption that the scattering properties of the surface, i.e. of the individual atoms, are known in detail. The main task then reduces to the determination of the effective wave field incident on the individual scatters present in the surface, where the effective field is the sum of the primary field and the field emitted from all the other atoms. This must be done in a self-consistent way, since the emitted field of an atom depends on the incident effective field upon it. Once the effective field incident on each atom is determined, the total field emitted from all atoms can be found and its asymptotic value far from the crystal then gives the desired intensities.
A common approach in LEED calculations is to describe the scattering potential of the crystal by a "muffin tin" model, where the crystal potential can be imagined being divided up by non-overlapping spheres centered at each atom such that the potential has a spherically symmetric form inside the spheres and is constant everywhere else. The choice of this potential reduces the problem to scattering from spherical potentials, which can be dealt with effectively. The task is then to solve the Schrödinger equation for an incident electron wave in that "muffin tin" potential.
In LEED the exact atomic configuration of a surface is determined by a trial and error process where measured I–V curves are compared to computer-calculated spectra under the assumption of a model structure. From an initial reference structure a set of trial structures is created by varying the model parameters. The parameters are changed until an optimal agreement between theory and experiment is achieved. However, for each trial structure a full LEED calculation with multiple scattering corrections must be conducted. For systems with a large parameter space the need for computational time might become significant. This is the case for complex surfaces structures or when considering large molecules as adsorbates.
Tensor LEED [ 15 ] [ 16 ] is an attempt to reduce the computational effort needed by avoiding full LEED calculations for each trial structure. The scheme is as follows: One first defines a reference surface structure for which the I–V spectrum is calculated. Next a trial structure is created by displacing some of the atoms. If the displacements are small the trial structure can be considered as a small perturbation of the reference structure and first-order perturbation theory can be used to determine the I–V curves of a large set of trial structures.
A real surface is not perfectly periodic but has many imperfections in the form of dislocations, atomic steps, terraces and the presence of unwanted adsorbed atoms. This departure from a perfect surface leads to a broadening of the diffraction spots and adds to the background intensity in the LEED pattern.
SPA-LEED [ 17 ] is a technique where the profile and shape of the intensity of diffraction beam spots is measured. The spots are sensitive to the irregularities in the surface structure and their examination therefore permits more-detailed conclusions about some surface characteristics. Using SPA-LEED may for instance permit a quantitative determination of the surface roughness, terrace sizes, dislocation arrays, surface steps and adsorbates. [ 17 ] [ 18 ]
Although some degree of spot profile analysis can be performed in regular LEED and even LEEM setups, dedicated SPA-LEED setups, which scan the profile of the diffraction spot over a dedicated channeltron detector allow for much higher dynamic range and profile resolution. | https://en.wikipedia.org/wiki/Low-energy_electron_diffraction |
Low-energy ion scattering spectroscopy ( LEIS ), sometimes referred to simply as ion scattering spectroscopy ( ISS ), is a surface-sensitive analytical technique used to characterize the chemical and structural makeup of materials. LEIS involves directing a stream of charged particles known as ions at a surface and making observations of the positions, velocities , and energies of the ions that have interacted with the surface. Data that is thus collected can be used to deduce information about the material such as the relative positions of atoms in a surface lattice and the elemental identity of those atoms. LEIS is closely related to both medium-energy ion scattering (MEIS) and high-energy ion scattering (HEIS, known in practice as Rutherford backscattering spectroscopy, or RBS), differing primarily in the energy range of the ion beam used to probe the surface. While much of the information collected using LEIS can be obtained using other surface science techniques , LEIS is unique in its sensitivity to both structure and composition of surfaces. Additionally, LEIS is one of a very few surface-sensitive techniques capable of directly observing hydrogen atoms, an aspect that may make it an increasingly more important technique as the hydrogen economy is being explored.
LEIS systems consist of the following: an ion gun that produces the primary beam of ions; an ultra-high-vacuum chamber in which the target surface is kept clean; a manipulator for positioning and orienting the sample; and an energy analyzer and detector that record the energies and scattering angles of ions leaving the surface.
Several different types of events may take place as a result of the ion beam impinging on a target surface. Some of these events include electron or photon emission, electron transfer (both ion-surface and surface-ion), scattering , adsorption , and sputtering (i.e. ejection of atoms from the surface). For each system and each interaction there exists an interaction cross-section , and the study of these cross-sections is a field in its own right. As the name suggests, LEIS is primarily concerned with scattering phenomena.
Due to the energy range typically used in ion scattering experiments (> 500 eV), effects of thermal vibrations, phonon oscillations, and interatomic binding are ignored since they are far below this range (~a few eV), and the interaction of particle and surface may be thought of as a classical two-body elastic collision problem. Measuring the energy of ions scattered in this type of interaction can be used to determine the elemental composition of a surface, as is shown in the following:
Two-body elastic collisions are governed by the concepts of energy and momentum conservation. Consider a particle with mass m x , velocity v 0 , and energy given as E 0 = 1 2 m x v 0 2 {\displaystyle E_{0}={\tfrac {1}{2}}m_{x}v_{0}^{2}\,\!} impacting another particle at rest with mass m y . The energies of the particles after collision are E 1 = 1 2 m x v 1 2 {\displaystyle E_{1}={\tfrac {1}{2}}m_{x}v_{1}^{2}\,\!} and E 2 = 1 2 m y v 2 2 {\displaystyle E_{2}={\tfrac {1}{2}}m_{y}v_{2}^{2}\,\!} where E 0 = E 1 + E 2 {\displaystyle E_{0}=E_{1}+E_{2}\,\!} and thus 1 2 m x v 0 2 = 1 2 m x v 1 2 + 1 2 m y v 2 2 {\displaystyle {\tfrac {1}{2}}m_{x}v_{0}^{2}={\tfrac {1}{2}}m_{x}v_{1}^{2}+{\tfrac {1}{2}}m_{y}v_{2}^{2}\,\!} . Additionally, we know m x v 0 = m x v 1 cos θ 1 + m y v 2 cos θ 2 {\displaystyle m_{x}v_{0}=m_{x}v_{1}\cos \theta _{1}+m_{y}v_{2}\cos \theta _{2}\,\!} and, for the transverse momentum, 0 = m x v 1 sin θ 1 − m y v 2 sin θ 2 {\displaystyle 0=m_{x}v_{1}\sin \theta _{1}-m_{y}v_{2}\sin \theta _{2}\,\!} . Using trigonometry we are able to determine E 1 E 0 = ( cos θ 1 ± ( m y / m x ) 2 − sin 2 θ 1 1 + m y / m x ) 2 {\displaystyle {\frac {E_{1}}{E_{0}}}=\left({\frac {\cos \theta _{1}\pm {\sqrt {(m_{y}/m_{x})^{2}-\sin ^{2}\theta _{1}}}}{1+m_{y}/m_{x}}}\right)^{2}}
Similarly, we know E 2 E 0 = 4 ( m y / m x ) ( 1 + m y / m x ) 2 cos 2 θ 2 {\displaystyle {\frac {E_{2}}{E_{0}}}={\frac {4(m_{y}/m_{x})}{(1+m_{y}/m_{x})^{2}}}\cos ^{2}\theta _{2}}
In a well-controlled experiment the energy and mass of the primary ions (E 0 and m x , respectively) and the scattering or recoiling geometries are all known, so determination of surface elemental composition is given by the correlation between E 1 or E 2 and m y . Higher energy scattering peaks correspond to heavier atoms and lower energy peaks correspond to lighter atoms.
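A numeric sketch of this kinematics illustrates how peak positions separate element masses; the projectile, target masses and scattering angle below are illustrative choices (numpy assumed):

```python
# Energy fraction retained by the scattered primary ion (binary collision,
# m_y > m_x so only the + root of the expression above applies).
import numpy as np

def kinematic_factor(m_x, m_y, theta1_deg):
    A = m_y / m_x                               # target/projectile mass ratio
    t = np.radians(theta1_deg)
    return ((np.cos(t) + np.sqrt(A ** 2 - np.sin(t) ** 2)) / (1 + A)) ** 2

# He (4 u) scattered through 135 degrees from O (16 u) and from Ni (58.7 u):
for m_y in (16.0, 58.7):
    print(m_y, round(kinematic_factor(4.0, m_y, 135.0), 3))  # heavier -> higher E1
```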
While obtaining qualitative information about the elemental composition of a surface is relatively straightforward, it is necessary to understand the statistical cross-section of interaction between ion and surface atoms in order to obtain quantitative information. Stated another way, it is easy to find out if a particular species is present, but much more difficult to determine how much of this species is there.
The two-body collision model fails to give quantitative results as it ignores the contributions of Coulomb repulsion as well as the more complicated effects of charge screening by electrons. This is generally less of a problem in MEIS and RBS experiments but presents issues in LEIS. Coulomb repulsion occurs between positively charged primary ions and the nuclei of surface atoms. The interaction potential is given as: V ( r ) = Z 1 Z 2 e 2 r ϕ ( r ) {\displaystyle V(r)={\tfrac {Z_{1}Z_{2}e^{2}}{r}}\phi (r)\,\!}
Where Z 1 {\displaystyle Z_{1}\,\!} and Z 2 {\displaystyle Z_{2}\,\!} are the atomic numbers of the primary ion and surface atom, respectively, e {\displaystyle e\,\!} is the elementary charge , r {\displaystyle r\,\!} is the interatomic distance, and ϕ ( r ) {\displaystyle \phi (r)\,\!} is the screening function. ϕ ( r ) {\displaystyle \phi (r)\,\!} accounts for the interference of the electrons orbiting each nucleus. In the case of MEIS and RBS, this potential can be used to calculate the Rutherford scattering cross section (see Rutherford scattering ) d σ d Ω {\displaystyle {\tfrac {d\sigma }{d\Omega }}} , which in its standard unscreened form is d σ d Ω = ( Z 1 Z 2 e 2 4 E 0 ) 2 1 sin 4 ( θ / 2 ) {\displaystyle {\tfrac {d\sigma }{d\Omega }}=\left({\tfrac {Z_{1}Z_{2}e^{2}}{4E_{0}}}\right)^{2}{\tfrac {1}{\sin ^{4}(\theta /2)}}\,\!} , where θ is the scattering angle.
As shown at right, d σ {\displaystyle d\sigma \,\!} represents a finite region for an incoming particle, while d Ω {\displaystyle d\Omega \,\!} represents the solid scattering angle after the scattering event. However, for LEIS ϕ ( r ) {\displaystyle \phi (r)\,\!} is typically unknown, which prevents such a clean analysis. Additionally, when using noble gas ion beams there is a high probability of neutralization on impact (which has a strong angular dependence), due to the strong tendency of these ions to return to a neutral, closed-shell state. This results in a low flux of scattered ions. See AISS and TOF-SARS below for approaches to avoiding this problem.
Shadowing and blocking are important concepts in almost all types of ion-surface interactions and result from the repulsive nature of the ion-nucleus interaction. As shown at right, when a flux of ions flows in parallel towards a scattering center (nucleus), they are each scattered according to the force of the Coulomb repulsion. This effect is known as shadowing . In a simple Coulomb repulsion model, the resulting region of “forbidden” space behind the scattering center takes the form of a paraboloid with radius r = 2 Z 1 Z 2 e 2 L E 0 {\displaystyle r=2{\sqrt {\tfrac {Z_{1}Z_{2}e^{2}L}{E_{0}}}}} at a distance L from the scattering center. The flux density is increased near the edge of the paraboloid.
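As a rough numerical sketch of the shadow cone formula above, assuming the common shortcut e 2 ≈ 14.4 eV·Å (Gaussian-style units); the beam and target values are illustrative:

```python
import math

E2_EV_ANGSTROM = 14.40  # e^2 in eV*angstrom (Gaussian-style units)

def shadow_cone_radius(z1, z2, L, e0):
    """Radius (angstrom) of the simple-Coulomb shadow cone at distance
    L (angstrom) behind the scattering center, for beam energy e0 (eV)."""
    return 2.0 * math.sqrt(z1 * z2 * E2_EV_ANGSTROM * L / e0)

# 1 keV He+ (Z=2) on Si (Z=14), one Si spacing (~2.35 A) behind the atom:
print(f"{shadow_cone_radius(2, 14, L=2.35, e0=1000.0):.2f} angstrom")  # ~1.95
```

The angstrom-scale radius at 1 keV illustrates why LEIS shadow cones are large compared to those in MEIS or RBS, where the much higher beam energy shrinks the cone.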
Blocking is closely related to shadowing, and involves the interaction between scattered ions and a neighboring scattering center (as such it inherently requires the presence of at least two scattering centers). As shown, ions scattered from the first nucleus are now on diverging paths as they undergo interaction with the second nucleus. This interaction results in another “shadowing cone” now called a blocking cone where ions scattered from the first nucleus are blocked from exiting at angles below α c r i t {\displaystyle \alpha _{crit}\,\!} . Focusing effects again result in an increased flux density near α c r i t {\displaystyle \alpha _{crit}\,\!} .
In both shadowing and blocking, the "forbidden" regions are actually accessible to trajectories when the mass of incoming ions is greater than that of the surface atoms (e.g. Ar + impacting Si or Al ). In this case the region will have a finite but depleted flux density .
For higher-energy ions such as those used in MEIS and RBS, the concepts of shadowing and blocking are relatively straightforward, since ion-nucleus interactions dominate and electron screening effects are insignificant. However, in the case of LEIS these screening effects do interfere with ion-nucleus interactions and the repulsive potential becomes more complicated. Multiple scattering events are also very likely, which complicates analysis. Importantly, due to the lower-energy ions used, LEIS is typically characterized by large interaction cross-sections and shadow cone radii. For this reason penetration depth is low and the method has much higher first-layer sensitivity than MEIS or RBS. Overall, these concepts are essential for data analysis in impact collision LEIS experiments (see below).
The de Broglie wavelength of ions used in LEIS experiments is given as λ = h m v {\displaystyle \lambda ={\tfrac {h}{mv}}} . Using a worst-case value of 500 eV for a 4 He + ion, λ is still only about 0.006 Å, well below the typical interatomic spacing of 2-3 Å. Because of this, the effects of diffraction are not significant in a normal LEIS experiment.
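The quoted figure can be reproduced with standard constants; a minimal check:

```python
import math

h, u, eV = 6.626e-34, 1.661e-27, 1.602e-19   # SI: Planck constant, amu, eV

m = 4.0 * u                  # 4He+ ion mass, kg
E = 500.0 * eV               # worst-case LEIS beam energy, J
v = math.sqrt(2 * E / m)     # classical speed from E = m v^2 / 2
lam = h / (m * v)            # de Broglie wavelength
print(f"{lam / 1e-10:.4f} angstrom")   # ~0.0064 A << 2-3 A lattice spacing
```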
Depending on the particular experimental setup, LEIS may be used to obtain a variety of information about a sample. The following includes several of these methods. | https://en.wikipedia.org/wiki/Low-energy_ion_scattering |
Low-energy plasma-enhanced chemical vapor deposition (LEPECVD) is a plasma-enhanced chemical vapor deposition technique used for the epitaxial deposition of thin semiconductor (silicon, germanium and SiGe alloy) films. A remote low-energy, high-density DC argon plasma is employed to efficiently decompose the gas-phase precursors while leaving the epitaxial layer undamaged, resulting in high-quality epilayers and high deposition rates (up to 10 nm/s).
The substrate (typically a silicon wafer ) is inserted in the reactor chamber, where it is heated by a graphite resistive heater from the backside. An argon plasma is introduced into the chamber to ionize the precursors' molecules, generating highly reactive radicals which result in the growth of an epilayer on the substrate. Moreover, the bombardment of Ar ions removes the hydrogen atoms adsorbed on the surface of the substrate while introducing no structural damage.
The high reactivity of the radicals and the removal of hydrogen from the surface by ion bombardment prevent the typical problems of growing Si, Ge and SiGe alloys by thermal chemical vapor deposition (CVD).
Thanks to these effects, the growth rate in a LEPECVD reactor depends only on the plasma parameters and the gas fluxes, and it is possible to obtain epitaxial deposition at much lower temperatures compared to a standard CVD tool.
The LEPECVD reactor is divided into three main parts: the wafer stage, the gas inlet system, and the plasma source.
The substrate is placed at the top of the chamber, facing down toward the plasma source. Heating is provided from the back side by thermal radiation from a resistive graphite heater encapsulated between two boron nitride discs, which improve the temperature uniformity across the heater. Thermocouples are used to measure the temperature above the heater, which is then correlated to that of the substrate by a calibration done with an infrared pyrometer. Typical substrate temperatures for monocrystalline films are 400 °C to 760 °C, for germanium and silicon respectively.
The potential of the wafer stage can be controlled by an external power supply, influencing the amount and the energy of radicals impinging on the surface, and is typically kept at 10-15 V with respect to the chamber walls.
The process gases are introduced into the chamber through a gas dispersal ring placed below the wafer stage. The gases used in a LEPECVD reactor are silane ( SiH 4 ) and germane ( GeH 4 ) for silicon and germanium deposition respectively, together with diborane ( B 2 H 6 ) and phosphine ( PH 3 ) for p- and n-type doping.
The plasma source is the most critical component of a LEPECVD reactor, as the low energy, high density, plasma is the key difference from a typical PECVD deposition system.
The plasma is generated in a source which is attached to the bottom of the chamber. Argon is fed directly into the source, where tantalum filaments are heated to create an electron-rich environment by thermionic emission. The plasma is then ignited by a DC discharge from the heated filaments to the grounded walls of the source. Thanks to the high electron density in the source, the voltage required to obtain a discharge is around 20–30 V, resulting in an ion energy of about 10–20 eV, while the discharge current is of the order of several tens of amperes, giving a high ion density.
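An order-of-magnitude check using illustrative values within the quoted ranges:

```python
voltage = 25.0   # V, within the quoted 20-30 V discharge voltage
current = 40.0   # A, "several tens of amperes"
print(f"DC discharge power ~ {voltage * current / 1e3:.0f} kW")  # ~1 kW
```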
The DC discharge current can be tuned to control the ion density, thus changing the growth rate: in particular at a larger discharge current the ion density is higher, therefore increasing the rate.
The plasma enters the growth chamber through an anode electrically connected to the grounded chamber walls, which is used to focus and stabilize the discharge and the plasma.
Further focusing is provided by a magnetic field directed along the chamber's axis, provided by external copper coils wrapped around the chamber. The current flowing through the coils (i.e. the intensity of the magnetic field) can be controlled to change the ion density at the substrate's surface, thus changing the growth rate.
Additional coils ("wobblers") are placed around the chamber, with their axis perpendicular to the magnetic field, to continuously sweep the plasma over the substrate, improving the homogeneity of the deposited film.
Thanks to the possibility of changing the growth rate (through the plasma density or gas fluxes) independently from the substrate temperature, both thin films with sharp interfaces and nanometer-scale precision at rates as low as 0.4 nm/s, and thick layers (up to 10 μm or more) at rates as high as 10 nm/s, can be grown using the same reactor and in the same deposition process. This has been exploited to grow low-loss composition-graded waveguides for NIR [ 1 ] and MIR [ 2 ] and integrated nanostructures (e.g. quantum well stacks) for NIR optical amplitude modulation. [ 1 ] The capability of LEPECVD to grow very sharp quantum wells on thick buffers in the same deposition step has also been employed to realize high-mobility strained Ge channels. [ 3 ]
Another promising application of the LEPECVD technique is the possibility of growing high aspect ratio, self-assembled silicon and germanium microcrystals on deeply patterned Si substrates. [ 4 ] This solves many problems related to heteroepitaxy (i.e. thermal expansion coefficient and crystal lattice mismatch), leading to very high crystal quality, and is possible thanks to the high rates and low temperatures found in a LEPECVD reactor. [ 5 ] | https://en.wikipedia.org/wiki/Low-energy_plasma-enhanced_chemical_vapor_deposition |
Low-gravity process engineering is a specialized field that focuses on the design, development, and optimization of industrial processes and manufacturing techniques in environments with reduced gravitational forces. [ 1 ] This discipline encompasses a wide range of applications, from microgravity conditions experienced in Earth orbit to the partial gravity environments found on celestial bodies such as the Moon and Mars . [ 2 ]
As humanity extends its reach beyond Earth, the ability to efficiently produce materials, manage fluids, and conduct chemical processes in reduced gravity becomes crucial for sustained space missions and potential colonization efforts. [ 3 ] Furthermore, the unique conditions of microgravity offer opportunities for novel materials and pharmaceuticals that cannot be easily produced on Earth, potentially leading to groundbreaking advancements in various industries. [ 4 ]
The historical context of low-gravity research dates back to the early days of space exploration . Initial experiments conducted during the Mercury and Gemini programs in the 1960s provided the first insights into fluid behavior in microgravity. [ 5 ] Subsequent missions, including Skylab and the Space Shuttle program , expanded our understanding of materials processing and fluid dynamics in space. [ 6 ] The advent of the International Space Station (ISS) in the late 1990s marked a significant milestone, providing a permanent microgravity laboratory for continuous research and development in low-gravity process engineering. [ 7 ]
Low-gravity environments, encompassing both microgravity and reduced gravity conditions, exhibit unique characteristics that significantly alter physical phenomena compared to Earth's gravitational field . These environments are typically characterized by gravitational accelerations ranging from 10 − 6 {\displaystyle 10^{-6}} g {\displaystyle g} to 10 − 2 {\displaystyle 10^{-2}} g {\displaystyle g} , where g {\displaystyle g} represents Earth's standard gravitational acceleration ( 9.81 m / s 2 ) {\displaystyle (9.81m/s^{2})} . [ 8 ]
Microgravity, often experienced in orbiting spacecraft , is characterized by the near absence of perceptible weight. In contrast, reduced gravity conditions, such as those on the Moon ( 0.16 g {\displaystyle 0.16g} ) or Mars ( 0.37 g {\displaystyle 0.37g} ), maintain a fractional gravitational pull relative to Earth. [ 9 ]
These environments differ markedly from Earth's gravity in several key aspects, notably in fluid behavior, heat transfer, and materials processing, as described below.
In microgravity, fluid behavior is primarily governed by surface tension , viscous forces, and inertia. This leads to phenomena such as large stable liquid bridges, spherical droplet formation, and capillary flow dominance. [ 13 ] The absence of buoyancy-driven convection alters mixing processes and phase separations, necessitating alternative methods for fluid management in space applications. [ 14 ]
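This crossover is often summarized by the Bond number Bo = ρgL²/σ, a standard dimensionless group (not named in this article) comparing gravitational and surface-tension forces; a rough sketch:

```python
def bond_number(rho, g, length, sigma):
    """Bond number Bo = rho * g * L^2 / sigma (gravity vs. surface tension)."""
    return rho * g * length**2 / sigma

rho, sigma, L = 1000.0, 0.072, 0.01   # water, 1 cm length scale (SI units)
for label, g in [("Earth", 9.81), ("1e-6 g microgravity", 9.81e-6)]:
    print(f"{label}: Bo = {bond_number(rho, g, L, sigma):.2e}")
# Bo ~ 14 on Earth (gravity dominates) vs. Bo ~ 1.4e-5 in microgravity
# (surface tension dominates), consistent with capillary-dominated flow.
```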
The lack of natural convection in microgravity significantly impacts heat transfer processes. Conduction and radiation become the primary modes of heat transfer, while forced convection must be induced artificially . This alteration affects cooling systems, boiling processes, and thermal management in spacecraft and space-based manufacturing. [ 15 ]
Low-gravity environments offer unique conditions for materials processing. The absence of buoyancy-driven convection and sedimentation allows for more uniform crystal growth and the formation of novel alloys and composites. [ 16 ] Additionally, the reduced mechanical stresses in microgravity can lead to changes in material properties and behavior, influencing fields such as materials science and pharmaceutical research . [ 17 ]
Low-gravity process engineering faces a number of challenges that require innovative solutions and adaptations of terrestrial technologies. These challenges stem from the unique physical phenomena observed in microgravity and reduced gravity environments. [ 18 ]
The absence of buoyancy and the dominance of surface tension in low-gravity environments significantly alter fluid behavior, presenting several challenges for fluid management.
The lack of natural convection in low-gravity environments poses significant challenges for heat transfer processes.
Low-gravity environments present unique challenges in manipulating and containing materials.
Designing equipment for low-gravity operations requires addressing several unique factors.
Addressing these challenges requires interdisciplinary approaches, combining insights from fluid dynamics, heat transfer, materials science, and aerospace engineering . As research in low-gravity process engineering progresses, new solutions and technologies continue to emerge, expanding the possibilities for space-based manufacturing and resource utilization. [ 28 ]
Multiphase flow behavior in microgravity differs substantially from terrestrial conditions. The absence of buoyancy-driven phase separation leads to complex flow patterns and phase distributions. [ 21 ] These phenomena affect heat transfer, mass transport, and chemical reactions in multiphase systems, necessitating novel approaches to fluid management in space. [ 14 ]
Boiling and condensation processes are fundamentally altered in microgravity. The lack of buoyancy affects bubble dynamics, heat transfer coefficients, and critical heat flux. [ 15 ] Understanding these changes is crucial for designing efficient thermal management systems for spacecraft and space habitats . [ 22 ]
Capillary flow and wetting phenomena become dominant in low-gravity environments. Surface tension forces drive fluid behavior, leading to unexpected liquid migrations and containment challenges. [ 13 ] These effects are particularly important in the design of fuel tanks, life support systems, and fluid handling equipment for space applications. [ 5 ]
Materials processing in space offers unique opportunities for producing novel materials and improving existing manufacturing techniques.
Crystal growth in space benefits from the absence of gravity-induced convection and sedimentation. This environment allows for the growth of larger, more perfect crystals with fewer defects. [ 29 ] Space-grown crystals have applications in electronics, optics , and pharmaceutical research. [ 30 ]
Metallurgy and alloy formation in microgravity can result in materials with unique properties. The absence of buoyancy-driven convection allows for more uniform mixing of molten metals and the creation of novel alloys and composites that are difficult or impossible to produce on Earth. [ 6 ]
Additive manufacturing in low-gravity environments presents both challenges and opportunities. While the absence of gravity can affect material deposition and layer adhesion, it also allows for the creation of complex structures without the need for support materials. [ 3 ] This technology has potential applications in on-demand manufacturing of spare parts and tools for long-duration space missions. [ 31 ]
Microgravity conditions offer unique advantages for various biotechnology applications.
Protein crystallization in space often results in larger, more well-ordered crystals compared to those grown on Earth. These high-quality crystals are valuable for structural biology studies and drug design. [ 32 ] The microgravity environment reduces sedimentation and convection, allowing for more uniform crystal growth. [ 33 ]
Cell culturing and tissue engineering benefit from the reduced mechanical stresses in microgravity. This environment allows for three-dimensional cell growth and the formation of tissue-like structures that more closely resemble in vivo conditions. [ 34 ] Such studies contribute to our understanding of cellular biology and may lead to advancements in regenerative medicine . [ 35 ]
Pharmaceutical production in space has the potential to yield purer drugs with improved efficacy . The absence of convection and sedimentation can lead to more uniform crystallization and particle formation, potentially enhancing drug properties. [ 36 ]
Chemical engineering processes in microgravity often exhibit different behaviors compared to their terrestrial counterparts.
Reaction kinetics in microgravity can be altered due to the absence of buoyancy-driven convection . This can lead to more uniform reaction conditions and potentially different reaction rates or product distributions. [ 17 ] [ 37 ]
Separation processes, such as distillation and extraction, face unique challenges in low-gravity environments. The lack of buoyancy affects phase separation and mass transfer, requiring novel approaches to achieve efficient separations. [ 38 ] These challenges have led to the development of alternative separation technologies for space applications. [ 39 ]
Catalysis in space presents opportunities for studying fundamental catalytic processes without the interfering effects of gravity. The absence of natural convection and sedimentation can lead to more uniform catalyst distributions and potentially different reaction pathways. [ 1 ] This research may contribute to the development of more efficient catalysts for both space and terrestrial applications. [ 40 ]
The study of low-gravity processes requires specialized platforms and techniques to simulate or create microgravity conditions. These methods range from ground-based facilities to orbital laboratories and computational simulations. [ 41 ]
Drop towers provide short-duration microgravity environments by allowing experiments to free-fall in evacuated shafts. These facilities typically offer 2–10 seconds of high-quality microgravity. [ 42 ] Notable examples include NASA's Glenn Research Center 2.2-Second Drop Tower and the 146-meter ZARM Drop Tower in Bremen, Germany. [ 43 ]
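The ideal drop duration follows from t = √(2h/g); a quick sketch (real facilities obtain somewhat less usable time, since part of the shaft is needed for deceleration):

```python
import math

def free_fall_time(height_m, g=9.81):
    """Ideal drop duration t = sqrt(2 h / g); drag and catch zone ignored."""
    return math.sqrt(2.0 * height_m / g)

print(f"{free_fall_time(146):.1f} s")  # ~5.5 s upper bound for a 146 m tower
```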
Parabolic flights , often referred to as "vomit comets," create repeated periods of microgravity lasting 20–25 seconds by flying aircraft in parabolic arcs . [ 44 ] These flights allow researchers to conduct hands-on experiments and test equipment destined for space missions. [ 45 ]
Sounding rockets offer extended microgravity durations ranging from 3 to 14 minutes, depending on the rocket's apogee. [ 46 ] These platforms are particularly useful for experiments requiring longer microgravity exposure than drop towers or parabolic flights can provide. [ 47 ]
Suborbital flights , such as those planned by commercial spaceflight companies, present new opportunities for microgravity research. These flights can offer several minutes of microgravity time and the potential for frequent, cost-effective access to space-like conditions. [ 48 ]
The International Space Station serves as a permanent microgravity laboratory, offering long-duration experiments in various scientific disciplines. [ 49 ] The ISS hosts several key research facilities for this purpose.
These facilities enable researchers to conduct complex, long-term studies in a true microgravity environment, advancing our understanding of fundamental physical processes and developing new technologies for space exploration. [ 53 ]
Computational Fluid Dynamics (CFD) plays a crucial role in predicting and analyzing fluid behavior in low-gravity environments, complementing experimental research in several ways.
CFD models for low-gravity applications often require modifications to account for the dominance of surface tension forces and the absence of buoyancy-driven flows. [ 57 ] Validation of these models typically involves comparison with experimental data from microgravity platforms. [ 58 ]
As computational power increases, CFD simulations are becoming increasingly sophisticated, enabling more accurate predictions of complex multiphase flows and heat transfer processes in microgravity. [ 21 ] | https://en.wikipedia.org/wiki/Low-gravity_process_engineering |
Low-impact development (LID) is a term used in Canada and the United States to describe a land planning and engineering design approach to manage stormwater runoff as part of green infrastructure . LID emphasizes conservation and use of on-site natural features to protect water quality . This approach implements engineered small-scale hydrologic controls to replicate the pre-development hydrologic regime of watersheds through infiltrating , filtering , storing, evaporating , and detaining runoff close to its source. [ 1 ] Green infrastructure investments are one approach that often yields multiple benefits and builds city resilience. [ 2 ]
Broadly equivalent terms used elsewhere include Sustainable drainage systems (SuDS) in the United Kingdom (where LID has a different meaning ), water-sensitive urban design (WSUD) in Australia, natural drainage systems in Seattle , Washington, [ 3 ] "Environmental Site Design" as used by the Maryland Department of the Environment , [ 4 ] and "Onsite Stormwater Management", as used by the Washington State Department of Ecology. [ 5 ]
A concept that began in Prince George's County , Maryland in 1990, LID began as an alternative to traditional stormwater best management practices (BMPs) installed at construction projects. [ 6 ] Officials found that the traditional practices such as detention ponds and retention basins were not cost-effective and the results did not meet water quality goals. The Low Impact Development Center, Inc., a non-profit water resources research organization, was formed in 1998 to work with government agencies and institutions to further the science, understanding, and implementation of LID and other sustainable environmental planning and design approaches, such as Green Infrastructure and the Green Highways Partnership.
The LID design approach has received support from the U.S. Environmental Protection Agency (EPA) and is being promoted as a method to help meet goals of the Clean Water Act . [ 7 ] Various local, state, and federal agency programs have adopted LID requirements in land development codes and implemented them in public works projects. LID techniques can also play an important role in Smart Growth and Green infrastructure land use planning.
The basic principle of LID to use nature as a model and manage rainfall at the source is accomplished through sequenced implementation of runoff prevention strategies, runoff mitigation strategies, and finally, treatment controls to remove pollutants. Although Integrated Management Practices (IMPs) — decentralized, microscale controls that infiltrate, store, evaporate, and detain runoff close to the source — get most of the attention by engineers, it is crucial to understand that LID is more than just implementing a new list of practices and products. It is a strategic design process to create a sustainable site that mimics the undeveloped hydrologic properties of the site. It requires a prescriptive approach that is appropriate for the proposed land use .
Design using LID principles follows four simple steps.
The basic processes used to manage stormwater include pretreatment, filtration, infiltration, and storage and reuse.
Pre-treatment is recommended to remove pollutants such as trash , debris , and larger sediments. Incorporation of a pretreatment system, such as a hydrodynamic separator , can prolong the longevity of the entire system by preventing the primary treatment practice from becoming prematurely clogged.
When stormwater is passed through a filter media, solids and other pollutants are removed. Most media remove solids by mechanical processes. The gradation of the media, irregularity of shape, porosity, and surface roughness characteristics all influence solids removal. Many other pollutants such as nutrients and metals can be removed through chemical and/or biological processes. Filtration is a key component to LID sites, especially when infiltration is not feasible. Filter systems can be designed to remove the primary pollutants of concern from runoff and can be configured in decentralized small-scale inlets. This allows for runoff to be treated close to its source without additional collection or conveyance infrastructure.
Infiltration reclaims stormwater runoff and allows for groundwater recharge . Runoff enters the soil and percolates through to the subsurface. The rate of infiltration is affected by soil compaction and storage capacity, and will decrease as the soil becomes saturated. The soil texture and structure, vegetation types and cover, water content of the soil, soil temperature, and rainfall intensity all play a role in controlling infiltration rate and capacity. Infiltration plays a critical role in LID site design. Some of the benefits of infiltration include improved water quality (as water is filtered through the soil) and reduction in runoff. When distributed throughout a site, infiltration can significantly help maintain the site's natural hydrology.
Capturing and reusing stormwater as a resource helps maintain a site's predevelopment hydrology while creating an additional supply of water for irrigation or other purposes. Rainwater harvesting is an LID practice that facilitates the reuse of stormwater. [ 8 ]
There are 5 core requirements when it comes to designing for LID.
Planning practices include several related approaches that were developed independently by various practitioners. These differently named approaches include similar concepts and share similar goals in protecting water quality.
Planners select structural LID practices for an individual site in consideration of the site's land use, hydrology, soil type, climate and rainfall patterns. There are many variations on these LID practices, and some practices may not be suitable for a given site. Many are practical for retrofit or site renovation projects, as well as for new construction. Optimal places for retrofitting LID are single houses, school/university areas, and parks. [ 11 ] Several structural practices are in frequent use.
Urban areas are especially prone to barriers that limit LID practices.
LID has multiple benefits, such as protecting animal habitats, improving management of runoff and flooding, and reducing impervious surfaces. For example, Dr. Allen Davis from the University of Maryland, College Park conducted research on runoff management by LID rain gardens. His data indicated that LID rain gardens can hold up to 90% of water after a major rain event and release this water over a time scale of up to two weeks. [ 15 ] LID also improves groundwater quality and increases its quantity, and it improves aesthetics, thereby raising community value.
LID can also be used to eliminate the need for stormwater ponds, which occupy expensive land. Incorporating LID into designs enables developers to build more homes on the same plot of land and maximize their profits.
In some municipalities, LID can be a cost-effective way to reduce the incidence of combined sewer overflows (CSO). [ 16 ] [ 17 ]
According to the co-benefits approach, LID is an opportunity to technically mitigate the urban heat island (UHI) phenomenon, with higher compatibility in cool pavements and green infrastructure. Although there are some intrinsic discrepancies between understandings of LID and of UHI mitigation with respect to blue infrastructure, the osmotic pool, wet pond, and regulating pond are essential supplements to urban water bodies, performing their roles in nourishing vegetation and evaporating for cooling in UHI mitigation. LID pilot projects have already provided the financial foundation for taking UHI mitigation further. It is an attempt for people in different disciplines to think synergistically about how to mitigate UHI effects, which is conducive to the generation of holistic policies, guidelines and regulations. Furthermore, the inclusion of UHI mitigation can be a driver of public participation in sponge city (SPC) construction, which can consolidate the public-private partnership (PPP) model for more funds. [ 18 ]
Low-latency queuing (LLQ) is a network scheduling feature developed by Cisco to bring strict priority queuing (PQ) to class-based weighted fair queuing (CBWFQ). LLQ allows delay-sensitive traffic (such as voice) to be given preferential treatment over other traffic by letting it be dequeued and sent first. [ 1 ]
Class-based weighted fair queuing (CBWFQ) was initially released without support for a priority queuing system, and thus could not guarantee the delay and jitter (delay variation) requirements of real-time, interactive voice and video conversations. In CBWFQ, the weight for a packet belonging to a specific class is derived from the bandwidth assigned to the class, and this weight in turn determines the order in which packets are sent. All packets are serviced fairly based on weight, and no class of packets may be granted strict priority. This scheme poses problems for voice traffic, which is largely intolerant of delay, especially variation in delay. A toy model of the scheduling behavior LLQ adds is sketched below.
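The sketch below illustrates this scheduling behavior: a single strict-priority queue, policed so that it cannot starve the weighted classes. This is an illustrative simplification, not Cisco's implementation; the class names and the policing scheme are assumptions:

```python
from collections import deque

class LLQScheduler:
    """Toy LLQ model: one strict-priority queue served ahead of
    weighted class queues, policed so it cannot starve them."""

    def __init__(self, class_weights, priority_cap_bytes):
        self.priority = deque()                     # delay-sensitive packets
        self.queues = {c: deque() for c in class_weights}
        self.weights = dict(class_weights)          # class -> bandwidth weight
        self.cap = priority_cap_bytes               # policing allowance for LLQ
        self.sent_priority = 0

    def enqueue(self, cls, pkt):
        if cls == "voice":
            self.priority.append(pkt)
        else:
            self.queues[cls].append(pkt)

    def dequeue(self):
        # Strict priority first, but only within the policed allowance.
        if self.priority and self.sent_priority < self.cap:
            pkt = self.priority.popleft()
            self.sent_priority += len(pkt)
            return pkt
        # Otherwise serve the backlogged class with the largest weight
        # (a crude stand-in for the WFQ finish-time calculation).
        backlogged = [c for c, q in self.queues.items() if q]
        if backlogged:
            return self.queues[max(backlogged, key=self.weights.get)].popleft()
        return None

sched = LLQScheduler({"data": 3, "bulk": 1}, priority_cap_bytes=10_000)
sched.enqueue("data", b"web page")
sched.enqueue("voice", b"rtp frame")
print(sched.dequeue())  # b'rtp frame' is dequeued and sent first
```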
In order to address this, Cisco released LLQ to provide strict priority queuing for CBWFQ, enabling the use of a single, strict priority queue within CBWFQ at the class level. This allows traffic belonging to a class to be directed to the CBWFQ strict priority queue. Priority status can be given to one or more classes within a policy map. When multiple classes within a single policy map are configured as priority classes, all traffic from these classes is enqueued to the same single, strict priority queue. | https://en.wikipedia.org/wiki/Low-latency_queuing |
Low-level design (LLD) is a component-level design process that proceeds by step-by-step refinement. This process can be used for designing data structures, the required software architecture, source code and, ultimately, performance algorithms. Overall, the data organization may be defined during requirement analysis and then refined during data design work. Post-build, each component is specified in detail. [ 1 ]
The LLD phase is the stage where the actual software components are designed.
The logical and functional design is done during the detailed design phase, while the design of the application structure is developed during the high-level design phase.
A design is the order of a system that connects individual components. Often, it can interact with other systems. Design is important to achieve high reliability, low cost, and good maintainability. [ 2 ] We can distinguish two types of program design phases: high-level (architectural) design and detailed (low-level) design.
Structured flow charts and HIPO diagrams typify the class of software design tools and these provide a high-level overview of a program. The advantages of such a design tool are that it yields a design specification understandable to non-programmers and provides a good pictorial display of the module dependencies.
A disadvantage is that it may be difficult for software developers to go from a graphic-oriented representation of software design to implementation, and such diagrams provide little insight into the algorithmic structure describing procedural steps. Therefore, Program Design Languages (PDLs) are generally used to facilitate the early stages of software development. [ 3 ]
The goal of LLD or a low-level design document (LLDD) is to give the internal logical design of the actual program code. Low-level design is created based on the high-level design. LLD describes the class diagrams with the methods and relations between classes, along with program specifications. It describes the modules so that the programmer can directly code the program from the document.
A good low-level design document, created with proper analysis, makes the program easy to develop. The code can then be developed directly from the low-level design document with minimal debugging and testing.
Other advantages include lower cost and easier maintenance. | https://en.wikipedia.org/wiki/Low-level_design |
A low-power, wide-area network ( LPWAN or LPWA network ) is a type of wireless telecommunication wide area network designed to allow long-range communication at a low bit rate between IoT devices , such as sensors operated on a battery .
Low power , low bit rate, and intended use distinguish this type of network from a wireless WAN that is designed to connect users or businesses, and carry more data, using more power. The LPWAN data rate ranges from 0.3 kbit/s to 50 kbit/s per channel.
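A back-of-the-envelope sketch of what these channel rates mean for a small sensor payload (idealized, ignoring protocol overhead and duty-cycle limits):

```python
def airtime_s(payload_bytes, bitrate_kbit_s):
    """Idealized time to transmit a payload at a given LPWAN channel rate
    (no protocol overhead, no retransmissions)."""
    return payload_bytes * 8 / (bitrate_kbit_s * 1000)

print(f"{airtime_s(50, 0.3):.2f} s")   # 50-byte frame at 0.3 kbit/s -> ~1.33 s
print(f"{airtime_s(50, 50):.4f} s")    # same frame at 50 kbit/s   -> 0.0080 s
```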
A LPWAN may be used to create a private wireless sensor network , but may also be a service or infrastructure offered by a third party, allowing the owners of sensors to deploy them in the field without investing in gateway technology.
A number of competing standards and vendors occupy the LPWAN space. [ 2 ]
Ultra Narrowband (UNB) is a modulation technology used for LPWAN by various companies. | https://en.wikipedia.org/wiki/Low-power_wide-area_network |
A low-rise is a building that is only a few stories tall or any building that is shorter than a high-rise , [ 1 ] though others include the classification of mid-rise. [ 2 ] [ 3 ]
Emporis defines a low-rise as "an enclosed structure below 35 metres [115 feet] which is divided into regular floor levels". [ 4 ] The city of Toronto defines a mid-rise as a building between four and twelve stories. [ 5 ] Mid-rise buildings have elevators as well as stairs, while shorter structures may have stairs only.
Low-rise apartments sometimes offer more privacy and negotiability of rent and utilities than high-rise apartments, although they may have fewer amenities and less flexibility with leases. It is easier to put fires out in low-rise buildings. [ 6 ]
Within the United States, due to the legal-economic and modernist perspectives, low-rises can in some cities be seen as less luxurious than high-rises, whereas within Western Europe (for historical identity and legal reasons) low-rise tends to be more attractive. Some businesses prefer low-rise buildings due to lower costs and more usable space. Having all employees on a single floor may also increase work productivity. [ 7 ]
| https://en.wikipedia.org/wiki/Low-rise_building |
Low-rise high-density housing refers to residential developments which are typically 4 stories or less in height, have a high number of housing units per acre of land, and have between 35 and 80 dwellings per hectare. [ 1 ] This housing type is thought to provide a middle ground between detached single-family homes and high-rise apartment buildings. [ 2 ]
Although the concept of low-rise high-density housing can be traced back to Le Corbusier 's unbuilt Roq et Rob project from 1949, [ 3 ] a more direct influence was the pioneering work of the Swiss firm Atelier 5, whose Siedlung Halen project built in Bern, Switzerland in 1955-61 became a seminal example of the typology. [ 4 ] [ 5 ]
In the 1960s and 1970s, low-rise high-density housing gained popularity among architects as a reaction to the perceived social failures of high-rise "tower-in-the-park" public housing projects. [ 5 ] Architects and planners began to rethink and reintroduce this housing model as a way to combine the benefits of urban and suburban living. [ 6 ]
The low-rise, high-density approach has regained popularity as an alternative to suburban sprawl and high-rise housing, offering a way to create density while providing a sense of community and connection to the ground. [ 7 ] [ 8 ]
Le Corbusier: His Roq et Rob project of 1949 is considered an early influence on the low-rise, high-density approach.
Atelier 5 : The Swiss architecture firm designed Siedlung Halen in Bern, Switzerland from 1959-61, which is considered the most influential low-rise, high-density project of the 1960s. [ 4 ]
The New York State Urban Development Corporation (UDC) : In 1973, the UDC, along with the Institute for Architecture and Urban Studies , presented the Marcus Garvey Park Village project in Brownsville, Brooklyn and the Another Chance for Housing: Low Rise Alternatives exhibition at the Museum of Modern Art. This showcased a future for housing in the U.S. that combined urban and suburban living benefits. [ 4 ] [ 9 ]
Seven young architecture firms : Engaged by the UDC to further develop the low-rise, high-density prototype presented at MoMA, drawing from the pioneering work of architects like Atelier 5. [ 4 ]
Contemporary architects and researchers: Figures like Karen Kubey, exhibitor of Suburban Alternatives, which traced the typology of low-rise, high-density housing over time, advocate for this approach. [ 10 ]
The aim of this housing model is to deliver the benefits of density, such as supporting public services and reducing environmental impact, while still providing residents with a sense of community and individual identity more typical of single-family homes. [ 2 ] [ 1 ] Studies have found that low-rise high-density developments have several potential benefits.
While low-rise high-density housing is seen as a valuable alternative to high-rise towers, it presents several challenges.
Advocates of low-rise, high-density architecture argue that this type of development can provide an effective "missing middle" between low-density suburbs and high-rise towers. Three- to seven-story mid-rise buildings, often in a perimeter block configuration with a central courtyard, are cited as an example of this "missing middle" that can enable walkable neighborhoods with multiple different uses and housing types. [ 12 ] Proponents suggest that this medium-density approach can achieve higher densities without the perceived downsides of high-rise towers, such as limited access to outdoor space, reduced community cohesion, and higher maintenance costs. [ 13 ] [ 14 ] Mid-rise, medium-density development is more common in Europe than in North America and Australia, where urban development has tended towards either low-density suburbs or high-rise towers. [ 12 ]
Critics have also identified a number of challenges associated with low-rise, high-density architecture. | https://en.wikipedia.org/wiki/Low-rise_high-density |
The low-temperature distillation (LTD) technology is the first implementation of the direct spray distillation (DSD) process. The first large-scale units are now in operation for desalination. The process was first developed by scientists at the University of Applied Sciences in Switzerland, focusing on low-temperature distillation in vacuum conditions, from 2000 to 2005. [ 1 ] [ 2 ]
Direct spray distillation is a water treatment process applied in seawater desalination and industrial wastewater treatment , brine and concentrate treatment as well as zero liquid discharge systems. It is a physical water separation process driven by thermal energy . Direct spray distillation involves evaporation and condensation on water droplets that are sprayed into a chamber that is evacuated of non-condensable permanent gases like air and carbon dioxide. Compared to other vaporization systems, no phase change happens on solid surfaces such as shell and tube heat exchangers .
Currently, the only implementation of DSD technology is low-temperature distillation (LTD). The LTD process runs at reduced pressure in the evaporator and condenser chambers, with process temperatures below 100 °C. The first large-scale LTD systems for industrial water treatment are now in operation. [ citation needed ]
The DSD process was invented in the late 1990s by Mark Lehmann, with the first successful demonstration of the process in a factory hall of the Obrecht AG, Doettingen, Switzerland. The results of the experiments were evaluated and double-checked by Prof. Dr. Kurt Heiniger (University of Applied Sciences and Arts, Northwestern Switzerland) [ 3 ] [ 4 ] and Dr. Franco Blanggetti (Alstom, co-author of the VDI Wärmeatlas). [ 5 ] Over the following years, the process was further researched in the framework of many theses [ 1 ] supervised by Heiniger and Lehmann. [ 2 ] The objective was to examine the influence of non-condensable gases in lowered-pressure environments on the heat transfer during the condensation process on cooled droplets. It was found that the droplet size and distribution, as well as the geometry of the condensation reactor, have the most significant influence on the heat transfer. Due to the absence of common tube bundle heat exchangers, the achievable efficiency gains result from the minimized heat resistance during the condensation process.
Low temperature distillation (LTD) is a thermal distillation process in several stages, powered by temperature differences of at least 5 K per stage between heat and cooling sources. Two separate volume flows, a hot evaporator flow and a cool condenser flow, with different temperatures and vapor pressures, are sprayed into a combined pressure chamber, from which non-condensable gases are continuously removed. [ 3 ] [ 4 ] As the vapor moves toward a partial pressure equilibrium, part of the water from the hot stream evaporates. [ 5 ] Several serially arranged chambers, with the hot evaporator and cold condenser streams in counterflow, allow a high internal heat recovery through the application of multiple stages. The process achieves a high specific heat conversion rate through the reduction of heat transfer losses, which results in high thermal efficiency and low heat transfer resistance. The LTD process is tolerant of high salinity, other impurities, and fluctuating feed water qualities. The precipitation of solids is technically intended, to allow for zero-liquid-discharge operation (complete ZLD). It is possible to combine the low-temperature distillation process with existing desalination technologies, serving as a downstream process to increase the water output and reduce the brine generation.
The following figures show and explain the thermodynamic principle on which the LTD technology is built. Consider Fig. 1: two cylinders with open bottoms stand in two water basins at two different temperatures (assumption: hot at 50°C and cold at 20°C). The corresponding vapor pressure of the water is 123 mbar at 50°C and 23 mbar at 20°C. It is assumed that the two cylinders are 10 meters long and can be pulled up by the same distance.
The pulled-out cylinders in Fig. 2 now show a different situation regarding the level of the water column. Due to the higher vapor pressure at 50°C, atmospheric pressure can raise the hot water column only about 877 cm; in the remaining space, the water starts to evaporate at a pressure of 123 mbar. In the cold water column at 20°C, the atmospheric pressure (1000 mbar) supports a column 977 cm high, in equilibrium with the corresponding vapor pressure of 23 mbar. If no heat exchange takes place, this situation remains unchanged and is thermodynamically in equilibrium.
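These column heights follow directly from the pressure balance, using the convention noted below that 1 mbar corresponds to about 1 cm of water column; a minimal check:

```python
def column_height_cm(p_atm_mbar, p_vapor_mbar, mbar_per_cm=1.0):
    """Water column height supported by atmospheric pressure when the space
    above the column holds vapor at p_vapor (1 mbar ~ 1 cm water column)."""
    return (p_atm_mbar - p_vapor_mbar) / mbar_per_cm

print(column_height_cm(1000, 123))  # hot side,  50 C -> 877.0 cm
print(column_height_cm(1000, 23))   # cold side, 20 C -> 977.0 cm
```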
Now, the tops of both columns are connected with a vapor channel in Fig. 3. Once they are connected, the two vapor chambers (123 mbar and 23 mbar) spontaneously equalize their pressure to an average pressure. As a result, the two water columns tend toward the same level on both sides. However, this connection causes an energetic imbalance in the physical conditions at the water surface on top of each column. In the 50°C hot column, the vapor pressure of the medium is higher than the average pressure. On the 20°C cold side, the average pressure is higher than the vapor pressure of the water. This situation leads to spontaneous boiling on the hot side and vapor condensation on the cooler side at the water surface. This process continues until the temperature has been balanced out in both columns. After the temperature adaptation, the pressures and levels in the two chambers are equal.
As a consequence, as long as a temperature difference between the two columns is maintained, spontaneous evaporation and condensation of the surface water take place in order to reach equilibrium temperature and pressure. To make this technically usable, an additional external circulation in Fig. 4 can supply heat E i n {\displaystyle \textstyle E_{in}} on the evaporation side and extract heat E o u t {\displaystyle \textstyle E_{out}} on the condenser side. As the reaction velocity is strongly dependent on the available water surface, a specially designed spraying system creates millions of small droplets. This huge internal water surface results in very high internal heat transfer rates between evaporator and condenser.
This principle also works if the unused bottom of the open water column is cut off and replaced by a lid, as shown in Fig. 5. Experiments on the demonstration plant have shown that a pressure differential of only a few millibar (1 mbar corresponds to 1 cm of water column) is sufficient to run this distillation process. This corresponds to very small temperature differentials of a few kelvin.
If the temperature spread between the heat source and condenser is large enough, the condenser can act as a heater for the following stage. This has the advantage that the condensation heat is re-used multiple times at different temperatures and pressures, increasing the energetic efficiency with each additional stage. Depending on the available temperature difference, the condensation heat can be reused several times, increasing the distillation capacity obtained from the same amount of available heat. The result is the multi-cascaded direct spray distillation visualized in Fig. 6.
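A rough sketch of the cascade sizing implied here, assuming the minimum spread of about 5 K per stage mentioned above and ignoring losses (the stage count and heat-reuse factor are idealized upper bounds):

```python
def max_stages(t_hot_c, t_cold_c, dt_per_stage_k=5.0):
    """Idealized upper bound on cascade stages for a given temperature spread,
    using the minimum of ~5 K per stage stated above (losses ignored)."""
    return int((t_hot_c - t_cold_c) // dt_per_stage_k)

n = max_stages(95.0, 25.0)   # e.g. 95 C heat source, 25 C cooling water
print(n)  # 14 stages: condensation heat is reused ~14 times, multiplying
          # the distillate produced per unit of supplied heat
```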
The low temperature distillation process needs reactors for evaporation and condensation equipped with the spraying system to generate the droplets, and three standard plate heat exchangers (heating, cooling, thermal recovery). The feedwater and distillate are pumped in two large circulation streams through the reactors. The thermal recovery is realized in a heat exchanger preheating the feedwater with the distillate after condensation. Saturated brine and distillate are removed from the process by valve locks. The process and media flows are visualized in Fig. 7 in a general process scheme. [ 6 ]
The thermal energy (1) is supplied at the main heat exchanger (HEX 1) by any available medium, heating the intake water to up to 95°C.
In the evaporator cycle (green), the hot water is sprayed and evaporated in pressure reduced chambers (2) and flows by gravity to the subsequent chambers with lowered temperature and pressure environment. The generated vapor (3) flows from the evaporator to the condenser in every stage where it condenses on the cooled droplets of the sprayed distillate. [ 6 ]
The heat exchanger for cooling (HEX 3) reduces the temperature of the distillate (4) before it is pumped to the condenser cycle. In the condenser cycle (5), the cooled distillate is pumped and sprayed into the pressure chambers to allow for vapor condensation from the evaporators on cooled droplets. During this process, the temperature and pressure increases from stage to stage. After the last condenser, the increased heat of the distillate is recovered in the heat exchanger for thermal recovery (HEX 2) preheating the evaporator cycle. After the condensation in the first reactor, the distillate is hotter compared to the brine of the last evaporator. This condensation heat is recovered in HEX 2 and is used for heating the evaporator cycle (6). It is beneficial for the energetic efficiency to design this heat exchanger as large as possible. [ 6 ]
In order to run the process, a vacuum system (7) extracts non-condensable gases (like C O 2 , N 2 , O 2 {\displaystyle \textstyle CO_{2},N_{2},O_{2}} ) from the chambers. In the connection duct to the vacuum pump, an optional heat exchanger (HEX 5) cools down the vapor to condense as much water as possible (8). After an optional heat recovery (9), the recovered distillate is transferred out of the process. A post-treatment system can treat the distillate according to the desired requirements (remineralization). The brine is extracted from the evaporator cycle after the last evaporator stage (10). The over-saturation and precipitation of salts for zero-liquid-discharge (ZLD) applications require an additional evaporator acting as a crystallizer, which is not shown in Fig. 7. [ 6 ]
The main components of low temperature distillation plants are the pressure vessels and the spraying facilities. Further important components are an adapted instrumentation and control system as well as a vacuum system. A low temperature distillation plant has no membranes and no tube bundles, and consists of the main elements described below.
Evaporator and condenser vessels are constructed for vacuum pressure conditions up to 20 m b a r a b s {\displaystyle \textstyle mbar_{abs}} and include the spraying installations for the evaporation/condensation reactors. [ 6 ]
For the energy supply of the process itself, only standard plate heat exchangers are installed. A low temperature distillation plant consists of one heat exchanger for the heat transfer from heat source into water and one for the heat transfer from distillate to the re-cooling media. A plant with several cascades has one additional heat exchanger for internal heat recovery (HEX 2) increasing the thermal efficiency of the plant. [ 7 ] Due to the flexibility of the low temperature distillation process, various arrangements are possible to adapt each plant to the given application. If only a small overall temperature spread or a limited heat source is available, internal flows can be adjusted for maximised internal heat recovery. Additional low-temperature heat sources such as solar collector systems can also be integrated. [ 7 ]
The media supply is mostly realized with standard centrifugal pumps. The process conditions favor a low NPSH construction in order to facilitate hot media leaving the system from vacuum conditions. Due to the lowered volume flows in small scale plants, the application of displacer pumps is recommended.
Low temperature distillation operates at low temperature and low pressure, similar to multi-effect distillation (MED) and multi-stage flash distillation (MSF). [ 8 ] While the process flow is similar to a MSF plant, the temperature and pressure dynamics are more comparable to a MED system. [ 7 ] It is designed to use low-grade or waste heat from other industrial processes or renewable sources, like solar thermal collectors. [ 9 ] The most significant difference compared to MED and MSF technologies is that there are no tube-bundle heat exchangers within the pressure chambers. This permits several enhancements of the thermal distillation process. [ 6 ]
Due to the relatively high energy demand of thermal distillation processes for water treatment, low temperature distillation is most economically applicable for high-saline feed waters. Fig. 8 compares the relative energy and plant costs with those of membrane-based desalination processes, like reverse osmosis (RO), for sea water desalination. The possible feed waters may contain a wide range of impurities, like brines from desalination plants, radioactive ground water, produced water from oil production, hydrocarbon-polluted water, and salinities up to 33% NaCl. The plant operates even at concentrations high enough to cause the precipitation of inorganic compounds. Also, the effluent of existing sea water desalination plants can be treated further in a low temperature distillation plant to maximise the dewatering capacity of a desalination system.
Low temperature distillation can accommodate variations in the plant load, running efficiently at 50–100% of plant design capacity depending on the available heat supply. The spraying process is self-adjusting, and the amount of water produced is proportional to the amount of heat provided.
The LTD process is most suitable for high-saline feedwaters, from typical sea water concentrations up to concentrated wastewater solutions from various industrial processes. [ 10 ] One possible application is the duplication of the capacity of RO-based desalination systems by further treating the evolving effluents up to the precipitation of salts. Brackish water desalination is also possible in principle, but other desalination processes tend to be more economical due to the low osmotic pressure and the resulting low specific energy consumption. [ 6 ]
Low temperature distillation plants are not prone to scaling or clogging, even with very high TDS in the feed water. There are no installations within the pressure vessels that could scale. Phase changes (evaporation and condensation) only take place on the surface of the water droplets, never on solid surfaces. Several design features ensure this minimal risk of scaling within the plant.
Low temperature distillation plants are able to treat a wide range of feed waters, such as the high-salinity brines and industrial effluents described above.
The desalinated water from the low temperature distillation process is almost demineralized, with a remaining salinity of about 10 ppm. Residual contaminants result from demister losses and depend on the treated feedwater as well as on the vapor velocities between evaporator and condenser. The brine concentration in the LTD process can be adjusted to the site conditions and disposal options. Current research focuses on selective crystallization to recover various salt species beyond NaCl.
The application of the LTD process becomes economically feasible at salinities of more than 4%. LTD can be useful for normal seawater desalination if high recovery rates or further treatment of the RO brine are required. Treating high-saline effluents from industrial processes such as the oil and gas industry, the textile industry, and the chemical industry is more advantageous. In general, pretreatment for zero-liquid-discharge systems with LTD is the most economical option. The treatment of brackish water is possible in principle, but the energy consumption required for evaporation is higher compared to conventional reverse osmosis.
Due to the reduction of the brine volume, the environmental impacts are significantly lower compared to standard seawater RO units. NaCl can be recovered in high purity and used, e.g., as regeneration salt for ion exchangers or water softeners.
The LTD process has a stable part-load behaviour which facilitates the use of renewable energy sources. Thermal energy can be supplied by solar collectors like flat plate or evacuated tube, solar ponds, concentrating solar collectors, or in co-generation with solar power plants. [ 7 ] [ 8 ] [ 9 ] [ 10 ]
Opportunities for improvement focus mainly on integration into an appropriate operating environment with heat management. Combining LTD plants with thermal power plants as heat sources appears advantageous. Combinations with other desalination processes, such as thermal or mechanical vapor compression (MVC), are also possible. Under certain process conditions, such systems can compensate for a fluctuating heat supply by substituting electric power in an integrated MVC unit.
Current research focuses on the reduction of the heat and electricity consumption of auxiliary systems. The selective crystallization of the brine and recovery of salts are also being researched (in cooperation with TU Berlin, Germany). Further development potential lies in the integration of adsorption and absorption technologies for integrated cooling and desalination. | https://en.wikipedia.org/wiki/Low-temperature_distillation |
The following is a timeline of low-temperature technology and cryogenic technology (refrigeration down to close to absolute zero, i.e. −273.15 °C, −459.67 °F or 0 K). [ 1 ] It also lists important milestones in thermometry, thermodynamics, statistical physics and calorimetry that were crucial in the development of low temperature systems. | https://en.wikipedia.org/wiki/Low-temperature_technology_timeline
In environmental remediation, low-temperature thermal desorption (LTTD), also known as low-temperature thermal volatilization, thermal stripping, and soil roasting, is an ex-situ remedial technology that uses heat to physically separate petroleum hydrocarbons from excavated soils. Thermal desorbers are designed to heat soils to temperatures sufficient to cause constituents to volatilize and desorb (physically separate) from the soil. Although they are not designed to decompose organic constituents, thermal desorbers can, depending upon the specific organics present and the temperature of the desorber system, cause some organic constituents to completely or partially decompose. The vaporized hydrocarbons are generally treated in a secondary treatment unit (e.g., an afterburner, catalytic oxidation chamber, condenser, or carbon adsorption unit) prior to discharge to the atmosphere. Afterburners and oxidizers destroy the organic constituents. Condensers and carbon adsorption units trap organic compounds for subsequent treatment or disposal.
Some preprocessing and postprocessing of soil is necessary when using LTTD. Excavated soils are first screened to remove large (greater than 2 inches in diameter) objects. These may be sized (e.g., crushed or shredded) and then introduced back into the feed material. After leaving the desorber, soils are cooled, re-moistened to control dust, and stabilized (if necessary) to prepare them for disposal or reuse. Treated soil may be redeposited onsite, used as cover in landfills , or incorporated into asphalt.
LTTD has proven very effective in reducing concentrations of petroleum products including gasoline, jet fuels, kerosene, diesel fuel, heating oils, and lubricating oils. LTTD is applicable to constituents that are volatile at temperatures up to 1,200 °F. Most desorbers operate at temperatures between 300 °F and 1,000 °F. Desorbers constructed of special alloys can operate at temperatures up to 1,200 °F. More volatile products (e.g. gasoline) can be desorbed at the lower end of the operating range, while semivolatile products (e.g. kerosene, diesel fuel) generally need temperatures over 700 °F, and relatively nonvolatile products (e.g. heating oil, lubricating oils) need even higher temperatures. Essentially all soil types are amenable to treatment by LTTD systems. However, different soils may require varying degrees and types of pretreatment. For example, coarse-grained soils (e.g. gravel and cobbles) may require crushing; fine-grained soils that are excessively cohesive (e.g. clay) may require shredding.
State and local regulations specify that petroleum-contaminated soils must be pilot tested by processing some soil from the site through the LTTD system (a "test burn"). The results of preliminary testing of soil samples should identify the relevant constituent properties, and examination of the machine's performance records should indicate how effective the system will be in treating the soil. The proven effectiveness of a particular system for a specific site or waste does not ensure that it will be effective at all sites, or that the treatment efficiencies achieved will be acceptable at other sites. If a test burn is conducted, it is important to ensure that the soil tested is representative of average conditions and that enough samples are analyzed before and after treatment to confidently determine whether LTTD will be effective.
Operation of LTTD units requires various permits and demonstration of compliance with permit requirements. Monitoring requirements for LTTD systems are by their nature different from monitoring required at a UST site. Monitoring of LTTD system waste streams (e.g. concentrations of particulates , volatiles , and carbon monoxide in stack gas ) are required by the agency or agencies issuing the permits for operation of the facility. The LTTD facility owner/operator is responsible for complying with limits specified by the permits and for other LTTD system operating parameters (e.g. desorber temperature, soil feed rate, afterburner temperature).
The decision as to whether or not LTTD is a practical remedial alternative depends upon site-specific characteristics (e.g. the location and volume of contaminated soils, site layout). Practicability is also determined by regulatory, logistical, and economic considerations; the economics of LTTD as a remedial option are highly site-specific.
Thermal desorption systems fall into two general classes—stationary facilities and mobile units. Contaminated soils are excavated and transported to stationary facilities; mobile units can be operated directly onsite. Desorption units are available in a variety of process configurations including rotary desorbers, asphalt plant aggregate dryers, thermal screws, and conveyor furnaces .
The plasticity of the soil is a measure of its ability to deform without shearing and is to some extent a function of water content. Plastic soils tend to stick to screens and other equipment, and agglomerate into large clumps. In addition to slowing down the feed rate, plastic soils are difficult to treat. Heating plastic soils requires higher temperatures because of the low surface area to volume ratio and increased moisture content. Also, because plastic soils tend to be very fine-grained, organic compounds tend to be tightly sorbed . Thermal treatment of highly plastic soils requires pretreatment, such as shredding or blending with more friable soils or other amendments (e.g. gypsum ).
Material larger than 2 inches in diameter will need to be crushed or removed. Crushed material is recycled back into the feed to be processed. Coarser-grained soils tend to be free-flowing and do not agglomerate into clumps; they typically do not retain excessive moisture, so contaminants are easily desorbed. Finer-grained soils tend to retain soil moisture and agglomerate into clumps. When dry, they may yield large amounts of particulates that may require recycling after being intercepted in the baghouse.
The solids processing capacity of a thermal desorption system is inversely proportional to the moisture content of the feed material. The presence of moisture in the excavated soils to be treated in the LTTD unit will determine the residence time required and heating requirements for effective removal of contaminants. In order for desorption of petroleum constituents to occur, most of the soil moisture must be evaporated in the desorber. This process can require significant additional thermal input to the desorber and excessive residence time for the soil in the desorber. Moisture content also influences plasticity which affects handling of the soil. Soils with excessive moisture content (> 20%) must be dewatered. Typical dewatering methods include air drying (if storage space is available to spread the soils), mixing with drier soils, or mechanical dewatering.
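A rough energy estimate shows why moisture dominates the heat load. The Python sketch below uses standard handbook values for water; the one-ton batch size and 60 °F inlet temperature are assumptions for illustration, not figures from the source.

```python
LATENT_HEAT_BTU_PER_LB = 970.0   # heat of vaporization of water near 212 F
WATER_SPECIFIC_HEAT = 1.0        # Btu/(lb*F)

def moisture_heat_load(moisture_fraction: float,
                       feed_lb: float = 2000.0,      # one ton of soil
                       inlet_temp_f: float = 60.0) -> float:
    """Btu needed just to heat and evaporate the water in `feed_lb` of soil."""
    water_lb = feed_lb * moisture_fraction
    sensible = water_lb * WATER_SPECIFIC_HEAT * (212.0 - inlet_temp_f)
    latent = water_lb * LATENT_HEAT_BTU_PER_LB
    return sensible + latent

# Going from 10% to 20% moisture doubles the water-related heat demand,
# one reason soils above ~20% moisture are usually dewatered first.
for m in (0.10, 0.20):
    print(f"{m:.0%} moisture: {moisture_heat_load(m)/1e6:.2f} MMBtu per ton")
```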
The presence of metals in soil has implications for both the treatment process and the handling of the treated soil.
At normal LTTD operating temperatures , heavy metals are not likely to be significantly separated from soils.
High concentrations of petroleum products in soil can result in high soil heating values. Heat released from soils can result in overheating and damage to the desorber. Soils with heating values greater than 2,000 Btu/lb require blending with cleaner soils to dilute the high concentration of hydrocarbons. High hydrocarbon concentrations in the offgas may exceed the thermal capacity of the afterburner and potentially result in the release of untreated vapors into the atmosphere. Excessive constituent levels in soil could also potentially result in the generation of vapors in the desorber at concentrations exceeding the lower explosive limit (LEL). If the LEL is exceeded there is a potential for explosion.
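If heating values are assumed to blend linearly with mass fraction (a simplification), the dilution needed to meet the 2,000 Btu/lb limit can be estimated as below; the 5,000 Btu/lb soil is a hypothetical example.

```python
def clean_soil_fraction(hot_btu_lb: float,
                        target_btu_lb: float = 2000.0,
                        clean_btu_lb: float = 0.0) -> float:
    """Clean-soil mass fraction f so that a linear blend meets the target:
    f * clean + (1 - f) * hot <= target.
    """
    if hot_btu_lb <= target_btu_lb:
        return 0.0  # no blending needed
    return (hot_btu_lb - target_btu_lb) / (hot_btu_lb - clean_btu_lb)

# A soil at 5,000 Btu/lb needs a blend that is 60% clean soil by mass.
print(f"{clean_soil_fraction(5000.0):.0%}")
```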
The term "thermal desorber" describes the primary treatment operation that heats petroleum-contaminated materials and desorbs organic materials into a purge gas. Mechanical design features and process operating conditions vary considerably among the various types of LTTD systems. Desorption units are: available in four configurations:
Although all LTTD systems use heat to separate (desorb) organic contaminants from the soil matrix, each system has a different configuration with its own set of advantages and disadvantages. The decision to use one system over another depends on the nature of the contaminants as well as machine availability, system performance, and economic considerations. System performance may be evaluated on the basis of pilot tests (e.g., test burns) or examination of historical machine performance records. Pilot tests to develop treatment conditions are generally not necessary for petroleum-contaminated soils.
Rotary dryer systems use a cylindrical metal reactor (drum) that is inclined slightly from the horizontal. A burner located at one end provides heat to raise the temperature of the soil sufficiently to desorb organic contaminants. The flow of soil may be either cocurrent with or countercurrent to the direction of the purge gas flow. As the drum rotates, soil is conveyed through the drum. Lifters raise the soil, carrying it to near the top of the drum before allowing it to fall through the heated purge gas. Mixing in a rotary dryer enhances heat transfer by convection and allows soils to be rapidly heated. Rotary desorber units are manufactured for a wide range of treatment capacities; these units may be either stationary or mobile.
The maximum soil temperature that can be obtained in a rotary dryer depends on the composition of the dryer shell. The soil discharge temperature of carbon steel drums is typically 300 °F to 600 °F. Alloy drums are available that can increase the soil discharge temperature to 1,200 °F. Most rotary dryers that are used to treat petroleum-contaminated soil are made of carbon steel. After the treated soil exits the rotary dryer, it enters a cooling conveyor where water is sprayed on the soil for cooling and dust control. Water addition may be conducted in either a screw conveyor or a pugmill.
Besides the direction of purge gas flow relative to the soil feed, there is one major difference in configuration between countercurrent and cocurrent rotary dryers. The purge gas from a countercurrent rotary dryer is typically only 350 °F to 500 °F and does not require cooling before entering the baghouse, where fine particles are trapped. A disadvantage is that these particles may not have been decontaminated and are typically recycled to the dryer. Countercurrent dryers have several advantages over cocurrent systems: they are more efficient in transferring heat from purge gas to contaminated soil, and the volume and temperature of the exit gas are lower, allowing the gas to go directly to a baghouse without cooling. The cooler exit gas temperature and smaller volume eliminate the need for a cooling unit and allow downstream processing equipment to be smaller. Countercurrent systems are effective on petroleum products with molecular weights lower than that of No. 2 fuel oil.
In cocurrent systems, the purge gas is 50 °F to 100 °F hotter than the soil discharge temperature. As a result, the purge gas exit temperature may range from 400 °F to 1,000 °F, and the gas cannot go directly to the baghouse; it first enters an afterburner to decontaminate the fine particles, then a cooling unit, before introduction into the baghouse. Because of the higher temperature and volume of the purge gas, the baghouse and all other downstream processing equipment must be larger than in a countercurrent system. Cocurrent systems do have several advantages over countercurrent systems: the afterburner is located upstream of the baghouse, ensuring that fine particles are decontaminated; and because the heated purge gas is introduced at the same end of the drum as the feed soil, the soil is heated faster, resulting in a longer residence time. Higher temperatures and longer residence times mean that cocurrent systems can be used to treat soils contaminated with heavier petroleum products. Cocurrent systems are effective for both light and heavy petroleum products, including No. 6 fuel oil, crude oil, motor oil, and lubricating oil.
Hot-mix asphalt plants use aggregate that has been processed in a dryer before it is mixed with liquid asphalt. The use of petroleum-contaminated soils as aggregate material is widespread. Aggregate dryers may be either stationary or mobile. Soil treatment capacities range from 25 to 150 tons per hour. The soil may be incorporated into the asphalt as a recycling process, or the treated soil may be used for other purposes.
Asphalt rotary dryers are normally constructed of carbon steel and have a soil discharge temperature of 300 °F to 600 °F. Typically, asphalt plant aggregate dryers are identical to the countercurrent rotary desorbers described above and are effective on the same types of contaminants. The primary difference is that an afterburner is not required for incorporation of clean aggregate into the asphalt mix. In some areas, asphalt plants that use petroleum-contaminated soil for aggregate may be required to be equipped with an afterburner.
A thermal screw desorber typically consists of a series of one to four augers. The auger system conveys, mixes, and heats contaminated soils to volatilize moisture and organic contaminants into a purge gas stream. Augers can be arranged in series to increase the soil residence time, or they can be configured in parallel to increase throughput capacity. Most thermal screw systems circulate a hot heat-transfer oil through the hollow flights of the auger and return the hot oil through the shaft to the heat transfer fluid heating system. The heated oil is also circulated through the jacketed trough in which each auger rotates. Thermal screws can also be steam-heated. Systems heated with oil can achieve soil temperatures of up to 500 °F, and steam-heated systems can heat soil to approximately 350 °F.
Most of the gas generated during heating of the heat-transfer oil does not come into contact with the waste material and can be discharged directly to the atmosphere without emission controls. The remainder of the flue gas maintains the thermal screw purge gas exit temperature above 300 °F, which ensures that volatilized organics and moisture do not condense. In addition, the recycled flue gas has a low oxygen content (less than 2% by volume), which minimizes oxidation of the organics and reduces the explosion hazard. If pretreatment analytical data indicate a high organic content (greater than 4 percent), use of a thermal screw is recommended. After the treated soil exits the thermal screw, water is sprayed on it for cooling and dust control. Thermal screws are available with soil treatment capacities ranging from 3 to 15 tons per hour.
Since thermal screws are indirectly heated, the volume of purge gas from the primary thermal treatment unit is less than one half of the volume from a directly heated system with an equivalent soil processing capacity. Therefore, offgas treatment systems consist of relatively small unit operations that are well suited to mobile applications. Indirect heating also allows thermal screws to process materials with high organic contents since the recycled flue gas is inert, thereby reducing the explosion hazard.
A conveyor furnace uses a flexible metal belt to convey soil through the primary heating chamber. A one-inch-deep layer of soil is spread evenly over the belt. As the belt moves through the system, soil agitators lift the belt and turn the soil to enhance heat transfer and the volatilization of organics. The conveyor furnace can heat soils to temperatures from 300 °F to 800 °F. At the higher end of this range, the conveyor furnace is more effective in treating some heavier petroleum hydrocarbons than oil- or steam-heated thermal screws, asphalt plant aggregate dryers, and carbon steel rotary dryers. After the treated soil exits the conveyor furnace, it is sprayed with water for cooling and dust control. As of February 1993, only one conveyor furnace system was in use for the remediation of petroleum-contaminated soil; this system is mobile and can treat 5 to 10 tons of soil per hour.
Offgas treatment systems for LTTD systems are designed to address three types of air pollutants: particulates, organic vapors, and carbon monoxide. Particulates are controlled with both wet (e.g., venturi scrubbers) and dry (e.g., cyclones, baghouses) unit operations. Rotary dryers and asphalt aggregate dryers most commonly use dry gas cleaning unit operations. Cyclones are used to capture large particulates and reduce the particulate load to the baghouse. Baghouses are used as the final particulate control device. Thermal screw systems typically use a venturi scrubber as the primary particulate control.
The control of organic vapors is achieved by either destruction or collection. Afterburners are used downstream of rotary dryers and conveyor furnaces to destroy organic contaminants and oxidize carbon monoxide. Conventional afterburners are designed so that exit gas temperatures reach 1,400 °F to 1,600 °F. Organic destruction efficiency typically ranges from 95% to greater than 99%.
Condensers and activated carbon may also be used to treat the offgas from thermal screw systems. Condensers may be water-cooled or electrically cooled systems that decrease offgas temperatures to 100–140 °F. The efficiency of condensers for removing organic compounds ranges from 50% to greater than 95%. Noncondensible gases exiting the condenser are normally treated by a vapor-phase activated carbon system, whose efficiency for removing organic contaminants ranges from 50% to 99%. Condensate from the condenser is processed through a phase separator, where the non-aqueous phase organic component is separated and disposed of or recycled. The remaining water is then processed through activated carbon and used to rehumidify treated soil.
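Because the condenser and the carbon bed act in series, their removal efficiencies combine on the residual rather than adding. A small sketch, using assumed mid-range efficiencies from the ranges quoted above:

```python
def overall_removal(stage_efficiencies):
    """Overall removal for treatment stages in series: each stage removes
    a fraction e of what reaches it, so the surviving fraction is the
    product of (1 - e) over all stages.
    """
    surviving = 1.0
    for e in stage_efficiencies:
        surviving *= (1.0 - e)
    return 1.0 - surviving

# An 80%-efficient condenser followed by a 90%-efficient carbon bed:
print(f"{overall_removal([0.80, 0.90]):.1%}")  # -> 98.0%
```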
Treatment temperature is a key parameter affecting the degree of treatment of organic components. The required treatment temperature depends upon the specific types of petroleum contamination in the soil. The actual temperature achieved by an LTTD system is a function of the moisture content and heat capacity of the soil, soil particle size, and the heat transfer and mixing characteristics of the thermal desorber.
Residence time is a key parameter affecting the degree to which decontamination is achievable. Residence time depends upon the design and operation of the system, characteristics of the contaminants and the soil, and the degree of treatment required. | https://en.wikipedia.org/wiki/Low-temperature_thermal_desorption |
In semiconductor manufacturing, a low-κ dielectric is a material with a relative dielectric constant (κ, kappa) smaller than that of silicon dioxide. Low-κ dielectric implementation is one of several strategies used to allow continued scaling of microelectronic devices, colloquially referred to as extending Moore's law. In digital circuits, insulating dielectrics separate the conducting parts (wire interconnects and transistors) from one another. As components have scaled and transistors have moved closer together, the insulating dielectrics have thinned to the point where charge build-up and crosstalk adversely affect the performance of the device. Replacing the silicon dioxide with a low-κ dielectric of the same thickness reduces parasitic capacitance, enabling faster switching speeds (in the case of synchronous circuits) and lower heat dissipation. In conversation such materials may be referred to as "low-k" (spoken "low-kay") rather than "low-κ" (low-kappa).
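A simple parallel-plate estimate illustrates the capacitance saving. In the Python sketch below, the interconnect geometry and the κ = 2.5 value (typical of a carbon-doped oxide) are assumed example numbers, not figures from the source.

```python
EPS0_PF_PER_UM = 8.854e-6  # vacuum permittivity in pF/um

def plate_capacitance_pf(kappa: float, area_um2: float, gap_um: float) -> float:
    """Parallel-plate estimate of the capacitance between two conductors."""
    return kappa * EPS0_PF_PER_UM * area_um2 / gap_um

# Two 1 um x 100 um wire faces separated by a 0.1 um dielectric gap:
c_sio2 = plate_capacitance_pf(3.9, 100.0, 0.1)
c_lowk = plate_capacitance_pf(2.5, 100.0, 0.1)  # assumed carbon-doped oxide
print(f"SiO2: {1e3*c_sio2:.1f} fF, low-k: {1e3*c_lowk:.1f} fF, "
      f"a {1 - c_lowk/c_sio2:.0%} reduction")
```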
In integrated circuits and CMOS devices, silicon dioxide can readily be formed on Si surfaces through thermal oxidation, and can further be deposited on the surfaces of conductors using chemical vapor deposition or various other thin-film fabrication methods. Because of the wide range of methods that can be used to cheaply form silicon dioxide layers, this material is conventionally used as the baseline to which other low-permittivity dielectrics are compared. The relative dielectric constant of SiO2, the insulating material still used in silicon chips, is 3.9. This number is the ratio of the permittivity of SiO2 to that of vacuum, εSiO2/ε0, where ε0 = 8.854×10⁻⁶ pF/μm. [ 1 ] There are many materials with lower relative dielectric constants, but few of them can be suitably integrated into a manufacturing process. Development efforts have focused primarily on the following classes of materials:
By doping SiO 2 with fluorine to produce fluorinated silica glass, the relative dielectric constant is lowered from 3.9 to 3.5. [ 2 ] Fluorine-doped oxide materials were used for the 180 nm and 130 nm technology nodes. [ 3 ]
By doping SiO2 with carbon, one can lower the relative dielectric constant to 3.0, the density to 1.4 g/cm3 and the thermal conductivity to 0.39 W/(m·K). The semiconductor industry has been using organosilicate glass dielectrics since the 90 nm technology node. [ 4 ]
Various methods may be employed to create voids or pores in a silicon dioxide dielectric. [ 3 ] Voids can have a relative dielectric constant of nearly 1, thus the dielectric constant of the porous material may be reduced by increasing the porosity of the film. Relative dielectric constants lower than 2.0 have been reported. Integration difficulties related to porous silicon dioxide implementation include low mechanical strength and difficult integration with etch and polish processes.
Porous organosilicate materials are usually obtained by a two-step procedure [ 4 ] where the first step consists of the co-deposition of a labile organic phase (known as porogen) together with an organosilicate phase resulting in an organic-inorganic hybrid material . In the second step, the organic phase is decomposed by UV curing or annealing at a temperature of up to 400 °C, leaving behind pores in the organosilicate low-κ materials. Porous organosilicate glasses have been employed since the 45 nm technology node. [ 5 ]
Polymeric dielectrics are generally deposited by a spin-on approach, which is traditionally used for the deposition of photoresist materials, rather than chemical vapor deposition . Integration difficulties include low mechanical strength, coefficient of thermal expansion (CTE) mismatch and thermal stability. Some examples of spin-on organic low-κ polymers are polyimide , polynorbornenes , benzocyclobutene , and PTFE .
There are two kinds of silicon-based polymeric dielectric material: hydrogen silsesquioxane and methylsilsesquioxane.
The ultimate low-κ material is air, with a relative permittivity of ~1.0. However, the placement of air gaps between the conducting wires compromises the mechanical stability of the integrated circuit, making it impractical to build an IC consisting entirely of air as the insulating material. Nevertheless, the strategic placement of air gaps can improve a chip's electrical performance without critically compromising its durability. For example, Intel uses air gaps for two interconnect levels in its 14 nm FinFET technology. [ 6 ] | https://en.wikipedia.org/wiki/Low-κ_dielectric
In computing , LBX , or Low Bandwidth X , is a protocol to use the X Window System over network links with low bandwidth and high latency . It was introduced in X11R6.3 ("Broadway") in 1996, but never achieved wide use. It was disabled by default as of X.Org Server 7.1, and was removed for version 7.2.
X was originally implemented for use with the server and client on the same machine or the same local area network . By 1996, the Internet was becoming popular, and X's performance over narrow, slow links was problematic.
LBX ran as a proxy server ( lbxproxy ). It cached commonly used information — connection setup, large window properties, font metrics, keymaps and so on — and compressed data transmission over the network link.
LBX was never widely deployed as it did not offer significant speed improvements. The slow links it was introduced to help were typically insecure, and RFB ( VNC ) over a secure shell connection — which includes compression — proved faster than LBX, and also provided session resumption.
Finally, it was shown that greater speed improvements to X could be obtained for all networked environments with replacement of X's antiquated font system as part of the new composited graphics system, along with care and attention to application and widget toolkit design, particularly care to avoid network round trips and hence latency . | https://en.wikipedia.org/wiki/Low_Bandwidth_X |
In a low-IF receiver , the radio frequency (RF) signal is mixed down to a non-zero low or moderate intermediate frequency (IF). Typical frequency values are a few megahertz (instead of 33–40 MHz) for TV, and even lower frequencies, typically 120–130 kHz (instead of 10.7–10.8 MHz or 13.45 MHz) in the case of FM radio receivers or 455–470 kHz for AM radio (MW/LW/SW) receivers. Low-IF receiver topologies have many of the desirable properties of zero-IF architectures , but avoid the DC offset and 1/f noise problems.
The use of a non-zero IF re-introduces the image issue. However, when the image and neighbouring-channel rejection requirements are relatively relaxed, they can be satisfied by carefully designed low-IF receivers. The image signal and unwanted blockers can be rejected by quadrature down-conversion (complex mixing) and subsequent filtering.
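The principle can be demonstrated numerically. In the NumPy sketch below, a wanted channel and its image are down-converted with a complex local oscillator; the frequencies and the ideal FFT-mask filter are illustrative assumptions, standing in for the complex band-pass (polyphase) filters used in real receivers.

```python
import numpy as np

fs = 1.0e6                # sample rate, Hz (illustrative values throughout)
f_lo, f_if = 200e3, 20e3  # local oscillator and low IF
t = np.arange(1000) / fs

wanted = np.cos(2 * np.pi * (f_lo + f_if) * t)  # desired channel
image = np.cos(2 * np.pi * (f_lo - f_if) * t)   # image channel
rf = wanted + image

# Quadrature (complex) mixing: the wanted signal lands at +f_if and the
# image at -f_if, so the sign of the frequency keeps them distinguishable,
# unlike with a single real mixer.
baseband = rf * np.exp(-2j * np.pi * f_lo * t)

# Idealized complex filter: keep positive frequencies only.
spectrum = np.fft.fft(baseband)
spectrum[np.fft.fftfreq(t.size, 1 / fs) < 0] = 0.0
filtered = np.fft.ifft(spectrum)

def tone_power(x, f):
    return abs(np.mean(x * np.exp(-2j * np.pi * f * t))) ** 2

print("wanted (+f_if):", round(tone_power(filtered, f_if), 3))   # ~0.25
print("image  (-f_if):", round(tone_power(filtered, -f_if), 3))  # ~0.0
```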
This technique is now widely used in the tiny FM receivers incorporated into MP3 players and mobile phones and is becoming commonplace in both analog and digital TV receiver designs. Using advanced analog and digital signal processing techniques, cheap, high-quality receivers using no resonant circuits at all are now possible. | https://en.wikipedia.org/wiki/Low_IF_receiver
Low Copy Number (LCN) is a DNA profiling technique developed by the UK Forensic Science Service (FSS) which has been in use since 1999. [ 1 ]
In the United Kingdom use of the technique was suspended between 21 December 2007 and 14 January 2008 while the Crown Prosecution Service conducted a review into its use – this suspension has now been lifted. [ 2 ]
LCN is an extension of the Second Generation Multiplex Plus (SGM Plus) profiling technique. It is more sensitive because it involves a greater amount of copying via the polymerase chain reaction (PCR) from a smaller amount of starting material, meaning that a profile can be obtained from just a few cells of skin or sweat left by a fingerprint, a sample that may be as small as a millionth the size of a grain of salt. [ 3 ]
LCN evidence has allowed convictions to be made in several cold cases . For example, Mark Henson was convicted of rape in 2005, 10 years after the crime was committed, from re-analysis of a microscope slide. [ 4 ] In 1981, evidence was deliberately kept after the rape and murder of 14-year-old Marion Crofts in Aldershot . In 1999, a DNA profile was obtained from this using LCN. This was continually checked against the UK National DNA Database for the next two years, until a match was eventually found for Tony Jasinskyj after he was arrested for another crime. He was eventually given a life sentence in 2002. [ 5 ]
So far the technique is used in only a few countries: the UK, the Netherlands, Poland and New Zealand. [ 6 ]
It has been used in more than 21,000 serious crime cases in the UK and internationally, particularly in "cold" cases. An FSS spokesman said: "LCN DNA analysis is only carried out by the most-experienced DNA scientists, who have undergone special additional training and testing in this area of casework." [ 7 ] However, the technique came under attack from the judge during the trial of Sean Hoey, who was eventually cleared of involvement in the Omagh bombing. One of the criticisms the judge leveled at LCN was that although the FSS had internally validated the technique and published scientific papers on it, there was an alleged lack of external validation by the wider scientific community. [ 8 ] Following the judge's ruling, use of the technique was suspended in the UK pending a review by the Crown Prosecution Service. This review was completed and the suspension lifted on 14 January 2008, with the CPS stating that it had "not seen anything to suggest that any current problems exist with LCN". [ 2 ] | https://en.wikipedia.org/wiki/Low_copy_number
Low copy repeats ( LCRs ), also known as segmental duplications ( SDs ), or duplicons , are DNA sequences present in multiple locations within a genome that share high levels of sequence identity.
The repeats, or duplications, are typically 10–300 kb in length, and bear greater than 95% sequence identity . Though rare in most mammals, LCRs comprise a large portion of the human genome owing to a significant expansion during primate evolution . [ 1 ] In humans, chromosomes Y and 22 have the greatest proportion of SDs: 50.4% and 11.9% respectively. [ 2 ] SRGAP2 is an SD.
Misalignment of LCRs during non-allelic homologous recombination (NAHR) [ 3 ] is an important mechanism underlying the chromosomal microdeletion disorders as well as their reciprocal duplication partners. [ 4 ] Many LCRs are concentrated in "hotspots", such as the 17p11-12 region, 27% of which is composed of LCR sequence. NAHR and non-homologous end joining (NHEJ) within this region are responsible for a wide range of disorders, including Charcot–Marie–Tooth syndrome type 1A , [ 5 ] hereditary neuropathy with liability to pressure palsies , [ 5 ] Smith–Magenis syndrome , [ 6 ] and Potocki–Lupski syndrome . [ 3 ]
The two widely accepted methods for SD detection [ 7 ] are whole-genome assembly comparison (WGAC) and whole-genome shotgun sequence detection (WSSD). | https://en.wikipedia.org/wiki/Low_copy_repeats
Colossal magnetoresistance (CMR) is a property of many perovskite oxides. However, the requirement of a large external magnetic field hinders potential applications. Research has therefore pursued two directions: understanding the physical mechanisms that give rise to CMR, and finding alternative ways to improve the CMR effect. Large magnetoresistance at relatively low magnetic fields has been reported in doped LaMnO3 polycrystalline samples, rather than in single crystals. Spin-polarized tunneling and spin-dependent scattering across large-angle grain boundaries are responsible for this low field magnetoresistance (LFMR). [ 1 ]
In order to obtain LFMR in epitaxial thin films (single-crystal-like materials), epitaxial strain has been used. Wang and Li reported an enhancement of the magnetoresistance in 5- to 15-nm-thick Pr0.67Sr0.33MnO3 films using out-of-plane tensile strain. [ 2 ] In a conventional strain-engineering framework, epitaxial strain is only effective below the critical thickness, which is usually less than a few tens of nanometers. Tuning electron transport by epitaxial strain has therefore only been achieved in ultrathin layers, because epitaxial strain relaxes in relatively thick films. [ 3 ]
Vertically aligned heteroepitaxial nanoscaffolding films have been proposed to generate strain in thick films. A vertical lattice strain as large as 2% has been achieved in La0.7Sr0.3MnO3:MgO vertical nanocomposites. The magnetoresistance, magnetic anisotropy, and magnetization can be tuned by the vertical strain in films a few hundred nanometers thick. [ 4 ] | https://en.wikipedia.org/wiki/Low_field_magnetoresistance
Low field NMR spans a range of different nuclear magnetic resonance (NMR) modalities, from NMR conducted in permanent magnets supporting magnetic fields of a few tesla (T) all the way down to zero field NMR, where the Earth's field is carefully shielded so that magnetic fields of nanotesla (nT) are achieved and nuclear spin precession is close to zero. In a broad sense, low-field NMR is the branch of NMR that is not conducted in superconducting high-field magnets. Low field NMR also includes Earth's field NMR, where simply the Earth's magnetic field is exploited to cause the nuclear spin precession that is detected. At magnetic fields on the order of μT and below, magnetometers such as SQUIDs or atomic magnetometers (among others) are used as detectors. "Normal" high-field NMR relies on inductive detection of spin precession with a simple coil.
However, this detection modality becomes less sensitive as the magnetic field and the associated frequencies decrease. Hence the push toward alternative detection methods at very low fields. | https://en.wikipedia.org/wiki/Low_field_nuclear_magnetic_resonance
Low plasticity burnishing (LPB) cold-compresses metal to provide deep, stable surface residual stresses that improve damage tolerance and extend metal fatigue life, mitigating surface damage including fretting, corrosion pitting, stress corrosion cracking (SCC), and foreign object damage (FOD). Improved fretting fatigue and stress corrosion performance has been documented even at elevated temperatures, where the compression from other metal improvement processes, such as low stress grinding (LSG), relaxes. The resulting deep layer of compressive residual stress has also been shown to improve high cycle fatigue (HCF), low cycle fatigue (LCF), and stress corrosion cracking (SCC) performance. [ 1 ]
LPB is the only known metal improvement method applied under continuous closed-loop process control. It has been successfully applied to turbine engines, piston engines, propellers, aging aircraft structures, landing gear, nuclear waste containers, biomedical implants, armaments, fitness equipment and welded joints. Typical applications involve titanium-, iron-, nickel- and steel-based components, which have shown damage tolerance as well as HCF and LCF performance improved by an order of magnitude over existing metal improvement processes.
Low plasticity burnishing was invented and patented by Lambda Research, Inc., part of the Lambda Technologies Group, in Cincinnati, Ohio in 1996. The first patent on the process was issued in 1998. [ 2 ] LPB® was later trademarked by Surface Technology Holdings, also part of the Lambda Technologies Group. [ 3 ] As of 2025, Lambda Technologies Group is the only provider of LPB® in the world.
The basic LPB tool is a ball, wheel or other similar tip supported in a spherical hydrostatic bearing held in a CNC machine or industrial robot, depending on the application. Continuous coolant flow pressurizes the LPB tool bearing to support the ball, so the ball does not contact the mechanical bearing seat, even under load. The ball is loaded normal to the surface of the component by a hydraulic cylinder in the body of the tool. LPB can be performed in conjunction with chip-forming machining operations in the same CNC machine tool.
The ball rolls across the surface of a component in a pattern defined in the CNC code, as in any machining operation. The tool path and normal pressure applied are designed to create a distribution of compressive residual stress. The form of the distribution is designed to counter applied stresses and optimize fatigue and stress corrosion performance. Since there is no shear being applied to the ball, it is free to roll in any direction. As the ball rolls over the component, the pressure from the ball causes plastic deformation to occur in the surface of the material under the ball. Since the bulk of the material constrains the deformed area, the deformed zone is left in compression after the ball passes.
The cold work produced by this process is typically minimal, similar to that produced by laser peening but a great deal less than shot peening, gravity peening or deep rolling. The degree of cold work matters because the more highly cold-worked the surface of a component is, the more vulnerable it becomes to elevated temperatures and mechanical overload, and the more readily the beneficial surface residual compression relaxes, rendering the treatment pointless. In other words, a highly cold-worked component will not hold compression if it comes into contact with extreme heat, as in an engine, and will then be just as vulnerable to damage as an untreated one. LPB and laser peening therefore stand out in the surface enhancement industry because both produce compression that is thermally stable at high temperatures. LPB produces such low percentages of cold work because of the aforementioned closed-loop process control. Conventional shot peening involves some guesswork about complete component coverage and is far from exact, so the procedure may be performed multiple times on one component to ensure adequate treatment. For example, shot peening specifications typically call for coverage of between 200% (2T) and 400% (4T) to make sure every spot on the component is treated; at 200% coverage (2T), 5 or more impacts occur at 84% of locations, and at 400% coverage (4T) significantly more. One area may be hit several times while the area next to it is hit fewer times, leaving uneven compression at the surface and a treatment that is unstable and easily undone, as mentioned above. LPB requires only one pass with the tool and leaves a deep, even, stable compressive stress.
The LPB process can be performed on-site in the shop or in situ using robots, making it easy to incorporate into everyday maintenance and manufacturing procedures. The method is applied under continuous closed-loop process control (CLPC), achieving accuracy within 0.1% and alerting the operator and QA immediately if the processing bounds are exceeded. One issue with the process is that different CNC processing codes need to be developed for each application, as with other machining tasks. Another potential issue is that dimensional restrictions may make it impossible to create the tools necessary for certain geometries, although this has yet to be a problem. | https://en.wikipedia.org/wiki/Low_plasticity_burnishing
Low technology ( low tech ; adjective forms: low-technology , low-tech , lo-tech ) is simple technology , as opposed to high technology . [ 1 ] In addition, low tech is related to the concept of mid-tech, that is a balance between low-tech and high-tech, which combines the efficiency and versatility of high tech with low tech's potential for autonomy and resilience. [ 2 ]
Primitive technologies such as bushcraft , tools that use wood , stone , wool , etc. can be seen as low-tech , as the pre– Industrial Revolution machines such as windmills or sailboats . [ 3 ]
The economic boom after the Vietnam War resulted in doubts about progress, technology and growth at the beginning of the 1970s, notably through the report The Limits to Growth (1972). Many have sought to define what soft technologies are, leading to a "low-tech movement". Such technologies have been described as "intermediate" (E. F. Schumacher), [ 4 ] "liberating" (M. Bookchin), [ 5 ] or even democratic. A philosophy advocating the widespread use of soft technologies was thus developed in the United States, and many studies were carried out in those years, in particular by researchers like Langdon Winner. [ 6 ]
"Low-tech" has been more and more employed in the scientific writings, in particular in the analyzes of the work from some authors of the 1970s: see for example Hirsch ‐ Kreinsen, [ 7 ] the book "High tech, low tech, no tech" [ 3 ] or Gordon. [ 8 ]
More recently, the prospect of resource scarcity [ 9 ] – especially of minerals – has led to increasingly severe criticism of high tech and of technology in general.
In 2014, the French engineer Philippe Bihouix published "L'âge des low tech" (The age of low techs), in which he presents how a European nation like France, with few mineral and energy resources, could become a "low-tech" nation (instead of a "start-up" nation) to better meet its sustainable development goals. [ 10 ] He cites various examples of low-tech initiatives and describes the low-tech philosophy and principles.
Numerous new definitions have come to supplement or qualify the term "low-tech", intended to be more precise because they are restricted to a particular characteristic:
According to the Cambridge International Dictionary of English, the concept of low-tech is simply defined as technology that is not recent or that uses old materials. [ 20 ] Companies considered low-tech have a simple operation: the less sophisticated an object, the more low-tech it is. This definition takes no account of ecological or social aspects, as it rests on a simplistic reading of the low-tech philosophy. Low tech is then seen as a "step backwards", not as possible innovation.
Also, with this definition, the "high-tech" of one era (e.g. the telegraph) becomes the "low-tech" of the next (e.g. compared to the telephone).
Low-tech is sometimes described as an "anti high-tech" movement, a deliberate renunciation of complicated and expensive technology. This kind of protest movement criticizes any disproportionate technology; a comparison with the neo-Luddite or technocritical movements that have appeared since the Industrial Revolution is then possible. This critical wing of the low-tech movement can be called "no-tech". [ citation needed ]
A second, more nuanced definition of low-tech takes into account the philosophical, environmental and social aspects. Low tech is no longer restricted to old techniques, but extends to new, future-oriented techniques that are more ecological and intended to recreate social bonds. Low-tech innovation is then possible. [ 10 ]
Contrary to the first definition, this one is much more optimistic and has a positive connotation. It opposes the planned obsolescence of (often "high-tech") objects and questions the consumer society, as well as the materialist principles underneath. Under this definition, the concept of low-tech implies that anyone could make objects using their intelligence and share their know-how to popularize their creations. A low tech must therefore be accessible to all, and could thus help reduce inequalities. [ 10 ]
Furthermore, some restrict the definition of low-tech to technologies meeting basic needs (eating, drinking, housing, heating...), which disqualifies many technologies, but this restriction is not always accepted. [ 12 ] Finally, considering that the definition of low-tech is relative, some prefer to speak of lower tech, [ 10 ] to emphasize greater sobriety compared with high-tech, without claiming to be perfectly "low".
(Wright is the agent form of the verb work; wrought is its original past participle. Both have been superseded by the weak forms worker and worked respectively.)
Note: home canning is a counter-example of a low technology, since some of the supplies needed to pursue this skill rely on a global trade network and an existing manufacturing infrastructure. [ citation needed ]
Thinkers opposed to modern technologies include Jacques Ellul (The Technological Society, 1954; The Technological Bluff, 1988), Lewis Mumford and E. F. Schumacher. In the second volume of his book The Myth of the Machine (1970), Mumford develops the notion of "biotechnology" to designate "bioviable" techniques that would be considered ecologically responsible, i.e. those establishing a homeostatic relationship between resources and needs. In his famous Small Is Beautiful (1973), Schumacher uses the concept of "intermediate technology", [ 4 ] which corresponds fairly precisely to what "low tech" means. He also created the Intermediate Technology Development Group.
By federal law in the United States, only those articles produced with little or no use of machinery or tools with complex mechanisms may be stamped with the designation "hand-wrought" or "hand-made". Lengthy court battles are underway over the precise definition of the terms "organic" and "natural" as applied to foodstuffs. [ citation needed ] | https://en.wikipedia.org/wiki/Low_technology
In electrical engineering, low voltage is a relative term, the definition varying by context. Different definitions are used in electric power transmission and distribution, compared with electronics design. Electrical safety codes define "low voltage" circuits that are exempt from the protection required at higher voltages. These definitions vary by country and specific codes or regulations.
The International Electrotechnical Commission (IEC) standard IEC 61140:2016 defines low voltage as 0 to 1000 V AC RMS or 0 to 1500 V DC. [ 1 ] Other standards, such as IEC 60038 (IEC Standard Voltages), which defines power distribution system voltages around the world, define supply system low voltage as being in the range 50 to 1000 V AC or 120 to 1500 V DC. [ 2 ]
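As a sketch of how these bands might be expressed programmatically (the boundary handling and the choice of the IEC 60038 ranges are simplifying assumptions for illustration, not normative):

```python
def iec_supply_low_voltage(volts: float, ac: bool = True) -> bool:
    """Rough check against the IEC 60038 supply-system bands quoted above:
    50-1000 V AC or 120-1500 V DC.

    Illustrative only: boundary handling, ripple, and earthing
    arrangements are simplified away here.
    """
    lower, upper = (50.0, 1000.0) if ac else (120.0, 1500.0)
    return lower < volts <= upper

print(iec_supply_low_voltage(230.0))           # 230 V AC mains -> True
print(iec_supply_low_voltage(48.0, ac=False))  # 48 V DC -> False (extra-low)
```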
In electrical power systems low voltage most commonly refers to the mains voltages as used by domestic and light industrial and commercial consumers. "Low voltage" in this context still presents a risk of electric shock , but only a minor risk of electric arcs through the air.
British Standard BS 7671 , Requirements for Electrical Installations. IET Wiring Regulations , defines supply system low voltage as:
exceeding 50 V AC or 120 V ripple-free DC, but not exceeding 1000 V AC or 1500 V DC between conductors, or 600 V AC or 900 V DC between conductors and earth. [ 3 ]
The ripple-free requirement applies only to the 120 V DC threshold, not to DC voltages above it. For example, a DC supply that exceeds 1500 V during voltage fluctuations is not categorized as low voltage.
In electrical power distribution , the US National Electrical Code (NEC), NFPA 70, article 725 (2005), defines low distribution system voltage (LDSV) as up to 49 V.
The NFPA standard 79 article 6.4.1.1 [ 4 ] defines distribution protected extra-low voltage (PELV) as nominal voltage of 30 Vrms or 60 V DC ripple-free for dry locations, and 6 Vrms or 15 V DC in all other cases.
Standard NFPA 70E, Article 130, 2021 Edition, [ 5 ] omits energized electrical conductors and circuit parts operating at less than 50 V from its safety requirements of work involving electrical hazards when an electrically safe work condition cannot be established.
UL standard 508A, article 43 (table 43.1) defines 0 to 20 V peak / 5 A or 20.1 to 42.4 V peak / 100 VA as low-voltage limited energy (LVLE) circuits. | https://en.wikipedia.org/wiki/Low_voltage |
In proof compression, LowerUnits (LU) is an algorithm used to compress propositional logic resolution proofs. The main idea of LowerUnits is to exploit the following fact: [ 1 ] if a unit clause is used as a premise of more than one resolution step, the proof can be rewritten so that the unit clause is resolved only once, at the bottom of the proof.
The algorithm targets exactly the class of global redundancy stemming from multiple resolutions with unit clauses. The algorithm takes its name from the fact that, when this rewriting is done and the resulting proof is displayed as a DAG (directed acyclic graph), the unit node η appears lower (i.e., closer to the root) than it used to appear in the original proof.
A naive implementation exploiting this fact would require the proof to be traversed and fixed after each unit node is lowered. It is possible, however, to do better by first collecting and removing all the unit nodes in a single traversal, and afterwards fixing the whole proof in a second traversal. Finally, the collected and fixed unit nodes have to be reinserted at the bottom of the proof.
Care must be taken with cases where a unit node η′ occurs in the subproof that derives another unit node η. In such cases, η depends on η′. Let ℓ be the single literal of the unit clause of η′. Then any occurrence of ℓ̄ in the subproof above η will no longer be cancelled by resolution inferences with η′. Consequently, ℓ̄ will be propagated downwards when the proof is fixed and will appear in the clause of η. Difficulties with such dependencies can easily be avoided by reinserting the upper unit node η′ after the unit node η (i.e. after reinsertion, η′ must appear below η, to cancel the extra literal ℓ̄ from η's clause). This can be ensured by collecting the unit nodes in a queue during a bottom-up traversal of the proof and reinserting them in the order they were queued.
The algorithm for fixing a proof containing many roots performs a top-down traversal of the proof, recomputing the resolvents and replacing broken nodes (e.g. nodes having deletedNodeMarker as one of their parents) by their surviving parents (e.g. the other parent, in case one parent was deletedNodeMarker).
When unit nodes are collected and removed from a proof of a clause κ and the proof is fixed, the clause κ′ in the root node of the new proof is not equal to κ anymore, but contains (some of) the duals of the literals of the unit clauses that have been removed from the proof. The reinsertion of unit nodes at the bottom of the proof resolves κ′ with the clauses of (some of) the collected unit nodes, in order to obtain a proof of κ again.
The general structure of the algorithm is thus: during a bottom-up traversal, collect in a queue the unit clauses that are used more than once and remove them from the proof; then fix the broken proof in a single traversal, as in the sketch below.
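The following Python sketch is an illustrative reconstruction of that structure over a simplified resolution-proof DAG; it is not the reference implementation. The Node model, the handling of vanished pivots, and the assumption that at most one premise of each step is a lowered unit are all simplifications.

```python
from collections import deque
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class Node:
    clause: FrozenSet[int]            # literals; -n is the negation of n
    left: Optional["Node"] = None     # premises; None for input clauses
    right: Optional["Node"] = None
    pivot: int = 0                    # literal resolved away (occurs in left)

def resolve(left: Node, right: Node, pivot: int) -> Node:
    assert pivot in left.clause and -pivot in right.clause
    return Node(frozenset((left.clause - {pivot}) | (right.clause - {-pivot})),
                left, right, pivot)

def lower_units(root: Node) -> Node:
    # Count how often each node is used as a premise.
    counts, todo, seen = {}, [root], {id(root)}
    while todo:
        n = todo.pop()
        for p in (n.left, n.right):
            if p is not None:
                counts[id(p)] = counts.get(id(p), 0) + 1
                if id(p) not in seen:
                    seen.add(id(p)); todo.append(p)
    # 1. Bottom-up (from the root): queue unit nodes used more than once.
    units, deleted = deque(), set()
    frontier, seen = deque([root]), {id(root)}
    while frontier:
        n = frontier.popleft()
        if len(n.clause) == 1 and counts.get(id(n), 0) > 1:
            units.append(n); deleted.add(id(n))
        for p in (n.left, n.right):
            if p is not None and id(p) not in seen:
                seen.add(id(p)); frontier.append(p)
    # 2. Fix the proof in a single traversal, skipping deleted premises.
    memo = {}
    def fix(n: Node) -> Node:
        if id(n) in memo:
            return memo[id(n)]
        if n.left is None:                      # input clause
            out = n
        else:
            l, r = fix(n.left), fix(n.right)
            if id(n.left) in deleted:           # premise was lowered away
                out = r
            elif id(n.right) in deleted:
                out = l
            elif n.pivot in l.clause and -n.pivot in r.clause:
                out = resolve(l, r, n.pivot)    # recompute the resolvent
            elif n.pivot not in l.clause:       # pivot vanished: skip step
                out = l
            else:
                out = r
        memo[id(n)] = out
        return out
    fixed = fix(root)
    # 3. Reinsert the queued units at the bottom, in the order queued.
    for u in units:
        lit = next(iter(u.clause))
        if -lit in fixed.clause:
            fixed = resolve(u, fixed, lit)
    return fixed
```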
Finally, the queued units are reinserted at the bottom of the fixed proof in the order they were collected, which guarantees that a unit depending on another is resolved before it. | https://en.wikipedia.org/wiki/LowerUnits
In proof compression, an area of mathematical logic, LowerUnivalents is an algorithm used for the compression of propositional resolution proofs. LowerUnivalents is a generalisation of the LowerUnits algorithm: it is able to lower not only units but also subproofs of non-unit clauses, provided that they satisfy some additional conditions. [ 1 ] | https://en.wikipedia.org/wiki/LowerUnivalents
Lower Seyhan Irrigation Project (Turkish: Aşağı Seyhan Sulama Projesi) is one of the major irrigation projects of Turkey, located in the Seyhan River basin.
Seyhan is a 660 km (410 mi) long river in southern Turkey that flows into the Mediterranean Sea. The upper reaches of the river are in the Taurus Mountains. It flows through the city of Adana; Seyhan Dam is located to the north of the city, and the irrigation project is situated to the south.
The irrigation project comprises four phases. During the first phase, between 1957 and 1968, 65,000 ha (160,000 acres) of land was irrigated and 2,200 ha (5,400 acres) was protected against floods. In the second phase, between 1968 and 1974, land covering 48,600 ha (120,000 acres) was irrigated. The third phase took place between 1974 and 1985 and dealt mainly with the Tarsus area to the west; in this phase, 19,831 ha (49,000 acres) was irrigated and 2,000 ha (4,900 acres) was protected against floods. The fourth phase, which is still under construction, deals with the coastal area. The total area of the irrigation project will stretch over 173,638 ha (429,070 acres). [ 1 ]
Turkish Chamber of Civil Engineers lists the first phase of this Project as one of the fifty civil engineering feats in Turkey , a list of remarkable engineering projects realized in the first 50 years of the chamber. [ 2 ] | https://en.wikipedia.org/wiki/Lower_Seyhan_Irrigation_Project |
The lower critical solution temperature ( LCST ) or lower consolute temperature is the critical temperature below which the components of a mixture are miscible in all proportions. [ 1 ] [ 2 ] The word lower indicates that the LCST is a lower bound to a temperature interval of partial miscibility, or miscibility for certain compositions only.
The phase behavior of polymer solutions is an important property involved in the development and design of most polymer-related processes. Partially miscible polymer solutions often exhibit two solubility boundaries, the upper critical solution temperature (UCST) and the LCST, both of which depend on the molar mass and the pressure. At temperatures below LCST, the system is completely miscible in all proportions, whereas above LCST partial liquid miscibility occurs. [ 3 ] [ 4 ]
In the phase diagram of the mixture components, the LCST is the shared minimum of the concave up spinodal and binodal (or coexistence) curves. It is in general pressure dependent, increasing as a function of increased pressure.
For small molecules, the existence of an LCST is much less common than the existence of an upper critical solution temperature (UCST), but some cases do exist. For example, the system triethylamine -water has an LCST of 19 °C, so that these two substances are miscible in all proportions below 19 °C but not at higher temperatures. [ 1 ] [ 2 ] The nicotine -water system has an LCST of 61 °C, and also a UCST of 210 °C at pressures high enough for liquid water to exist at that temperature. The components are therefore miscible in all proportions below 61 °C and above 210 °C (at high pressure), and partially miscible in the interval from 61 to 210 °C. [ 1 ] [ 2 ]
Some polymer solutions have an LCST at temperatures higher than the UCST. As shown in the diagram, this means that there is a temperature interval of complete miscibility, with partial miscibility at both higher and lower temperatures. [ 5 ]
In the case of polymer solutions, the LCST also depends on the polymer's degree of polymerization, polydispersity and branching, [ 6 ] as well as on its composition and architecture. [ 7 ] One of the most studied polymers whose aqueous solutions exhibit an LCST is poly(N-isopropylacrylamide). Although it is widely believed that this phase transition occurs at 32 °C (90 °F), [ 8 ] the actual temperature may differ by 5 to 10 °C (or even more) depending on the polymer concentration, [ 8 ] the molar mass of the polymer chains, the polymer dispersity, and the terminal moieties. [ 8 ] [ 9 ] Furthermore, other molecules in the polymer solution, such as salts or proteins, can alter the cloud point temperature. [ 10 ] [ 11 ] Another monomer whose homo- and co-polymers exhibit LCST behavior in solution is 2-(dimethylamino)ethyl methacrylate. [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ]
The LCST depends on the polymer preparation and in the case of copolymers, the monomer ratios, as well as the hydrophobic or hydrophilic nature of the polymer.
To date, over 70 examples of non-ionic polymers with an LCST in aqueous solution have been found. [ 17 ]
A key physical factor which distinguishes the LCST from other mixture behavior is that the LCST phase separation is driven by unfavorable entropy of mixing . [ 18 ] Since mixing of the two phases is spontaneous below the LCST and not above, the Gibbs free energy change (ΔG) for the mixing of these two phases is negative below the LCST and positive above, and the entropy change ΔS = – (dΔG/dT) is negative for this mixing process. This is in contrast to the more common and intuitive case in which entropies drive mixing due to the increased volume accessible to each component upon mixing.
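The sign argument can be made explicit with the standard free-energy decomposition, treating ΔH and ΔS as approximately temperature-independent over the interval of interest:

$$\Delta G_{mix} = \Delta H_{mix} - T\,\Delta S_{mix}$$

With both $\Delta H_{mix} < 0$ and $\Delta S_{mix} < 0$, mixing is spontaneous ($\Delta G_{mix} < 0$) for $T < \Delta H_{mix}/\Delta S_{mix}$, and demixing sets in above this crossover temperature, which plays the role of the LCST.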
In general, the unfavorable entropy of mixing responsible for the LCST has one of two physical origins. The first is associating interactions between the two components such as strong polar interactions or hydrogen bonds , which prevent random mixing. For example, in the triethylamine-water system, the amine molecules cannot form hydrogen bonds with each other but only with water molecules, so in solution they remain associated to water molecules with loss of entropy. The mixing which occurs below 19 °C is not due to entropy but due to the enthalpy of formation of the hydrogen bonds. Sufficiently strong, geometrically-informed, associative interactions between solute and solvent(s) have been shown to be sufficient to lead to an LCST. [ 19 ]
The second physical factor which can lead to an LCST is compressibility effects, especially in polymer-solvent systems. [ 18 ] For nonpolar systems such as polystyrene in cyclohexane , phase separation has been observed in sealed tubes (at high pressure) at temperatures approaching the liquid-vapor critical point of the solvent. At such temperatures the solvent expands much more rapidly than the polymer, whose segments are covalently linked. Mixing therefore requires contraction of the solvent for compatibility of the polymer, resulting in a loss of entropy. [ 5 ]
Within statistical mechanics , the LCST may be modeled theoretically via the lattice fluid model, an extension of Flory–Huggins solution theory , that incorporates vacancies, and thus accounts for variable density and compressibility effects. [ 18 ]
Newer extensions of the Flory-Huggins solution theory have shown that the inclusion of only geometrically-informed, associative interactions between solute and solvent are sufficient to observe the LCST. [ 19 ]
There are three groups of methods for correlating and predicting LCSTs. The first group proposes models based on a solid theoretical background, using liquid–liquid or vapor–liquid experimental data. These methods require experimental data to adjust the unknown parameters, resulting in limited predictive ability. [ 20 ] Another approach uses empirical equations that correlate θ (LCST) with physicochemical properties such as density, critical properties, etc., but suffers from the disadvantage that these properties are not always available. [ 21 ] [ 22 ] A newer approach, proposed by Liu and Zhong, develops linear models for the prediction of θ (LCST) using molecular connectivity indices, which depend only on the solvent and polymer structures. [ 23 ] [ 24 ] The latter approach has proven to be a very useful technique in quantitative structure–activity/property relationship (QSAR/QSPR) research for polymers and polymer solutions. QSAR / QSPR studies attempt to reduce the trial-and-error element in the design of compounds with a desired activity or property by establishing mathematical relationships between the activity or property of interest and measurable or computable parameters, such as topological, physicochemical, stereochemical, or electronic indices. More recently, QSPR models for the prediction of θ (LCST) using molecular (electronic, physicochemical, etc.) descriptors have been published. [ 25 ] Using validated, robust QSPR models, experimental time and effort can be reduced significantly, as reliable estimates of θ (LCST) for polymer solutions can be obtained before they are actually synthesized in the laboratory.
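As an illustration of the linear-correlation idea, a minimal sketch follows; the descriptor values and θ(LCST) temperatures are hypothetical placeholders, not data from the cited studies.

```python
import numpy as np

# QSPR-style linear correlation for theta(LCST): fit a linear model
# theta = w . x + b from per-system molecular descriptors.
# All numbers below are hypothetical, for illustration only.
X = np.array([
    [2.41, 1.73, 0.52],   # e.g. connectivity indices for system 1
    [3.05, 2.10, 0.61],
    [1.98, 1.40, 0.47],
    [2.77, 1.95, 0.58],
])
theta = np.array([305.0, 321.0, 298.0, 314.0])    # hypothetical theta(LCST), K

A = np.hstack([X, np.ones((len(X), 1))])          # append intercept column
coef, *_ = np.linalg.lstsq(A, theta, rcond=None)  # ordinary least squares
w, b = coef[:-1], coef[-1]

x_new = np.array([2.60, 1.85, 0.55])              # a new hypothetical system
print("predicted theta(LCST): %.1f K" % (w @ x_new + b))
```

| https://en.wikipedia.org/wiki/Lower_critical_solution_temperature |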
The lower flammability limit ( LFL ), [ 1 ] usually expressed in volume per cent, is the lower end of the concentration range over which a flammable mixture of gas or vapour in air can be ignited at a given temperature and pressure. The flammability range is delineated by the upper and lower flammability limits. Outside this range of air/vapor mixtures, the mixture cannot be ignited at that temperature and pressure. The LFL decreases with increasing temperature; thus, a mixture that is below its LFL at a given temperature may be ignitable if heated sufficiently.
For liquids, the LFL is typically close to the saturated vapor concentration at the flash point ; however, because of differences in liquid properties, the relationship of the LFL to the flash point (which also depends on the test apparatus) is not fixed, and some spread in the data usually exists.
The lower flammability limit of a mixture, $\mathrm{LFL}_{mix}$, can be evaluated using the Le Chatelier mixing rule if the limits $\mathrm{LFL}_i$ of the components $i$ are known: [ 2 ]

$$\mathrm{LFL}_{mix} = \frac{1}{\sum_i \frac{x_i}{\mathrm{LFL}_i}}$$

where $\mathrm{LFL}_{mix}$ is the lower flammability limit of the mixture, $\mathrm{LFL}_i$ is the lower flammability limit of the $i$-th component of the mixture, and $x_i$ is the molar fraction of the $i$-th component of the mixture.
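A short worked example of the mixing rule is sketched below; the LFL values are representative literature figures for methane and propane, and any safety-related use should rely on authoritative data.

```python
# Le Chatelier mixing rule for the lower flammability limit of a fuel mix.
def lfl_mix(fractions, lfls):
    """fractions: molar fractions of the fuel components (summing to 1);
    lfls: lower flammability limits of the pure components, in vol %."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(x / lfl for x, lfl in zip(fractions, lfls))

# Example: 60 % methane (LFL ~ 5.0 vol %) + 40 % propane (LFL ~ 2.1 vol %)
print("%.2f vol %%" % lfl_mix([0.6, 0.4], [5.0, 2.1]))   # about 3.22 vol %
```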
| https://en.wikipedia.org/wiki/Lower_flammability_limit |
The IUCN has many ranks that define an animal's population and risk of extinction. [ 1 ] Species are classified into one of nine Red List Categories: Extinct , Extinct in the Wild , Critically Endangered , Endangered , Vulnerable , Near Threatened , Least Concern , Data Deficient , and Not Evaluated . [ 2 ] They formerly used an identification called lower risk to describe some animals.
The IUCN defined an animal with the conservation status of lower risk as one with population levels high enough to ensure its survival. [ 3 ] Animals with this status did not qualify as threatened or extinct; however, natural disasters or certain human activities could cause them to move into either of those classifications. [ 4 ]
When it was in use, this classification was sub-divided into three types: conservation dependent (LR/cd), near threatened (LR/nt), and least concern (LR/lc).
| https://en.wikipedia.org/wiki/Lower_risk |
The lowest-observed-adverse-effect level ( LOAEL ), or the lowest-observed-adverse-effect concentration ( LOAEC ), is the lowest concentration or amount of a substance found by experiment or observation that causes an adverse alteration of morphology , function, capacity, growth, development, or lifespan of a target organism distinguished from normal organisms of the same species under defined conditions of exposure. [ 1 ] Federal agencies use the LOAEL during risk assessment to set approval standards below this level. [ 2 ]
The United States Environmental Protection Agency defines the LOAEL as the "lowest level of a chemical stressor evaluated in a toxicity test that shows harmful effects on a plant or animal". While LOAELs and LOAECs are similar, they are not interchangeable: a LOAEL refers to a dose of chemical that is ingested, while a LOAEC refers to direct exposure to a chemical (e.g., through the gills or the skin). [ 3 ]
| https://en.wikipedia.org/wiki/Lowest-observed-adverse-effect_level |
In toxicology , the lowest published toxic dose ( Toxic Dose Low , TD Lo ) is the lowest dosage per unit of bodyweight (typically stated in milligrams per kilogram ) of a substance known to have produced signs of toxicity in a particular animal species . [ 1 ] When quoting a TD Lo , the particular species and method of administration ( e.g. ingested, inhaled, intravenous ) are typically stated.
The TD Lo is distinct from the LD 50 (lethal dose), which is the dose causing death in 50% of the individuals who are exposed to or who consume the substance. [ 2 ]
| https://en.wikipedia.org/wiki/Lowest_published_toxic_dose |
The Lowry protein assay is a biochemical assay for determining the total level of protein in a solution . The total protein concentration is exhibited by a color change of the sample solution in proportion to protein concentration, which can then be measured using colorimetric techniques . It is named for the biochemist Oliver H. Lowry who developed the reagent in the 1940s. His 1951 paper describing the technique is the most-highly cited paper ever in the scientific literature, cited over 300,000 times. [ 1 ] [ 2 ] [ 3 ]
The method combines the reactions of copper ions with the peptide bonds under alkaline conditions (the Biuret test ) with the oxidation of aromatic protein residues. The Lowry method is based on the reaction of Cu + , produced by the oxidation of peptide bonds, with Folin–Ciocalteu reagent (a mixture of phosphotungstic acid and phosphomolybdic acid in the Folin–Ciocalteu reaction). The reaction mechanism is not well understood, but involves reduction of the Folin–Ciocalteu reagent and oxidation of aromatic residues (mainly tryptophan , also tyrosine ).
Care must be taken when handling the Folin reagent, which is stable only under acidic conditions, whereas the reduction reaction described above occurs only at a basic pH of about 10. The reduction must therefore take place before the reagent breaks down; mixing the protein solution while the Folin reagent is simultaneously added ensures that the reaction proceeds in the desired manner. [ 4 ]
Experiments have shown that cysteine also reacts with the reagent, so cysteine residues in proteins probably also contribute to the absorbance seen in the Lowry assay. [ 5 ] The product of this reaction is an intensely blue molecule known as heteropolymolybdenum blue. [ 6 ] The concentration of the reduced Folin reagent (heteropolymolybdenum blue) is measured by absorbance at 660 nm. [ 7 ] As a result, the total concentration of protein in the sample can be deduced from the concentration of tryptophan and tyrosine residues that reduce the Folin–Ciocalteu reagent.
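In practice the assay is quantified against a standard curve. A minimal sketch follows, assuming hypothetical absorbance readings at 660 nm for bovine serum albumin standards:

```python
import numpy as np

# Reading a protein concentration off a Lowry standard curve.
# The absorbance values are invented for illustration; a real assay
# uses measured A660 readings of known protein standards.
std_conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])   # ug/mL
std_a660 = np.array([0.00, 0.09, 0.17, 0.33, 0.62])     # hypothetical A660

slope, intercept = np.polyfit(std_conc, std_a660, 1)    # fit A = m*c + b

sample_a660 = 0.28                                      # unknown sample
conc = (sample_a660 - intercept) / slope
print("estimated protein concentration: %.0f ug/mL" % conc)
```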
The method was first proposed by Lowry in 1951. The bicinchoninic acid assay and the Hartree–Lowry assay are subsequent modifications of the original Lowry procedure. | https://en.wikipedia.org/wiki/Lowry_protein_assay |
lp0 on fire (also known as Printer on Fire ) is an outdated error message generated on some Unix and Unix-like computer operating systems in response to certain types of printer errors. lp0 is the Unix device handle for the first line printer , but the error can be displayed for any printer attached to a Unix or Linux system. It indicates a printer error that requires further investigation to diagnose, but not necessarily that it is on fire.
In the late 1950s, high speed computerized printing was still a somewhat experimental field. The first documented fire-starting printer was a Stromberg-Carlson 5000 xerographic printer (similar to a modern laser printer , but with a CRT as the light source instead of a laser), installed around 1959 at the Lawrence Livermore National Laboratory and modified with an extended fusing oven to achieve a print speed of one page per second. In the event of a printing stall, and occasionally during normal operation, the fusing oven would cause the paper to combust. This fire risk was aggravated by the fact that if the printer continued to operate, it would feed a fire with fresh paper at high speed. However, there is no evidence of the "lp0 on fire" message appearing in any software of the time. [ 1 ]
As the technology matured, most large printer installations were drum printers , a type of impact printer which could print an entire line of text at once through the use of a high speed rotary printing drum. It was thought [ by whom? ] that in the event of a severe jam, the friction of paper against the drum could ignite either the paper itself, or, in a dirty machine, the accumulated paper and ink dust in the mechanism. Whether this ever happened is not known; there are no reports of friction-related printer fires.
The line printer employed a series of status codes, specifically ready , online , and check . If the online status was set to "off" and the check status was set to "on," the operating system would interpret this as the printer running out of paper. However, if the online code was set to "on" and the check code was also set to "on", it meant that the printer still had paper, but was suffering an error (and may still be attempting to run). Due to the potentially hazardous conditions which could arise in early line printers , UNIX displayed the message "on fire" to motivate any system operator viewing the message to go and check on the line printer immediately. [ 2 ]
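The decision logic described above can be summarized in a toy sketch; this is illustrative only, not the actual Unix or Linux driver code, which reads hardware status registers rather than booleans.

```python
# Toy illustration of the line-printer status interpretation above.
# The flag names are invented for the example.
def printer_status_message(online: bool, check: bool) -> str:
    if not check:
        return "ready"            # no error condition reported
    if not online:
        return "out of paper"     # check on, online off
    # check on, online on: the printer has paper but reports an error
    return "lp0 on fire"          # go and inspect the printer immediately

print(printer_status_message(online=True, check=True))   # lp0 on fire
```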
In the early 1980s, Xerox created a prototype laser printer engine and provided units to various computer companies. To fuse the toner , the paper path passed a glowing wire. If paper jammed anywhere in the path, the sheet in the fuser caught fire. The prototype UNIX driver reported paper jams as "on fire." Later print engine models used a hot drum in place of the wire.
Michael K. Johnson ("mkj" of Red Hat and Fedora fame) wrote the first Linux version of this error message in 1992. [ 3 ] [ 4 ] However, he, Herbert Rosmanith and Alan Cox (all Linux developers) have acknowledged that the phrase existed in Unix in different forms prior to his Linux printer implementation. [ 5 ] [ 6 ]
Since then, the lp printer code has spread across all sorts of POSIX -compliant operating systems, which often still retain this legacy message.
Modern printer drivers and support have improved and hidden low-level error messages from users, so most Unix/Linux users today have never seen the "on fire" message. The "on fire" message remains in the Linux source code as of version 6.0. [ 7 ]
The message is also present in other software modules, often to humorous effect. For example, in some kernels' CPU code, a CPU thermal failure could result in the message "CPU#0: Possible thermal failure (CPU on fire ?)" [ 8 ] and similar humor can be found in the phrase " halt and catch fire ". | https://en.wikipedia.org/wiki/Lp0_on_fire |
Lutetium(III) oxide , a white solid, is a cubic compound of lutetium sometimes used in the preparation of specialty glasses . It is also called lutecia . It is a lanthanide oxide, also known as a rare earth . [ 2 ] [ 3 ] [ 4 ]
In 1879, Swiss chemist Jean Charles Galissard de Marignac (1817–1894) claimed to have discovered ytterbium, but he had found a mixture of elements. In 1907, French chemist Georges Urbain (1872–1938) reported that ytterbium was not a single element but a mixture of two new elements. Two other chemists, Carl Auer von Welsbach (1858–1929) and Charles James (1880–1926), also extracted lutetium(III) oxide around the same time. All three scientists successfully separated Marignac's ytterbia into oxides of two elements, which were eventually named ytterbium and lutetium. None of these chemists was able to isolate pure lutetium. James' separation was of very high quality, but Urbain and Auer von Welsbach published before him. [ 5 ] [ 6 ]
Lutetium(III) oxide is an important raw material for laser crystals. [ 7 ] It also has specialized uses in ceramics, glass, phosphors, and lasers. Lutetium(III) oxide is used as a catalyst in cracking, alkylation, hydrogenation, and polymerization. [ 2 ] The band gap of lutetium oxide is 5.5 eV. [ 1 ] | https://en.wikipedia.org/wiki/Lu2O3 |
Lutetium vanadate is an inorganic compound with ferromagnetic and semiconducting properties. It has the chemical formula Lu 2 V 2 O 7 [ 1 ] and the same structure as pyrochlore . [ 2 ]
Lutetium vanadate can be obtained by the reaction of lutetium oxide , vanadium trioxide and vanadium pentoxide at a high temperature (1400 °C) in an argon atmosphere with an oxygen pressure of 2.0×10 −5 bar. [ 3 ]
| https://en.wikipedia.org/wiki/Lu2V2O7 |
Lutetium(III) bromide is a crystalline compound made of one lutetium atom and three bromine atoms. [ 2 ] It takes the form of a white powder at room temperature. [ 1 ] It is hygroscopic . [ 2 ] It is odorless . [ 5 ]
Lutetium(III) bromide can be synthesized through the following reaction: [ 6 ]
If burned, lutetium(III) bromide may produce hydrogen bromide and metal oxide fumes. [ 5 ]
Lutetium(III) bromide reacts with strong oxidizing agents . [ 5 ]
An experiment by T. Mioduski showed that the solubility of LuBr 3 in tetrahydrofuran at 21-23 °C was 0.30 g per 100 ml of solution. [ 7 ] | https://en.wikipedia.org/wiki/LuBr3 |
Lutetium(III) chloride or lutetium trichloride is the chemical compound composed of lutetium and chlorine with the formula LuCl 3 . It forms hygroscopic white monoclinic crystals [ 3 ] and also a hygroscopic hexahydrate LuCl 3 ·6H 2 O. [ 6 ] Anhydrous lutetium(III) chloride has the YCl 3 (AlCl 3 ) layer structure with octahedral lutetium ions. [ 7 ]
Lutetium-177, a radioisotope that can be derived from lutetium(III) chloride, is used in targeted cancer therapies. [ 8 ] When lutetium-177 is attached to molecules that specifically target cancer cells, it can deliver localized radiation to destroy those cells while sparing surrounding healthy tissue. [ 9 ] This makes lutetium-177-based treatments especially valuable for cancers that are difficult to treat with traditional methods, such as neuroendocrine tumors and prostate cancer. [ 10 ] Additionally, lutetium(III) chloride is used in scintillators , materials that emit light when exposed to radiation. [ 11 ] These scintillators are crucial in detectors for gamma rays and other high-energy particles, used in both medical diagnostics and in scientific research. [ 12 ]
Pure lutetium metal can be produced from lutetium(III) chloride by heating it together with elemental calcium (a calciothermic reduction): [ 13 ]

2 LuCl 3 + 3 Ca → 2 Lu + 3 CaCl 2

| https://en.wikipedia.org/wiki/LuCl3 |
Lutetium(III) fluoride is an inorganic compound with a chemical formula LuF 3 .
Lutetium(III) fluoride can be produced by reacting lutetium oxide with hydrogen fluoride , or by reacting lutetium chloride with hydrofluoric acid : [ 3 ]

Lu 2 O 3 + 6 HF → 2 LuF 3 + 3 H 2 O

LuCl 3 + 3 HF → LuF 3 + 3 HCl
It can also be produced by reacting lutetium sulfide with hydrofluoric acid : [ 4 ]

Lu 2 S 3 + 6 HF → 2 LuF 3 + 3 H 2 S
Lutetium oxide and nitrogen trifluoride react at 240 °C to produce LuOF. A second step happens below 460 °C to produce LuF 3 . [ 5 ]
| https://en.wikipedia.org/wiki/LuF3 |
Lu Gwei-djen ( Chinese : 魯桂珍 ; pinyin : Lǔ Guìzhēn ; Wade–Giles : Lu Kui-chen ; July 22, 1904 – November 28, 1991) was a Chinese biochemist and historian. She was an expert on the history of science and technology in China and a researcher of nutriology . She was an important researcher and co-author of the project Science and Civilisation in China led by Joseph Needham .
Lu began her distinguished career teaching biochemistry at the Women's Medical College in Shanghai between 1928 and 1930, then moved to teach at the medical school at St. John's University, Shanghai , between 1930 and 1933. She then took up a post as research assistant at the Henry Lester Institute for Medical Research, Shanghai, from 1933 to 1937. [ 3 ]
In 1938, she came to the UK for a year's postgraduate study at the University of Cambridge under Dorothy M. Needham , as a research student at Newnham College . [ 3 ]
In 1939, during World War II, she took up a post as research fellow at the Institute of Experimental Biochemistry, University of California, Berkeley , and at the Harriman Research Lab, San Francisco, from 1939 to 1941. She moved to the Hillman Hospital, Birmingham, Alabama, from 1941 to 1942, and then to the International Cancer Research Foundation, Philadelphia, from 1942 to 1945. [ 3 ]
In 1945, she joined the Needhams in Chongqing as a consultant for nutrition at the Co-operation office and in 1948, moved to Paris to work at UNESCO at the secretariat for natural sciences. [ 4 ]
From 1957 onwards, she was a research fellow of the Wellcome Medical Foundation, working with Dr Joseph Needham in Cambridge on the "Science & Civilisation in China" project. [ 3 ]
She was a Foundation Fellow of Lucy Cavendish College, Cambridge . [ 3 ]
Among the work on which she is credited as co-author are:
The Lu Gwei-Djen Prize for the History of Science awarded by Gonville and Caius College, Cambridge is named in her honour, [ 5 ] as is the Lu Gwei Djen Research Fellowship awarded by Lucy Cavendish College, Cambridge , a position previously held by biophysicist Dr Eileen Nugent. [ 6 ]
The daughter of a pharmacist, [ 4 ] she was well known as Needham's long-time collaborator, co-author, Chinese language teacher and his second wife. [ 7 ] | https://en.wikipedia.org/wiki/Lu_Gwei-Djen_Prize_for_the_History_of_Science |
Lubachevsky-Stillinger (compression) algorithm (LS algorithm, LSA, or LS protocol) is a numerical procedure suggested by F. H. Stillinger and Boris D. Lubachevsky that simulates or imitates a physical process of compressing an assembly of hard particles. [ 1 ] As the LSA may need thousands of arithmetic operations even for a few particles, it is usually carried out on a computer.
A physical process of compression often involves a contracting hard boundary of the container, such as a piston pressing against the particles. The LSA is able to simulate such a scenario. [ 2 ] However, the LSA was originally introduced in the setting without a hard boundary [ 1 ] [ 3 ] where the virtual particles were "swelling" or expanding in a fixed, finite virtual volume with periodic boundary conditions . The absolute sizes of the particles were increasing but particle-to-particle relative sizes remained constant. In general, the LSA can handle an external compression and an internal particle expansion, both occurring simultaneously and possibly, but not necessarily, combined with a hard boundary. In addition, the boundary can be mobile.
In a final, compressed, or "jammed" state, some particles are not jammed; they are able to move within "cages" formed by their immobile, jammed neighbors and the hard boundary, if any. These free-to-move particles are not an artifact, or a pre-designed or target feature of the LSA, but a real phenomenon. The simulation revealed this phenomenon, somewhat unexpectedly for the authors of the LSA. Frank H. Stillinger coined the term "rattlers" for the free-to-move particles, because if one physically shakes a compressed bunch of hard particles, the rattlers will be rattling.
In the "pre-jammed" mode when the density of the configuration is low and when the particles are mobile, the compression and expansion can be stopped, if so desired. Then the LSA, in effect, would be simulating a granular flow . Various dynamics of the instantaneous collisions can be simulated such as: with or without a full restitution, with or without tangential friction. Differences in masses of the particles can be taken into account. It is also easy and sometimes proves useful to "fluidize" a jammed configuration, by decreasing the sizes of all or some of the particles. Another possible extension of the LSA is replacing the hard collision force potential (zero outside the particle, infinity at or inside) with a piece-wise constant force potential . The LSA thus modified would approximately simulate molecular dynamics with continuous
short range particle-particle force interaction. External force fields , such as gravitation , can be also introduced, as long as the inter-collision motion of each particle can be represented by a simple one-step calculation.
Using the LSA for spherical particles of different sizes, and/or for jamming in a container of non-commensurate size, has proved to be a useful technique for generating and studying microstructures formed under conditions of a crystallographic defect [ 4 ] or geometrical frustration. [ 5 ] [ 6 ] It should be added that the original LS protocol was designed primarily for spheres of the same or different sizes. [ 7 ]
Any deviation from the spherical (or, in two dimensions, circular) shape, even the simplest one in which spheres are replaced with ellipsoids (or ellipses in two dimensions), [ 8 ] causes the thus-modified LSA to slow down substantially.
But as long as the shape is spherical, the LSA is able to handle particle assemblies of tens to hundreds of thousands on today's (2011) standard personal computers . Only very limited experience has been reported [ 9 ] in using the LSA in dimensions higher than 3.
The state of particle jamming is achieved via simulating a granular flow . The flow is rendered as a discrete event simulation , the events being particle-particle or particle-boundary collisions. Ideally, the calculations would be performed with infinite precision; the jamming process would then continue ad infinitum . In practice, the precision is finite, as is the available resolution for representing real numbers in computer memory , for example, double-precision resolution. The real calculations are stopped when the inter-collision runs of the non-rattler particles become smaller than an explicitly or implicitly specified small threshold. For example, it is useless to continue the calculations once the inter-collision runs are smaller than the roundoff error.
The LSA is efficient in the sense that the events are processed essentially in an event-driven fashion, rather than in a time-driven fashion. This means almost no calculation is wasted on computing or maintaining the positions and velocities of the particles between the collisions. Among the event-driven algorithms intended for the same task of simulating granular flow , like, for example, the algorithm of D.C. Rapaport, [ 10 ] the LSA is distinguished by a simpler data structure and data handling.
For any particle, at any stage of the calculation, the LSA keeps a record of only two events: an old, already processed and committed event, which comprises the committed event time stamp , the particle state (including position and velocity), and possibly the "partner", i.e., the identity of the other particle or of the boundary with which the particle collided in the past; and a new event proposed for future processing, with a similar set of parameters. The new event is not committed. The maximum of the committed old event times must never exceed the minimum of the non-committed new event times.
The next particle to be examined by the algorithm is the one with the current minimum of new event times. When the chosen particle is examined, what was previously its new event is declared old and is committed, while its next new event is scheduled, with a new time stamp, a new state, and a new partner, if any. As the next new event for a particle is set, some of the neighboring particles may update their non-committed new events to better account for the new information.
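A structural sketch of this two-event bookkeeping follows. The collision physics is abstracted behind a caller-supplied predictor, neighbor updates and particle state are omitted for brevity, and all names are illustrative rather than taken from any published implementation.

```python
import heapq
from typing import Callable, NamedTuple, Optional

class Event(NamedTuple):
    time: float
    state: object            # snapshot of position/velocity at event time
    partner: Optional[int]   # colliding particle id, or None for a boundary

def lsa_loop(n: int, predict: Callable[[int, Event], Event], t_stop: float):
    """predict(i, committed_event) -> the next proposed Event for particle i;
    it encapsulates the collision physics, which this skeleton omits."""
    committed = [Event(0.0, None, None) for _ in range(n)]    # old events
    proposed = [predict(i, committed[i]) for i in range(n)]   # new events
    heap = [(proposed[i].time, i) for i in range(n)]
    heapq.heapify(heap)
    while heap:
        t, i = heapq.heappop(heap)
        if proposed[i].time != t:
            continue                  # stale entry: the particle was rescheduled
        if t > t_stop:
            break                     # nothing earlier than t_stop remains
        committed[i] = proposed[i]    # the former "new" event becomes committed
        proposed[i] = predict(i, committed[i])     # schedule the next new event
        heapq.heappush(heap, (proposed[i].time, i))
    return committed

# Toy use: every particle's next "collision" is 0.7 time units after its last.
done = lsa_loop(3, lambda i, ev: Event(ev.time + 0.7, None, i), t_stop=2.0)
print([round(e.time, 1) for e in done])   # [1.4, 1.4, 1.4]
```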
As the calculations of the LSA progress, the collision rates of the particles may, and usually do, increase. The LSA still successfully approaches the jamming state as long as those rates remain comparable among all the particles, except for the rattlers. (Rattlers experience consistently low collision rates; this property allows one to detect them.) However, it is possible for a few particles, or even a single particle, to experience a very high collision rate as a certain simulated time is approached; the rate then increases without bound relative to the collision rates in the rest of the particle ensemble. If this happens, the simulation becomes stuck in time and cannot progress toward the state of jamming.
The stuck-in-time failure can also occur when simulating a granular flow without particle compression or expansion. This failure mode was recognized by the practitioners of granular flow simulations as an "inelastic collapse" [ 11 ] because it often occurs in such simulations when the restitution coefficient in collisions is low (i.e. inelastic). The failure is not specific to only the LSA algorithm. Techniques to avoid the failure have been proposed. [ 12 ]
The LSA was a by-product of an attempt to find a fair measure of speedup in parallel simulations . The Time Warp parallel simulation algorithm by David Jefferson was advanced as a method to simulate asynchronous spatial interactions of fighting units in combat models on a parallel computer . [ 13 ] Colliding particles models [ 14 ] offered similar simulation tasks with spatial interactions of particles but clear of the details that are non-essential for exposing the simulation techniques. The speedup was presented as the ratio of the execution time on a uniprocessor over that on a multiprocessor , when executing the same parallel Time Warp algorithm. Boris D. Lubachevsky noticed that such a speedup assessment might be faulty because executing a parallel algorithm for a task on a uniprocessor is not necessarily the fastest way to perform the task on such a machine. The LSA was created in an attempt to produce a faster uniprocessor simulation and hence to have a more fair assessment of the parallel speedup . Later on, a parallel simulation algorithm, different from the Time Warp, was also proposed, that, when run on a uniprocessor, reduces to the LSA. [ 15 ] | https://en.wikipedia.org/wiki/Lubachevsky–Stillinger_algorithm |
In combinatorial mathematics , the Lubell–Yamamoto–Meshalkin inequality , more commonly known as the LYM inequality , is an inequality on the sizes of sets in a Sperner family , proved by Bollobás (1965) , Lubell (1966) , Meshalkin (1963) , and Yamamoto (1954) . It is named for the initials of three of its discoverers. To include the initials of all four discoverers, it is sometimes referred to as the YBLM inequality .
This inequality belongs to the field of combinatorics of sets, and has many applications in combinatorics. In particular, it can be used to prove Sperner's theorem . Its name is also used for similar inequalities.
Let U be an n -element set, let A be a family of subsets of U such that no set in A is a subset of another set in A , and let a k denote the number of sets of size k in A . Then

$$\sum_{k=0}^{n} \frac{a_k}{\binom{n}{k}} \leq 1.$$
Lubell (1966) proves the Lubell–Yamamoto–Meshalkin inequality by a double counting argument in which he counts the permutations of U in two different ways. First, by counting all permutations of U (identified with {1, …, n }) directly, one finds that there are n ! of them. But secondly, one can generate a permutation (i.e., an ordering) of the elements of U by selecting a set S in A and choosing a map that sends {1, …, | S |} to S . If | S | = k , the set S is associated in this way with k !( n − k )! permutations, and in each of them the image of the first k elements of U is exactly S . Each permutation may only be associated with a single set in A , for if two prefixes of a permutation both formed sets in A then one would be a subset of the other. Therefore, the number of permutations that can be generated by this procedure is

$$\sum_{S \in A} |S|!\,(n - |S|)! = \sum_{k=0}^{n} a_k\, k!\,(n-k)!.$$
Since this number is at most the total number of all permutations,

$$\sum_{k=0}^{n} a_k\, k!\,(n-k)! \leq n!.$$
Finally, dividing the above inequality by n ! leads to the result.
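A numerical check of the inequality for a concrete family (the middle layer of the Boolean lattice, for which the LYM bound is tight) can be sketched as follows:

```python
from itertools import combinations
from math import comb

# Check the LYM inequality for a small Sperner family: all 3-element
# subsets of a 6-element set.
n = 6
family = [set(c) for c in combinations(range(n), 3)]

# Verify the antichain (Sperner) property: no member contains another.
assert not any(a < b for a in family for b in family)

lhs = sum(1 / comb(n, len(s)) for s in family)
print(lhs)               # 1.0 -- the bound is attained by a full middle layer
assert lhs <= 1 + 1e-12
```

| https://en.wikipedia.org/wiki/Lubell–Yamamoto–Meshalkin_inequality |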
A lubricant (sometimes shortened to lube ) is a substance that helps to reduce friction between surfaces in mutual contact, which ultimately reduces the heat generated when the surfaces move. It may also have the function of transmitting forces, transporting foreign particles, or heating or cooling the surfaces. The property of reducing friction is known as lubricity .
In addition to industrial applications, lubricants are used for many other purposes. Other uses include cooking ( oils and fats used in frying pans and in baking to prevent food sticking), reducing rusting and friction in machinery through the use of motor oil and grease , bioapplications on humans (e.g., lubricants for artificial joints ), ultrasound examination, medical examination, and sexual intercourse. A lubricant is mainly used to reduce friction and to contribute to a better, more efficient functioning of a mechanism.
Lubricants have been in some use for thousands of years. Calcium soaps have been identified on the axles of chariots dated to 1400 BC. Building stones were slid on oil-impregnated lumber in the time of the pyramids. In the Roman era , lubricants were based on olive oil and rapeseed oil , as well as animal fats. The growth of lubrication accelerated in the Industrial Revolution with the accompanying use of metal-based machinery. Relying initially on natural oils, needs for such machinery shifted toward petroleum-based materials early in the 1900s. A breakthrough came with the development of vacuum distillation of petroleum, as described by the Vacuum Oil Company . This technology allowed the purification of very non-volatile substances, which are common in many lubricants. [ 1 ]
A good lubricant generally possesses the following characteristics: a high boiling point and low freezing point (in order to stay liquid within a wide range of temperature), a high viscosity index, thermal stability, hydraulic stability, demulsibility, corrosion prevention, and a high resistance to oxidation.
Typically lubricants contain 90% base oil (most often petroleum fractions, called mineral oils ) and less than 10% additives . Vegetable oils or synthetic liquids such as hydrogenated polyolefins , esters , silicones , fluorocarbons and many others are sometimes used as base oils. Additives deliver reduced friction and wear, increased viscosity , improved viscosity index, resistance to corrosion and oxidation , aging or contamination, etc.
Non-liquid lubricants include powders (dry graphite , PTFE , molybdenum disulphide , tungsten disulphide , etc.), PTFE tape used in plumbing, air cushion and others. Dry lubricants such as graphite, molybdenum disulphide and tungsten disulphide also offer lubrication at temperatures (up to 350 °C) higher than liquid and oil-based lubricants are able to operate. Limited interest has been shown in low friction properties of compacted oxide glaze layers formed at several hundred degrees Celsius in metallic sliding systems; however, practical use is still many years away due to their physically unstable nature.
A large number of additives are used to impart performance characteristics to the lubricants. Modern automotive lubricants contain as many as ten additives, comprising up to 20% of the lubricant; the main families of additives include antioxidants, detergents, dispersants, anti-wear and extreme-pressure agents, friction modifiers, corrosion inhibitors, viscosity index improvers, pour point depressants, and antifoam agents. [ 1 ]
In 1999, an estimated 37,300,000 tons of lubricants were consumed worldwide. [ 4 ] Automotive applications dominate, including electric vehicles, [ 5 ] but other industrial, marine, and metalworking applications are also big consumers of lubricants. Although air and other gas-based lubricants are known (e.g., in fluid bearings ), liquid lubricants dominate the market, followed by solid lubricants.
Lubricants are generally composed of a majority of base oil plus a variety of additives to impart desirable characteristics. Although generally lubricants are based on one type of base oil, mixtures of the base oils also are used to meet performance requirements.
The term " mineral oil " is used to refer to lubricating base oils derived from crude oil . The American Petroleum Institute (API) designates several types of lubricant base oil: [ 6 ]
The lubricant industry commonly extends this group terminology to include designations such as Group I+, Group II+, and Group III+, denoting base oils with enhanced properties within those groups.
Mineral base oils can also be classified into three categories depending on the prevailing composition: paraffinic, naphthenic, or aromatic.
Lubricants can also be produced using synthetic hydrocarbons (derived ultimately from petroleum), known as " synthetic oils ". These include polyalphaolefins (PAO), synthetic esters, polyalkylene glycols (PAG), and alkylated aromatics, among others.
PTFE: polytetrafluoroethylene (PTFE) is typically used as a coating layer on, for example, cooking utensils to provide a non-stick surface. Its usable temperature range up to 350 °C and its chemical inertness make it a useful additive in special greases , where it can function both as a thickener and as a lubricant. Under extreme pressures, PTFE powder or solid is of little value, as it is soft and flows away from the area of contact. Ceramic, metal, or alloy lubricants must then be used. [ 7 ]
Inorganic solids: Graphite , hexagonal boron nitride , molybdenum disulfide and tungsten disulfide are examples of solid lubricants . Some retain their lubricity to very high temperatures. The use of some such materials is sometimes restricted by their poor resistance to oxidation (e.g., molybdenum disulfide degrades above 350 °C in air, but at 1100 °C in reducing environments).
Metal/alloy: Metal alloys, composites and pure metals can be used as grease additives or as the sole constituents of sliding surfaces and bearings. Cadmium and gold are used for plating surfaces, which gives them good corrosion resistance and sliding properties. Lead , tin , zinc alloys and various bronze alloys are used as sliding bearings, or their powders can be used to lubricate sliding surfaces alone.
Aqueous lubrication is of interest in a number of technological applications. Strongly hydrated brush polymers such as PEG can serve as lubricants at liquid solid interfaces. [ 8 ] By continuous rapid exchange of bound water with other free water molecules, these polymer films keep the surfaces separated while maintaining a high fluidity at the brush–brush interface at high compressions, thus leading to a very low coefficient of friction.
Biolubricants [ 9 ] are derived from vegetable oils and other renewable sources. They usually are triglyceride esters (fats obtained from plants and animals). For lubricant base oil use, the vegetable derived materials are preferred. Common ones include high oleic canola oil , castor oil , palm oil , sunflower seed oil and rapeseed oil from vegetable, and tall oil from tree sources. Many vegetable oils are often hydrolyzed to yield the acids which are subsequently combined selectively to form specialist synthetic esters. Other naturally derived lubricants include lanolin (wool grease, a natural water repellent). [ 10 ]
Whale oil was a historically important lubricant, with some uses up to the latter part of the 20th century as a friction modifier additive for automatic transmission fluid . [ 11 ]
In 2008, the biolubricant market was around 1% of UK lubricant sales in a total lubricant market of 840,000 tonnes/year. [ 12 ]
As of 2020 [update] , researchers at Australia's CSIRO have been studying safflower oil as an engine lubricant, finding superior performance and lower emissions than petroleum -based lubricants in applications such as engine -driven lawn mowers , chainsaws and other agricultural equipment. Grain -growers trialling the product have welcomed the innovation, with one describing it as needing very little refining and as being biodegradable, with potential as a bioenergy and biofuel feedstock. The scientists have reengineered the plant using gene silencing , creating a variety that produces up to 93% oil, the highest currently available from any plant. Researchers at Montana State University 's Advanced Fuel Centre in the US, studying the oil's performance in a large diesel engine and comparing it with conventional oil, have described the results as a "game-changer". [ 13 ]
Greases are a solid or semi-solid lubricant produced by blending thickening agents within a liquid lubricant. Greases are typically composed of about 80% lubricating oil, around 5% to 10% thickener, and approximately 10% to 15% additives. In most common greases, the thickener is a light or alkali metal soap, forming a sponge-like structure that encapsulates the oil droplets. Beyond lubrication, greases are generally expected to provide corrosion protection, typically achieved through additives. To prevent drying out at higher temperatures, dry lubricants are also added. By selecting appropriate oils, thickeners, and additives, the properties of greases can be optimized for a wide range of applications. There are greases suited for high or extremely low temperatures, vacuum applications, water-resistant and weatherproof greases, highly pressure-resistant or creeping types, food-grade, or exceptionally adhesive greases. [ 14 ]
One of the largest applications for lubricants, in the form of motor oil , is protecting the internal combustion engines in motor vehicles and powered equipment.
Anti-tack or anti-stick coatings are designed to reduce the adhesive condition (stickiness) of a given material. The rubber, hose, and wire and cable industries are the largest consumers of anti-tack products but virtually every industry uses some form of anti-sticking agent. Anti-sticking agents differ from lubricants in that they are designed to reduce the inherently adhesive qualities of a given compound while lubricants are designed to reduce friction between any two surfaces.
Lubricants are typically used to separate moving parts in a system. This separation has the benefit of reducing friction, wear and surface fatigue, together with reduced heat generation, operating noise and vibrations. Lubricants achieve this in several ways. The most common is by forming a physical barrier i.e., a thin layer of lubricant separates the moving parts. This is analogous to hydroplaning, the loss of friction observed when a car tire is separated from the road surface by moving through standing water. This is termed hydrodynamic lubrication. In cases of high surface pressures or temperatures, the fluid film is much thinner and some of the forces are transmitted between the surfaces through the lubricant.
Typically the lubricant-to-surface friction is much less than surface-to-surface friction in a system without any lubrication. Thus use of a lubricant reduces the overall system friction. Reduced friction has the benefit of reducing heat generation and reduced formation of wear particles as well as improved efficiency. Lubricants may contain polar additives known as friction modifiers that chemically bind to metal surfaces to reduce surface friction even when there is insufficient bulk lubricant present for hydrodynamic lubrication, e.g. protecting the valve train in a car engine at startup. The base oil itself might also be polar in nature and as a result inherently able to bind to metal surfaces, as with polyolester oils.
Both gas and liquid lubricants can transfer heat. However, liquid lubricants are much more effective on account of their high specific heat capacity . Typically the liquid lubricant is constantly circulated to and from a cooler part of the system, although lubricants may be used to warm as well as to cool when a regulated temperature is required. This circulating flow also determines the amount of heat that is carried away in any given unit of time. High-flow systems can carry away a lot of heat and have the additional benefit of reducing the thermal stress on the lubricant, so lower-cost liquid lubricants may be used. The primary drawback is that high flows typically require larger sumps and bigger cooling units. A secondary drawback is that a high-flow system that relies on the flow rate to protect the lubricant from thermal stress is susceptible to catastrophic failure during sudden system shutdowns. An automotive oil-cooled turbocharger is a typical example. Turbochargers get red hot during operation, and the oil that is cooling them survives only because its residence time in the system is very short (i.e. the flow rate is high). If the system is shut down suddenly (pulling into a service area after a high-speed drive and stopping the engine), the oil that is in the turbocharger immediately oxidizes and will clog the oil ways with deposits. Over time these deposits can completely block the oil ways, reducing the cooling, with the result that the turbocharger experiences total failure, typically with seized bearings . Non-flowing lubricants such as greases and pastes are not effective at heat transfer, although they do contribute by reducing the generation of heat in the first place.
Lubricant circulation systems have the benefit of carrying away internally generated debris and external contaminants that get introduced into the system to a filter where they can be removed. Lubricants for machines that regularly generate debris or contaminants such as automotive engines typically contain detergent and dispersant additives to assist in debris and contaminant transport to the filter and removal. Over time the filter will get clogged and require cleaning or replacement, hence the recommendation to change a car's oil filter at the same time as changing the oil. In closed systems such as gear boxes the filter may be supplemented by a magnet to attract any iron fines that get created.
It is apparent that in a circulatory system the oil will only be as clean as the filter can make it; thus it is unfortunate that there are no industry standards by which consumers can readily assess the filtering ability of various automotive filters. Poor automotive filters significantly reduce the life of the machine (engine) and make the system inefficient.
Lubricants known as hydraulic fluid are used as the working fluid in hydrostatic power transmission. Hydraulic fluids comprise a large portion of all lubricants produced in the world. The automatic transmission 's torque converter is another important application for power transmission with lubricants.
Lubricants prevent wear by reducing friction between two parts. Lubricants may also contain anti-wear or extreme pressure additives to boost their performance against wear and fatigue.
Many lubricants are formulated with additives that form chemical bonds with surfaces, or that exclude moisture, to prevent corrosion and rust. The lubricant film reduces corrosion between two metallic surfaces and prevents the direct contact and moisture immersion that cause corrosion.
Lubricants occupy the clearance between moving parts through capillary force, thus sealing the clearance. This effect can be used to seal pistons and shafts.
A further phenomenon that has undergone investigation in relation to high-temperature wear prevention and lubrication is that of a compacted oxide layer glaze formation. Such glazes are generated by sintering a compacted oxide layer. Such glazes are crystalline, in contrast to the amorphous glazes seen in pottery. The required high temperatures arise from metallic surfaces sliding against each other (or a metallic surface against a ceramic surface). Due to the elimination of metallic contact and adhesion by the generation of oxide, friction and wear is reduced. Effectively, such a surface is self-lubricating.
As the "glaze" is already an oxide, it can survive to very high temperatures in air or oxidising environments. However, it is disadvantaged by it being necessary for the base metal (or ceramic) having to undergo some wear first to generate sufficient oxide debris.
Lubricants, both fresh and used, can cause considerable damage to the environment, mainly because of their high potential to cause serious water pollution. Further, the additives typically contained in lubricants can be toxic to flora and fauna. In used fluids, the oxidation products can be toxic as well. Lubricant persistence in the environment largely depends upon the base fluid; however, the use of very toxic additives can negatively affect persistence. Lanolin-based lubricants are non-toxic, making them an environmental alternative that is safe for both users and the environment.
It is estimated that about 50% of all lubricants are released into the environment. [ citation needed ] Common disposal methods include recycling , burning , landfill and discharge into water, though typically disposal in landfill and discharge into water are strictly regulated in most countries, as even a small amount of lubricant can contaminate a large amount of water. Most regulations permit a threshold level of lubricant that may be present in waste streams, and companies spend hundreds of millions of dollars annually in treating their waste waters to reach acceptable levels. [ citation needed ]
Burning the lubricant as fuel, typically to generate electricity, is also governed by regulations mainly on account of the relatively high level of additives present. Burning generates both airborne pollutants and ash rich in toxic materials, mainly heavy metal compounds. Thus lubricant burning takes place in specialized facilities that have incorporated special scrubbers to remove airborne pollutants and have access to landfill sites with permits to handle the toxic ash.
Unfortunately, most lubricant that ends up directly in the environment is due to the general public discharging it onto the ground, into drains, and directly into landfills as trash. Other direct contamination sources include runoff from roadways, accidental spillages, natural or man-made disasters, and pipeline leakages.
Improvement in filtration technologies and processes has now made recycling a viable option (with the rising price of base stock and crude oil ). Typically various filtration systems remove particulates, additives, and oxidation products and recover the base oil. The oil may be refined during the process. This base oil is then treated much the same as virgin base oil; however, there is considerable reluctance to use recycled oils, as they are generally considered inferior. Basestock fractionally vacuum-distilled from used lubricants has superior properties to all-natural oils, but cost-effectiveness depends on many factors. Used lubricant may also be used as refinery feedstock to become part of crude oil. Again, there is considerable reluctance toward this use, as the additives, soot, and wear metals will seriously poison or deactivate the critical catalysts in the process. Cost prohibits carrying out both filtration (removal of soot and additives) and re-refining ( distilling , isomerization, hydrocracking, etc.); however, the primary hindrance to recycling remains the collection of fluids, as refineries need a continuous supply in amounts measured in cisterns (rail tanks).
Occasionally, unused lubricant requires disposal. The best course of action in such situations is to return it to the manufacturer where it can be processed as a part of fresh batches. | https://en.wikipedia.org/wiki/Lubricant |
Lubrication is the process or technique of using a lubricant to reduce friction and wear and tear in a contact between two surfaces. The study of lubrication is a discipline in the field of tribology .
Lubrication mechanisms such as fluid-lubricated systems are designed so that the applied load is partially or completely carried by hydrodynamic or hydrostatic pressure, which reduces solid body interactions (and consequently friction and wear). Depending on the degree of surface separation, different lubrication regimes can be distinguished.
Adequate lubrication allows smooth, continuous operation of machine elements , reduces the rate of wear, and prevents excessive stresses or seizures at bearings. By repelling water and other substances, it also reduces corrosion. When lubrication breaks down, components can rub destructively against each other, causing heat, local welding, destructive damage and failure.
As the load increases on the contacting surfaces, distinct situations can be observed with respect to the mode of lubrication, which are called lubrication regimes: fluid film (hydrodynamic or hydrostatic) lubrication, elastohydrodynamic lubrication, boundary lubrication, and mixed lubrication. [ 1 ]
Besides supporting the load the lubricant may have to perform other functions as well, for instance it may cool the contact areas and remove wear products. While carrying out these functions the lubricant is constantly replaced from the contact areas either by the relative movement (hydrodynamics) or by externally induced forces.
Lubrication is required for correct operation of mechanical systems such as pistons , pumps , cams , bearings , turbines , gears , roller chains , cutting tools etc. where without lubrication the pressure between the surfaces in proximity would generate enough heat for rapid surface damage which in a coarsened condition may literally weld the surfaces together, causing seizure .
In some applications, such as piston engines, the film between the piston and the cylinder wall also seals the combustion chamber, preventing combustion gases from escaping into the crankcase.
If an engine requires pressurised lubrication, for example to plain bearings , an oil pump and an oil filter are employed. On early engines (such as the Sabb marine diesel ), where a pressurised feed was not required, splash lubrication would suffice. | https://en.wikipedia.org/wiki/Lubrication |
In fluid dynamics , lubrication theory describes the flow of fluids ( liquids or gases ) in a geometry in which one dimension is significantly smaller than the others. An example is the flow above air hockey tables, where the thickness of the air layer beneath the puck is much smaller than the dimensions of the puck itself.
Internal flows are those where the fluid is fully bounded. Internal flow lubrication theory has many industrial applications because of its role in the design of fluid bearings . Here a key goal of lubrication theory is to determine the pressure distribution in the fluid volume, and hence the forces on the bearing components. The working fluid in this case is often termed a lubricant .
Free film lubrication theory is concerned with the case in which one of the surfaces containing the fluid is a free surface . In that case, the position of the free surface is itself unknown, and one goal of lubrication theory is then to determine this. Examples include the flow of a viscous fluid over an inclined plane or over topography. [ 1 ] [ 2 ] Surface tension may be significant, or even dominant. [ 3 ] Issues of wetting and dewetting then arise. For very thin films (thickness less than one micrometre ), additional intermolecular forces, such as Van der Waals forces or disjoining forces , may become significant. [ citation needed ]
Mathematically, lubrication theory can be seen as exploiting the disparity between two length scales. The first is the characteristic film thickness, $H$, and the second is a characteristic substrate length scale $L$. The key requirement for lubrication theory is that the ratio $\epsilon = H/L$ is small, that is, $\epsilon \ll 1$.
The Navier–Stokes equations (or Stokes equations , when fluid inertia may be neglected) are expanded in this small parameter, and the leading-order equations are then

$$\frac{\partial p}{\partial x} = \mu \frac{\partial^{2} u}{\partial z^{2}}, \qquad \frac{\partial p}{\partial z} = 0,$$
where $x$ and $z$ are coordinates in the direction of the substrate and perpendicular to it, respectively. Here $p$ is the fluid pressure, and $u$ is the fluid velocity component parallel to the substrate; $\mu$ is the fluid viscosity . The equations show, for example, that pressure variations across the gap are small, and that those along the gap are proportional to the fluid viscosity. A more general formulation of the lubrication approximation would include a third dimension, and the resulting differential equation is known as the Reynolds equation .
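As a concrete illustration, a minimal finite-difference sketch of the one-dimensional Reynolds equation, d/dx(h³ dp/dx) = 6μU dh/dx, for a converging linear slider with ambient pressure at both ends follows; all geometry and parameter values are illustrative.

```python
import numpy as np

# 1D Reynolds equation  d/dx( h^3 dp/dx ) = 6*mu*U*dh/dx
# solved by finite differences for a converging linear slider,
# with p = 0 at both ends. All values are illustrative.
L, mu, U = 0.1, 0.05, 1.0            # length (m), viscosity (Pa s), speed (m/s)
h1, h2, N = 2e-4, 1e-4, 201          # inlet/outlet film thickness (m), nodes

x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
h = h1 + (h2 - h1) * x / L           # linearly tapering film thickness
h_mid = 0.5 * (h[:-1] + h[1:])       # h at the midpoints i + 1/2

# Tridiagonal system for the interior pressures p[1..N-2].
A = np.zeros((N - 2, N - 2))
b = 6.0 * mu * U * np.gradient(h, dx)[1:-1]
for k in range(N - 2):
    w = h_mid[k] ** 3 / dx**2        # west face coefficient
    e = h_mid[k + 1] ** 3 / dx**2    # east face coefficient
    A[k, k] = -(w + e)
    if k > 0:
        A[k, k - 1] = w
    if k < N - 3:
        A[k, k + 1] = e

p = np.zeros(N)
p[1:-1] = np.linalg.solve(A, b)      # boundary nodes stay at p = 0
print("peak pressure: %.3g Pa at x = %.3g m" % (p.max(), x[np.argmax(p)]))
```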
Further details can be found in the literature [ 4 ] or in the textbooks given in the bibliography.
An important application area is lubrication of machinery components such as fluid bearings and mechanical seals . Coating is another major application area including the preparation of thin films , printing , painting and adhesives .
Biological applications have included studies of red blood cells in narrow capillaries and of liquid flow in the lung and eye. | https://en.wikipedia.org/wiki/Lubrication_theory |
Lubricity is the measure of the reduction in friction and/or wear by a lubricant . The study of lubrication and wear mechanisms is called tribology .
The lubricity of a substance is not a material property, and cannot be measured directly. Tests are performed to quantify a lubricant's performance for a specific system. This is often done by determining how much wear is caused to a surface by a given wear-inducing object in a given amount of time. Other factors such as surface size, temperature, and pressure are also specified. For two fluids with the same viscosity, the one that results in a smaller wear scar is considered to have higher lubricity. For this reason, lubricity is also termed a substance's anti-wear property .
Examples of tribometer test setups include "Ball-on-cylinder" and "Ball-on-three-discs" tests.
In a modern diesel engine , the fuel is part of the engine lubrication process. Diesel fuel naturally contains compounds that provide lubricity, but because of regulations in many countries (such as the US and the EU countries), sulphur must be removed from the fuel before it can be sold. The hydrotreatment of diesel fuel to remove sulphur also removes the compounds that provide lubricity. Reformulated diesel fuel that does not have biodiesel added has a lower lubricity and requires lubricity improving additives to prevent excessive engine wear. [ 1 ] [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Lubricity |
In computer science , Luby transform codes ( LT codes ) are the first class of practical fountain codes that are near-optimal erasure correcting codes . They were invented by Michael Luby in 1998 and published in 2002. [ 1 ] Like some other fountain codes , LT codes depend on sparse bipartite graphs to trade reception overhead for encoding and decoding speed. The distinguishing characteristic of LT codes is in employing a particularly simple algorithm based on the exclusive or operation ( ⊕ {\displaystyle \oplus } ) to encode and decode the message. [ 2 ]
LT codes are rateless because the encoding algorithm can in principle produce an infinite number of message packets (i.e., the percentage of packets that must be received to decode the message can be arbitrarily small). They are erasure correcting codes because they can be used to transmit digital data reliably on an erasure channel .
Raptor codes (see, for example, IETF RFC 5053 or IETF RFC 6330) are the next generation beyond LT codes; they have linear-time encoding and decoding. Raptor codes are fundamentally based on LT codes: encoding for Raptor codes uses two stages, where the second stage is LT encoding. Similarly, decoding with Raptor codes primarily relies upon LT decoding, but intermixes it with more advanced decoding techniques. The RaptorQ code specified in IETF RFC 6330, the most advanced fountain code, has vastly superior decoding probabilities and performance compared to using an LT code alone.
The traditional scheme for transferring data across an erasure channel depends on continuous two-way communication.
Certain networks, such as ones used for cellular wireless broadcasting, do not have a feedback channel. Applications on these networks still require reliability. Fountain codes in general, and LT codes in particular, get around this problem by adopting an essentially one-way communication protocol.
As mentioned above, the RaptorQ code specified in IETF RFC 6330 outperforms an LT code in practice.
The encoding process begins by dividing the uncoded message into n blocks of roughly equal length. Encoded packets are then produced with the help of a pseudorandom number generator .
This process continues until the receiver signals that the message has been received and successfully decoded.
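A minimal sketch of the encoder in C follows. The block count, block length, and function names are illustrative assumptions, and rand() stands in for the pseudorandom generator shared with the receiver; the degree is drawn here from the ideal soliton distribution discussed later, whereas a production encoder would use the robust soliton distribution.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N_BLOCKS 16   /* hypothetical number of message blocks */
#define BLOCK_LEN 64  /* hypothetical block size in bytes */

/* Sample a degree d from the ideal soliton distribution:
   P(d = 1) = 1/n and P(d = k) = 1/(k(k-1)) for k = 2..n. */
static int sample_degree(int n)
{
    double u = (double)rand() / ((double)RAND_MAX + 1.0);
    double cum = 1.0 / n;
    int d = 1;
    while (u >= cum && d < n) {
        d++;
        cum += 1.0 / ((double)d * (d - 1));
    }
    return d;
}

/* Build one encoded packet: choose d distinct blocks uniformly at random
   and exclusive-or them together. The chosen indices would be prefixed to
   the packet (or recreated by the receiver from a shared PRNG seed). */
static int encode_packet(unsigned char blocks[N_BLOCKS][BLOCK_LEN],
                         unsigned char out[BLOCK_LEN], int idx[N_BLOCKS])
{
    int d = sample_degree(N_BLOCKS);
    int chosen = 0;
    memset(out, 0, BLOCK_LEN);
    while (chosen < d) {
        int i = rand() % N_BLOCKS;
        int dup = 0;
        for (int j = 0; j < chosen; j++)
            if (idx[j] == i) dup = 1;
        if (dup) continue;
        idx[chosen++] = i;
        for (int b = 0; b < BLOCK_LEN; b++)
            out[b] ^= blocks[i][b];   /* XOR the chosen block into the packet */
    }
    return d;
}

int main(void)
{
    static unsigned char blocks[N_BLOCKS][BLOCK_LEN]; /* the divided message */
    unsigned char packet[BLOCK_LEN];
    int idx[N_BLOCKS];
    srand(42);
    int d = encode_packet(blocks, packet, idx);
    printf("packet of degree %d built from blocks:", d);
    for (int j = 0; j < d; j++) printf(" %d", idx[j]);
    printf("\n");
    return 0;
}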
The decoding process uses the " exclusive or " operation to recover the original message from the encoded packets.
This decoding procedure works because A ⊕ {\displaystyle \oplus } A = 0 for any bit string A . After d − 1 distinct blocks have been exclusive-ored into a packet of degree d , the original unencoded content of the unmatched block is all that remains. In symbols we have
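( m i 1 ⊕ m i 2 ⊕ ⋯ ⊕ m i d ) ⊕ ( m i 1 ⊕ ⋯ ⊕ m i d − 1 ) = m i d {\displaystyle (m_{i_{1}}\oplus m_{i_{2}}\oplus \cdots \oplus m_{i_{d}})\oplus (m_{i_{1}}\oplus \cdots \oplus m_{i_{d-1}})=m_{i_{d}}}

where m i j {\displaystyle m_{i_{j}}} denotes the j -th of the d message blocks xor-ed into the packet.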
Several variations of the encoding and decoding processes described above are possible. For instance, instead of prefixing each packet with a list of the actual message block indices { i 1 , i 2 , ..., i d }, the encoder might simply send a short "key" which serves as the seed for the pseudorandom number generator (PRNG) or index table used to construct the list of indices. Since a receiver equipped with the same RNG or index table can reliably recreate the "random" list of indices from this seed, the decoding process can be completed successfully. Alternatively, by combining a simple LT code of low average degree with a robust error-correcting code, a raptor code can be constructed that will outperform an optimized LT code in practice. [ 3 ]
There is only one parameter that can be used to optimize a straight LT code: the degree distribution function (described as a pseudorandom number generator for the degree d in the LT encoding section above). In practice the other "random" numbers (the list of indices { i 1 , i 2 , ..., i d } ) are invariably taken from a uniform distribution on [0, n ), where n is the number of blocks into which the message has been divided. [ 4 ]
Luby himself [ 1 ] discussed the "ideal soliton distribution " defined by
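ρ ( 1 ) = 1 n , ρ ( d ) = 1 d ( d − 1 ) for d = 2 , 3 , … , n {\displaystyle \rho (1)={\frac {1}{n}},\qquad \rho (d)={\frac {1}{d(d-1)}}\quad {\text{for }}d=2,3,\ldots ,n}

where n is the number of blocks into which the message has been divided.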
This degree distribution theoretically minimizes the expected number of redundant code words that will be sent before the decoding process can be completed. However, the ideal soliton distribution does not work well in practice, because any fluctuation around the expected behavior makes it likely that at some step in the decoding process there will be no available packet of (reduced) degree 1, so decoding will fail. Furthermore, some of the original blocks will not be xor-ed into any of the transmission packets. Therefore, in practice, a modified distribution, the "robust soliton distribution ", is substituted for the ideal distribution. The effect of the modification is, generally, to produce more packets of very small degree (around 1) and fewer packets of degree greater than 1, except for a spike of packets at one fairly large degree, chosen to ensure that all original blocks will be included in some packet. [ 4 ]
Luca Comai is an Italian plant biologist whose work has focused on trait discovery for improving agricultural crops and on developing protocols and systems for identifying new genes and mutations in plants. Through his work at Calgene , Comai was one of the first discoverers of the glyphosate resistance gene and is considered a pioneer in the field of plant biotechnology research.
His research since then has focused on developing the Targeting Induced Local Lesions in Genomes ( TILLING ) protocol, which allows new mutations and traits to be quickly identified within a target plant species through genome and sequence analysis. He has received a number of research and teaching awards, along with being named a Fellow of the American Association for the Advancement of Science (AAAS). In 2023, he was elected to the National Academy of Sciences . [ 1 ]
Comai received his bachelor's degree in agricultural sciences from the University of Bologna in 1976 and his master's degree in plant pathology in 1978 from Washington State University . [ 2 ] He then went on to earn his Ph.D. in plant pathology from the University of California, Davis , and completed a postdoc at the same university. [ 3 ] His doctoral thesis examined how indole-3-acetic acid (IAA) is produced in bacteria and how this genetic function is homologous to the production of the same plant hormone in plants, encoded in the plant genome as T-DNA from Agrobacterium . [ 4 ]
Comai first applied for a teaching position at the University of California, Riverside in January 1981. [ 5 ] When this position was not offered to him, he instead joined the biotech company Calgene in the latter half of 1981, during its initial opening period. [ 6 ] While he had been attempting to gather support among the Riverside faculty for his application, he had been informed of the properties of glyphosate and its specific targeting of the EPSP synthase enzyme. He proposed to Calgene's science board that they try to develop a plant gene mutation that changed the shape of EPSP synthase so that glyphosate would be unable to bind to it. His suggestion was rejected because glyphosate was a product of another company, but he decided to work on the gene mutation on his own time. [ 5 ] Using Salmonella , he applied random mutagenesis followed by glyphosate screening to search for the EPSP synthase mutation he was seeking, and he succeeded. [ 7 ]
In 1982, Comai presented his glyphosate tolerance mutation to a fellow scientist, Steve Rogers, who worked at Monsanto, demonstrating that he had made a form of the resistance gene superior to the one Monsanto had been working on. It was still not good enough for agricultural production, however, and Comai continued his independent work. He published a paper in the journal Nature in October 1985 describing how he and his colleagues at Calgene had created glyphosate-resistant plants using the gene mutation Comai had found years earlier. [ 5 ] This upstaging of Monsanto's flagship product created a strong sense of rivalry with Calgene and was followed by layoffs at Monsanto at the end of 1985. [ 5 ]
First becoming a professor at the University of Washington in 1990, Comai focused his lab on the development of improved agricultural genetic traits, using the model organism Arabidopsis thaliana to co-develop what was referred to as the TILLING protocol. [ 8 ] This system involved developing gene models and inbred lines, later supplemented by the expanded EcoTILLING protocol in 2004, to compare differences in these plant lines to the reference genome and isolate new mutations and traits for further research. [ 9 ] He would later lead the TILLING Core Service Facility at UC Davis, which continued developing a genetic analysis platform called "TILLING-by-Sequencing" that was used not just on Arabidopsis but also on Camelina , tomato , onion , rice , and wheat . [ 10 ] In 2014, Comai's lab received an award sponsorship of $489,000, donated jointly by three companies, to sponsor the further use of TILLING in current tomato cultivar populations. [ 11 ]
Comai joined UC Davis in 2006, with his lab's research focusing on chromosome biology, functional genomics , and epigenetics , along with general mutational trait research. He is also well known for his work as a teacher of the "BIS 101" undergraduate genetics course, his use of whiteboard writing, and his co-produced video series alongside the university. [ 3 ] In 2014, a collaboration between Comai's lab and the group of Professor Ryutaro Tao at Kyoto University found, through investigating the transcriptomes of several dozen male and female plants, the specific genes involved in sex determination in the persimmon species Diospyros lotus . As persimmon is among the 5% of plants that exhibit dioecy , this discovery opened up agricultural opportunities for trait improvement, and the research received significant media interest. [ 12 ] [ 13 ]
Comai was named a Fellow of the AAAS in 2012. [ 14 ] He received the Distinguished Research Award from the College of Biological Sciences at UC Davis in 2015 for his accomplishments with the TILLING protocol. [ 15 ] In 2016, Comai was awarded an Institute Honorary Fellowship from the University of Bologna for his work on the genetic improvement of plants. [ 15 ] He was also given a Faculty Teaching Award from the College of Biological Sciences at UC Davis in 2017 for his innovations in teaching and his encouragement of high motivation among his students. [ 3 ] The 2017 "Innovation Prize for Agricultural Technology" from the American Society of Plant Biologists was presented to Comai for his work on TILLING protocols and plant trait development. [ 16 ]
Luca Incurvati is a logician and philosopher , currently an associate professor at the Institute for Logic, Language and Computation , University of Amsterdam . Incurvati's research areas include set theory , philosophy of mathematics , philosophy of language , and metaethics . In set theory and philosophy of mathematics, Incurvati has argued for the iterative conception of sets, based on a methodology he terms inference to the best conception . [ 1 ] Incurvati is currently the principal investigator of the Amsterdam-based project From the Expression of Disagreement to New Foundations for Expressivist Semantics , for which he was awarded an ERC Starting Grant of 1.5 million euros. [ 2 ] This project proposes an inferentialist expressivist treatment of disagreement, in particular arguing that the speech act of rejection is not reducible to negated assertion . For a paper produced as part of this project, Incurvati and coauthor Julian Schlöder received the 2019 Marc Sanders Prize in Metaethics. [ 3 ]
Incurvati earned his MPhil and PhD at St John's College , University of Cambridge , where he worked under the supervision of Michael Potter and Peter Smith. [ 4 ] He was awarded the Matthew Buncombe Prize for his MPhil thesis in 2005. Before his current position in Amsterdam, he was a lecturer at Cambridge, where he served as Director of Studies at Fitzwilliam , Gonville and Caius , and Magdalene colleges.
Luca Turin (born 20 November 1953) is a biophysicist and writer with a long-standing interest in bioelectronics, the sense of smell, perfumery, and the fragrance industry.
Turin was born in Beirut , Lebanon on 20 November 1953 into an Italian-Argentinian family, and raised in France, Italy and Switzerland. His father, Duccio Turin, was a UN diplomat and chief architect of the Palestinian refugee camps, [ 1 ] and his mother, Adela Turin (born Mandelli), is an art historian, designer, and award-winning children's author. [ 2 ] Turin studied Physiology and Biophysics at University College London and earned his PhD in 1978. [ 3 ] He worked at the CNRS from 1982 to 1992 and served as lecturer in Biophysics at University College London from 1992 to 2000.
After leaving the CNRS , Turin first held a visiting research position at the National Institutes of Health in North Carolina [ 4 ] before moving back to London , where he became a lecturer in biophysics at University College London . In 2001 Turin was hired as CTO of start-up company Flexitral, based in Chantilly , Virginia, to pursue rational odorant design based on his theories. In April 2010 he described this role in the past tense, [ 5 ] and the company's domain name appears to have been surrendered. [ 6 ]
In 2010, Turin was based at MIT , working on a project to develop an electronic nose using natural receptors, financed by DARPA . [ 5 ] In 2014 he moved to the Institute of Theoretical Physics at the University of Ulm where he was a Visiting Professor. [ 7 ] He is a Stavros Niarchos Researcher [ 8 ] in the neurobiology division at the Biomedical Sciences Research Center Alexander Fleming in Greece. [ 9 ] In 2021 he moved to the University of Buckingham, UK as Professor of Physiology in the Medical School.
A major prediction of Turin's vibration theory of olfaction is the isotope effect: that the normal and deuterated versions of a compound should smell different due to unique vibration frequencies, despite having the same shape. A 2001 study by Haffenden et al. showed humans able to distinguish benzaldehyde from its deuterated version. [ 10 ]
However, experimental tests published in Nature Neuroscience in 2004 by Keller and Vosshall failed to support this prediction, with human subjects unable to distinguish acetophenone and its deuterated counterpart. [ 11 ] The study was accompanied by an editorial, which considered the work of Keller and Vosshall to be "refutation of a theory that, while provocative, has almost no credence in scientific circles." It continued, "The only reason for the authors to do the study, or for Nature Neuroscience to publish it, is the extraordinary -- and inappropriate -- degree of publicity that the theory has received from uncritical journalists." [ 12 ] The journal also published a review of The Emperor of Scent , calling Chandler Burr's book about Turin and his theory "giddy and overwrought." [ 13 ] However, tests with animals have shown fish and insects able to distinguish isotopes by smell. [ 14 ] [ 15 ] Biophysical simulations published in Physical Review Letters in 2007 suggest that Turin's proposal is viable from a physics standpoint. [ 16 ]
The vibration theory received possible support from a 2004 paper published in the journal Organic & Biomolecular Chemistry by Takane and Mitchell, which shows that odor descriptions in the olfaction literature correlate more strongly with vibrational frequency than with molecular shape. [ 17 ]
In 2011, Turin and colleagues published a paper in PNAS showing drosophila fruit flies can distinguish between odorants and their deuterated counterparts. Tests on drosophila differ from human experiments by using an animal subject known to have a good sense of smell and free from psychological biases that may complicate human tests. [ 18 ] Drosophila were trained to avoid the deuterated odorant in a deuterated/normal pair, indicating a difference in odor. Furthermore, drosophila trained to avoid one deuterated odorant also avoided other deuterated odorants, chemically unrelated, indicating that the deuterated bond itself had a distinct smell. The authors identified a vibrational frequency that could be responsible and found it close to one found in nitriles. When flies trained to avoid deuterated odorants were exposed to the nitrile and its non-nitrile counterpart, the flies also avoided the nitrile, consistent with the theory that fly olfaction detects molecular vibrations. [ 19 ]
Two years later, in 2013, Turin and colleagues published a study in PLoS ONE showing that humans easily distinguish gas-chromatography -purified deuterated musk in double-blind tests. The team chose musks due to the high number of carbon-hydrogen bonds available for deuteration. They replicated the earlier results of Vosshall and Keller showing that humans cannot reliably distinguish between acetophenone and its deuterated counterpart, with 8 hydrogens, and showed that humans only begin to detect the isotope odor of the musks beginning at 14 deuteriums, or 50% deuteration. [ 20 ] Because Turin's proposed mechanism is a biological method of inelastic electron tunnelling spectroscopy , which exploits a quantum effect, his theory of olfaction mechanism has been described as an example of quantum biology . [ 21 ]
In response to Turin's 2013 paper, involving deuterated and undeuterated isotopomers of the musk cyclopentadecanone, [ 20 ] Block et al. in a 2015 paper in PNAS [ 22 ] report that the human musk-recognizing receptor, OR5AN1, identified using a heterologous olfactory receptor expression system and robustly responding to cyclopentadecanone and muscone (which has 30 hydrogens), fails to distinguish isotopomers of these compounds in vitro. Furthermore, the mouse (methylthio)methanethiol-recognizing receptor, MOR244-3, as well as other selected human and mouse olfactory receptors , responded similarly to normal, deuterated, and carbon-13 isotopomers of their respective ligands, paralleling results found with the musk receptor OR5AN1. Based on these findings, the authors conclude that the proposed vibration theory of olfaction does not apply to the human musk receptor OR5AN1, mouse thiol receptor MOR244-3, or other olfactory receptors examined. Additionally, theoretical analysis by the authors shows that the proposed electron transfer mechanism of the vibrational frequencies of odorants could be easily suppressed by quantum effects of nonodorant molecular vibrational modes. The authors conclude: "These and other concerns about electron transfer at olfactory receptors, together with our extensive experimental data, argue against the plausibility of the vibration theory ."

In commenting on this work, Vosshall writes "In PNAS, Block et al. ... shift the "shape vs. vibration" debate from olfactory psychophysics to the biophysics of the ORs themselves. The authors mount a sophisticated multidisciplinary attack on the central tenets of the vibration theory using synthetic organic chemistry, heterologous expression of olfactory receptors , and theoretical considerations to find no evidence to support the vibration theory of smell." [ 23 ] While Turin comments that Block used "cells in a dish rather than within whole organisms" and that "expressing an olfactory receptor in human embryonic kidney cells doesn't adequately reconstitute the complex nature of olfaction ...", Vosshall responds "Embryonic kidney cells are not identical to the cells in the nose ... but if you are looking at receptors, it's the best system in the world." [ 24 ] In a Letter to the Editor of PNAS , Turin et al. [ 25 ] raise concerns about Block et al. [ 22 ] and Block et al. respond. [ 26 ]

A recent study [ 27 ] describes the responses of primary olfactory neurons in tissue culture to isotopes and finds that a small fraction of the population (<1%) clearly discriminates between isotopes, some even giving an all-or-none response to H or D isotopomers of octanal. The authors attribute this to "hypersensitivity" of some receptors to differences in hydrophobicity between normal and deuterated odorants.
Turin filed one of the first patents for a semiconductor device made with protein. [ 28 ] Turin's recent work focuses on the relevance of his olfaction theory to more general mechanisms of G-protein coupled receptor activation. In an article [ 29 ] in Inference Review, he proposed that the electronic mechanism was a special case of a more general involvement of electron currents in GPCRs. A 2019 preprint [ 30 ] argues that the highest-resolution x-ray diffraction structure of rhodopsin, [ 31 ] considered the ancestor of all GPCRs, contains the elements of an electronic circuit. He has also reported detection of non-equilibrium electron spins in Drosophila by their radiofrequency emissions, [ 32 ] though this is described as a "work in progress".
In 1988, Turin began work at the lab led by neuroscience researcher Henri Korn at the Pasteur Institute . There, Turin and his colleague Nicole Ropert reported to their superiors that they believed some of Korn's research on neurotransmitters was based on fabricated results. [ 33 ] After Turin made a formal request that the CNRS investigate the allegations, he was told to find work outside France; Ropert was also asked to leave. [ 34 ]
Korn was awarded the prestigious Richard Lounsbery Award in 1992 and became a member of the National Academy of Sciences in the U.S. and the French Academy of Sciences. [ 35 ] Then in 2007, re-analysis of Korn's data by Jacques Ninio in the Journal of Neurophysiology showed serious anomalies that suggested the results were indeed fabricated. [ 33 ]
Turin is the author of the book The Secret of Scent (2006), which details the history and science of his theory of olfaction; an acclaimed critical guide to perfume in French, Parfums: Le Guide , with two editions in 1992 and 1994; and is co-author of the English-language books Perfumes: The A-Z Guide (2008) and The Little Book of Perfumes (2011). He is also the subject of the 2002 book The Emperor of Scent by Chandler Burr [ 4 ] and the 1995 BBC Horizons documentary "A Code in the Nose."
Since 2003, Turin has also written a regular column on perfume, "Duftnote," for NZZ Folio , the German-language monthly magazine of Swiss newspaper Neue Zürcher Zeitung ; the column, which ended in 2014, was also published in English on the magazine's website. [ 36 ] The collected columns have been published as a book. [ 37 ]
In 2001 and 2004, Turin won the Prix Jasmin, the highest honor for perfume writing in France. He won the Jasmine Prize in the UK in 2009. [ 38 ] | https://en.wikipedia.org/wiki/Luca_Turin |
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).
For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.
Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ]
Many devices indicate position by closing and opening switches. If that device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ:
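Decimal 3: 011
Decimal 4: 100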
The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value.
This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes.
In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".
In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off ( ... 11001100 ... ); the next digit a pattern of 4 on, 4 off; the i -th least significant bit a pattern of 2^i on, 2^i off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern 2^(n−1) on, 2^(n−1) off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2^(n−2) places. The four-bit version of this is shown below:
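Decimal  Binary  Gray
 0       0000    0000
 1       0001    0001
 2       0010    0011
 3       0011    0010
 4       0100    0110
 5       0101    0111
 6       0110    0101
 7       0111    0100
 8       1000    1100
 9       1001    1101
10       1010    1111
11       1011    1110
12       1100    1010
13       1101    1011
14       1110    1001
15       1111    1000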
For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ]
Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ]
The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ]
Reflected binary codes were applied to mathematical puzzles before they became known to engineers.
The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the French Louis Gros in 1872. [ 26 ] [ 13 ]
It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the French Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ]
Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ]
The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension.
When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ]
About the same time, in 1874, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna that used a 5-bit reflected binary code for the same purpose. [ 43 ] [ 13 ]
Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ]
Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.
Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.
For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.
Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.
In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.
Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.
Gray codes are also used in labelling the axes of Karnaugh maps since 1953 [ 45 ] [ 46 ] [ 47 ] as well as in Händler circle graphs since 1958, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization .
In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before applying an error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies.
If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.
A balanced Gray code, which flips every bit equally often, can be constructed. [ 52 ] Since bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.
George R. Stibitz utilized a reflected binary code in a binary pulse counting device as early as 1941. [ 11 ] [ 12 ] [ 13 ]
A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.
Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders, however the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or if the counter runs in binary then the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] it is possible that differences in the arrival times of the binary data bits into the binary-to-Gray conversion circuit will mean that the code could go briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. [ 54 ]
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ]
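A minimal sketch in C of this convert-increment-convert step (the function name is illustrative; this is not Doran's latch-based method):

typedef unsigned int uint;

/* Step a Gray-code counter: convert Gray to binary with a prefix XOR,
   add one in ordinary binary, and convert back with one shift-XOR. */
uint next_gray(uint g)
{
    uint b = g;
    for (uint m = g >> 1; m != 0; m >>= 1)
        b ^= m;             /* now b is the binary value of g */
    b += 1u;                /* increment in ordinary binary */
    return b ^ (b >> 1);    /* convert back to Gray */
}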
As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ]
The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list:
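2-bit list: 00, 01, 11, 10
Reflected: 10, 11, 01, 00
Original list prefixed with 0 : 000, 001, 011, 010
Reflected list prefixed with 1 : 110, 111, 101, 100
Concatenated 3-bit list: 000, 001, 011, 010, 110, 111, 101, 100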
The one-bit Gray code is G 1 = ( 0,1 ). This can be thought of as built recursively as above from a zero-bit Gray code G 0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating G n +1 from G n makes the following properties of the standard reflecting code clear:
These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the n th Gray code is obtained by computing n ⊕ ⌊ n 2 ⌋ {\displaystyle n\oplus \left\lfloor {\tfrac {n}{2}}\right\rfloor } . Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position i {\displaystyle i} of codewords are inverted, the order of neighbouring blocks of 2 i {\displaystyle 2^{i}} codewords is reversed. For example, if bit 0 is inverted in a 3 bit codeword sequence, the order of two neighbouring codewords is reversed
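000, 001, 010, 011, 100, 101, 110, 111 → 001, 000, 011, 010, 101, 100, 111, 110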
If bit 1 is inverted, blocks of 2 codewords change order:
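000, 001, 010, 011, 100, 101, 110, 111 → 010, 011, 000, 001, 110, 111, 100, 101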
If bit 2 is inverted, blocks of 4 codewords reverse order:
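000, 001, 010, 011, 100, 101, 110, 111 → 100, 101, 110, 111, 000, 001, 010, 011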
Thus, performing an exclusive or on a bit b i {\displaystyle b_{i}} at position i {\displaystyle i} with the bit b i + 1 {\displaystyle b_{i+1}} at position i + 1 {\displaystyle i+1} leaves the order of codewords intact if b i + 1 = 0 {\displaystyle b_{i+1}={\mathtt {0}}} , and reverses the order of blocks of 2 i + 1 {\displaystyle 2^{i+1}} codewords if b i + 1 = 1 {\displaystyle b_{i+1}={\mathtt {1}}} . Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.
A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. Assuming g i {\displaystyle g_{i}} is the i {\displaystyle i} th Gray-coded bit ( g 0 {\displaystyle g_{0}} being the most significant bit), and b i {\displaystyle b_{i}} is the i {\displaystyle i} th binary-coded bit ( b 0 {\displaystyle b_{0}} being the most-significant bit), the reverse translation can be given recursively: b 0 = g 0 {\displaystyle b_{0}=g_{0}} , and b i = g i ⊕ b i − 1 {\displaystyle b_{i}=g_{i}\oplus b_{i-1}} . Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.
To construct the binary-reflected Gray code iteratively, at step 0 start with the c o d e 0 = 0 {\displaystyle \mathrm {code} _{0}={\mathtt {0}}} , and at step i > 0 {\displaystyle i>0} find the bit position of the least significant 1 in the binary representation of i {\displaystyle i} and flip the bit at that position in the previous code c o d e i − 1 {\displaystyle \mathrm {code} _{i-1}} to get the next code c o d e i {\displaystyle \mathrm {code} _{i}} . The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values.
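A minimal C sketch of this iterative construction for 4-bit codes, using the standard trick i & -i to isolate the least significant set bit:

#include <stdio.h>

/* Print the low `bits` bits of x, most significant first. */
static void print_bits(unsigned x, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        putchar(((x >> i) & 1u) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    unsigned code = 0;              /* code_0 = 0 */
    print_bits(code, 4);
    for (unsigned i = 1; i < 16; i++) {
        code ^= i & -i;             /* flip the bit at the position of the
                                       least significant 1 in i */
        print_bits(code, 4);
    }
    return 0;
}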
The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ]
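A typical implementation of the two conversions (function names are illustrative); the third function replaces the bit-at-a-time loop with a logarithmic number of shift-and-XOR steps for 32-bit values:

typedef unsigned int uint;

/* Binary to Gray: the n-th Gray code is n XOR floor(n/2). */
uint binary_to_gray(uint num)
{
    return num ^ (num >> 1);
}

/* Gray to binary, one bit at a time: a prefix XOR in which each binary
   bit is the XOR of all Gray bits at or above its position. */
uint gray_to_binary(uint num)
{
    uint mask = num;
    while (mask) {
        mask >>= 1;
        num ^= mask;
    }
    return num;
}

/* Gray to binary for 32-bit values in five steps instead of one per bit. */
uint gray_to_binary32(uint num)
{
    num ^= num >> 16;
    num ^= num >> 8;
    num ^= num >> 4;
    num ^= num >> 2;
    num ^= num >> 1;
    return num;
}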
On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ending in a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.
In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).
It is possible to construct binary Gray codes with n bits with a length of less than 2^n, if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values either at the beginning and the end or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2n that include zero and use the minimum number of bits.
Ternary numbers and their reflected ternary Gray codes:

  0 → 000     1 → 001     2 → 002
 10 → 012    11 → 011    12 → 010
 20 → 020    21 → 021    22 → 022
100 → 122   101 → 121   102 → 120
110 → 110   111 → 111   112 → 112
120 → 102   121 → 101   122 → 100
200 → 200   201 → 201   202 → 202
210 → 212   211 → 211   212 → 210
220 → 220   221 → 221   222 → 222
There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings.
For example, a 3-ary ( ternary ) Gray code would use the values 0, 1, 2. [ 31 ] The ( n , k )-Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00, 01, 02, 12, 11, 10, 20, 21, 22. The ( n , k )-Gray code may be constructed recursively, like the BRGC, or may be constructed iteratively . An algorithm to iteratively generate the ( n , k )-Gray code is presented below (in C ):
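A sketch of such an iterative generator follows; function and variable names are illustrative. Note that it produces the cyclic, wrapping variant discussed in the next paragraph (for (3, 2): 00, 01, 02, 12, 10, 11, 21, 22, 20) rather than the reflected sequence listed above.

#include <stdio.h>

/* Convert `value` to an (n,k)-Gray codeword with n = base, k = digits.
   gray[0] receives the least significant digit. Each Gray digit is the
   base-n digit shifted down by the sum of the more significant Gray
   digits, modulo the base. */
void to_gray(unsigned base, unsigned digits, unsigned value, unsigned gray[])
{
    unsigned baseN[digits];     /* `value` in ordinary base-n digits */
    unsigned i;

    for (i = 0; i < digits; i++) {
        baseN[i] = value % base;
        value /= base;
    }

    unsigned shift = 0;
    while (i--) {               /* most significant digit downwards */
        gray[i] = (baseN[i] + shift) % base;
        shift += base - gray[i];   /* add base - gray[i] so shift stays non-negative */
    }
}

int main(void)
{
    unsigned gray[2];
    /* prints the cyclic (3,2)-Gray sequence: 00, 01, 02, 12, 10, 11, 21, 22, 20 */
    for (unsigned v = 0; v < 9; v++) {
        to_gray(3, 2, v, gray);
        printf("%u%u\n", gray[1], gray[0]);
    }
    return 0;
}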
There are other Gray code algorithms for ( n , k )-Gray codes. The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.
Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.
See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation.
Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the number of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence ( δ k ) {\displaystyle (\delta _{k})} ; the transition counts ( spectrum ) of G are the collection of integers defined by
λ k = | { j ∈ Z R n : δ j = k } | , for k ∈ Z n {\displaystyle \lambda _{k}=|\{j\in \mathbb {Z} _{R^{n}}:\delta _{j}=k\}|\,,{\text{ for }}k\in \mathbb {Z} _{n}}
A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have λ k = R n n {\displaystyle \lambda _{k}={\tfrac {R^{n}}{n}}} for all k . Clearly, when R = 2 {\displaystyle R=2} , such codes exist only if n is a power of 2. [ 64 ] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either 2 ⌊ 2 n 2 n ⌋ {\displaystyle 2\left\lfloor {\tfrac {2^{n}}{2n}}\right\rfloor } or 2 ⌈ 2 n 2 n ⌉ {\displaystyle 2\left\lceil {\tfrac {2^{n}}{2n}}\right\rceil } . [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ]
For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced, [ 52 ] whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the five positions; in the best case, four positions have six transitions each and one has eight. [ 52 ]
We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code G ′ {\displaystyle G'} given an n -digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of G = g 0 , … , g 2 n − 1 {\displaystyle G=g_{0},\ldots ,g_{2^{n}-1}} into an even number L of non-empty blocks of the form
{ g 0 } , { g 1 , … , g k 2 } , { g k 2 + 1 , … , g k 3 } , … , { g k L − 2 + 1 , … , g − 2 } , { g − 1 } {\displaystyle \left\{g_{0}\right\},\left\{g_{1},\ldots ,g_{k_{2}}\right\},\left\{g_{k_{2}+1},\ldots ,g_{k_{3}}\right\},\ldots ,\left\{g_{k_{L-2}+1},\ldots ,g_{-2}\right\},\left\{g_{-1}\right\}}
where k 1 = 0 {\displaystyle k_{1}=0} , k L − 1 = − 2 {\displaystyle k_{L-1}=-2} , and k L ≡ − 1 ( mod 2 n ) {\displaystyle k_{L}\equiv -1{\pmod {2^{n}}}} . This partition induces an ( n + 2 ) {\displaystyle (n+2)} -digit Gray code.
If we define the transition multiplicities
m i = | { j : δ k j = i , 1 ≤ j ≤ L } | {\displaystyle m_{i}=\left|\left\{j:\delta _{k_{j}}=i,1\leq j\leq L\right\}\right|}
to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the ( n + 2)-digit Gray code induced by this partition the transition spectrum λ i ′ {\displaystyle \lambda '_{i}} is
λ i ′ = { 4 λ i − 2 m i , if 0 ≤ i < n L , otherwise {\displaystyle \lambda '_{i}={\begin{cases}4\lambda _{i}-2m_{i},&{\text{if }}0\leq i<n\\L,&{\text{ otherwise }}\end{cases}}}
The delicate part of this construction is to find an adequate partitioning of a balanced n -digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit i {\displaystyle i} transition and splitting another block at another digit i {\displaystyle i} transition produces a different Gray code with exactly the same transition spectrum λ i ′ {\displaystyle \lambda '_{i}} , so one may for example [ 65 ] designate the first m i {\displaystyle m_{i}} transitions at digit i {\displaystyle i} as those that fall between two blocks. Uniform codes can be found when R ≡ 0 ( mod 4 ) {\displaystyle R\equiv 0{\pmod {4}}} and R n ≡ 0 ( mod n ) {\displaystyle R^{n}\equiv 0{\pmod {n}}} , and this construction can be extended to the R -ary case as well. [ 66 ]
Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ]
Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.
We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube Q n = ( V n , E n ) {\displaystyle Q_{n}=(V_{n},E_{n})} into levels of vertices that have equal weight, i.e.
V n ( i ) = { v ∈ V n : v has weight i } {\displaystyle V_{n}(i)=\{v\in V_{n}:v{\text{ has weight }}i\}}
for 0 ≤ i ≤ n {\displaystyle 0\leq i\leq n} . These levels satisfy | V n ( i ) | = ( n i ) {\displaystyle |V_{n}(i)|=\textstyle {\binom {n}{i}}} . Let Q n ( i ) {\displaystyle Q_{n}(i)} be the subgraph of Q n {\displaystyle Q_{n}} induced by V n ( i ) ∪ V n ( i + 1 ) {\displaystyle V_{n}(i)\cup V_{n}(i+1)} , and let E n ( i ) {\displaystyle E_{n}(i)} be the edges in Q n ( i ) {\displaystyle Q_{n}(i)} . A monotonic Gray code is then a Hamiltonian path in Q n {\displaystyle Q_{n}} such that whenever δ 1 ∈ E n ( i ) {\displaystyle \delta _{1}\in E_{n}(i)} comes before δ 2 ∈ E n ( j ) {\displaystyle \delta _{2}\in E_{n}(j)} in the path, then i ≤ j {\displaystyle i\leq j} .
An elegant construction of monotonic n -digit Gray codes for any n is based on the idea of recursively building subpaths P n , j {\displaystyle P_{n,j}} of length 2 ( n j ) {\displaystyle 2\textstyle {\binom {n}{j}}} having edges in E n ( j ) {\displaystyle E_{n}(j)} . [ 69 ] We define P 1 , 0 = ( 0 , 1 ) {\displaystyle P_{1,0}=({\mathtt {0}},{\mathtt {1}})} , P n , j = ∅ {\displaystyle P_{n,j}=\emptyset } whenever j < 0 {\displaystyle j<0} or j ≥ n {\displaystyle j\geq n} , and
P n + 1 , j = 1 P n , j − 1 π n , 0 P n , j {\displaystyle P_{n+1,j}={\mathtt {1}}P_{n,j-1}^{\pi _{n}},{\mathtt {0}}P_{n,j}}
otherwise. Here, π n {\displaystyle \pi _{n}} is a suitably defined permutation and P π {\displaystyle P^{\pi }} refers to the path P with its coordinates permuted by π {\displaystyle \pi } . These paths give rise to two monotonic n -digit Gray codes G n ( 1 ) {\displaystyle G_{n}^{(1)}} and G n ( 2 ) {\displaystyle G_{n}^{(2)}} given by
G n ( 1 ) = P n , 0 P n , 1 R P n , 2 P n , 3 R ⋯ and G n ( 2 ) = P n , 0 R P n , 1 P n , 2 R P n , 3 ⋯ {\displaystyle G_{n}^{(1)}=P_{n,0}P_{n,1}^{R}P_{n,2}P_{n,3}^{R}\cdots {\text{ and }}G_{n}^{(2)}=P_{n,0}^{R}P_{n,1}P_{n,2}^{R}P_{n,3}\cdots }
The choice of π n {\displaystyle \pi _{n}} which ensures that these codes are indeed Gray codes turns out to be π n = E − 1 ( π n − 1 2 ) {\displaystyle \pi _{n}=E^{-1}\left(\pi _{n-1}^{2}\right)} . The first few values of P n , j {\displaystyle P_{n,j}} are shown in the table below.
These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines .
Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph Q 2 n + 1 ( n ) {\displaystyle Q_{2n+1}(n)} is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for n ≤ 15 {\displaystyle n\leq 15} , and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839 N , where N is the number of vertices in the middle-level subgraph. [ 70 ]
Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ]
Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.
Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ]
The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.
If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.
Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish $2^n$ positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most $2^n - 2n$ positions and that for prime n the limit is $2^n - 2$ positions. [ 80 ] The authors went on to generate a 504-position single-track code of length 9 which they believe is optimal. Since this number is larger than $2^8 = 256$, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.
An STGC for P = 30 and n = 5 is reproduced here:
Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size.
The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. [ 82 ]
Since this 30-degree example was added, there has been considerable interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] based on previous work, [ 80 ] discovered a 9-bit single-track Gray code that gives a 1-degree resolution. This Gray code was used to design an actual device, published on the site Thingiverse ; the device [ 84 ] was designed by etzenseep (Florian Bauer) in September 2022.
An STGC for P = 360 and n = 9 is reproduced here:
Two-dimensional Gray codes are used in communication to minimize the number of bit errors in quadrature amplitude modulation (QAM) adjacent points in the constellation . In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. [ 85 ]
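A minimal Gray-mapped 16-QAM labelling illustrates the idea: Gray-code each axis index independently and concatenate the bits. This is a generic sketch, not the bit assignment of any particular standard.

```python
# Gray-mapped 16-QAM: horizontally or vertically adjacent points differ in
# exactly one bit; diagonal neighbours differ in two.
def gray(x):
    return x ^ (x >> 1)

def bit_diff(a, b):
    return bin(a ^ b).count("1")

labels = {(i, q): (gray(i) << 2) | gray(q) for i in range(4) for q in range(4)}

assert all(bit_diff(labels[(i, q)], labels[(i + 1, q)]) == 1
           for i in range(3) for q in range(4))      # horizontal neighbours
assert all(bit_diff(labels[(i, q)], labels[(i, q + 1)]) == 1
           for i in range(4) for q in range(3))      # vertical neighbours
```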
Two-dimensional Gray codes also have uses in location identification schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric is used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ]
If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". In the extracted bits, the code counts backwards if the original value is increased further. The reason is that Gray-encoded values do not overflow in the way classic binary encodings do when incremented past the "highest" value.
Example: the value 7, the highest 3-bit Gray count, is encoded in 4 bits as (0)100. Adding 1 yields the value 8, encoded in Gray as 1100. The last 3 bits do not overflow; instead, they count backwards as the original 4-bit code is increased further.
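The effect is easy to reproduce with the usual binary-to-Gray conversion g = x xor (x >> 1), as in this short sketch:

```python
# The low 3 bits of a 4-bit Gray code retrace the 3-bit Gray sequence in
# reverse once the encoded value passes 7 (the "excess Gray code" effect).
def gray(x):
    return x ^ (x >> 1)

for v in range(7, 12):
    g = gray(v)
    print(v, format(g, "04b"), "low 3 bits:", format(g & 0b111, "03b"))
# 7 0100, 8 1100, 9 1101, 10 1111, 11 1110 -> low bits 100, 100, 101, 111, 110
```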
When working with sensors that output multiple Gray-encoded values serially, one should therefore pay attention to whether the sensor produces those multiple values encoded in one single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected.
The bijective mapping $\{0 \leftrightarrow 00, 1 \leftrightarrow 01, 2 \leftrightarrow 11, 3 \leftrightarrow 10\}$ establishes an isometry between the metric space over $\mathbb{Z}_2^2$ with the metric given by the Hamming distance and the metric space over the finite ring $\mathbb{Z}_4$ (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces $\mathbb{Z}_2^{2m}$ and $\mathbb{Z}_4^m$. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in $\mathbb{Z}_2^2$ of ring-linear codes from $\mathbb{Z}_4$. [ 87 ] [ 88 ]
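The isometry can be checked exhaustively for single symbols, as in the sketch below; the extension to length-m words is coordinatewise.

```python
# The Gray map {0->00, 1->01, 2->11, 3->10} carries the Lee distance on Z4
# to the Hamming distance on Z2^2; verified by checking all 16 pairs.
GRAY_MAP = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee(a, b):                      # Lee distance on Z4
    d = (a - b) % 4
    return min(d, 4 - d)

def hamming(u, v):
    return sum(x != y for x, y in zip(u, v))

assert all(lee(a, b) == hamming(GRAY_MAP[a], GRAY_MAP[b])
           for a in range(4) for b in range(4))
```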
There are a number of binary codes similar to Gray codes, including:
The following binary-coded decimal (BCD) codes are Gray code variants as well: | https://en.wikipedia.org/wiki/Lucal_code |
"Lucas' reagent" is a solution of anhydrous zinc chloride in concentrated hydrochloric acid . This solution is used to classify alcohols of low molecular weight. The reaction is a substitution in which the chloride replaces a hydroxyl group. A positive test is indicated by a change from clear and colourless to turbid, signalling formation of a chloroalkane . [ 1 ] Also, the best results for this test are observed in tertiary alcohols, as they form the respective alkyl halides fastest due to higher stability of the intermediate tertiary carbocation. The test was reported in 1930 and became a standard method in qualitative organic chemistry. [ 2 ] The test has since become somewhat obsolete with the availability of various spectroscopic and chromatographic methods of analysis. It was named after Howard Lucas (1885–1963).
The Lucas test in alcohols is a test to differentiate between primary, secondary, and tertiary alcohols . It is based on the difference in reactivity of the three classes of alcohols with hydrogen halides via an S N 1 reaction : [ 3 ]
(CH 3 ) 3 COH + HCl → (CH 3 ) 3 CCl + H 2 O
The differing reactivity reflects the differing ease of formation of the corresponding carbocations . Tertiary carbocations, which are stabilized by hyperconjugation, are far more stable than secondary carbocations, and primary carbocations are the least stable.
An equimolar mixture of ZnCl 2 and concentrated HCl is the reagent. The alcohol is protonated, the H 2 O group formed leaves, forming a carbocation, and the nucleophile Cl − (which is present in excess) readily attacks the carbocation, forming the chloroalkane. Tertiary alcohols react immediately with Lucas reagent as evidenced by turbidity owing to the low solubility of the organic chloride in the aqueous mixture. Secondary alcohols react within five or so minutes (depending on their solubility). Primary alcohols do not react appreciably with Lucas reagent at room temperature. [ 3 ] Hence, the time taken for turbidity to appear is a measure of the reactivity of the class of alcohol, and this time difference is used to differentiate among the three classes of alcohols:
The Lucas test is usually an alternative to the oxidation test, which is used to identify primary and secondary alcohols. | https://en.wikipedia.org/wiki/Lucas'_reagent
In number theory , Lucas's theorem expresses the remainder of division of the binomial coefficient $\tbinom{m}{n}$ by a prime number p in terms of the base p expansions of the integers m and n .
Lucas's theorem first appeared in 1878 in papers by Édouard Lucas . [ 1 ]
For non-negative integers m and n and a prime p , the following congruence relation holds:
$$\binom{m}{n} \equiv \prod_{i=0}^{k} \binom{m_i}{n_i} \pmod{p},$$
where
$$m = m_k p^k + m_{k-1} p^{k-1} + \cdots + m_1 p + m_0$$
and
$$n = n_k p^k + n_{k-1} p^{k-1} + \cdots + n_1 p + n_0$$
are the base p expansions of m and n respectively. This uses the convention that $\tbinom{m}{n} = 0$ if m < n .
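The theorem yields an immediate algorithm for binomial coefficients modulo a prime, processing one base-p digit per loop iteration; a minimal sketch:

```python
# Compute C(m, n) mod p digit by digit in base p, per Lucas's theorem.
from math import comb

def binom_mod_p(m, n, p):
    result = 1
    while m or n:
        mi, ni = m % p, n % p
        if ni > mi:                  # convention: C(mi, ni) = 0 if ni > mi
            return 0
        result = result * comb(mi, ni) % p
        m, n = m // p, n // p
    return result

assert binom_mod_p(10, 3, 7) == comb(10, 3) % 7   # both equal 1
```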
There are several ways to prove Lucas's theorem.
Let M be a set with m elements, and divide it into $m_i$ cycles of length $p^i$ for the various values of i . Then each of these cycles can be rotated separately, so that a group G which is the Cartesian product of cyclic groups $C_{p^i}$ acts on M . It thus also acts on subsets N of size n . Since the number of elements in G is a power of p , the same is true of any of its orbits. Hence, $\tbinom{m}{n}$ modulo p equals the number of sets N whose orbit is of size 1, i.e., the number of fixed points of this group action. The fixed points are those subsets N that are a union of some of the cycles. This means that N must have exactly $n_i$ cycles of size $p^i$ for each i , for the same reason that the integer n has a unique representation in base p . Thus the number of choices for N is exactly $\prod_{i=0}^{k} \binom{m_i}{n_i} \pmod{p}$.
This proof is due to Nathan Fine. [ 2 ]
If p is a prime and n is an integer with 1 ≤ n ≤ p − 1, then the numerator of the binomial coefficient
$$\binom{p}{n} = \frac{p \cdot (p-1) \cdots (p-n+1)}{n \cdot (n-1) \cdots 1}$$
is divisible by p but the denominator is not. Hence p divides $\tbinom{p}{n}$. In terms of ordinary generating functions, this means that
$$(1+x)^p \equiv 1 + x^p \pmod{p}.$$
Continuing by induction, we have for every nonnegative integer i that
$$(1+x)^{p^i} \equiv 1 + x^{p^i} \pmod{p}.$$
Now let m be a nonnegative integer, and let p be a prime. Write m in base p , so that $m = \sum_{i=0}^{k} m_i p^i$ for some nonnegative integer k and integers $m_i$ with 0 ≤ $m_i$ ≤ p − 1. Then
$$\sum_{n=0}^{m} \binom{m}{n} x^n = (1+x)^m = \prod_{i=0}^{k} \left((1+x)^{p^i}\right)^{m_i} \equiv \prod_{i=0}^{k} \left(1 + x^{p^i}\right)^{m_i} = \prod_{i=0}^{k} \left(\sum_{n_i=0}^{m_i} \binom{m_i}{n_i} x^{n_i p^i}\right) = \sum_{n=0}^{m} \left(\prod_{i=0}^{k} \binom{m_i}{n_i}\right) x^n \pmod{p},$$
as the representation of n in base p is unique and, in the final product, $n_i$ is the i th digit in the base p representation of n . This proves Lucas's theorem.
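The key lemma is simple to confirm numerically for any given prime, e.g.:

```python
# Check (1+x)^p = 1 + x^p (mod p) for p = 7 by reducing each binomial
# coefficient: only the x^0 and x^p coefficients survive.
from math import comb

p = 7
assert [comb(p, i) % p for i in range(p + 1)] == [1] + [0] * (p - 1) + [1]
```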
Lucas's theorem can be generalized to give an expression for the remainder when $\tbinom{m}{n}$ is divided by a prime power $p^k$. However, the formulas become more complicated.
If the modulo is the square of a prime p , the following congruence relation holds for all 0 ≤ s ≤ r ≤ p − 1, a ≥ 0, and b ≥ 0.
where $H_n = 1 + \tfrac{1}{2} + \tfrac{1}{3} + \cdots + \tfrac{1}{n}$ is the n th harmonic number . [ 3 ]
Generalizations of Lucas's theorem for higher prime powers $p^k$ are also given by Davis and Webb (1990) [ 4 ] and Granville (1997). [ 5 ] | https://en.wikipedia.org/wiki/Lucas's_theorem
In probability theory , Luce's choice axiom , formulated by R. Duncan Luce (1959), [ 1 ] states that the relative odds of selecting one item over another from a pool of many items is not affected by the presence or absence of other items in the pool. Selection of this kind is said to have " independence from irrelevant alternatives " (IIA). [ 2 ]
Consider a set $X$ of possible outcomes, and consider a selection rule $P$, such that for any $a \in A \subset X$ with $A$ a finite set, the selector selects $a$ from $A$ with probability $P(a \mid A)$.
Luce proposed two choice axioms. The second one is usually meant by "Luce's choice axiom", as the first one is usually called " independence from irrelevant alternatives " (IIA). [ 3 ]
Luce's choice axiom 1 (IIA): if $P(a \mid A) = 0$ and $P(b \mid A) > 0$, then for any $B \subset A$ with $a, b \in B$, we still have $P(a \mid B) = 0$.
Luce's choice axiom 2 ("path independence"): $P(a \mid A) = P(a \mid B) \sum_{b \in B} P(b \mid A)$ for any $a \in B \subset A$. [ 4 ]
Luce's choice axiom 1 is implied by choice axiom 2.
Define the matching law selection rule $P(a \mid A) = \frac{u(a)}{\sum_{a' \in A} u(a')}$, for some "value" function $u : A \to (0, \infty)$. This is sometimes called the softmax function, or the Boltzmann distribution .
Theorem : Any matching law selection rule satisfies Luce's choice axiom. Conversely, if $P(a \mid A) > 0$ for all $a \in A \subset X$, then Luce's choice axiom implies that it is a matching law selection rule.
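The forward direction of the theorem is easy to check numerically. A minimal sketch with an arbitrary value function (names illustrative):

```python
# A matching-law selection rule and a spot check of choice axiom 2,
# P(a|A) = P(a|B) * sum_{b in B} P(b|A), on a three-element example.
def P(a, A, u):
    return u[a] / sum(u[x] for x in A)

u = {"x": 2.0, "y": 3.0, "z": 5.0}
A = {"x", "y", "z"}
B = {"x", "y"}

lhs = P("x", A, u)
rhs = P("x", B, u) * sum(P(b, A, u) for b in B)
assert abs(lhs - rhs) < 1e-12      # 0.2 on both sides
```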
In economics , it can be used to model a consumer 's tendency to choose one brand of product over another. [ citation needed ]
In behavioral psychology , it is used to model response behavior in the form of matching law .
In cognitive science , it is used to model approximately rational decision processes. | https://en.wikipedia.org/wiki/Luce's_choice_axiom |
Luche reduction is the selective organic reduction of α,β-unsaturated ketones to allylic alcohols . [ 1 ] [ 2 ] [ 3 ] The active reductant is described as "cerium borohydride", which is generated in situ from NaBH 4 and CeCl 3 (H 2 O) 7 . [ 4 ]
The Luche reduction can be conducted chemoselectively toward ketones in the presence of aldehydes, or toward α,β-unsaturated ketones in the presence of a non- conjugated ketone. [ 5 ]
An enone forms an allylic alcohol in a 1,2-addition, and the competing conjugate 1,4-addition is suppressed.
The selectivity can be explained in terms of the HSAB theory : carbonyl groups require hard nucleophiles for 1,2-addition. The hardness of the borohydride is increased by replacing hydride groups with alkoxide groups, a reaction catalyzed by the cerium salt by increasing the electrophilicity of the carbonyl group. This is selective for ketones because they are more Lewis basic .
In one application, a ketone is selectively reduced in the presence of an aldehyde: in methanol as solvent, the aldehyde forms a methoxy acetal that is unreactive under the reducing conditions. | https://en.wikipedia.org/wiki/Luche_reduction
Luchezar L. Avramov ( Bulgarian : Лъчезар Л. Аврамов ) is a Bulgarian-American mathematician who works in commutative algebra . He held the Dale M. Jensen Chair in Mathematics at the University of Nebraska , where he is now professor emeritus. [ 1 ]
Avramov was educated at Moscow State University , earning a master's degree in 1970, a Ph.D. in 1975 (under the supervision of Evgeny Golod ), and a D.Sc. in 1986. [ 2 ] He worked for the Bulgarian Academy of Sciences in 1970–1981 and 1989–1990, and Sofia University in 1981–1989, before moving to the United States in 1991 to become a professor at Purdue University . He moved again to the University of Nebraska in 2002. [ 3 ]
In 2012, he became a Fellow of the American Mathematical Society in its inaugural class. [ 4 ] | https://en.wikipedia.org/wiki/Luchezar_Avramov |
Lucia V. Streng (November 6, 1909 – April 28, 1995) was a Russian-born American chemist . She spent much of her career studying the noble gases and their properties, successfully synthesizing krypton difluoride . She and her husband, Alex G. Streng , both held positions at Temple University . [ 2 ]
Streng was among the first women to receive a degree in mining engineering from Donetsk Mining Institute . She was born in the Russian Empire. During World War II she fled the Soviet Union with her husband and son. The family settled in West Germany for several years, then emigrated to the United States in 1950. Lucia Streng earned money painting china lamps until she and her husband found positions at Temple University . [ 3 ]
Lucia Streng became a research associate at the Temple University Research Institute several years after her husband, Alex G. Streng , was hired as a research chemist. She performed analytical work for the federal Bureau of Mines as well as private companies. In 1963, Streng reported the successful photochemical synthesis of krypton difluoride , a result that no one else was able to produce until 1975. [ 3 ] [ 4 ] [ 5 ]
Streng published a number of papers, often relating to experimental work with the noble gases krypton [ 6 ] [ 7 ] and xenon . [ 8 ] [ 9 ] [ 10 ] [ 11 ] Her contributions were sometimes noted in a manner less formal than shared authorship: in the acknowledgements of one of Alex Streng's papers, he thanked Lucia and another frequent collaborator, Abraham D. Kirshenbaum , for "their contributions in the experimental work." [ 12 ]
Lucia Streng retired from the Research Institute in 1975. [ 3 ] | https://en.wikipedia.org/wiki/Lucia_V._Streng |
Lucid Nation is an American Los Angeles -based experimental rock band formed in 1995, made up of Tamra Spivey (stage name Tamra Lucid) and R.C. Hogart (stage name Ronnie Pontiac).
Lucid Nation was formed in Los Angeles in 1994, when founding drummer, Debbie Haliday, joined Spivey and Ronnie Pontiac to form a riot grrrl band. [ 2 ] An early show was in a downtown LA art gallery opening for Team Dresch , followed by a show opening for Bikini Kill in Montebello . Lucid Nation toured the West Coast next, playing seven riot grrrl conventions in one summer. They also backed Warhol superstar Holly Woodlawn at several live shows. [ 3 ]
At Koo's Anarchist Cafe in Santa Ana, California the band played matinees promoted by Peace Punk and McCarley, including Food Not Bombs fundraisers. At these shows they became acquainted with the local Black Panther Party , which had renamed itself New Panther Vanguard Movement . The New Panther Vanguard Movement helped distribute Lucid Nation zines including Eracism to prisons all over the western United States. [ 4 ]
Lucid Nation turned to Tia Sprocket, formerly of Sexpod , who was on a break from touring with Luscious Jackson . After the tour, the band (Spivey and Pontiac) invited Sprocket to write and record with them back in L.A. Spivey's former bass teacher, Margaret "Grit" Maldonado (bassist of Girl Jesus ), began playing with them. [ 5 ] Two songs from DNA , "Las Vegas the Instrumental" and "Fun", were later chosen by Sasha Grey for two scenes in avant garde porn filmmaker Jack the Zipper's "Naked and Famous". [ 6 ]
In 2000 Lucid Nation put out another collection of recordings from the DNA sessions called Suburban Legends , a totally improvisational album. The album got the attention of Randy Roark (assistant to Allen Ginsberg for sixteen years) who was interested in Spivey's writing. In 2002 Laccoon Press released "Dialogue of a Hundred Preoccupations" by Roark and Spivey. [ 7 ]
In 2002 the band came out with a double CD of improvised songs named Tacoma Ballet . Patty Schemel (of Hole ) volunteered to play drums [ 8 ] and Greta Brinkman (of Moby 's backing band) was on bass. [ 9 ] Larry Schemel of Death Valley Girls and Midnight Movies played guitar. Diane Naegel was recruited on keyboards and Lucid Nation recorded the whole album in Tacoma, Washington at Uptone Studio. There were no rehearsals, and Naegel had never played with a band before. The band recorded fifty-two tracks, thirty-two of which ended up on the album. Recording ended on September 10, 2001, and several of the songs foreshadowed 9/11 including the phrase "homeland security" and the chorus "everything's falling down" from the song "Fall." After some rearrangement, the songs were revealed to depict a story about a girl who realized the hypocrisy of her town, her family, and herself. [ 10 ] Tacoma Ballet was broken into two discs of sixteen songs each. The first was labeled What is the Answer? and the second one was named What is the Question? (inspired by the final words of Gertrude Stein ). The album gained critical praise from Rolling Stone and Magnet . Tacoma Ballet hit #8 most added on the College Music Journal charts in July 2002. [ 11 ]
In 2008 Lucid Nation headlined RockNRead at the VirginMega on Hollywood Boulevard where they covered a protest song written by Alex Maranjian called "Bring My Brothers Home". [ 12 ]
In 2011 Rookie included Lucid Nation in "Girl Germs", its list of favorite riot grrrl songs. [ 13 ]
In Jan. 2015 Rookie included Lucid Nation in its list "Staying Power: Music that endures." [ 14 ]
In February 2017 the band released a live video of the cover song "You Can't Put Your Arms Around A Memory" by Johnny Thunders , in honor of Tia Sprocket , drummer on the band's DNA and Suburban Legends records, who had recently died. [ 15 ]
In early 2018 Lucid Nation released Ecosteria, an 18-song record, on Bandcamp , Tidal , Amazon , Apple Music , Spotify , Rhapsody , Pandora Radio , and Slacker Radio . [ 16 ]
Rolling Stone wrote "If Spivey sounds spacey, she's not. Her songs range from aggressive, screaming punk to beautifully melodic rhythm and blues, the very definition of garage rock . Like Sleater-Kinney and Bikini Kill -- Lucid Nation has opened for both -- her band's music is raw, poetic, sloppy and infectious...simply bare-bones, kick-ass rock and roll ." [ 17 ]
Mario Mesquita Borges of Allmusic wrote "Lucid Nation's creations expose fierce streams of experimentalism within the rock genre by captioning a singular set of conceptual alternative pop/rock style, somehow following a similar trail as the one unclosed by Sonic Youth ... " [ 18 ] | https://en.wikipedia.org/wiki/Lucid_Nation |
Lucidchart is a web-based diagramming application [ 2 ] that allows users to visually collaborate on drawing, revising and sharing charts and diagrams, and improve processes, systems, and organizational structures. [ 3 ] [ 4 ] [ 5 ] It is produced by Lucid Software Inc., based in Utah , United States [ 3 ] [ 6 ] and co-founded by Ben Dilts and Karl Sun. [ 7 ]
In January 2011, Lucid Software Inc. was incorporated in Delaware through the conversion of Lucidchart, LLC, a Utah limited liability company formed in April 2009. [ 8 ] In 2010, Lucid announced that it had integrated Lucidchart into the Google Apps Marketplace . [ 9 ]
In 2011, Lucid raised $1 million in seed funding from 500 Startups , 2M Companies, K9 Ventures, and several angel investors. [ 4 ]
On October 17, 2018, Lucid announced it had raised an additional $72 million from Meritech Capital , Spectrum Equity and ICONIQ Capital . [ 10 ]
In 2020, Lucid launched a digital whiteboarding capability called Lucidspark. [ 11 ]
In 2021, Lucid launched a digital cloud visualization capability called Lucidscale.
Lucidchart is entirely browser-based, running on browsers that support HTML5 . [ 12 ] This means it does not require plugins or updates of third-party software such as Adobe Flash . [ 13 ] The platform supports real-time collaboration, allowing all users to work simultaneously on projects and see each user’s additions reflected in real time. [ 13 ] All data is encrypted and stored in secure data centers. [ 12 ]
Additional features include: [ 14 ] [ 10 ]
Lucidchart also supports importing files from draw.io, Gliffy, OmniGraffle, and Microsoft Visio. [ 15 ] The platform is integrated with Google Workspace and Drive, Microsoft Teams and other Office products, Atlassian’s Jira and Confluence, Salesforce, GitHub, Slack, and others. [ 13 ] [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Lucidchart |
Lucideon (formerly Ceram ) is an independent materials development, testing and assurance company based in Stoke-on-Trent in the UK. Lucideon owns testing facilities around the world.
The British Refractories Research Association was formed in 1920. The pottery industry was required by the Import Duties Advisory Committee to create a research association , so the British Pottery Research Association was formed in 1937. The two combined in April 1948 as the British Ceramic Research Association .
The original main building on Queens Road in Penkhull was opened by the Duke of Edinburgh in December 1951. In May 1986 it changed its name to British Ceramic Research Ltd, having been incorporated as a company on 18 November 1985.
From the late 1990s the company traded under the abbreviated name Ceram. On 1 February 2014 the company name changed to Lucideon Limited.
Lucideon is situated south of the University Hospital of North Staffordshire .
Lucideon incorporates:
Lucideon's laboratories and techniques are accredited by the United Kingdom Accreditation Service (UKAS).
Lucideon provides materials development, technologies, consultancy and testing and analysis to a diverse range of industries; principally healthcare, construction, ceramics, aerospace, nuclear and power generation. | https://en.wikipedia.org/wiki/Lucideon |
Lucie Blanquies was a woman scientist who worked in Madame Curie 's laboratory in Paris from 1908 to 1910. She measured the power of the alpha particles emitted by different radioactive materials. [ 1 ] [ 2 ] [ 3 ]
| https://en.wikipedia.org/wiki/Lucie_Blanquies
Luciferase is a generic term for the class of oxidative enzymes that produce bioluminescence , and is usually distinguished from a photoprotein . The name was first used by Raphaël Dubois who invented the words luciferin and luciferase , for the substrate and enzyme , respectively. [ 1 ] Both words are derived from the Latin word lucifer , meaning "lightbearer", which in turn is derived from the Latin words for "light" ( lux) and "to bring or carry" ( ferre) . [ 2 ]
Luciferases are widely used in biotechnology , for bioluminescence imaging [ 3 ] microscopy and as reporter genes , for many of the same applications as fluorescent proteins . However, unlike fluorescent proteins, luciferases do not require an external light source , but do require addition of luciferin , the consumable substrate.
A variety of organisms regulate their light production using different luciferases in a variety of light-emitting reactions. The majority of studied luciferases have been found in animals, including fireflies , [ 4 ] and many marine animals such as copepods , jellyfish , and the sea pansy . However, luciferases have been studied in luminous fungi, like the Jack-O-Lantern mushroom , as well as examples in other kingdoms including bioluminescent bacteria , and dinoflagellates .
The luciferases of fireflies – of which there are over 2000 species – and of the other Elateroidea (click beetles and relatives in general) are diverse enough to be useful in molecular phylogeny . [ 5 ] In fireflies, the oxygen required is supplied through a tube in the abdomen called the abdominal trachea . One well-studied luciferase is that of the Photinini firefly Photinus pyralis , which has an optimum pH of 7.8. [ 6 ]
Also well studied is the sea pansy, Renilla reniformis . In this organism, the luciferase ( Renilla-luciferin 2-monooxygenase ) is closely associated with a luciferin-binding protein as well as a green fluorescent protein ( GFP ). Calcium triggers release of the luciferin ( coelenterazine ) from the luciferin binding protein. The substrate is then available for oxidation by the luciferase, where it is degraded to coelenteramide with a resultant release of energy. In the absence of GFP, this energy would be released as a photon of blue light (peak emission wavelength 482 nm). However, due to the closely associated GFP, the energy released by the luciferase is instead coupled through resonance energy transfer to the fluorophore of the GFP, and is subsequently released as a photon of green light (peak emission wavelength 510 nm). The catalyzed reaction is: [ 7 ]
coelenterazine + O 2 → coelenteramide + CO 2 + light
Newer luciferases have recently been identified that, unlike other luciferases, are naturally secreted molecules. One such example is the Metridia coelenterazine -dependent luciferase (MetLuc, A0A1L6CBM1 ) that is derived from the marine copepod Metridia longa . The Metridia longa secreted luciferase gene encodes a 24 kDa protein containing an N-terminal secretory signal peptide of 17 amino acid residues. The sensitivity and high signal intensity of this luciferase molecule proves advantageous in many reporter studies. Some of the benefits of using a secreted reporter molecule like MetLuc is its no-lysis protocol that allows one to be able to conduct live cell assays and multiple assays on the same cell. [ 8 ]
Bacterial bioluminescence is seen in Photobacterium species, Vibrio fischeri , and Vibrio harveyi . Light emission in some bioluminescent bacteria utilizes an 'antenna', such as lumazine protein, to accept the energy from the primary excited state on the luciferase, resulting in an excited lumazine chromophore that emits light of a shorter wavelength (more blue), while others use a yellow fluorescent protein (YFP) with flavin mononucleotide (FMN) as the chromophore and emit light that is red-shifted relative to that from luciferase. [ 9 ]
Dinoflagellate luciferase is a multi- domain eukaryote protein, consisting of an N-terminal domain and three catalytic domains , each of which is preceded by a helical bundle domain. The structure of the dinoflagellate luciferase catalytic domain has been solved. [ 10 ] The core part of the domain is a 10-stranded beta barrel that is structurally similar to lipocalins and FABP . [ 10 ] The N-terminal domain is conserved between dinoflagellate luciferase and luciferin-binding proteins (LBPs). It has been suggested that this region may mediate an interaction between LBP and luciferase or their association with the vacuolar membrane. [ 11 ] The helical bundle domain has a three-helix bundle structure that holds four important histidines that are thought to play a role in the pH regulation of the enzyme . [ 10 ] There is a large pocket in the β-barrel of the dinoflagellate luciferase at pH 8 to accommodate the tetrapyrrole substrate, but there is no opening to allow the substrate to enter. Therefore, a significant conformational change must occur to provide access and space for a ligand in the active site, and this change is mediated by the four N-terminal histidine residues. [ 10 ] At pH 8, the unprotonated histidine residues are involved in a network of hydrogen bonds at the interface of the helices in the bundle that blocks substrate access to the active site; disruption of this interaction by protonation (at pH 6.3) or by replacement of the histidine residues by alanine causes a large molecular motion of the bundle, separating the helices by 11 Å and opening the catalytic site. [ 10 ] Logically, the histidine residues cannot be replaced by alanine in nature, but this experimental replacement further confirms that the larger histidine residues block the active site. Additionally, three Gly-Gly sequences, one in the N-terminal helix and two in the helix-loop-helix motif, could serve as hinges about which the chains rotate in order to further open the pathway to the catalytic site and enlarge the active site. [ 10 ]
A dinoflagellate luciferase is capable of emitting light due to its interaction with its substrate ( luciferin ) and the luciferin-binding protein (LBP) in the scintillon organelle found in dinoflagellates. [ 10 ] The luciferase acts in accordance with luciferin and LBP in order to emit light but each component functions at a different pH. Luciferase and its domains are not active at pH 8 but they are extremely active at the optimum pH of 6.3 whereas LBP binds luciferin at pH 8 and releases it at pH 6.3. [ 10 ] Consequently, luciferin is only released to react with an active luciferase when the scintillon is acidified to pH 6.3. Therefore, in order to lower the pH, voltage-gated channels in the scintillon membrane are opened to allow the entry of protons from a vacuole possessing an action potential produced from a mechanical stimulation. [ 10 ] Hence, it can be seen that the action potential in the vacuolar membrane leads to acidification and this in turn allows the luciferin to be released to react with luciferase in the scintillon, producing a flash of blue light.
All luciferases are classified as oxidoreductases ( EC 1.13.12.- ), meaning they act on single donors with incorporation of molecular oxygen. Because luciferases are from many diverse protein families that are unrelated, there is no unifying mechanism, as any mechanism depends on the luciferase and luciferin combination. However, all characterised luciferase-luciferin reactions to date have been shown to require molecular oxygen at some stage.
(Figure caption) Firefly Photinus pyralis luciferase in the adenylate-forming conformation bound to DLSA; key interaction observed between K529 and the carbonyl oxygen of the adenylate. PDB 4G36.
The luciferase of Photinus pyralis catalyzes a two-step bioluminescent reaction. First is adenylation , a process in which D-luciferin is converted to D-luciferyl-adenylate (D-AMP) via the covalent addition of adenosine monophosphate to an amino acid side chain. Next, oxidative decarboxylation of the adenylated intermediate occurs, a necessary step for light emission. Studies have presented the first crystal structure of luciferase in its second catalytic conformation using DLSA (5′-O-[N-(dehydroluciferyl)-sulfamoyl]adenosine), a stable analog of D-AMP. The Photinus pyralis luciferase in the adenylate-forming conformation bound to DLSA illustrates conserved interactions observed in other adenylate-forming enzymes as well as key insights into the mechanism of bioluminescence . The active site is located at the interface of the N-terminal and C-terminal domains. Lys529 is the catalytic lysine for the initial adenylation reaction, interacting with the carbonyl oxygen of the ligand. A transition to the oxidation-available conformation involves a ~140° rotation of the C-terminal domain, upon which oxidation initiates formation of the dioxetanone intermediate. The decomposition of this intermediate releases visible light. Unlike D-AMP, DLSA cannot undergo oxidation, but the “locking” of the enzyme in its second catalytic conformation allowed researchers to study the oxidation-ready state of luciferase. [ 12 ]
The reaction catalyzed by bacterial luciferase is also an oxidative process:
FMNH 2 + O 2 + RCHO → FMN + RCOOH + H 2 O + light
In the reaction, molecular oxygen oxidizes flavin mononucleotide and a long-chain aliphatic aldehyde to an aliphatic carboxylic acid . The reaction forms an excited hydroxyflavin intermediate, which is dehydrated to the product FMN to emit blue-green light. [ 13 ]
Nearly all of the energy input into the reaction is transformed into light. The reaction is 80% [ 14 ] to 90% [ 15 ] efficient. In comparison, the incandescent light bulb only converts about 10% of its energy into light [ 16 ] and a 150 lumen per Watt (lm/W) LED converts 20% of input energy to visible light. [ 15 ]
Luciferases can be produced in the lab through genetic engineering for a number of purposes. Luciferase genes can be synthesized and inserted into organisms or transfected into cells. As of 2002, mice , silkworms , and potatoes are just a few of the organisms that have already been engineered to produce the protein. [ 17 ]
In the luciferase reaction, light is emitted when luciferase acts on the appropriate luciferin substrate . Photon emission can be detected by light sensitive apparatus such as a luminometer or an optical microscope with a CCD camera . This allows observation of biological processes. [ 18 ] Since light excitation is not needed for luciferase bioluminescence, there is minimal autofluorescence and therefore the bioluminescent signal is virtually background-free. [ 19 ] Therefore, as little as 0.02 pg can still be accurately measured using a standard scintillation counter . [ 20 ]
In biological research, luciferase is commonly used as a reporter to assess the transcriptional activity in cells that are transfected with a genetic construct containing the luciferase gene under the control of a promoter of interest. [ 21 ] Additionally, proluminescent molecules that are converted to luciferin upon activity of a particular enzyme can be used to detect enzyme activity in coupled or two-step luciferase assays. Such substrates have been used to detect caspase activity and cytochrome P450 activity, among others. [ 18 ] [ 21 ]
Luciferase can also be used to detect the level of cellular ATP in cell viability assays or for kinase activity assays. [ 21 ] [ 22 ] Luciferase can act as an ATP sensor protein through biotinylation . Biotinylation will immobilize luciferase on the cell-surface by binding to a streptavidin - biotin complex. This allows luciferase to detect the efflux of ATP from the cell and will effectively display the real-time release of ATP through bioluminescence. [ 23 ] Luciferase can additionally be made more sensitive for ATP detection by increasing the luminescence intensity by changing certain amino acid residues in the sequence of the protein. [ 24 ]
Whole-organism imaging (referred to as in vivo imaging when the organism is intact, or ex vivo imaging for living but explanted tissue) is a powerful technique for studying cell populations in live plants or animals, such as mice. [ 25 ] Different types of cells (e.g. bone marrow stem cells, T-cells) can be engineered to express a luciferase, allowing their non-invasive visualization inside a live animal using a sensitive charge-coupled device camera ( CCD camera ). This technique has been used to follow tumorigenesis and the response of tumors to treatment in animal models. [ 26 ] [ 27 ] However, environmental factors and therapeutic interferences may cause some discrepancies between tumor burden and bioluminescence intensity in relation to changes in proliferative activity. The intensity of the signal measured by in vivo imaging may depend on various factors, such as D -luciferin absorption through the peritoneum, blood flow, cell membrane permeability, availability of co-factors, intracellular pH and transparency of overlying tissue, in addition to the amount of luciferase. [ 28 ]
Luciferase is a heat-sensitive protein that is used in studies on protein denaturation , testing the protective capacities of heat shock proteins . The opportunities for using luciferase continue to expand. [ 29 ] | https://en.wikipedia.org/wiki/Luciferase |
Luciferin (from Latin lucifer ' light-bearer ' ) is a generic term for the light-emitting compound found in organisms that generate bioluminescence . Luciferins typically undergo an enzyme -catalyzed reaction with molecular oxygen . The resulting transformation, which usually involves breaking off a molecular fragment, produces an excited state intermediate that emits light upon decaying to its ground state . The term may refer to molecules that are substrates for both luciferases and photoproteins . [ 1 ]
Luciferins are a class of small-molecule substrates that react with oxygen in the presence of a luciferase (an enzyme) to release energy in the form of light . It is not known just how many types of luciferins there are, but some of the better-studied compounds are listed below.
Because of the chemical diversity of luciferins, there is no clear unifying mechanism of action, except that all require molecular oxygen. [ 2 ] The variety of luciferins and luciferases, their diverse reaction mechanisms and the scattered phylogenetic distribution indicate that many of them have arisen independently in the course of evolution. [ 2 ]
Firefly luciferin is the luciferin found in many Lampyridae species, such as P. pyralis . It is the substrate of beetle luciferases ( EC 1.13.12.7) responsible for the characteristic yellow light emission from fireflies, though can cross-react to produce light with related enzymes from non-luminous species. [ 3 ] The chemistry is unusual, as adenosine triphosphate (ATP) is required for light emission, in addition to molecular oxygen . [ 4 ]
Latia luciferin is, in terms of chemistry, ( E )-2-methyl-4-(2,6,6-trimethyl-1-cyclohex-1-yl)-1-buten-1-ol formate and is from the freshwater snail Latia neritoides . [ 5 ]
Bacterial luciferin is a two-component system consisting of flavin mononucleotide and a fatty aldehyde found in bioluminescent bacteria . [ 6 ]
Coelenterazine is found in radiolarians , ctenophores , cnidarians , squid , brittle stars , copepods , chaetognaths , fish, and shrimp. It is the prosthetic group in the protein aequorin responsible for the blue light emission. [ 7 ]
Dinoflagellate luciferin is a chlorophyll derivative (i.e. a tetrapyrrole ) and is found in some dinoflagellates , which are often responsible for the phenomenon of nighttime glowing waves (historically this was called phosphorescence , but that is a misleading term). A very similar type of luciferin is found in some types of euphausiid shrimp . [ 8 ]
Vargulin is found in certain ostracods and deep-sea fish , specifically Porichthys . Like coelenterazine, it is an imidazopyrazinone and emits primarily blue light in these animals.
Foxfire is the bioluminescence created by some species of fungi present in decaying wood. While there may be multiple different luciferins within the kingdom of fungi , 3-hydroxy hispidin was determined to be the luciferin in the fruiting bodies of several species of fungi, including Neonothopanus nambi , Omphalotus olearius , Omphalotus nidiformis , and Panellus stipticus . [ 9 ]
Luciferin is widely used in science and medicine as a method of in vivo imaging , using living organisms to non-invasively detect images and in molecular imaging. The reaction between luciferin substrate paired with the receptor enzyme luciferase produces a catalytic reaction, generating bioluminescence. [ 10 ] This reaction and the luminescence produced is useful for imaging such as detecting tumors from cancer or capable of measuring gene expression . | https://en.wikipedia.org/wiki/Luciferin |
Lucilia mexicana is a species of blow fly of the family Calliphoridae , one of many species known as a green bottle fly . Its habitat range extends from southwestern North America to Brazil . L. mexicana is typically 6–9 mm in length with metallic blue-green coloring. This species is very similar in appearance to L. coeruleiviridis , the primary difference being that L. mexicana has two or more complete rows of post-ocular setae . L. mexicana has the potential to be forensically important in the stored-products and medicocriminal fields, but more research is needed for the fly to be used as evidence in criminal investigations.
Lucilia mexicana , a member of the family Calliphoridae, was first described by the French entomologist Pierre-Justin-Marie Macquart in 1843. The genera Phaenicia , Bufolucilia , and Francilia are now synonymous with the genus name Lucilia . [ 1 ] In the 1940s, L. mexicana was expanded to include the species, L. unicolor , L. infuscata , and L. caesar . [ 2 ] Disputes remain as to whether L. mexicana is also synonymous with L. coeruleiviridis , in which case the name mexicana holds priority. [ 3 ]
As adults, the genus Lucilia is characterized by a shining green, blue, or bronze thorax and abdomen , a suprasquamal ridge (the ridge above the squamal lobes at the base of the wings) with setae, and no hair on the lower calypter . L. mexicana adults are most commonly differentiated from other Lucilia species by two or more complete rows of black post-ocular setae on the head. [ 1 ]
L. mexicana is normally 6–9 mm in length. This species specifically has a metallic blue-green thorax with purple tints, a propleuron with black setae and dark brown basal sclerites at the wing. The legs of adults are usually black. The abdomen, divided into four segments , is colored similarly to the thorax. The first segment is mainly purple, the second segment may or may not have a row of bristles , the third segment has a row of bristles, and the fourth segment has scattered erect bristles. [ 2 ] The head of the species has black cheeks with black vestiture or hairs. [ 1 ] The back of the head is also black, with an orange-colored metacephalon (a region on the posterior of the head) that distinctly shows the two or more rows of post-ocular setae . [ 2 ] The main difference between the males and females is that males have their frontal plates separated by a wider frontal vitta, while females have a broader frons and larger head width. [ 1 ]
The third-instar larva of L. mexicana does not have a defined sclerotized head capsule. It has a smooth body and lacks lateral processes . [ 4 ] It has eleven posteriorly spinose dorsal segments with a median pair of tubercles on the upper border of the stigmal field. [ 2 ] Lucilia larvae have posterior spiracles and tend to be larger larvae, ranging from 9–18 mm in length. The larval body has a peritreme, the area around the spiracles, with three distinct non-sinuous slits. The spiracle plates and button are also not heavily sclerotized. [ 4 ]
Eggs are laid in carrion . The egg-to-first-instar cycle of L. mexicana can take from 7–14 hours of incubation. As with all insects, developmental rates depend on temperature and degree days . Egg hatching does not occur at temperatures below 75 °F; at 75 °F, the eggs take 14.03 hours to hatch. Hatching also ceases when temperatures exceed 99 °F; at 99 °F, the eggs take 8.12 hours to hatch. Hatching is fastest at the optimal temperature of 94 °F, taking 7.77 hours. [ 2 ]
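Development data of this kind are typically folded into an accumulated degree hours (ADH) calculation when relating temperature history to insect development; the sketch below shows only the generic bookkeeping, with the base temperature an illustrative placeholder rather than a published threshold for L. mexicana.

```python
# Accumulated degree hours: thermal energy above a developmental base
# temperature, summed over an hourly temperature record.  The base used
# here is a hypothetical placeholder for illustration only.
def accumulated_degree_hours(hourly_temps_f, base_f=50.0):
    return sum(max(t - base_f, 0.0) for t in hourly_temps_f)

print(accumulated_degree_hours([75.0] * 14))   # 14 h at a constant 75 F -> 350.0
```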
The larval stage has three distinct instars. In the first instar, the spines are heavily pigmented, with tubercles on the last segment. In the second instar, the spines develop into a complete band on segments 2–8; this stage also develops anterior spiracles with six to eight branches. The third instar has narrow or lightly pigmented posterior spiracles and a varying distribution of spines. [ 2 ]
L. mexicana is geographically distributed in North and South America . Specifically, their range is in the southwestern United States and extends into Mexico . This species is also found in Brazil and Central America , although they are not abundant there. [ 2 ] In Texas , the range for L. mexicana is similar to L. eximia . [ 1 ] Lucilia mexicana is mainly found in wooded areas, but may also inhabit urban areas due to its attraction to animal and human feces , garbage and fresh carrion . [ 3 ]
In the field of medico-criminal forensic entomology , L. mexicana can be used to determine post mortem intervals using a time of colonization on corpses because the fly is attracted to freshly killed animal carcasses. [ 3 ] As for other fields of forensic entomology, stored product specialists should keep in mind that although attributed to L. coeruleiviridis , [ 5 ] an episode of contamination of drying fruit in Sacramento Valley was most likely caused by L. mexicana . [ 3 ] More research on the insect is needed in order to efficiently use it in criminal and health cases.
A recent study investigated the Calliphoridae population present on pig carcasses in three different Texas cities during the summer months. Results showed that in two consecutive years, there was a considerable difference in the abundance of L. mexicana found in the three cities of Junction , Guadalupe , and Lubbock from one year to the next. Therefore, L. mexicana , among other calliphorids, can fluctuate in abundance at different individual locations over successive years. This is critical for forensic entomologists to consider when investigating evidence involving Calliphoridae. [ 6 ]
Further research with DNA analysis of Lucilia sp. would clarify discrepancies between morphological and molecular similarities. [ 7 ] One study about this involved a systematic collection of blowflies from the Mexican, Caribbean and Florida regions to test the reliability of DNA barcodes and develop a voucher collection , especially so that larvae could be identified accurately. Adults of the species L. coeruleiviridis and L. mexicana , found in Florida and Mexico respectively, are readily distinguished by morphology, but required analysis of at least two barcode regions to resolve through molecular genetics methods. [ 8 ]
Research on degree days and hours for L. mexicana would benefit investigations involving post mortem intervals. | https://en.wikipedia.org/wiki/Lucilia_mexicana |
Lucinidae , common name hatchet shells , is a family of saltwater clams , marine bivalve molluscs .
These bivalves are remarkable for their endosymbiosis with sulphide -oxidizing bacteria . [ 1 ]
The members of this family have a worldwide distribution. They are found in muddy sand or gravel at or below low tide mark, but they can also be found at bathyal depths. They have characteristically rounded shells with forward-facing projections. The shell is predominantly white and buff and is often thin. The shells are equivalve with unequal sides. The umbones (the apical part of each valve) are just anterior to the mid-line. The adductor scars are unequal: the anterior are narrower and somewhat longer than the posterior. They are partly or largely separated from the pallial line. The valves are flattened and etched with concentric or radial rings. Each valve bears two cardinal and two plate-like lateral teeth. These molluscs do not have siphons, but the extremely long foot makes a channel which is then lined with slime and serves for the intake and expulsion of water. The ligament is external and is often deeply inset. The pallial line lacks a sinus. [ 2 ]
An Eocene species Superlucina megameris was the largest lucinid ever recorded, with shell size up to 31.1 centimetres (12.2 in) high, over 28 centimetres (11 in) wide and 8.6 centimetres (3.4 in) thick. [ 3 ]
Lucinids host their sulfur-oxidizing symbionts in specialized gill cells called bacteriocytes. [ 4 ] Lucinids are burrowing bivalves that live in environments with sulfide-rich sediments. [ 5 ] The bivalve will pump sulfide-rich water over its gills from the inhalant siphon in order to provide symbionts with sulfur and oxygen. [ 5 ] The endosymbionts then use these substrates to fix carbon into organic compounds, which are then transferred to the host as nutrients. [ 6 ] During periods of starvation, lucinids may harvest and digest their symbionts as food. [ 6 ]
Symbionts are acquired via phagocytosis of bacteria by bacteriocytes. [ 7 ] Symbiont transmission occurs horizontally: juvenile lucinids are aposymbiotic and acquire their symbionts from the environment in each generation. [ 8 ] Lucinids maintain their symbiont population by reacquiring sulfur-oxidizing bacteria throughout their lifetime. [ 9 ] Although the process of symbiont acquisition is not entirely characterized, it likely involves the binding protein codakine, isolated from the lucinid bivalve Codakia orbicularis . [ 10 ] It is also known that symbionts do not replicate within bacteriocytes because of inhibition by the host; however, this mechanism is not well understood. [ 9 ]
Lucinid bivalves originated in the Silurian ; however, they did not diversify until the late Cretaceous , along with the evolution of seagrass meadows and mangrove swamps . [ 11 ] Lucinids were able to colonize these sulfide rich sediments because they already maintained a population of sulfide-oxidizing symbionts. In modern environments, seagrass, lucinid bivalves, and the sulfur-oxidizing symbionts constitute a three-way symbiosis. Because of the lack of oxygen in coastal marine sediments, dense seagrass meadows produce sulfide-rich sediments by trapping organic matter that is later decomposed by sulfate-reducing bacteria. [ 12 ] The lucinid-symbiont holobiont removes toxic sulfide from the sediment, and the seagrass roots provide oxygen to the bivalve-symbiont system. [ 12 ]
The symbionts from at least two species of lucinid clams, Codakia orbicularis and Loripes lucinalis , are able to fix nitrogen gas into organic nitrogen. [ 13 ] [ 14 ]
The following genera are recognised in the family Lucinidae: [ 15 ] | https://en.wikipedia.org/wiki/Lucinidae |
Lucy Marie Ziurys (born May 6, 1957) [ 1 ] is an American astrochemist known for her work on high-resolution molecular spectroscopy . She is Regent's Professor of Chemistry & Biology and of Astronomy at the University of Arizona . [ 2 ]
Ziurys's work has discovered new molecules in interstellar space and in carbon-rich circumstellar envelopes , found unexpectedly long molecular lifetimes in planetary nebulae , [ 3 ] and made pioneering high-resolution submillimeter astronomy observations of supermassive black holes using very-long-baseline interferometry . [ 4 ] In earthbound experiments, she has also found models for the interstellar creation of buckminsterfullerene . [ 5 ]
Ziurys is originally from Annapolis, Maryland . She majored in chemistry, chemical physics, and physics at Rice University , graduating summa cum laude in 1978. She went to the University of California, Berkeley for graduate study in physical chemistry , completing a Ph.D. in 1984 under the supervision of Richard J. Saykally . [ 1 ]
After postdoctoral research at the University of Massachusetts Amherst , working in the Five College Radio Astronomy Observatory , she joined the Arizona State University Department of Chemistry in 1988. She moved to the University of Arizona in 1997. At the University of Arizona, she directed the Arizona Radio Observatory from 2000 to 2016. [ 1 ] She was named Regent's Professor in 2019. [ 3 ]
In 2008, Ziurys was named a Fellow of the American Physical Society (APS), after a nomination from the APS Division of Atomic, Molecular & Optical Physics, "for forefront contributions in molecular spectroscopy leading to new discoveries and understanding of molecules in interstellar and circumstellar environments". [ 6 ]
In 2015, Ziurys won the Barbara Mez-Starck Prize "for her microwave spectroscopic studies of transition metal compounds in high spin states as well as for her laboratory investigations with interplay with astrophysics, astrochemistry, and astrobiology". [ 7 ] She was the 2019 winner of the Laboratory Astrophysics Prize, the highest honor of the Laboratory Astrophysics Division of the American Astronomical Society . [ 8 ]
She was part of the Event Horizon Telescope team, which won the 2020 Breakthrough Prize in Fundamental Physics . [ 9 ] | https://en.wikipedia.org/wiki/Lucy_Ziurys |
In proof theory , ludics is an analysis of the principles governing inference rules of mathematical logic . Key features of ludics include the notion of compound connectives, using a technique known as focusing or focalisation (invented by the computer scientist Jean-Marc Andreoli ), and its use of locations or loci over a base instead of propositions .
More precisely, ludics tries to retrieve known logical connectives and proof behaviours by following the paradigm of interactive computation, similarly to what is done in game semantics to which it is closely related. By abstracting the notion of formulae and focusing on their concrete uses—that is distinct occurrences—it provides an abstract syntax for computer science , as loci can be seen as pointers on memory.
The primary achievement of ludics is the discovery of a relationship between two natural, but distinct notions of type , or proposition.
The first view, which might be termed the proof-theoretic or Gentzen -style interpretation of propositions, says that the meaning of a proposition arises from its introduction and elimination rules. Focalization refines this viewpoint by distinguishing between positive propositions, whose meaning arises from their introduction rules, and negative propositions, whose meaning arises from their elimination rules. In focused calculi, it is possible to define positive connectives by giving only their introduction rules, with the shape of the elimination rules being forced by this choice. (Symmetrically, negative connectives can be defined in focused calculi by giving only the elimination rules, with the introduction rules forced by this choice.)
The second view, which might be termed the computational or Brouwer–Heyting–Kolmogorov interpretation of propositions, takes the view that we fix a computational system up front, and then give a realizability interpretation of propositions to give them constructive content. For example, a realizer for the proposition "A implies B" is a computable function that takes a realizer for A, and uses it to compute a realizer for B. Realizability models characterize realizers for propositions in terms of their visible behavior, and not in terms of their internal structure.
Girard shows that for second-order affine linear logic , given a computational system with nontermination and error stops as effects, realizability and focalization give the same meaning to types.
Ludics was proposed by the logician Jean-Yves Girard . His paper introducing ludics, Locus solum: from the rules of logic to the logic of rules , has some features that may be seen as eccentric for a publication in mathematical logic (such as illustrations of skunks). The intent of these features is to enforce the point of view of Jean-Yves Girard at the time of its writing. It thus offers readers the possibility of understanding ludics independently of their backgrounds. | https://en.wikipedia.org/wiki/Ludics
A Ludwieg tube is a cheap and efficient way of producing supersonic flow. Mach numbers up to 4 in air are easily obtained without any additional heating of the flow. With heating, Mach numbers of up to 11 can be reached.
A Ludwieg tube is a wind tunnel that produces supersonic flow for short periods of time. A large evacuated dump tank is separated from the downstream end of a convergent-divergent nozzle by a diaphragm or fast-acting valve. The upstream end of the nozzle connects to a long cylindrical tube, whose cross-sectional area is significantly larger than the throat area of the nozzle. Initially, the pressure in the nozzle and tube is high. To start the tunnel, the diaphragm is ruptured, e.g., by piercing it with a suitable cutting device, or the valve is opened. As always when a diaphragm ruptures, a shock wave propagates into the low-pressure region (here the dump tank) and an expansion wave propagates into the high-pressure region (here the nozzle and the long tube). As this unsteady expansion propagates through the long tube, it sets up a steady subsonic flow toward the nozzle, which is accelerated by the convergent-divergent nozzle to a supersonic condition. The flow is steady until the expansion, having been reflected from the far end of the tube, arrives at the nozzle again. For practical reasons, flow times are about 100 milliseconds for most Ludwieg tubes. [ 1 ] For many purposes, this flow duration is sufficient. However, by taking advantage of multiple quasi-static flows between expansion wave reflections, experimentation times of up to 6 seconds can be achieved. [ 2 ]
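The quoted flow duration follows directly from the wave round trip in the charge tube. Under the rough simplification that the expansion travels at the undisturbed speed of sound in both directions, t ≈ 2L/a0; a back-of-the-envelope sketch:

```python
# Estimate the steady run time of a Ludwieg tube as one round trip of the
# expansion wave along the charge tube, t ~ 2L/a0 (neglecting the induced
# flow velocity and wave-speed changes across the expansion).
import math

def run_time(L, T=293.0, gamma=1.4, R=287.0):
    a0 = math.sqrt(gamma * R * T)        # speed of sound in air at T kelvin
    return 2.0 * L / a0

print(f"{run_time(17.0) * 1e3:.0f} ms")  # a ~17 m tube gives roughly 100 ms
```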
The Ludwieg tube was invented by Hubert Ludwieg (1912-2000) in 1955 in response to a competition for a transonic or supersonic wind tunnel design capable of producing high Reynolds numbers at low operating cost. Ludwieg was also responsible for the experimental demonstration and explanation of the large effect of sweep on the drag of transonic wings, the subject of his 1937 dissertation. | https://en.wikipedia.org/wiki/Ludwieg_tube
The Ludwig Biermann Award is an annual prize awarded by the German Astronomische Gesellschaft (German Astronomical Society) to an outstanding young astronomer . [ 1 ] The prize is named in honour of the German astronomer Ludwig Biermann and was first awarded in 1989, three years after his death. Nominees for the award must be under the age of 35. [ 2 ] The monetary value of the award is 2500 €, and it is intended to enable the awardee to make one or more research visits to an institute of their choice. Usually, only a single prize is awarded per year, but in a few cases two prizes have been awarded. [ 3 ] | https://en.wikipedia.org/wiki/Ludwig_Biermann_Award
Ludwig Blattner (5 February 1880 – 29 October 1935) was a German-born inventor, film producer, director and studio owner in the United Kingdom, and developer of one of the earliest magnetic sound recording devices. [ 1 ]
Ludwig Blattner, also known as Louis Blattner , [ 2 ] was a pioneer of early magnetic sound recording, licensing a steel wire-based design from the German inventor Dr. Kurt Stille and enhancing it to use steel tape instead of wire, thereby creating an early form of tape recorder . This device was marketed as the Blattnerphone. [ 3 ] While on a promotional tour of his sound recording technology in 1928, he would choose ladies from the audience to dance with him to music played from a Blattnerphone. [ 4 ]
Prior to the First World War, Blattner was involved in the entertainment industry in the Liverpool City Region : he managed the "La Scala" cinema in Wallasey from 1912 to 1914, conducted the cinema's orchestra, and composed a waltz "The Ladies of Wallasey". [ 5 ] In about 1920 he moved to Manchester where he managed a chain of cinemas. [ 6 ] There, in 1923 he composed and published a piece of music about the film actress Pola Negri titled "Pola Negri Grand Souvenir March". [ 7 ] Later in the 1920s, he bought the British film rights to Lion Feuchtwanger 's novel Jew Süss although the film was not made until 1934 after Blattner had sold the rights [ 8 ] to Gaumont British . In early 1928, press reports appeared saying that Blattner was planning a 400-acre "Hollywood, England" complex with a hospital, 150 room hotel, aeroplane club and the largest collection of studios in the world, for which he was planning to spend between 2 million and 5 million pounds. [ 9 ] Blattner later formed the Ludwig Blattner Picture Corporation in Borehamwood in the studio complex that is now known as BBC Elstree Centre , buying the Ideal Film Company studio (formerly known as Neptune Studios) in 1928, renaming it as Blattner Studios. [ 10 ] In 1928 his company produced a series of short films of musical performances such as "Albert Sandler and His Violin [Serenade – Schubert]" and "Teddy Brown and His Xylophone". The best known films produced by his film company were A Knight in London (1929) and My Lucky Star (1933), which was co-directed by Blattner. Films produced by other companies at the Blattner Studios included Dorothy Gish and Charles Laughton 's first drama talkie Wolves (1930), [ 11 ] the 1934 adaptation of Edgar Allan Poe 's short story " The Tell-Tale Heart ", [ 12 ] Rookery Nook (1930) and A Lucky Sweep (1932). [ 13 ]
Ludwig Blattner was also involved in an early colour motion picture process: in about 1929 he bought the rights for the use outside the USA of a lenticular colour process called Keller-Dorian cinematography . [ 14 ] This process was then known as the Blattner Keller-Dorian process, [ 15 ] which lost out to rival colour systems.
Ludwig Blattner originally intended the Blattnerphone to be used as a system of recording and playback for talking pictures, [ 16 ] but the BBC saw its potential to record and "timeshift" BBC radio programmes for use with the BBC Empire Service , and rented several Blattnerphones from 1930 onwards, one of which was used to record King George V's speech at the opening of the India Round Table Conference on 12 November 1930. [ 17 ] The 1932 BBC Year Book (covering November 1930 to October 1931) said: [ 18 ]
In some ways the most important event of the year has been the adoption by the B.B.C. of the Blattnerphone recording apparatus described in the Technical Section. For years the B.B.C 's programme officials have longed for a machine which would be useful on the one hand for recording outside events such as commentaries, speeches, etc., of which normally no record existed, and on the other for rehearsals, and in particular for enabling certain broadcasters to hear themselves as others hear them.
In 1939, the BBC used a Blattnerphone (not the later Marconi-Stille recorder) to record Prime Minister Neville Chamberlain 's announcement to Britain of the outbreak of World War II . [ 19 ]
In 1930, Blattner promoted a version of his Blattnerphone technology as one of the first telephone answering machines , [ 20 ] and in 1931 he promoted a version of the Blattnerphone as the Blattner Book Reader, an early audiobook playback system for the blind. [ 21 ] [ 22 ]
Despite being a "promoter of genius with far-seeing ideas about technical developments in sound and colour" according to the film director Michael Powell , [ 23 ] business problems with the studio, due to the advent of rival talking picture systems, led to heavy financial loss, and in 1934 Joe Rock leased Elstree Studios from Ludwig Blattner, and bought it outright in 1936, a year after Blattner's suicide. [ 24 ]
Born into a Jewish family in Altona, Hamburg, Blattner first visited Great Britain in 1897. He appears to have returned later and worked for a while in the publicity department of Mellin's Food, probably arranged through family contact with Gustav Mellin. [ 25 ] [ 26 ] He moved to Birkenhead by 1901 and settled in New Brighton, Merseyside , where he married Margaret Mary Gracey; they had two British-born children, Gerry Blattner (born 1913 in Liverpool ) [ 27 ] and Betty Blattner (born 1914 in Cheshire). [ 28 ] Both followed their father into the film business, Gerry as a producer and Betty as a makeup artist. [ 29 ] Ludwig Blattner never became a British citizen, and during the First World War he was held in an internment camp, which interrupted his management of the Gaiety cinema in Wallasey. [ 30 ] The hearsay-based suggestion, in a 1968 letter by Jay Leyda, that he married Else (also known as Elisabeth), the widow of Edmund Meisel , the composer of the score for Battleship Potemkin , some time after Meisel's death in 1930, is not supported by any hard evidence. Indeed, he was resident with his wife Margaret Mary at the Country Club in Elstree when he took his own life in 1935. [ 31 ]
Ludwig hanged himself at the Elstree Country Club in October 1935, when his son was 22 and his daughter was 21. Ludwig and Gerry were honoured by the naming of Blattner Close in Elstree in the mid-1990s. [ 32 ] [ 33 ] [ 34 ] | https://en.wikipedia.org/wiki/Ludwig_Blattner |
Ludwig Josef Johann Wittgenstein ( / ˈ v ɪ t ɡ ən ʃ t aɪ n , - s t aɪ n / VIT -gən-s(h)tyne ; [ 6 ] Austrian German: [ˈluːdvɪɡ ˈjoːsɛf ˈjoːhan ˈvɪtɡn̩ʃtaɪn] ; 26 April 1889 – 29 April 1951) was an Austrian philosopher who worked primarily in logic , the philosophy of mathematics , the philosophy of mind , and the philosophy of language . [ 7 ]
From 1929 to 1947, Wittgenstein taught at the University of Cambridge . [ 7 ] Despite his position, only one book of his philosophy was published during his entire life: the 75-page Logisch-Philosophische Abhandlung ( Logical-Philosophical Treatise , 1921), which appeared, together with an English translation, in 1922 under the Latin title Tractatus Logico-Philosophicus . His only other published works were an article, " Some Remarks on Logical Form " (1929); a book review; and a children's dictionary. [ a ] [ b ] His voluminous manuscripts were edited and published posthumously. The first and best-known of this posthumous series is the 1953 book Philosophical Investigations . A 1999 survey among American university and college teachers ranked the Investigations as the most important book of 20th-century philosophy , standing out as "the one crossover masterpiece in twentieth-century philosophy, appealing across diverse specializations and philosophical orientations". [ 8 ]
His philosophy is often divided into an early period, exemplified by the Tractatus , and a later period, articulated primarily in the Philosophical Investigations . [ 9 ] The "early Wittgenstein" was concerned with the logical relationship between propositions and the world, and he believed that by providing an account of the logic underlying this relationship, he had solved all philosophical problems. The "later Wittgenstein", however, rejected many of the assumptions of the Tractatus , arguing that the meaning of words is best understood as their use within a given language game . [ 10 ] More precisely, Wittgenstein wrote, "For a large class of cases of the employment of the word 'meaning'—though not for all —this word can be explained in this way: the meaning of a word is its use in the language." [ 11 ]
Born in Vienna into one of Europe's richest families, he inherited a fortune from his father in 1913. Before World War I, he "made a very generous financial bequest to a group of poets and artists chosen by Ludwig von Ficker, the editor of Der Brenner , from artists in need. These included [Georg] Trakl as well as Rainer Maria Rilke and the architect Adolf Loos ", [ 12 ] as well as the painter Oskar Kokoschka . [ 13 ] "In autumn 1916, as his sister reported, 'Ludwig made a donation of a million crowns ["equivalent to about $2,869,000 in 2016 dollars"] for the construction of a 30cm mortar.'" [ 14 ] Later, in a period of severe personal depression after World War I, he gave away his remaining fortune to his brothers and sisters. [ 15 ] [ 16 ] Three of his four older brothers died by separate acts of suicide. Wittgenstein left academia several times: serving as an officer on the front line during World War I, where he was decorated a number of times for his courage; teaching in schools in remote Austrian villages, where he encountered controversy for using sometimes violent corporal punishment on both girls and boys (see, for example, the Haidbauer incident ), especially during mathematics classes; working during World War II as a hospital porter in London; and working as a hospital laboratory technician at the Royal Victoria Infirmary in Newcastle upon Tyne .
According to a family tree prepared in Jerusalem after World War II, Wittgenstein's paternal great-great-grandfather was Moses Meier, [ 18 ] an Ashkenazi Jewish land agent who lived with his wife, Brendel Simon, in Bad Laasphe in the Principality of Wittgenstein , Westphalia . [ 19 ] In July 1808, Napoleon issued a decree that everyone, including Jews, must adopt an inheritable family surname, so Meier's son, also Moses, took the name of his employers, the Sayn-Wittgensteins , and became Moses Meier Wittgenstein. [ 20 ] His son, Hermann Christian Wittgenstein — who took the middle name "Christian" to distance himself from his Jewish background — married Fanny Figdor, also Jewish, who converted to Protestantism just before they married, and the couple founded a successful business trading in wool in Leipzig . [ 21 ] Ludwig's grandmother Fanny was a first cousin of the violinist Joseph Joachim . [ 22 ]
They had 11 children – among them Wittgenstein's father. Karl Otto Clemens Wittgenstein (1847–1913) became an industrial tycoon, and by the late 1880s was one of the richest men in Europe, with an effective monopoly on the Austrian steel industry. [ 17 ] [ 23 ] Thanks to Karl, the Wittgensteins became the second wealthiest family in the Austro-Hungarian Empire , only the Rothschilds being wealthier. [ 23 ] Karl Wittgenstein was viewed as the Austrian equivalent of Andrew Carnegie , with whom he was friends, and was one of the wealthiest men in the world by the 1890s. [ 17 ] As a result of his decision in 1898 to invest substantially in the Netherlands and in Switzerland as well as overseas, particularly in the US, the family was to an extent shielded from the hyperinflation that hit Austria in 1922 . [ 24 ] However, their wealth diminished due to post-1918 hyperinflation and subsequently during the Great Depression , although even as late as 1938 they owned 13 mansions in Vienna alone. [ 25 ]
Wittgenstein was ethnically Jewish . [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] His mother was Leopoldine Maria Josefa Kalmus, known among friends as "Poldi". Her father was a Bohemian Jew, and her mother was an Austrian- Slovene Catholic—she was Wittgenstein's only non-Jewish grandparent. Poldi was an aunt of the Nobel Prize laureate Friedrich Hayek on his maternal side. Wittgenstein was born at 8:30 PM on 26 April 1889 in the "Villa Wittgenstein" at what is today Neuwaldegger Straße 38 in the suburban parish Neuwaldegg [ de ] next to Vienna. [ 31 ] [ 32 ]
Karl and Poldi had nine children in all—four girls: Hermine, Margaret (Gretl), Helene, and a fourth daughter Dora who died as a baby; and five boys: Johannes (Hans), Kurt, Rudolf (Rudi), Paul —who became a concert pianist despite losing an arm in World War I—and Ludwig, who was the youngest of the family. [ 33 ]
The children were baptized as Catholics, received formal Catholic instruction, and were raised in an exceptionally intense environment. [ 34 ] [ page needed ] The family was at the centre of Vienna's cultural life; Bruno Walter described the life at the Wittgensteins' palace as an "all-pervading atmosphere of humanity and culture". [ 35 ] Karl was a leading patron of the arts, commissioning works by Auguste Rodin and financing the city's exhibition hall and art gallery, the Secession Building . Gustav Klimt painted a portrait of Wittgenstein's sister Margaret for her wedding, [ 36 ] and Johannes Brahms and Gustav Mahler gave regular concerts in the family's numerous music rooms. [ 35 ] [ 37 ]
Wittgenstein, who valued precision and discipline, never considered contemporary classical music acceptable. He said to his friend Drury in 1930:
Music came to a full stop with Brahms ; and even in Brahms I can begin to hear the noise of machinery. [ 38 ]
Ludwig Wittgenstein himself had absolute pitch , [ 39 ] and his devotion to music remained vitally important to him throughout his life; he made frequent use of musical examples and metaphors in his philosophical writings, and he was unusually adept at whistling lengthy and detailed musical passages. [ 40 ] He also learnt to play the clarinet in his 30s. [ 41 ] A fragment of music (three bars), composed by Wittgenstein, was discovered in one of his 1931 notebooks, by Michael Nedo , director of the Wittgenstein Institute in Cambridge. [ 42 ]
Ray Monk writes that Karl's aim was to turn his sons into captains of industry; they were not sent to school lest they acquire bad habits but were educated at home to prepare them for work in Karl's industrial empire. [ 43 ] Three of the five brothers later committed suicide. [ 44 ] [ 45 ] Psychiatrist Michael Fitzgerald argues that Karl was a harsh perfectionist who lacked empathy, and that Wittgenstein's mother was anxious and insecure, unable to stand up to her husband. [ 46 ] Johannes Brahms said of the family, whom he visited regularly:
They seemed to act towards one another as if they were at court. [ 23 ]
The family appeared to have a strong streak of depression running through it. Anthony Gottlieb tells a story about Paul practising on one of the pianos in the Wittgensteins' main family mansion, when he suddenly shouted at Ludwig in the next room:
I cannot play when you are in the house, as I feel your skepticism seeping towards me from under the door! [ 27 ]
The family palace housed seven grand pianos [ 47 ] and each of the siblings pursued music "with an enthusiasm that, at times, bordered on the pathological". [ 48 ] The eldest brother, Hans, was hailed as a musical prodigy. At the age of four, writes Alexander Waugh , Hans could identify the Doppler effect in a passing siren as a quarter-tone drop in pitch, and at five started crying "Wrong! Wrong!" when two brass bands in a carnival played the same tune in different keys . But he died in mysterious circumstances in May 1902, when he ran away to the US and disappeared from a boat in Chesapeake Bay , most likely having committed suicide. [ 49 ] [ 50 ]
Two years later, aged 22 and studying chemistry at the Berlin Academy , the third eldest brother, Rudi, committed suicide in a Berlin bar. He had asked the pianist to play Thomas Koschat 's " Verlassen, verlassen, verlassen bin ich " ("Forsaken, forsaken, forsaken am I"), before mixing himself a drink of milk and potassium cyanide . He had left several suicide notes, one to his parents that said he was grieving over the death of a friend, and another that referred to his "perverted disposition". It was reported at the time that he had sought advice from the Scientific-Humanitarian Committee , an organization that was campaigning against Paragraph 175 of the German Criminal Code, which prohibited homosexual sex. His father forbade the family from ever mentioning his name again. [ 51 ] [ 52 ] [ 53 ] [ 27 ] (Ludwig himself was a closeted homosexual, who separated sexual intercourse from love, despising all forms of the former. [ 54 ] )
The second eldest brother, Kurt, an officer and company director, shot himself on 27 October 1918, just before the end of World War I, when the Austrian troops he was commanding refused to obey his orders and deserted en masse . [ 43 ] According to Gottlieb, Hermine had said Kurt seemed to carry "the germ of disgust for life within himself". [ 55 ] Later, Ludwig wrote:
I ought to have ... become a star in the sky. Instead of which I have remained stuck on earth. [ 56 ]
Wittgenstein was taught by private tutors at home until he was 14 years old. After the deaths of Hans and Rudi, Karl relented and allowed Paul and Ludwig to be sent to school. Waugh writes that it was too late for Wittgenstein to pass his exams for the more academic Gymnasium in Wiener Neustadt; having had no formal schooling, he failed his entrance exam and only barely managed, after extra tutoring, to pass the exam for the more technically oriented k.u.k. Realschule in Linz , a small state school with 300 pupils. [ 57 ] [ 58 ] [ c ] In 1903, when he was 14, he began his three years of formal schooling there, lodging nearby during the term with the family of Josef Strigl, a teacher at the local gymnasium; the family gave him the nickname Luki. [ 59 ] [ 60 ]
On starting at the Realschule, Wittgenstein had been moved forward a year. [ 59 ] Historian Brigitte Hamann writes that he stood out from the other boys: he spoke an unusually pure form of High German with a stutter, dressed elegantly, and was sensitive and unsociable. [ 61 ] Monk writes that the other boys made fun of him, singing after him: "Wittgenstein wandelt wehmütig widriger Winde wegen Wienwärts" [ 41 ] ("Wittgenstein wanders wistfully Vienna-wards (in) worsening winds"). In his leaving certificate, he received a top mark (5) in religious studies; a 2 for conduct and English, 3 for French, geography, history, mathematics and physics, and 4 for German, chemistry, geometry and freehand drawing. [ 59 ] He had particular difficulty with spelling and failed his written German exam because of it. He wrote in 1931:
My bad spelling in youth, up to the age of about 18 or 19, is connected with the whole of the rest of my character (my weakness in study). [ 59 ]
Wittgenstein was baptized as an infant by a Catholic priest and received formal instruction in Catholic doctrine as a child, as was common at the time. [ 34 ] [ page needed ] In an interview, his sister Gretl Stonborough-Wittgenstein says that their grandfather's "strong, severe, partly ascetic Christianity " was a strong influence on all the Wittgenstein children. [ 62 ] While he was at the Realschule , he decided he lacked religious faith and began reading Arthur Schopenhauer per Gretl's recommendation. [ 63 ] He nevertheless believed in the importance of the idea of confession . He wrote in his diaries about having made a major confession to his oldest sister, Hermine, while he was at the Realschule ; Monk speculates that it may have been about his loss of faith. He also discussed it with Gretl, his other sister, who directed him to Schopenhauer's The World as Will and Representation . [ 63 ] As a teenager, Wittgenstein adopted Schopenhauer's epistemological idealism . However, after he studied the philosophy of mathematics, he abandoned epistemological idealism for Gottlob Frege 's conceptual realism . [ 64 ] In later years, Wittgenstein was highly dismissive of Schopenhauer, describing him as an ultimately "shallow" thinker:
One could call Schopenhauer a quite crude mind.... Where real depth starts, his finishes. [ 65 ]
Wittgenstein's relationship with Christianity and with religion in general, for which he always professed a sincere and devoted sympathy, changed over time, much like his philosophical ideas. [ 66 ] In 1912, Wittgenstein wrote to Russell saying that Mozart and Beethoven were the actual sons of God. [ 67 ] However, Wittgenstein resisted formal religion, saying it was hard for him to "bend the knee", [ 68 ] though his grandfather's beliefs continued to influence Wittgenstein – as he said, "I cannot help seeing every problem from a religious point of view." [ 69 ] Wittgenstein referred to Augustine of Hippo in his Philosophical Investigations . Philosophically, Wittgenstein's thought shows alignment with religious discourse. [ 70 ] For example, he would become one of the century's fiercest critics of scientism . [ 71 ] Wittgenstein's religious belief emerged during his service for the Austrian army in World War I, [ 72 ] and he was a devoted reader of Dostoevsky's and Tolstoy's religious writings. [ 73 ] He viewed his wartime experiences as a trial in which he strove to conform to the will of God, and in a journal entry from 29 April 1915, he writes:
Perhaps the nearness of death will bring me the light of life. May God enlighten me. I am a worm, but through God I become a man. God be with me. Amen. [ 74 ]
Around this time, Wittgenstein wrote that "Christianity is indeed the only sure way to happiness", but he rejected the idea that religious belief was merely thinking that a certain doctrine was true. [ 75 ] From this time on, Wittgenstein viewed religious faith as a way of living and opposed rational argumentation or proofs for God.
With age, a deepening personal spirituality led to several elucidations and clarifications, as he untangled language problems in religion—attacking, for example, the temptation to think of God's existence as a matter of scientific evidence. [ 76 ] In 1947, finding it more difficult to work, he wrote:
I have had a letter from an old friend in Austria, a priest. In it he says that he hopes my work will go well, if it should be God's will. Now that is all I want: if it should be God's will. [ 77 ]
In Culture and Value , Wittgenstein writes:
Is what I am doing [my work in philosophy] really worth the effort? Yes, but only if a light shines on it from above.
His close friend Norman Malcolm wrote:
Wittgenstein's mature life was strongly marked by religious thought and feeling. I am inclined to think that he was more deeply religious than are many people who correctly regard themselves as religious believers. [ 34 ] [ page needed ]
Toward the end, Wittgenstein wrote:
Bach wrote on the title page of his Orgelbüchlein , 'To the glory of the most high God, and that my neighbour may be benefited thereby.' That is what I would have liked to say about my work. [ 77 ]
While a student at the Realschule , Wittgenstein was influenced by Austrian philosopher Otto Weininger 's 1903 book Geschlecht und Charakter ( Sex and Character ). Weininger (1880–1903), who was Jewish, argued that the concepts of male and female exist only as Platonic forms , and that Jews tend to embody the Platonic femininity. Whereas men are basically rational, women operate only at the level of their emotions and sexual organs. Jews, Weininger argued, are similar, saturated with femininity, with no sense of right and wrong, and no soul. Weininger argues that man must choose between his masculine and feminine sides, consciousness and unconsciousness, platonic love and sexuality. Love and sexual desire stand in contradiction, and love between a woman and a man is therefore doomed to misery or immorality. The only life worth living is the spiritual one – to live as a woman or a Jew means one has no right to live at all; the choice is genius or death. Weininger committed suicide, shooting himself in 1903, shortly after publishing the book. [ 78 ] Wittgenstein, then 14, attended Weininger's funeral. [ 79 ] Many years later, as a professor at the University of Cambridge , Wittgenstein distributed copies of Weininger's book to his bemused academic colleagues. He said that Weininger's arguments were wrong, but that it was the way they were wrong that was interesting. [ 80 ] In a letter dated 23 August 1931, Wittgenstein wrote the following to G. E. Moore :
Dear Moore, Thanks for your letter. I can quite imagine that you don't admire Weininger very much, what with that beastly translation and the fact that W. must feel very foreign to you. It is true that he is fantastic but he is great and fantastic. It isn't necessary or rather not possible to agree with him but the greatness lies in that with which we disagree. It is his enormous mistake which is great. I.e. roughly speaking if you just add a "~" to the whole book it says an important truth. [ 81 ]
In an unusual move, Wittgenstein took out a copy of Weininger's work on 1 June 1931 from the Special Order Books in the university library. He met Moore on 2 June, when he probably gave this copy to Moore. [ 81 ]
Despite their own and their forebears' conversion to Christianity, the Wittgensteins considered themselves Jewish. This became evident during the Nazi era, when Ludwig's sister was assured by an official that they would not be considered Jews under the racial laws. Indignant at the state's attempt to dictate her identity, she demanded papers certifying their Jewish lineage. [ 82 ]
In his own writings, Wittgenstein frequently referred to himself as Jewish, often in a self-deprecating manner. For instance, while criticizing himself for being a "reproductive" rather than a "productive" thinker, he attributed this to his Jewish sense of identity. He wrote: 'The saint is the only Jewish "genius". Even the greatest Jewish thinker is no more than talented. (Myself for instance).' [ 83 ]
There is much discussion about the extent to which Wittgenstein and his siblings, who were of three-quarters Jewish descent, saw themselves as Jews. The issue has arisen in particular regarding Wittgenstein's schooldays, because Adolf Hitler was, for a while, at the same school at the same time. [ 84 ] Laurence Goldstein argues that it is "overwhelmingly probable" that the boys met each other and that Hitler would have disliked Wittgenstein, a "stammering, precocious, precious, aristocratic upstart ..."; Strathern flatly states they never met. [ 85 ] [ 86 ] Other commentators have dismissed as irresponsible and uninformed any suggestion that Wittgenstein's wealth and unusual personality might have fed Hitler's antisemitism, in part because there is no indication that Hitler would have seen Wittgenstein as Jewish. [ 87 ] [ 88 ]
Wittgenstein and Hitler were born just six days apart, though Hitler had to re-sit his mathematics exam before being allowed into a higher class, while Wittgenstein was moved forward by one, so they ended up two grades apart at the Realschule . [ 57 ] [ d ] Monk estimates that they were both at the school during the 1904–1905 school year, but says there is no evidence they had anything to do with each other. [ 61 ] [ 90 ] [ e ] Several commentators have argued that a school photograph of Hitler may show Wittgenstein in the lower left corner. [ 61 ] [ 95 ] [ g ]
While Wittgenstein would later claim that "[m]y thoughts are 100% Hebraic", [ 99 ] as Hans Sluga has argued, if so,
His was a self-doubting Judaism, which had always the possibility of collapsing into a destructive self-hatred (as it did in Weininger's case) but which also held an immense promise of innovation and genius. [ 100 ]
By Hebraic, he meant to include the Christian tradition, in contradistinction to the Greek tradition, holding that good and evil could not be reconciled. [ 101 ]
He began his studies in mechanical engineering at the Technische Hochschule Berlin in Charlottenburg , Berlin, on 23 October 1906, lodging with the family of Professor Jolles. He attended for three semesters, and was awarded a diploma ( Abgangszeugnis ) on 5 May 1908. [ 102 ]
During his time at the Institute, Wittgenstein developed an interest in aeronautics . [ 103 ] He arrived at the Victoria University of Manchester in the spring of 1908 to study for a doctorate, full of plans for aeronautical projects, including designing and flying his own plane. He conducted research into the behaviour of kites in the upper atmosphere, experimenting at a meteorological observation site near Glossop in Derbyshire . [ 104 ] Specifically, the Royal Meteorological Society investigated the ionization of the upper atmosphere by suspending instruments from balloons or kites. At Glossop, Wittgenstein worked under the Professor of Physics, Sir Arthur Schuster . [ 105 ]
He also worked on the design of a propeller with small jet ( tip jet ) engines on the end of its blades, something he patented in 1911 and that earned him a research studentship from the university in the autumn of 1908. [ 106 ] Contemporary propeller designs were not advanced enough to put Wittgenstein's ideas into practice, and it would be years before a blade design existed that could support his innovative scheme. Wittgenstein's design required air and gas to be forced along the propeller arms to combustion chambers on the end of each blade, where they were compressed by the centrifugal force exerted by the revolving arms and ignited. Propellers of the time were typically wood, whereas modern blades are made from pressed steel laminates as separate halves, which are then welded together. This gives the blade a hollow interior and thereby creates an ideal pathway for the air and gas. [ 105 ]
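The centrifugal compression step can be illustrated with a textbook rotating-duct estimate; the density, rotation rate, and tip radius below are illustrative assumptions, not figures from Wittgenstein's 1911 patent:

```latex
% Incompressible estimate: in a duct rotating at angular speed \omega,
% the radial momentum balance gives dp/dr = \rho \omega^2 r.
% Integrating from the hub (r = 0) to the blade tip (r = R):
\Delta p = \int_0^R \rho\,\omega^2 r\,\mathrm{d}r = \tfrac{1}{2}\,\rho\,\omega^2 R^2
% Illustrative numbers: \rho = 1.2\,\mathrm{kg/m^3},\ \omega = 150\,\mathrm{rad/s},
% \ R = 1.5\,\mathrm{m} \Rightarrow \Delta p \approx 3 \times 10^4\,\mathrm{Pa},
% i.e. roughly a third of an atmosphere available at the tip combustion chamber.
```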
Work on the jet-powered propeller proved frustrating for Wittgenstein, who had very little experience working with machinery. [ 107 ] Jim Bamber, a British engineer who was his friend and classmate at the time, reported that
when things went wrong, which often occurred, he would throw his arms around, stomp about, and swear volubly in German. [ 108 ]
According to William Eccles, another friend from that period, Wittgenstein then turned to more theoretical work, focusing on the design of the propeller – a problem that required relatively sophisticated mathematics. [ 107 ] It was at this time that he became interested in the foundations of mathematics , particularly after reading Bertrand Russell 's The Principles of Mathematics (1903) and Gottlob Frege 's Basic Laws of Arithmetic ( Grundgesetze der Arithmetik ), vol. 1 (1893) and vol. 2 (1903). [ 109 ] Wittgenstein's sister Hermine said he became obsessed with mathematics as a result, and was in any case losing interest in aeronautics. [ 110 ] He decided instead that he needed to study logic and the foundations of mathematics, describing himself as in a "constant, indescribable, almost pathological state of agitation". [ 110 ] In the summer of 1911 he visited Frege at the University of Jena to show him some philosophy of mathematics and logic he had written, and to ask whether it was worth pursuing. [ 111 ] He wrote:
I was shown into Frege's study. Frege was a small, neat man with a pointed beard who bounced around the room as he talked. He absolutely wiped the floor with me, and I felt very depressed; but at the end he said 'You must come again', so I cheered up. I had several discussions with him after that. Frege would never talk about anything but logic and mathematics, if I started on some other subject, he would say something polite and then plunge back into logic and mathematics. [ 112 ]
Wittgenstein wanted to study with Frege, but Frege suggested he attend the University of Cambridge to study under Russell, so on 18 October 1911 Wittgenstein arrived unannounced at Russell's rooms in Trinity College . [ 113 ] Russell was having tea with C. K. Ogden , when, according to Russell,
an unknown German appeared, speaking very little English but refusing to speak German. He turned out to be a man who had learned engineering at Charlottenburg, but during this course had acquired, by himself, a passion for the philosophy of mathematics & has now come to Cambridge on purpose to hear me. [ 111 ]
He was soon not only attending Russell's lectures but dominating them. The lectures were poorly attended and Russell often found himself lecturing only to C. D. Broad , E. H. Neville , and H. T. J. Norton. [ 111 ] Wittgenstein started following him after lectures back to his rooms to discuss more philosophy, until it was time for the evening meal in Hall . Russell grew irritated; he wrote to his lover Lady Ottoline Morrell : "My German friend threatens to be an infliction." [ 114 ] Russell soon came to believe that Wittgenstein was a genius, especially after he had examined Wittgenstein's written work. He wrote in November 1911 that he had at first thought Wittgenstein might be a crank, but soon decided he was a genius:
Some of his early views made the decision difficult. He maintained, for example, at one time that all existential propositions are meaningless. This was in a lecture room, and I invited him to consider the proposition: 'There is no hippopotamus in this room at present.' When he refused to believe this, I looked under all the desks without finding one; but he remained unconvinced. [ 115 ]
Three months after Wittgenstein's arrival Russell told Morrell:
I love him & feel he will solve the problems I am too old to solve ... He is the young man one hopes for. [ 116 ]
Wittgenstein later told David Pinsent that Russell's encouragement had proven his salvation, and had ended nine years of loneliness and suffering, during which he had continually thought of suicide. In encouraging him to pursue philosophy and in justifying his inclination to abandon engineering, Russell had, quite literally, saved Wittgenstein's life. [ 116 ] The role-reversal between Bertrand Russell and Wittgenstein was soon such that Russell wrote in 1916 after Wittgenstein had criticized Russell's own work:
His [Wittgenstein's] criticism, tho' I don't think you realized it at the time, was an event of first-rate importance in my life, and affected everything I have done since. I saw that he was right, and I saw that I could not hope ever again to do fundamental work in philosophy. [ 117 ]
In 1912 Wittgenstein joined the Cambridge University Moral Sciences Club , an influential discussion group for philosophy dons and students, delivering his first paper there on 29 November that year, a four-minute talk defining philosophy as "all those primitive propositions which are assumed as true without proof by the various sciences". [ 118 ] [ 119 ] [ 120 ] He dominated the society and, in the early 1930s, stopped attending for a time after complaints that he gave no one else a chance to speak. [ 121 ]
The club became infamous within popular philosophy because of a meeting on 25 October 1946 at Richard Braithwaite 's rooms in King's College, Cambridge , where Karl Popper , another Viennese philosopher, had been invited as the guest speaker. Popper's paper was "Are there philosophical problems?", in which he took a position against Wittgenstein's, contending that problems in philosophy are real, not just linguistic puzzles, as Wittgenstein argued. Accounts vary as to what happened next, but Wittgenstein apparently started waving a hot poker, demanding that Popper give him an example of a moral rule. Popper offered one – "Not to threaten visiting speakers with pokers" – at which point Russell told Wittgenstein he had misunderstood and Wittgenstein left. Popper maintained that Wittgenstein "stormed out", but it had become accepted practice for him to leave early (because of his aforementioned ability to dominate discussion). It was the only time the three philosophers, among the most eminent of the 20th century, were ever in the same room together. [ 122 ] [ 123 ] The minutes record that the meeting was "charged to an unusual degree with a spirit of controversy". [ 124 ]
The economist John Maynard Keynes also invited him to join the Cambridge Apostles , an elite secret society formed in 1820, which both Bertrand Russell and G. E. Moore had joined as students, but Wittgenstein did not greatly enjoy it and attended only infrequently. Russell had been worried that Wittgenstein would not appreciate the group's raucous style of intellectual debate, its precious sense of humour, and the fact that the members were often in love with one another. [ 125 ] He was admitted in 1912 but resigned almost immediately because he could not tolerate the style of discussion. Nevertheless, the Cambridge Apostles allowed Wittgenstein to participate in meetings again in the 1920s when he returned to Cambridge. Reportedly, Wittgenstein also had trouble tolerating the discussions in the Cambridge Moral Sciences Club.
Wittgenstein was quite vocal about his depression in his years at Cambridge and before he went to war; on many occasions he told Russell of his woes. His mental anguish seemed to stem from two sources: his work and his personal life. Wittgenstein made numerous remarks to Russell about the logic driving him mad. [ 126 ] He also stated to Russell that he "felt the curse of those who have half a talent". [ 127 ] He later expressed this same worry, telling of being in mediocre spirits due to his lack of progress in his logical work. [ 128 ] Monk writes that Wittgenstein lived and breathed logic, and that a temporary lack of inspiration plunged him into despair. [ 129 ] Wittgenstein described his work in logic as affecting his mental state in an extreme way. However, he also told Russell another story. Around Christmas 1913, he wrote:
how can I be a logician before I'm a human being? For the most important thing is coming to terms with myself! [ 130 ]
On one occasion in Russell's rooms, he told Russell that he was worried about logic and his sins; on another, arriving one night, Wittgenstein announced that he would kill himself once he left. Beyond what Wittgenstein told Russell directly, his temperament was also recorded in the diary of David Pinsent . Pinsent wrote
I have to be frightfully careful and tolerant when he gets these sulky fits
and
I am afraid he is in an even more sensitive neurotic state just now than usual
when talking about Wittgenstein's emotional fluctuations. [ 132 ]
Wittgenstein had romantic relations with both men and women. He is generally believed to have fallen in love with at least three men, and had a relationship with the latter two: David Hume Pinsent in 1912, Francis Skinner in 1930, and Ben Richards in the late 1940s. [ 133 ] He later claimed that, as a teenager in Vienna, he had had an affair with a woman. [ 134 ] Additionally, in the 1920s Wittgenstein fell in love with a young Swiss woman, Marguerite Respinger, sculpting a bust modelled on her and seriously considering marriage, albeit on condition that they would not have children; she decided that he was not right for her. [ 135 ]
Wittgenstein's relationship with David Pinsent occurred during an intellectually formative period and is well documented. Bertrand Russell introduced Wittgenstein to Pinsent in the summer of 1912. Pinsent was a mathematics undergraduate and a relation of David Hume , and Wittgenstein and he soon became very close. [ 136 ] The men worked together on experiments in the psychology laboratory on the role of rhythm in the appreciation of music, and Wittgenstein delivered a paper on the subject to the British Psychological Association in Cambridge in 1912. They also travelled together, including to Iceland in September 1912—the expenses paid by Wittgenstein, including first class travel , the hiring of a private train, and new clothes and spending money for Pinsent. In addition to Iceland, Wittgenstein and Pinsent travelled to Norway in 1913. In determining their destination, Wittgenstein and Pinsent visited a tourist office in search of a location that would fulfil the following criteria: a small village located on a fjord, away from tourists, and peaceful enough to allow them to study logic and law. [ 137 ] Choosing Øystese , Wittgenstein and Pinsent arrived in the small village on 4 September 1913. During a vacation lasting almost three weeks, Wittgenstein was able to work vigorously on his studies. The immense progress on logic during their stay led Wittgenstein to express to Pinsent his notion of leaving Cambridge and returning to Norway to continue his work on logic. [ 138 ] Pinsent's diaries provide valuable insights into Wittgenstein's personality: sensitive, nervous, and attuned to the tiniest slight or change in mood from Pinsent. [ 139 ] [ 140 ] Pinsent also writes of Wittgenstein being "absolutely sulky and snappish" at times. [ 132 ] In his diaries Pinsent wrote about shopping for furniture with Wittgenstein in Cambridge when the latter was given rooms in Trinity. Most of what they found in the stores was not minimalist enough for Wittgenstein's aesthetics:
I went and helped him interview a lot of furniture at various shops ... It was rather amusing: He is terribly fastidious and we led the shopman a frightful dance, Vittgenstein [sic] ejaculating "No – Beastly!" to 90 percent of what he shewed us! [ 141 ]
He wrote in May 1912 that Wittgenstein had just begun to study the history of philosophy:
He expresses the most naive surprise that all the philosophers he once worshipped in ignorance are after all stupid and dishonest and make disgusting mistakes! [ 141 ]
The last time they saw each other was on 8 October 1913 at Lordswood House in Birmingham, then residence of the Pinsent family:
I got up at 6:15 to see Ludwig off. He had to go very early—back to Cambridge—as he has lots to do there. I saw him off from the house in a taxi at 7:00—to catch a 7:30 AM train from New Street Station. It was sad parting from him. [ 140 ]
Wittgenstein left to live in Norway.
Karl Wittgenstein died on 20 January 1913, and on receiving his inheritance Wittgenstein became one of the wealthiest men in Europe. [ 142 ] He donated some of his money, at first anonymously, to Austrian artists and writers, including Rainer Maria Rilke and Georg Trakl . Trakl asked to meet his benefactor, but when Wittgenstein went to visit him in 1914, Trakl had killed himself. Wittgenstein came to feel that he could not get to the heart of his most fundamental questions while surrounded by other academics, and so in 1913 he retreated to the village of Skjolden in Norway, where he rented the second floor of a house for the winter. [ 143 ] He later saw this as one of the most productive periods of his life, writing Logik ( Notes on Logic ), the predecessor of much of the Tractatus . [ 113 ]
While in Norway, Wittgenstein learned Norwegian to converse with the local villagers, and Danish to read the works of the Danish philosopher Søren Kierkegaard . [ 144 ] He adored the "quiet seriousness" of the landscape but even Skjolden became too busy for him. He soon designed a small wooden house which was erected on a remote rock overlooking the Eidsvatnet Lake just outside the village. The place was called "Østerrike" (Austria) by locals. He lived there during various periods until the 1930s, and substantial parts of his works were written there. (The house was broken up in 1958 to be rebuilt in the village. A local foundation collected donations and bought it in 2014; it was dismantled again and re-erected at its original location; the inauguration took place on 20 June 2019 with international attendance.) [ 143 ]
It was during this time that Wittgenstein began addressing what he considered to be a central issue in Notes on Logic , a general decision procedure for determining the truth value of logical propositions that would stem from a single primitive proposition. He became convinced during this time that
All the propositions of logic are generalizations of tautologies and all generalizations of tautologies are propositions of logic. There are no other logical propositions. [ 145 ] [ 146 ]
Based on this, Wittgenstein argued that propositions of logic express their truth or falsehood in the sign itself, and one need not know anything about the constituent parts of the proposition to determine it true or false. Rather, one simply needs to identify the statement as a tautology (true), a contradiction (false), or neither.
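The classification Wittgenstein has in mind can be made mechanical for propositional logic by enumerating truth assignments. The sketch below is a minimal modern illustration (truth-table checking in Python, with invented function names), not Wittgenstein's own notation:

```python
from itertools import product

def classify(formula, num_vars):
    """Classify a propositional formula as a tautology, a contradiction,
    or neither, by evaluating it under every truth assignment.
    `formula` is a function taking num_vars booleans."""
    rows = [formula(*assignment)
            for assignment in product([False, True], repeat=num_vars)]
    if all(rows):
        return "tautology"      # true in every row
    if not any(rows):
        return "contradiction"  # false in every row
    return "neither"

print(classify(lambda p: p or not p, 1))   # tautology
print(classify(lambda p: p and not p, 1))  # contradiction
print(classify(lambda p, q: p or q, 2))    # neither
```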
The problem lay in forming a primitive proposition that encompassed this and would act as the basis for all of logic. As he stated in correspondence with Russell in late 1913,
The big question now is, how must a system of signs be constituted in order to make every tautology recognizable as such IN ONE AND THE SAME WAY? This is the fundamental problem of logic! [ 128 ]
The importance Wittgenstein placed upon this fundamental problem was so great that he believed that if he did not solve it, he had no right or desire to live. [ 147 ] Despite this apparent life-or-death importance, Wittgenstein had given up on this primitive proposition by the time he wrote the Tractatus . The Tractatus does not offer any general process for identifying propositions as tautologies; it states more simply that
Every tautology itself shows that it is a tautology. [ 148 ]
This shift to understanding tautologies through mere identification or recognition occurred in 1914 when Wittgenstein asked Moore to assist him in dictating his notes.
At Wittgenstein's insistence, Moore, who was now a Cambridge don, visited him in Norway in April 1914, reluctantly because Wittgenstein exhausted him. David Edmonds and John Eidinow write that Wittgenstein regarded Moore, an internationally known philosopher, as an example of how far someone could get in life with "absolutely no intelligence whatever". [ 149 ] In Norway it was clear that Moore was expected to act as Wittgenstein's secretary, taking down his notes, with Wittgenstein falling into a rage when Moore got something wrong. [ 150 ]
Brian McGuinness notes that a letter from Wittgenstein to Moore of 7 May 1914 indicates that he had intended to submit an essay he referred to as " Logik " as the dissertation required for his completion of a bachelor's degree. [ 151 ] [ 152 ] McGuinness asserts that the essay is unlikely to be identical with "Notes on Logic" but suggests it is at least summarised in "Notes dictated to G. E. Moore in Norway" (published in Appendix II of Notebooks 1914-1916 ) [ 153 ] and that "much speaks" for the supposition that it was indeed these notes that Wittgenstein had intended to submit. [ 151 ] According to the relevant regulations, however, such a dissertation had to contain a preface and notes in which the student stated the sources on which he had relied and the extent to which he had done so, qualities lacking in Wittgenstein's essay. Moore, though himself secretary of the relevant Moral Sciences degrees committee, showed the essay to Walter Morley Fletcher – perhaps, McGuinness suggests, "for an impartial opinion from an outsider" – and had "been told that it could not possibly pass for a dissertation" and wrote to Wittgenstein accordingly. [ 151 ]
Wittgenstein was furious, writing to Moore:
If I am not worth your making an exception for me even in some STUPID details then I may as well go to HELL directly; and if I am worth it and you don't do it then – by God – you might go there. [ 154 ]
Moore was apparently distraught, writing in his diary that he felt sick and could not get the letter out of his head. [ 155 ] Wittgenstein wrote to Moore in July of that year, conceding that he had "probably no sufficient reason to write to you as I did", [ 151 ] [ 152 ] but the two did not speak again until 1929. [ 150 ]
On the outbreak of World War I, Wittgenstein immediately volunteered for the Austro-Hungarian Army , despite being eligible for a medical exemption. [ 156 ] [ 157 ] He served first on a ship and then in an artillery workshop "several miles from the action". [ 156 ] He was wounded in an accidental explosion and hospitalised in Kraków. [ 156 ] In March 1916, he was posted to a fighting unit on the front line of the Russian front, as part of the Austrian 7th Army , where his unit was involved in some of the heaviest fighting, defending against the Brusilov Offensive . [ 158 ] Wittgenstein directed the fire of his own artillery from an observation post in no-man's land against Allied troops – one of the most dangerous jobs, since he was targeted by enemy fire. [ 157 ] He was decorated with the Military Merit Medal with Swords on the Ribbon, and was commended by the army for "exceptionally courageous behaviour, calmness, sang-froid, and heroism" that "won the total admiration of the troops". [ 159 ] In January 1917, he was sent as a member of a howitzer regiment to the Russian front, where he won several more medals for bravery, including the Silver Medal for Valour , First Class. [ 160 ] In 1918, he was promoted to lieutenant and sent to the Italian front as part of an artillery regiment. For his part in the final Austrian offensive of June 1918, he was recommended for the Gold Medal for Valour, one of the highest honours in the Austrian army, but was instead awarded the Band of the Military Service Medal with Swords – it being decided that this particular action, although extraordinarily brave, had been insufficiently consequential to merit the highest honour. [ 161 ]
Throughout the war, he kept notebooks in which he frequently wrote philosophical reflections alongside personal remarks, including his contempt for the character of the other soldiers. [ 162 ] His notebooks also attest to his philosophical and spiritual reflections, and it was during this time that he experienced a kind of religious awakening. [ 72 ] In his entry from 11 June 1915, Wittgenstein states that
The meaning of life, i.e. the meaning of the world, we can call God. And connect with this the comparison of God to a father. To pray is to think about the meaning of life. [ 163 ]
and on 8 July that
To believe in God means to understand the meaning of life. To believe in God means to see that the facts of the world are not the end of the matter. To believe in God means to see that life has a meaning ... When my conscience upsets my equilibrium, then I am not in agreement with Something. But what is this? Is it the world ? Certainly it is correct to say: Conscience is the voice of God. [ 164 ]
He discovered Leo Tolstoy 's 1896 The Gospel in Brief at a bookshop in Tarnów , and carried it everywhere, recommending it to anyone in distress, to the point where he became known to his fellow soldiers as "the man with the gospels". [ 165 ] [ 166 ]
The extent to which The Gospel in Brief influenced Wittgenstein can be seen in the Tractatus , in the unique way both books number their sentences. [ 167 ] In 1916 Wittgenstein read Dostoevsky 's The Brothers Karamazov so often that he knew whole passages of it by heart, particularly the speeches of the elder Zosima, who represented for him a powerful Christian ideal, a holy man "who could see directly into the souls of other people". [ 73 ] [ 168 ]
Iain King has suggested that Wittgenstein's writing changed substantially in 1916 when he started confronting much greater dangers during frontline fighting. [ 169 ] Russell said he returned from the war a changed man, one with a deeply mystical and ascetic attitude. [ 170 ]
In the summer of 1918, Wittgenstein took military leave and went to stay in one of his family's Vienna summer houses, Neuwaldegg. It was there in August 1918 that he completed the Tractatus , which he submitted with the title Der Satz (German: proposition, sentence, phrase, set, but also "leap") to the publishers Jahoda and Siegel. [ 171 ]
A series of events around this time left him deeply upset. On 13 August, his uncle Paul died. On 25 October, he learned that Jahoda and Siegel had decided not to publish the Tractatus , and on 27 October, his brother Kurt killed himself, the third of his brothers to commit suicide. It was around this time he received a letter from David Pinsent's mother to say that Pinsent had been killed in a plane crash on 8 May. [ 172 ] [ 173 ] Wittgenstein was distraught to the point of being suicidal. He was sent back to the Italian front after his leave and, as a result of the defeat of the Austrian army, he was captured by Allied forces on 3 November in Trentino . He subsequently spent nine months in an Italian prisoner of war camp .
He returned to his family in Vienna on 25 August 1919, by all accounts physically and mentally spent. He apparently talked incessantly about suicide, terrifying his sisters and brother Paul. He decided to do two things: to enroll in a teacher training college as an elementary school teacher, and to get rid of his fortune. In 1914, it had been providing him with an income of 300,000 Kronen a year, but by 1919 was worth a great deal more, with a sizable portfolio of investments in the United States and the Netherlands . He divided it among his siblings, except for Margarete, insisting that it not be held in trust for him. His family saw him as ill and acquiesced. [ 171 ]
In September 1919 he enrolled in the Lehrerbildungsanstalt (teacher training college) in the Kundmanngasse in Vienna. His sister Hermine said that Wittgenstein working as an elementary teacher was like using a precision instrument to open crates, but the family decided not to interfere. [ 174 ] Thomas Bernhard , more critically, wrote of this period in Wittgenstein's life: "the multi-millionaire as a village schoolmaster is surely a piece of perversity". [ 175 ]
In the summer of 1920, Wittgenstein worked as a gardener for a monastery. At first he had applied, under a false name, for a teaching post at Reichenau; he was awarded the job but declined it when his identity was discovered. As a teacher, he wished no longer to be recognized as a member of the Wittgenstein family. In response, his brother Paul wrote:
It is out of the question, really completely out of the question, that anybody bearing our name and whose elegant and gentle upbringing can be seen a thousand paces off, would not be identified as a member of our family ... That one can neither simulate nor dissimulate anything including a refined education I need hardly tell you. [ 176 ]
In 1920, Wittgenstein was given his first job as a primary school teacher in Trattenbach , under his real name, in a remote village of a few hundred people. His first letters describe it as beautiful, but in October 1921, he wrote to Russell: "I am still at Trattenbach, surrounded, as ever, by odiousness and baseness. I know that human beings on the average are not worth much anywhere, but here they are much more good-for-nothing and irresponsible than elsewhere." [ 177 ] He was soon the object of gossip among the villagers, who found him eccentric at best. He did not get on well with the other teachers; when he found his lodgings too noisy, he made a bed for himself in the school kitchen. He was an enthusiastic teacher, offering late-night extra tuition to several of the students, something that did not endear him to the parents, though some of them came to adore him; his sister Hermine occasionally watched him teach and said the students "literally crawled over each other in their desire to be chosen for answers or demonstrations". [ 178 ]
To the less able, it seems that he became something of a tyrant. The first two hours of each day were devoted to mathematics, hours that Monk writes some of the pupils recalled years later with horror. [ 179 ] They reported that he caned the boys and boxed their ears, and also that he pulled the girls' hair; [ 180 ] this was not unusual at the time for boys, but for the villagers he went too far in doing it to the girls too; girls were not expected to understand algebra, much less have their ears boxed over it. The corporal punishment apart, Monk writes that he quickly became a village legend, shouting "Krautsalat!" ("coleslaw" – i.e. shredded cabbage) when the headmaster played the piano, and "Nonsense!" when a priest was answering children's questions. [ 181 ]
While Wittgenstein was living in isolation in rural Austria, the Tractatus was published to considerable interest, first in German in 1921 as Logisch-Philosophische Abhandlung , part of Wilhelm Ostwald 's journal Annalen der Naturphilosophie , though Wittgenstein was not happy with the result and called it a pirate edition. Russell had agreed to write an introduction to explain why it was important because it was otherwise unlikely to have been published: it was difficult if not impossible to understand, and Wittgenstein was unknown in philosophy. [ 182 ] In a letter to Russell, Wittgenstein wrote "The main point is the theory of what can be expressed (gesagt) by prop[osition]s – i.e. by language – (and, which comes to the same thing, what can be thought ) and what can not be expressed by pro[position]s, but only shown (gezeigt); which, I believe, is the cardinal problem of philosophy." [ 183 ] But Wittgenstein was not happy with Russell's help. He had lost faith in Russell, finding him glib and his philosophy mechanistic, and felt he had fundamentally misunderstood the Tractatus . [ 184 ]
The whole modern conception of the world is founded on the illusion that the so-called laws of nature are the explanations of natural phenomena.
Thus people today stop at the laws of nature, treating them as something inviolable, just as God and Fate were treated in past ages. And in fact both were right and both wrong; though the view of the ancients is clearer insofar as they have an acknowledged terminus, while the modern system tries to make it look as if everything were explained.
An English translation was prepared in Cambridge by Frank Ramsey, a mathematics undergraduate at King's, commissioned by C. K. Ogden. It was Moore who suggested Tractatus Logico-Philosophicus for the title, an allusion to Baruch Spinoza's Tractatus Theologico-Politicus. Initially there were difficulties in finding a publisher for the English edition too, because Wittgenstein was insisting it appear without Russell's introduction; Cambridge University Press turned it down for that reason. Finally, in 1922, an agreement was reached with Wittgenstein that Kegan Paul would print a bilingual edition with Russell's introduction and the Ramsey–Ogden translation. [ 185 ] This is the translation that was approved by Wittgenstein, but it is problematic in a number of ways: Wittgenstein's English was poor at the time, and Ramsey was a teenager who had only recently learned German, so philosophers often prefer to use a 1961 translation by David Pears and Brian McGuinness. [ h ]
An aim of the Tractatus is to reveal the relationship between language and the world: what can be said about it, and what can only be shown. Wittgenstein argues that the logical structure of language provides the limits of meaning. The limits of language, for Wittgenstein, are the limits of philosophy. Much of philosophy involves attempts to say the unsayable: "What we can say at all can be said clearly," he argues. Anything beyond that – religion, ethics, aesthetics, the mystical – cannot be discussed. They are not in themselves nonsensical, but any statement about them must be. [ 187 ] He wrote in the preface: "The book will, therefore, draw a limit to thinking, or rather – not to thinking, but to the expression of thoughts; for, in order to draw a limit to thinking we should have to be able to think both sides of this limit (we should therefore have to be able to think what cannot be thought)." [ 188 ]
The book is 75 pages long – "As to the shortness of the book, I am awfully sorry for it ... If you were to squeeze me like a lemon you would get nothing more out of me," he told Ogden – and presents seven numbered propositions (1–7), with various sub-levels (1, 1.1, 1.11): [ 189 ]
In September 1922 he moved to a secondary school in a nearby village, Hassbach , but considered the people there just as bad – "These people are not human at all but loathsome worms," he wrote to a friend – and he left after a month. In November he began work at another primary school, this time in Puchberg in the Schneeberg mountains. There, he told Russell, the villagers were "one-quarter animal and three-quarters human".
Frank P. Ramsey visited him on 17 September 1923 to discuss the Tractatus; he had agreed to write a review of it for Mind. [ 191 ] He reported in a letter home that Wittgenstein was living frugally in one tiny whitewashed room that only had space for a bed, a washstand, a small table, and one small hard chair. Ramsey shared an evening meal with him of coarse bread, butter, and cocoa. Wittgenstein's school hours were eight to twelve or one, and he had afternoons free. [ 192 ] After Ramsey returned to Cambridge, a long campaign began among Wittgenstein's friends to persuade him to return to Cambridge, away from what they saw as a hostile environment for him. He was accepting no help, even from his family. [ 193 ] Ramsey wrote to John Maynard Keynes:
[Wittgenstein's family] are very rich and extremely anxious to give him money or do anything for him in any way, and he rejects all their advances; even Christmas presents or presents of invalid's food, when he is ill, he sends back. And this is not because they aren't on good terms but because he won't have any money he hasn't earned ... It is an awful pity. [ 193 ]
He moved schools again in September 1924, this time to Otterthal , near Trattenbach; the socialist headmaster, Josef Putre, was someone Wittgenstein had become friends with while at Trattenbach. While he was there, he wrote a 42-page pronunciation and spelling dictionary for the children, Wörterbuch für Volksschulen , published in Vienna in 1926 by Hölder-Pichler-Tempsky , the only book of his apart from the Tractatus that was published in his lifetime. [ 185 ] A first edition sold in 2005 for £75,000. [ 194 ] In 2020, an English version entitled Word Book translated by art historian Bettina Funcke and illustrated by artist / publisher Paul Chan was released. [ 195 ]
The Wörterbuch für Volksschulen is remarkable for its pluricentric conceptualization, decades before such a linguistic approach existed. In Wittgenstein's preface to the Wörterbuch , which was withheld at the publisher's request but which survives in a 1925 typescript, Wittgenstein takes a clear stance for a Standard Austrian German , which he aimed to document for elementary pupils in the text. Wittgenstein states (translated from German) that
The dictionary should include only words, but all such words, that are known to Austrian elementary students. Therefore it excludes many a good German word unusual in Austria. [ 196 ]
Through his school dictionary, Wittgenstein was thus one of the earliest proponents of a German language with more than one standard variety. [ 196 ] This is especially noteworthy in the German-language context, where expert debates over the status and relevance of standard varieties are so common that some today speak of a One Standard German Axiom in the field. Wittgenstein took a stance for multiple standards, against such an axiom, long before these debates ensued.
An incident occurred in April 1926 and became known as Der Vorfall Haidbauer (the Haidbauer incident ). Josef Haidbauer was an 11-year-old pupil whose father had died and whose mother worked as a local maid. He was a slow learner, and one day Wittgenstein hit him two or three times on the head, causing him to collapse. Wittgenstein carried him to the headmaster's office, then quickly left the school, bumping into a parent, Herr Piribauer, on the way out. Piribauer had been sent for by the children when they saw Haidbauer collapse; Wittgenstein had previously pulled Piribauer's daughter, Hermine, so hard by the ears that her ears had bled. [ 197 ] Piribauer said that when he met Wittgenstein in the hall that day:
I called him all the names under the sun. I told him he wasn't a teacher, he was an animal-trainer! And that I was going to fetch the police right away! [ 198 ]
Piribauer tried to have Wittgenstein arrested, but the village's police station was empty, and when he tried again the next day he was told Wittgenstein had disappeared. On 28 April 1926, Wittgenstein handed in his resignation to Wilhelm Kundt, a local school inspector, who tried to persuade him to stay; however, Wittgenstein was adamant that his days as a schoolteacher were over. [ 198 ] Proceedings were initiated in May, and the judge ordered a psychiatric report; in August 1926 a letter to Wittgenstein from a friend, Ludwig Hänsel, indicates that hearings were ongoing, but nothing is known about the case after that. Alexander Waugh writes that Wittgenstein's family and their money may have had a hand in covering things up. Waugh writes that Haidbauer died shortly afterwards of haemophilia ; Monk says he died when he was 14 of leukaemia . [ 199 ] [ 200 ] Ten years later, in 1936, as part of a series of "confessions" he engaged in that year, Wittgenstein appeared without warning at the village saying he wanted to confess personally and ask for pardon from the children he had hit. He visited at least four of the children, including Hermine Piribauer, who apparently replied only with a "Ja, ja," though other former students were more hospitable. Monk writes that the purpose of these confessions was not
to hurt his pride, as a form of punishment; it was to dismantle it – to remove a barrier, as it were, that stood in the way of honest and decent thought.
Of the apologies, Wittgenstein wrote,
This brought me into more settled waters... and to greater seriousness. [ 201 ]
The Tractatus was now the subject of much debate among philosophers, and Wittgenstein was a figure of increasing international fame. In particular, a discussion group of philosophers, scientists, and mathematicians, known as the Vienna Circle , had developed purportedly as a result of the inspiration they had been given by reading the Tractatus . [ 193 ] While it is commonly assumed that Wittgenstein was a part of the Vienna Circle, in reality, this was not the case. German philosopher Oswald Hanfling writes bluntly: "Wittgenstein was never a member of the Circle, though he was in Vienna during much of the time." [ 202 ] Indeed it is doubtful, as Brian McGuinness notes, that Wittgenstein ever attended any meetings of the Vienna Circle proper. [ 203 ] Yet, Hanfling asserts, "his influence on the Circle's thought was at least as important as that of any of its members." [ 202 ]
Philosopher A. C. Grayling, however, contends that while certain superficial similarities between Wittgenstein's early philosophy and logical positivism led its members to study the Tractatus in detail and to arrange discussions with him, Wittgenstein's influence on the Circle was rather limited. The fundamental philosophical views of the Circle had been established before they met Wittgenstein, having their origins in the British empiricists, Ernst Mach, and the logic of Frege and Russell. Whatever influence Wittgenstein did have on the Circle was largely limited to Moritz Schlick and Friedrich Waismann and, even in these cases, had little lasting effect on their positivism. Grayling states that "it is no longer possible to think of the Tractatus as having inspired a philosophical movement, as most earlier commentators claimed." [ 204 ]
Schlick first met Wittgenstein in 1927 and did so several times before the latter would agree to be introduced to some of his colleagues. From 1927 to 1928 Wittgenstein met with small groups that included Schlick, almost always Waismann, sometimes Rudolf Carnap, and sometimes Herbert Feigl and his future wife Maria Kesper. From 1929, Wittgenstein's contact with the Circle was restricted to meetings with Schlick and Waismann only. [ 203 ] Conversations from these later meetings (December 1929 up to March 1932) were recorded by Waismann and eventually published in English translation in Ludwig Wittgenstein and the Vienna Circle (1979). [ 205 ] By the time they began, Schlick had tasked Waismann with writing an exposition of Wittgenstein's philosophy. This project underwent radical transformation, but the final text, inspired by Wittgenstein but very much Waismann's own work, was eventually published in English as The Principles of Linguistic Philosophy (1965). [ 206 ] [ 203 ] [ 205 ] Some further draft materials and dictations for the project were published in English under the editorship of Gordon Baker in 2003. [ 207 ]
In his autobiography, Rudolf Carnap describes Wittgenstein as the thinker who most inspired him. However, he also wrote that "there was a striking difference between Wittgenstein's attitude toward philosophical problems and that of Schlick and myself. Our attitude toward philosophical problems was not very different from that which scientists have toward their problems." As for Wittgenstein:
His point of view and his attitude toward people and problems, even theoretical problems, were much more similar to those of a creative artist than to those of a scientist; one might almost say, similar to those of a religious prophet or a seer.... When finally, sometimes after a prolonged arduous effort, his answers came forth, his statement stood before us like a newly created piece of art or a divine revelation ... the impression he made on us was as if insight came to him as through divine inspiration, so that we could not help feeling that any sober rational comment or analysis of it would be a profanation. [ 208 ]
I am not interested in erecting a building, but in [...] presenting to myself the foundations of all possible buildings.
In 1926 Wittgenstein was again working as a gardener for a number of months, this time at the monastery of Hütteldorf, where he had also inquired about becoming a monk. His sister, Margaret, invited him to help with the design of her new townhouse in Vienna's Kundmanngasse . Wittgenstein, his friend Paul Engelmann , and a team of architects developed a spare modernist house. In particular, Wittgenstein focused on the windows, doors, and radiators, demanding that every detail be exactly as he specified. When the house was nearly finished Wittgenstein had an entire ceiling raised 30 mm so that the room had the exact proportions he wanted. Monk writes that "This is not so marginal as it may at first appear, for it is precisely these details that lend what is otherwise a rather plain, even ugly house its distinctive beauty." [ 210 ]
It took him a year to design the door handles and another to design the radiators. Each window was covered by a metal screen that weighed 150 kilograms (330 lb), moved by a pulley Wittgenstein designed. Bernhard Leitner, author of The Architecture of Ludwig Wittgenstein , said there is barely anything comparable in the history of interior design: "It is as ingenious as it is expensive. A metal curtain that could be lowered into the floor." [ 210 ]
The house was finished by December 1928 and the family gathered there at Christmas to celebrate its completion. Wittgenstein's sister Hermine wrote: "Even though I admired the house very much. ... It seemed indeed to be much more a dwelling for the gods." [ 209 ] Wittgenstein said "the house I built for Gretl is the product of a decidedly sensitive ear and good manners, and expression of great understanding ... But primordial life, wild life striving to erupt into the open – that is lacking." [ 211 ] Monk comments that the same might be said of the technically excellent, but austere, terracotta sculpture Wittgenstein had modelled of Marguerite Respinger in 1926, and that, as Russell first noticed, this "wild life striving to be in the open" was precisely the substance of Wittgenstein's philosophical work. [ 211 ]
According to Feigl (as reported by Monk), after attending a lecture in Vienna by the mathematician L. E. J. Brouwer, Wittgenstein was greatly impressed and began to consider the possibility of a "return to Philosophy". At the urging of Ramsey and others, Wittgenstein returned to Cambridge in 1929. Keynes wrote in a letter to his wife: "Well, God has arrived. I met him on the 5.15 train." [ 212 ] Despite this fame, he could not initially work at Cambridge, as he had failed to obtain a degree, so he applied as an advanced undergraduate. Russell noted that his previous residency was sufficient to fulfil eligibility requirements for a PhD, and urged him to offer the Tractatus as his thesis. [ 213 ] It was examined in 1929 by Russell and Moore; at the end of the thesis defence, Wittgenstein clapped the two examiners on the shoulder and said, "Don't worry, I know you'll never understand it." [ 214 ] Braithwaite, quoting from memory, recalls that Moore wrote in the examiner's report: "I myself consider that this is a work of genius; but, even if I am completely mistaken and it is nothing of the sort, it is well above the standard required for the Ph.D. degree." [ 215 ] Wittgenstein was appointed as a lecturer and was made a fellow of Trinity College.
From 1936 to 1937, Wittgenstein lived again in Norway, [ 216 ] where he worked on the Philosophical Investigations . In the winter of 1936/7, he delivered a series of "confessions" to close friends, most of them about minor infractions like white lies, in an effort to cleanse himself. In 1938, he travelled to Ireland to visit Maurice O'Connor Drury , a friend who became a psychiatrist, and considered such training himself, with the intention of abandoning philosophy for it. The visit to Ireland was at the same time a response to the invitation of the then Irish Taoiseach , Éamon de Valera , himself a former mathematics teacher. De Valera hoped Wittgenstein's presence would contribute to the Dublin Institute for Advanced Studies which he was soon to set up. [ 217 ]
While he was in Ireland in March 1938, Germany annexed Austria in the Anschluss; the Viennese Wittgenstein was now a Jew under the 1935 Nuremberg racial laws, because three of his grandparents had been born as Jews. In July he would also become, by law, a 'national' of the enlarged Germany, being, as a Jew, ineligible to become a Reich citizen. [ 218 ] The Nuremberg Laws classified people as Jews ( Volljuden ) if they had three or four Jewish grandparents, and as mixed blood ( Mischling ) if they had one or two. It meant, among other things, that the Wittgensteins were restricted in whom they could marry or have sex with, and where they could work. [ 219 ]
After the Anschluss, his brother Paul left almost immediately for England, and later the US. The Nazis discovered his relationship with Hilde Schania, a brewer's daughter with whom he had had two children but whom he had never married, though he did later. Because she was not Jewish, he was served with a summons for Rassenschande (racial defilement). He told no one he was leaving the country, except for Hilde who agreed to follow him. He left so suddenly and quietly that for a time people believed he was the fourth Wittgenstein brother to have committed suicide. [ 220 ]
Wittgenstein began to investigate acquiring British or Irish citizenship with the help of Keynes, and apparently had to confess to his friends in England that he had earlier misrepresented himself to them as having just one Jewish grandparent, when in fact he had three. [ 221 ]
A few days before the invasion of Poland, Hitler personally granted Mischling status to the Wittgenstein siblings. [ 222 ] In 1939 there were 2,100 applications for Mischling status (or for 'promotions' within such status) and Hitler granted only 12. [ 223 ] [ 224 ] Anthony Gottlieb writes that the pretext was that their paternal grandfather had been the bastard son of a German prince, which allowed the Reichsbank to claim foreign currency, stocks and 1,700 kg of gold held in Switzerland by a Wittgenstein family trust. Gretl, an American citizen by marriage, started the negotiations over the racial status of their grandfather, and the family's large foreign currency reserves were used as a bargaining tool. Paul had escaped to Switzerland and then the US in July 1938, and disagreed with the negotiations, leading to a permanent split between the siblings. After the war, when Paul was performing in Vienna, he did not visit Hermine, who was dying there, and he had no further contact with Ludwig or Gretl. [ 27 ]
After G. E. Moore resigned the chair in philosophy in 1939, Wittgenstein was elected. He was naturalised as a British subject shortly after on 12 April 1939. [ 225 ] In July 1939 he travelled to Vienna to assist Gretl and his other sisters, visiting Berlin for one day to meet an official of the Reichsbank . After this, he travelled to New York to persuade Paul, whose agreement was required, to back the scheme. The required Befreiung was granted in August 1939. The unknown amount signed over to the Nazis by the Wittgenstein family, a week or so before the outbreak of war, included amongst many other assets 1,700 kg of gold. [ 226 ]
Norman Malcolm , at the time a post-graduate research fellow at Cambridge, describes his first impressions of Wittgenstein in 1938:
At a meeting of the Moral Science Club, after the paper for the evening was read and the discussion started, someone began to stammer a remark. He had extreme difficulty in expressing himself and his words were unintelligible to me. I whispered to my neighbour, 'Who's that?': he replied, 'Wittgenstein'. I was astonished because I had expected the famous author of the Tractatus to be an elderly man, whereas this man looked young – perhaps about 35. (His actual age was 49.) His face was lean and brown, his profile was aquiline and strikingly beautiful, his head was covered with a curly mass of brown hair. I observed the respectful attention that everyone in the room paid to him. After this unsuccessful beginning, he did not speak for a time but was obviously struggling with his thoughts. His look was concentrated, he made striking gestures with his hands as if he was discoursing ... Whether lecturing or conversing privately, Wittgenstein always spoke emphatically and with a distinctive intonation. He spoke excellent English, with the accent of an educated Englishman, although occasional Germanisms would appear in his constructions. His voice was resonant ... His words came out, not fluently, but with great force. Anyone who heard him say anything knew that this was a singular person. His face was remarkably mobile and expressive when he talked. His eyes were deep and often fierce in their expression. His whole personality was commanding, even imperial. [ 227 ]
Describing Wittgenstein's lecture programme, Malcolm continues:
It is hardly correct to speak of these meetings as 'lectures', although this is what Wittgenstein called them. For one thing, he was carrying on original research in these meetings ... Often the meetings consisted mainly of dialogue. Sometimes, however, when he was trying to draw a thought out of himself, he would prohibit, with a peremptory motion of the hand, any questions or remarks. There were frequent and prolonged periods of silence, with only an occasional mutter from Wittgenstein, and the stillest attention from the others. During these silences, Wittgenstein was extremely tense and active. His gaze was concentrated; his face was alive; his hands made arresting movements; his expression was stern. One knew that one was in the presence of extreme seriousness, absorption, and force of intellect ... Wittgenstein was a frightening person at these classes. [ 228 ]
After work, the philosopher would often relax by watching Westerns, preferring to sit at the very front of the cinema, or by reading detective stories, especially those written by Norbert Davis. [ 229 ] [ 230 ] Norman Malcolm wrote that Wittgenstein would rush to the cinema when class ended. [ 231 ]
By this time, Wittgenstein's view on the foundations of mathematics had changed considerably. In his early 20s, he had thought logic could provide a solid foundation, and he had even considered updating Russell and Whitehead's Principia Mathematica. Now he denied that there were any mathematical facts to be discovered. He gave a series of lectures on mathematics, discussing this and other topics; the lectures, together with discussions between him and several students, were documented in a book. Among those students was the young Alan Turing, who described Wittgenstein as "a very peculiar man". The two had many discussions about the relationship between computational logic and everyday notions of truth. [ 232 ] [ 233 ] [ 234 ]
Wittgenstein's lectures from this period have also been discussed by another of his students, the Greek philosopher and educator Helle Lambridis . Wittgenstein's teachings in the years 1940–1941 were used in the mid-1950s by Lambridis to write a long text in the form of an imagined dialogue with him, where she begins to develop her own ideas about resemblance in relation to language, elementary concepts and basic-level mental images. Initially, only a part of it was published in 1963 in the German education theory review Club Voltaire , but the entire imagined dialogue with Wittgenstein was published after Lambridis's death by her archive holder, the Academy of Athens , in 2004. [ 235 ] [ 236 ]
Monk writes that Wittgenstein found it intolerable that a war ( World War II ) was going on and he was teaching philosophy. He grew angry when any of his students wanted to become professional philosophers. [ i ]
In September 1941, he asked John Ryle, the brother of the philosopher Gilbert Ryle, if he could get a manual job at Guy's Hospital in London. John Ryle was professor of medicine at Cambridge and had been involved in helping Guy's prepare for the Blitz. Wittgenstein told Ryle he would die slowly if left at Cambridge, and that he would rather die quickly. He started working at Guy's shortly afterwards as a dispensary porter, delivering drugs from the pharmacy to the wards, where he apparently advised the patients not to take them. [ 237 ] In the new year of 1942, Ryle took Wittgenstein to his home in Sussex to meet his wife, who had been determined to meet him. His son recorded the weekend in his diary:
Wink is awful strange [sic] – not a very good english speaker, keeps on saying 'I mean' and 'its "tolerable " ' meaning intolerable. [ 238 ]
The hospital staff were not told he was one of the world's most famous philosophers, and though some of the medical staff did recognize him – at least one had attended Moral Sciences Club meetings – they were discreet. "Good God, don't tell anybody who I am!" Wittgenstein begged one of them. [ 239 ] Some of them nevertheless called him Professor Wittgenstein, and he was allowed to dine with the doctors. He wrote on 1 April 1942: "I no longer feel any hope for the future of my life. It is as though I had before me nothing more than a long stretch of living death. I cannot imagine any future for me other than a ghastly one. Friendless and joyless." [ 237 ] It was at this time that Wittgenstein had an operation at Guy's to remove a gallstone that had troubled him for some years. [ 240 ]
He had developed a friendship with Keith Kirk, a working-class teenage friend of Francis Skinner , the mathematics undergraduate he had had a relationship with until Skinner's death in 1941 from polio . Skinner had given up academia, thanks at least in part to Wittgenstein's influence, and had been working as a mechanic in 1939, with Kirk as his apprentice. Kirk and Wittgenstein struck up a friendship, with Wittgenstein giving him lessons in physics to help him pass a City and Guilds exam. During his period of loneliness at Guy's he wrote in his diary: "For ten days I've heard nothing more from K, even though I pressed him a week ago for news. I think that he has perhaps broken with me. A tragic thought!" [ 241 ] Kirk had in fact got married, and they never saw one another again. [ 242 ]
While Wittgenstein was at Guy's he met Basil Reeve, a young doctor with an interest in philosophy, who, with R. T. Grant, [ 243 ] was studying the effect of wound shock [ 244 ] (a state associated with hypovolaemia [ 245 ] ) on air-raid casualties. When the Blitz ended there were fewer casualties to study. In November 1942, Grant and Reeve moved to the Royal Victoria Infirmary, Newcastle upon Tyne, to study road traffic and industrial casualties. Grant offered Wittgenstein a position as a laboratory assistant at a wage of £4 per week, and he lived in Newcastle (at 28 Brandling Park, Jesmond [ 243 ] ) from 29 April 1943 until February 1944. [ 246 ] While there he worked [ 247 ] [ 248 ] and associated socially with Erasmus Barlow, [ 248 ] a great-grandson of Charles Darwin. [ 249 ]
In the summer of 1946, Wittgenstein thought often of leaving Cambridge and resigning his position as Chair. Wittgenstein grew further dismayed at the state of philosophy, particularly about articles published in the journal Mind . It was around this time that Wittgenstein fell in love with Ben Richards (who was a medical student), writing in his diary, "The only thing that my love for B. has done for me is this: it has driven the other small worries associated with my position and my work into the background." On 30 September, Wittgenstein wrote about Cambridge after his return from Swansea, "Everything about the place repels me. The stiffness, the artificiality, the self-satisfaction of the people. The university atmosphere nauseates me." [ 250 ]
Of his friends from Guy's Hospital, Wittgenstein maintained contact only with Fouracre, who had joined the army in 1943 after his marriage and did not return until 1947. During Fouracre's time away, Wittgenstein corresponded with him frequently, expressing a desire for him to return home urgently from the war. [ 250 ]
In May 1947, Wittgenstein addressed a group of Oxford philosophers for the first time, at the Jowett Society. The discussion concerned the validity of Descartes' Cogito ergo sum; Wittgenstein ignored the question and applied his own philosophical method instead. Harold Arthur Prichard, who attended the event, was not pleased with Wittgenstein's methods:
Wittgenstein: If a man says to me, looking at the sky, 'I think it will rain, therefore I exist', I do not understand him.
Prichard: That's all very fine; what we want to know is: is the cogito valid or not? [ 251 ]
Death is not an event in life: We do not live to experience death. If we take eternity to mean not infinite temporal duration but timelessness, then eternal life belongs to those who live in the present. Our life has no end in the way in which our visual field has no limits.
Wittgenstein resigned from the professorship at Cambridge in 1947 to concentrate on his writing, and in 1947 and 1948 travelled to Ireland, staying at Ross's Hotel in Dublin and at a farmhouse in Redcross, County Wicklow, where he began the manuscript MS 137, volume R. [ 252 ] Seeking solitude, he moved to a holiday cottage in Rosroe overlooking Killary Harbour, Connemara, owned by Drury's brother. [ 253 ]
He also accepted an invitation from Norman Malcolm, then a professor at Cornell University, to stay with him and his wife for several months in Ithaca, New York . He made the trip in April 1949, although he told Malcolm he was too unwell to do philosophical work: "I haven't done any work since the beginning of March & I haven't had the strength of even trying to do any." A doctor in Dublin had diagnosed anaemia and prescribed iron and liver pills. The details of Wittgenstein's stay in the US are recounted in Norman Malcolm's Ludwig Wittgenstein: A Memoir . [ 254 ] During his summer in the US, Wittgenstein began his epistemological discussions, in particular his engagement with philosophical scepticism , that would eventually become the final fragments On Certainty .
He returned to London, where he was diagnosed with inoperable prostate cancer, which had spread to his bone marrow. He spent the next two months in Vienna, where his sister Hermine died on 11 February 1950; he went to see her every day, but she was hardly able to speak or recognize him. "Great loss for me and all of us," he wrote. "Greater than I would have thought." He moved frequently after Hermine's death, staying with various friends: to Cambridge in April 1950, where he stayed with G. H. von Wright; to London to stay with Rush Rhees; then to Oxford to see Elizabeth Anscombe, writing to Norman Malcolm that he was doing hardly any philosophy. He went to Norway in August with Ben Richards, then returned to Cambridge, where on 27 November he moved into Storey's End at 76 Storey's Way, the home of his doctor, Edward Bevan, and his wife Joan; he had told them he did not want to die in a hospital, so they said he could spend his last days in their home instead. Joan was at first afraid of Wittgenstein, but they soon became good friends. [ 252 ] [ 255 ]
By the beginning of 1951, it was clear that he had little time left. He wrote a new will in Oxford on 29 January, naming Rhees as his executor, and Anscombe and von Wright as his literary administrators, and wrote to Norman Malcolm that month to say, "My mind's completely dead. This isn't a complaint, for I don't really suffer from it. I know that life must have an end once and that mental life can cease before the rest does." [ 255 ] In February, he returned to the Bevans' home to work on MS 175 and MS 176. These and other manuscripts were later published as Remarks on Colour and On Certainty . [ 252 ] He wrote to Malcolm on 16 April, 13 days before his death:
An extraordinary thing happened to me. About a month ago I suddenly found myself in the right frame of mind for doing philosophy. I had been absolutely certain that I'd never again be able to do it. It's the first time after more than 2 years that the curtain in my brain has gone up. – Of course, so far I've only worked for about 5 weeks & it may be all over by tomorrow; but it bucks me up a lot now. [ 256 ]
Wittgenstein began work on his final manuscript, MS 177, on 25 April 1951. It was his 62nd birthday on 26 April. He went for a walk the next afternoon, and wrote his last entry that day, 27 April. That evening, he became very ill; when his doctor told him he might live only a few days, he reportedly replied, "Good!". Joan stayed with him throughout that night, and just before losing consciousness for the last time on 28 April, he told her: "Tell them I've had a wonderful life." Norman Malcolm describes this as a "strangely moving utterance". [ 256 ]
Four of Wittgenstein's former students arrived at his bedside – Ben Richards, Elizabeth Anscombe , Yorick Smythies , and Maurice O'Connor Drury . Anscombe and Smythies were Catholics, and, at the latter's request, a Dominican friar, Father Conrad Pepler , also attended. (Wittgenstein had asked for a "priest who was not a philosopher" and had met with Pepler several times.) [ 257 ] They were at first unsure what Wittgenstein would have wanted, but then remembered he had said he hoped his Catholic friends would pray for him, so they did, and he was pronounced dead shortly afterwards.
Wittgenstein was given a Catholic burial at Ascension Parish Burial Ground in Cambridge. [ 258 ] Drury later said he had been troubled ever since about whether that was the right thing to do. [ 259 ] In 2015 the ledger gravestone was refurbished by the British Wittgenstein Society. [ 260 ]
As for his religious views, Wittgenstein was said to be greatly interested in Catholicism , and was sympathetic to it, but did not consider himself to be a Catholic. According to Norman Malcolm, Wittgenstein saw Catholicism more as a way of life than as a set of beliefs he held, considering that he did not accept any religious faith. [ j ]
Wittgenstein has no goal to either support or reject religion; his only interest is to keep discussions, whether religious or not, clear. — T. Labron (2006) [ 262 ]
Wittgenstein was said by some commentators to be agnostic , in a qualified sense. [ k ] [ 264 ]
I won't say 'See you tomorrow' because that would be like predicting the future, and I'm pretty sure I can't do that.
The Blue Book , a set of notes dictated to his class at Cambridge in 1933–1934, contains the seeds of Wittgenstein's later thoughts on language and is widely read as a turning point in his philosophy of language.
Philosophical Investigations was published in two parts in 1953. Most of Part I was ready for printing in 1946, but Wittgenstein withdrew the manuscript from his publisher. The shorter Part II was added by his editors, Elizabeth Anscombe and Rush Rhees . Wittgenstein asks the reader to think of language as a multiplicity of language games within which parts of language develop and function. He argues that the bewitchments of philosophical problems arise from philosophers' misguided attempts to consider the meaning of words independently of their context, usage, and grammar — what he called "language gone on holiday". [ 266 ]
According to Wittgenstein, philosophical problems arise when language is forced from its proper home into a metaphysical environment, where all the familiar and necessary landmarks and contextual clues are removed. He describes this metaphysical environment as like being on frictionless ice: where the conditions are apparently perfect for a philosophically and logically perfect language, all philosophical problems can be solved without the muddying effects of everyday contexts; but where, precisely because of the lack of friction, language can in fact do no work at all. [ 267 ] Wittgenstein argues that philosophers must leave the frictionless ice and return to the "rough ground" of ordinary language in use. Much of the Investigations consists of examples of how the first false steps can be avoided, so that philosophical problems are dissolved, rather than solved: "The clarity we are aiming at is indeed complete clarity. But this simply means that the philosophical problems should completely disappear." [ 268 ]
Wittgenstein's archive of unpublished papers included 83 manuscripts, 46 typescripts and 11 dictations, amounting to an estimated 20,000 pages. Choosing among repeated drafts, revisions, corrections, and loose notes, editorial work has found nearly one-third of the total suitable for print. [ 269 ] An Internet facility hosted by the University of Bergen allows access to images of almost all the material and to search the available transcriptions. [ 270 ] In 2011, two new boxes of Wittgenstein papers, thought to have been lost during the Second World War, were found. [ 271 ] [ 272 ]
What became the Philosophical Investigations was already close to completion in 1951. Wittgenstein's three literary executors prioritized it, both because of its intrinsic importance and because he had explicitly intended publication. The book was published in 1953.
At least three other works were more or less finished. Two were already "bulky typescripts", the Philosophical Remarks and Philosophical Grammar . Literary co-executor G. H. von Wright stated, "They are virtually completed works. But Wittgenstein did not publish them." [ 273 ] The third was Remarks on Colour . "He wrote i.a. a fair amount on colour-concepts, and this material he did excerpt and polish, reducing it to a small compass." [ 274 ]
Bertrand Russell described Wittgenstein as "perhaps the most perfect example I have ever known of genius as traditionally conceived; passionate, profound, intense, and dominating." [ 275 ]
In 1999, a survey among American university and college teachers ranked the Investigations as the most important book of 20th-century philosophy, standing out as "the one crossover masterpiece in twentieth-century philosophy, appealing across diverse specializations and philosophical orientations". [ 276 ] [ 277 ] The Investigations also ranked 54th on a list of most influential twentieth-century works in cognitive science prepared by the University of Minnesota 's Center for Cognitive Sciences. [ 278 ]
Duncan J. Richter of the Virginia Military Institute , writing for the Internet Encyclopedia of Philosophy , has described Wittgenstein as "one of the most influential philosophers of the twentieth century, and regarded by some as the most important since Immanuel Kant ." [ 279 ] Peter Hacker argues that Wittgenstein's influence on 20th-century analytical philosophy can be attributed to his early influence on the Vienna Circle and later influence on the Oxford "ordinary language" school and Cambridge philosophers. [ 280 ]
He is considered by some to be one of the greatest philosophers of the modern era. [ 281 ] But despite its deep influence on analytical philosophy, Wittgenstein's work did not always gain a positive reception. Argentine-Canadian philosopher Mario Bunge asserts that "Wittgenstein is popular because he is trivial." [ 282 ]
There are many diverging interpretations of Wittgenstein's thought. In the words of his friend and colleague Georg Henrik von Wright :
He was of the opinion ... that his ideas were generally misunderstood and distorted even by those who professed to be his disciples. He doubted that he would be better understood in the future. He once said that he felt as though he were writing for people who would think in a quite different way, breathe a different air of life, from that of present-day men. [ 283 ]
Since Wittgenstein's death, scholarly interpretations of his philosophy have diverged. Scholars have differed on the continuity between the so-called early Wittgenstein and the so-called late(r) Wittgenstein (that is, between the views expressed in the Tractatus and those in the Philosophical Investigations ), with some seeing the two as starkly disparate and others stressing the gradual transition between the two works through analysis of Wittgenstein's unpublished papers (the Nachlass ). [ 284 ]
One significant debate in Wittgenstein scholarship concerns the work of interpreters who are referred to under the banner of The New Wittgenstein school such as Cora Diamond , Alice Crary , and James F. Conant . While the Tractatus , particularly in its conclusion, seems paradoxical and self-undermining, New Wittgenstein scholars advance a " therapeutic " understanding of Wittgenstein's work – "an understanding of Wittgenstein as aspiring, not to advance metaphysical theories, but rather to help us work ourselves out of confusions we become entangled in when philosophizing." [ 285 ] To support this goal, the New Wittgenstein scholars propose a reading of the Tractatus as "plain nonsense" – arguing it does not attempt to convey a substantive philosophical project but instead simply tries to push the reader to abandon philosophical speculation. The therapeutic approach traces its roots to the philosophical work of John Wisdom [ 286 ] and of Oets Kolk Bouwsma . [ 287 ] [ 284 ] : 54
The therapeutic approach is not without critics: Hans-Johann Glock argues that the "plain nonsense" reading of the Tractatus "is at odds with the external evidence, writings and conversations in which Wittgenstein states that the Tractatus is committed to the idea of ineffable insight." [ 284 ] : 56
Hans Sluga and Rupert Read have advocated a "post-therapeutic" or "liberatory" interpretation of Wittgenstein. [ 288 ] [ 289 ] [ 290 ]
In October 1944, Wittgenstein returned to Cambridge around the same time as did Russell, who had been living in the United States for several years. Russell returned to Cambridge after a backlash in the US to his writings on morals and religion. Wittgenstein said of Russell's works to Drury:
Russell's books should be bound in two colours...those dealing with mathematical logic in red – and all students of philosophy should read them; those dealing with ethics and politics in blue – and no one should be allowed to read them. [ 291 ]
Russell made similar disparaging comments about Wittgenstein's later work:
I have not found in Wittgenstein's Philosophical Investigations anything that seemed to me interesting and I do not understand why a whole school finds important wisdom in its pages. Psychologically this is surprising. The earlier Wittgenstein, whom I knew intimately, was a man addicted to passionately intense thinking, profoundly aware of difficult problems of which I, like him, felt the importance, and possessed (or at least so I thought) of true philosophical genius. The later Wittgenstein, on the contrary, seems to have grown tired of serious thinking and to have invented a doctrine which would make such an activity unnecessary. I do not for one moment believe that the doctrine which has these lazy consequences is true. I realize, however, that I have an overpoweringly strong bias against it, for, if it is true, philosophy is, at best, a slight help to lexicographers, and at worst, an idle tea-table amusement. [ 292 ]
Saul Kripke 's 1982 book Wittgenstein on Rules and Private Language contends that the central argument of Wittgenstein's Philosophical Investigations is a devastating rule-following paradox that undermines the possibility of ever following rules in our use of language. Kripke writes that this paradox is "the most radical and original sceptical problem that philosophy has seen to date". [ 293 ]
Kripke's book generated a large secondary literature, divided between those who find his sceptical problem interesting and perceptive, and others, such as John McDowell , Stanley Cavell , Gordon Baker , Peter Hacker , Colin McGinn , [ 294 ] and Peter Winch who argue that his scepticism of meaning is a pseudo-problem that stems from a confused, selective reading of Wittgenstein. Kripke's position has, however, recently been defended against these and other attacks by the Cambridge philosopher Martin Kusch (2006).
A collection of Ludwig Wittgenstein's manuscripts is held by Trinity College, Cambridge. | https://en.wikipedia.org/wiki/Ludwig_Wittgenstein |
Ludwig Wittgenstein considered his chief contribution to be in the philosophy of mathematics, a topic to which he devoted much of his work between 1929 and 1944. [ 1 ] As with his philosophy of language, Wittgenstein's views on mathematics evolved from the period of the Tractatus Logico-Philosophicus: he moved from the logicism endorsed by his mentor Bertrand Russell towards a general anti-foundationalism and constructivism that was not readily accepted by the mathematical community. The success of Wittgenstein's general philosophy has tended to displace the real debates on more technical issues. [ citation needed ]
His Remarks on the Foundations of Mathematics contains his compiled views, notably a controversial repudiation of Gödel's incompleteness theorems .
Wittgenstein's initial conception of mathematics was logicist and even formalist. [ 1 ] The Tractatus described the propositions of logic as a series of tautologies derived from syntactic manipulation, lacking the pictorial force of elementary propositions that depict states of affairs obtaining in the world.
Wittgenstein asserted that “[t]he logic of the world, which is shown in tautologies by the propositions of logic, is shown in equations by mathematics” (6.22) and further that “Mathematics is a method of logic” (6.234).
During the 1920s Wittgenstein turned away from philosophical matters, but his interest in mathematics was rekindled when he attended a lecture in Vienna by the intuitionist L. E. J. Brouwer. After 1929, his primary mathematical preoccupation was resolving the account of logical necessity he had articulated in the Tractatus Logico-Philosophicus—an issue which had been fiercely pressed by Frank P. Ramsey. [ 2 ] Wittgenstein's initial response, Some Remarks on Logical Form, was the only academic paper he published during his lifetime, and it marked the beginning of a departure from the ideal language philosophy and correspondence theory of truth of the Tractatus.
During the two terms of 1938/9, Wittgenstein lectured to students, without any notes, for two hours twice a week. From four sets of notes made during the lectures, a text was later created, presenting Wittgenstein's views at that time. [ 3 ]
An editorial team prepared the edition of Wittgenstein's Remarks on the Foundations of Mathematics from the manuscript notes he made during the years 1937–44. The material was arranged in chronological order, making it possible to observe changes of emphasis or interest in Wittgenstein's views over the years. [ 4 ] | https://en.wikipedia.org/wiki/Ludwig_Wittgenstein's_philosophy_of_mathematics
The Ludwik Fleck Prize is an annual award given for a book in the field of science and technology studies . It was created by the 4S Council ( Society for the Social Studies of Science ) in 1992 and is named after microbiologist Ludwik Fleck . [ 1 ] [ 2 ]
The prize is named after the Polish microbiologist and sociologist Ludwik Fleck (1896–1961), author of Genèse et développement d'un fait scientifique (1935), which influenced Thomas Samuel Kuhn's conception of the history of science, constructivist epistemology, and various fields of research such as the sociology of science, the sociology of scientific knowledge, science studies and the social construction of technologies. | https://en.wikipedia.org/wiki/Ludwik_Fleck_Prize
In biochemistry, the Luebering–Rapoport pathway (also called the Luebering–Rapoport shunt) is a metabolic pathway in mature erythrocytes involving the formation of 2,3-bisphosphoglycerate (2,3-BPG), which regulates oxygen release from hemoglobin and delivery to tissues. 2,3-BPG, the reaction product of the Luebering–Rapoport pathway, was first described and isolated in 1925 by the Austrian biochemist Samuel Mitja Rapoport and his technical assistant Jane Luebering. [ 1 ] [ 2 ] [ 3 ]
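The two enzymatic steps of the shunt, described in more detail below, can be summarized schematically (a condensed restatement of the reactions in the next paragraph, with enzyme names over the arrows and P_i denoting inorganic phosphate):

$$\text{1,3-BPG} \xrightarrow{\text{bisphosphoglycerate mutase}} \text{2,3-BPG} \xrightarrow{\text{bisphosphoglycerate phosphatase}} \text{3-PG} + \mathrm{P_i}$$

Because this route bypasses the ATP-generating phosphoglycerate kinase step of glycolysis, each molecule of 1,3-BPG diverted through the shunt costs the cell one ATP relative to the main pathway.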
Through the Luebering–Rapoport pathway, bisphosphoglycerate mutase catalyzes the transfer of a phosphoryl group from C1 to C2 of 1,3-BPG, giving 2,3-BPG. 2,3-Bisphosphoglycerate, the most concentrated organophosphate in the erythrocyte, forms 3-PG by the action of bisphosphoglycerate phosphatase. The concentration of 2,3-BPG varies proportionately with the pH, since a low pH inhibits the catalytic action of bisphosphoglycerate mutase. Under physiological conditions, the flux through the Luebering–Rapoport shunt is 19% of the main glycolytic flux. [ 4 ] | https://en.wikipedia.org/wiki/Luebering–Rapoport_pathway
German Luftwaffe and Kriegsmarine radar equipment during World War II relied on an increasingly diverse array of communications, IFF and RDF equipment to function. Most of this equipment received the generic prefix FuG ( German : Funkgerät ), meaning "radio equipment". During the war, Germany renumbered its radars, moving from a scheme that used the year of introduction as the number to a different numbering scheme.
No German ground radar was accurate enough for flak fire direction. The method of operation during the day was for radar to direct the flak's optical fire control towards the target. Once this was acquired, the flak was controlled by the optical equipment to complete the engagement. During the night, the radar would be used to indicate the target to the searchlight crews. The rest of the engagement would be carried out optically. During the day, fighters would be directed with sufficient precision for them to come into visual contact with their targets, while during the night they would use their onboard aircraft interception (AI) radar to find the target after initial direction from the ground-based radars.
The Würzburg, first operational in the summer of 1940, had a parabolic antenna with a diameter of about 3 metres, which in some models could be folded in half for transport. The Würzburg was produced in the thousands, with estimates ranging between 3,000 and 4,000 sets, plus up to 1,500 sets of the Würzburg Riese. The antenna of the Würzburg Riese weighed over 9.5 tons; its parabolic surface had a diameter of 7.5 metres and a focal length of 1.7 metres. Only one German company had the technical skill to build these radars, and that was Zeppelin. [ 2 ] The name of the unit was chosen at random by pointing at a map of Germany, and Würzburg was the result. [ 3 ]
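As an illustrative calculation (not from the source), the depth of a paraboloidal reflector follows from its diameter D and focal length f via the standard relation d = D²/(16f); applied to the 7.5-metre dish quoted above:

$$d = \frac{D^{2}}{16f} = \frac{(7.5\,\text{m})^{2}}{16 \times 1.7\,\text{m}} \approx 2.1\,\text{m}$$

The corresponding focal ratio, f/D ≈ 0.23, indicates a deep dish with the feed close to the aperture plane.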
FuMG 62 / FuMG 39 Würzburg : 3D fire-control radar. Used to direct the flak optical directors and searchlights. Wavelength 50 cm approx. In response to jamming various models of Würzburg radar were developed to operate on various frequencies called "Islands". [ 2 ]
Würzburg A First production version introduced in 1940. 50 cm operating wavelength. Operation range was approximately 30 km. Included an IFF system that worked with the FuG 25z airborne unit. [ 2 ]
Würzburg B Integrated IR telescope to increase accuracy. Proved unsatisfactory and not placed into production. [ 2 ]
Würzburg C Replaced the model A in production in 1941. It had lobe switching to improve accuracy. On this unit the integral IFF system was replaced by a system based on the FuG 25a airborne unit. To support this system, which worked at approximately 125–160 MHz, two antennas were placed inside the main dish. Separate interrogation and receiving units were attached to display the IFF responses. [ 2 ]
Würzburg D Replaced the model C in production in 1942. It now had a usable range of approximately 40 km. Conical scan was used for fine accuracy. The IFF antenna was now fitted in the center of the dish rather than on the sides. Better instruments were fitted and generally, it was the best of the small Würzburg. [ 2 ]
FuMG 65 Würzburg Riese (Giant) : The electronics of the D model Würzburg combined with a 7-meter dish to improve resolution and range. Range approx 70 km. Version E was a unit modified to fit on railroad flatcars, producing a mobile flak radar system. Version G had the 2.4-meter antenna and electronics of a Freya installed, with the antenna dipoles inside the reflector. The reason for this was that the Allies were flying very high reconnaissance flights, above the maximum height coverage of the Freya, while the standard Würzburg Riese's 50 cm beam was too narrow to find them directly. By combining the two systems, the Freya could set the Würzburg Riese onto the target. [ 4 ] [ 2 ]
FuMG 63 Mainz The Mainz, introduced in 1941, was a development of the Würzburg, with its 3-meter solid metal reflector mounted on top of the same type of control car as used by the 'Kurmark'. Its range was 25–35 km with an accuracy of ±10–20 meters, azimuth 0.1 degrees, and elevation ±0.3–0.5 degrees. Only 51 units were produced before it was superseded by the 'Mannheim'.
FuMG 64 Mannheim The Mannheim was an advanced development from the ‘Mainz’. It also had a 3-meter reflector, which was now made from a lattice framework covered in a fine mesh. This was fixed to the front of a control cabin and the whole apparatus was rotated electrically. Its range was 25–35 km, with an accuracy of ±10–15 meters; azimuth and elevation accuracy of ±0.15 degrees. Though accurate enough to control Flak guns it was not deployed in large numbers. This was due to its cost (time and materials to manufacture was about three times that of a Würzburg D).
FuMG 75 Mannheim Riese Just as the Würzburg's performance was greatly improved when fitted with a 7-meter reflector, so was the Mannheim's, and the result was called the Mannheim Riese (Giant Mannheim). There was an optical device for the initial visual acquisition of the target. With its narrow beam it was relatively immune to 'Window'. Its accuracy and automatic tracking enabled it to be used in anti-aircraft missile research to track and control missiles in flight. Only a handful were manufactured.
FuMG 68 Ansbach There was a need for a mobile radar with the range and accuracy of the 'Mannheim'. The result, in 1944, was the Ansbach. It had a collapsible reflector 4.5 meters in diameter, operating on a wavelength of 53.6 cm with a peak power of 8 kW, giving it a normal range of 25–35 km (70 km in search mode) with an accuracy of 30–40 meters. Azimuth and elevation accuracy was around ±0.2°. The antenna and reflector were remote-controlled from a Bayern control van up to 30 meters away. The control system was based on the remote-control system of the Michael microwave communication system, which was in turn based on the Ward-Leonard AC/DC control system. The Ansbach was to be installed in large flak batteries with six or more guns, but only a few were produced by the end of the war, and these did not see operational service.
FuMG 450 Freya / FuMG 41G : This was a 2D early-warning radar (2D meaning unable to indicate height). It was used for fighter direction and for target indication for the Würzburg. Operating wavelength was approx 2.4 metres (125 MHz). In response to jamming, various models were developed to operate on different frequencies, called "Islands". Over 1,000 units were delivered in various models.
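The wavelength and frequency figures quoted for these sets are consistent, as a quick check with the standard relation between wavelength, frequency, and the speed of light shows for the Freya (an illustrative calculation, not from the source):

$$\lambda = \frac{c}{f} = \frac{3\times10^{8}\,\text{m/s}}{125\times10^{6}\,\text{Hz}} = 2.4\,\text{m}$$

The same relation puts the Würzburg's quoted 50 cm wavelength at roughly 600 MHz.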
FuMG 401 / FMG 42 Freya-LZ (Models A–D): An air-portable version; the model differences reflected operating frequency ranges in four discrete bands between 91 and 200 MHz.
Freya-Rotschwarz and Freya-Grünschwarz : These were Freyas modified to operate on the same frequency as the British radio navigation system GEE, to avoid jamming. However, as the Germans were themselves jamming GEE by the time these were ready, it is not clear whether any were ever deployed. [ 5 ]
FuMG 451 A Freya Flamme : Freyas which had been built to use the "Island D" band were modified to be able to trigger British IFF equipment. Ranges of up to 450 km were obtained. It fell from use as British IFF procedures improved. [ 5 ]
FuMG 401 Freya Fahrstuhl : A 3D version of the Freya (3D meaning it could measure height). Measurements were made by moving the antenna up and down on a rack, giving only a very rough estimate of height. Originally intended for early warning, most of the systems produced went to help "jammed" Würzburgs.
Freya EGON : EGON stood for Erstling-Gemse Offensive Navigation system, where Erstling was the codename of the FuG 25a transceiver in the aircraft and Gemse was the codename for the ground receiver. The system operated on a principle similar to the British OBOE navigation system. An IFF signal was sent to the aircraft from a Freya that had had its receiver antenna removed. The FuG 25a in the aircraft responded, and the received signal was displayed as a range offset on the Freya display. Using a second transmitter and triangulation, the position of the aircraft was resolved. Though the system was tested for guiding night fighters, it was found to be too limited by the number of aircraft it could control at one time (the same limitation was found with Oboe); the "Y system" was used instead for night-fighter control. The EGON system was used to control pathfinders for bombing raids over both England and Russia; however, by now the Luftwaffe bomber force was running out of planes, pilots and fuel, so the results were minimal. Work was done using a third transmitter to improve system performance. Range with a normal Freya was up to 250 km; work was underway to use a Wasserman system instead of a Freya to increase range to 350 km (the Freya signal was too weak to trigger the FuG 25a at ranges beyond 250 km), but this was not completed.
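The position fix described above (two measured ranges from known ground stations) amounts to a standard two-circle intersection. The Python sketch below is purely illustrative; the function name, station coordinates and ranges are hypothetical and not taken from EGON documentation.

```python
import math

def range_range_fix(p1, r1, p2, r2):
    """Intersect two range circles (station position, measured range);
    returns the two candidate positions, of which the controller would
    pick the operationally plausible one."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        raise ValueError("ranges inconsistent with station geometry")
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)   # along-baseline offset from p1
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))    # perpendicular offset
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return ((mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d))

# Hypothetical stations 100 km apart, ranges in km:
print(range_range_fix((0.0, 0.0), 180.0, (100.0, 0.0), 150.0))
```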
For area air defense (as opposed to point defense) Freya's range was found to be insufficient. This led to attempts to use Freya technology to achieve greater range, resulting in the Wassermann and Mammut. Although the Mammut units achieved their aims, they were large installations with large arrays built on bunkers, which meant long building times and vulnerability to air attack. The Wassermann was a better solution in that, being smaller, the installations were harder to locate and quicker to build (3–4 weeks). However, sources indicate that they never achieved the desired range of 400 km; the best was approximately 300 km. This may be why so many variants were deployed.
FuMG 401 Mammut : First deployed in 1942, this was a long-range 2D search radar. It consisted of eight Freya-class antennas arranged in a 4 x 2 configuration, measuring 25 meters wide and 10 meters high and mounted on four pylons fixed in concrete. Some installations had a second array mounted back-to-back. Each array could be electronically swung through about 100 degrees, so the dual-sided array could look behind itself to continue tracking bombers as they flew into Germany. Frequency was the same as Freya (125 MHz). Range was up to 300 km with a transmit power of 200 kW. Being very large, installations took up to four months to build.
FuMG 402 Wassermann : This system was deployed in 1942. It was basically six Freya antennas mounted on a rotatable cylindrical tower. Frequencies were similar to Freya (125 MHz) and transmit power was 100 kW, resulting in a usable range of approximately 200 km. Three main versions were produced, with sub-variants in each class.
Wassermann L : The original light version. Some sources indicate that it had structural problems.
Wassermann S : The heavy version, first deployed late 1942. Some sources indicate it had more than six arrays.
Wassermann M : The last family were the medium-class units. Again, it is not clear exactly how many Freya arrays were attached to the mast. In 1944 this version received a modification that allowed it to electronically tilt its beams by 16 degrees, which allowed it to perform height determination, turning it into a 3D search radar.
Elefant & See Elefant : These bi-static radars were an attempt to combine jamming resistance with long range. They operated in two bands, 23–28 MHz or 32–38 MHz. Range was approximately 400 km, but under certain RF conditions much greater ranges were obtained. Antennas were usually mounted on Wassermann towers (all units differed in detail from one another). Three Elefants were in operation at the end of the war, along with one See Elefant. Sources are unclear on what the difference between the two types was.
The first type of early-warning radar set giving a panoramic display to come into operation is usually referred to as the Jagdschloss, although its official designation is Jagdschloss F, to distinguish it from later types such as the Michael B and Z.
Jagdschloss F : The antenna was 24 m wide and 3 m high, consisting of sixteen pairs of double horizontal transmit and receive dipoles. Above this, an 8.5-metre-wide antenna array of eight vertical dipoles was mounted for the IFF. The first 62 Jagdschloss were of the Voll Wismar type, using wide-band antennas covering the band 1.90–2.20 metres. Another 18 used the band 1.20–1.90 metres. Range was 100 km. An optional feature known as Landbriefträger (Postman) was a remote PPI display for use with Jagdschloss. This allowed the PPI display from the radar station to be sent simultaneously to command HQ by HF cable or by a UHF radio link.
Jagdschloss Michael B : A ponderous aerial array of two rows of eighteen Würzburg mirrors, measuring 56 metres long by 7 metres high, was used in the Würzmann experimental early-warning radar, and formed the aerial array for Jagdschloss Michael B with the array in a horizontal position. The wavelength employed was 53.0–63.8 cm. Range was approximately 250 km. None may have entered service, though one source mentions one doing so.
Forsthaus F : This system was a development of the Jagdschloss Michael B using the so-called Euklid waveband (25–29 cm) employed by the Navy. Once more a very long aerial array, 48 metres long and about 8 metres high, was used, employing a cylindrical paraboloid. A waveguide antenna (Hohlraumstrahler) was placed along the focal line, with a second and a third waveguide parallel to it above and below respectively. Range was expected to be over 200 km. Probably none were completed.
Forsthaus KF : A development of the Forsthaus F, reduced in size so that the system would fit in a railway carriage. Antenna 24 metres long; range 120 km.
Dreh Freya : This set, also known as Freya Panorama , was first introduced in June 1944. It consisted of a Freya aerial of the Breitband type working in Bereich I (1.90–2.50 m), the frequency of which could be adjusted at will. The aerial was built so that it rotated through 360° and gave a remote panoramic presentation. About 20 units were in use in January 1945. The range claimed for it was only about 100 km.
Jagdhütte : This apparatus, produced by Siemens, gave a panoramic PPI display of the German IFF responses, using 24- or 36-metre rotating aerials. The wavelength employed was 2.40 metres, and it was planned, with its aid, to trigger the FuG 25a. In this way friendly fighters were to be controlled from the ground at ranges up to about 300 km. It was fully realised that if the FuG 25a frequency were ever jammed the Jagdhütte would be useless, but it was not considered likely that the Allies would attempt to jam it. Small numbers may have been completed at the end of the war.
Jagdwagen : The Jagdwagen was designed as a mobile panoramic radar to control fighters at close range immediately behind the front. It was a project of the firm of Lorenz. The aerials were considerably smaller than those of the Jagdhütte, the array being only 8 metres long, and were to be mounted on the Kumbach stand as used in the Egerland Flak set. The frequency band used was that of the ASV set Hohentwiel, namely 53–59 cm. Range 40–60 km. Prototypes only.
Jagdhaus (FuMG 404) : The Jagdhaus was designed and built by Lorenz in 1944 as an early-warning radar. It was the most powerful radar built by the Germans, with a peak pulse power of 300 kW, which Lorenz planned to increase to 750 kW. The whole assembly was the size of a house, which is possibly how it got its name (‘Haus’ being German for ‘house’). The rotating upper part of the construction housed the separate parabolic transmit and receive antennas and reflectors, with the IFF above them as usual. It weighed 48 tons and rotated at 10 rpm. It operated on wavelengths of 1.4 to 1.8 metres, and had a range of about 300 km. It could measure altitude, azimuth and range. The control room was located below the antennas, from which its PPI image was also transmitted to command HQ at Charlottenburg by Landbriefträger, similar to the Jagdschloss system. It is believed that only one Jagdhaus was constructed; it fell into Soviet hands in 1945, being damaged in the process. The Soviets compelled the Germans to repair it and instruct them in its operation.
Lichtenstein B/C - FuG 202 : A low-UHF-band set, introduced in 1941 as the initial AI radar . Deployed in large numbers with 32-dipole-element Matratze (mattress) antenna arrays, it operated on the 61 cm wavelength. Its range was in theory 2–3 km, but in practice was found to depend on factors such as height. Compromised to the Allies on May 9, 1943 . [ 6 ]
Lichtenstein C-1 - FuG 212 : Introduced in 1943, this was an improved version of the FuG 202.
Lichtenstein SN2 - FuG 220 : A low-to-mid VHF-band set, introduced in 1943 in response to Allied jamming, using an eight-dipole Hirschgeweih (a stag's antlers) antenna array. Transmitter power was 2 kW on 3.3 meters. Range was increased to 6 km. The minimum range of 400 m was found to be a problem, hence aircraft carried both it and the FuG 202; later versions did away with the need for the FuG 202. Compromised to the Allies in July 1944 .
Lichtenstein SN3 - FuG 228 : A higher-powered version of the SN2, with range increased to 8 km. Only a small number were accepted into service, perhaps only prototypes.
FuG 214 : This was an "add-on" unit to the SN2 which gave it an additional, rear-facing antenna installation. This was in response to Allied night fighters accompanying the bomber streams to hunt the German night fighters while they hunted the bombers; the idea was to prevent Allied fighters attacking the German fighters from behind.
Neptun 1 - FuG 216 : A small number of experimental sets fitted to the Fw 190 and Bf 109. Wavelength 1.3 to 1.8 meters.
Neptun 2 - FuG 217 : A small number of sets fitted to the Fw 190 and Bf 109. Wavelength 1.6 to 1.8 meters; some had a rear-warning component.
Neptun 3 - FuG 218 : A replacement for the SN2, deployed late 1944 after the SN2 was jammed. Wavelength 1.6 to 1.9 meters, most often using the same eight-dipole "stag's antlers" antenna array with shorter dipole elements. Range up to 5 km. Some were fitted to the Me 262 to create night fighters that could catch Mosquito intruders.
Neptun 4 - FuG 219 : An increased-power version of the FuG 218; experimental sets only.
Berlin A - FuG 224 : The first centimetric (3 GHz) band radar, based on a captured H2S radar unit codenamed "Rotterdam". An unknown number were built, but under 100. Range was 5 km under ideal conditions; 10 cm wavelength.
Berlin N1 - FuG 240 N : Combination of the Berlin A and the SN2. Only small numbers delivered.
Berlin N2 : An increased-power Berlin N; range reported to be 9 km.
Berlin N3/N4 : Experimental units.
Bremen - FuG 244 : (also known as Berlin D) A Berlin A with the operating wavelength changed from 9 cm to 3 cm (10 GHz). Experimental.
Bremen O - FuG 245 : Another experimental 3 cm unit.
Neptun : An early system which failed its acceptance tests; it was later reworked into an aircraft-intercept set.
Hohentwiel (FuG 200) : A UHF-band radar, operating at wavelengths between 52 and 57 cm. Range was between 10 km for a small vessel such as a surfaced submarine and 70 km for a large ship; under the best circumstances it could see the coast at approximately 150 km. It had separate antennas for transmit and receive. The transmit antenna was centrally mounted, pointing forward, while the two receive antennas were mounted on either side, pointing outwards by 30 degrees, giving it a search beam width of about 120 degrees. Each antenna array consisted of sixteen horizontally polarised dipoles, mounted in four groups of four in a vertical stack.
A variant of the Hohentwiel, the Tiefentwiel (FuMG 407) , was tried as an air-surveillance radar on the coast to detect low-flying aircraft.
FuMO 1 - Calis A : Its 6.2 x 2.5 m antenna consisted of two rows of eight full-wave vertical dipoles. Its wavelength was 82 cm and its range depended on the height at which it was installed above sea level, but was typically about 15–20 km. [ 7 ] Given the frequency, low-angle reflections from the sea surface (clutter) would have been an issue.
FuMO 2 - Calis B : An improved version of the FuMO 1, with similar clutter problems but an improved transmitter and better accuracy.
FuMO 3 - Zerstörersäule : A version of the destroyer radar modified for land use.
FuMO 4 - Dunkirchen : An improved version of the FuMO 2; otherwise similar.
FuMO 5 - Boulogne : Yet another improved version of the FuMO 2, with increased transmitter power and an improved aerial; usable range now 40–50 km.
FuMO 11 - Renner : A 3 m antenna from a Würzburg combined with a 9 cm "Berlin" unit and mounted on a Seetakt base, optimized for sea search rather than air search. Sources differ on usable range.
FuMO 12 & 13 : Improved Renner units, attempting to compensate for the poor reliability of the original unit.
FuMO 15 - Scheer : A combination of a 9 cm Berlin and an antenna from a Giant Würzburg; it seems to have been optimized for surface search in the same way as the Renner series.
FuMO 51 - Mammut G : A version of the Luftwaffe FuMG 401 but with Seetakt antennas and waveforms to optimise it for surface search rather than air search.
FuMO 214 - Giant Würzburg : Naval designation for the air force unit.
FuMO 215 - See Riese : Improved FuMO 214.
FuMO 52 : Naval designation for the FuMG 401 Mammut C.
FuMO 64 : A version of the Hohentwiel L ASV radar modified for coastal air search; different from the unsuccessful xxx.
FuMO 221 : Naval designation for the FuMG 64 Mannheim.
FuMO 301 - 303 : Versions of the FuMG 39-41 Freya.
FuMO 311 - 318 : Versions of the Freya working on other frequencies (around 2.2 meters) than the normal Freya; sometimes known as the Freiburg.
FuMO 321 - 328 : Based on the FuMO 311 family of units but working at 1.5 meters.
FuMO 331 : Naval designation for the FuMG 402 Wassermann M.
FuMO 371 : Naval designation for the FuMG 403 Jagdschloss.
FuMO 201 - Flakleit : A 3D radar using Seetakt 80 cm technology, mounted on an underground armoured turret (originally an optical rangefinder position); small numbers were produced. Multiple antennas. Manufactured by GEMA.
FuMO 211 - 213 : Naval designations for the FuMG 62 family of radars - the Würzburg A, C & D.
FuMO 215 : See Riese.
FuMO 221 : Mannheim.
FuMO 111 : Barbara, a 9 cm fire-control radar based on modifying a FuMO 15 Giant Würzburg to operate at 9 cm. Only experimental radars were produced.
FuMO 214 : A Würzburg-Riese reconfigured for use as a naval radar, with a range of approximately 50–70 km against surface targets.
FuMO 215 : An improved-range version of the FuMO 214.
Although the Germans were carrying out research at centimetre wavelengths at the start of the war, the work was abandoned as it was decided that the war would be over before the research and development could be completed. In February 1943 an RAF Stirling bomber was shot down over Rotterdam and a damaged H2S system was recovered. The Germans started a crash development program to use the information deduced from the captured system. Although a range of prototypes was produced, very few reached front-line troops. Because the device was recovered near Rotterdam, the Germans used that name in several code names for the centimetre (9 cm) systems, such as the "Rotterdam Device".
Rotterdam : To get the quickest start with development, German industry copied, as far as possible, the H2S system. Approximately 20 systems were manufactured for R&D work. They led to the Roderich jammer and the Berlin & Korfu receivers.
Jagdschloss Z : The 9 cm version of the Jagdschloss F panoramic radar system. Prototypes only.
Forsthaus Z : The 9 cm version of the Forsthaus panoramic search radar. Prototypes only.
FuMG 77 : Rotterheim. A combination of the 9 cm receiver/transmitter of the Berlin system with the antenna and other systems from a Mannheim. Its range was about 30 km and it was found to be unaffected by Allied jamming. Its name was changed to Marbach V later in the war.
FuMG 76 : Marbach. A combination of the Berlin transmitter/receiver with the Ansbach 4.5-meter reflector and systems, controlled by the "Michael" remote-control system. Sources suggest that three systems were completed.
FuMG 74 : Kulmbach. A 9 cm panoramic search radar with a 6-meter antenna, remote-controlled like the FuMG 76; when combined with that radar it was known as the Egerland system. Only two were completed, with a range of approximately 50 km.
FuG 221 Freya-Halbe : This was a Freya modified to locate British airborne jammers. Development was completed, but due to lack of parts it was never deployed. [ 5 ]
FuG 221 Rosendahl : This was a Freya modified to locate British bombers by tracking their Monica warning-radar emissions. By the time development was completed the British had ceased using Monica, so it was never deployed. [ 5 ]
FuG 223 : A family of passive airborne receivers tuned to various radar bands such as those of Freya and Würzburg, designed to allow night fighters to home onto bombers fitted with jammers against those radars. The FuG 223 was a version built from surplus FuG 227 components that detected energy reflected from an aircraft being illuminated by a ground radar; in this way it was an example of an early semi-active radar homing system. In order to work, it seems that the radar beam had to illuminate both the target and the night fighter so that the two receivers could be synchronized. Used by one test and development squadron at the end of the war.
FuG 227 Flensburg : Built using some components from the FuG 220 range of AI equipment, this was a passive device which allowed night fighters to home onto bombers that had their rear-warning 'Monica' radar active. Monica was a short-range VHF radar (200 MHz band) fitted to the tail of British heavy bombers, facing down and back, to give the rear turret gunner a warning display. Using this equipment the night fighters could achieve intercepts with apparent ease. It was extremely effective until the British captured a Junkers Ju 88G-1 night fighter with FuG 227 installed in July 1944 [ 8 ] and realised its mode of operation; thereafter Monica was removed from bombers and the FuG 227 ceased to have any value.
Klein Heidelberg was the code-name given to a passive radar system devised in 1941. The system was a bi-static radar; what was unusual was that the transmitters were British rather than German. The system worked by using reflections from Chain Home (the British coastal radar system) rather than from transmitters associated with its own receivers. Klein Heidelberg sensed Chain Home (CH) transmission pulses directly with a small auxiliary antenna close to the main antenna, whose receiver was tuned to a particular CH station whose exact location, bearing and range were known. The CH signal was then used to synchronise the KH with the CH transmission pulses. The CH pulse started a circular trace on a cathode-ray tube (CRT) divided into forty sections. The main antenna received the reflection of these pulses from the target and displayed them on the CRT. Range was between 300 and 600 km. The display was 2D. Resolution was not very good, but it allowed the Germans to see bomber formations forming up over England and the general path of the bomber streams. Its big advantage was that it was not possible for the British to jam it without jamming their own radars. The system entered service in late 1943, and by late 1944 six systems were commissioned on the Dutch coast. [ 9 ]
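The geometry implied by this description is the standard bistatic ellipse: the delay of the target echo behind the direct CH pulse fixes the total transmitter-to-target-to-receiver path length, and the bearing measured by the main antenna then pins the target down along that ellipse. A minimal sketch of the relation, with all numbers hypothetical:

```python
import math

C_KM_PER_US = 0.29979  # speed of light in km per microsecond

def kh_target_range(baseline_km, echo_delay_us, bearing_offset_deg):
    """Receiver-to-target range on the bistatic ellipse.

    baseline_km        - known distance from the KH site to the CH station
    echo_delay_us      - delay of the target echo behind the direct CH pulse
    bearing_offset_deg - angle at the receiver between the target bearing
                         and the direction to the CH station
    """
    s = baseline_km + C_KM_PER_US * echo_delay_us   # total path r_tx + r_rx
    phi = math.radians(bearing_offset_deg)
    # Law of cosines for r_tx, combined with r_tx + r_rx = s:
    return (s ** 2 - baseline_km ** 2) / (2 * (s - baseline_km * math.cos(phi)))

# Hypothetical: CH station 200 km away, 1000 us echo delay, target 90 deg off
print(kh_target_range(200.0, 1000.0, 90.0))   # ~210 km
```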
FuG 350 Naxos & FuG 351 Korfu : This was a family of radar detectors that operated in the 8 to 12 cm band, primarily designed to locate Allied H2S radar transmissions. A range of antennas was used, some stationary and some rotating. Air, land and maritime versions were intended. However, Naxos had a resolution problem that limited its ability to distinguish individual aircraft: it allowed the night fighter to locate the bomber stream but not usually individual bombers. This was not usually an issue with the maritime-based system (primarily on U-boats), as there was usually only one aircraft detected at a time. To reduce this issue an improved version, the Korfu, was developed. It was intended to field Korfu as a replacement for Naxos in all three versions, but due to a shortage of components only the land-based version was fielded, where its resolution could be used to best effect. [ 10 ]
FuG 350 Naxos Z : The original system; it detected the H2S radar on bombers. It could not distinguish individual bombers or detect the 10 GHz H2X Allied bombing radar, but could reliably guide the fighter into the bomber stream.
FuG 350 Naxos ZR : Additional aerials added a tail warning system which allowed British night-fighters to be detected.
FuG 350 Naxos ZX : A 3 cm version for detecting Allied H2X radars. Not known to have ever been fielded.
FuG 350 Naxos RX : 3 cm version of the Naxos ZR. Not known to have ever been fielded.
FuG 350 Naxos ZD : Combined Z and ZX, allowing 9 cm and 3 cm detection in the same system.
FuG 351 Korfu Z : Entered production late 1944; due to a shortage of components only ground-based versions were deployed, though an airborne version completed development. Better range and discrimination than Naxos.
FuG 280 Kiel Z : An IR-based passive receiver with a 10-degree field of view and display via CRT. There were problems with discriminating target aircraft from other IR sources such as fires. [ 10 ]
Falter : Based on the FuG 280 Kiel Z but detecting British IR recognition systems. Development was not completed. [ 10 ] | https://en.wikipedia.org/wiki/Luftwaffe_and_Kriegsmarine_radar_equipment_of_World_War_II
During World War II , the German Luftwaffe relied on an increasingly diverse array of electronic communications, IFF and RDF equipment as avionics in its aircraft and also on the ground. Most of this equipment received the generic prefix FuG for Funkgerät , meaning "radio equipment". Most of the aircraft-mounted radar equipment also used the FuG prefix. This article is a list and description of the radio, IFF and RDF equipment.
FuG I : An early receiver/transmitter set manufactured by Lorenz . It operated in the 600 to 1667 kHz range (generally the entire American AM radio broadcast band) at a power of 20 to 100 watts, depending on installation.
FuG II : An update of the FuG 1, also manufactured by Lorenz, that operated in the 310 to 600 kHz frequency range, the lower end of the MF band.
FuG 03 : Codenamed Stuttgart, an airborne receiver/transmitter set used in bombers. It was fitted in the Do 11 , Do 17 E and F , Fw 58 , He 114 , Ju 52 , Ar 66 , Ar 96 , Junkers W 33 and W 34 . The set consisted of the S 3a transmitter and E 2a receiver; power came from a G 3 air-driven generator and two 90-volt dry cells. The FuG 03 operated in the 1250 to 1400 kHz frequency range.
FuG 7 : A compact airborne receiver/transmitter used in fighters and dive bombers. Prior to 1943, it was fitted in the Bf 109C to G-2 and Fw 190 A-0 to A-3 . After 1943, it was still fitted in the Ju 87 and Hs 129 . The FuG 7 typically operated in the 2.5 to 7.5 MHz range, with a power of approximately 7 watts. Its range was approximately 50 km in good weather. Later versions included the FuG 7a, which comprised the S 6a transmitter, E 5a receiver and VK 5 A junction box.
FuG 10 series : A family of transceivers for both R/T and W/T communications. The German FuG 10 panel, or rack, contained two transmitters and two receivers: One transmitter and its companion receiver operated in the MF or Longwave ; 300 to 600 kHz (1,000 to 500 m) range and the other transmitter and its companion receiver operated in the HF or Shortwave range; 3 to 6 MHz (100 to 50 m). Most of the FuG 10 series used a fixed wire aerial between the fuselage and tailfin or a retractable trailing aerial wire. The FuG 10P replaced the standard E 10L longwave receiver with an EZ6 unit for a G6 direction finding set. The FuG 10ZY incorporated a fixed loop D/F aerial and a homing device for navigation to a ground station. This loop aerial, usually fitted on a small, "teardrop" shaped mounting, was standard equipment on most fighter aircraft from late 1943 on. Manufactured by Lorenz. [ 1 ] [ 2 ] Typical power was 70 watts.
FuG 11 : Developed as a replacement for the FuG 10 series. It had no MF mode and up to 3 kW output, with the HF-only transceiving range increased to 3–30 MHz (the entire HF band), using CW and AM voice, and with reduced volume, cost and weight. It was intended to be combined with the PeilG 6 and FuBL 2, and could be fitted with a remote-control system that allowed the pilot to control it rather than the radio operator. Development was completed, but it was never deployed, as there was little demand for long-range bomber communications in 1944.
FuG 13 : Designed to supplement early versions of the FuG 10 to improve long-range communications. Frequency range 3 to 20 MHz, 20 watts output power. Deployed on long-range aircraft such as the Fw 200 Condor. Improvements in the FuG 10 family removed the need for this additional radio and it was withdrawn from service.
FuG 15 : Intended as the next standard aircraft transceiver to replace earlier series units, and unusual in using FM as well as AM for voice. Operating frequency 37.8 to 47.7 MHz. It could be fitted with a remote-control system that allowed the pilot to control it rather than the radio operator. Production was planned to start in 1942, but service trials showed problems and deployment was stopped; it was replaced by the FuG 16. Completed units were rebuilt as BS 15 navigation radio beacons in 1945.
FuG 16 Z, ZE and ZY : These sets were airborne VHF transceivers used in single-seat fighter aircraft for R/T and W/T communications, and were also used for ground fixes and DF homing on ground stations when used in conjunction with the FuG 10P or FuG 10ZY. Installed in the Bf 109G-3/G-4 and later, and the Fw 190A-4 and later subtypes. Frequency range was 38.5 to 42.3 MHz. The FuG 16ZY was also used for Y-Verfahren ( Y-Control ), in which aircraft were fitted up as Leitjäger or Fighter Formation Leaders that could be tracked and directed from the ground via special R/T equipment. Aircraft equipped with ZY were fitted with a Morane whip aerial array. Principal components:
Transmitter, receiver and modulator in one case: S 16 Z transmitter, E 16 Z receiver, NG 16 Z modulator
Dynamotor U 17
Antenna Matching unit AAG 16 Z
Modulator Unit MZ 16
Homing Unit ZVG 16
Indicator AFN - 2
FuG 17 Z and ZY : These sets were airborne VHF transceivers used in close-air-support aircraft for R/T and W/T communications with ground units. The frequency range was 42 to 48.3 MHz, matching the ground forces' FuG 7 radio fitted to command tanks and reconnaissance units. The FuG 17 was identical to the FuG 16 with the exception of the frequency range, and seems to have been deployed first. In the FuG 17ZY version it was also used for Y-Verfahren ( Y-Control ), though it seems to have been superseded in this role by the FuG 16ZY when that became available.
FuG 18 : Developed in 1944 as an improvement to the FuG 15. Frequency range 24 - 75 MHz. FM & AM voice. FuG 18Y included the ability for Y-control, blind landing and Hermine beacon receive.
FuG 24 : This set was developed from the FuG 16 as a simplified and cost-reduced system, intended for the Heinkel He 162 and later aircraft. It did not have a direction-finder capability or a Y-Control interface. The frequency range was 42 to 48.3 MHz, FM and AM voice only. The FuG 24Z added Y-Control, blind landing and Hermine beacon-receiving capability.
FuG 29 : A development unit designed to work as a receiver for the running commentary - " Laufende Reportage " - transmitted from radio-navigation stations to aid the day and night fighters participating in the defense of the Reich. Due to the high power of the transmitters, the signal was almost immune to interference from jamming. It was an AM receiver with a frequency range of 150 kHz to 6 MHz and six push-button preselected frequencies, but details are lacking and development was never completed.
Peilgerät (PeilG) 6 : Codenamed "Alex Sniatkowski", this was a long- and medium-range D/F set and homing device used mainly on bombers: the Ar 234 , Do 217 , Ju 87 , Ju 88A-4 onwards, Ju 188 , Ju 290 , Ju 388 , the He 177 A heavy bomber (Germany's only "heavy bomber" design in service), and both the He 219 A and Ju 88G night-fighter series are some of the aircraft types fitted. Frequency range was 150 to 1,200 kHz. A "flat" equivalent of a D/F loop was used for the Peilgerät device to reduce drag compared with a protruding D/F loop antenna; it was made up of a series of metal strips in a "sunburst" pattern, often fitted under a round, flush-fitting plexiglass cover. A small "whip" aerial was also fitted to the FuG 10 radio mast. Manufactured by Telefunken . Version PeilG 5 was of similar performance but used a manually commanded loop antenna, controlled via an electric servo motor. Versions 1–4 had manual control, either via cable linkage or directly via an attached handle.
FuBL 2 : Used with the Knickebein beam navigation and bombing system. It consisted of the EBL 3 and EBL 2 receivers with the AFN 2 display device. The EBL 3 operated between 30 and 33 MHz and received 34 channels; the EBL 2 operated at 38 MHz and was unchanged from the FuBL 1 system. The AFN 2 provided the pilot with a left/right display and a signal strength. The unit was available in two versions: the FuBL 2 H, operated by the radio operator, and the FuBL 2 F, for remote operation by the pilot in a single-seat aircraft. The primary difference between the EBL 1 and the EBL 3 was sensitivity, to allow what was basically an ILS system to be used for bombing. [ 3 ]
FuG 28a : The Y-Gerät transponder, based on the FuG 17 transceiver with additional components to send the response to the Y-Gerät ground station, allowing the ground station to derive range. It also derived the azimuth signal and displayed the results on the AFN 2 display, giving the pilot a left/right command. Operating frequency 24–28 MHz, 8 watts transmit power. The unit also interfaced to the FuG 10 system in the aircraft so that voice communication between the pilot and the ground controllers via the FuG 28a was possible.
Hermine : This system was a VHF radio beacon. It was originally developed in 1942, but due to problems the design was suspended. When, in 1944, the existing radio-navigation systems were either being jammed or under physical attack, the design was revisited. It consisted of a rotating radio beacon transmitting at 30–33 MHz. The signal consisted of a tone and a robot voice, using FM; the robot voice was encoded onto an optical disk. The voice spoke a number between 1 and 35, each number corresponding to 10 degrees of angle from the beacon. The pilot listened to the signal: when the tone disappeared, the next number spoken corresponded to the angle from the beacon. It was expected that this would give an angular resolution of about 5 degrees, but when tested it was found that some pilots could estimate to within 3 degrees. The receiver was a modified EBL 3 which had had its bandwidth increased and been fitted with an FM interface board. This board also connected to the pilot's audio via the FuG 16 to send the audio information to the pilot. In single-seater aircraft the radio fit was numbered FuG 125 . The beacon identifier was transmitted instead of the number 0, which allowed a pilot to select a particular beacon. Between 10 and 20 beacons were commissioned by May 1945. Thirty channels were available, with two more reserved for airfield ILS. Beacons were usually placed 20 km from a runway; the pilot would overfly the beacon and then circle until he acquired the ILS landing beam on the FuBL 2 equipment. Ground units were BS 15 navigation radio beacons constructed from rebuilt FuG 15 sets.
FuG 120/FuG 120k Bernhard : The "Bernhard/Bernhardine" system was a night-fighter/day-fighter radio-navigation system, primarily intended to guide fighters into the bomber streams rather than against individual aircraft. The ground station (FuSAn 724/725) "Bernhard" was a VHF rotating directional-beacon ground station which continuously transmitted the station identifier and the antenna azimuth (bearing) in Hellschreiber format. The FuG 120 "Bernhardine" was the airborne Hellschreiber system that printed the data stream from the selected Bernhard station. The HF receiver for the system was the EBL 3 from the FuBL 2 ILS system. Operating frequency: 30–33.1 MHz; transmitter power: 2 × 500 watts (FuSAn 724) or 2 × 5000 watts (FuSAn 725); antenna rotational speed: 12 degrees per second (2 revolutions per minute); accuracy: initially ±1°, later improved to ±0.5°. The system was initially deployed in 1941/42, but work was then stopped until 1944, when deployment was restarted to try to produce a "jam-proof" system. A later version (deployed at about three sites) alternated between sending angular information and text-message instructions, which allowed a simple form of data link between the fighter-direction stations and the fighters. FuG 120k : This version was developed because the original unit was bulky and expensive; in return for a considerable reduction in size and weight, azimuth accuracy was reduced to approximately 4 degrees. [ 4 ]
FuG 126 : By 1944 the Germans were aware of the operating concepts of the British Rebecca/Eureka system and the Oboe and G-H systems via captured examples. From this they developed the Baldur system, a system of responder beacons working at 2–4 meters wavelength. The airborne equipment, the FuG 126, was based on the SN2 radar. Accuracy was ±100 meters. The system seems to have been deployed only in small numbers, as bomber operations were ceasing due to the air force's concentration on fighters and close air support. A variant called the FuG 126k was produced for single-seat fighter operation (reduced accuracy, ±500 meters), but it seems never to have been used.
Two variants of the system were also designed: Baldur-Truhe and Baldur-Bernhardine, each a combined system. Neither seems to have reached flight trials.
The Luftwaffe operationally deployed three beam-navigation systems during the first part of the war: Knickebein, X-Gerät and Y-Gerät.
For more information see the main page Battle of the Beams
Knickebein : Development of this system started in 1934, based on work done by Lorenz. The initial work was to develop their ILS system, but further work investigated how far a beam of this frequency could be used to guide an aircraft. It was found that by using a combination of a large antenna, a powerful transmitter and maximum elevation of the antenna, ranges far in excess of those expected could be achieved (probably caused by ducting, a little-understood propagation mode at the time). With an antenna at 1000 m above sea level and an aircraft flying at 3000 m, ranges of 400 km could be achieved. The aircraft equipment was the EBL 3 receiver. Frequency range 30–34 MHz.
X-Gerät : The Knickebein system was, even for its time, very crude. As soon as it had proved itself, development of an improved system called X-Gerät was started. This used higher frequencies, 66–70 MHz, to improve resolution and reduce the size of the antenna group, which allowed the system to be mobile (by 1940s standards, not today's). Additionally it used four beams rather than two, and included a system called the X-clock. This allowed much better accuracy; crews often achieved 300 x 300 meter target boxes.
Y-Gerät : This system was developed to use a single beam rather than the two or four of the other systems. The airborne component was the FuG 28 , an FuG 17E with additional transponder systems. Essentially the system transmitted one beam that gave a left/right indication on a pilot display, with range derived using the FuG 28 transponder. The system transmitted in the FuG 17 range of 42.1 to 47.7 MHz.
Y-Control for fighters : Developed from mid-1943 to guide fighters to intercept the bomber streams. The radio equipment was a modified FuG 16.
FuG 124 Komet : In 1942, with the He 177 in service and the "Battle of the Atlantic" in full swing, the Germans started development of a long-range beacon system called Komet, based on pre-war work done by Lorenz. It consisted of a rapidly rotating beam (electronic, not mechanical) transmitting at 3 kW at frequencies between 5 and 12 MHz. The signals were picked up using a FuG 10K receiver and processed by the FuG 124 Komet processor, which printed the results on a paper strip (the Kometschreiber). Two test stations were built in 1944. [ 5 ] Several problems resulted in it never being used: the antenna array was vast, using 127 aerials and 19 control huts; it was discovered that it would be easy to jam; and, as it was now 1944 with German forces falling back on all fronts, there was no longer a requirement for it. The few FuG 124 receivers built were only used on the ground for R&D work. [ 6 ]
FuG 121 Erika : First deployed in 1942, it was used briefly before being replaced by Sonne and Bernhard. Erika transmitted a VHF signal on 30–33 MHz which could be received by standard EBL 3 receivers. The signal was adjusted in phase between a reference point and a navigation point; after processing, the FuG 121 displayed an angle from the beacon. By using two beacons it was possible to achieve a fix. However, this was a problem, as four receivers were required, two listening to each station: on smaller aircraft there was not enough space, and German industry was by now having trouble supplying enough radios to the air force without adding four more receivers per plane. The system was not deployed further. Some sources indicate that there may have been a version called Electra that operated at 250 to 300 kHz, but details are lacking or contradictory. [ 5 ] [ 7 ]
Sonne : This system transmitted on 270–480 kHz and could be received on a FuG 10 . No special receiver was required, as the pattern was discernible by ear; all that was required was the special charts. At least six stations were built, providing coverage from the Bay of Biscay to Norway. Accuracy was reasonable during the day, but errors of up to 4 degrees occurred at night. The Allies captured the charts, which resulted in them being issued to Allied units; because of this the Allies left the Sonne system alone. After the war the stations were rebuilt and operated into the 1970s, by which time the system was called Consol.
Mond : Development work was done on Sonne (sun) to remove the night-time errors; this system was called Mond (moon). The work was never completed.
Truhe : This system was based on the British GEE system. After British units were captured, the Germans set up a project to 'clone' them. The first unit was the FuG 122, which allowed the reception of British GEE signals; units in France received these and were able to navigate using British signals. The Germans then developed the concept to produce FuG 123 receivers with a wider tuning range, which allowed the Germans to set up GEE chains of their own further inside Germany, where the British GEE signals were unusable. There seems to have been some idea of using frequencies very close to the British ones to make jamming by the Allies hard to do without jamming their own GEE system. One chain became operational around Berlin. [ 6 ]
FuBL 1 : Used the Lorenz landing-beam system. It consisted of the EBL 1 and EBL 2 receivers with the AFN 2 display device. The EBL 1 operated between 30 and 33 MHz and received the azimuth signals from a transmitter at the far end of the runway; the EBL 2 operated at 38 MHz and received the two marker beacons as the aircraft approached the threshold to land. The AFN 2 provided the pilot with a left/right display and a signal strength. The pilot could also hear the azimuth signal and the marker beacons in his headset, and when the aircraft passed over the beacons a light was illuminated in the cockpit. [ 3 ]
FuG 125 Hermine : A system designed for night fighters and single-pilot aircraft in night or poor-visibility conditions. It consisted of several subsystems: for navigation it used the "Hermine" VHF radio-beacon signal system via the FuG 16ZY; for approach and landing it used the FuBL 1 or 2 blind-landing receiver; for altitude it used the FuG 101 radio altimeter. Given the pilot workload in a single-pilot aircraft, it also included a simple autopilot. Fitted in some types of Fw 190 and Bf 109. Manufactured in small numbers by Lorenz in 1945. [ 3 ]
FuG 101 : An FM (frequency-modulated) CW (continuous-wave) altimeter. Operating frequency 337–400 MHz (75–89 cm), selectable between two ranges, 0–150 meters and 0–750 meters. Units were small enough to be fitted to single-engine day fighters and night fighters; fitted generally at first, but later in the war only to aircraft expected to operate at night. In larger aircraft it was usually paired with the FuG 102 due to its maximum-height limitation. [ 8 ] [ better source needed ] [ 9 ]
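The FM-CW principle named here derives height from the beat frequency between the transmitted sweep and its delayed ground echo. The sketch below uses the textbook relation for a triangular sweep; the parameter values are illustrative only and are not the FuG 101's actual design figures.

```python
def fmcw_height_m(beat_hz, sweep_hz, mod_rate_hz, c=3.0e8):
    """Height from beat frequency for a triangular FM sweep:
    f_beat = 4 * sweep * mod_rate * h / c, solved for h."""
    return c * beat_hz / (4.0 * sweep_hz * mod_rate_hz)

# Illustrative numbers: a 50 MHz sweep repeated 100 times a second gives
# roughly 67 Hz of beat per metre of height.
print(fmcw_height_m(beat_hz=6.7e3, sweep_hz=50e6, mod_rate_hz=100))  # ~100 m
```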
FuG 102 : A pulse-modulated altimeter. Operating frequency 182 MHz, usable between 100 meters and 15,000 meters. Due to its minimum-height limitation it was usually paired with the FuG 101. Too large to fit in single-engined fighters. [ 9 ]
FuG 103 : A pulse-modulated altimeter; an improved version of the FuG 102 with a reduced minimum-height limitation, so the FuG 101 could be dispensed with. Small numbers were produced in 1945. [ 9 ]
FuG 104 : An improved FuG 103, reduced in size. Development was never completed. [ 9 ]
FuG 25z Zwilling : This was an early IFF set designed to respond to the Würzburg . Both the reception and transmission frequencies were 600 MHz (50 cm). When it responded, the radar operator could hear a Morse character in their headphones. This only worked with the Würzburg radars, not Freya. It could be received at up to 30 km (19 mi).
FuG 25z Häuptling : As experience was gained, it was discovered that with the system above the radar operators were unable to identify which aircraft had responded to the interrogation pulse, as the basic system did not provide range. In an attempt to resolve this, a modification was applied turning the Zwilling into the Häuptling. This retransmitted the received pulse on 160 MHz to a receiver on the radar. However, by the time this modification had been developed, jamming of the Würzburg had commenced and the radar had been modified to work on one of three bands called "islands". As the Häuptling could not cover these bands it was abandoned, and the FuG 25z was replaced by the various versions of the FuG 25a system.
Originally IFF was only considered to be of use with Flak, hence the limitation above. As the war progressed it was realised that IFF should also work with early-warning radars, hence a new version of the FuG 25 was developed.
FuG 25a Erstling : This was an IFF set designed to respond to Freya , Würzburg and the advanced, limited-deployment FuG 404 Jagdschloss system. The reception frequency range was 125 ± 1.8 MHz; the transmitting frequency was 160 MHz. It could be received at up to 100 km (62 mi).
Würzburg radars, as they worked on a different band, required separate equipment to work with the FuG 25a. This was known under the name Kuckuck and consisted of the interrogator transmitter Kur and the receiver Gemse. Dipoles were mounted inside the reflector to transmit and receive. A severe problem was encountered with the width of the resulting beam.
FuG 25a Erstling-Rot : With the introduction of PPI radars such as Jagdschloss, a problem was encountered with the FuG 25a in that the dwell time of the radar was too short for the operator, in many cases, to observe the mark on their screen; earlier radars, which "stared" rather than scanned, did not have this problem. This modification increased the duration of the response signal so that this did not happen.
FuG 25a Erstling-Grün : In anticipation of Allied jamming of the 125/160 MHz IFF frequencies, this modification changed the interrogation wavelength to 2.5 meters and the response to 2 meters. No other changes were made. Never deployed.
FuG 225 Wobbelbiene : This was a development of the FuG 25z to provide a wide-band receiver which would respond to the Würzburg "Island A" and "Island B" frequencies. It was hoped that by doing this the beam-width problems with the FuG 25a would be resolved. However, by the time this was ready for production in 1944, the Flak Würzburg also included "Island C", which could not be received. The unit was therefore never deployed, and further development of the basic FuG 25 was abandoned.
FuG 226 Neuling : Intended to incorporate all the lessons of the preceding systems. The objectives of the design were: (a) to work with all anticipated service radars, i.e. "staring" and PPI; (b) to operate at 6, later 12, frequency pairs to defeat jamming; (c) for the first time, to provide an air-to-air mode. Development was never completed.
FuG 228 Lichtenstein SN-3 : The last-developed version of the Lichtenstein airborne-intercept radar, developed to allow night fighters fitted with it to identify one another. It transmitted and received on the same band (100–156 MHz). It may have been intended for use as some sort of squadron-control system. Never deployed. [ 10 ]
FuG 229 Frischling : With deployment starting on 9 cm band radars such as the Jagdschloss Z , a need for IFF was identified. The Frischling was an add-on unit for either the FuG 25a or the FuG 226 that converted the 9 cm interrogation pulse to a standard 125 MHz pulse, which was then passed to the response unit. Development was not completed.
FuG 243 : By 1944 the Germans were aware of the operating concept of the British Rebecca/Eureka system via captured examples. From this a series of radar beacons was designed to respond to different frequencies and waveforms. The FuG 243 seems to be the only one that entered service, in small numbers in early 1945 with coastal units in Norway. It operated on the low-UHF-band frequencies used by the FuG 200 Hohentwiel ASV airborne radar. [ 10 ] In modern terms it was a type of radar beacon (racon).
As Allied jamming of the fighter voice links became increasingly effective in 1944/45, attempts were made to find other ways of passing information and commands to fighter pilots.
Nachtlicht : The receiver for this was the FuG 25a IFF system. When a ground station interrogated the unit, it flashed a small light to indicate to the pilot that this had happened; the system involved modifying the transmitter so that the light flashed Morse signals, allowing a very primitive way of signalling the pilot. A development of this system included a unit called the Luftkurier, which decoded the Morse and indicated commands on a pointer (left/right). The system was trialled, but it was found to be too hard for pilots to watch the indicator while flying their aircraft. Another issue was that the Luftkurier was found to be very easy to jam. [ 11 ]
FuG 136 Nachtfee : A development of the Nachtlicht system, again using the FuG 25a receiver. This time commands were decoded onto a small CRT, which allowed up to 16 commands to be issued to the fighter. It had the same problems as Nachtlicht: too easy to jam and too hard to use in a single-seater plane. Abandoned. [ 11 ]
FuG 138 Barbara : A further development of the Nachtlicht system. This time an audio receiver was added between the FuG 25a and the FuG 16ZY, allowing the pilot to hear Morse commands sent up the data link. Unusable in practice and abandoned. [ 11 ]
As German pilot training was cut back due to the war situation, it was realised that the above systems would be unusable, as pilots were no longer being trained in Morse. This led to the FuG 120 and FuG 139 systems. [ 11 ]
FuG 139 Barbarossa : This system again used the FuG 25a receiver, but fed it to a Hellschreiber printer. This removed the requirement to read Morse or continuously watch a display. Deployed in small numbers in 1945. An attempt was being made to use pulse modulation to also transmit voice, but this was never completed. [ 11 ]
NS 2 : A single watertight-box transmitter. It operated on the international distress frequency of 500 kHz and was powered by a hand generator. It sent Morse code and had no receiver. Fitted to most German aircraft expected to operate over water at the start of the war. Range 120–250 miles; transmit power 8 watts.
NS 4 : A single watertight-box transmitter, operating on a frequency of 53.5 to 61 MHz and powered by batteries. It sent Morse code and had no receiver. Fitted to most German aircraft expected to operate over water from the middle of the war, replacing the NS 2, and easier to use. Range 6 to 16 miles; transmit power 1 to 2 watts. [ 9 ]
FuG 141 : A receiver for signals from the NS 4 emergency transmitter. Fitted to air-sea rescue units and operated with a direction-finding loop.
FuG 142 : A receiver for MW beacons, battery-powered so it could be used when other power on an aircraft had failed. Not deployed after service tests revealed problems; due to be replaced by the FuG 145 .
FuG 145 : A replacement for the PeilG 6 MF receiver. Development was not completed.
FuG 301 & FuG 310 : Radiosondes, operated suspended from a barrage balloon. Transmit frequency 13.4 MHz. [ 10 ]
FuG 302 : A radio buoy, dropped into the sea to mark a particular location for following aircraft. It initially transmitted at 45 MHz for detection by the FuG 17, and was later modified to operate at 40 MHz for location by the FuG 16. [ 10 ] Used in late 1944 to guide He 111s launching V-1s over the North Sea. [ 11 ]
FuG 303 : An overland version of the FuG 302.
FuG 304 : A distress radio buoy.
FuG 305 : A jammer; details are lacking.
FuG 308 : A radiosonde.
Numerous different radiosonde systems were deployed by the Army, Air Force and Navy.
An example of a ground station was the FuG 502 Mouse . This used a transponder system working at 300 MHz to track the radiosonde and received values from it on 27 MHz. It was mounted in a trailer. [ 10 ]
FuG 23 : A location transmitter installed in some Fieseler Fi 103 (V 1) cruise missiles, transmitting at frequencies between 340 kHz and 3.5 MHz. This allowed the missiles to be tracked. It transmitted two signals, one while the motor was running and a second when it cut off, allowing the impact point to be calculated.
FuG 230 : Radio tracking beacon for various German missiles such as 'Waterfall', 'Enzian' and 'HS 117'. Operated at 600 MHz.
The Luftwaffe was known to fit small aluminum strips, frequently carrying explosive self-destruct charges, to the outside of the equipment's aluminum housings. These explosives were linked by a delay fuse to any sensitive apparatus, allowing it to be destroyed rather than captured by the Allies. | https://en.wikipedia.org/wiki/Luftwaffe_radio_equipment_of_World_War_II
A Lugeon is a unit devised to quantify the water permeability of bedrock and the hydraulic conductivity resulting from fractures; it is named after Maurice Lugeon , a Swiss geologist who first formulated the method in 1933. More specifically, the Lugeon test measures the amount of water injected into a segment of a bored hole under a steady pressure; the Lugeon value is defined as the loss of water in litres per minute per metre of borehole at an over-pressure of 1 MPa .
Although the Lugeon test may serve other purposes, its main object is to determine the Lugeon coefficient, which by definition is the water absorption measured in litres per metre of test stage per minute at a pressure of 10 kg/cm² (1 MN/m²). [ 1 ]
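A minimal sketch of the normalisation this definition implies, assuming the usual linear (laminar) flow-pressure relation; real test interpretation also examines behaviour across stepped pressures:

```python
def lugeon_value(flow_l_per_min, stage_length_m, test_pressure_mpa):
    """Water take normalised to the 1 MPa reference over-pressure,
    assuming flow scales linearly with pressure (laminar regime)."""
    return (flow_l_per_min / stage_length_m) / test_pressure_mpa

# e.g. 15 L/min absorbed over a 5 m test stage at 0.5 MPa -> 6 Lugeons
print(lugeon_value(15.0, 5.0, 0.5))
```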
| https://en.wikipedia.org/wiki/Lugeon
A Luggin capillary (also Luggin probe , Luggin tip , or Luggin-Haber capillary ) is a small tube used in electrochemistry . The capillary defines a clear, well-localised sensing point for the reference electrode close to the working electrode , [ 1 ] [ 2 ] in contrast to the poorly defined sensing region that the physically large reference electrode would present on its own.
| https://en.wikipedia.org/wiki/Luggin_capillary
Lugol's iodine , also known as aqueous iodine and strong iodine solution , is a solution of potassium iodide with iodine in water. [ 2 ] It is a medication and disinfectant used for a number of purposes. [ 3 ] [ 4 ] Taken by mouth it is used to treat thyrotoxicosis until surgery can be carried out, protect the thyroid gland from radioactive iodine , and to treat iodine deficiency . [ 4 ] [ 5 ] When applied to the cervix it is used to help in screening for cervical cancer . [ 6 ] As a disinfectant it may be applied to small wounds such as a needle stick injury . [ 3 ] A small amount may also be used for emergency disinfection of drinking water. [ 7 ]
Side effects may include allergic reactions , headache , vomiting , and conjunctivitis . [ 4 ] [ 1 ] Long term use may result in trouble sleeping and depression . [ 4 ] It should not typically be used during pregnancy or breastfeeding . [ 4 ] Lugol's iodine is a liquid made up of two parts potassium iodide for every one part elemental iodine in water. [ 8 ]
Lugol's iodine was first made in 1829 by the French physician Jean Lugol . [ 7 ] [ 8 ] It is on the World Health Organization's List of Essential Medicines . [ 9 ] [ 10 ] Lugol's iodine is available as a generic medication and over the counter . [ 1 ] Lugol's solution is available in different strengths of iodine. Large volumes of concentrations more than 2.2% may be subject to regulation. [ 11 ]
Preoperative administration of Lugol's solution decreases intraoperative blood loss during thyroidectomy in patients with Graves' disease . [ 12 ] However, it appears ineffective in patients who are already euthyroid on anti-thyroid drugs and levothyroxine . [ 13 ]
Up until the early 1970s, it was often recommended for use in victims of rape in order to avoid pregnancy. The idea stemmed from the fact that, in the laboratory, Lugol's iodine appeared to kill sperm cells even at dilutions as great as 1:32; it was thus thought that an intrauterine application of Lugol's iodine, immediately after the event, would help avoid pregnancy. [ 21 ]
Because it contains free iodine, Lugol's solution at 2% or 5% concentration without dilution is irritating and destructive to mucosa, such as the lining of the esophagus and stomach. Doses of 10 mL of undiluted 5% solution have been reported to cause gastric lesions when used in endoscopy. [ 22 ] The LD50 for 5% Iodine is 14,000 mg/kg (14 g/kg) in rats, and 22,000 mg/kg (22 g/kg) in mice. [ 23 ]
The World Health Organization classifies substances taken orally with an LD50 of 5–50 mg/kg as the second highest toxicity class, Class Ib (Highly Hazardous). [ 24 ] The Global Harmonized System of Classification and Labeling of Chemicals categorizes this as Category 2 with a hazard statement "Fatal if swallowed". [ 25 ] Potassium iodide is not considered hazardous. [ 26 ]
The above uses and effects are consequences of the fact that the solution is a source of effectively free elemental iodine, which is readily generated from the equilibrium between elemental iodine molecules and polyiodide ions in the solution.
It was historically used as a first-line treatment for hyperthyroidism , as the administration of pharmacologic amounts of iodine leads to temporary inhibition of iodine organification in the thyroid gland, caused by phenomena including the Wolff–Chaikoff effect and the Plummer effect . However, it is not used to treat certain autoimmune causes of thyroid disease, as iodine-induced blockade of iodine organification may result in hypothyroidism . It is no longer considered a first-line therapy, because of the possible induction of resistant hyperthyroidism, but may be considered as an adjuvant therapy when used together with other hyperthyroidism medications.
Lugol's iodine has been used traditionally to replenish iodine deficiency. Because of its wide availability as a drinking-water decontaminant and its high content of potassium iodide, its emergency use was initially recommended to the Polish government in 1986, after the Chernobyl disaster , to block the intake of radioactive iodine-131, even though it was known to be a non-optimal agent because of its somewhat toxic free-iodine content. [ 27 ] Other sources state that a pure solution of potassium iodide in water ( SSKI ) was eventually used for most of the thyroid protection after this accident. [ 28 ] There is "strong scientific evidence" that potassium iodide thyroid protection helps prevent thyroid cancer . Potassium iodide does not provide immediate protection but can be a component of a general strategy in a radiation emergency. [ 29 ] [ failed verification ]
Historically, Lugol's iodine solution has been widely available and used for a number of health problems with some precautions. [ 30 ] Lugol's is sometimes prescribed in a variety of alternative medical treatments . [ 31 ] [ 32 ] Only since the end of the Cold War has the compound become subject to national regulation in the English-speaking world. [ citation needed ]
Until 2007, in the United States, Lugol's solution was unregulated and available over the counter as a general reagent , an antiseptic , a preservative , [ 33 ] or as a medicament for human or veterinary application.
Since 1 August 2007, the DEA regulates all iodine solutions containing greater than 2.2% elemental iodine as a List I precursor because they may potentially be used in the illicit production of methamphetamine . [ 11 ] Transactions of up to one fluid ounce (30 ml) of Lugol's solution are exempt from this regulation.
Lugol's solution is commonly available in nominal strengths of 1%, 2%, 5% or 10%. Iodine concentrations greater than 2.2% are subject to US regulations. [ 11 ] [ 35 ] [ 36 ] Taken literally, the 2.2% maximum iodine concentration in those regulations limits Lugol's solution to a nominal strength of about 0.87%, since the nominal percentage counts only the elemental iodine while the regulatory limit counts the total iodine, including that bound in the potassium iodide.
The most commonly used (nominal) 5% solution consists of 5% ( w/v ) iodine ( I2 ) and 10% ( w/v ) potassium iodide (KI) mixed in distilled water, and has a total iodine content of 126.4 mg/mL. The (nominal) 5% solution thus provides 6.32 mg of total iodine per 0.05 mL drop; the (nominal) 2% solution provides 2.53 mg of total iodine per drop.
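These figures can be checked with a short calculation; the sketch below assumes the standard molar masses of potassium and iodine and the 1:2 iodine-to-potassium-iodide recipe described above.

```python
# Check of the figures quoted above, assuming standard molar masses
# of potassium (39.10 g/mol) and iodine (126.90 g/mol).
M_K, M_I = 39.10, 126.90
iodine_fraction_of_KI = M_I / (M_K + M_I)  # ~0.764 of KI's mass is iodine

for nominal in (5, 2):  # nominal strength, % w/v of elemental iodine
    i2 = nominal * 10        # mg/mL of elemental iodine (I2)
    ki = 2 * nominal * 10    # mg/mL of potassium iodide (twice the I2)
    total = i2 + ki * iodine_fraction_of_KI  # mg/mL of total iodine
    print(f"{nominal}%: {total:.1f} mg/mL total iodine, "
          f"{total * 0.05:.2f} mg per 0.05 mL drop")
# Output: 5%: 126.4 mg/mL and 6.32 mg/drop; 2%: 50.6 mg/mL and 2.53 mg/drop,
# matching the quoted values.
```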
Potassium iodide renders the elemental iodine soluble in water through the formation of the triiodide ion (I3−). It is not to be confused with tincture of iodine , which consists of elemental iodine and iodide salts dissolved in water and alcohol. Lugol's solution contains no alcohol.
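In symbols, the solubilizing equilibrium (standard chemistry, not taken from the cited sources) is:

```latex
\mathrm{I_2 + I^{-} \rightleftharpoons I_3^{-}}
```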
Other names for Lugol's solution are I 2 KI (iodine-potassium iodide); Markodine, Strong solution (Systemic); and Aqueous Iodine Solution BP.
In the United Kingdom, in 2015, the NHS paid £9.57 per 500 ml of solution. [ 4 ] | https://en.wikipedia.org/wiki/Lugol's_iodine |
The Luhn algorithm or Luhn formula , also known as the " modulus 10" or "mod 10" algorithm , named after its creator, IBM scientist Hans Peter Luhn , is a simple check digit formula used to validate a variety of identification numbers. It is described in US patent 2950048A, granted on 23 August 1960. [ 1 ]
The algorithm is in the public domain and is in wide use today. It is specified in ISO/IEC 7812-1 . [ 2 ] It is not intended to be a cryptographically secure hash function ; it was designed to protect against accidental errors, not malicious attacks. Most credit card numbers and many government identification numbers use the algorithm as a simple method of distinguishing valid numbers from mistyped or otherwise incorrect numbers.
The check digit is computed as follows:

1. Take the payload digits (the account number without its check digit).
2. Starting from the rightmost payload digit and moving left, double the value of every second digit, beginning with the rightmost.
3. If doubling produces a value greater than 9, subtract 9 from it (equivalently, add together the digits of the product).
4. Sum all the resulting digits.
5. The check digit is the value that raises this sum to the next multiple of 10, i.e. (10 − (sum mod 10)) mod 10.
Assume an example of an account number 1789372997 (just the "payload", check digit not yet included):
The sum of the resulting digits is 56.
The check digit is equal to (10 − (56 mod 10)) mod 10 = 4.
This makes the full account number read 17893729974.
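A short sketch of this computation (Python; the function name is illustrative):

```python
def luhn_check_digit(payload: str) -> int:
    """Compute the Luhn check digit for a payload string of digits."""
    total = 0
    # Walk right to left; the rightmost payload digit is doubled because
    # the check digit will later occupy the final, undoubled position.
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9  # same as summing the digits of the product
        total += d
    return (10 - total % 10) % 10

assert luhn_check_digit("1789372997") == 4  # reproduces the example above
```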
The Luhn algorithm will detect all single-digit errors, as well as almost all transpositions of adjacent digits. It will not, however, detect transposition of the two-digit sequence 09 to 90 (or vice versa). It will detect most of the possible twin errors (it will not detect 22 ↔ 55 , 33 ↔ 66 or 44 ↔ 77 ).
Other, more complex check-digit algorithms (such as the Verhoeff algorithm and the Damm algorithm ) can detect more transcription errors. The Luhn mod N algorithm is an extension that supports non-numerical strings.
Because the algorithm operates on the digits in a right-to-left manner and zero digits affect the result only if they cause a shift in position, zero-padding the beginning of a string of numbers does not affect the calculation. Therefore, systems that pad to a specific number of digits (by converting 1234 to 0001234, for instance) can perform Luhn validation before or after the padding and achieve the same result.
The algorithm appeared in a United States Patent [ 1 ] for a simple, hand-held, mechanical device for computing the checksum. The device took the mod 10 sum by mechanical means. The substitution digits , that is, the results of the double and reduce procedure, were not produced mechanically. Rather, the digits were marked in their permuted order on the body of the machine.
The following function takes a card number, including the check digit, as an array of integers and outputs true if the check digit is correct, false otherwise.
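The code block itself is not reproduced in this copy of the article; a Python equivalent matching that description might look like the following sketch.

```python
def is_luhn_valid(digits: list[int]) -> bool:
    """digits: the full number, check digit included, as a list of integers."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        # Double every second digit, counting from the check digit,
        # which itself is never doubled.
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

assert is_luhn_valid([1, 7, 8, 9, 3, 7, 2, 9, 9, 7, 4])
# Zero-padding does not change the verdict, as noted above:
assert is_luhn_valid([0, 0, 1, 7, 8, 9, 3, 7, 2, 9, 9, 7, 4])
```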
The Luhn algorithm is used in a variety of systems, including: | https://en.wikipedia.org/wiki/Luhn_algorithm |
The Luhn mod N algorithm is an extension to the Luhn algorithm (also known as mod 10 algorithm) that allows it to work with sequences of values in any even-numbered base . This can be useful when a check digit is required to validate an identification string composed of letters, a combination of letters and digits or any arbitrary set of N characters where N is divisible by 2.
The Luhn mod N algorithm generates a check digit (more precisely, a check character) within the same range of valid characters as the input string. For example, if the algorithm is applied to a string of lower-case letters ( a to z ), the check character will also be a lower-case letter. Apart from this distinction, it closely resembles the original algorithm.
The main idea behind the extension is that the full set of valid input characters is mapped to a list of code-points (i.e., sequential integers beginning with zero). The algorithm processes the input string by converting each character to its associated code-point and then performing the computations in mod N (where N is the number of valid input characters). Finally, the resulting check code-point is mapped back to obtain its corresponding check character.
The Luhn mod N algorithm only works where N is divisible by 2, because the operation that corrects a position's value after doubling does not work when N is odd. For applications using the English alphabet this is not a problem: a string of lower-case letters has 26 code-points, and adding the decimal digits contributes a further 10, in both cases keeping N divisible by 2.
The second step in the Luhn algorithm re-packs the doubled value of a position into the original digit's base by adding together the individual digits of the doubled value as written in base N . This step results in even numbers if the doubled value is less than N , and odd numbers if the doubled value is greater than or equal to N . For example, in decimal applications where N is 10, original values between 0 and 4 result in even numbers and original values between 5 and 9 result in odd numbers, effectively re-packing the doubled values between 0 and 18 into a single distinct result between 0 and 9.
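A standalone sketch of this re-packing for N = 10:

```python
N = 10
for original in range(10):
    doubled = 2 * original
    repacked = doubled // N + doubled % N  # digit sum of `doubled` in base N
    print(original, "->", repacked)
# Originals 0-4 give the even results 0, 2, 4, 6, 8;
# originals 5-9 give the odd results 1, 3, 5, 7, 9.
```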
Where an N that is not divisible by 2 is used, this step returns even numbers for doubled values greater than N , which cannot be distinguished from the (also even) results for doubled values less than N .
The algorithm will detect neither all single-digit errors nor all transpositions of adjacent digits if an N that is not divisible by 2 is used. As these detection capabilities are the algorithm's primary strengths, this limitation weakens it almost entirely. The odd variation of the Luhn mod N algorithm enables applications where N is not divisible by 2 by replacing the doubled value at each position with the remainder of the position's value divided by N , which gives odd-number remainders consistent with the original algorithm design.
Initially, a mapping between valid input characters and code-points must be created. For example, consider that the valid characters are the lower-case letters from a to f . A suitable mapping would then be a = 0, b = 1, c = 2, d = 3, e = 4, f = 5.
Note that the order of the characters is completely irrelevant; any other (even scrambled) mapping would also be acceptable, although possibly more cumbersome to implement.
It is also possible to intermix letters and digits (and possibly even other characters). For example, for lower-case hexadecimal digits, the characters 0 to 9 could map to code-points 0 to 9 and a to f to code-points 10 to 15.
Assuming the following functions are defined:
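The article originally presents these in several programming languages; this and the two code blocks below are minimal Python sketches, with names mirroring the prose and the a-to-f alphabet of the running example (both are assumptions, not the article's own code).

```python
ALPHABET = "abcdef"  # illustrative; any set with an even number of characters works

def number_of_valid_input_characters() -> int:
    return len(ALPHABET)

def code_point_from_character(character: str) -> int:
    return ALPHABET.index(character)

def character_from_code_point(code_point: int) -> str:
    return ALPHABET[code_point]
```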
The function to generate a check character is:
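```python
# Python sketch of the generation routine described above.
def generate_check_character(input_string: str) -> str:
    factor = 2  # the rightmost input character is doubled first
    total = 0
    n = number_of_valid_input_characters()
    for character in reversed(input_string):
        addend = factor * code_point_from_character(character)
        factor = 1 if factor == 2 else 2  # alternate the factor at each position
        # Sum the "digits" of the addend as written in base n.
        total += addend // n + addend % n
    # The check code-point raises the total to the next multiple of n.
    return character_from_code_point((n - total % n) % n)
```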
And the function to validate a string (with the check character as the last character) is:
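```python
# Python sketch of the validation routine; the check character occupies the
# final position, so the alternation starts at factor 1 instead of 2.
def validate_check_character(input_string: str) -> bool:
    factor = 1
    total = 0
    n = number_of_valid_input_characters()
    for character in reversed(input_string):
        addend = factor * code_point_from_character(character)
        factor = 1 if factor == 2 else 2
        total += addend // n + addend % n
    return total % n == 0
```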
Consider the above set of valid input characters and the example input string abcdef . To generate the check character, start with the last character in the string and move left, doubling every other code-point. The "digits" of the code-points as written in base 6 (since there are 6 valid input characters) are then summed up.
The total sum of digits is 14 (0 + 2 + 2 + 1 + 4 + 5). The number that must be added to obtain the next multiple of 6 (in this case, 18 ) is 4 . This is the resulting check code-point. The associated check character is e .
The resulting string abcdefe can then be validated by using a similar procedure:
The total sum of digits is 18 . Since it is divisible by 6, the check character is valid .
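With the sketches above (using ALPHABET = "abcdef"), the worked example can be reproduced:

```python
assert generate_check_character("abcdef") == "e"
assert validate_check_character("abcdefe")
```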
The mapping of characters to code-points and back can be implemented in a number of ways. The simplest approach (akin to the original Luhn algorithm) is to use ASCII code arithmetic. For example, given an input set of 0 to 9 , the code-point can be calculated by subtracting the ASCII code for '0' from the ASCII code of the desired character. The reverse operation will provide the reverse mapping. Additional ranges of characters can be dealt with by using conditional statements.
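For a contiguous range such as 0 to 9 , the two mapping helpers could be implemented with character arithmetic, as just described (a sketch):

```python
def code_point_from_character(character: str) -> int:
    # Valid only for the contiguous input set '0' through '9'.
    return ord(character) - ord("0")

def character_from_code_point(code_point: int) -> str:
    return chr(code_point + ord("0"))
```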
Non-sequential sets can be mapped both ways using a hard-coded switch/case statement. A more flexible approach is to use something similar to an associative array . For this to work, a pair of arrays is required to provide the two-way mapping.
An additional possibility is to use an array of characters where the array indexes are the code-points associated with each character. The mapping from character to code-point can then be performed with a linear or binary search. In this case, the reverse mapping is just a simple array lookup.
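Both of these approaches can be sketched in a few lines (names are illustrative):

```python
CHARACTERS = "abcdef"

# Associative-array style: a pair of dictionaries provides the two-way mapping.
TO_CODE_POINT = {ch: i for i, ch in enumerate(CHARACTERS)}
FROM_CODE_POINT = {i: ch for i, ch in enumerate(CHARACTERS)}

# Array-of-characters style: the index is the code-point, so the reverse
# mapping is a plain lookup and the forward mapping is a search.
def code_point(character: str) -> int:
    return CHARACTERS.index(character)  # linear search

def character(cp: int) -> str:
    return CHARACTERS[cp]
```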
This extension shares the same weakness as the original algorithm, namely, it cannot detect the transposition of the sequence <first-valid-character><last-valid-character> to <last-valid-character><first-valid-character> (or vice versa). This is equivalent to the transposition of 09 to 90 (assuming a set of valid input characters from 0 to 9 in order). On a positive note, the larger the set of valid input characters, the smaller the impact of the weakness. | https://en.wikipedia.org/wiki/Luhn_mod_N_algorithm |
Luigi Maria Venanzi was an inorganic chemist who was recognized for diverse contributions to coordination chemistry . He was born in Italy in 1927. [ 1 ]
After receiving his Diplom degree at the University of Kiel , he took a position at ICI Laboratories, where he published extensively with Joseph Chatt . He then proceeded to receive his D.Phil. at Oxford, where he remained as lecturer until 1968. He left England to become professor at SUNY Albany and later the University of Delaware . He then moved to ETH, succeeding Gerold Schwarzenbach . He finished his career in Switzerland, working extensively on platinum phosphine complexes and 31P NMR spectroscopy . [ 2 ] [ 3 ]
A lecture award was created in his memory in 2014. [ 4 ] | https://en.wikipedia.org/wiki/Luigi_M._Venanzi |
Luigi Sacconi was an Italian inorganic chemist who gained renown for contributions to coordination chemistry . He was born on February 28, 1911, in S. Croce sull'Arno and died on September 1, 1992. He received a Doctor of Pharmacy at the University of Florence and served on the faculties of the universities of Parma, Turin, and then Florence. He mentored future influential inorganic chemists including Ivano Bertini , Claudio Bianchini, Fausto Calderazzo , Carlo Floriani , Dante Gatteschi, Carlo Mealli, and Maurizio Peruzzini. Among his many contributions, Sacconi popularized tripodal ligands , which often stabilize pentacoordinate complexes with unusual electronic or chemical properties. [ 2 ]
The Sacconi Medal was instituted to recognize Sacconi's contributions. [ 3 ] He was awarded the Premio Presidente della Repubblica (prize) in 1977. | https://en.wikipedia.org/wiki/Luigi_Sacconi |
In statistics , Lukacs's proportion-sum independence theorem is a result that is used when studying proportions, in particular the Dirichlet distribution . It is named after Eugene Lukacs . [ 1 ]
If Y1 and Y2 are non-degenerate, independent random variables , then the random variables
Y1 + Y2 and Y1 / ( Y1 + Y2 )
are independently distributed if and only if both Y1 and Y2 have gamma distributions with the same scale parameter.
Suppose Yi , i = 1, ..., k , are non-degenerate, independent, positive random variables. Then each of the k − 1 random variables
Yi / ( Y1 + Y2 + ... + Yk )
is independent of
Y1 + Y2 + ... + Yk
if and only if all the Yi have gamma distributions with the same scale parameter. [ 2 ] | https://en.wikipedia.org/wiki/Lukacs's_proportion-sum_independence_theorem |
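A quick numerical illustration of the two-variable case (a sketch assuming NumPy; the shape parameters are arbitrary, but the scale parameters must match):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
y1 = rng.gamma(shape=2.0, scale=3.0, size=100_000)
y2 = rng.gamma(shape=5.0, scale=3.0, size=100_000)  # same scale as y1

total = y1 + y2
proportion = y1 / total
# Independence implies zero correlation, so the sample correlation
# between the sum and the proportion should be close to zero.
print(np.corrcoef(total, proportion)[0, 1])
```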