| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
4,047,259 | https://en.wikipedia.org/wiki/Polite%20number | In number theory, a polite number is a positive integer that can be written as the sum of two or more consecutive positive integers. A positive integer which is not polite is called impolite. The impolite numbers are exactly the powers of two, and the polite numbers are the natural numbers that are not powers of two.
Polite numbers have also been called staircase numbers because the Young diagrams which represent graphically the partitions of a polite number into consecutive integers (in the French notation of drawing these diagrams) resemble staircases. If all numbers in the sum are strictly greater than one, the numbers so formed are also called trapezoidal numbers because they represent patterns of points arranged in a trapezoid.
The problem of representing numbers as sums of consecutive integers and of counting the number of representations of this type has been studied by Sylvester, Mason, Leveque, and many other more recent authors. The polite numbers describe the possible numbers of sides of the Reinhardt polygons.
Examples and characterization
The first few polite numbers are
3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, ... .
The impolite numbers are exactly the powers of two. It follows from the Lambek–Moser theorem that the nth polite number is f(n + 1), where f(n) = n + ⌊log₂(n + log₂ n)⌋.
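A short Python sketch can illustrate this formula (the code and its names are illustrative additions, not part of the article); it cross-checks the Lambek–Moser expression against the characterization of polite numbers as the positive integers that are not powers of two:

```python
import math

def nth_polite(n):
    """nth polite number via the formula quoted above: f(n + 1),
    where f(n) = n + floor(log2(n + log2(n)))."""
    m = n + 1
    return m + math.floor(math.log2(m + math.log2(m)))

# Polite numbers are exactly the positive integers that are not powers
# of two; k & (k - 1) == 0 holds precisely for powers of two (and 1).
not_powers_of_two = [k for k in range(1, 51) if k & (k - 1) != 0]

assert [nth_polite(n) for n in range(1, len(not_powers_of_two) + 1)] == not_powers_of_two
print(not_powers_of_two[:10])  # [3, 5, 6, 7, 9, 10, 11, 12, 13, 14]
```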
Politeness
The politeness of a positive number is defined as the number of ways it can be expressed as the sum of consecutive integers. For every x, the politeness of x equals the number of odd divisors of x that are greater than one.
The politeness of the numbers 1, 2, 3, ... is
0, 0, 1, 0, 1, 1, 1, 0, 2, 1, 1, 1, 1, 1, 3, 0, 1, 2, 1, 1, 3, ... .
For instance, the politeness of 9 is 2 because it has two odd divisors greater than one, namely 3 and 9, and two polite representations
9 = 2 + 3 + 4 = 4 + 5;
the politeness of 15 is 3 because it has three odd divisors greater than one, namely 3, 5, and 15, and (as is familiar to cribbage players) three polite representations
15 = 4 + 5 + 6 = 1 + 2 + 3 + 4 + 5 = 7 + 8.
An easy way of calculating the politeness of a positive number is to decompose the number into its prime factors, take the powers of all prime factors greater than 2, add 1 to each of them, multiply the numbers thus obtained with each other, and subtract 1. For instance, 90 has politeness 5 because 90 = 2 × 3² × 5; the powers of 3 and 5 are respectively 2 and 1, and applying this method gives (2 + 1) × (1 + 1) − 1 = 5.
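To make the procedure concrete, here is a minimal Python sketch (an illustrative addition; the function names are not from the article) implementing both the direct odd-divisor count and the factorization shortcut:

```python
def politeness_direct(n):
    """Politeness of n: the number of odd divisors of n greater than one."""
    return sum(1 for d in range(3, n + 1, 2) if n % d == 0)

def politeness_factored(n):
    """Multiply (exponent + 1) over the odd prime factors of n, minus 1."""
    while n % 2 == 0:      # factors of two contribute no odd divisors
        n //= 2
    result = 1
    p = 3
    while p * p <= n:
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        result *= exponent + 1
        p += 2
    if n > 1:              # one odd prime factor remains, with exponent 1
        result *= 2
    return result - 1

assert politeness_direct(9) == politeness_factored(9) == 2
assert politeness_direct(15) == politeness_factored(15) == 3
assert politeness_direct(90) == politeness_factored(90) == 5
```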
Construction of polite representations from odd divisors
To see the connection between odd divisors and polite representations, suppose a number x has the odd divisor y > 1. Then y consecutive integers centered on x/y (so that their average value is x/y) have x as their sum:

x = (x/y − (y − 1)/2) + ⋯ + (x/y − 1) + (x/y) + (x/y + 1) + ⋯ + (x/y + (y − 1)/2).
Some of the terms in this sum may be zero or negative. However, if a term is zero it can be omitted and any negative terms may be used to cancel positive ones, leading to a polite representation for x. (The requirement that y > 1 corresponds to the requirement that a polite representation have more than one term; applying the same construction for y = 1 would just lead to the trivial one-term representation x = x.)
For instance, the polite number x = 14 has a single nontrivial odd divisor, 7. It is therefore the sum of 7 consecutive numbers centered at 14/7 = 2:
14 = (2 − 3) + (2 − 2) + (2 − 1) + 2 + (2 + 1) + (2 + 2) + (2 + 3).
The first term, −1, cancels a later +1, and the second term, zero, can be omitted, leading to the polite representation
14 = 2 + (2 + 1) + (2 + 2) + (2 + 3) = 2 + 3 + 4 + 5.
Conversely, every polite representation of x can be formed from this construction. If a representation has an odd number of terms, x/y is the middle term, while if it has an even number of terms and its minimum value is m it may be extended in a unique way to a longer sequence with the same sum and an odd number of terms, by including the 2m − 1 numbers −(m − 1), −(m − 2), ..., −1, 0, 1, ..., m − 2, m − 1.
After this extension, again, x/y is the middle term. By this construction, the polite representations of a number and its odd divisors greater than one may be placed into a one-to-one correspondence, giving a bijective proof of the characterization of polite numbers and politeness. More generally, the same idea gives a two-to-one correspondence between, on the one hand, representations as a sum of consecutive integers (allowing zero, negative numbers, and single-term representations) and on the other hand odd divisors (including 1).
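The construction described above is mechanical enough to state in a few lines of code. The following Python sketch (an illustrative addition; the names are hypothetical) builds the polite representation of x corresponding to a given odd divisor y > 1, by centering y consecutive integers on x // y and then cancelling the non-positive terms:

```python
def polite_representation(x, y):
    """Polite representation of x from an odd divisor y > 1 of x."""
    assert y > 1 and y % 2 == 1 and x % y == 0
    center = x // y
    half = (y - 1) // 2
    terms = list(range(center - half, center + half + 1))
    # Drop a leading zero and cancel each negative term against its
    # positive mirror image, as described in the text.
    while terms[0] <= 0:
        t = terms.pop(0)
        if t < 0:
            terms.remove(-t)
    return terms

assert polite_representation(14, 7) == [2, 3, 4, 5]
assert polite_representation(9, 9) == [4, 5]
assert sum(polite_representation(9, 3)) == 9
```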
Another generalization of this result states that, for any n, the number of partitions of n into odd numbers having k distinct values equals the number of partitions of n into distinct numbers having k maximal runs of consecutive numbers.
Here a run is one or more consecutive values such that the next larger and the next smaller consecutive values are not part of the partition; for instance the partition 10 = 1 + 4 + 5 has two runs, 1 and 4 + 5.
A polite representation has a single run, and a partition with one value d is equivalent to a factorization of n as the product d ⋅ (n/d), so the special case k = 1 of this result states again the equivalence between polite representations and odd factors (including in this case the trivial representation n = n and the trivial odd factor 1).
Trapezoidal numbers
If a polite representation starts with 1, the number so represented is a triangular number, T_n = 1 + 2 + ⋯ + n = n(n + 1)/2.
Otherwise, it is the difference of two nonconsecutive triangular numbers, T_n − T_m = (m + 1) + (m + 2) + ⋯ + n (with n > m + 1).
This second case is called a trapezoidal number. One can also consider polite numbers that are not trapezoidal. The only such numbers are the triangular numbers with only one nontrivial odd divisor, because for those numbers, according to the bijection described earlier, the odd divisor corresponds to the triangular representation and there can be no other polite representations. Thus, a non-trapezoidal polite number must have the form of a power of two multiplied by an odd prime. As Jones and Lord observe, there are exactly two types of triangular numbers with this form:
the even perfect numbers 2^(n − 1)(2^n − 1) formed by the product of a Mersenne prime 2^n − 1 with half the nearest power of two, and
the products 2^(n − 1)(2^n + 1) of a Fermat prime 2^n + 1 with half the nearest power of two.
For instance, the perfect number 28 = 2^(3 − 1)(2^3 − 1) and the number 136 = 2^(4 − 1)(2^4 + 1) are both examples of this type of polite number. It is conjectured that there are infinitely many Mersenne primes, in which case there are also infinitely many polite numbers of this type.
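Under this characterization, the non-trapezoidal polite numbers can also be found by brute force, as the polite numbers whose only representation starts at 1. A small Python sketch (an illustrative addition; the search bound is arbitrary):

```python
def representations(x):
    """All polite representations of x: runs of two or more
    consecutive positive integers summing to x."""
    reps = []
    for start in range(1, x):
        total, term = 0, start
        while total < x:
            total += term
            term += 1
        if total == x and term - start >= 2:
            reps.append(list(range(start, term)))
    return reps

non_trapezoidal = []
for x in range(3, 201):
    reps = representations(x)
    if len(reps) == 1 and reps[0][0] == 1:
        non_trapezoidal.append(x)

# 28 = 2^(3-1) * (2^3 - 1) and 136 = 2^(4-1) * (2^4 + 1) both appear.
print(non_trapezoidal)  # [3, 6, 10, 28, 136]
```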
References
External links
Introducing Runsums, R. Knott.
Is there any pattern to the set of trapezoidal numbers? Intellectualism.org question of the day, October 2, 2003. With a diagram showing trapezoidal numbers color-coded by the number of terms in their expansions.
Additive number theory
Figurate numbers
Integer sequences
Quadrilaterals | Polite number | [
"Mathematics"
] | 1,689 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Figurate numbers",
"Numbers",
"Number theory"
] |
4,047,274 | https://en.wikipedia.org/wiki/Gold-containing%20drugs | Gold-containing drugs are pharmaceuticals that contain gold. Sometimes these species are referred to as "gold salts". "Chrysotherapy" and "aurotherapy" are the applications of gold compounds to medicine. Research on the medicinal effects of gold began in 1935, primarily to reduce inflammation and to slow disease progression in patients with rheumatoid arthritis. The use of gold compounds has decreased since the 1980s because of numerous side effects and monitoring requirements, limited efficacy, and very slow onset of action. Most chemical compounds of gold, including some of the drugs discussed below, are not salts, but are examples of metal thiolate complexes.
Use in rheumatoid arthritis
Investigation of medical applications of gold began at the end of the 19th century, when gold cyanide demonstrated efficacy in treating Mycobacterium tuberculosis in vitro.
Indications
The use of injected gold compounds is indicated for rheumatoid arthritis. Their use has diminished with the advent of newer compounds such as methotrexate and because of numerous side effects. The efficacy of orally administered gold is more limited than that of injected gold compounds.
Mechanism in arthritis
The mechanism by which gold drugs affect arthritis is unknown.
Administration
Gold-containing drugs for rheumatoid arthritis are administered by intramuscular injection but can also be administered orally (although the efficacy is low). Regular urine tests to check for protein, indicating kidney damage, and blood tests are required.
Efficacy
A 1997 review by Suarez-Almazor et al. reports that treatment with intramuscular gold (parenteral gold) reduces disease activity and joint inflammation. Gold-containing drugs taken by mouth are less effective than those given by injection. Three to six months are often required before gold treatment noticeably improves symptoms.
Side effects
Chrysiasis
A noticeable side-effect of gold-based therapy is skin discoloration, in shades of mauve to a purplish dark grey when exposed to sunlight. Skin discoloration occurs when gold salts are taken on a regular basis over a long period of time. Excessive intake of gold salts while undergoing chrysotherapy results – through complex redox processes – in the saturation by relatively stable gold compounds of skin tissue and organs (as well as teeth and ocular tissue in extreme cases) in a condition known as chrysiasis. This condition is similar to argyria, which is caused by exposure to silver salts and colloidal silver. Chrysiasis can ultimately lead to acute kidney injury (such as tubular necrosis, nephrosis, glomerulitis), severe heart conditions, and hematologic complications (leukopenia, anemia). While some effects can be healed with moderate success, the skin discoloration is considered permanent.
Other side effects
Other side effects of gold-containing drugs include kidney damage, itching rash, and ulcerations of the mouth, tongue, and pharynx. Approximately 35% of patients discontinue the use of gold salts because of these side effects. Kidney function must be monitored continuously while taking gold compounds.
Types
Disodium aurothiomalate
Sodium aurothiosulfate (Gold sodium thiosulfate)
Sodium aurothiomalate (Gold sodium thiomalate) (UK)
Auranofin (UK & US)
Aurothioglucose (Gold thioglucose) (US)
References
External links
"Gold salts for juvenile rheumatoid arthritis". BCHealthGuide.org
"Gold salts information". DiseasesDatabase.com
"HMS researchers find how gold fights arthritis: Sheds light on how medicinal metal function against rheumatoid arthritis and other autoimmune diseases." Harvard University Gazette (2006)
"Aurothioglucose is a gold salt used in treating inflammatory arthritis". MedicineNet.com
"About gold treatment: What is it? Gold treatment includes different forms of gold salts used to treat arthritis." Washington.edu University of Washington (December 30, 2004)
Gold compounds
Hepatotoxins
Antirheumatic products
Coordination complexes
Nephrotoxins | Gold-containing drugs | [
"Chemistry"
] | 854 | [
"Coordination chemistry",
"Coordination complexes"
] |
4,047,871 | https://en.wikipedia.org/wiki/Hexachlorobenzene | Hexachlorobenzene, or perchlorobenzene, is an aryl chloride and a six-substituted chlorobenzene with the molecular formula C6Cl6. It is a fungicide formerly used as a seed treatment, especially on wheat to control the fungal disease bunt. Its use has been banned globally under the Stockholm Convention on Persistent Organic Pollutants.
Physical and chemical properties
Hexachlorobenzene is a stable, white, crystalline chlorinated hydrocarbon. It is sparingly soluble in organic solvents such as benzene, diethyl ether and alcohol, but practically insoluble in water with no reaction. It has a flash point of 468 °F and it is stable under normal temperatures and pressures. It is combustible but it does not ignite readily. When heated to decomposition, hexachlorobenzene emits highly toxic fumes of hydrochloric acid, other chlorinated compounds (such as phosgene), carbon monoxide, and carbon dioxide.
History
Hexachlorobenzene was first known as "Julin's chloride of carbon", as it was discovered as a strange and unexpected product of impurities reacting in Julin's nitric acid factory. In 1864, Hugo Müller synthesised the compound by the reaction of benzene and antimony pentachloride, and then suggested that his compound was the same as Julin's chloride of carbon. Müller, who had previously believed it was the same compound as Michael Faraday's "perchloride of carbon" (hexachloroethane), obtained a small sample of Julin's chloride of carbon to send to Richard Phillips and Faraday for investigation. In 1867, Henry Bassett proved that the compound produced from benzene and antimony pentachloride was the same as Julin's carbon chloride and named it "hexachlorobenzene".
Leopold Gmelin named it "dichloride of carbon" and claimed that the carbon was derived from cast iron and the chlorine was from crude saltpetre.
Victor Regnault obtained hexachlorobenzene from the decomposition of chloroform and tetrachloroethylene vapours through a red-hot tube.
Synthesis
Large-scale manufacture for use as a fungicide was developed by using the residue remaining after purification of the mixture of isomers of hexachlorocyclohexane, from which the insecticide lindane (the γ-isomer) had been removed, leaving the unwanted α- and β- isomers. This mixture is produced when benzene is reacted with chlorine in the presence of ultraviolet light (e.g. from sunlight). However, manufacture is no longer practiced following the compound's ban.
Hexachlorobenzene has been made on a laboratory scale since the 1890s, by the electrophilic aromatic substitution reaction of chlorine with benzene or chlorobenzenes. A typical catalyst is ferric chloride. Much milder reagents than chlorine (e.g. dichlorine monoxide, iodine in chlorosulfonic acid) also suffice, and the various hexachlorocyclohexanes can substitute for benzene as well.
Usage
Hexachlorobenzene was used in agriculture to control the fungus Tilletia caries (common bunt of wheat). It is also effective against Tilletia controversa, the cause of dwarf bunt. The compound was introduced in 1947, normally formulated as a seed dressing, but is now banned in many countries.
A minor industrial phloroglucinol synthesis nucleophilically substitutes hexachlorobenzene with alkoxides, followed by acidic workup.
Environmental considerations
In the 1970s, HCB was produced at a level of 100,000 tons per year. Since then usage has declined steadily, with production falling to 23–90 tons per year by the mid-1990s. The half-life in the soil is estimated to be 9 years. The mechanism of its toxicity and other adverse effects remains under study.
Safety
Hexachlorobenzene can react violently with dimethylformamide, particularly in the presence of catalytic transition-metal salts.
Toxicology
Oral LD50 (rat): 10,000 mg/kg
Oral LD50 (mice): 4,000 mg/kg
Inhalation LC50 (rat): 3,600 mg/m3
The material has relatively low acute toxicity but is toxic because of its persistent and cumulative nature in body tissues rich in lipid content.
Hexachlorobenzene is an animal carcinogen and is considered to be a probable human carcinogen. After its introduction as a fungicide in 1945, for crop seeds, this toxic chemical was found in all food types. Hexachlorobenzene was banned from use in the United States in 1966.
This material has been classified by the International Agency for Research on Cancer (IARC) as a Group 2B carcinogen (possibly carcinogenic to humans). Animal carcinogenicity data for hexachlorobenzene show increased incidences of liver, kidney (renal tubular tumours) and thyroid cancers. Chronic oral exposure in humans has been shown to give rise to a liver disease (porphyria cutanea tarda), skin lesions with discoloration, ulceration, photosensitivity, thyroid effects, bone effects and loss of hair. Neurological changes have been reported in rodents exposed to hexachlorobenzene. Hexachlorobenzene may cause embryolethality and teratogenic effects. Human and animal studies have demonstrated that hexachlorobenzene crosses the placenta to accumulate in foetal tissues and is transferred in breast milk.
HCB is very toxic to aquatic organisms. It may cause long term adverse effects in the aquatic environment. Therefore, release into waterways should be avoided. It is persistent in the environment. Ecological investigations have found that biomagnification up the food chain does occur. Hexachlorobenzene has a half life in the soil of between 3 and 6 years. Risk of bioaccumulation in an aquatic species is high.
Anatolian porphyria
In Anatolia, Turkey between 1955 and 1959, during a period when bread wheat was unavailable, 500 people were fatally poisoned and more than 4,000 people fell ill after eating bread made with HCB-treated seed that was intended for agricultural use. Most of the sick were affected with a liver condition called porphyria cutanea tarda, which disturbs the metabolism of hemoglobin and results in skin lesions. Almost all breastfeeding children under the age of two, whose mothers had eaten tainted bread, died from a condition called "pembe yara" or "pink sore", most likely from high doses of HCB in the breast milk. In one mother's breast milk the HCB level was found to be 20 parts per million in lipid, approximately 2,000 times the average levels of contamination found in breast-milk samples around the world. Follow-up studies 20 to 30 years after the poisoning found average HCB levels in breast milk were still more than seven times the average for unexposed women in that part of the world (in 56 specimens of human milk obtained from mothers with porphyria, the average value was 0.51 ppm, compared to 0.07 ppm in unexposed controls), and 150 times the level allowed in cow's milk.
In the same follow-up study of 252 patients (162 males and 90 females, average age 35.7 years), 20–30 years postexposure, many subjects had dermatologic, neurologic, and orthopedic symptoms and signs. The observed clinical findings included scarring of the face and hands (83.7%), hyperpigmentation (65%), hypertrichosis (44.8%), pinched faces (40.1%), painless arthritis (70.2%), small hands (66.6%), sensory shading (60.6%), myotonia (37.9%), cogwheeling (41.9%), enlarged thyroid (34.9%), and enlarged liver (4.8%). Urine and stool porphyrin levels were determined in all patients, and 17 had at least one of the porphyrins elevated. Offspring of mothers with three decades of HCB-induced porphyria appear normal.
See also
Chlorobenzenes—different numbers of chlorine substituents
Pentachlorobenzenethiol
References
Cited works
Additional references
International Agency for Research on Cancer. In: IARC Monographs on the Evaluation of Carcinogenic Risk to Humans. World Health Organization, Vol. 79, 2001, pp. 493–567.
Registry of Toxic Effects of Chemical Substances. Ed. D. Sweet, US Dept. of Health & Human Services: Cincinnati, 2005.
Environmental Health Criteria No. 195; International Programme on Chemical Safety, World Health Organization, Geneva, 1997.
Toxicological Profile for Hexachlorobenzene (Update), US Dept of Health & Human Services, Sept 2002.
Merck Index, 11th Edition, 4600
External links
Obsolete pesticides
Chlorobenzenes
Endocrine disruptors
Fungicides
Hazardous air pollutants
IARC Group 2B carcinogens
Persistent organic pollutants under the Stockholm Convention
Suspected teratogens
Suspected embryotoxicants
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution
Perchlorocarbons | Hexachlorobenzene | [
"Chemistry",
"Biology"
] | 2,013 | [
"Fungicides",
"Persistent organic pollutants under the Stockholm Convention",
"Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution",
"Endocrine disruptors",
"Biocides"
] |
4,047,952 | https://en.wikipedia.org/wiki/Mirex | Mirex is an organochloride that was commercialized as an insecticide and later banned because of its impact on the environment. This white crystalline odorless solid is a derivative of both cyclopentadiene and cubane. It was popularized to control fire ants but by virtue of chemical robustness and lipophilicity it was recognized as a bioaccumulative pollutant. The spread of the red imported fire ant was encouraged by the use of mirex, which also kills native ants that are highly competitive with the fire ants. The United States Environmental Protection Agency prohibited its use in 1976. It is prohibited by the Stockholm Convention on Persistent Organic Pollutants.
Production and applications
Mirex was first synthesized in 1946, but was not used in pesticide formulations until 1955. Mirex was produced by the dimerization of hexachlorocyclopentadiene in the presence of aluminium chloride.
Mirex is a stomach insecticide, meaning that it must be ingested by the organism in order to poison it. Its insecticidal use was focused on the Southeastern United States to control fire ants. Approximately 250,000 kg of mirex were applied to fields between 1962 and 1975 (US NRC, 1978). Most of the mirex was in the form of "4X mirex bait", which consists of 0.3% mirex in 14.7% soybean oil mixed with 85% corncob grits. Application of the 4X bait was designed to give a coverage of 4.2 g mirex/ha and was delivered by aircraft, helicopter or tractor. 1X and 2X baits were also used. Use of mirex as a pesticide was banned in 1978. The Stockholm Convention banned production and use of several persistent organic pollutants, and mirex is one of the "dirty dozen".
Degradation
Much like other perchlorocarbons such as carbon tetrachloride, mirex does not burn easily; pyrolysis products are expected to include carbon dioxide, carbon monoxide, hydrogen chloride, chlorine, phosgene, and possibly other organochlorine species. Slow oxidation of mirex can be used to produce chlordecone ("Kepone"), a related insecticide that is also banned in most of the western world, but is more readily biodegraded. Sunlight degrades mirex to photomirex (8-monohydromirex) and 2,8-dihydromirex.
Mirex is highly resistant to microbiological degradation. It only slowly dechlorinates to a monohydro derivative by anaerobic microbial action in sewage sludge and by enteric bacteria. Degradation by soil microorganisms has not been described.
Bioaccumulation and biomagnification
Mirex is highly cumulative, and the amount accumulated depends upon the concentration and duration of exposure. There is evidence of accumulation of mirex in aquatic and terrestrial food chains to harmful levels. After 6 applications of mirex bait at 1.4 kg/ha, high mirex levels were found in some species; turtle fat contained 24.8 mg mirex/kg, kingfishers 1.9 mg/kg, coyote fat 6 mg/kg, opossum fat 9.5 mg/kg, and raccoon fat 73.9 mg/kg. In a model ecosystem with a terrestrial-aquatic interface, sorghum seedlings were treated with mirex at 1.1 kg/ha. Caterpillars fed on these seedlings and their faeces contaminated the water, which contained algae, snails, Daphnia, mosquito larvae, and fish. After 33 days, the ecological magnification value was 219 for fish and 1165 for snails.
Although general environmental levels are low, it is widespread in the biotic and abiotic environment. Being lipophilic, mirex is strongly adsorbed on sediments.
Safety
Mirex is only moderately toxic in single-dose animal studies (oral LD50 values range from 365 to 3,000 mg/kg body weight). It can enter the body via inhalation, ingestion, and via the skin. The most sensitive effects of repeated exposure in animals are principally associated with the liver, and these effects have been observed with doses as low as 1.0 mg/kg in the diet (0.05 mg/kg body weight per day), the lowest dose tested. At higher dose levels, it is fetotoxic (25 mg/kg in diet) and teratogenic (6.0 mg/kg per day). Mirex was not generally active in short-term tests for genetic activity. There is sufficient evidence of its carcinogenicity in mice and rats. Delayed onset of toxic effects and mortality is typical of mirex poisoning. Mirex is toxic to a range of aquatic organisms, with crustacea being particularly sensitive.
Mirex induces pervasive chronic physiological and biochemical disorders in various vertebrates. No acceptable daily intake (ADI) for mirex has been advised by FAO/WHO. IARC (1979) evaluated mirex's carcinogenic hazard and concluded that "there is sufficient evidence for its carcinogenicity to mice and rats. In the absence of adequate data in humans, based on above result it can be said, that it has carcinogenic risk to humans". Data on human health effects do not exist.
Health effects
Per a 1995 ATSDR report, mirex caused fatty changes in the liver, hyperexcitability and convulsions, and inhibition of reproduction in animals. It is a potent endocrine disruptor, interfering with estrogen-mediated functions such as ovulation, pregnancy, and endometrial growth. It also induced liver cancer by interaction with estrogen in female rodents.
References
Further reading
International Organization for the Management of Chemicals (IOMC), 1995, POPs Assessment Report, December.1995.
Lambrych KL, and JP Hassett. Wavelength-Dependent Photoreactivity of Mirex in Lake Ontario. Environ. Sci. Technol. 2006, 40, 858-863
Mirex Health and Safety Guide. IPCS International Program on Chemical Safety. Health and Safety Guide No.39. 1990
Toxicological Review of Mirex. In support of summary information on the Integrated Risk Information System (IRIS) 2003. U.S. Environmental Protection Agency, Washington DC.
Obsolete pesticides
Organochloride insecticides
IARC Group 2B carcinogens
Endocrine disruptors
Persistent organic pollutants under the Stockholm Convention
Fetotoxicants
Teratogens
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution
Cyclobutanes
Perchlorocarbons | Mirex | [
"Chemistry"
] | 1,394 | [
"Endocrine disruptors",
"Persistent organic pollutants under the Stockholm Convention",
"Teratogens",
"Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution"
] |
4,047,997 | https://en.wikipedia.org/wiki/Clorgiline | Clorgiline (INN), or clorgyline (BAN), is a monoamine oxidase inhibitor (MAOI) structurally related to pargyline which is described as an antidepressant. Specifically, it is an irreversible and selective inhibitor of monoamine oxidase A (MAO-A). Clorgiline was never marketed, but it has found use in scientific research. It has been found to bind with high affinity to the σ1 receptor (Ki = 3.2 nM) and with very high affinity to the I2 imidazoline receptor (Ki = 40 pM).
Unlike selegiline, clorgiline does not appear to be a monoaminergic activity enhancer (MAE).
Clorgiline is also a multidrug efflux pump inhibitor. Holmes et al. (2012) reversed azole fungicide resistance using clorgiline, showing promise for its use against multiple fungicide resistance.
References
Abandoned drugs
Propargyl compounds
Amines
Chloroarenes
Monoamine oxidase inhibitors
Phenol ethers
Sigma agonists | Clorgiline | [
"Chemistry"
] | 234 | [
"Drug safety",
"Functional groups",
"Amines",
"Bases (chemistry)",
"Abandoned drugs"
] |
4,048,014 | https://en.wikipedia.org/wiki/Monoamine%20oxidase%20A | Monoamine oxidase A, also known as MAO-A, is an enzyme (E.C. 1.4.3.4) that in humans is encoded by the MAOA gene. This gene is one of two neighboring gene family members that encode mitochondrial enzymes which catalyze the oxidative deamination of amines, such as dopamine, norepinephrine, and serotonin. A mutation of this gene results in Brunner syndrome. This gene has also been associated with a variety of other psychiatric disorders, including antisocial behavior. Alternatively spliced transcript variants encoding multiple isoforms have been observed.
Structures
Gene
Monoamine oxidase A, also known as MAO-A, is an enzyme that in humans is encoded by the MAOA gene.
The promoter of MAOA contains conserved binding sites for Sp1, GATA2, and TBP. This gene is adjacent to a related gene (MAOB) on the opposite strand of the X chromosome.
In humans, there is a 30-base repeat sequence repeated several different numbers of times in the promoter region of MAO-A. There are 2R (two repeats), 3R, 3.5R, 4R, and 5R variants of the repeat sequence, with the 3R and 4R variants most common in all populations. The variants of the promoter have been found to appear at different frequencies in different ethnic groups in an American sample cohort.
The epigenetic modification of MAOA gene expression through methylation likely plays an important role in women. A study from 2010 found epigenetic methylation of MAOA in men to be very low and with little variability compared to women, while having higher heritability in men than women.
Protein
MAO-A shares 70% amino acid sequence identity with its homologue MAO-B. Accordingly, both proteins have similar structures. Both MAO-A and MAO-B exhibit an N-terminal domain that binds flavin adenine dinucleotide (FAD), a central domain that binds the amine substrate, and a C-terminal α-helix that is inserted in the outer mitochondrial membrane. MAO-A has a slightly larger substrate-binding cavity than MAO-B, which may be the cause of slight differences in catalytic activity between the two enzymes, as shown in quantitative structure-activity relationship experiments. Both enzymes are relatively large, about 60 kilodaltons in size, and are believed to function as dimers in living cells.
Function
Monoamine oxidase A catalyzes O2-dependent oxidation of primary arylalkyl amines, most importantly neurotransmitters such as dopamine and serotonin. This is the initial step in the breakdown of these molecules. The products are the corresponding aldehyde, hydrogen peroxide, and ammonia:
R-Amine + O2 + H2O → R-Aldehyde + H2O2 + NH3
This reaction is believed to occur in three steps, using FAD as an electron-transferring cofactor. First, the amine is oxidized to the corresponding imine, with reduction of FAD to FADH2. Second, O2 accepts two electrons and two protons from FADH2, forming H2O2 and regenerating FAD. Third, the imine is hydrolyzed by water, forming ammonia and the aldehyde.
Compared to MAO-B, MAO-A has a higher specificity for serotonin and norepinephrine, while the two enzymes have similar affinity for dopamine and tyramine.
MAO-A is a key regulator for normal brain function. In the brain, the highest levels of transcription occur in the brain stem, hypothalamus, amygdala, habenula, and nucleus accumbens, and the lowest in the thalamus, spinal cord, pituitary gland, and cerebellum. Its expression is regulated by the transcription factors SP1, GATA2, and TBP via cAMP-dependent regulation. MAO-A is also expressed in cardiomyocytes, where it is induced in response to stress such as ischemia and inflammation.
Clinical significance
Cancer
The MAOA gene encodes an amine oxidase, a class of enzyme known to affect carcinogenesis. Clorgyline, an MAO-A enzyme inhibitor, prevents apoptosis in melanoma cells in vitro. Cholangiocarcinoma suppresses MAO-A expression, and those patients with higher MAO-A expression had less adjacent organ invasion and better prognosis and survival.
Cardiovascular disease
MAOA activity is linked to apoptosis and cardiac damage during cardiac injury following ischemic-reperfusion.
Behavioral and neurological disorders
There is some association between low activity forms of the MAOA gene and autism. Mutations in the MAOA gene result in monoamine oxidase deficiency, or Brunner syndrome. Other disorders associated with MAO-A include Alzheimer's disease, aggression, panic disorder, bipolar disorder, major depressive disorder, and attention deficit hyperactivity disorder. Effects of parenting on self-regulation in adolescents appear to be moderated by 'plasticity alleles', of which the 2R and 3R alleles of MAOA are two, with "the more plasticity alleles males (but not females) carried, the more and less self-regulation they manifested under, respectively, supportive and unsupportive parenting conditions."
Depression
MAO-A levels in the brain as measured using positron emission tomography are elevated by an average of 34% in patients with major depressive disorder. Genetic association studies examining the relationship between high-activity MAOA variants and depression have produced mixed results, with some studies linking the high-activity variants to major depression in females, depressed suicide in males, major depression and sleep disturbance in males and major depressive disorder in both males and females.
Other studies failed to find a significant relationship between high-activity variants of the MAOA gene and major depressive disorder. In patients with major depressive disorder, those with MAOA G/T polymorphisms (rs6323) coding for the highest-activity form of the enzyme have a significantly lower magnitude of placebo response than those with other genotypes.
Antisocial behavior
In humans, an association between the 2R allele of the VNTR region of the gene and an increase in the likelihood of committing serious crime or violence has been found. The VNTR 2R allele of MAOA has been found to be a risk factor for violent delinquency, when present in association with stresses, i.e. family issues, low popularity or failing school.
A connection between the MAO-A gene 3R version and several types of anti-social behaviour has been found: Maltreated children with genes causing high levels of MAO-A were less likely to develop antisocial behavior. Low MAO-A activity alleles, which are overwhelmingly the 3R allele, in combination with abuse experienced during childhood resulted in an increased risk of aggressive behaviour as an adult, and men with the low activity MAOA allele were more genetically vulnerable even to punitive discipline as a predictor of antisocial behaviour. High testosterone, maternal tobacco smoking during pregnancy, poor material living standards, dropping out of school, and low IQ predicted violent behavior in men with the low-activity alleles. According to a large meta-analysis in 2014, the 3R allele had a small, nonsignificant effect on aggression and antisocial behavior, in the absence of other interaction factors. Owing to methodological concerns, the authors do not view this as evidence in favor of an effect.
The MAO-A gene was the first candidate gene for antisocial behavior and was identified during a "molecular genetic analysis of a large, multigenerational, and notoriously violent, Dutch kindred". A study of Finnish prisoners revealed that a MAOA-L (low-activity) genotype, which contributes to low dopamine turnover rate, was associated with extremely violent behavior. For the purpose of the study, "extremely violent behavior" was defined as at least ten committed homicides, attempted homicides or batteries.
However, a large genome-wide association study has failed to find any large or statistically significant effects of the MAOA gene on aggression. A separate GWAS on antisocial personality disorder likewise did not report a significant effect of MAOA. Another study, while finding effects from a candidate gene search, failed to find any evidence in a large GWAS. A separate analysis of human and rat genome-wide association studies, Mendelian randomization studies, and causal pathway analyses likewise failed to reveal robust evidence of MAOA in aggression. This lack of replication is predicted from the known issues of candidate gene research, which can produce many substantial false positives.
Aggression and the "Warrior gene"
Low-activity variants of the VNTR promoter region of the MAO-A gene have been referred to as the warrior gene. When faced with social exclusion or ostracism, individuals with the low activity MAO-A variants showed higher levels of aggression than individuals with the high activity MAO-A gene. Low activity MAO-A could significantly predict aggressive behaviour in a high provocation situation: Individuals with the low activity variant of the MAO-A gene were more likely (75% as opposed to 62%, out of a sample size of 70) to retaliate, and with greater force, as compared to those with a normal MAO-A variant if the perceived loss was large.
The effects of MAOA genes on aggression have also been criticized for being heavily overstated. Indeed, the MAOA gene, even in conjunction with childhood adversity, is known to have a very small effect. The vast majority of people with the associated alleles have not committed any violent acts.
Legal implications
In a 2009 criminal trial in the United States, an argument based on a combination of "warrior gene" and history of child abuse was successfully used to avoid a conviction of first-degree murder and the death penalty; however, the convicted murderer was sentenced to 32 years in prison. In a second case, an individual was convicted of second-degree murder, rather than first-degree murder, based on a genetic test that revealed he had the low-activity MAOA variant. Judges in Germany are more likely to sentence offenders to involuntary psychiatric hospitalization on hearing an accused's MAOA-L genotype.
Epigenetics
Studies have linked methylation of the MAOA gene with nicotine and alcohol dependence in women. A second MAOA VNTR promoter, P2, influences epigenetic methylation and interacts with having experienced child abuse to influence antisocial personality disorder symptoms, only in women. A study of 34 non-smoking men found that methylation of the gene may alter its expression in the brain.
Animal studies
A dysfunctional MAOA gene has been correlated with increased aggression levels in both mice and humans. In mice, a dysfunctional MAOA gene is created through insertional mutagenesis (called 'Tg8'). Tg8 is a transgenic mouse strain that lacks functional MAO-A enzymatic activity. Mice that lacked a functional MAOA gene exhibited increased aggression towards intruder mice.
Some types of aggression exhibited by these mice were territorial aggression, predatory aggression, and isolation-induced aggression. The MAO-A deficient mice that exhibited increased isolation-induced aggression reveals that an MAO-A deficiency may also contribute to a disruption in social interactions. There is research in both humans and mice to support that a nonsense point mutation in the eighth exon of the MAOA gene is responsible for impulsive aggressiveness due to a complete MAO-A deficiency.
Interactions
Transcription factors
A number of transcription factors bind to the promoter region of MAO-A and upregulate its expression. These include: the Sp1 transcription factor, GATA2, and TBP.
Inducers
Synthetic compounds that up-regulate the expression of MAO-A include valproic acid (Depakote).
Inhibitors
Substances that inhibit the enzymatic activity of MAO-A include:
Synthetic compounds
Befloxatone (MD370503)
Brofaromine (Consonar)
Cimoxatone
Clorgyline (irreversible)
Methylene Blue
Minaprine (Cantor)
Moclobemide (Aurorix, Manerix)
Phenelzine (Nardil)
Pirlindole (Pirazidol)
Toloxatone (Humoryl)
Tyrima (CX 157)
Tranylcypromine (nonselective and irreversible)
Natural products (herbal sources)
Incarviatone A (Incarvillea delavayi)
Garlic
β-Carboline alkaloids (Syrian Rue, Passion Flower, Tobacco smoke, Ayahuasca)
Harmine
Harmaline
Isoquinoline alkaloids
Piperine (Black pepper)
Rosiridin (in vitro)
See also
Monoamine oxidase B
Monoamine oxidase inhibitor - a class of antidepressant drugs that block or inactivate one or both MAO isoforms
References
Further reading
External links
Aggression
Criminology
EC 1.4.3
Human proteins
| Monoamine oxidase A | [
"Biology"
] | 2,736 | [
"Behavior",
"Aggression",
"Human behavior"
] |
4,048,455 | https://en.wikipedia.org/wiki/Mechanically%20interlocked%20molecular%20architectures | In chemistry, mechanically interlocked molecular architectures (MIMAs) are molecules that are connected as a consequence of their topology. This connection of molecules is analogous to keys on a keychain loop. The keys are not directly connected to the keychain loop but they cannot be separated without breaking the loop. On the molecular level, the interlocked molecules cannot be separated without the breaking of the covalent bonds that comprise the conjoined molecules; this is referred to as a mechanical bond. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, and molecular Borromean rings. Work in this area was recognized with the 2016 Nobel Prize in Chemistry to Bernard L. Feringa, Jean-Pierre Sauvage, and J. Fraser Stoddart.
The synthesis of such entangled architectures has been made efficient by combining supramolecular chemistry with traditional covalent synthesis, however mechanically interlocked molecular architectures have properties that differ from both "supramolecular assemblies" and "covalently bonded molecules". The terminology "mechanical bond" has been coined to describe the connection between the components of mechanically interlocked molecular architectures. Although research into mechanically interlocked molecular architectures is primarily focused on artificial compounds, many examples have been found in biological systems including: cystine knots, cyclotides or lasso-peptides such as microcin J25 which are proteins, and a variety of peptides.
Residual topology
Residual topology is a descriptive stereochemical term to classify a number of intertwined and interlocked molecules, which cannot be disentangled in an experiment without breaking of covalent bonds, while the strict rules of mathematical topology allow such a disentanglement. Examples of such molecules are rotaxanes, catenanes with covalently linked rings (so-called pretzelanes), and open knots (pseudoknots) which are abundant in proteins.
The term "residual topology" was suggested on account of a striking similarity of these compounds to the well-established topologically nontrivial species, such as catenanes and knotanes (molecular knots). The idea of residual topological isomerism introduces a handy scheme of modifying the molecular graphs and generalizes former efforts of systemization of mechanically bound and bridged molecules.
History
Experimentally the first examples of mechanically interlocked molecular architectures appeared in the 1960s, with catenanes being synthesized by Wasserman and Schill and rotaxanes by Harrison and Harrison. The chemistry of MIMAs came of age when Sauvage pioneered their synthesis using templating methods. In the early 1990s the usefulness and even the existence of MIMAs were challenged. The latter concern was addressed by the X-ray crystallographer and structural chemist David Williams. Two postdoctoral researchers who took on the challenge of producing [5]catenane (olympiadane) pushed the boundaries of the complexity of MIMAs that could be synthesized; their success was confirmed in 1996 by a solid-state structure analysis conducted by David Williams.
Mechanical bonding and chemical reactivity
The introduction of a mechanical bond alters the chemistry of the sub components of rotaxanes and catenanes. Steric hindrance of reactive functionalities is increased and the strength of non-covalent interactions between the components are altered.
Mechanical bonding effects on non-covalent interactions
The strength of non-covalent interactions in a mechanically interlocked molecular architecture increases as compared to the non-mechanically bonded analogues. This increased strength is demonstrated by the necessity of harsher conditions to remove a metal template ion from catenanes as opposed to their non-mechanically bonded analogues. This effect is referred to as the "catenand effect". The augmented non-covalent interactions in interlocked systems compared to non-interlocked systems has found utility in the strong and selective binding of a range of charged species, enabling the development of interlocked systems for the extraction of a range of salts. This increase in strength of non-covalent interactions is attributed to the loss of degrees of freedom upon the formation of a mechanical bond. The increase in strength of non-covalent interactions is more pronounced on smaller interlocked systems, where more degrees of freedom are lost, as compared to larger mechanically interlocked systems where the change in degrees of freedom is lower. Therefore, if the ring in a rotaxane is made smaller the strength of non-covalent interactions increases, the same effect is observed if the thread is made smaller as well.
Mechanical bonding effects on chemical reactivity
The mechanical bond can reduce the kinetic reactivity of the products; this is ascribed to the increased steric hindrance. Because of this effect, hydrogenation of an alkene on the thread of a rotaxane is significantly slower than that of the equivalent non-interlocked thread. This effect has allowed for the isolation of otherwise reactive intermediates.
The ability to alter reactivity without altering covalent structure has led to MIMAs being investigated for a number of technological applications.
Applications of mechanical bonding in controlling chemical reactivity
The ability for a mechanical bond to reduce reactivity and hence prevent unwanted reactions has been exploited in a number of areas. One of the earliest applications was in the protection of organic dyes from environmental degradation.
Examples
Olympiadane
References
Further reading
Supramolecular chemistry
Molecular topology | Mechanically interlocked molecular architectures | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,104 | [
"Molecular topology",
"Topology",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
4,048,755 | https://en.wikipedia.org/wiki/Hubert%20Schoemaker | Hubert Jacob Paul Schoemaker (March 23, 1950 – January 1, 2006) was a Dutch biotechnologist. He was a co-founder and the president of one of America's first biotechnology companies, Centocor, which was founded in 1979 for the commercialising of monoclonal antibodies. In 1999 he founded Neuronyx, Inc., for the manufacture of stem cells and the development of stem-cell therapies.
Early life and education
Schoemaker was born in Deventer, Netherlands. He attended St. Bernardus School in Deventer, and Canisius College, Nijmegen. In 1969 he moved to the United States to attend the University of Notre Dame, where he majored in chemistry, graduating in May 1972. Soon after he married Ann Postorino.
He then earned a doctorate in biochemistry in 1975 from the Massachusetts Institute of Technology. Supervised by Paul Schimmel, his doctoral research was an investigation of the structure function relationships of transfer RNAs and their complexes.
Career
After declining postdoctoral research positions with Stanley Cohen and Klaus Weber, Schoemaker chose to work as a research scientist in industry.
His choice was influenced by the severe disabilities suffered by his first daughter, Maureen, who was born with lissencephaly and needed specialised care. This inspired Schoemaker to become involved in commercial biotechnology.
In 1976 Schoemaker joined Corning Medical, a Boston-based division of Corning Glass Works. At Corning Schoemaker rapidly progressed from being a specialist in immunoassay development for diagnostics to heading research and development. Among his achievements at the company was devising effective diagnostic kit tests for thyroid disorders.
In 1979 Schoemaker became involved in the founding of Centocor together with a former Corning Medical colleague Ted Allen and the bioentrepreneur Michael Wall with whom he had some dealings while at Corning. Inspired by the work of Hilary Koprowski, who developed some of the earliest monoclonal antibodies against tumour antigens and influenza viral antigens, the objective of Centocor was to commercialise monoclonal antibodies for diagnostics and therapeutics. In 1980 Schoemaker joined Centocor and soon after became its first chief executive officer.
From the start Centocor decided to fill its product pipeline through partnerships with research institutions and marketing alliances. Central to this policy were Schoemaker's ability to network and the company's decision to design diagnostic kits so that they were compatible with existing diagnostic systems. Under Schoemaker's leadership Centocor rapidly grew into a profitable diagnostic business. By 1985 the company had revenues of approximately $50 million. In part this success was built upon the swift approval the company won for two of its tests. The first was a test for gastrointestinal cancer and the other was for hepatitis B. Between 1983 and 1986 Centocor introduced three other diagnostic tests to the market: one for ovarian cancer (the first diagnostic test available for the disease), one for breast cancer and one for colorectal cancer.
Despite the company's success on the diagnostic front, Schoemaker was plunged in 1992 into efforts to save the company from bankruptcy when its first therapeutic, Centoxin, a drug designed to treat septic shock, failed to win FDA approval. In part the crisis had come about as a result of the company's executives trying to go it alone in developing the drug. What saved the company was a return to the policy of collaboration. Learning from its mistakes with Centoxin, in December 1994 Centocor gained marketing approval for ReoPro, a monoclonal antibody drug for cardiovascular disease. The first therapeutic to ever receive simultaneous US and European approvals, and the second monoclonal antibody to ever win approval as a drug, ReoPro marked a milestone for both Centocor and for monoclonal antibodies therapeutics. ReoPro was to be followed in August 1998 by the approval of Centocor's Remicade, a drug to treat auto-immune disorders like Crohn's disease and rheumatoid arthritis.
After selling Centocor to Johnson and Johnson in 1999, Schoemaker went on to form Neuronyx, Inc., a biotech company focused on developing cellular therapies. After Schoemaker died in 2006 the company was continued by his wife, Anne Faulkner Schoemaker. Initial work focused on using stem cells taken from adult bone marrow to help regenerate heart tissue damaged during heart attacks. Later the company shifted direction to the development of a treatment for incision wounds in women following breast cancer reconstruction surgery. The company subsequently changed its name to Garnet BioTherapeutics. Despite promising clinical results and raising more than $55 million in venture capital funding, the company was unable to continue.
Death
Schoemaker was diagnosed in 1994 with a form of brain cancer, medulloblastoma.
He died on January 1, 2006, at age 55.
References
External links
Hubert Schoemaker on WhatisBiotechnology.org
1950 births
2006 deaths
American biochemists
20th-century American businesspeople
Dutch businesspeople
Biotechnologists
American immunologists
People from Deventer
Dutch emigrants to the United States
Massachusetts Institute of Technology School of Science alumni
University of Notre Dame alumni
Deaths from brain cancer in Pennsylvania
Dutch immunologists | Hubert Schoemaker | [
"Biology"
] | 1,095 | [
"Biotechnologists"
] |
4,049,070 | https://en.wikipedia.org/wiki/Swing-piston%20engine | A swing-piston engine is a type of internal combustion engine in which the pistons move in a circular motion inside a ring-shaped "cylinder", moving closer and further from each other to provide compression and expansion. Generally two sets of pistons are used, geared to move in a fixed relationship as they rotate around the cylinder. In some versions the pistons oscillate around a fixed center, as opposed to rotating around the entire engine. The design has also been referred to as a oscillating piston engine, vibratory engine when the pistons oscillate instead of rotate, or toroidal engine based on the shape of the "cylinder".
Many swing-piston engines have been proposed, but none have been successful. Two attempts from around 2010 are the prototype American-made MYT engine and the prototype Russian ORE for use in the Yo-Mobile hybrid car. Both claimed high fuel efficiency and a high power-to-weight ratio, but there have been no successful demonstrations of the claimed efficiency, nor any demonstration that the engines are durable enough for practical use.
Steam engines
Swing-piston engines were initially introduced during the 1820s as alternate steam engine designs, prior to the widespread introduction of the steam turbine. In these examples the "piston" is typically not cylindrical as in a modern internal combustion design, and is generally rectangular in cross-section as seen from the top, rotating in a flat disk "cylinder". From the side they are either flat plates or pie-wedge shaped. The term "swing-piston" is not entirely accurate in these cases, but the operating cycle is identical and is properly considered here.
The first known example was introduced by Elijah Galloway in 1829 for ship propulsion. It featured a single vane rotating through 270 degrees. It appears this version was never built, although a model still exists in the Science Museum. Galloway also designed a wide variety of pure rotary engines using vanes as well.
A more serious attempt was the "Cambrian System" of John Jones in 1841. This design used two or three flat plates that were geared to move closer or further apart as the cycle continued. When the plates were at their closest point, steam was admitted between them using a valve, pushing them apart as the cycle continued. When the plates reached their maximum distance, an internal passage was uncovered that allowed the partially expanded steam to flow across the center of the device into the area on the other side of the vanes, which were now at their minimum distance. In this fashion the design was effectively a compound engine.
Many variations followed, and a number of these saw limited use in the field. Notable among them was John Ericsson's design of 1843, which powered the USS Princeton, the United States' first screw-powered steamship. Charles Parsons examined the concept and appears to have produced two swing-piston engine designs before moving on to the steam turbine. The Roots brothers designed a swing-piston engine of a unique type, although they are better known for their supercharger design.
Internal combustion
It is unclear whether or not any internal combustion swing-piston engine has ever reached production, but the closest attempt appears to be the German World War II-era design by Lutz. His design had six pistons in total, three each attached to two disks. The disks were geared to each other to form six chambers between the pistons, such that at any one time one set of three chambers was "close together" while the other set of three was "wide apart", varying between those two extremes as the disks rotated. The timing was arranged such that the chambers reached their "close together" point over the spark plug, and their "wide apart" point over the intake and exhaust ports. This action is similar to the Wankel engine, the primary difference being that the Wankel creates compression and expansion via the shape of the engine and rotor, rather than the relative motion of the pistons.
Lutz's engine was being designed as an experimental gas generator for a new type of aircraft engine, one that replaced a traditional centrifugal or axial compressor with his swing-piston design. Ultimately the exhaust would be used to drive a turbine, that power being used to drive a propeller to produce a turboprop. For this role the exhaust gas was too hot to be used directly in a turbine, given the materials available at the time, so the engine had a second "exhaust port" that vented cold pressurized air, which was then mixed into the hot exhaust. For direct power use, as opposed to driving a turbine, this "third area" of the engine could simply be left open to the air to avoid losing power by unnecessary compression.
The initial test engines had some minor problems, notably with sealing, but these were worked through and the engines were under test during 1944. One particularly good feature of swing-piston engines is that they can be bolted back to back along a common crank shaft to make a larger engine, and with each additional stage the running becomes smoother and the only part that needs to be made larger is the crankshaft. A similar arrangement with a radial engine is generally more difficult to arrange, especially cooling, and ones with inline engine arrangements soon become so long that keeping the crankshaft from vibrating becomes a serious problem (see Chrysler IV-2220 for example).
Each "cylinder" from Lutz's design was 0.70 m in diameter and only about 30 cm in depth, providing 445 hp from 140 kg, an excellent power-to-weight ratio compared even to jet engines of the era. A five-block version was proposed for his turboprop concept, providing 3,450 hp from an engine about 2 m long. While the power-to-weight ratio was good, the density of the engine was simply superb.
The overall turboprop looked much more like a jet engine than a piston one. The swing-piston gas generator was located in the middle of a long nacelle, with a five-stage axial compressor in front and a three-stage turbine behind. The compressor was used both to act as a supercharger for the piston engine, as well as provide cold air to cool the turbine. The actual power to the propeller, combining both the pistons and the turbines, was 4,930 hp at 10,000 m altitude, far greater than any German wartime project.
Why all this complexity to produce a new version of an engine, the turboprop, whose primary advantage was simplicity? The main problem with conventional jet engines is that the combustion takes place in an open chamber, which is considerably less efficient than the closed chamber of a piston engine, where it has constant volume (or close to it). The Otto cycle or Diesel cycle used in piston engines has a much lower specific fuel consumption than the Brayton cycle of traditional gas turbine engines at low speed. Lutz's design was intended to power very long-range bombers and patrol aircraft, where fuel economy was more important than simplicity and performance.
Lutz later patented the design under "Rotary compressor and other engines", United States patent 2,301,667.
Other examples
Lutz's design is not the only way to produce such an engine: BMW experimented with a traditional engine with poppet valves on the combustion chambers, which had been used a number of times previously in experiments. Another approach entirely is to recover some of the heat of the exhaust in a heat exchanger and use that instead of fuel to heat the compressed air, a concept used by General Motors in a series of automobile turbines. Generally, however, improvements in the basic piston engine in "low-power" roles have kept any of these advanced designs out of the marketplace.
In the 1990s, a number of inventors re-introduced the concept as if it were new. Examples include Angel Labs' "Massive Yet Tiny" engine, the Rotoblock, the Roundengine, the Trochilic Engine, and designs by Tschudi and Hoose. In 2009, Russian billionaire industrialist Mikhail Prokhorov announced his plans to enter an automotive business with a series of a lightweight hybrid vehicles using this design as their prime mover. Another recent introduction aimed at the hybrid market is the "Hüttlin Kugelmotor", which combines the swing-piston concept with a modified swashplate to produce a spherical design that directly powers an internal electrical generator. The Taurozzi pendulum engine has cylinders that follow the shape of a toroid, but is otherwise similar to a conventional reciprocating engine.
Other names
The Tschudi engine is also known as a "cat-and-mouse engine" or a "scissor engine".
See also
Free-piston engine
Opposed piston engine
Oscillating cylinder steam engine
References
Notes
Bibliography
External links
Youtube.com
Toroidal engine from Franky Devaere
Piston engine configurations
Proposed engines | Swing-piston engine | [
"Technology"
] | 1,783 | [
"Proposed engines",
"Engines"
] |
4,049,082 | https://en.wikipedia.org/wiki/List%20of%20artificial%20objects%20on%20extraterrestrial%20surfaces | This is a partial list of artificial objects left on extraterrestrial surfaces.
Artificial objects on Venus
Artificial objects on the Moon
Artificial objects on Mars
Artificial objects on other extraterrestrial bodies
Estimated total masses of objects
Gallery
See also
Sample return mission and Moon rock
List of archaeological sites beyond national boundaries
List of landings on extraterrestrial bodies
Deliberate crash landings on extraterrestrial bodies
List of extraterrestrial orbiters
List of artificial objects leaving the Solar System
External links
Comprehensive listing of space artifacts by Philippe Pinczon du Sel (spaceartefacts.com)
References
Spaceflight
Outer space lists
Extra-terrestrial | List of artificial objects on extraterrestrial surfaces | [
"Astronomy"
] | 127 | [
"Spaceflight",
"Outer space",
"Outer space lists"
] |
4,049,168 | https://en.wikipedia.org/wiki/Glass%20cloth | Glass cloth is a textile material woven from glass fiber yarn.
Home and garden
Glass cloth was originally developed to be used in greenhouse paneling, allowing sunlight's ultraviolet rays to be filtered out, while still allowing visible light through to plants.
Glass cloth is also a term for a type of tea towel suited for polishing glass. The cloth is usually woven with the plain weave, and may be patterned in various ways, though checked cloths are the most common. The original cloth was made from linen, but a large quantity is made with cotton warp and tow weft, and in some cases they are composed entirely of cotton. Short fibres of the cheaper kind are easily detached from the cloth.
In the Southern Plains during the Dust Bowl, states' health officials recommended attaching translucent glass cloth to the inside frames of windows to help in keeping the dust out of buildings, although people also used paperboard, canvas or blankets. Eyewitness accounts indicate they were not completely successful.
Use in technology
Given the properties of glass - in particular, its heat resistance and inability to ignite - glass is often used to create fire barriers in hazardous environments, such as those inside racecars.
Due to its poor flexibility and ability to cause skin irritation, glass fibers are typically inadequate for use in apparel. However, the bi-directional strength of glass cloth has found utility in some fiberglass reinforced plastics. The Rutan VariEze homebuilt aircraft uses a moldless glass-cloth/epoxy composite, which acts as a protective skin.
Glass cloth is also commonly used as a reinforcing lattice for pre-pregs.
See also
G-10 (material)
Glass fiber
References
Woven fabrics
Linens
Fiberglass
Composite materials
Fibre-reinforced polymers
Glass applications | Glass cloth | [
"Physics",
"Chemistry",
"Materials_science"
] | 355 | [
"Composite materials",
"Fiberglass",
"Materials",
"Polymer chemistry",
"Matter"
] |
4,049,625 | https://en.wikipedia.org/wiki/Noise%20barrier | A noise barrier (also called a soundwall, noise wall, sound berm, sound barrier, or acoustical barrier) is an exterior structure designed to protect inhabitants of sensitive land use areas from noise pollution. Noise barriers are the most effective method of mitigating roadway, railway, and industrial noise sources –
other than cessation of the source activity or use of source controls.
In the case of surface transportation noise, other methods of reducing the source noise intensity include encouraging the use of hybrid and electric vehicles, improving automobile aerodynamics and tire design, and choosing low-noise paving material. Extensive use of noise barriers began in the United States after noise regulations were introduced in the early 1970s.
History
Noise barriers have been built in the United States since the mid-twentieth century, when vehicular traffic burgeoned. I-680 in Milpitas, California was the first noise barrier. In the late 1960s, analytic acoustical technology emerged to mathematically evaluate the efficacy of a noise barrier design adjacent to a specific roadway. By the 1990s, noise barriers that included use of transparent materials were being designed in Denmark and other western European countries.
The best of these early computer models considered the effects of roadway geometry, topography, vehicle volumes, vehicle speeds, truck mix, road surface type, and micro-meteorology. Several U.S. research groups developed variations of the computer modeling techniques: Caltrans Headquarters in Sacramento, California; the ESL Inc. group in Sunnyvale, California; the Bolt, Beranek and Newman group in Cambridge, Massachusetts, and a research team at the University of Florida. Possibly the earliest published work that scientifically designed a specific noise barrier was the study for the Foothill Expressway in Los Altos, California.
Numerous case studies across the U.S. soon addressed dozens of different existing and planned highways. Most were commissioned by state highway departments and conducted by one of the four research groups mentioned above. The U.S. National Environmental Policy Act, enacted in 1970, effectively mandated the quantitative analysis of noise pollution from every Federal-Aid Highway Act Project in the country, propelling noise barrier model development and application. With passage of the Noise Control Act of 1972, demand for noise barrier design soared from a host of noise regulation spinoff.
By the late 1970s, more than a dozen research groups in the U.S. were applying similar computer modeling technology and addressing at least 200 different locations for noise barriers each year. , this technology is considered a standard in the evaluation of noise pollution from highways. The nature and accuracy of the computer models used is nearly identical to the original 1970s versions of the technology.
Small and purposeful gaps exist in most noise barriers to allow firefighters to access nearby fire hydrants and pull through fire hoses, which are usually denoted by a sign indicating the nearest cross street, and a pictogram of a fire hydrant, though some hydrant gaps channel the hoses through small culvert channels beneath the wall.
Design
The acoustical science of noise barrier design is based upon treating an airway or railway as a line source. The theory is based upon blockage of sound ray travel toward a particular receptor; however, diffraction of sound must be addressed. Sound waves bend (downward) when they pass an edge, such as the apex of a noise barrier. Barriers that block line of sight of a highway or other source will therefore block more sound. Further complicating matters is the phenomenon of refraction, the bending of sound rays in the presence of an inhomogeneous atmosphere. Wind shear and thermocline produce such inhomogeneities. The sound sources modeled must include engine noise, tire noise, and aerodynamic noise, all of which vary by vehicle type and speed.
The noise barrier may be constructed on private land, on a public right-of-way, or on other public land. Because sound levels are measured using a logarithmic scale, a reduction of nine decibels is equivalent to elimination of approximately 86 percent of the unwanted sound power.
Materials
Several different materials may be used for sound barriers, including masonry, earthwork (such as earth berm), steel, concrete, wood, plastics, insulating wool, or composites. Walls that are made of absorptive material mitigate sound differently than hard surfaces. It is also possible to make noise barriers with active materials such as solar photovoltaic panels to generate electricity while also reducing traffic noise.
A wall with porous surface material and sound-dampening content material can be absorptive where little or no noise is reflected back towards the source or elsewhere. Hard surfaces such as masonry or concrete are considered to be reflective where most of the noise is reflected back towards the noise source and beyond.
Noise barriers can be effective tools for noise pollution abatement, but certain locations and topographies are not suitable for use of noise barriers. Cost and aesthetics also play a role in the choice of noise barriers. In some cases, a roadway is surrounded by a noise abatement structure or dug into a tunnel using the cut-and-cover method.
Disadvantages
Potential disadvantages of noise barriers include:
Blocked vision for motorists and rail passengers. Glass elements in noise screens can reduce visual obstruction, but require regular cleaning
Aesthetic impact on land- and townscape
An expanded target for graffiti, unsanctioned guerilla advertising, and vandalism
Creation of spaces hidden from view and social control (e.g. at railway stations)
Possibility of bird–window collisions for large and clear barriers
Effects on air pollution
Roadside noise barriers have been shown to reduce the near-road air pollution concentration levels. Within 15–50 m from the roadside, air pollution concentration levels at the lee side of the noise barriers may be reduced by up to 50% compared to open road values.
Noise barriers force the pollution plumes coming from the road to move up and over the barrier creating the effect of an elevated source and enhancing vertical dispersion of the plume. The deceleration and the deflection of the initial flow by the noise barrier force the plume to disperse horizontally. A highly turbulent shear zone characterized by slow velocities and a re-circulation cavity is created in the lee of the barrier which further enhances the dispersion; this mixes ambient air with the pollutants downwind behind the barrier.
See also
Health effects from noise
Noise control
Safety barrier
Soundproofing
References
External links
Environmental engineering
Noise pollution
Noise control
Road infrastructure
Acoustics
Sound
1970s introductions | Noise barrier | [
"Physics",
"Chemistry",
"Engineering"
] | 1,322 | [
"Chemical engineering",
"Classical mechanics",
"Acoustics",
"Civil engineering",
"Environmental engineering"
] |
4,050,188 | https://en.wikipedia.org/wiki/Red%20yeast%20rice | Red yeast rice or red rice koji is a bright reddish purple fermented rice, which acquires its color from being cultivated with the mold Monascus purpureus. Red yeast rice is what is referred to as a kōji in Japanese, meaning "grain or bean overgrown with a mold culture", a food preparation tradition going back to ca. 300 BC.
In addition to its culinary use, red yeast rice is also used in Chinese herbology and Traditional Chinese medicine, possibly during the Tang dynasty around AD 800. Red yeast rice is described in the Chinese pharmacopoeia Ben Cao Gang Mu by Li Shizhen.
A modern-era use as a dietary supplement developed in the late 1970s after researchers were isolating lovastatin from Aspergillus and monacolins from Monascus, the latter being the same fungus used to make red yeast rice. Chemical analysis soon showed that lovastatin and monacolin K were identical. Lovastatin became the patented prescription drug Mevacor. Red yeast rice went on to become a non-prescription dietary supplement in the United States and other countries. In 1998, the U.S. Food and Drug Administration (FDA) initiated action to ban a dietary supplement containing red yeast rice extract, stating that red yeast rice products containing monacolin K are identical to a prescription drug, and thus subject to regulation as a drug.
Terminology
Red yeast rice is also known as red fermented rice, red kojic rice or red koji rice from its Japanese name, and anka or angkak from Southern Min pronunciations of its Chinese name. In both the scientific and popular literature in English that draws principally on Japanese traditional use, red yeast rice is most often referred to as "red rice koji". English language articles favoring Chinese literature sources prefer the translation "red yeast rice".
Production
Red yeast rice is produced by cultivating the mold species Monascus purpureus on rice for 3–6 days at room temperature. The rice grains turn bright red at the core and reddish purple on the outside. The fully cultured rice is then either sold as the dried grain, or cooked and pasteurized to be sold as a wet paste, or dried and pulverized to be sold as a fine powder. China is the world's largest producer of red yeast rice, but European companies have entered the market.
Uses
Culinary
Red yeast rice is used to color a wide variety of food products, including fermented tofu, red rice vinegar, char siu, Peking duck, and Chinese pastries that require red food coloring.
In China, documentation dates back to at least the first century AD. It is also traditionally used in the production of several types of Chinese huangjiu (Shaoxing jiu), and Japanese sake (akaisake), imparting a reddish color to these wines. It was called a "koji" in Japanese, meaning "grain or bean overgrown with a mold culture".
The lees left over from wine production, known as hóngzāo (), can be used as flavoring, imparting a subtle but pleasant taste to food. The lees are particularly commonly used in Fujian cuisine, where they are used for dishes like Fujian red wine chicken, a celebratory dish associated with birthdays and Chinese New Year.
Red yeast rice (angkak in Filipino) is also used widely in the Philippines to traditionally color and preserve certain dishes like fermented shrimp (bagoong alamang), burong isda (fermented rice and fish), and balao-balao (fermented rice and shrimp).
Traditional Chinese medicine
In addition to its culinary use, red yeast rice is also used in Chinese herbology and traditional Chinese medicine. Medicinal use of red yeast rice is described in the Chinese pharmacopoeia Ben Cao Gang Mu compiled by Li Shizhen ca. 1590. Recommendations were to take it internally to invigorate the body, aid in digestion, and revitalize the blood. One reference provided the Li Shizhen health claims as a quotation "...the effect of promoting the circulation of blood and releasing stasis, invigorating the spleen, and eliminating [in]digestion."
Red yeast rice and statin drugs
In the late 1970s, researchers in the United States and Japan were isolating lovastatin from Aspergillus and monacolins from Monascus, the latter being the same fungus used to make red yeast rice (RYR) when cultured under carefully controlled conditions. Chemical analysis soon showed that lovastatin and monacolin K are identical chemical compounds. The two isolations, documentations, and patent applications occurred months apart. Lovastatin became the patented, prescription drug Mevacor. Red yeast rice went on to become a non-prescription dietary supplement in the United States and other countries.
Lovastatin and other prescription statin drugs inhibit cholesterol synthesis by blocking action of the enzyme HMG-CoA reductase. As a consequence, circulating total cholesterol and LDL-cholesterol are lowered by 24–49% depending on the statin and dose. Different strains of Monascus fungus will produce different amounts of monacolins. The 'Went' strain of Monascus purpureus (purpureus=dark red in Latin), when properly fermented and processed, will yield a dried red yeast rice powder that is approximately 0.4% monacolins, of which roughly half will be monacolin K (chemically identical to lovastatin).
U.S. regulatory restrictions
The US Food and Drug Administration (FDA) position is that red yeast rice products that contain monacolin K are identical to a prescription drug and, thus, subject to regulation as a drug. In 1998, the FDA initiated action to ban a product (Cholestin) containing red yeast rice extract. The U.S. District Court in Utah ruled in favor of allowing the product to be sold without restriction. This decision was reversed on appeal to the U.S. Court of Appeals in 2001. In 2007, the FDA sent warning letters to two dietary supplement companies. One was making a monacolin content claim about its RYR product and the other was not, but the FDA noted that both products contained monacolins. Both products were withdrawn. In a press release the FDA "...is warning consumers to not buy or eat red yeast rice products... may contain an unauthorized drug that could be harmful to health." The rationale for "harmful to health" was that consumers might not understand that the dangers of monacolin-containing red yeast rice are the same as those of prescription statin drugs.
A products analysis report from 2010 tested 12 products commercially available in the U.S. and reported that per 600 mg capsule, total monacolins content ranged from 0.31 to 11.15 mg. A 2017 study tested 28 brands of red yeast rice supplements purchased from U.S. retailers, stating "the quantity of monacolin K varied from none to prescription strength". Many of these avoid FDA regulation by not having any appreciable monacolin content. Their labels and websites say no more than "fermented according to traditional Asian methods" or "similar to that used in culinary applications". The labeling on these products often says nothing about cholesterol lowering. If products do not contain lovastatin, do not claim to contain lovastatin, and do not make a claim to lower cholesterol, they are not subject to FDA action. Two reviews confirm that the monacolin content of red yeast rice dietary supplements can vary over a wide range, with some containing negligible monacolins.
Clinical evidence
The amount typically used in clinical trials is 1200–2400 mg/day of red yeast rice containing approximately 10 mg total monacolins, of which half are monacolin K. A meta-analysis reported LDL-cholesterol lowered by 1.02 mmol/L (39.4 mg/dL) compared to placebo. The incidence of reported adverse effects ranged from 0% to 5% and was not different from controls. A second meta-analysis incorporating more recent clinical trials also reported significant lowering of total cholesterol and LDL-cholesterol.
Within the first review, the largest and longest duration trial was conducted in China. Close to 5,000 post-heart attack patients were enrolled for an average of 4.5 years to receive either a placebo or a RYR product named Xuezhikang (血脂康). The test product was an ethanol extract of red yeast rice, with a monacolin K content of 11.6 mg/day. Key results: in the treated group, risk of subsequent heart attacks was reduced by 45%, cardio deaths by 31%, and all-cause deaths by 33%. These heart attack and cardiovascular death outcomes appear to be better than what has been reported for prescription statin drugs. A 2008 review pointed out that the cardioprotective effects of statins in Japanese populations occur at lower doses than are needed in Western populations, and theorized that the low amount of monacolins found in the Xuezhikang product might have been more effectively athero-protective than expected in the Chinese population for the same reason.
Safety
The safety of red yeast rice (RYR) products has not been established. Some supplements have been found to contain high levels of citrinin, which can be toxic to the liver, kidneys, and cellular DNA. Commercial products also have highly variable amounts of monacolins and rarely declare this content on the label, making risk assessment difficult. Ingredient suppliers have been suspected of "spiking" red yeast rice preparations with purified lovastatin. One published analysis reported several commercial products as being almost entirely monacolin K—which would occur if the drug lovastatin was illegally added—rather than the expected composition of many monacolin compounds.
There are reports in the literature of muscle myopathy and liver damage resulting from red yeast rice usage. From a review: "The potential safety signals of myopathies and liver injury raise the hypothesis that the safety profile of RYR is similar to that of statins. Continuous monitoring of dietary supplements should be promoted to finally characterize their risk profile, thus supporting regulatory bodies for appropriate actions." The European Food Safety Authority (EFSA) Panel on Food Additives and Nutrient Sources added to Food concluded that when red yeast rice preparations contained monacolins, the Panel was unable to identify an intake that it could consider as safe. The reason given was case study reports of severe adverse reactions to products containing monacolins at amounts as low as 3 mg/day. Red yeast rice is not recommended during pregnancy or breast-feeding.
In March 2024, the Japanese Ministry of Health ordered stores to remove three RYR dietary supplements (Benikoji ColesteHelp, NaishiHelp Plus Cholesterol and Natto-kinase Sarasara Tsubu) produced by Kobayashi Pharmaceutical after reports of thousands made ill. Over a hundred people between the ages of 40 to 80 were hospitalized, and five had died , with four of them from kidney problems. There have been more than twelve thousand cases of health problems reported by users. The company said it uses a strain that does not produce citrinin. It has found puberulic acid in the recalled products and is looking into whether the substance might be linked to the fatalities. The suspect batch was manufactured in 2023. Some analysts have placed the blame on industry deregulation, intended to boost economic growth by facilitating the approval of health products. Benikoji products such as miso paste, crackers, food coloring, and a vinegar dressing made by other companies were also recalled. Kobayashi Pharmaceutical officially discontinued production of beni koji products on 8 August 2024.
See also
List of microorganisms used in food and beverage preparation
Medicinal fungi
References
External links
Chinese rice dishes
Dietary supplements
Fermented foods
Food colorings
Medical controversies
Medicinal fungi
Traditional Chinese medicine | Red yeast rice | [
"Biology"
] | 2,483 | [
"Fermented foods",
"Biotechnology products"
] |
4,050,532 | https://en.wikipedia.org/wiki/Pi-system | In mathematics, a -system (or pi-system) on a set is a collection of certain subsets of such that
is non-empty.
If then
That is, is a non-empty family of subsets of that is closed under non-empty finite intersections.
The importance of -systems arises from the fact that if two probability measures agree on a -system, then they agree on the -algebra generated by that -system. Moreover, if other properties, such as equality of integrals, hold for the -system, then they hold for the generated -algebra as well. This is the case whenever the collection of subsets for which the property holds is a -system. -systems are also useful for checking independence of random variables.
This is desirable because in practice, -systems are often simpler to work with than -algebras. For example, it may be awkward to work with -algebras generated by infinitely many sets So instead we may examine the union of all -algebras generated by finitely many sets This forms a -system that generates the desired -algebra. Another example is the collection of all intervals of the real line, along with the empty set, which is a -system that generates the very important Borel -algebra of subsets of the real line.
Definitions
A -system is a non-empty collection of sets that is closed under non-empty finite intersections, which is equivalent to containing the intersection of any two of its elements.
If every set in this -system is a subset of then it is called a
For any non-empty family of subsets of there exists a -system called the , that is the unique smallest -system of containing every element of
It is equal to the intersection of all -systems containing and can be explicitly described as the set of all possible non-empty finite intersections of elements of
A non-empty family of sets has the finite intersection property if and only if the -system it generates does not contain the empty set as an element.
Examples
For any real numbers and the intervals form a -system, and the intervals form a -system if the empty set is also included.
The topology (collection of open subsets) of any topological space is a -system.
Every filter is a -system. Every -system that doesn't contain the empty set is a prefilter (also known as a filter base).
For any measurable function the set defines a -system, and is called the -system by (Alternatively, defines a -system generated by )
If and are -systems for and respectively, then is a -system for the Cartesian product
Every -algebra is a -system.
Relationship to -systems
A -system on is a set of subsets of satisfying
if then
if is a sequence of (pairwise) subsets in then
Whilst it is true that any -algebra satisfies the properties of being both a -system and a -system, it is not true that any -system is a -system, and moreover it is not true that any -system is a -algebra. However, a useful classification is that any set system which is both a -system and a -system is a -algebra. This is used as a step in proving the - theorem.
The - theorem
Let be a -system, and let be a -system contained in The - theorem states that the -algebra generated by is contained in
The - theorem can be used to prove many elementary measure theoretic results. For instance, it is used in proving the uniqueness claim of the Carathéodory extension theorem for -finite measures.
The - theorem is closely related to the monotone class theorem, which provides a similar relationship between monotone classes and algebras, and can be used to derive many of the same results. Since -systems are simpler classes than algebras, it can be easier to identify the sets that are in them while, on the other hand, checking whether the property under consideration determines a -system is often relatively easy. Despite the difference between the two theorems, the - theorem is sometimes referred to as the monotone class theorem.
Example
Let be two measures on the -algebra and suppose that is generated by a -system If
for all and
then
This is the uniqueness statement of the Carathéodory extension theorem for finite measures. If this result does not seem very remarkable, consider the fact that it usually is very difficult or even impossible to fully describe every set in the -algebra, and so the problem of equating measures would be completely hopeless without such a tool.
Idea of the proof
Define the collection of sets
By the first assumption, and agree on and thus By the second assumption, and it can further be shown that is a -system. It follows from the - theorem that and so That is to say, the measures agree on
-Systems in probability
-systems are more commonly used in the study of probability theory than in the general field of measure theory. This is primarily due to probabilistic notions such as independence, though it may also be a consequence of the fact that the - theorem was proven by the probabilist Eugene Dynkin. Standard measure theory texts typically prove the same results via monotone classes, rather than -systems.
Equality in distribution
The - theorem motivates the common definition of the probability distribution of a random variable in terms of its cumulative distribution function. Recall that the cumulative distribution of a random variable is defined as
whereas the seemingly more general of the variable is the probability measure
where is the Borel -algebra. The random variables and (on two possibly different probability spaces) are (or ), denoted by if they have the same cumulative distribution functions; that is, if The motivation for the definition stems from the observation that if then that is exactly to say that and agree on the -system which generates and so by the example above:
A similar result holds for the joint distribution of a random vector. For example, suppose and are two random variables defined on the same probability space with respectively generated -systems and The joint cumulative distribution function of is
However, and Because
is a -system generated by the random pair the - theorem is used to show that the joint cumulative distribution function suffices to determine the joint law of In other words, and have the same distribution if and only if they have the same joint cumulative distribution function.
In the theory of stochastic processes, two processes are known to be equal in distribution if and only if they agree on all finite-dimensional distributions; that is, for all
The proof of this is another application of the - theorem.
Independent random variables
The theory of -system plays an important role in the probabilistic notion of independence. If and are two random variables defined on the same probability space then the random variables are independent if and only if their -systems satisfy for all and
which is to say that are independent. This actually is a special case of the use of -systems for determining the distribution of
Example
Let where are iid standard normal random variables. Define the radius and argument (arctan) variables
Then and are independent random variables.
To prove this, it is sufficient to show that the -systems are independent: that is, for all and
Confirming that this is the case is an exercise in changing variables. Fix and then the probability can be expressed as an integral of the probability density function of
See also
Notes
Citations
References
Measure theory
Families of sets | Pi-system | [
"Mathematics"
] | 1,500 | [
"Basic concepts in set theory",
"Families of sets",
"Combinatorics"
] |
4,050,658 | https://en.wikipedia.org/wiki/Chakravala%20method | The chakravala method () is a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation. It is commonly attributed to Bhāskara II, (c. 1114 – 1185 CE) although some attribute it to Jayadeva (c. 950 ~ 1000 CE). Jayadeva pointed out that Brahmagupta's approach to solving equations of this type could be generalized, and he then described this general method, which was later refined by Bhāskara II in his Bijaganita treatise. He called it the Chakravala method: chakra meaning "wheel" in Sanskrit, a reference to the cyclic nature of the algorithm. C.-O. Selenius held that no European performances at the time of Bhāskara, nor much later, exceeded its marvellous height of mathematical complexity.
This method is also known as the cyclic method and contains traces of mathematical induction.
History
Chakra in Sanskrit means cycle. As per popular legend, Chakravala indicates a mythical range of mountains which orbits around the Earth like a wall and not limited by light and darkness.
Brahmagupta in 628 CE studied indeterminate quadratic equations, including Pell's equation
for minimum integers x and y. Brahmagupta could solve it for several N, but not all.
Jayadeva and Bhaskara offered the first complete solution to the equation, using the chakravala method to find for the solution
This case was notorious for its difficulty, and was first solved in Europe by Brouncker in 1657–58 in response to a challenge by Fermat, using continued fractions. A method for the general problem was first completely described rigorously by Lagrange in 1766. Lagrange's method, however, requires the calculation of 21 successive convergents of the simple continued fraction for the square root of 61, while the chakravala method is much simpler. Selenius, in his assessment of the chakravala method, states
"The method represents a best approximation algorithm of minimal length that, owing to several minimization properties, with minimal effort and avoiding large numbers automatically produces the best solutions to the equation. The chakravala method anticipated the European methods by more than a thousand years. But no European performances in the whole field of algebra at a time much later than Bhaskara's, nay nearly equal up to our times, equalled the marvellous complexity and ingenuity of chakravala."
Hermann Hankel calls the chakravala method
"the finest thing achieved in the theory of numbers before Lagrange."
The method
From Brahmagupta's identity, we observe that for given N,
For the equation , this allows the "composition" (samāsa) of two solution triples and into a new triple
In the general method, the main idea is that any triple (that is, one which satisfies ) can be composed with the trivial triple to get the new triple for any m. Assuming we started with a triple for which , this can be scaled down by k (this is Bhaskara's lemma):
Since the signs inside the squares do not matter, the following substitutions are possible:
When a positive integer m is chosen so that (a + bm)/k is an integer, so are the other two numbers in the triple. Among such m, the method chooses one that minimizes the absolute value of m2 − N and hence that of (m2 − N)/k. Then the substitution relations are applied for m equal to the chosen value. This results in a new triple (a, b, k). The process is repeated until a triple with is found. This method always terminates with a solution (proved by Lagrange in 1768).
Optionally, we can stop when k is ±1, ±2, or ±4, as Brahmagupta's approach gives a solution for those cases.
Brahmagupta's composition method
In AD 628, Brahmagupta discovered a general way to find and of when given , when k is ±1, ±2, or ±4.
k = ±1
Using Brahmagupta's identity to compose the triple with itself:
The new triple can be expressed as .
Substituting gives a solution:
For , the original was already a solution. Substituting yields a second:
k = ±2
Again using the equation,
Substituting ,
Substituting ,
k = 4
Substituting into the equation creates the triple .
Which is a solution if is even:
If a is odd, start with the equations and .
Leading to the triples and . Composing the triples gives
When is odd,
k = -4
When , then . Composing with itself yields .
Again composing itself yields
Finally, from the earlier equations, compose the triples and , to get
.
This give us the solutions
(Note, is useful to find a solution to Pell's Equation, but it is not always the smallest integer pair. e.g. . The equation will give you , which when put into Pell's Equation yields , which works, but so does for .
Examples
n = 61
The n = 61 case (determining an integer solution satisfying ), issued as a challenge by Fermat many centuries later, was given by Bhaskara as an example.
We start with a solution for any k found by any means. In this case we can let b be 1, thus, since , we have the triple . Composing it with gives the triple , which is scaled down (or Bhaskara's lemma is directly used) to get:
For 3 to divide and to be minimal, we choose , so that we have the triple . Now that k is −4, we can use Brahmagupta's idea: it can be scaled down to the rational solution , which composed with itself three times, with respectively, when k becomes square and scaling can be applied, this gives . Finally, such procedure can be repeated until the solution is found (requiring 9 additional self-compositions and 4 additional square-scalings): . This is the minimal integer solution.
n = 67
Suppose we are to solve for x and y.
We start with a solution for any k found by any means; in this case we can let b be 1, thus producing . At each step, we find an m > 0 such that k divides a + bm, and |m2 − 67| is minimal. We then update a, b, and k to and respectively.
First iteration
We have . We want a positive integer m such that k divides a + bm, i.e. 3 divides 8 + m, and |m2 − 67| is minimal. The first condition implies that m is of the form 3t + 1 (i.e. 1, 4, 7, 10,… etc.), and among such m, the minimal value is attained for m = 7. Replacing (a, b, k) with , we get the new values . That is, we have the new solution:
At this point, one round of the cyclic algorithm is complete.
Second iteration
We now repeat the process. We have . We want an m > 0 such that k divides a + bm, i.e. 6 divides 41 + 5m, and |m2 − 67| is minimal. The first condition implies that m is of the form 6t + 5 (i.e. 5, 11, 17,… etc.), and among such m, |m2 − 67| is minimal for m = 5. This leads to the new solution a = (41⋅5 + 67⋅5)/6, etc.:
Third iteration
For 7 to divide 90 + 11m, we must have m = 2 + 7t (i.e. 2, 9, 16,… etc.) and among such m, we pick m = 9.
Final solution
At this point, we could continue with the cyclic method (and it would end, after seven iterations), but since the right-hand side is among ±1, ±2, ±4, we can also use Brahmagupta's observation directly. Composing the triple (221, 27, −2) with itself, we get
that is, we have the integer solution:
This equation approximates as to within a margin of about .
Notes
References
Florian Cajori (1918), Origin of the Name "Mathematical Induction", The American Mathematical Monthly 25 (5), p. 197-201.
George Gheverghese Joseph, The Crest of the Peacock: Non-European Roots of Mathematics (1975).
G. R. Kaye, "Indian Mathematics", Isis 2:2 (1919), p. 326–356.
Clas-Olaf Selenius, "Rationale of the chakravala process of Jayadeva and Bhaskara II" , Historia Mathematica 2 (1975), pp. 167–184.
Clas-Olaf Selenius, "Kettenbruchtheoretische Erklärung der zyklischen Methode zur Lösung der Bhaskara-Pell-Gleichung", Acta Acad. Abo. Math. Phys. 23 (10) (1963), pp. 1–44.
Hoiberg, Dale & Ramchandani, Indu (2000). Students' Britannica India. Mumbai: Popular Prakashan.
Goonatilake, Susantha (1998). Toward a Global Science: Mining Civilizational Knowledge. Indiana: Indiana University Press. .
Kumar, Narendra (2004). Science in Ancient India. Delhi: Anmol Publications Pvt Ltd.
Ploker, Kim (2007) "Mathematics in India". The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook New Jersey: Princeton University Press.
External links
Introduction to chakravala
Brahmagupta
Diophantine equations
Number theoretic algorithms
Indian mathematics | Chakravala method | [
"Mathematics"
] | 2,072 | [
"Diophantine equations",
"Mathematical objects",
"Equations",
"Number theory"
] |
4,051,468 | https://en.wikipedia.org/wiki/Plutonium-238 | Plutonium-238 ( or Pu-238) is a radioactive isotope of plutonium that has a half-life of 87.7 years.
Plutonium-238 is a very powerful alpha emitter; as alpha particles are easily blocked, this makes the plutonium-238 isotope suitable for usage in radioisotope thermoelectric generators (RTGs) and radioisotope heater units. The density of plutonium-238 at room temperature is about 19.8 g/cc. The material will generate about 0.57 watts per gram of 238Pu.
The bare sphere critical mass of metallic plutonium-238 is not precisely known, but its calculated range is between 9.04 and 10.07 kilograms.
History
Initial production
Plutonium-238 was the first isotope of plutonium to be discovered. It was synthesized by Glenn Seaborg and associates in December 1940 by bombarding uranium-238 with deuterons, creating neptunium-238.
+ → + 2
The neptunium isotope then undergoes β− decay to plutonium-238, with a half-life of 2.12 days:
→ + +
Plutonium-238 naturally decays to uranium-234 and then further along the radium series to lead-206. Historically, most plutonium-238 has been produced by Savannah River in their weapons reactor, by irradiating neptunium-237 (half life ) with neutrons.
+ →
Neptunium-237 is a by-product of the production of plutonium-239 weapons-grade material, and when the site was shut down in 1988, 238Pu was mixed with about 16% 239Pu.
Manhattan Project
Plutonium was first synthesized in 1940 and isolated in 1941 by chemists at the University of California, Berkeley.
The Manhattan Project began shortly after the discovery, with most early research (pre-1944) carried out using small samples manufactured using the large cyclotrons at the Berkeley Rad Lab and Washington University in St. Louis.
Much of the difficulty encountered during the Manhattan Project regarded the production and testing of nuclear fuel. Both uranium and plutonium were eventually determined to be fissile, but in each case they had to be purified to select for the isotopes suitable for an atomic bomb.
With World War II underway, the research teams were pressed for time. Micrograms of plutonium were made by cyclotrons in 1942 and 1943. In the fall of 1943 Robert Oppenheimer is quoted as saying "there's only a twentieth of a milligram in existence."
By his request, the Rad Lab at Berkeley made available 1.2 mg of plutonium by the end of October 1943, most of which was taken to Los Alamos for theoretical work there.
The world's second reactor, the X-10 Graphite Reactor built at a secret site at Oak Ridge, would be fully operational in 1944. In November 1943, shortly after its initial start-up, it was able to produce a minuscule 500 mg. However, this plutonium was mixed with large amounts of uranium fuel and destined for the nearby chemical processing pilot plant for isotopic separation (enrichment). Gram amounts of plutonium would not be available until spring of 1944.
Industrial-scale production of plutonium only began in March 1945 when the B Reactor at the Hanford Site began operation.
Plutonium-238 and human experimentation
While samples of plutonium were available in small quantities and being handled by researchers, no one knew what health effects this might have.
Plutonium handling mishaps occurred in 1944, causing alarm in the Manhattan Project leadership as contamination inside and outside the laboratories was becoming an issue. In August 1944, chemist Donald Mastick was sprayed in the face with a solution of plutonium chloride, causing him to accidentally swallow some. Nose swipes taken of plutonium researchers indicated that plutonium was being breathed in. Lead Manhattan Project chemist Glenn Seaborg, discoverer of many transuranium elements including plutonium, urged that a safety program be developed for plutonium research. In a memo to Robert Stone at the Chicago Met Lab, Seaborg wrote "that a program to trace the course of plutonium in the body be initiated as soon as possible ... [with] the very highest priority." This memo was dated January 5, 1944, prior to many of the contamination events of 1944 in Building D where Mastick worked. Seaborg later claimed that he did not at all intend to imply human experimentation in this memo, nor did he learn of its use in humans until far later due to the compartmentalization of classified information.
With bomb-grade enriched plutonium-239 destined for critical research and for atomic weapon production, plutonium-238 was used in early medical experiments as it is unusable as atomic weapon fuel. However, 238Pu is far more dangerous than 239Pu due to its short half-life and being a strong alpha-emitter. It was soon found that plutonium was being excreted at a very slow rate, accumulating in test subjects involved in early human experimentation. This led to severe health consequences for the patients involved.
From April 10, 1945, to July 18, 1947, eighteen people were injected with plutonium as part of the Manhattan Project. Doses administered ranged from 0.095 to 5.9 microcuries (μCi).
Albert Stevens, after a (mistaken) terminal cancer diagnosis which seemed to include many organs, was injected in 1945 with plutonium without his informed consent. He was referred to as patient CAL-1 and the plutonium consisted of 3.5 μCi 238Pu, and 0.046 μCi 239Pu, giving him an initial body burden of 3.546 μCi (131 kBq) total activity. The fact that he had the highly radioactive plutonium-238 (produced in the 60-inch cyclotron at the Crocker Laboratory by deuteron bombardment of natural uranium) contributed heavily to his long-term dose. Had all of the plutonium given to Stevens been the long-lived 239Pu as used in similar experiments of the time, Stevens's lifetime dose would have been significantly smaller. The short half-life of 87.7 years of 238Pu means that a large amount of it decayed during its time inside his body, especially when compared to the 24,100 year half-life of 239Pu.
After his initial "cancer" surgery removed many non-cancerous "tumors", Stevens survived for about 20 years after his experimental dose of plutonium before succumbing to heart disease; he had received the highest known accumulated radiation dose of any human patient. Modern calculations of his lifetime absorbed dose give a significant 64 Sv (6400 rem) total.
Weapons
The first application of 238Pu was its use in nuclear weapon components made at Mound Laboratories for Lawrence Radiation Laboratory (now Lawrence Livermore National Laboratory). Mound was chosen for this work because of its experience in producing the polonium-210-fueled Urchin initiator and its work with several heavy elements in a Reactor Fuels program. Two Mound scientists spent 1959 at Lawrence in joint development while the Special Metallurgical Building was constructed at Mound to house the project. Meanwhile, the first sample of 238Pu came to Mound in 1959.
The weapons project called for the production of about 1 kg/year of 238Pu over a 3-year period. However, the 238Pu component could not be produced to the specifications despite a 2-year effort beginning at Mound in mid-1961. A maximum effort was undertaken with 3 shifts a day, 6 days a week, and ramp-up of Savannah River's 238Pu production over the next three years to about 20 kg/year. A loosening of the specifications resulted in productivity of about 3%, and production finally began in 1964.
Use in radioisotope thermoelectric generators
Beginning on January 1, 1957, Mound Laboratories RTG inventors Jordan & Birden were working on an Army Signal Corps contract (R-65-8- 998 11-SC-03-91) to conduct research on radioactive materials and thermocouples suitable for the direct conversion of heat to electrical energy using polonium-210 as the heat source.
In 1961, Capt. R. T. Carpenter had chosen 238Pu as the fuel for the first RTG (radioisotope thermoelectric generator) to be launched into space as auxiliary power for the Transit IV Navy navigational satellite. By January 21, 1963, the decision had yet to be made as to what isotope would be used to fuel the large RTGs for NASA programs.
Early in 1964, Mound Laboratories scientists developed a different method of fabricating the weapon component that resulted in a production efficiency of around 98%. This made available the excess Savannah River 238Pu production for Space Electric Power use just in time to meet the needs of the SNAP-27 RTG on the Moon, the Pioneer spacecraft, the Viking Mars landers, more Transit Navy navigation satellites (precursor to today's GPS) and two Voyager spacecraft, for which all of the 238Pu heat sources were fabricated at Mound Laboratories.
The radioisotope heater units were used in space exploration beginning with the Apollo Radioisotope Heaters (ALRH) warming the Seismic Experiment placed on the Moon by the Apollo 11 mission and on several Moon and Mars rovers, to the 129 LWRHUs warming the experiments on the Galileo spacecraft.
An addition to the Special Metallurgical building weapon component production facility was completed at the end of 1964 for 238Pu heat source fuel fabrication. A temporary fuel production facility was also installed in the Research Building in 1969 for Transit fuel fabrication. With completion of the weapons component project, the Special Metallurgical Building, nicknamed "Snake Mountain" because of the difficulties encountered in handling large quantities of 238Pu, ceased operations on June 30, 1968, with 238Pu operations taken over by the new Plutonium Processing Building, especially designed and constructed for handling large quantities of 238Pu. Plutonium-238 is given the highest relative hazard number (152) of all 256 radionuclides evaluated by Karl Z. Morgan et al. in 1963.
Nuclear powered pacemakers
In the United States, when plutonium-238 became available for non-military uses, numerous applications were proposed and tested, including the cardiac pacemaker program that began on June 1, 1966, in conjunction with NUMEC. The last of these units was implanted in 1988, as lithium-powered pacemakers, which had an expected lifespan of 10 or more years without the disadvantages of radiation concerns and regulatory hurdles, made these units obsolete.
, there were nine living people with nuclear-powered pacemakers in the United States, out of an original 139 recipients. When these individuals die, the pacemaker is supposed to be removed and shipped to Los Alamos where the plutonium will be recovered.
In a letter to the New England Journal of Medicine discussing a woman who received a Numec NU-5 decades ago that is continuously operating, despite an original $5,000 price tag equivalent to $23,000 in 2007 dollars, the follow-up costs have been about $19,000 compared with $55,000 for a battery-powered pacemaker.
Another nuclear powered pacemaker was the Medtronics “Laurens-Alcatel Model 9000”. Approximately 1600 nuclear-powered cardiac pacemakers and/or battery assemblies have been located across the United States, and are eligible for recovery by the Off-Site Source Recovery Project (OSRP) Team at Los Alamos National Laboratory (LANL).
Production
Reactor-grade plutonium from spent nuclear fuel contains various isotopes of plutonium. 238Pu makes up only one or two percent, but it may be responsible for much of the short-term decay heat because of its short half-life relative to other plutonium isotopes. Reactor-grade plutonium is not useful for producing 238Pu for RTGs because difficult isotopic separation would be needed.
Pure plutonium-238 is prepared by neutron irradiation of neptunium-237, one of the minor actinides that can be recovered from spent nuclear fuel during reprocessing, or by the neutron irradiation of americium in a reactor. The targets are purified chemically, including dissolution in nitric acid to extract the plutonium-238. A 100 kg sample of light water reactor fuel that has been irradiated for three years contains only about 700 grams (0.7% by weight) of neptunium-237, which must be extracted and purified. Significant amounts of pure 238Pu could also be produced in a thorium fuel cycle.
In the US, the Department of Energy's Space and Defense Power Systems Initiative of the Office of Nuclear Energy processes 238Pu, maintains its storage, and develops, produces, transports and manages safety of radioisotope power and heating units for both space exploration and national security spacecraft.
As of March 2015, a total of of 238Pu was available for civil space uses. Out of the inventory, remained in a condition meeting NASA specifications for power delivery. Some of this pool of 238Pu was used in a multi-mission radioisotope thermoelectric generator (MMRTG) for the 2020 Mars Rover mission and two additional MMRTGs for a notional 2024 NASA mission. would remain after that, including approximately just barely meeting the NASA specification.
Since isotope content in the material is lost over time to radioactive decay while in storage, this stock could be brought up to NASA specifications by blending it with a smaller amount of freshly produced 238Pu with a higher content of the isotope, and therefore energy density.
U.S. production ceases and resumes
The United States stopped producing bulk 238Pu with the closure of the Savannah River Site reactors in 1988. Since 1993, all of the 238Pu used in American spacecraft has been purchased from Russia. From 1992 to 1994, 10 kilograms were purchased by the US Department of Energy from Russia's Mayak Production Association. Via agreement with Minatom, the US must use plutonium for uncrewed NASA missions, and Russia must use the currency for environmental and social investment in the Chelyabinsk region, affected by long-term radioactive contamination such as the Kyshtym disaster. In total, have been purchased, but Russia is no longer producing 238Pu, and their own supply is reportedly running low.
In February 2013, a small amount of 238Pu was successfully produced by Oak Ridge's High Flux Isotope Reactor, and on December 22, 2015, they reported the production of of 238Pu.
In March 2017, Ontario Power Generation (OPG) and its venture arm, Canadian Nuclear Partners, announced plans to produce 238Pu as a second source for NASA. Rods containing neptunium-237 will be fabricated by Pacific Northwest National Laboratory (PNNL) in Washington State and shipped to OPG's Darlington Nuclear Generating Station in Clarington, Ontario, Canada where they will be irradiated with neutrons inside the reactor's core to produce 238Pu.
In January 2019, it was reported that some automated aspects of its production were implemented at Oak Ridge National Laboratory in Tennessee, that are expected to triple the number of plutonium pellets produced each week. The production rate is now expected to increase from 80 pellets per week to about 275 pellets per week, for a total production of about 400 grams per year. The goal now is to optimize and scale-up the processes in order to produce an average of per year by 2025.
Applications
The main application of 238Pu is as the heat source in radioisotope thermoelectric generators (RTGs). The RTG was invented in 1954 by Mound scientists Ken Jordan and John Birden, who were inducted into the National Inventors Hall of Fame in 2013. They immediately produced a working prototype using a 210Po heat source, and on January 1, 1957, entered into an Army Signal Corps contract (R-65-8- 998 11-SC-03-91) to conduct research on radioactive materials and thermocouples suitable for the direct conversion of heat to electrical energy using polonium-210 as the heat source.
In 1966, a study reported by SAE International described the potential for the use of plutonium-238 in radioisotope power subsystems for applications in space. This study focused on employing power conversion through the Rankine cycle, Brayton cycle, thermoelectric conversion and thermionic conversion with plutonium-238 as the primary heating element. The heat supplied by the plutonium-238 heating element was consistent across the 400 °C to 1000 °C regime, but future technology could reach an upper limit of 2000 °C, further increasing the efficiency of the power systems. The Rankine cycle study reported an efficiency between 15 and 19% with inlet turbine temperatures of , whereas the Brayton cycle offered efficiency greater than 20% with an inlet temperature of . Thermoelectric converters offered low efficiency (3–5%) but high reliability. Thermionic conversion could provide efficiencies similar to the Brayton cycle if proper conditions were reached.
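For context, these cycle efficiencies can be compared with the ideal Carnot limit, 1 − T_cold/T_hot (temperatures in kelvin). The hot-side and radiator temperatures in the sketch below are assumptions chosen for illustration, since the study's exact inlet temperatures are not reproduced here.

```python
# Carnot-limit comparison for the power-conversion cycles discussed above.
# All temperatures are illustrative assumptions, not the study's values.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Ideal Carnot efficiency for temperatures given in degrees Celsius."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

T_COLD_C = 100  # assumed radiator temperature
for label, t_hot_c in [("Rankine (assumed 700 C inlet)", 700),
                       ("Brayton (assumed 1000 C inlet)", 1000),
                       ("Thermoelectric (assumed 500 C)", 500)]:
    print(f"{label}: Carnot limit ~{carnot_efficiency(t_hot_c, T_COLD_C):.0%}")

# Real converters reach only a fraction of these ideal limits, consistent
# with the 15-19% (Rankine), >20% (Brayton) and 3-5% (thermoelectric)
# figures quoted above.
```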
RTG technology was first developed by Los Alamos National Laboratory during the 1960s and 1970s to provide radioisotope thermoelectric generator power for cardiac pacemakers. Of the 250 plutonium-powered pacemakers Medtronic manufactured, twenty-two were still in service more than twenty-five years later, a feat that no battery-powered pacemaker could achieve.
This same RTG power technology has been used in spacecraft such as Pioneer 10 and 11, Voyager 1 and 2, Cassini–Huygens and New Horizons, and in other devices, such as the Mars Science Laboratory and Mars 2020 Perseverance Rover, for long-term nuclear power generation.
See also
Atomic battery
Plutonium-239
Polonium-210
References
External links
Story of Seaborg's discovery of Pu-238, especially pages 34–35.
NLM Hazardous Substances Databank – Plutonium, Radioactive
Fertile materials
Isotopes of plutonium
Radioisotope fuels
Fissile materials | Plutonium-238 | [
"Chemistry"
] | 3,715 | [
"Explosive chemicals",
"Fissile materials",
"Isotopes of plutonium",
"Isotopes"
] |
4,051,608 | https://en.wikipedia.org/wiki/132P/Helin%E2%80%93Roman%E2%80%93Alu | 132P/Helin–Roman–Alu, also known as Helin-Roman-Alu 2, is a periodic comet in the Solar System.
References
External links
132P/Helin-Roman-Alu 2 – Seiichi Yoshida @ aerith.net
132P at Kronk's Cometography
Periodic comets
0132
132P
132P
132P
19891026 | 132P/Helin–Roman–Alu | [
"Astronomy"
] | 82 | [
"Astronomy stubs",
"Comet stubs"
] |
4,051,670 | https://en.wikipedia.org/wiki/Secular%20resonance | A secular resonance is a type of orbital resonance between two bodies with synchronized precessional frequencies. In celestial mechanics, secular refers to the long-term motion of a system, and resonance means that periods or frequencies are related by a simple ratio of small integers. Typically, the synchronized precessions in secular resonances are between the rates of change of the argument of the periapses or the rates of change of the longitudes of the ascending nodes of two system bodies. Secular resonances can be used to study the long-term orbital evolution of asteroids and their families within the asteroid belt.
Description
Secular resonances occur when the precession of two orbits is synchronised (a precession of the perihelion, with frequency g, or the ascending node, with frequency s, or both). A small body (such as a small Solar System body) in secular resonance with a much larger one (such as a planet) will precess at the same rate as the large body. Over relatively short time periods (a million years or so), a secular resonance will change the eccentricity and the inclination of the small body.
One can distinguish between:
linear secular resonances between a body (no subscript) and a single other large perturbing body (e.g. a planet, with the subscript giving its order from the Sun), such as the ν6 = g − g6 secular resonance between asteroids and Saturn; and
nonlinear secular resonances, which are higher-order resonances, usually combinations of linear resonances such as the z1 = (g − g6) + (s − s6), or the ν6 + ν5 = 2g − g6 − g5 resonances.
ν6 resonance
A prominent example of a linear resonance is the ν6 secular resonance between asteroids and Saturn. Asteroids that approach Saturn have their eccentricity slowly increased until they become Mars-crossers, when they are usually ejected from the asteroid belt by a close encounter with Mars. The resonance forms the inner and "side" boundaries of the asteroid belt around 2 AU and at inclinations of about 20°.
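In practice, proximity to the ν6 resonance is judged from the frequency difference g − g6. The sketch below illustrates the check; g6 ≈ 28.25 arcseconds per year is the commonly quoted approximate value of the planetary eigenfrequency associated with Saturn, while the asteroid frequencies and the nearness threshold are invented for illustration.

```python
# Check proximity to the nu6 secular resonance: nu6 = g - g6.
# g6 ~ 28.25 arcsec/yr is the commonly quoted approximate eigenfrequency
# associated with Saturn; the asteroid frequencies and the 0.5 arcsec/yr
# threshold below are arbitrary illustrative choices.

G6_ARCSEC_PER_YR = 28.25

def nu6_offset(g_asteroid: float) -> float:
    """Distance from exact nu6 resonance; zero means exact resonance."""
    return g_asteroid - G6_ARCSEC_PER_YR

for g in (26.0, 28.2, 30.5):
    offset = nu6_offset(g)
    tag = "near resonance" if abs(offset) < 0.5 else "far from resonance"
    print(f'g = {g:5.2f} "/yr -> offset {offset:+.2f} "/yr ({tag})')
```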
See also
Orbital resonance
Asteroid belt
References
Orbital perturbations
Orbital resonance | Secular resonance | [
"Physics",
"Chemistry",
"Astronomy"
] | 449 | [
"Scattering stubs",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Scattering"
] |
4,051,694 | https://en.wikipedia.org/wiki/136P/Mueller | 136P/Mueller, also known as Mueller 3, is a periodic comet in the Solar System.
References
External links
Orbital simulation from JPL (Java) / Horizons Ephemeris
136P/Mueller 3 – Seiichi Yoshida @ aerith.net
136P at Kronk's Cometography
Periodic comets
0136
136P
136P
136P
19900924 | 136P/Mueller | [
"Astronomy"
] | 79 | [
"Astronomy stubs",
"Comet stubs"
] |
4,051,750 | https://en.wikipedia.org/wiki/Dagmar%20bumper | Dagmar bumpers (also known as "bullet bumpers") is a slang term for chrome conical-shaped bumper guards that began to appear on the front bumper/grille assemblies of certain American automobiles following World War II. They reached their peak in the mid-1950s.
Derivation
The term evokes the prominent bosom of Dagmar, a buxom early-1950s television personality known for low-cut gowns and conical bra cups. She was amused by the tribute.
History
As originally conceived by Harley Earl, GM Vice President of Design, the conical bumper guards would mimic artillery shells. Placed inboard of the headlights on front bumpers of Cadillacs, they were intended to both convey the image of a speeding projectile and protect vehicles' front ends in collisions. The similarity of these features to the then popular bullet bra as epitomized by buxom television personality Dagmar was inescapable.
As the 1950s wore on and American automakers' use of chrome grew more flamboyant, they grew more pronounced. The black rubber tips they gained on the 1957 Cadillac Eldorado Brougham and other models were known as pasties.
In the early 1960s, American car designers shed both rear tailfins and prominent bumper guards.
Use
Postwar Cadillacs began sporting conical bumper guards in the 1946 model year. In 1951 models, some were raised into the grille. In 1957, black rubber tips appeared. The element continued to become more pronounced in size through 1958, but were eliminated in the 1959 Cadillac redesign.
Mercury sported Dagmars in 1953 through the 1956 model year. Lincoln added Dagmars in 1960, with a black rubber ring separating the body from the chrome tip.
Buick added Dagmars on its 1954 and 1955 models, in 1954 as part of the bumper assembly, and moved into the grille in 1955.
Packard included large Dagmars on the bumper in 1955 and 1956 models.
Full-sized Chevys in 1961 and 1963 also had small rubber Dagmars on the front bumper, and 1962 Ford Galaxie had small rubber Dagmars as an option.
The GAZ-13 Chaika had similar designs until its discontinuation in the 1980s.
Other iterations
In 1974, the British motoring press applied the name of statuesque British actress Sabrina to oversized pairs of protruding rubber bumper blocks added to MG MGB, MG Midget, Triumph Spitfire and Triumph TR6 sports cars to meet strengthened US auto safety regulations. The term, which was not common in the U.S., lingered at least to the mid-1990s in some areas.
Gallery
References
External links
1961 Chevrolet Impala with Dagmar bumpers
1955 Packard Caribbean with Dagmar bumpers
Vehicle design
Slang
Automotive body parts
Automotive styling features | Dagmar bumper | [
"Engineering"
] | 557 | [
"Vehicle design",
"Design"
] |
4,051,830 | https://en.wikipedia.org/wiki/Sugar%20glass | Sugar glass (also called candy glass, edible glass, and breakaway glass) is a brittle transparent form of sugar that looks like glass. It can be formed into a sheet that looks like flat glass or an object, such as a bottle or drinking glass.
Description
Sugar glass is made by dissolving sugar in water and heating it to at least the "hard crack" stage (approx. 150 °C / 300 °F) in the candy making process. Glucose or corn syrup is used to prevent the sugar from recrystallizing and becoming opaque, by disrupting the orderly arrangement of the molecules. Cream of tartar is also used for this purpose, converting the sugar into glucose and fructose.
Because sugar glass is hygroscopic, it must be used soon after preparation, or it will soften and lose its brittle quality.
Sugar glass has been used to simulate glass in movies, photographs, plays and professional wrestling.
Other uses
Sugar glass is also used to make sugar sculptures or other forms of edible art.
Sugar glass with blue dye was used to represent the methamphetamine in the AMC TV series Breaking Bad. Actor Aaron Paul would eat it on set.
References
Amorphous solids
Sugar confectionery
Glass types
Stunts | Sugar glass | [
"Physics"
] | 253 | [
"Amorphous solids",
"Unsolved problems in physics"
] |
4,051,863 | https://en.wikipedia.org/wiki/Influenza%20A%20virus%20subtype%20H6N2 | H6N2 is an avian influenza virus with two forms: one of low and the other of high pathogenicity. It can cause serious problems for poultry and also infects ducks. The H6N2 subtype is considered to be a non-pathogenic chicken virus; its original host is still unknown, but it may stem from feral animals and/or aquatic bird reservoirs. H6N2, along with H6N6, has been found to replicate in mice without preadaptation, and some strains have acquired the ability to bind to human-like receptors. Genetic markers for H6N2 include a 22-amino-acid stalk deletion in the neuraminidase (NA) protein gene, increased N-glycosylation, and a D144 mutation of the haemagglutinin (HA) protein gene. Transmission of avian influenza viruses from wild aquatic birds to domestic birds usually causes subclinical infections and, occasionally, respiratory disease and drops in egg production. Histological features observed in chickens infected with H6N2 include fibrinous yolk peritonitis, salpingitis, oophoritis, nephritis, and swollen kidneys.
Signs and symptoms
sneezing and lacrimation
prostration
anorexia and fever
sometimes swelling of the infraorbital sinuses with nasal mucous
References
Avian influenza
H6N2 | Influenza A virus subtype H6N2 | [
"Biology"
] | 300 | [
"Virus stubs",
"Viruses"
] |
7,055,079 | https://en.wikipedia.org/wiki/MPEG%20LA | MPEG LA was an American company based in Denver, Colorado that licensed patent pools covering essential patents required for use of the MPEG-2, MPEG-4, IEEE 1394, VC-1, ATSC, MVC, MPEG-2 Systems, AVC/H.264 and HEVC standards.
Via Licensing Corp acquired MPEG LA in April 2023 and formed a new patent pool administration company called Via Licensing Alliance.
History
MPEG LA started operations in July 1997, immediately after receiving a Department of Justice Business Review Letter. During formation of the MPEG-2 standard, a working group of participating companies recognized that the biggest challenge to adoption was efficient access to essential patents owned by many patent owners. That ultimately led a group of MPEG-2 patent owners to form MPEG LA, which in turn created the first modern-day patent pool as a solution. The majority of patents underlying MPEG-2 technology were owned by three companies: Sony (311 patents), Thomson (198 patents) and Mitsubishi Electric (119 patents).
In June 2012, MPEG LA announced a call for patents essential to the High Efficiency Video Coding (HEVC) standard.
In September 2012, MPEG LA launched Librassay, which makes diagnostic patent rights from some of the world's leading research institutions available to everyone through a single license. Organizations which have included patents in Librassay include Johns Hopkins University; Ludwig Institute for Cancer Research; Memorial Sloan Kettering Cancer Center; National Institutes of Health (NIH); Partners HealthCare; The Board of Trustees of the Leland Stanford Junior University; The Trustees of the University of Pennsylvania; The University of California, San Francisco; and Wisconsin Alumni Research Foundation (WARF).
On September 29, 2014, the MPEG LA announced their HEVC license which covers the patents from 23 companies. The license is US$0.20 per HEVC product after the first 100,000 units each year with an annual cap. The license has been expanded to include the profiles in version 2 of the HEVC standard.
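The per-unit structure of such a license is simple to model. The sketch below uses the quoted US$0.20 rate and 100,000-unit annual threshold; the amount of the annual cap is not stated here, so it is left as a parameter.

```python
# Sketch of the per-unit HEVC royalty structure described above:
# US$0.20 per product after the first 100,000 units each year,
# subject to an annual cap whose amount is not specified here.

def annual_royalty(units, rate=0.20, exempt=100_000, cap=None):
    """Royalty owed for one year's unit volume."""
    owed = max(0, units - exempt) * rate
    return min(owed, cap) if cap is not None else owed

print(annual_royalty(50_000))           # 0.0 (under the exemption)
print(annual_royalty(600_000))          # 100000.0 = (600k - 100k) * 0.20
print(annual_royalty(10**9, cap=25e6))  # capped (cap amount assumed)
```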
On March 5, 2015, the MPEG LA announced their DisplayPort license which is US$0.20 per DisplayPort product.
In April 2023, in what is thought to be the first time that two pool administrators have merged into one, Via Licensing Corp acquired MPEG LA and formed a new patent pool administrator called Via Licensing Alliance. Via President Heath Hoglund will serve as president of the new company. MPEG LA CEO Larry Horn will serve as a Via LA advisor.
Criticism
MPEG LA has claimed that video codecs such as Theora and VP8 infringe on patents owned by its licensors, without disclosing the affected patent or patents. They then issued a call for “any party that believes it has patents that are essential to the VP8 video codec”. In April 2013, Google and MPEG LA announced an agreement covering the VP8 video format.
In May 2010, Nero AG filed an antitrust suit against MPEG LA, claiming it "unlawfully extended its patent pools by adding non-essential patents to the MPEG-2 patent pool" and has been inconsistent in charging royalty fees. The United States District Court for the Central District of California dismissed the suit with prejudice on November 29, 2010.
David Balto, who is a former policy director at the Federal Trade Commission, has used the MPEG-2 patent pool as an example of why patent pools need more scrutiny so that they do not suppress innovation.
The MPEG-2 patent pool began with 100 patents in 1997, and additional patents were added since then. The MPEG-2 license agreement states that, if possible, the license fee will not increase when new patents are added. The MPEG-2 license agreement stated that MPEG-2 royalties must be paid when there are one or more active patents in either the country of manufacture or the country of sale. The original MPEG-2 license rate was US$4.00 for a decoding license, US$4.00 for an encoding license and US$6.00 for an encode-decode consumer product.
A criticism of the MPEG-2 patent pool is that even though the number of patents decreased from 1,048 to 416 by June 2013, the license fee did not decrease with the expiration rate of MPEG-2 patents.
For products from January 1, 2002, through December 31, 2009, royalties were US$2.50 for a decoding license, US$2.50 for an encoding license and US$2.50 for an encode-decode consumer product license. Since January 1, 2010, MPEG-2 patent pool royalties have been US$2.00 for a decoding license, US$2.00 for an encoding license and US$2.00 for an encode-decode consumer product.
H.264/MPEG-4 AVC licensors
The following organizations hold one or more patents in MPEG LA's H.264/AVC patent pool.
HEVC licensors
The following organizations hold one or more patents in the HEVC patent pool.
VC-1 licensors
The following organizations hold one or more patents in the VC-1 patent pool.
See also
Avanci
DVD6C
References
External links
MPEG LA corporate website
New MPEG LA MPEG-2 License Agreement Offers Extended Coverage at Reduced Royalty Rates (Press Release on businesswire.com)
Companies based in Denver
MPEG
Patent pools
Open standards covered by patents | MPEG LA | [
"Technology"
] | 1,130 | [
"Multimedia",
"MPEG"
] |
7,055,146 | https://en.wikipedia.org/wiki/Martindale%3A%20The%20Complete%20Drug%20Reference | Martindale: The Complete Drug Reference is a reference book published by Pharmaceutical Press listing some 6,000 drugs and medicines used throughout the world, including details of over 125,000 proprietary preparations. It also includes almost 700 disease treatment reviews.
It was first published in 1883 under the title Martindale: The Extra Pharmacopoeia. Martindale contains information on drugs in clinical use worldwide, as well as selected investigational and veterinary drugs, herbal and complementary medicines, pharmaceutical excipients, vitamins and nutritional agents, vaccines, radiopharmaceuticals, contrast media and diagnostic agents, medicinal gases, drugs of abuse and recreational drugs, toxic substances, disinfectants, and pesticides.
International usefulness
Martindale aims to cover drugs and related substances reported to be of clinical interest anywhere in the world. It provides health professionals with a useful source of information to identify medicines, such as confirming the drug and brand name of a medication being taken by a patient arriving from abroad. Alternatively, if the drug is not available, the class of agent can be determined allowing a pharmacist or doctor to determine which other equivalent drugs might be substituted. Monographs include Chemical Abstracts Service (CAS) numbers, Anatomical Therapeutic Chemical Classification System (ATC) codes and FDA Unique Ingredient Identifier (UNII) codes to help readers refer to other information systems.
Arrangement
Martindale is arranged into two main parts followed by three extensive indexes:
Monographs on drugs and ancillary substances, listing over 6,400 monographs arranged in 49 chapters based on clinical use with the corresponding disease treatment reviews. Monographs summarize the nomenclature, properties, actions, and uses of each substance. A chapter on supplementary drugs and other substances covers monographs on new drugs, those not easily classified, herbals, and drugs no longer clinically used but still of interest. Monographs of some toxic substances are also included.
Preparations - including over 125,000 items from 43 countries and regions, including China.
Directory of Manufacturers listing some 25,000 entries.
Pharmaceutical Terms in Various Languages: this index lists nearly 5,600 pharmaceutical terms and routes of administration in 13 major European languages as an aid to the non-native speaker in interpreting packaging, product information, or prescriptions written in another language.
General index: prepared from over 175,000 entries it includes approved names, synonyms and chemical names; a separate Cyrillic section lists non-proprietary and proprietary names in Russian and Ukrainian.
Digital versions include an additional 1,000 drug monographs, 100,000 preparation names, and 5,000 manufacturers.
List of the editions
To date there have been 40 editions of Martindale: The Complete Drug Reference. The 40th edition was published in May 2020.
See also
British National Formulary
British National Formulary for Children
References
External links
Martindale: The Complete Drug Reference, 40th Edition
Pharmacology literature
Medical manuals | Martindale: The Complete Drug Reference | [
"Chemistry"
] | 587 | [
"Pharmacology",
"Pharmacology literature"
] |
7,055,324 | https://en.wikipedia.org/wiki/Reproductive%20synchrony | Reproductive synchrony is a term used in evolutionary biology and behavioral ecology. Reproductive synchrony—sometimes termed "ovulatory synchrony"—may manifest itself as "breeding seasonality". Where females undergo regular menstruation, "menstrual synchrony" is another possible term.
Reproduction is said to be synchronised when fertile matings across a population are temporally clustered, resulting in multiple conceptions (and consequent births) within a restricted time window. In marine and other aquatic contexts, the phenomenon may be referred to as mass spawning. Mass spawning has been observed and recorded in a large number of phyla, including in coral communities within the Great Barrier Reef.
In primates, reproductive synchrony usually takes the form of conception and birth seasonality. The regulatory "clock", in this case, is the sun's position in relation to the tilt of the earth. In nocturnal or partly nocturnal primates—for example, owl monkeys—the periodicity of the moon may also come into play. Synchrony in general is for primates an important variable determining the extent of "paternity skew"—defined as the extent to which fertile matings can be monopolised by a fraction of the population of males. The greater the precision of female reproductive synchrony—the greater the number of ovulating females who must be guarded simultaneously—the harder it is for any dominant male to succeed in monopolising a harem all to himself. This is simply because, by attending to any one fertile female, the male unavoidably leaves the others at liberty to mate with his rivals. The outcome is to distribute paternity more widely across the total male population, reducing paternity skew.
Reproductive synchrony can never be perfect. On the other hand, theoretical models predict that group-living species will tend to synchronise wherever females can benefit by maximising the number of males offered chances of paternity, minimising reproductive skew. For example, the cichlid fish V. moorii spawns in the days leading up to each full moon (lunar synchrony), and broods often exhibit multiple paternity. The same models predict that female primates, including evolving humans, will tend to synchronise wherever fitness benefits can be gained by securing access to multiple males. Conversely, group-living females who need to restrict paternity to a single dominant harem-holder should assist him by avoiding synchrony.
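The predicted link between synchrony and paternity skew can be illustrated with a toy Monte Carlo model, not drawn from the cited literature: a dominant male guards exactly one fertile female per day, while any unguarded fertile female mates with a subordinate. All parameters are invented for illustration.

```python
# Toy model of the synchrony/skew argument above. Assumptions (invented):
# each fertile female-day yields one mating; the dominant male claims one
# fertile female per day; subordinates account for the rest. His paternity
# share is approximated by his share of fertile female-days.
import random

def dominant_share(n_females=10, cycle_days=100, fertile_window=5,
                   synchrony=0.0, trials=2000, seed=1):
    """synchrony=1.0: all fertile windows coincide;
    synchrony=0.0: windows scattered uniformly over the cycle."""
    rng = random.Random(seed)
    dom = total = 0
    for _ in range(trials):
        starts = [round((1 - synchrony) * rng.randrange(cycle_days))
                  for _ in range(n_females)]
        for day in range(cycle_days + fertile_window):
            fertile = sum(s <= day < s + fertile_window for s in starts)
            if fertile:
                dom += 1          # dominant male monopolises one female
                total += fertile  # all fertile female-days that day
    return dom / total

for s in (0.0, 0.5, 1.0):
    print(f"synchrony {s:.1f}: dominant male's share ~"
          f"{dominant_share(synchrony=s):.0%}")
# The share falls as synchrony rises: simultaneous fertility defeats monopoly.
```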
In the human case, evolving females with increasingly heavy childcare burdens would have done best by resisting attempts at harem-holding by locally dominant males. No human female needs a partner who will get her pregnant only to disappear, abandoning her in favour of his next sexual partner. To any local group of females, the more such philandering can be successfully resisted—and the greater the proportion of previously excluded males who can be included in the breeding system and persuaded to invest effort—the better. Hence scientists would expect reproductive synchrony—whether seasonal, lunar or a combination of the two—to be central to evolving human strategies of reproductive levelling, reducing paternity skew and culminating in the predominantly monogamous egalitarian norms illustrated by extant hunter-gatherers. Divergent climate regimes differentiating Neanderthal reproductive strategies from those of modern Homo sapiens have recently been analysed in these terms.
See also
Lunar effect
Lunar phase
Mast seeding
Menstrual cycle
Menstrual synchrony
Menstruation
Photoperiodism
Predator satiation
Season of birth
References
Ethology
Periodic phenomena
Reproduction
Synchronization
Theriogenology | Reproductive synchrony | [
"Engineering",
"Biology"
] | 764 | [
"Behavior",
"Telecommunications engineering",
"Reproduction",
"Biological interactions",
"Behavioural sciences",
"Ethology",
"Synchronization"
] |
7,056,315 | https://en.wikipedia.org/wiki/Two-dimensional%20gas | A two-dimensional gas is a collection of objects constrained to move in a planar or other two-dimensional space in a gaseous state. The objects can be: classical ideal gas elements such as rigid disks undergoing elastic collisions; elementary particles; or any ensemble of individual objects in physics that obeys laws of motion without binding interactions. The concept of a two-dimensional gas is used either because:
the issue being studied actually takes place in two dimensions (as certain surface molecular phenomena); or,
the two-dimensional form of the problem is more tractable than the analogous mathematically more complex three-dimensional problem.
While physicists have studied simple two body interactions on a plane for centuries, the attention given to the two-dimensional gas (having many bodies in motion) is a 20th-century pursuit. Applications have led to better understanding of superconductivity, gas thermodynamics, certain solid state problems and several questions in quantum mechanics.
Classical mechanics
Research at Princeton University in the early 1960s posed the question of whether the Maxwell–Boltzmann statistics and other thermodynamic laws could be derived from Newtonian laws applied to multi-body systems rather than through the conventional methods of statistical mechanics. While this question appears intractable for a closed-form solution in three dimensions, the problem behaves differently in two-dimensional space. In particular, an ideal two-dimensional gas was examined from the standpoint of relaxation time to the equilibrium velocity distribution given several arbitrary initial conditions of the ideal gas. Relaxation times were shown to be very fast: on the order of the mean free time.
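In two dimensions the equilibrium (Maxwell–Boltzmann) speed distribution toward which the gas relaxes is f(v) = (mv/kBT) exp(−mv²/2kBT). The sketch below is an independent numerical illustration of that distribution, not a reproduction of the Princeton calculation; units are chosen so that m = kB = 1.

```python
# Sample 2D velocities at equilibrium and compare the resulting speed
# histogram with the two-dimensional Maxwell-Boltzmann distribution
# f(v) = (m*v/(kB*T)) * exp(-m*v**2/(2*kB*T)), in units with m = kB = 1.
import numpy as np

T = 1.0                        # temperature
n = 100_000                    # number of sampled particles
rng = np.random.default_rng(0)

# At equilibrium the velocity components are independent Gaussians
# with variance kB*T/m.
vx, vy = rng.normal(0.0, np.sqrt(T), size=(2, n))
speeds = np.hypot(vx, vy)

hist, edges = np.histogram(speeds, bins=50, range=(0.0, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = (centers / T) * np.exp(-centers**2 / (2.0 * T))

print("max |histogram - analytic f(v)|:", np.abs(hist - analytic).max())
```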
In 1996 a computational approach was taken to the classical mechanics non-equilibrium problem of heat flow within a two-dimensional gas. This simulation work showed that for N>1500, good agreement with continuous systems is obtained.
Electron gas
While the principle of using a cyclotron to create a two-dimensional array of electrons has existed since 1934, the tool was not originally used to analyze interactions among the electrons (e.g. two-dimensional gas dynamics). An early research investigation explored cyclotron resonance behavior and the de Haas–van Alphen effect in a two-dimensional electron gas. The investigator was able to demonstrate that for a two-dimensional gas, the de Haas–van Alphen oscillation period is independent of the short-range electron interactions.
Later applications to Bose gas
In 1991 a theoretical proof was made that a Bose gas can exist in two dimensions. In the same work an experimental recommendation was made that could verify the hypothesis.
Experimental research with a molecular gas
In general, 2D molecular gases are experimentally observed on weakly interacting surfaces such as metals, graphene etc. at a non-cryogenic temperature and a low surface coverage. As a direct observation of individual molecules is not possible due to fast diffusion of molecules on a surface, experiments are either indirect (observing an interaction of a 2D gas with surroundings, e.g. condensation of a 2D gas) or integral (measuring integral properties of 2D gases, e.g. by diffraction methods).
An example of the indirect observation of a 2D gas is the study of Stranick et al. who used a scanning tunnelling microscope in ultrahigh vacuum (UHV) to image an interaction of a two-dimensional benzene gas layer in contact with a planar solid interface at 77 kelvins. The experimenters were able to observe mobile benzene molecules on the surface of Cu(111), to which a planar monomolecular film of solid benzene adhered. Thus the scientists could witness the equilibrium of the gas in contact with its solid state.
Integral methods that are able to characterize a 2D gas usually fall into a category of diffraction (see for example study of Kroger et al.). The exception is the work of Matvija et al. who used a scanning tunneling microscope to directly visualize a local time-averaged density of molecules on a surface. This method is of special importance as it provides an opportunity to probe local properties of 2D gases; for instance it enables to directly visualize a pair correlation function of a 2D molecular gas in a real space.
If the surface coverage of adsorbates is increased, a 2D liquid is formed, followed by a 2D solid. It was shown that the transition from a 2D gas to a 2D solid state can be controlled by a scanning tunneling microscope which can affect the local density of molecules via an electric field.
Implications for future research
A multiplicity of theoretical physics research directions exist for study via a two-dimensional gas, such as:
Complex quantum mechanics phenomena, whose solutions may be more appropriate in a two-dimensional environment;
Studies of phase transitions (e.g. melting phenomena at a planar surface);
Thin film phenomena such as chemical vapor deposition;
Surface excitations of a solid.
See also
Bose gas
Fermi gas
Melting point
Optical lattice
Three-body problem
References
External links
Riemann problems for a two-dimensional gas
Two-dimensional gas of disks
Gases
Non-equilibrium thermodynamics
Statistical mechanics
Surfaces | Two-dimensional gas | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,036 | [
"Matter",
"Non-equilibrium thermodynamics",
"Phases of matter",
"Statistical mechanics",
"Gases",
"Dynamical systems"
] |
7,056,318 | https://en.wikipedia.org/wiki/Calcium%20tartrate | Calcium tartrate, more precisely calcium L-tartrate, is a byproduct of the wine industry, prepared from wine fermentation dregs. It is the calcium salt of L-tartaric acid, an acid most commonly found in grapes. Its solubility decreases with lower temperature, which results in the formation of whitish (in red wine often reddish) crystalline clusters as it precipitates. As E number E354, it finds use as a food preservative and acidity regulator. Like tartaric acid, calcium tartrate has two asymmetric carbons, hence it has two chiral isomers and a non-chiral isomer (meso-form). Most calcium tartrate of biological origin is the chiral levorotatory (–) isomer.
References
Calcium compounds
Tartrates
Preservatives
E-number additives | Calcium tartrate | [
"Chemistry"
] | 189 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
7,056,912 | https://en.wikipedia.org/wiki/Clootie%20well | A clootie well is a holy well (or sacred spring), almost always with a tree growing beside it, where small strips of cloth or ribbons are left as part of a healing ritual, usually by tying them to branches of the tree (called a clootie tree or rag tree). Clootie wells are places of pilgrimage usually found in Celtic areas. It is believed the tradition comes from the ancient custom of leaving votive offerings in water. In Scots, a clootie or cloot is a strip of cloth or rag.
Practices
When used at the clootie wells in Scotland, Ireland, and the Isle of Man, the pieces of cloth are generally dipped in the water of the holy well and then tied to a branch while a prayer of supplication is said to the spirit of the well – in modern times usually a saint, but in pre-Christian times a goddess or local nature spirit. This is most often done by those seeking healing, though some may do it simply to honour the spirit of the well. In either case, many see this as a probable continuation of the ancient Celtic practice of leaving votive offerings in wells or pits.
There are local variations to the practice. At some wells the tradition is to wash the affected part of the body with the wet rag and then tie the washing-rag on the branch; as the rag disintegrates over time, the ailment is supposed to fade away as well. At some wells the clooties are definitely "rags" and discards, at others, brightly coloured strips of fine cloth. In some locations the ceremony may also include circumambulation (or circling) of the well a set number of times and making an offering of a coin, pin or stone. Additional votive offerings hung on the branches or deposited in the wells may include rosaries, religious medals, crosses, religious icons and other symbols of faith.
At clootie wells where the operative principle is to shed the ailment, and the clootie is thought to represent the ailment, the "offerings" may be grotesque castoffs. Those that instead view the clootie as an offering to the spirit, saint or deity are more likely to tie an attractive, clean piece of cloth or ribbon.
The sacred trees at clootie wells are usually hawthorn trees, though ash trees are also common.
The most popular times for pilgrimages to clootie wells, like other holy wells, are on the feast days of Saints, the Pattern or Patron day, or on the old Gaelic festival days of Imbolc (1 February), Beltane (1 May), Lughnasadh (1 August), or Samhain (1 November).
Locations
In Scotland, by the village of Munlochy on the A832, is a clootie well (called in ) at an ancient spring dedicated to Saint Curetán, where rags are still hung on the surrounding bushes and trees. Here the well was once thought to have had the power to cure sick children who were left there overnight. The site sometimes needs to be cleared of non-biodegradable materials and rubbish such as electrical items and a venetian blind.
In the heart of Culloden woods near the battlefield is a walled clootie well also known as St Mary's well. This well was traditionally visited on the first Sunday in May. Until recently, it was a popular holiday, with an ice-cream van situated in the car park. However, this tradition is now in decline although still marked. Craigie Well at Avoch on the Black Isle has both offerings of coins and clooties. Rags, wool and human hair were also used as charms against sorcery, and as tokens of penance or fulfilment of a vow. A clootie well once existed at Kilallan near Kilmacolm in Renfrewshire. This holy well was dedicated to St Fillan and cloth was tied to overhanging shrub branches.
In Cornwall, at Madron Well () the practice is to tie the cloth and as it rots the ailment is believed to disappear. In 1894 Madron Well was said to be the only Cornish well where rags were traditionally tied. Rags have only appeared at other Cornish wells such as Alsia Well () and Sancreed Well () in about the last 30 years. Christ's Well at Mentieth was described in 1618 "as all tapestried about with old rags".
In Ireland at Loughcrew, Oldcastle, County Meath () there is a wishing tree, where visitors to the passage tombs tie ribbons to the branch of a hawthorn tree. Loughcrew is a site of considerable historical importance in Ireland. It is the site of megalithic burial grounds dating back to approximately 3500 and 3300 BC, situated near the summit of Sliabh na Caillí and on surrounding hills and valleys.
Popular culture
In 2002, the folklorist Marion Bowman observed that the number of clootie wells had "increased markedly" both at existing and new locations in recent years. She added that those engaged in the practice often conceived of it as an ancient "Celtic" activity which they were perpetuating.
A fictional clootie well at Auchterarder features in the 2006 novel The Naming of the Dead by Ian Rankin, who visited the clootie well at Munlochy on Black Isle before writing the book.
The 2018 film The Party's Just Beginning, written and directed by Inverness-born filmmaker Karen Gillan, features the Munlochy clootie well.
See also
Culloden, Scotland
Sacred grove
Well dressing
Wilweorthunga
Wish tree
Nuragic holy well
References
Bibliography
External links
The Clootie Well, Munlochy
Pictures of the Clootie Well, Munlochy
Ireland – Rag Trees
Irish Holy Wells – some with rags and ribbons
A mention of the Clootie Well of St Curidan (Scotland)
Doon Well, a renowned Holy well in Co. Donegal
Irish Landmarks: The Holy Wells of Ireland
The Megalithic Portal – includes holy wells and sacred springs
Video footage of Saint Queran's Clootie Well.
Archaeological artefact types
Celtic mythology
Pilgrimage sites
Rituals
Springs (hydrology)
Traditional medicine
Votive offering
Christian holy places | Clootie well | [
"Environmental_science"
] | 1,299 | [
"Hydrology",
"Springs (hydrology)"
] |
7,057,218 | https://en.wikipedia.org/wiki/Entoloma%20sinuatum | Entoloma sinuatum (commonly known as the livid entoloma, livid agaric, livid pinkgill, leaden entoloma, and lead poisoner) is a poisonous mushroom found across Europe and North America. Some guidebooks refer to it by its older scientific names of Entoloma lividum or Rhodophyllus sinuatus. The largest mushroom of the genus of pink-spored fungi known as Entoloma, it is also the type species. Appearing in late summer and autumn, fruit bodies are found in deciduous woodlands on clay or chalky soils, or nearby parklands, sometimes in the form of fairy rings. Solid in shape, they resemble members of the genus Tricholoma. The ivory to light grey-brown cap is up to across with a margin that is rolled inward. The sinuate gills are pale and often yellowish, becoming pink as the spores develop. The thick whitish stem has no ring.
When young, it may be mistaken for the edible St George's mushroom (Calocybe gambosa) or the miller (Clitopilus prunulus). It has been responsible for many cases of mushroom poisoning in Europe. E. sinuatum causes primarily gastrointestinal problems that, though not generally life-threatening, have been described as highly unpleasant. Delirium and depression are uncommon sequelae. It is generally not considered to be lethal, although one source has reported deaths from the consumption of this mushroom.
Name and relationships
The saga of this species' name begins in 1788 with the publication of part 8 of Jean Baptiste Bulliard's Herbier de la France. In it was plate 382, representing a mushroom which he called Agaricus lividus. In 1872, Lucien Quélet took up a species which he called "Entoloma lividus Bull."; although all subsequent authors agree that this is a fairly clear reference to Bulliard's name, Quélet gave a description that is generally considered to be that of a different species from Bulliard's. In the meantime, 1801 had seen the description of Agaricus sinuatus by Christian Persoon in his Synopsis Methodica Fungorum. He based that name on another plate (number 579) published in the last part of Bulliard's work, and which the latter had labelled "agaric sinué". German mycologist Paul Kummer reclassified it as Entoloma sinuatum in 1871.
For many years Quélet's name and description were treated as valid because Bulliard's name antedated Persoon's. However, in 1950, a change in the International Code of Botanical Nomenclature (termed the Stockholm Code, after the city where the International Botanical Congress was being held) caused only names of fungi published after 1801 or 1821 (depending on their type) to be valid. This meant that suddenly Bulliard's name was no longer a valid name, and now it was Persoon's name that had priority. Nonetheless, it was a well-known name, and the already chaotic situation caused by a change to a famous Latin name was further complicated by another of Quélet's suggestions. He had in 1886 proposed a new, broader genus that included all pink-gilled fungi with adnate or sinuate gills and angular spores: Rhodophyllus. These two approaches to genus placement, using either Rhodophyllus or Entoloma, coexisted for many decades, with mycologists and guidebooks following either; Henri Romagnesi, who studied the genus for over forty years, favoured Rhodophyllus, as initially did Rolf Singer. However, most other authorities have tended to favor Entoloma, and Singer conceded the name was far more widely used and adopted it for his Agaricales in Modern Taxonomy text in 1986.
In the meantime, it had been widely accepted that the 1950 change to the Stockholm Code caused more problems than it solved, and in 1981, the Sydney Code reinstated the validity of pre-1801 names, but created the status of sanctioned name for those used in the foundational works of Persoon and Elias Magnus Fries. Thus Entoloma sinuatum, which Fries had sanctioned, still had to be used for the species described by Quélet even though Bulliard's name was the older one. At about the same time, Machiel Noordeloos re-examined Bulliard's name in more detail, and discovered that not only was it illegitimate (and thus not available for use) because William Hudson had already used it ten years earlier for a different species, but Bulliard's illustration was clearly not an Entoloma, but a species of Pluteus, a genus that is only distantly related to Entoloma. As this made Quélet's name definitely unusable for the Entoloma, and because at the time he and Romagnesi believed there were grounds to treat Quélet's "E. lividum" and Persoon's E. sinuatum as separate species, he had to coin a third name for Quélet's species: Entoloma eulividum. However, he later changed his mind on this issue, combining again his own Entoloma eulividum and E. sinuatum, so that Persoon's name is now universally recognised. Because it was previously widely used and Quélet had provided a good description and illustration (which, the proposer argued, was better considered as a new species rather than a mere placement of Bulliard's name in another genus), a proposal was made in 1999 to conserve Entoloma lividum and thus restore its use. However, it failed because E. sinuatum had already been in use (if not universally) for many years and was thus a well-known name for the species.
The specific epithet sinuatum is the Latin for "wavy", referring to the shape of the cap, while the generic name is derived from the Ancient Greek words entos/ἐντός "inner" and lóma/λῶμα "fringe" or "hem" from the inrolled margin. The specific epithet lividum was derived from the Latin word līvǐdus "lead-coloured". The various common names include livid entoloma, livid agaric, livid pinkgill, leaden entoloma, lead poisoner, and grey pinkgill. In the Dijon region of France it was known as le grand empoisonneur de la Côte-d'Or ("the great poisoner of Côte d'Or"). Quélet himself, who was poisoned by the fungus, called it the miller's purge, akin to another common name of false miller.
Within the large genus Entoloma, which contains more than 1900 species, E. sinuatum has been classically placed in the section Entoloma within the subgenus Entoloma, as it is the type species of the genus. A 2009 study analyzing DNA sequences and spore morphology found it to lie in a rhodopolioid clade with (among other species) E. sordidulum, E. politum and E. rhodopolium, and most closely related to E. sp. 1. This rhodopolioid clade lay within a crown Entoloma clade.
Description
The largest member of its genus, Entoloma sinuatum has an imposing epigeous (aboveground) fruiting body (basidiocarp), bearing a cap 6–20 cm (–6 in) wide, though diameters of have been recorded. It is convex to flat, often with a blunt umbo in its centre and wavy margins, ivory white to light grey-brown in color, and darkening with age. The distant gills are sinuate (notched at their point of attachment to the stipe) to almost free, generally (but not always) yellowish white before darkening to pink and then red. Interspersed between the gills are lamellulae (short gills that do not extend completely from the cap margin to the stipe). When viewed from beneath, a characteristic groove colloquially known as a "moat" can be seen in the gill pattern circumnavigating the stalk. The form lacking yellow color on the gills is rare but widespread, and has been recorded from Austria, France and the Netherlands.
The stout white stipe lacks a ring and is anywhere from high, and in diameter. It may be bulbous at the base. The taste is mild, although it may be unpleasant. The mushroom's strong and unusual odor can be hard to describe; it may smell of flour, though is often unpleasant and rancid. The spore print is reddish-brown, with angular spores 8–11 × 7–9.5 μm, roughly six-sided and globular in shape. The basidia are four-spored and clamped. The gill edge is fertile, and cystidia are absent.
Similar species
Confusion with the highly regarded miller or sweetbread mushroom (Clitopilus prunulus) is a common cause of poisoning in France; the latter fungus has a greyish-white downy cap and whitish decurrent gills which turn pink with maturity. Young fruit bodies of Entoloma sinuatum can also be confused with St George's mushroom (Calocybe gambosa), although the gills of the latter are crowded and cream in color, and the clouded agaric (Clitocybe nebularis), which has whitish decurrent gills and an unusual odor variously described as starchy or rancid. To complicate matters, it often grows near these edible species. Its overall size and shape resemble members of the genus Tricholoma, although the spore color (white in Tricholoma, pinkish in Entoloma) and shape (angular in Entoloma) help distinguish it. The rare and edible all-white dovelike tricholoma (T. columbetta) has a satiny cap and stem and a faint, not mealy, odor. E. sinuatum may be confused with Clitocybe multiceps in the Pacific Northwest of North America, although the latter has white spores and generally grows in clumps. A casual observer may mistake it for an edible field mushroom (Agaricus campestris), but this species has a ring on the stipe, pink gills that become chocolate-brown in maturity, and a dark brown spore print. The poorly known North American species E. albidum resembles E. sinuatum but is likewise poisonous.
Distribution and habitat
Entoloma sinuatum is fairly common and widespread across North America as far south as Arizona. It also occurs throughout Europe, including Ireland and Britain, though it is more common in southern and central parts of Europe than the northwest. In Asia, it has been recorded in the Black Sea region, the Adıyaman Province in Turkey, Iran, and northern Yunnan in China.
The fruit bodies of E. sinuatum grow solitarily or in groups, and have been found forming fairy rings. Fruit bodies appear mainly in autumn, and also in summer in North America, while in Europe the season is reported as late summer and autumn. They are found in deciduous woodlands under oak, beech, and less commonly birch, often on clay or calcareous (chalky) soils, but they may spread into parks, fields and grassy areas nearby. Most members of the genus are saprotrophic, although this species has been recorded as forming an ectomycorrhizal relationship with willow (Salix).
Toxicity
This fungus has been cited as being responsible for 10% of all mushroom poisonings in Europe. For example, 70 people required hospital treatment in Geneva alone in 1983, and the fungus accounted for 33 of 145 cases of mushroom poisoning in a five-year period at a single hospital in Parma. Poisoning is said to be mainly gastrointestinal in nature; symptoms of diarrhea, vomiting and headache occur 30 minutes to 2 hours after consumption and last for up to 48 hours. Acute liver toxicity and psychiatric symptoms like mood disturbance or delirium may occur. Rarely, symptoms of depression may last for months. At least one source reports there have been fatalities in adults and children. Hospital treatment of poisoning by this mushroom is usually supportive; antispasmodic medicines may lessen colicky abdominal cramps and activated charcoal may be administered early on to bind residual toxin. Intravenous fluids may be required if dehydration has been extensive, especially with children and the elderly. Metoclopramide may be used in cases of recurrent vomiting once gastric contents are emptied. The identity of the toxin(s) is unknown, but chemical analysis has established that there are alkaloids present in the mushroom.
A study of trace elements in mushrooms in the eastern Black Sea Region of Turkey found E. sinuatum to have the highest levels of copper (64.8 ± 5.9 μg/g dried material—insufficient to be toxic) and zinc (198 μg/g) recorded. Caps and stalks tested in an area with high levels of mercury in southeastern Poland showed it to bioaccumulate much higher levels of mercury than other fungi. The element was also found in high levels in the humus-rich substrate. Entoloma sinuatum also accumulates arsenic-containing compounds. Of the roughly 40 μg of arsenic present per gram of fresh mushroom tissue, about 8% was arsenite and the other 92% was arsenate.
See also
List of deadly fungi
List of Entoloma species
Footnotes
References
Cited texts
Entolomataceae
Fungi described in 1801
Fungi of Asia
Fungi of Europe
Fungi of North America
Poisonous fungi
Taxa named by Christiaan Hendrik Persoon
Fungus species | Entoloma sinuatum | [
"Biology",
"Environmental_science"
] | 2,901 | [
"Poisonous fungi",
"Fungi",
"Toxicology",
"Fungus species"
] |
7,057,338 | https://en.wikipedia.org/wiki/Computer%20Press%20Association | Founded in 1983, the Computer Press Association (CPA) was established to promote excellence in the field of computer journalism. The association was composed of working editors, writers, producers, and freelancers who covered issues related to computers and technology. The CPA conducted the annual Computer Press Awards, the preeminent editorial awards of the computer and technology media. The CPA Awards honored outstanding examples of work in print, broadcast and electronic media. Awards were given for print publications, such as PC Magazine; online news media, such as Newsbytes News Network (both were multiple winners); individual columns and features by well-known journalists such as Steven Levy (author of “Hackers: Heroes of the Computer Revolution”); broadcast awards such as “Best Radio Program”; as well as book awards in categories such as Best Product Specific Book. CPA President Jeff Yablon (1994-1996) developed an updated code of ethics for technology journalists that was adopted by many major trade show groups, most notably Bruno Blenheim. The Computer Press Association disbanded in 2000.
Individuals winning multiple Computer Press Association awards include:
Wendy Woods Gorski (3 times)
John C. Dvorak (8 times)
Woody Leonhard (8 times)
Deke McClelland (7 times)
Brock N. Meeks (4 times)
Ed Bott (3 times)
Danny Goodman (3 times)
Linda Rohrbough (3 times)
Mary McFall Axelson (2 times)
David D. Busch (2 times)
Jonathan Littman (2 times)
David Pogue (2 times)
Ed Scannell (2 times)
Neil J. Salkind (2 times)
References
History of computing | Computer Press Association | [
"Technology"
] | 344 | [
"Computers",
"History of computing"
] |
7,057,416 | https://en.wikipedia.org/wiki/S-Adenosyl-L-homocysteine |
S-Adenosyl-L-homocysteine (SAH) is the biosynthetic precursor to homocysteine. SAH is formed by the demethylation of S-adenosyl-L-methionine. Adenosylhomocysteinase converts SAH into homocysteine and adenosine.
Biological role
DNA methyltransferases are inhibited by SAH. Two S-adenosyl-L-homocysteine cofactor products can bind the active site of DNA methyltransferase 3B and prevent the DNA duplex from binding to the active site, which inhibits DNA methylation.
References
External links
BioCYC E.Coli K-12 Compound: S-adenosyl-L-homocysteine
Nucleosides
Purines
Alpha-Amino acids
Amino acid derivatives | S-Adenosyl-L-homocysteine | [
"Chemistry",
"Biology"
] | 202 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
7,057,536 | https://en.wikipedia.org/wiki/Eva%20Nogales | Eva Nogales (born in Colmenar Viejo, Madrid, Spain) is a Spanish-American biophysicist at the Lawrence Berkeley National Laboratory and a professor at the University of California, Berkeley, where she served as head of the Division of Biochemistry, Biophysics and Structural Biology of the Department of Molecular and Cell Biology (2015–2020). She is a Howard Hughes Medical Institute investigator.
Nogales is a pioneer in using electron microscopy for the structural and functional characterization of macromolecular complexes. She used electron crystallography to obtain the first structure of tubulin and identify the binding site of the important anti-cancer drug taxol. She is a leader in combining cryo-EM, computational image analysis and biochemical assays to gain insights into function and regulation of biological complexes and molecular machines. Her work has uncovered aspects of cellular function that are relevant to the treatment of cancer and other diseases.
Early life and education
Eva Nogales obtained her BS degree in physics from the Autonomous University of Madrid in 1988. She later earned her PhD from the University of Keele in 1992 while working at the Synchrotron Radiation Source under the supervision of Joan Bordas.
Career
During her post-doctoral work in the laboratory of Ken Downing at the Lawrence Berkeley National Laboratory, Eva Nogales was the first to determine the atomic structure of tubulin and the location of the taxol-binding site by electron crystallography. She became an assistant professor in the Department of Molecular and Cell Biology at the University of California, Berkeley in 1998. In 2000 she became an investigator in the Howard Hughes Medical Institute. As cryo-EM techniques became more powerful, she became a leader in applying cryo-EM to the study of microtubule structure and function and other large macromolecular assemblies such as eukaryotic transcription and translation initiation complexes, the polycomb complex PRC2, and telomerase.
Selected publications
Awards
2000: investigator, Howard Hughes Medical Institute
2005: Early Career Life Scientist Award, American Society for Cell Biology
2006: Chabot Science Award for Excellence
2015: Dorothy Crowfoot Hodgkin Award, Protein Society
2015: Elected as a member of the US National Academy of Sciences
2016: Elected to the American Academy of Arts and Sciences
2018: Women in Cell Biology Award (Senior), American Society for Cell Biology
2019: Grimwade Medal for Biochemistry
2021: AAAS Fellows Award
2023: Shaw Prize in Life Sciences
Personal life
Nogales is married to Howard Padmore and they have two children.
References
External links
Molecules in motion
Nogales lab
Howard Hughes Medical Investigators
Living people
Molecular biologists
Spanish biophysicists
Women biophysicists
Biophysicists
Spanish emigrants to the United States
University of California, Berkeley faculty
American women biologists
21st-century American women scientists
Members of the United States National Academy of Sciences
Alumni of Keele University
Autonomous University of Madrid alumni
Structural biologists
1965 births | Eva Nogales | [
"Chemistry"
] | 600 | [
"Structural biologists",
"Structural biology",
"Molecular biology",
"Biochemists",
"Molecular biologists"
] |
7,057,811 | https://en.wikipedia.org/wiki/Microcontact%20printing | Microcontact printing (or μCP) is a form of soft lithography that uses the relief patterns on a master polydimethylsiloxane (PDMS) stamp or urethane rubber micro stamp to form patterns of self-assembled monolayers (SAMs) of ink on the surface of a substrate through conformal contact, as in the case of nanotransfer printing (nTP). Its applications are wide-ranging, including microelectronics, surface chemistry and cell biology.
History
Both lithography and stamp printing have been around for centuries. However, the combination of the two gave rise to the method of microcontact printing. The method was first introduced by George M. Whitesides and Amit Kumar at Harvard University. Since its inception many methods of soft lithography have been explored.
Procedure
Preparing the master
Creation of the master, or template, is done using traditional photolithography techniques. The master is typically created on silicon, but can be done on any solid patterned surface. Photoresist is applied to the surface and patterned by a photomask and UV light. The master is then baked, developed and cleaned before use. In typical processes the photoresist is usually kept on the wafer to be used as a topographic template for the stamp. However, the unprotected silicon regions can be etched, and the photoresist stripped, which would leave behind a patterned wafer for creating the stamp. This method is more complex but creates a more stable template.
Creating the PDMS stamp
After fabrication the master is placed in a walled container, typically a petri dish, and the stamp is poured over the master.
The PDMS stamp, in most applications, is made from a 10:1 ratio of silicone elastomer base to silicone elastomer curing agent. This mixture consists of a short hydrosilane crosslinker that contains a catalyst made from a platinum complex. After pouring, the PDMS is cured at elevated temperatures to create a solid polymer with elastomeric properties. The stamp is then peeled off and cut to the proper size. The stamp replicates the opposite of the master: elevated regions of the stamp correspond to indented regions of the master.
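Since the 10:1 ratio is commonly quoted by weight, it translates directly into component masses for a batch; the trivial helper below, included purely for illustration, makes the weighing explicit.

```python
# Helper for weighing out a PDMS batch at the standard 10:1 mass ratio
# of base elastomer to curing agent (the ratio is an adjustable input).

def pdms_batch(total_g: float, ratio: float = 10.0) -> tuple[float, float]:
    """Return (base_g, curing_agent_g) for a batch totalling total_g grams."""
    curing_agent = total_g / (ratio + 1.0)
    return total_g - curing_agent, curing_agent

base, cure = pdms_batch(33.0)
print(f"base: {base:.1f} g, curing agent: {cure:.1f} g")  # 30.0 g and 3.0 g
```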
Some commercial services for procuring PDMS stamps and micropatterned samples exist such as Research Micro Stamps.
Inking the stamp
Inking of the stamp occurs through the application of a thiol solution, either by immersion or by coating the stamp with a Q-tip. The highly hydrophobic PDMS material allows the ink to diffuse into the bulk of the stamp, which means the thiols reside not only on the surface but also in the bulk of the stamp material. This diffusion into the bulk creates an ink reservoir for multiple prints. The stamp is left to dry until no liquid is visible, at which point the ink reservoir has formed.
Applying the stamp to the substrate
Direct contact
Applying the stamp to the substrate is straightforward, which is one of the main advantages of this process. The stamp is brought into physical contact with the substrate and the thiol solution is transferred to the substrate. The thiol is area-selectively transferred to the surface based on the features of the stamp. During the transfer the carbon chains of the thiols align with each other to create a hydrophobic self-assembled monolayer (SAM).
Other application techniques
Printing of the stamp onto the substrate, although not used as often, can also take place with a rolling stamp onto a planar substrate or a curved substrate with a planar stamp.
Advantages
Microcontact Printing has several advantages including:
The simplicity and ease of creating patterns with micro-scale features
Can be done in a traditional laboratory without the constant use of a cleanroom (cleanroom is needed only to create the master).
Multiple stamps can be created from a single master
Individual stamps can be used several times with minimal degradation of performance
A cheaper technique for fabrication that uses less energy than conventional techniques
Some materials have no other micro patterning method available
Disadvantages
After this technique became popular various limitations and problems arose, all of which affected patterning and reproducibility.
Stamp Deformation
During direct contact one must be careful because the stamp can easily be physically deformed causing printed features that are different from the original stamp features. Horizontally stretching or compressing the stamp will cause deformations in the raised and recessed features. Also, applying too much vertical pressure on the stamp during printing can cause the raised relief features to flatten against the substrate. These deformations can yield submicron features even though the original stamp has a lower resolution.
Deformation of the stamp can occur during removal from the master and during the substrate contacting process. When the aspect ratio of the stamp is high buckling of the stamp can occur. When the aspect ratio is low roof collapse can occur.
Substrate contamination
During the curing process some fragments can potentially be left uncured and contaminate the process. When this occurs the quality of the printed SAM is decreased. When the ink molecules contain certain polar groups the transfer of these impurities is increased.
Shrinking/swelling of the stamp
During the curing process the stamp can potentially shrink in size leaving a difference in desired dimensions of the substrate patterning.
Swelling of the stamp may also occur. Most organic solvents induce swelling of the PDMS stamp. Ethanol in particular has a very small swelling effect, but many other solvents cannot be used for wet inking because of high swelling. Because of this the process is limited to apolar inks that are soluble in ethanol.
Ink mobility
Ink diffusion from the PDMS bulk to the surface occurs during the formation of the patterned SAM on the substrate. This mobility of the ink can cause lateral spreading to unwanted regions. Upon the transfer this spreading can influence the desired pattern.
Applications
Depending on the type of ink used and the subsequent substrate the microcontact printing technique has many different applications
Micromachining
Microcontact printing has great applications in micromachining. For this application inking solutions commonly consist of a solution of alkanethiol. This method uses metal substrates with the most common metal being gold. However, silver, copper, and palladium have been proven to work as well.
Once the ink has been applied to the substrate the SAM layer acts as a resist to common wet etching techniques allowing for the creation of high resolution patterning. The patterned SAMs layer is a step in a series of steps to create complex microstructures. For example, applying the SAM layer on top of gold and etching creates microstructures of gold. After this step etched areas of gold exposes the substrate which can further be etched using traditional anisotropic etch techniques. Because of the microcontact printing technique no traditional photolithography is needed to accomplish these steps.
Patterning proteins
The patterning of proteins has helped the advancement of biosensors., cell biology research, and tissue engineering. Various proteins have been proven to be suitable inks and are applied to various substrates using the microcontact printing technique. Polylysine, immunoglobulin antibody, and different enzymes have been successfully placed onto surfaces including glass, polystyrene, and hydrophobic silicon.
Patterning cells
Microcontact printing has been used to advance the understanding of how cells interact with substrates. This technique has helped improve the study of cell patterning that was not possible with traditional cell culture techniques.
Patterning DNA
Successful patterning of DNA has also been done using this technique. The reduction in time and DNA material are the critical advantages for using this technique. The stamps were able to be used multiple times that were more homogeneous and sensitive than other techniques.
Making Microchambers
To learn about micro organisms, scientists need adaptable ways to capture and record the behavior of motile single-celled organisms across a diverse range of species. PDMS stamps can mold growth material into micro chambers that then capture single-celled organisms for imaging.
Technique improvements
To help overcome the limitations set by the original technique several alternatives have been developed.
High-Speed printing: Successful contact printing was done on a gold substrate with a contact time in the range of milliseconds. This printing time is three orders of magnitude shorter than the normal technique, yet successfully transformed the pattern. The process of contact was automated to achieve these speeds through a piezoelectric actuator. At these low contact times the surface spreading of thiol did not occur, greatly improving the pattern uniformity
Submerged Printing: By submerging the stamp in a liquid medium stability was greatly increased. By printing hydrophobic long-chain thiols underwater the common problem of vapor transport of the ink is greatly reduced. PDMS aspect ratios of 15:1 were achieved using this method, which was not accomplished before
Lift-off Nanocontact printing: By first using Silicon lift-off stamps and later low cost polymer lift-off stamps and contacting these with an inked flat PDMS stamp, nanopatterns of multiple proteins or of complex digital nanodot gradients with dot spacing ranging from 0 nm to 15 um apart were achieved for immunoassays and cell assays. Implementation of this approach led to the patterning of a 100 digital nanodot gradient array, composed of more than 57 million protein dots 200 nm in diameter printed in 10 minutes in a 35 mm2 area.
Contact Inking: as opposed to wet inking this technique does not permeate the PDMS bulk. The ink molecules only contact the protruding areas of the stamp that are going to be used for the patterning. The absence of ink on the rest of the stamp reduces the amount of ink transferred through the vapor phase that can potentially affect the pattern. This is done by the direct contact of a feature stamp and a flat PDMS substrate that has ink on it.
New Stamp Materials: To create uniform transfer of the ink the stamp needs to be both mechanically stable and also be able to create conformal contact well. These two characteristics are juxtaposed because high stability requires a high Young's modulus while efficient contact requires an increase in elasticity. A composite, thin PDMS stamp with a rigid back support has been used for patterning to help solve this problem.
Magnetic field assisted micro contact printing: to apply a homogeneous pressure during the printing step, a magnetic force is used. For that, the stamp is sensitive to a magnetic field by injecting iron powder into a second layer of PDMS. This force can be adjusted for nano and micro-patterns [13][12][12][12].
Multiplexing : the macrostamp: the main drawback of microcontact printing for biomedical application is that it is not possible to print different molecules with one stamp. To print different (bio)molecules in one step, a new concept is proposed : the macrostamp. It is a stamp composed of dots. The space between the dots corresponds to the space between the wells of a microplate. Then, it is possible to ink, dry and print in one step different molecules.
General references
www.microcontactprinting.net : a website dealing with microcontact printing (articles, patents, thesis, tips, education, ...)
www.researchmicrostamps.com: a service that provides micro stamps via simple online sales.
Footnotes
Lithography (microfabrication) | Microcontact printing | [
"Materials_science"
] | 2,339 | [
"Nanotechnology",
"Microtechnology",
"Lithography (microfabrication)"
] |
7,057,945 | https://en.wikipedia.org/wiki/Positive%20deviance | Positive deviance (PD) is an approach to behavioral and social change. It is based on the idea that, within a community, some individuals engage in unusual behaviors allowing them to solve problems better than others who face similar challenges, despite not having additional resources or knowledge. These individuals are referred to as positive deviants.
The concept first appeared in nutrition research in the 1970s. Researchers observed that, despite the poverty in a community, some families had well-nourished children. Some suggested using information gathered from these outliers to plan nutrition programs.
Principles
Positive deviance is a strength-based approach applicable to problems requiring behavior and social change. It is based on the following principles:
Communities already have the solutions; they are the best experts in solving their problems.
Communities self-organize and are equipped with the human resources and social assets to solve agreed-upon problems.
Collective intelligence. Intelligence and know-how are not concentrated in the leadership of a community alone or in external experts but are distributed throughout the community. Thus, the PD process aims to elicit collective intelligence to apply it to specific problems requiring behavior or social change.
Sustainability is the cornerstone of the approach. The PD approach enables the community or organization to seek and discover sustainable solutions to a given problem because the demonstrably successful uncommon behaviors are already practiced in that community within the constraints and challenges of the current situation.
It is easier to change behavior by practicing it rather than knowing about it. "It is easier to act your way into a new way of thinking than think your way into a new way of acting".
Original application
The PD approach was first operationalized and applied in programming in the field by Jerry and Monique Sternin through their work with Save the Children in Vietnam in the 1990s (Tuhus-Dubrow, Sternin, Sternin and Pascale).
At the start of the pilot, 64% of children weighed in the pilot villages were malnourished. Through a PD inquiry, the villagers found poor peers in the community that, through their uncommon but successful strategies, had well-nourished children. These families collected foods typically considered inappropriate for children (e.g., sweet potato greens, shrimp, and crabs), washed their children's hands before meals, and actively fed them three to four times a day instead of the typical two meals a day provided to children.
Unknowingly, PDs had incorporated foods already found in their community that provided essential nutrients: protein, iron, and calcium. A nutrition program based on these insights was created. Instead of simply telling participants what to do differently, they designed the program to help them act their way into a new way of thinking. Parents were required to bring one of the newly identified foods to attend a feeding session. They brought their children and, while sharing nutritious meals, learned to cook the new foods.
At the end of the two-year pilot, malnutrition fell by 85%. Results were sustained and transferred to the participants' younger siblings.
This approach to programming was different in important ways.
It is always appropriate, as it operates within the assets of a community, and it, therefore, caters to its specific cultural context, e.g., village, business, schools, ministry, department, or hospital. Additionally, by seeing that certain members of their community are already engaging in an uncommon behavior, others are more likely to adopt it themselves, as this serves as "social proof" that the behavior is acceptable for everyone within the community. Furthermore, the solutions stem from the community, avoiding thus the "immune response" that can occur when outside experts enter a community with best practices that are often unsuccessful in promoting sustained change. (Sternin)
Since it was first applied in Vietnam, PD has been used to inform nutrition programs in over 40 countries by USAID, World Vision, Mercy Corps, Save the Children, CARE, Plan International, Indonesian Ministry of Health, Peace Corps, Food for the Hungry, among others.
Steps
A positive deviance approach may follow a series of steps.
An invitation to change
A PD inquiry begins with an invitation from a community that wishes to address a significant issue they face. This is crucial, as it is the community that acquires ownership of the process.
Defining the problem
The definition of the problem is carried out by and for the community. This will often lead to a problem definition that differs from the outside "expert" opinion of the situation.
The community establishes a quantitative baseline, allowing it to reflect on the problem given the evidence at hand and measure the progress toward its goals.
This is also the beginning of the process of identifying stakeholders and decision-makers involved. Additional stakeholders and decision-makers will be pulled in throughout the process as they are identified.
Determining the presence of PD individuals or groups
Using data and observation, the community can identify the positive deviants in their midst.
Discovering uncommon practices or behaviors
The Positive Deviance Inquiry aims to discover uncommon practices or behaviors. The community, having identified positive deviants, sets out to find the behaviors, attitudes, or beliefs that allow the PD to be successful. The focus is on the successful strategies of the PD, not on making a hero of the person using the strategy. This self-discovery of people/groups just like them who have found successful solutions provides "social proof" that this problem can be overcome now, without outside resources.
Program design
After identifying successful strategies, the community decides which strategies they would like to adopt, and they design activities to help others access and practice these uncommon and other beneficial strategies. Program design is not focused on spreading "best practices" but on helping community members "act their way into a new way of thinking" through hands-on activities.
Monitoring and evaluation
PD-informed projects are monitored and evaluated through a participatory process. As the community decides on and performs the monitoring, the tools they create will be appropriate to the setting. Even illiterate community members can participate through pictorial monitoring forms or other appropriate tools.
Evaluation allows the community to track their progress toward their goals and reinforces the changes they are making in behaviors, attitudes, and beliefs.
Scaling up
The scaling up of a PD project may happen through many mechanisms: the "ripple effect" of other communities observing the success and engaging in a PD project of their own, through the coordination of NGOs, or organizational development consultants. Irrespective of the mechanism employed, the community discovery process of PDs in their midst remains vital to the acceptance of new behaviors, attitudes, and knowledge.
Applications
Preventing hospital-acquired infections
The PD approach has been applied in hospitals in the United States, Brazil, Canada, Mexico, Colombia, and England to stop the spread of hospital-acquired infections such as Clostridioides difficile and Methicillin-resistant Staphylococcus aureus (MRSA). The Centers for Disease Control and Prevention (CDC) evaluated pilot programs in the U.S. and found units using the approach decreased their infections by 30-73%.
Additionally, it has been used in healthcare settings by increasing the incidence of hand washing and improving care for patients immediately after a heart attack.
Primary care (Bright Spotting)
Termed "Bright Spotting", instead of positive deviance, the primary care pilot initiative first took place in rural New Hampshire and is still ongoing. The outpatient clinic identified a complex patient population, from the clinic's perspective, studied the risk factors of that population, then identified measures that would signify that a patient has become healthy and sustained health. Once these measures were identified (using both data and the practices' knowledge of the patients), "Bright Spots" were identified as those that meet both high-risk criteria and achieved health. Finding positive deviant patients through predictive analytics has also be suggested as a possible tool in discovery. Once these patients were identified the care team performed qualitative research to discover their patterns of behavior. The results were then shown to the bright spots and their families who then designed a peer learning experience with the results in mind. The community meetings were then facilitated using both positive deviance facilitation techniques as well as applying the "Citizen Health Care Model", which is very similar to positive deviance approaches.
Public health
A PD project helped prisoners in a New South Wales prison stop smoking. Projects in Burkino Faso, Guatemala, Ivory Coast, and Rwanda addressed reproductive health in adolescents. PD maternal and newborn health projects in Myanmar, Pakistan, Egypt, and India have improved women's access to prenatal care, delivery preparation, and antenatal care for mothers and babies.
PD projects to prevent the spread of HIV/AIDS took place in 2002 with motorbike taxi drivers in Vietnam, and in 2004 with sex workers in Indonesia. A PD project to enhance psychological resilience amongst adolescents vulnerable to depression and anxiety was implemented in the Netherlands.
Child protection
A five-year PD project starting in 2003 to prevent girl trafficking in Indonesia with Save the Children and a local Indonesian NGO, helped them find viable economic options to stay in their communities.
A PD project to stop Female Genital Mutilation/Cutting in Egypt began in 1998 with CEDPA (Center for Development and Population Activities), COST (Coptic Organization for Services and Training), Caritas in Minya, Community Development Agency (CDA), Monshaat Nasser in Beni Suef governorate, and the Center for Women's Legal Assistance (CEWLA). Efforts have already shown a reduction in the practice.
In Uganda, a project with the Oak Foundation and Save the Children helped girls who were child soldiers with the Lords Resistance Army in Sudan reintegrate into their communities.
In education
PD projects in New Jersey, California, Argentina, Ethiopia, and Burkina Faso have addressed dropout rates and keeping girls in school.
Private sector
Proponents of PD within management science argue that, in any population (even in such seemingly mundane groups as service personnel in fast food environments), the positive deviants have attitudes, cognitive processes, and behavioral patterns that lead to significantly improved performance in key metrics such as speed of service and profitability. Studies claim that the widespread adoption of positive-deviant approaches consistently leads to significant performance improvement.
PD had been significantly extended to the private sector, by William Seidman and Michael McCauley. Their extensions include methodologies and technologies for:
Quickly identifying the positive deviants
Efficiently gathering and organizing the positive deviant knowledge
Motivating a willingness in others to adopt the positive deviant approaches
Sustaining the change by others by integrating it into their pre-existing emotional and cognitive functions
Scaling the positive deviant knowledge to large numbers of people simultaneously
Positive deviance was further extended to groups or organizations by Gary Hamel. Hamel looks to Positive Deviant companies to set the example for "management innovation."
See also
Creativity
Deviance (sociology)
Individuality
Invention
Nonconformity
Outliers (book)
Thinking outside the box
Rebellious Motivational State
References
Malnutrition
Eating behaviors of humans
Change management
Health promotion
Research on poverty | Positive deviance | [
"Biology"
] | 2,237 | [
"Eating behaviors",
"Behavior",
"Human behavior",
"Eating behaviors of humans"
] |
7,058,047 | https://en.wikipedia.org/wiki/History%20of%20Lorentz%20transformations | The history of Lorentz transformations comprises the development of linear transformations forming the Lorentz group or Poincaré group preserving the Lorentz interval and the Minkowski inner product .
In mathematics, transformations equivalent to what was later known as Lorentz transformations in various dimensions were discussed in the 19th century in relation to the theory of quadratic forms, hyperbolic geometry, Möbius geometry, and sphere geometry, which is connected to the fact that the group of motions in hyperbolic space, the Möbius group or projective special linear group, and the Laguerre group are isomorphic to the Lorentz group.
In physics, Lorentz transformations became known at the beginning of the 20th century, when it was discovered that they exhibit the symmetry of Maxwell's equations. Subsequently, they became fundamental to all of physics, because they formed the basis of special relativity in which they exhibit the symmetry of Minkowski spacetime, making the speed of light invariant between different inertial frames. They relate the spacetime coordinates of two arbitrary inertial frames of reference with constant relative speed v. In one frame, the position of an event is given by x,y,z and time t, while in the other frame the same event has coordinates x′,y′,z′ and t′.
Mathematical prehistory
Using the coefficients of a symmetric matrix A, the associated bilinear form, and a linear transformations in terms of transformation matrix g, the Lorentz transformation is given if the following conditions are satisfied:
It forms an indefinite orthogonal group called the Lorentz group O(1,n), while the case det g=+1 forms the restricted Lorentz group SO(1,n). The quadratic form becomes the Lorentz interval in terms of an indefinite quadratic form of Minkowski space (being a special case of pseudo-Euclidean space), and the associated bilinear form becomes the Minkowski inner product. Long before the advent of special relativity it was used in topics such as the Cayley–Klein metric, hyperboloid model and other models of hyperbolic geometry, computations of elliptic functions and integrals, transformation of indefinite quadratic forms, squeeze mappings of the hyperbola, group theory, Möbius transformations, spherical wave transformation, transformation of the Sine-Gordon equation, Biquaternion algebra, split-complex numbers, Clifford algebra, and others.
Electrodynamics and special relativity
Overview
In the special relativity, Lorentz transformations exhibit the symmetry of Minkowski spacetime by using a constant c as the speed of light, and a parameter v as the relative velocity between two inertial reference frames. Using the above conditions, the Lorentz transformation in 3+1 dimensions assume the form:
In physics, analogous transformations have been introduced by Voigt (1887) related to an incompressible medium, and by Heaviside (1888), Thomson (1889), Searle (1896) and Lorentz (1892, 1895) who analyzed Maxwell's equations. They were completed by Larmor (1897, 1900) and Lorentz (1899, 1904), and brought into their modern form by Poincaré (1905) who gave the transformation the name of Lorentz. Eventually, Einstein (1905) showed in his development of special relativity that the transformations follow from the principle of relativity and constant light speed alone by modifying the traditional concepts of space and time, without requiring a mechanical aether in contradistinction to Lorentz and Poincaré. Minkowski (1907–1908) used them to argue that space and time are inseparably connected as spacetime.
Regarding special representations of the Lorentz transformations: Minkowski (1907–1908) and Sommerfeld (1909) used imaginary trigonometric functions, Frank (1909) and Varićak (1910) used hyperbolic functions, Bateman and Cunningham (1909–1910) used spherical wave transformations, Herglotz (1909–10) used Möbius transformations, Plummer (1910) and Gruner (1921) used trigonometric Lorentz boosts, Ignatowski (1910) derived the transformations without light speed postulate, Noether (1910) and Klein (1910) as well Conway (1911) and Silberstein (1911) used Biquaternions, Ignatowski (1910/11), Herglotz (1911), and others used vector transformations valid in arbitrary directions, Borel (1913–14) used Cayley–Hermite parameter,
Voigt (1887)
Woldemar Voigt (1887) developed a transformation in connection with the Doppler effect and an incompressible medium, being in modern notation:
If the right-hand sides of his equations are multiplied by γ they are the modern Lorentz transformation. In Voigt's theory the speed of light is invariant, but his transformations mix up a relativistic boost together with a rescaling of space-time. Optical phenomena in free space are scale, conformal, and Lorentz invariant, so the combination is invariant too. For instance, Lorentz transformations can be extended by using factor :
.
l=1/γ gives the Voigt transformation, l=1 the Lorentz transformation. But scale transformations are not a symmetry of all the laws of nature, only of electromagnetism, so these transformations cannot be used to formulate a principle of relativity in general. It was demonstrated by Poincaré and Einstein that one has to set l=1 in order to make the above transformation symmetric and to form a group as required by the relativity principle, therefore the Lorentz transformation is the only viable choice.
Voigt sent his 1887 paper to Lorentz in 1908, and that was acknowledged in 1909:
Also Hermann Minkowski said in 1908 that the transformations which play the main role in the principle of relativity were first examined by Voigt in 1887. Voigt responded in the same paper by saying that his theory was based on an elastic theory of light, not an electromagnetic one. However, he concluded that some results were actually the same.
Heaviside (1888), Thomson (1889), Searle (1896)
In 1888, Oliver Heaviside investigated the properties of charges in motion according to Maxwell's electrodynamics. He calculated, among other things, anisotropies in the electric field of moving bodies represented by this formula:
.
Consequently, Joseph John Thomson (1889) found a way to substantially simplify calculations concerning moving charges by using the following mathematical transformation (like other authors such as Lorentz or Larmor, also Thomson implicitly used the Galilean transformation z-vt in his equation):
Thereby, inhomogeneous electromagnetic wave equations are transformed into a Poisson equation. Eventually, George Frederick Charles Searle noted in (1896) that Heaviside's expression leads to a deformation of electric fields which he called "Heaviside-Ellipsoid" of axial ratio
Lorentz (1892, 1895)
In order to explain the aberration of light and the result of the Fizeau experiment in accordance with Maxwell's equations, Lorentz in 1892 developed a model ("Lorentz ether theory") in which the aether is completely motionless, and the speed of light in the aether is constant in all directions. In order to calculate the optics of moving bodies, Lorentz introduced the following quantities to transform from the aether system into a moving system (it's unknown whether he was influenced by Voigt, Heaviside, and Thomson)
where x* is the Galilean transformation x-vt. Except the additional γ in the time transformation, this is the complete Lorentz transformation. While t is the "true" time for observers resting in the aether, t′ is an auxiliary variable only for calculating processes for moving systems. It is also important that Lorentz and later also Larmor formulated this transformation in two steps. At first an implicit Galilean transformation, and later the expansion into the "fictitious" electromagnetic system with the aid of the Lorentz transformation. In order to explain the negative result of the Michelson–Morley experiment, he (1892b) introduced the additional hypothesis that also intermolecular forces are affected in a similar way and introduced length contraction in his theory (without proof as he admitted). The same hypothesis had been made previously by George FitzGerald in 1889 based on Heaviside's work. While length contraction was a real physical effect for Lorentz, he considered the time transformation only as a heuristic working hypothesis and a mathematical stipulation.
In 1895, Lorentz further elaborated on his theory and introduced the "theorem of corresponding states". This theorem states that a moving observer (relative to the ether) in his "fictitious" field makes the same observations as a resting observers in his "real" field for velocities to first order in v/c. Lorentz showed that the dimensions of electrostatic systems in the ether and a moving frame are connected by this transformation:
For solving optical problems Lorentz used the following transformation, in which the modified time variable was called "local time" () by him:
With this concept Lorentz could explain the Doppler effect, the aberration of light, and the Fizeau experiment.
Larmor (1897, 1900)
In 1897, Larmor extended the work of Lorentz and derived the following transformation
Larmor noted that if it is assumed that the constitution of molecules is electrical then the FitzGerald–Lorentz contraction is a consequence of this transformation, explaining the Michelson–Morley experiment. It's notable that Larmor was the first who recognized that some sort of time dilation is a consequence of this transformation as well, because "individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio 1/γ". Larmor wrote his electrodynamical equations and transformations neglecting terms of higher order than (v/c)2 – when his 1897 paper was reprinted in 1929, Larmor added the following comment in which he described how they can be made valid to all orders of v/c:
In line with that comment, in his book Aether and Matter published in 1900, Larmor used a modified local time t″=t′-εvx′/c2 instead of the 1897 expression t′=t-vx/c2 by replacing v/c2 with εv/c2, so that t″ is now identical to the one given by Lorentz in 1892, which he combined with a Galilean transformation for the x′, y′, z′, t′ coordinates:
Larmor knew that the Michelson–Morley experiment was accurate enough to detect an effect of motion depending on the factor (v/c)2, and so he sought the transformations which were "accurate to second order" (as he put it). Thus he wrote the final transformations (where x′=x-vt and t″ as given above) as:
by which he arrived at the complete Lorentz transformation. Larmor showed that Maxwell's equations were invariant under this two-step transformation, "to second order in v/c" – it was later shown by Lorentz (1904) and Poincaré (1905) that they are indeed invariant under this transformation to all orders in v/c.
Larmor gave credit to Lorentz in two papers published in 1904, in which he used the term "Lorentz transformation" for Lorentz's first order transformations of coordinates and field configurations:
Lorentz (1899, 1904)
Also Lorentz extended his theorem of corresponding states in 1899. First he wrote a transformation equivalent to the one from 1892 (again, x* must be replaced by x-vt):
Then he introduced a factor ε of which he said he has no means of determining it, and modified his transformation as follows (where the above value of t′ has to be inserted):
This is equivalent to the complete Lorentz transformation when solved for x″ and t″ and with ε=1. Like Larmor, Lorentz noticed in 1899 also some sort of time dilation effect in relation to the frequency of oscillating electrons "that in S the time of vibrations be kε times as great as in S0", where S0 is the aether frame.
In 1904 he rewrote the equations in the following form by setting l=1/ε (again, x* must be replaced by x-vt):
Under the assumption that l=1 when v=0, he demonstrated that l=1 must be the case at all velocities, therefore length contraction can only arise in the line of motion. So by setting the factor l to unity, Lorentz's transformations now assumed the same form as Larmor's and are now completed. Unlike Larmor, who restricted himself to show the covariance of Maxwell's equations to second order, Lorentz tried to widen its covariance to all orders in v/c. He also derived the correct formulas for the velocity dependence of electromagnetic mass, and concluded that the transformation formulas must apply to all forces of nature, not only electrical ones. However, he didn't achieve full covariance of the transformation equations for charge density and velocity. When the 1904 paper was reprinted in 1913, Lorentz therefore added the following remark:
Lorentz's 1904 transformation was cited and used by Alfred Bucherer in July 1904:
or by Wilhelm Wien in July 1904:
or by Emil Cohn in November 1904 (setting the speed of light to unity):
or by Richard Gans in February 1905:
Poincaré (1900, 1905)
Local time
Neither Lorentz or Larmor gave a clear physical interpretation of the origin of local time. However, Henri Poincaré in 1900 commented on the origin of Lorentz's "wonderful invention" of local time. He remarked that it arose when clocks in a moving reference frame are synchronised by exchanging signals which are assumed to travel with the same speed in both directions, which lead to what is nowadays called relativity of simultaneity, although Poincaré's calculation does not involve length contraction or time dilation. In order to synchronise the clocks here on Earth (the x*, t* frame) a light signal from one clock (at the origin) is sent to another (at x*), and is sent back. It's supposed that the Earth is moving with speed v in the x-direction (= x*-direction) in some rest system (x, t) (i.e. the luminiferous aether system for Lorentz and Larmor). The time of flight outwards is
and the time of flight back is
.
The elapsed time on the clock when the signal is returned is δta+δtb and the time t*=(δta+δtb)/2 is ascribed to the moment when the light signal reached the distant clock. In the rest frame the time t=δta is ascribed to that same instant. Some algebra gives the relation between the different time coordinates ascribed to the moment of reflection. Thus
identical to Lorentz (1892). By dropping the factor γ2 under the assumption that , Poincaré gave the result t*=t-vx*/c2, which is the form used by Lorentz in 1895.
Similar physical interpretations of local time were later given by Emil Cohn (1904) and Max Abraham (1905).
Lorentz transformation
On June 5, 1905 (published June 9) Poincaré formulated transformation equations which are algebraically equivalent to those of Larmor and Lorentz and gave them the modern form:
.
Apparently Poincaré was unaware of Larmor's contributions, because he only mentioned Lorentz and therefore used for the first time the name "Lorentz transformation". Poincaré set the speed of light to unity, pointed out the group characteristics of the transformation by setting l=1, and modified/corrected Lorentz's derivation of the equations of electrodynamics in some details in order to fully satisfy the principle of relativity, i.e. making them fully Lorentz covariant.
In July 1905 (published in January 1906) Poincaré showed in detail how the transformations and electrodynamic equations are a consequence of the principle of least action; he demonstrated in more detail the group characteristics of the transformation, which he called Lorentz group, and he showed that the combination x2+y2+z2-t2 is invariant. He noticed that the Lorentz transformation is merely a rotation in four-dimensional space about the origin by introducing as a fourth imaginary coordinate, and he used an early form of four-vectors. He also formulated the velocity addition formula, which he had already derived in unpublished letters to Lorentz from May 1905:
.
Einstein (1905) – Special relativity
On June 30, 1905 (published September 1905) Einstein published what is now called special relativity and gave a new derivation of the transformation, which was based only on the principle of relativity and the principle of the constancy of the speed of light. While Lorentz considered "local time" to be a mathematical stipulation device for explaining the Michelson-Morley experiment, Einstein showed that the coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. For quantities of first order in v/c this was also done by Poincaré in 1900, while Einstein derived the complete transformation by this method. Unlike Lorentz and Poincaré who still distinguished between real time in the aether and apparent time for moving observers, Einstein showed that the transformations applied to the kinematics of moving frames.
The notation for this transformation is equivalent to Poincaré's of 1905, except that Einstein didn't set the speed of light to unity:
Einstein also defined the velocity addition formula:
and the light aberration formula:
Minkowski (1907–1908) – Spacetime
The work on the principle of relativity by Lorentz, Einstein, Planck, together with Poincaré's four-dimensional approach, were further elaborated and combined with the hyperboloid model by Hermann Minkowski in 1907 and 1908. Minkowski particularly reformulated electrodynamics in a four-dimensional way (Minkowski spacetime). For instance, he wrote x, y, z, it in the form x1, x2, x3, x4. By defining ψ as the angle of rotation around the z-axis, the Lorentz transformation assumes the form (with c=1):
Even though Minkowski used the imaginary number iψ, he for once directly used the tangens hyperbolicus in the equation for velocity
with .
Minkowski's expression can also by written as ψ=atanh(q) and was later called rapidity. He also wrote the Lorentz transformation in matrix form:
As a graphical representation of the Lorentz transformation he introduced the Minkowski diagram, which became a standard tool in textbooks and research articles on relativity:
Sommerfeld (1909) – Spherical trigonometry
Using an imaginary rapidity such as Minkowski, Arnold Sommerfeld (1909) formulated the Lorentz boost and the relativistic velocity addition in terms of trigonometric functions and the spherical law of cosines:
Frank (1909) – Hyperbolic functions
Hyperbolic functions were used by Philipp Frank (1909), who derived the Lorentz transformation using ψ as rapidity:
Bateman and Cunningham (1909–1910) – Spherical wave transformation
In line with Sophus Lie's (1871) research on the relation between sphere transformations with an imaginary radius coordinate and 4D conformal transformations, it was pointed out by Bateman and Cunningham (1909–1910), that by setting u=ict as the imaginary fourth coordinates one can produce spacetime conformal transformations. Not only the quadratic form , but also Maxwells equations are covariant with respect to these transformations, irrespective of the choice of λ. These variants of conformal or Lie sphere transformations were called spherical wave transformations by Bateman. However, this covariance is restricted to certain areas such as electrodynamics, whereas the totality of natural laws in inertial frames is covariant under the Lorentz group. In particular, by setting λ=1 the Lorentz group can be seen as a 10-parameter subgroup of the 15-parameter spacetime conformal group .
Bateman (1910–12) also alluded to the identity between the Laguerre inversion and the Lorentz transformations. In general, the isomorphism between the Laguerre group and the Lorentz group was pointed out by Élie Cartan (1912, 1915–55), Henri Poincaré (1912–21) and others.
Herglotz (1909/10) – Möbius transformation
Following Felix Klein (1889–1897) and Fricke & Klein (1897) concerning the Cayley absolute, hyperbolic motion and its transformation, Gustav Herglotz (1909–10) classified the one-parameter Lorentz transformations as loxodromic, hyperbolic, parabolic and elliptic. The general case (on the left) and the hyperbolic case equivalent to Lorentz transformations or squeeze mappings are as follows:
Varićak (1910) – Hyperbolic functions
Following Sommerfeld (1909), hyperbolic functions were used by Vladimir Varićak in several papers starting from 1910, who represented the equations of special relativity on the basis of hyperbolic geometry in terms of Weierstrass coordinates. For instance, by setting l=ct and v/c=tanh(u) with u as rapidity he wrote the Lorentz transformation:
and showed the relation of rapidity to the Gudermannian function and the angle of parallelism:
He also related the velocity addition to the hyperbolic law of cosines:
Subsequently, other authors such as E. T. Whittaker (1910) or Alfred Robb (1911, who coined the name rapidity) used similar expressions, which are still used in modern textbooks.
Plummer (1910) – Trigonometric Lorentz boosts
w:Henry Crozier Keating Plummer (1910) defined the Lorentz boost in terms of trigonometric functions
Ignatowski (1910)
While earlier derivations and formulations of the Lorentz transformation relied from the outset on optics, electrodynamics, or the invariance of the speed of light, Vladimir Ignatowski (1910) showed that it is possible to use the principle of relativity (and related group theoretical principles) alone, in order to derive the following transformation between two inertial frames:
The variable n can be seen as a space-time constant whose value has to be determined by experiment or taken from a known physical law such as electrodynamics. For that purpose, Ignatowski used the above-mentioned Heaviside ellipsoid representing a contraction of electrostatic fields by x/γ in the direction of motion. It can be seen that this is only consistent with Ignatowski's transformation when n=1/c2, resulting in p=γ and the Lorentz transformation. With n=0, no length changes arise and the Galilean transformation follows. Ignatowski's method was further developed and improved by Philipp Frank and Hermann Rothe (1911, 1912), with various authors developing similar methods in subsequent years.
Noether (1910), Klein (1910) – Quaternions
Felix Klein (1908) described Cayley's (1854) 4D quaternion multiplications as "Drehstreckungen" (orthogonal substitutions in terms of rotations leaving invariant a quadratic form up to a factor), and pointed out that the modern principle of relativity as provided by Minkowski is essentially only the consequent application of such Drehstreckungen, even though he didn't provide details.
In an appendix to Klein's and Sommerfeld's "Theory of the top" (1910), Fritz Noether showed how to formulate hyperbolic rotations using biquaternions with , which he also related to the speed of light by setting ω2=-c2. He concluded that this is the principal ingredient for a rational representation of the group of Lorentz transformations:
Besides citing quaternion related standard works by Arthur Cayley (1854), Noether referred to the entries in Klein's encyclopedia by Eduard Study (1899) and the French version by Élie Cartan (1908). Cartan's version contains a description of Study's dual numbers, Clifford's biquaternions (including the choice for hyperbolic geometry), and Clifford algebra, with references to Stephanos (1883), Buchheim (1884–85), Vahlen (1901–02) and others.
Citing Noether, Klein himself published in August 1910 the following quaternion substitutions forming the group of Lorentz transformations:
or in March 1911
Conway (1911), Silberstein (1911) – Quaternions
Arthur W. Conway in February 1911 explicitly formulated quaternionic Lorentz transformations of various electromagnetic quantities in terms of velocity λ:
Also Ludwik Silberstein in November 1911 as well as in 1914, formulated the Lorentz transformation in terms of velocity v:
Silberstein cites Cayley (1854, 1855) and Study's encyclopedia entry (in the extended French version of Cartan in 1908), as well as the appendix of Klein's and Sommerfeld's book.
Ignatowski (1910/11), Herglotz (1911), and others – Vector transformation
Vladimir Ignatowski (1910, published 1911) showed how to reformulate the Lorentz transformation in order to allow for arbitrary velocities and coordinates:
Gustav Herglotz (1911) also showed how to formulate the transformation in order to allow for arbitrary velocities and coordinates v=(vx, vy, vz) and r=(x, y, z):
This was simplified using vector notation by Ludwik Silberstein (1911 on the left, 1914 on the right):
Equivalent formulas were also given by Wolfgang Pauli (1921), with Erwin Madelung (1922) providing the matrix form
These formulas were called "general Lorentz transformation without rotation" by Christian Møller (1952), who in addition gave an even more general Lorentz transformation in which the Cartesian axes have different orientations, using a rotation operator . In this case, v′=(v′x, v′y, v′z) is not equal to -v=(-vx, -vy, -vz), but the relation holds instead, with the result
Borel (1913–14) – Cayley–Hermite parameter
Émile Borel (1913) started by demonstrating Euclidean motions using Euler-Rodrigues parameter in three dimensions, and Cayley's (1846) parameter in four dimensions. Then he demonstrated the connection to indefinite quadratic forms expressing hyperbolic motions and Lorentz transformations. In three dimensions:
In four dimensions:
Gruner (1921) – Trigonometric Lorentz boosts
In order to simplify the graphical representation of Minkowski space, Paul Gruner (1921) (with the aid of Josef Sauter) developed what is now called Loedel diagrams, using the following relations:
In another paper Gruner used the alternative relations:
See also
Derivations of the Lorentz transformations
History of special relativity
References
Historical mathematical sources
Historical relativity sources
. For Minkowski's and Voigt's statements see p. 762.
. See also: English translation.
; English translation by David Delphenich: On the mechanics of deformable bodies from the standpoint of relativity theory.
(Reprint of Larmor (1897) with new annotations by Larmor.)
. See also the English translation.
Written by Poincaré in 1912, printed in Acta Mathematica in 1914 though belatedly published in 1921.
Secondary sources
See also "Michelson, FitzGerald and Lorentz: the origins of relativity revisited", Online.
(Only pages 1–21 were published in 1915, the entire article including pp. 39–43 concerning the groups of Laguerre and Lorentz was posthumously published in 1955 in Cartan's collected papers, and was reprinted in the Encyclopédie in 1991.)
; First edition 1911, second expanded edition 1913, third expanded edition 1919.
In English:
External links
Mathpages: 1.4 The Relativity of Light
Equations
History of physics
Hendrik Lorentz
Historical treatment of quaternions | History of Lorentz transformations | [
"Mathematics"
] | 5,939 | [
"Mathematical objects",
"Equations"
] |
7,058,352 | https://en.wikipedia.org/wiki/Matrix%20pencil | In linear algebra, a matrix pencil is a matrix-valued polynomial function defined on a field , usually the real or complex numbers.
Definition
Let be a field (typically, ; the definition can be generalized to rngs), let be a non-negative integer, let be a positive integer, and let be matrices (i. e. for all ). Then the matrix pencil defined by is the matrix-valued function defined by
The degree of the matrix pencil is defined as the largest integer such that (the zero matrix over ).
Linear matrix pencils
A particular case is a linear matrix pencil (where ). We denote it briefly with the notation , and note that using the more general notation, and (not ).
Properties
A pencil is called regular if there is at least one value of such that ; otherwise it is called singular. We call eigenvalues of a matrix pencil all (complex) numbers for which ; in particular, the eigenvalues of the matrix pencil are the matrix eigenvalues of . For linear pencils in particular, the eigenvalues of the pencil are also called generalized eigenvalues.
The set of the eigenvalues of a pencil is called the spectrum of the pencil, and is written . For the linear pencil , it is written as (not ).
The linear pencil is said to have one or more eigenvalues at infinity if has one or more 0 eigenvalues.
Applications
Matrix pencils play an important role in numerical linear algebra. The problem of finding the eigenvalues of a pencil is called the generalized eigenvalue problem. The most popular algorithm for this task is the QZ algorithm, which is an implicit version of the QR algorithm to solve the eigenvalue problem without inverting the matrix (which is impossible when is singular, or numerically unstable when it is ill-conditioned).
Pencils generated by commuting matrices
If , then the pencil generated by and :
consists only of matrices similar to a diagonal matrix, or
has no matrices in it similar to a diagonal matrix, or
has exactly one matrix in it similar to a diagonal matrix.
See also
Generalized eigenvalue problem
Generalized pencil-of-function method
Nonlinear eigenproblem
Quadratic eigenvalue problem
Generalized Rayleigh quotient
Notes
References
Peter Lancaster & Qian Ye (1991) "Variational and numerical methods for symmetric matrix pencils", Bulletin of the Australian Mathematical Society 43: 1 to 17
Linear algebra | Matrix pencil | [
"Mathematics"
] | 511 | [
"Linear algebra",
"Algebra"
] |
7,058,539 | https://en.wikipedia.org/wiki/Triazol-5-ylidene | The triazol-5-ylidenes are a group of persistent carbenes which includes the 1,2,4-triazol-5-ylidene system and the 1,2,3-triazol-5-ylidene system. As opposed to the now ubiquitous NHC (N-heterocyclic carbene) systems based on imidazole rings, these carbenes are structured from triazole rings. 1,2,4-triazol-5-ylidene can be thought of as an analog member of the NHC family, with an extra nitrogen in the ring, while 1,2,3-triazol-5-ylidene is better thought of as a mesoionic carbene (MIC). Both isomers of this group of carbenes benefit from enhanced stability, with certain examples exhibiting greater thermal stability, and others extended shelf life.
The 1,2,4-triazol-5-ylidene system is of special historic interest, as this system contains the first known instance of a characterized NHC, a compound colloquially known as Nitron, which was first isolated in 1905. This compound was first proposed as an analytical reagent for the gravimetric analysis of moieties commonly found in explosives. Nitron's properties as an NHC, however, were not reported and utilized until 2011.
Another member from this group of carbenes is of particular interest due to its robust stability up to temperatures of 150 °C in the absence of air or oxygen. It was first reported in 1995 by Dieter Enders and coworkers and has since become known as the "Ender's carbene." This particular reagent bears the notable distinction of being the first commercially available carbene.
History and Synthesis
Nitron
Cope and Barab reported in 1917 that Nitron had been first synthesized as early as 1905 by Max Busch, who published extensively on its use as an analytical reagent for gravimetric analysis. This molecule's potential for carbene-like reactivity would not be recognized until Färber et al. from the University of Kassel published a paper in 2011 showcasing its potential as a carbenic species. This group demonstrated that Nitron reacts as a nucleophilic carbene.
Reaction with elemental Sulfur in THF afforded a triazolinethione derivative. This formation of a C=S double bond is characteristic of nucleophilic carbenes, often referred to as a "trapping" reaction. With addition of CS2 in THF, a betainic dithiocarboxylate was synthesized, with its crystal structure fully characterized and its 13CNMR and IR spectra corresponding well with typical NHC analogues. The Rhodium complexes that the group synthesized showed that Nitron acts as a moderate donor ligand, as a reduced CO stretching frequency in the product was confirmed by IR analysis when compared to the starting material, indicating that the significant back-donation into the metal center had occurred, as would be expected from a nucleophilic carbene. Nitron has gained relatively little attention in the literature since this discovery of its carbene reactivity, although a few investigations have been undertaken to determine how its reactivity compares to the more rigorously tested and more commonly used carbene ligands.
Enders Carbene
The University of Kassel group cited their interest in generating new, cheaper-to-produce carbenes because, at the time, the commercially available carbenes "exceed[ed] several hundred US$ per gram. These commercially available carbenes had been in development since the late 1960s. Chemists were trying to make these carbenic species more stable at higher temperatures and exist free in solution without needing to form coordination compounds. Hans-Werner Wanzlick, Guy Bertrand, and Anthony Arduengo were pioneers in the development of these types of persistent carbenes, not exclusively working with the triazol-5-ylidenes.
Dieter Enders' group developed a carbene in 1995 that was stable enough to be commercially distributed. Starting with benzoyl chloride, they formed a triazolium perchlorate salt over 5 steps. They reacted this triazolium salt with sodium methoxide in methanol, and then carried out a thermal α-elimination of methanol at 80 °C and under low pressure conditions to form the Enders carbene. While all carbenes are very sensitive to oxygen and air and typically decompose readily when exposed to high temperatures. Enders showed that his new carbene was stable up to 150 °C in the absence of air and oxygen. These advances in carbene stability helped to make the commercialization of these reagents a reality. Enders carbene would become the first commercially available carbene. These carbenes, however, were still expensive, as noted by Färber et al. Following this commercialization and dissemination, many analogues of the 1,2,4-triazol-5-ylidene system have been reported and utilized, most often as transition metal coordination compounds. The enders Carbene itself proved to be a powerful catalyst for the conversion of formaldehyde to glycolaldehyde in the "formoin reaction."
1,2,3-triazol-5-ylidene
The chemistry of the 1,2,3-triazol-5-ylidene system is a much more recently developed field. This system is based on the 1,2,3-triazole ring and had been indicated to have "non negligible lifetimes" in solution as early as 1975. In 2008, 1,2,3-triazolium iodide salts were observed to react with transition metals to form metal-ligand complexes. In 2010, however, Guy Bertrand's group reported the first crystalline carbene of this class, synthesized via a copper-catalyzed azide–alkyne cycloaddition (click reaction) of 2,6-diisopropylphenyl azide and phenylacetylene. Bertrand's group reported high stability and shelf life for this compound. Since then, many coordination compounds have been reported based on this system—most notably, compounds which are active organocatalysts.
Reactivity
Enhanced Stability
Arduengo postulated that the stability of NHC-type carbenes arose from accumulation of electron density around the carbene center, hindering addition reactions from opportunistic nucleophiles. Arduengo concluded that the overall stability of these NHC's resulted from kinetic factors. He stated that "the isolation of a stable carbene is dependent upon the ability of the carbene to exist in a deep local minimum on the potential energy surfaces. It is not important what other minima might also exist on the potential surfaces so long as these minima are not kinetically accessible under ambient conditions likewise." Enders, in a similar manner, referring to the stability of the "enders carbene", posited that the "2p-2p interactions between the carbene carbon atom and the adjacent nitrogen atoms play a significant role in the stabilization of [the molecule]," due to their observation that these N-C bond lengths are considerably shorter than would be expected from single bonds. When comparing their own assessment to Arduengo's rationale for stability, Enders et al. acknowledged in their 1995 paper that "Neither our crystallographic nor our theoretical results permit us to judge the significance of these factors for the stability of the system examined in this work." The combination of strong lone pair donation from the two Nitrogens to the carbene center and the Nitrogens' sigma withdrawing effects are the primary rationales for the stability of these systems.
Wanzlick Equilibrium
The 1,2,3-triazol-5-ylidene system demonstrates fascinating reactivity, particularly with respect to the typical dimerization pathways for NHCs. Guy Bertrand notes that "the Wanzlick equilibrium pathway for classical carbenes is disfavored [for these MIC's]." The Wanzlick equilibrium describes a typical dimerization for Arduengo-type carbenes (NHCs). Due to their apparent reluctance to participate in this dimerization pathway, carbenes based on the 1,2,3-triazol-5-ylidene system have vastly extended shelf lives. These systems still require significant kinetic stabilization to be stable in solution.
Reactions
Enders reported that the Enders carbene exhibits typical Lewis basicity, as it readily adds to Lewis acids like BH3∙THF, giving the triazoline-borane adduct. In the same paper, Enders reported many other types of nucleophilic carbene reactions that are not exclusive to this system. The Enders carbene undergoes insertion reactions, addition reactions, and cycloadditions in a similar manner to many other NHC systems.
Both triazol-5-ylidene systems have proved to be excellent organocatalysts. One such catalytic use of these carbenes is an allylic substitution Grignard reaction reported in 2013. The catalytic use of a triazolium salt generates a 1,2,3-triazol-5-ylidene magnesium complex in situ, which, due to its significant Lewis basicity, can back-donate to the magnesium center and push the Schlenk equilibrium towards alkyl magnesium products. The Lewis basicity of the catalyst also promotes SN2′ selectivity for this specific reaction. The 1,2,3-triazol-5-ylidene ligand has also been shown to work well with catalytic ruthenium systems promoting olefin metathesis reactions. Other reported catalytic processes facilitated by compounds bearing these MIC ligands include: hydrohydrazination of alkynes, reductive formylation of amines with carbon dioxide and diphenylsilane, hydrogenation and dehydrogenation of N-heteroarenes in water, cycloisomerization of enynes, asymmetric Suzuki–Miyaura cross-coupling reactions, and water oxidation (WO) reactions.
Regarding the 1,2,4-triazol-5-ylidene system, many of its reported coordination compounds are with transition metals, which are usually generated in a similar fashion to the analogous imidazole-based NHC ligand-metal systems. One such catalytic use of this system coupled to a transition metal was described in 2010, where the authors used a gold(I) complex as a regioselective catalyst for the hydroamination of alkynes.
Triazaborole System
A substituted analogue of the 1,2,4-triazol-5-ylidene system was synthesized in 2016, with a boron atom replacing the carbenic carbon. The synthesized triazaborole-metal system showed interesting reactivity toward CO and isonitriles. The authors also reported that reactions with this triazaborole ring yielded some exceptionally rare boron-metal bonds, such as B-Sb and B-Bi. The structures of these triazaborole rings are stabilized by the interaction between the empty p orbital on the boron and the lone pairs on the flanking nitrogens. The aryl groups also provide good kinetic stabilization to the system. Insertion reactions of CO into 1,2,4,3-triazaborol-3-yl-lithium yielded reactive carbene species, which the authors utilized as a starting material to generate a 1,2-diboranylethene adduct.
See also
Persistent carbene
Mesoionic carbene
References
Carbenes | Triazol-5-ylidene | [
"Chemistry"
] | 2,459 | [
"Organic compounds",
"Carbenes",
"Inorganic compounds"
] |
7,058,918 | https://en.wikipedia.org/wiki/IBM%20Tivoli%20Access%20Manager | IBM Tivoli Access Manager (TAM) is an authentication and authorization solution for corporate web services, operating systems, and existing applications. Tivoli Access Manager runs on various operating system platforms such as Unix (AIX, Solaris, HP-UX), Linux, and Windows.
It has since been renamed IBM Security Access Manager (ISAM), in line with the renaming of other Tivoli products, such as Tivoli Identity Manager (TIM), which became IBM Security Identity Manager (ISIM).
In 2002, IBM acquired Access360 software, which it planned to integrate into Tivoli Access Manager. In 2009, IBM and Fujitsu announced a partnership to integrate Fujitsu's biometric authentication technology into TAM. Comparable products from other vendors include Oracle Access Manager, CA SiteMinder, NetIQ Access Manager and SAP NetWeaver Single Sign-On.
Core components
TAM has two core components, which are the foundation upon which its other features are implemented:
A user registry.
An authorization service consisting of an authorization database and an authorization engine that performs the decision-making action on the request.
Another related component is the resource manager, which is responsible for applying security policy to resources. The policy enforcer component directs the request to the authorization service for evaluation. Based on the authorization service result (approval or denial) the resource manager allows or denies access to the protected resources. Access Manager authorization decisions are based upon the Privilege Attribute Certificate (PAC), which is created for each user authenticated in an Access Manager environment, regardless of the authentication mechanism used.
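The division of labour between resource manager and authorization service can be sketched abstractly. The following is a minimal illustrative sketch in Python, not TAM's actual API; all class, method, and field names here are hypothetical.

```python
# Hypothetical sketch of the enforcement flow described above;
# these names are illustrative and are not TAM's real API.

class AuthorizationService:
    """Authorization database plus decision-making engine."""

    def __init__(self, policy_db):
        # policy_db maps (principal, resource, action) -> allowed?
        self.policy_db = policy_db

    def evaluate(self, pac, resource, action):
        # Decisions are based on the Privilege Attribute Certificate
        # (PAC) created for the user at authentication time.
        key = (pac["principal"], resource, action)
        return self.policy_db.get(key, False)


class ResourceManager:
    """Applies security policy to protected resources."""

    def __init__(self, authz):
        self.authz = authz

    def handle(self, pac, resource, action):
        # The policy enforcer directs the request to the authorization
        # service and acts on its approval-or-denial verdict.
        if self.authz.evaluate(pac, resource, action):
            return "access granted"
        return "access denied"


service = AuthorizationService({("alice", "/payroll", "read"): True})
manager = ResourceManager(service)
print(manager.handle({"principal": "alice"}, "/payroll", "read"))    # access granted
print(manager.handle({"principal": "mallory"}, "/payroll", "read"))  # access denied
```

In a real deployment the decisions would of course be backed by the user registry and the authorization database described above, not by in-memory dictionaries.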
Tivoli Access Manager Family
Tivoli Access Manager is not a single product but rather a family of products that use the same core authorization and authentication engine:
IBM Tivoli Access Manager for e-business (TAMeb)
IBM Tivoli Access Manager for Operating Systems (TAMOS) - controls access to operating system resources
IBM Tivoli Access Manager for Enterprise Single Sign-On (TAMESSO)
References
See also
IBM Tivoli Access Manager for e-business product home
IBM Tivoli Access Manager for Operating Systems product home
IBM Tivoli Identity Manager
Computer access control
Tivoli Access Manager
Identity management systems | IBM Tivoli Access Manager | [
"Engineering"
] | 431 | [
"Cybersecurity engineering",
"Computer access control"
] |
7,059,064 | https://en.wikipedia.org/wiki/Human%20search%20engine | A human search engine was a search engine that used human participation to filter the search results and assist users in clarifying their search request. The goal was to provide users with a limited number of relevant results, as opposed to traditional search engines that often return many results that may or may not be relevant.
Examples of defunct human search engines include ApexKB, ChaCha, Mahalo.com, NowNow (from Amazon.com), DMOZ and Sproose.
See also
Human-based computation
Human flesh search engine
Social search
References
Internet search engines
Human-based computation | Human search engine | [
"Technology"
] | 118 | [
"Information systems",
"Human-based computation"
] |
7,059,260 | https://en.wikipedia.org/wiki/Hun%20Mining | Hun Mining, previously known as Genesis Energy Investment Company and Genesis Mining, is an investment company based in Budapest, Hungary. From 2007 to 2010 Genesis Energy Investment Company was active in the photovoltaics market, producing solar panels with thin-film technology. In 2010 it sold the solar panel manufacturing and technology units to Denver-based Cogenco International for €15 million so that it could concentrate on mining activities. It was renamed Genesis Mining in the wake of this sale. In late 2011 it changed its name again to Hun Mining. Also in late 2011, the company sold its mining operation to Davies Corporation, announcing plans to acquire mobile telecommunications company 6GMOBILE.
References
External links
http://genesisenergy.eu/
https://web.archive.org/web/20080202154922/http://www.genesistechnologyfund.vc/html/en_home.php
Energy companies of Hungary
Solar power in Hungary
Photovoltaics manufacturers
Hungarian brands | Hun Mining | [
"Engineering"
] | 204 | [
"Photovoltaics manufacturers",
"Engineering companies"
] |
7,060,139 | https://en.wikipedia.org/wiki/Grand%20Rapids%20Amateur%20Astronomical%20Association | The Grand Rapids Amateur Astronomical Association (or "GRAAA") is an astronomy group located in Grand Rapids, Michigan. It was formed in 1955 by an enthusiastic group of individuals led by businessman James C. Veen, with a love of astronomy and science. Veen initially provided a meeting place in his office, but died in an automobile accident in 1958.
The Association participates in public education activities at various metropolitan venues, comet watches, meteor observing, as well as opening their Observatory to the public two nights per month, excluding winter. Besides the public education programs, members involve themselves in many other pursuits from observing programs to astrophotography and CCD imaging.
The Association owns and operates the Veen Observatory, located southwest of Lowell, Michigan, situated on private land under a ninety-nine-year lease granted by James M. and H. Evelyn Marron. Construction started in 1965 and the Observatory was dedicated in 1970, originally with a 12-inch Newtonian reflector constructed by the members. The Chaffee Planetarium made a considerable contribution to the building, as did Kent County businesses and a foundation. It is the largest amateur facility in the state of Michigan, with 16-inch (Borr), 14-inch (Marron), and 17-inch (Hawkins) telescopes, as well as smaller portable instruments including a hydrogen-alpha solar telescope.
The GRAAA is a 501(c)(3) non-profit educational and scientific organization dedicated to advancing the study of astronomy and promoting astronomy and science education. Meetings are held monthly, in the warmer months at the Veen Observatory, and currently (2013) at Schuler Books and Music in the city during the rest of the season.
See also
List of observatories
References
Amateur astronomy organizations
Clubs and societies in Michigan
Non-profit organizations based in Michigan | Grand Rapids Amateur Astronomical Association | [
"Astronomy"
] | 378 | [
"Amateur astronomy organizations",
"Astronomy stubs",
"Astronomy organizations",
"Astronomy organization stubs"
] |
7,060,157 | https://en.wikipedia.org/wiki/IBM%205880 | The IBM 5880, also known as the IBM 5880 Electrocardiograph System, is a computerized electrocardiograph and diagnostic tool. It was developed by IBM scientist Ray Bonner in the early 1970s and announced in 1978.
The IBM 5880 was designed to analyze electrocardiograms, measurements of the electrical activity of the heart, and provide diagnostic advice to the same standards as a cardiologist. Similar programs already ran on mainframe computers, but the 5880 was the first version that could be placed in a cart and taken into hospital conditions.
When it first arrived in hospitals, doctors were afraid that they would no longer be paid to "read" an EKG. Going forward, all EKGs produced by the 5880 in the hospital still had to be "reviewed" by a doctor, who was paid for that review.
References
IBM Archives
Products introduced in 1978
5880 | IBM 5880 | [
"Biology"
] | 180 | [
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
7,060,924 | https://en.wikipedia.org/wiki/Hankinson%27s%20equation | Hankinson's equation (also called Hankinson's formula or Hankinson's criterion) is a mathematical relationship for predicting the off-axis uniaxial compressive strength of wood. The formula can also be used to compute the fiber stress or the stress wave velocity at the elastic limit as a function of grain angle in wood. For a wood that has uniaxial compressive strengths of σ_0 parallel to the grain and σ_90 perpendicular to the grain, Hankinson's equation predicts that the uniaxial compressive strength σ_θ of the wood in a direction at an angle θ to the grain is given by

σ_θ = (σ_0 σ_90) / (σ_0 sin²θ + σ_90 cos²θ)
Even though the original relation was based on studies of spruce, Hankinson's equation has been found to be remarkably accurate for many other types of wood. A generalized form of the Hankinson formula has also been used for predicting the uniaxial tensile strength of wood at an angle to the grain. This formula has the form

σ_θ = (σ_0 σ_90) / (σ_0 sin^n(θ) + σ_90 cos^n(θ))

where the exponent n can take values between 1.5 and 2.
The stress wave velocity V(θ) at angle θ to the grain at the elastic limit can similarly be obtained from the Hankinson formula

V(θ) = (V_0 V_90) / (V_0 sin²θ + V_90 cos²θ)

where V_0 is the velocity parallel to the grain, V_90 is the velocity perpendicular to the grain and θ is the grain angle.
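The formula is straightforward to evaluate numerically. Below is a minimal Python sketch of the generalized form; the function name and the example strength values are illustrative, not taken from any standard.

```python
import math

def hankinson_strength(sigma_0, sigma_90, theta_deg, n=2.0):
    """Off-axis strength of wood by (generalized) Hankinson's formula.

    sigma_0   -- strength parallel to the grain
    sigma_90  -- strength perpendicular to the grain
    theta_deg -- grain angle in degrees
    n         -- exponent: 2 for the classical formula, between
                 1.5 and 2 in the generalized (tensile) form
    """
    t = math.radians(theta_deg)
    return (sigma_0 * sigma_90) / (
        sigma_0 * math.sin(t) ** n + sigma_90 * math.cos(t) ** n
    )

# Illustrative, made-up strengths in MPa: wood is much stronger along
# the grain, and strength falls off quickly with grain angle.
for angle in (0, 15, 30, 45, 90):
    print(angle, round(hankinson_strength(40.0, 5.0, angle), 2))
```

At θ = 0 the denominator reduces to σ_90 and the formula returns σ_0, and at θ = 90° it returns σ_90, as expected.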
See also
Material failure theory
Linear elasticity
Hooke's law
Orthotropic material
Transverse isotropy
References
Materials science
Solid mechanics
Equations | Hankinson's equation | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 286 | [
"Solid mechanics",
"Applied and interdisciplinary physics",
"Mathematical objects",
"Materials science",
"Equations",
"Mechanics",
"nan"
] |
7,060,967 | https://en.wikipedia.org/wiki/Crayford%20focuser | The Crayford focuser is a simplified focusing mechanism for amateur astronomical telescopes. Crayford focusers are considered superior to entry-level rack and pinion focusers, normally found in this type of device. Instead of the rack and pinion, they have a smooth spring-loaded shaft which holds the focus tube against four opposing bearing surfaces, and controls its movement. It is named after the Crayford Manor House Astronomical Society, Crayford, London, England, where it was invented by John Wall, a member of the astronomical society which meets there. The original Crayford Focuser is on display there.
The Crayford focuser was initially demonstrated to the Crayford Manor House Astronomical Society, and then descriptions were published in The Journal of the British Astronomical Association (February 1971), Model Engineer magazine (May 1972) (with full constructional plans), and Sky & Telescope magazine (September 1972). John Wall decided not to patent the idea, effectively donating it to the amateur astronomical community.
Crayfords began to be popular among amateur telescope makers for being easy to make without any high precision machining, yet providing precise focusing with no gear slop or backlash. This trend picked up steam as commercial Crayfords, while being relatively inexpensive, proved to be superior to most rack-and-pinion devices. They became the default focusing device on many amateur telescopes just above entry level, and better. They are made in a variety of designs by companies specializing in amateur telescope supplies, with such features as dual speed focusing and interface to motorized, or robotic, focusing.
The concept
The Crayford focuser design is reminiscent of the Dobsonian design for telescopes, since instead of precision machining, it relies on geometry to preclude any wobble or backlash.
The Crayford is similar in appearance to a Rack and pinion focuser, but has no teeth on either the rack or the pinion. Instead, a round axle is pressed (for example by a spring-loaded or thumbscrew-tightened piece of PTFE plastic) against a flat on the side of the focuser drawtube, relying only on friction to move the drawtube as the axle is turned. This also presses the drawtube against a set of four ball bearings against which it moves smoothly with minimal friction. The pressure exerted on the axle can often be adjusted for smoothest operation, and the drawtube may be locked in position to support heavy eyepieces or cameras.
Subsequently, variations on the Crayford focuser concept have emerged, e.g. a helical Crayford where the drawtube is pressed against four tilted ball bearings, and focusing action is accomplished by rotating the drawtube.
Crayford focuser Gallery
References
External links
The Crayford eyepiece mounting by John Wall - Journal of the British Astronomical Association, 81 (2), 118, February 1971
Crayford focusers
Crayford Manor House Astronomical Society - The home of the original Crayford Focuser
astronomics.com's description of how the Crayford Focuser works
Toshimi Taki's description of how to build a cheap Crayford Focuser
Features of SCT Crayford Focuser
Telescope making
Mechanisms (engineering)
English inventions
Crayford | Crayford focuser | [
"Engineering"
] | 677 | [
"Mechanical engineering",
"Mechanisms (engineering)"
] |
7,061,159 | https://en.wikipedia.org/wiki/Chunked%20transfer%20encoding | Chunked transfer encoding is a streaming data transfer mechanism available in Hypertext Transfer Protocol (HTTP) version 1.1, defined in RFC 9112 §7.1. In chunked transfer encoding, the data stream is divided into a series of non-overlapping "chunks". The chunks are sent out and received independently of one another. No knowledge of the data stream outside the currently-being-processed chunk is necessary for both the sender and the receiver at any given time.
Each chunk is preceded by its size in bytes. The transmission ends when a zero-length chunk is received. The chunked keyword in the Transfer-Encoding header is used to indicate chunked transfer.
Chunked transfer encoding is not supported in HTTP/2, which provides its own mechanisms for data streaming.
Rationale
The introduction of chunked encoding provided various benefits:
Chunked transfer encoding allows a server to maintain an HTTP persistent connection for dynamically generated content. In this case, the HTTP Content-Length header cannot be used to delimit the content and the next HTTP request/response, as the content size is not yet known. Chunked encoding has the benefit that it is not necessary to generate the full content before writing the header, as it allows streaming of content as chunks and explicitly signaling the end of the content, making the connection available for the next HTTP request/response.
Chunked encoding allows the sender to send additional header fields after the message body. This is important in cases where values of a field cannot be known until the content has been produced, such as when the content of the message must be digitally signed. Without chunked encoding, the sender would have to buffer the content until it was complete in order to calculate a field value and send it before the content.
Applicability
For version 1.1 of the HTTP protocol, the chunked transfer mechanism is always considered acceptable, even if it is not listed in the TE (transfer encoding) request header field; when used with other transfer mechanisms, it should always be applied last to the transferred data and never more than once. This transfer coding method also allows additional entity header fields to be sent after the last chunk if the client specified the "trailers" parameter as an argument of the TE field. The origin server of the response can also decide to send additional entity trailers even if the client did not specify the "trailers" option in the TE request field, but only if the metadata is optional (i.e. the client can use the received entity without them). Whenever the trailers are used, the server should list their names in the Trailer header field; three header field types are specifically prohibited from appearing as a trailer field: Transfer-Encoding, Content-Length and Trailer.
Format
If a Transfer-Encoding field with a value of "chunked" is specified in an HTTP message (either a request sent by a client or the response from the server), the body of the message consists of one or more chunks and one terminating chunk with an optional trailer before the final ␍␊ sequence (i.e. carriage return followed by line feed).
Each chunk starts with the number of octets of the data it embeds expressed as a hexadecimal number in ASCII followed by optional parameters (chunk extension) and a terminating ␍␊ sequence, followed by the chunk data. The chunk is terminated by ␍␊.
If chunk extensions are provided, the chunk size is terminated by a semicolon and followed by the parameters, each also delimited by semicolons. Each parameter is encoded as an extension name followed by an optional equal sign and value. These parameters could be used for a running message digest or digital signature, or to indicate an estimated transfer progress, for instance.
The terminating chunk is a special chunk of zero length. It may contain a trailer, which consists of a (possibly empty) sequence of entity header fields. Normally, such header fields would be sent in the message's header; however, it may be more efficient to determine them after processing the entire message entity. In that case, it is useful to send those headers in the trailer.
Header fields that regulate the use of trailers are TE (used in requests), and Trailers (used in responses).
Use with compression
HTTP servers often use compression to optimize transmission, for example with the gzip or deflate content codings. If both compression and chunked encoding are enabled, then the content stream is first compressed, then chunked; so the chunk encoding itself is not compressed, and the data in each chunk is compressed individually. The remote endpoint then decodes the stream by concatenating the chunks and uncompressing the result.
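To illustrate this ordering, here is a minimal Python sketch that compresses a body and then applies chunked framing; the helper name and the chunk size are arbitrary choices for the example.

```python
import gzip

def chunk(body: bytes, size: int = 8) -> bytes:
    """Apply chunked transfer encoding to an (already compressed) body."""
    out = bytearray()
    for i in range(0, len(body), size):
        piece = body[i:i + size]
        out += f"{len(piece):X}\r\n".encode("ascii")  # chunk size in hex
        out += piece + b"\r\n"
    out += b"0\r\n\r\n"  # zero-length terminating chunk, empty trailer
    return bytes(out)

# Compress first, then chunk -- the chunk framing itself stays uncompressed.
wire = chunk(gzip.compress(b"Wikipedia in \r\nchunks."))
```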
Example
Encoded data
The following example contains three chunks of size 4, 7, and 11 (hexadecimal "B") octets of data.
4␍␊Wiki␍␊7␍␊pedia i␍␊B␍␊n ␍␊chunks.␍␊0␍␊␍␊
Below is an annotated version of the encoded data.
4␍␊ (chunk size is four octets)
Wiki (four octets of data)
␍␊ (end of chunk)
7␍␊ (chunk size is seven octets)
pedia i (seven octets of data)
␍␊ (end of chunk)
B␍␊ (chunk size is eleven octets)
n ␍␊chunks. (eleven octets of data)
␍␊ (end of chunk)
0␍␊ (chunk size is zero octets, no more chunks)
␍␊ (end of final chunk with zero data octets)
Note: Each chunk's size excludes the two ␍␊ bytes that terminate the data of each chunk.
Decoded data
Decoding the above example produces the following octets:
Wikipedia in ␍␊chunks.
The bytes above are typically displayed as
Wikipedia in
chunks.
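A decoder for this format can be sketched in a few lines. The following minimal Python version assumes a well-formed stream and ignores chunk extensions and trailers.

```python
def dechunk(stream: bytes) -> bytes:
    """Decode a chunked-encoded byte string.

    Minimal sketch: assumes well-formed input, ignores chunk
    extensions and any trailer after the terminating chunk.
    """
    out, pos = bytearray(), 0
    while True:
        eol = stream.index(b"\r\n", pos)                # end of the size line
        size = int(stream[pos:eol].split(b";")[0], 16)  # hex size; drop extensions
        if size == 0:                                   # zero-length terminating chunk
            return bytes(out)
        start = eol + 2
        out += stream[start:start + size]
        pos = start + size + 2                          # skip the chunk's trailing CRLF

# The example stream from above round-trips to the decoded data.
wire = b"4\r\nWiki\r\n7\r\npedia i\r\nB\r\nn \r\nchunks.\r\n0\r\n\r\n"
assert dechunk(wire) == b"Wikipedia in \r\nchunks."
```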
See also
List of HTTP header fields
References
External links
Data management
Hypertext Transfer Protocol headers | Chunked transfer encoding | [
"Technology"
] | 1,160 | [
"Data management",
"Data"
] |
7,062,356 | https://en.wikipedia.org/wiki/Anomalous%20diffusion | Anomalous diffusion is a diffusion process with a non-linear relationship between the mean squared displacement (MSD), ⟨r²(τ)⟩, and time. This behavior is in stark contrast to Brownian motion, the typical diffusion process described by Einstein and Smoluchowski, where the MSD is linear in time (namely ⟨r²(τ)⟩ = 2dDτ, with d being the number of dimensions and D the diffusion coefficient).
It has been found that equations describing normal diffusion are not capable of characterizing some complex diffusion processes, for instance, diffusion process in inhomogeneous or heterogeneous medium, e.g. porous media. Fractional diffusion equations were introduced in order to characterize anomalous diffusion phenomena.
Examples of anomalous diffusion in nature have been observed in ultra-cold atoms, harmonic spring-mass systems, scalar mixing in the interstellar medium, telomeres in the nucleus of cells, ion channels in the plasma membrane, colloidal particle in the cytoplasm, moisture transport in cement-based materials, and worm-like micellar solutions.
Classes of anomalous diffusion
Unlike typical diffusion, anomalous diffusion is described by a power law, ⟨r²(τ)⟩ = K_α τ^α, where K_α is the so-called generalized diffusion coefficient and τ is the elapsed time. The classes of anomalous diffusions are classified as follows:
α < 1: subdiffusion. This can happen due to crowding or walls. For example, a random walker in a crowded room, or in a maze, is able to move as usual for small random steps, but cannot take large random steps, creating subdiffusion. This appears for example in protein diffusion within cells, or diffusion through porous media. Subdiffusion has been proposed as a measure of macromolecular crowding in the cytoplasm.
α = 1: Brownian motion.
α > 1: superdiffusion. Superdiffusion can be the result of active cellular transport processes or due to jumps with a heavy-tail distribution.
α = 2: ballistic motion. The prototypical example is a particle moving at constant velocity v: ⟨r²(τ)⟩ = v²τ².
α > 2: hyperballistic. It has been observed in optical systems.
In 1926, using weather balloons, Lewis Fry Richardson demonstrated that the atmosphere exhibits super-diffusion. In a bounded system, the mixing length (which determines the scale of dominant mixing motions) is given by the von Kármán constant according to the equation l = κz, where l is the mixing length, κ is the von Kármán constant, and z is the distance to the nearest boundary. Because the scale of motions in the atmosphere is not limited, as in rivers or the subsurface, a plume continues to experience larger mixing motions as it increases in size, which also increases its diffusivity, resulting in super-diffusion.
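In practice, the exponent α is often estimated by fitting the MSD against lag time on log-log axes. The following Python sketch does this for simulated ordinary Brownian motion (all parameters are illustrative), recovering α ≈ 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate ordinary 1-D Brownian motion as a reference case:
# 1000 time steps for 2000 independent walkers.
steps = rng.normal(size=(1000, 2000))
x = np.cumsum(steps, axis=0)

# MSD at each lag tau, averaged over start times and walkers.
lags = np.arange(1, 200)
msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# The slope of log(MSD) vs log(tau) estimates the exponent alpha:
# ~1 here (Brownian); <1 would indicate subdiffusion, >1 superdiffusion.
alpha, log_K = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"estimated alpha = {alpha:.2f}")
```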
Models of anomalous diffusion
The classes of anomalous diffusion given above allow one to measure the exponent, but how does anomalous diffusion arise? There are many possible ways to mathematically define a stochastic process which then has the right kind of power law. Some models are given here.
Possible mechanisms include long-range correlations between the steps, continuous-time random walks (CTRW), fractional Brownian motion (fBm), and diffusion in disordered media. Currently the most studied types of anomalous diffusion processes are those involving the following:
Generalizations of Brownian motion, such as the fractional Brownian motion and scaled Brownian motion
Diffusion in fractals and percolation in porous media
Continuous time random walks
These processes have growing interest in cell biophysics where the mechanism behind anomalous diffusion has direct physiological importance. Of particular interest, works by the groups of Eli Barkai, Maria Garcia Parajo, Joseph Klafter, Diego Krapf, and Ralf Metzler have shown that the motion of molecules in live cells often show a type of anomalous diffusion that breaks the ergodic hypothesis. This type of motion require novel formalisms for the underlying statistical physics because approaches using microcanonical ensemble and Wiener–Khinchin theorem break down.
See also
Long term correlations
References
External links
Boltzmann's transformation, Parabolic law (animation)
Anomalous interface shift kinetics (Computer simulations and Experiments)
Physical chemistry | Anomalous diffusion | [
"Physics",
"Chemistry"
] | 861 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
7,062,375 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20residue | In mathematics, the Poincaré residue is a generalization, to several complex variables and complex manifold theory, of the residue at a pole of complex function theory. It is just one of a number of such possible extensions.
Given a hypersurface X ⊂ P^n defined by a degree d polynomial F and a rational n-form ω on P^n with a pole of order k > 0 on X, then we can construct a cohomology class Res(ω) ∈ H^(n−1)(X; C). If n = 1 we recover the classical residue construction.
Historical construction
When Poincaré first introduced residues he was studying period integrals of the form ∬_Γ ω for Γ ∈ H₂(P² − D), where ω was a rational differential form with poles along a divisor D. He was able to reduce this integral to an integral of the form ∫_γ Res(ω) for γ ∈ H₁(D), sending γ to the boundary of a solid ε-tube around it on the smooth locus of the divisor. If ω = q(x,y) dx∧dy / p(x,y) on an affine chart where p is irreducible of degree N and deg(q) ≤ N − 3 (so there are no poles on the line at infinity), then he gave a formula for computing this residue as Res(ω) = −q dx/(∂p/∂y) = q dy/(∂p/∂x), which are both cohomologous forms.
Construction
Preliminary definition
Given the setup in the introduction, let Ω^p(k) be the space of meromorphic p-forms on P^n which have poles of order up to k. Notice that the standard differential sends d: Ω^(p−1)(k−1) → Ω^p(k).
Define
as the rational de-Rham cohomology groups. They form a filtration corresponding to the Hodge filtration.
Definition of residue
Consider an (n−1)-cycle γ ∈ H_(n−1)(X; C). We take a tube T(γ) around γ (which is locally isomorphic to γ × S¹) that lies within the complement of X. Since this is an n-cycle, we can integrate a rational n-form ω over it and get a number. If we write this as ∫_(T(γ)) ω
then we get a linear transformation on the homology classes. Homology/cohomology duality implies that this is a cohomology class Res(ω) ∈ H^(n−1)(X; C)
which we call the residue. Notice if we restrict to the case n = 1, this is just the standard residue from complex analysis (although we extend our meromorphic 1-form to all of P¹). This definition can be summarized as the map Res: H^n(P^n − X) → H^(n−1)(X).
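To make the n = 1 case concrete, the tube integral can be written out explicitly. The following is a small worked sketch in standard complex-analysis notation (the function f and the point a are illustrative, and the 1/2πi factor is the usual normalization of the tube integral):

```latex
% Classical case n = 1: a rational 1-form on P^1 with a simple pole at z = a.
% The tube around the point a degenerates to a small circle |z - a| = eps,
% and the tube integral recovers the usual residue of complex analysis.
\[
  \omega = \frac{f(z)\,dz}{z - a}, \qquad f \text{ holomorphic near } a,
\]
\[
  \frac{1}{2\pi i} \oint_{|z-a| = \varepsilon} \frac{f(z)\,dz}{z - a}
    \;=\; f(a) \;=\; \operatorname{Res}_{z=a}(\omega).
\]
```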
Algorithm for computing this class
There is a simple recursive method for computing the residues which reduces to the classical case of a simple pole (k = 1). Recall that the residue of a 1-form a·(dw/w) + b, with b regular along w = 0, is the restriction of a to w = 0.
If we consider a chart containing X where it is the vanishing locus of w, we can write a meromorphic n-form with a pole of order k on X as ω = (dw/w^k) ∧ ρ.
Then we can write it out as ω = (1/(k−1)) (dρ/w^(k−1) − d(ρ/w^(k−1))).
This shows that the two cohomology classes [ω] = [dρ/((k−1) w^(k−1))] ∈ H^n(P^n − X)
are equal. We have thus reduced the order of the pole, hence we can use recursion to get a pole of order 1 and define the residue of ω = α ∧ (dw/w) + β as Res(ω) = α|_X.
Example
For example, consider the curve defined by the polynomial
Then, we can apply the previous algorithm to compute the residue of
Since
and
we have that
This implies that
See also
Grothendieck residue
Leray residue
Bott residue
Sheaf of logarithmic differential forms
normal crossing singularity
Adjunction formula#Poincare residue
Hodge structure
Jacobian ideal
References
Introductory
Poincaré and algebraic geometry
Infinitesimal variations of Hodge structure and the global Torelli problem - Page 7 contains general computation formula using Cech cohomology
Higher Dimensional Residues - Mathoverflow
Advanced
References
Boris A. Khesin, Robert Wendt, The Geometry of Infinite-dimensional Groups (2008) p. 171
Several complex variables | Poincaré residue | [
"Mathematics"
] | 649 | [
"Several complex variables",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
7,062,377 | https://en.wikipedia.org/wiki/Aftermath%20of%20World%20War%20II | The aftermath of World War II saw the rise of two global superpowers, the United States (U.S.) and the Soviet Union (USSR). The aftermath of World War II was also defined by the rising threat of nuclear warfare, the creation and implementation of the United Nations as an intergovernmental organization, and the decolonization of Asia, Oceania, South America and Africa by European and East Asian powers, most notably by the United Kingdom, France, and Japan.
Once allies during World War II, the U.S. and the USSR became competitors on the world stage and engaged in the Cold War, so called because it never resulted in overt, declared total war between the two powers. It was instead characterized by espionage, political subversion and proxy wars. Western Europe was rebuilt through the American Marshall Plan, whereas Central and Eastern Europe fell under the Soviet sphere of influence and eventually behind an "Iron Curtain". Europe was divided into a U.S.-led Western Bloc and a USSR-led Eastern Bloc. Internationally, alliances with the two blocs gradually shifted, with some nations trying to stay out of the Cold War through the Non-Aligned Movement. The Cold War also saw a nuclear arms race between the two superpowers, and part of the reason that the Cold War never became a "hot" war was that the Soviet Union and the United States had nuclear deterrents against each other, leading to a mutually assured destruction standoff.
As a consequence of the war, the Allies created the United Nations, an organization for international cooperation and diplomacy, similar to the League of Nations. Members of the United Nations agreed to outlaw wars of aggression in an attempt to avoid a third world war. The devastated great powers of Western Europe formed the European Coal and Steel Community, which later evolved into the European Economic Community and ultimately into the current European Union. This effort primarily began as an attempt to avoid another war between Germany and France through economic cooperation and integration, and a common market for important natural resources.
The end of the war opened the way for decolonization, as independence was granted to India and Pakistan (from the United Kingdom), Indonesia (from the Netherlands), the Philippines (from the U.S.), as well as Israel and several Arab nations from specific Mandates granted to European states by the now defunct League of Nations. Independence for the nations of Sub-Saharan Africa came in the 1960s.
The aftermath of World War II saw the rise of communist influence in East Asia with the founding of the People's Republic of China after the Chinese Communist Party emerged victorious from the Chinese Civil War in 1949, as well as with the Korean War leading to the division of the Korean Peninsula between the communist North and the Western-aligned South.
Immediate effects of World War II
At the end of the war in Europe, tens of millions of people had been killed and even more were displaced, European economies had collapsed, and much of Europe's industrial infrastructure had been destroyed.
In response, in 1947 U.S. Secretary of State George Marshall devised the "European Recovery Program", which became known as the Marshall Plan. Under the plan, from 1948 to 1952 the United States government allocated US$13 billion for the reconstruction of affected countries in Western Europe.
United Kingdom
By the end of the war, the economy of the United Kingdom was one of severe privation, as a significant portion of its national wealth had been consumed by the war effort. Until the introduction in 1941 of Lend-Lease aid from the US, the UK had been spending its assets to purchase American equipment including aircraft and ships—over £437 million on aircraft alone. Lend-Lease came just before its reserves were exhausted. Britain had placed 55% of its total labour force into war production.
In the spring of 1945, after the final defeat of Germany, the Labour Party withdrew from the wartime coalition government, to oust Winston Churchill, forcing a general election. Following a landslide victory, Labour held more than 60% of the seats in the House of Commons and formed a new government on 26 July 1945 under Clement Attlee, who had been Deputy Prime Minister in the coalition government.
Britain's war debt was described by some in the American administration as a "millstone round the neck of the British economy". Although there were suggestions for an international conference to tackle the issue, in August 1945 the U.S. announced unexpectedly that the Lend-Lease programme was to end immediately.
The abrupt withdrawal of American Lend-Lease support to Britain on 2 September 1945 dealt a severe blow to the plans of the new government. It was only with the completion of the Anglo-American loan from the United States to Great Britain on 15 July 1946 that some measure of economic stability was restored. However, the loan was made primarily to support British overseas expenditure in the immediate post-war years and not to implement the Labour government's policies for domestic welfare reforms and the nationalisation of key industries. Although the loan was agreed on reasonable terms, its conditions included what proved to be damaging fiscal constraints on sterling. From 1946 to 1948, the UK introduced bread rationing, which it had never done during the war.
Soviet Union
The Soviet Union suffered enormous losses in the war against Germany. The Soviet population decreased by about 27 million during the war; of these, 8.7 million were combat deaths. The 19 million non-combat deaths had a variety of causes: starvation in the siege of Leningrad; conditions in German prisons and concentration camps; mass shootings of civilians; harsh labour in German industry; famine and disease; conditions in Soviet camps; and service in German or German-controlled military units fighting the Soviet Union.
Soviet ex-POWs and civilians repatriated from abroad were suspected of having been Nazi collaborators, and 226,127 of them were sent to forced labour camps after scrutiny by Soviet intelligence, NKVD. Many ex-POWs and young civilians were also conscripted to serve in the Red Army. Others worked in labour battalions to rebuild infrastructure destroyed during the war.
The economy had been devastated. Roughly a quarter of the Soviet Union's capital resources were destroyed, and industrial and agricultural output in 1945 fell far short of pre-war levels. To help rebuild the country, the Soviet government obtained limited credits from Britain and Sweden; it refused assistance offered by the United States under the Marshall Plan. Instead, the Soviet Union coerced Soviet-occupied Central and Eastern Europe to supply machinery and raw materials. Germany and former Nazi satellites made reparations to the Soviet Union. The reconstruction programme emphasized heavy industry to the detriment of agriculture and consumer goods. By 1953, steel production was twice its 1940 level, but the production of many consumer goods and foodstuffs was lower than it had been in the late 1920s.
The immediate post-war period in Europe was dominated by the Soviet Union annexing, or converting into Soviet Socialist Republics, all the countries invaded and annexed by the Red Army driving the Germans out of central and eastern Europe. New satellite states were set up by the Soviets in Poland, Bulgaria, Hungary, Czechoslovakia, Romania, Albania, and East Germany; the last of these was created from the Soviet zone of occupation in Germany. Yugoslavia emerged as an independent Communist state allied but not aligned with the Soviet Union, owing to the independent nature of the military victory of the Partisans of Josip Broz Tito during World War II in Yugoslavia. The Allies established the Far Eastern Commission and Allied Council for Japan to administer their occupation of that country, while the Allied Control Council administered occupied Germany. Following the Potsdam Conference agreements, the Soviet Union occupied and subsequently annexed the strategic island of Sakhalin.
Germany
In the east, the Sudetenland reverted to Czechoslovakia following the European Advisory Commission's decision to delimit German territory to be the territory it held on 31 December 1937. Close to one-quarter of pre-war (1937) Nazi Germany was de facto annexed by the Allies; roughly 10 million Germans were either expelled from this territory or not permitted to return to it if they had fled during the war. The remainder of Germany was partitioned into four zones of occupation, coordinated by the Allied Control Council. The Saar was detached and put into economic union with France in 1947. In 1949, the Federal Republic of Germany was created out of the Western zones. The Soviet zone became the German Democratic Republic.
Germany paid reparations to the United Kingdom, France, and the Soviet Union, mainly in the form of dismantled factories, forced labour, and coal. The German standard of living was to be reduced to its 1932 level. Beginning immediately after the German surrender and continuing for the next two years, the U.S. and Britain pursued an "intellectual reparations" programme to harvest all technological and scientific know-how as well as all patents in Germany. The value of these amounted to around US$10 billion. In accordance with the Paris Peace Treaties of 1947, reparations were also assessed from the countries of Italy, Romania, Hungary, Bulgaria, and Finland.
US policy in post-war Germany from April 1945 until July 1947 had been that no help should be given to the Germans in rebuilding their nation, save for the minimum required to mitigate starvation. The Allies' immediate post-war "industrial disarmament" plan for Germany had been to destroy Germany's capability to wage war by complete or partial de-industrialization. The first industrial plan for Germany signed in 1946, required the destruction of 1,500 manufacturing plants to lower German heavy industry output to roughly 50% of its 1938 level. The dismantling of the West German industry ended in 1951. By 1950, equipment had been removed from 706 manufacturing plants, and steel production capacity had been reduced by 6.7 million tons. After lobbying by the Joint Chiefs of Staff and Generals Lucius D. Clay and George Marshall, the Truman administration accepted that economic recovery in Europe could not go forward without the reconstruction of the German industrial base on which it had previously been dependent. In July 1947, President Truman rescinded on "national security grounds" the directive that had ordered the U.S. occupation forces to "take no steps looking toward the economic rehabilitation of Germany." A new directive recognized that "[a]n orderly, prosperous Europe requires the economic contributions of a stable and productive Germany." From mid-1946 onwards Germany received U.S. government aid through the GARIOA programme. From 1948 onwards West Germany also became a minor beneficiary of the Marshall Plan. Volunteer organizations had initially been forbidden to send food, but in early 1946 the Council of Relief Agencies Licensed to Operate in Germany was founded. The prohibition against sending CARE Packages to individuals in Germany was rescinded on 5 June 1946.
Following the German surrender, the International Red Cross was prohibited from providing aid such as food or visiting POW camps for Germans inside Germany. However, after making approaches to the Allies in the autumn of 1945 it was allowed to investigate the camps in the UK and French occupation zones of Germany, as well as to provide relief to the prisoners held there. On 4 February 1946, the Red Cross was also permitted to visit and assist prisoners in the U.S. occupation zone of Germany, although only with very small quantities of food. The Red Cross petitioned successfully for improvements to be made in the living conditions of German POWs.
The German people as a whole, especially its youth, were traumatized psychologically by the previous decade of Nazi rule, with major cities and infrastructure destroyed by Allied bombardments. This trauma was multifaceted, as it permeated all levels of society, by means of the systematic Nazification of the country with the strategic creation of the Reich Ministry of Public Enlightenment and Propaganda which took over the media and all institutions, and put in place the systematic indoctrination of the very young via the creation of the Hitler Youth, the Deutsches Jungvolk, the League of German Girls and the Jungmädelbund. At the end of the war, major cities were devastated, food shortages ensued, and a wave of denazification occurred throughout occupied Germany.
France
As France was liberated from German occupation, an épuration (purge) of real and suspected Nazi collaborators began. At first, this was undertaken in an extralegal manner by the French Resistance (called the épuration sauvage, "wild purge"). French women who had had romantic liaisons with German soldiers were publicly humiliated and had their heads shaved. There was also a wave of summary executions estimated to have killed about 10,000 people.
When the Provisional Government of the French Republic established control, the Épuration légale ("legal purge") began. There were no international war crimes trials for French collaborators, who were tried in the domestic courts. Approximately 300,000 cases were investigated; 120,000 people were given various sentences including 6,763 death sentences (of which only 791 were carried out). Most convicts were given amnesty a few years later.
Italy
The aftermath of World War II left Italy with an anger against the monarchy for its endorsement of the Fascist regime for the previous twenty years. These frustrations contributed to a revival of the Italian republican movement. In the 1946 Italian constitutional referendum, held on 2 June, a day celebrated since as Festa della Repubblica, the Italian monarchy was abolished, having been associated with the deprivations of the war and the Fascist rule, especially in the North, and Italy became a republic. This was the first time that Italian women voted at the national level, and the second time overall considering the local elections that were held a few months earlier in some cities.
King Victor Emmanuel III's son, King Umberto II, was forced to abdicate and exiled. The Republican Constitution was approved on 1 January 1948, resulting from the work of a Constituent Assembly formed by the representatives of all the anti-fascist forces that contributed to the defeat of Nazi and Fascist forces during the liberation of Italy. Unlike in Germany and Japan, no war crimes tribunals were held against Italian military and political leaders, though the Italian resistance summarily executed some of them (such as Mussolini) at the end of the war; the Togliatti amnesty, taking its name from the Communist Party secretary at the time, pardoned all wartime common and political crimes in 1946.
The 1947 Treaty of Peace with Italy spelled the end of the Italian colonial empire, along with other border revisions, such as the transfer of the Italian Islands of the Aegean to the Kingdom of Greece, the transfer to France of Briga and Tenda, and minor revisions of the Franco-Italian border. Moreover, under the Treaty of Peace with Italy, Istria, Kvarner, most of the Julian March, as well as the Dalmatian city of Zara, were annexed by Yugoslavia, causing the Istrian–Dalmatian exodus, which led to the emigration of between 230,000 and 350,000 local ethnic Italians (Istrian Italians and Dalmatian Italians), the others being ethnic Slovenians, ethnic Croatians, and ethnic Istro-Romanians, choosing to maintain Italian citizenship, towards Italy and, in smaller numbers, towards the Americas, Australia and South Africa.
The 1947 Treaty of Peace compelled Italy to pay $360 million (US dollars at 1938 prices) in war reparations: $125 million to Yugoslavia, $105 million to Greece, $100 million to the Soviet Union, $25 million to Ethiopia and $5 million to Albania. In 1954 the Free Territory of Trieste, an independent territory between northern Italy and Yugoslavia under direct responsibility of the United Nations Security Council, was divided between the two states, Italy and Yugoslavia. The Italian border that applies today has existed since 1975, when Trieste was formally re-annexed to Italy after the Treaty of Osimo. In 1950, Italian Somaliland was made a United Nations Trust Territory under Italian administration until 1 July 1960.
Austria
The Federal State of Austria had been annexed by Germany in 1938 (the Anschluss, a union banned by the Treaty of Versailles). Austria (called Ostmark by the Germans) was separated from Germany and divided into four zones of occupation. With the Austrian State Treaty, these zones were reunited in 1955 to become the Republic of Austria.
Japan
Following the war, the Allies rescinded the Japanese Empire's pre-war annexations such as Manchuria, and Korea became militarily occupied by the United States in the south and by the Soviet Union in the north. The Philippines and Guam were returned to the United States. Burma, Malaya, and Singapore were returned to Britain, and Indochina to France. The Dutch East Indies was to be handed back to the Dutch, but this was resisted, leading to the Indonesian war of independence. At the Yalta Conference, U.S. president Franklin D. Roosevelt had secretly traded the Japanese Kurils and south Sakhalin to the Soviet Union in return for Soviet entry into the war with Japan. The Soviet Union annexed the Kuril Islands, provoking the Kuril Islands dispute, which is ongoing, as Russia continues to occupy the islands.
Hundreds of thousands of Japanese were forced to relocate to the Japanese main islands. Okinawa became a main U.S. staging point. The U.S. covered large areas of it with military bases and continued to occupy it until 1972, years after the end of the occupation of the main islands. The bases remain. To skirt the Geneva Convention, the Allies classified many Japanese soldiers as Japanese Surrendered Personnel (JSP) instead of POWs and used them as forced labour until 1947. The UK, France, and the Netherlands used JSP to support their military operations in the region after World War II. General Douglas MacArthur established the International Military Tribunal for the Far East. The Allies collected reparations from Japan.
To further remove Japan as a potential future military threat, the Far Eastern Commission decided to de-industrialize Japan, to reduce the Japanese standard of living to what prevailed between 1930 and 1934. In the end, the de-industrialisation programme in Japan was implemented to a lesser degree than the one in Germany. Japan received emergency aid from GARIOA, as did Germany. In early 1946, the Licensed Agencies for Relief in Asia were formed and permitted to supply Japanese with food and clothes. In April 1948 the Johnston Committee Report recommended that the economy of Japan should be reconstructed due to the high cost to US taxpayers of continuous emergency aid.
Survivors of the atomic bombings of Hiroshima and Nagasaki, known as hibakusha (被爆者), were ostracized by Japanese society. Japan provided no special assistance to these people until 1952. By the 65th anniversary of the bombings, total casualties from the initial attack and later deaths reached about 270,000 in Hiroshima and 150,000 in Nagasaki. About 230,000 hibakusha were still alive, and about 2,200 were suffering from radiation-caused illnesses.
Finland
In the Winter War of 1939–1940, the Soviet Union invaded neutral Finland and annexed some of its territory. From 1941 until 1944, Finland aligned itself with Nazi Germany in a failed effort to regain lost territories from the Soviets. Finland retained its independence following the war but remained subject to Soviet-imposed constraints in its domestic affairs.
The Baltic states
In 1940 the Soviet Union invaded and annexed the neutral Baltic states, Estonia, Latvia, and Lithuania. In June 1941, the Soviet governments of the Baltic states carried out mass deportations of "enemies of the people"; as a result, many treated the invading Nazis as liberators when they invaded only a week later. The Atlantic Charter promised self-determination to people deprived of it during the war. The British Prime Minister, Winston Churchill, argued for a weaker interpretation of the Charter to permit the Soviet Union to continue to control the Baltic states. In March 1944 the US accepted Churchill's view that the Atlantic Charter did not apply to the Baltic states. With the return of Soviet troops at the end of the war, the Forest Brothers mounted a guerrilla war. This continued until the mid-1950s.
The Philippines
An estimated one million military and civilian Filipinos were killed from all causes; of these 131,028 were listed as killed in seventy-two war crime events. According to a United States analysis released years after the war, U.S. casualties were 10,380 dead and 36,550 wounded; Japanese dead were 255,795.
Population displacement
As a result of the new borders drawn by the victorious nations, large populations suddenly found themselves in hostile territory. The Soviet Union took over areas formerly controlled by Germany, Finland, Poland, and Japan. Poland lost the Kresy region (about half of its pre-war territory) and received most of Germany east of the Oder–Neisse line, including the industrial regions of Silesia. The German state of the Saar was temporarily a protectorate of France but later returned to German administration. As set forth at Potsdam, approximately 12 million people were expelled from Germany, including seven million from Germany proper, and three million from the Sudetenland.
During the war, the United States government interned approximately 110,000 Japanese Americans and Japanese who lived along the Pacific coast of the United States in the wake of Imperial Japan's attack on Pearl Harbor. Canada interned approximately 22,000 Japanese Canadians, 14,000 of whom were born in Canada. After the war, some internees chose to return to Japan, while most remained in North America.
Poland
The Soviet Union expelled at least 2 million Poles from the east of the new border approximating the Curzon Line. This estimate is uncertain as neither the Polish Communist government nor the Soviet government kept track of the number of expelled people. The number of Polish citizens inhabiting Polish borderlands (Kresy region) was about 13 million before World War II broke out according to official Polish statistics. Polish citizens killed in the war that originated from the Polish borderlands territory (killed by either the German Nazi regime or the Soviet regime, or expelled to distant parts of Siberia) were accounted as Russian, Ukrainian, or Belarusian casualties of war in official Soviet historiography. This fact imposes additional difficulties in making the correct estimation of the number of Polish citizens forcibly transferred after the war. The border change also reversed the results of the 1919–1920 Polish–Soviet War. Former Polish cities such as Lwów came under the control of the Ukrainian Soviet Socialist Republic. Additionally, the Soviet Union transferred more than two million people within their borders; these included Germans, Finns, Crimean Tatars, and Chechens.
Rape during occupation and liberation
In Europe
As Soviet troops marched across the Balkans, they committed rapes and robberies in Romania, Hungary, Czechoslovakia and Yugoslavia. The population of Bulgaria was largely spared of this treatment, possibly due to a sense of ethnic kinship or to the leadership of Marshal Fyodor Tolbukhin. The population of Germany was treated significantly worse. Rape and murder of German civilians were as bad as, and sometimes worse than, Nazi propaganda had anticipated. Political officers encouraged Soviet troops to seek revenge and terrorise the German population. On "the basis of Hochrechnungen (projections or estimations)", "1.9 million German women altogether were raped at the end of the war by Red Army soldiers." About one-third of all German women in Berlin were raped by Soviet forces. A substantial minority were raped multiple times. In Berlin, contemporary hospital records indicate between 95,000 and 130,000 women were raped by Soviet troops. About 10,000 of these women died, mostly by suicide. Over 4.5 million Germans fled towards the West. The Soviets initially had no rules against their troops "fraternising" with German women, but by 1947 they started to isolate their troops from the German population in an attempt to stop rape and robbery by the troops. Not all Soviet soldiers participated in these activities.
Foreign reports of Soviet brutality were denounced as false. Rape, robbery, and murder were blamed on German bandits impersonating Soviet soldiers. Some justified Soviet brutality towards German civilians based on previous brutality of German troops toward Russian civilians. Until the reunification of Germany, East German histories virtually ignored the actions of Soviet troops, and Russian histories still tend to do so. Reports of mass rapes by Soviet troops were often dismissed as anti-Communist propaganda or the normal byproduct of war.
Rapes also occurred under other Allied forces in Europe, though the majority were committed by Soviet troops. In a letter to the editor of Time published in September 1945, a United States Army sergeant wrote, "Our own Army and the British Army along with ours have done their share of looting and raping ... This offensive attitude among our troops is not at all general, but the percentage is large enough to have given our Army a pretty black name, and we too are considered an army of rapists." Robert Lilly's analysis of military records led him to conclude about 14,000 rapes occurred in Britain, France, and Germany at the hands of U.S. soldiers between 1942 and 1945. Lilly assumed that only 5% of rapes by American soldiers were reported, making 17,000 GI rapes a possibility, while analysts estimate that 50% of (ordinary peacetime) rapes are reported. Supporting Lilly's lower figure is the "crucial difference" that for World War II military rapes "it was the commanding officer, not the victim, who brought charges". According to German historian Miriam Gebhardt, as many as 190,000 women were raped by U.S. soldiers in Germany.
German soldiers left many war children behind in nations such as France and Denmark, which were occupied for an extended period. After the war, the children and their mothers often suffered recriminations. In Norway, the "Tyskerunger" (German-kids) suffered greatly.
During the Italian campaign, the Goumiers, French Moroccan colonial troops attached to the French Expeditionary Forces, have been accused of committing rape and murder against the Italian peasant communities, mostly targeting civilian women and girls, as well as a few men and boys. In Italy the victims of these acts were described as Marocchinate meaning literally "Moroccaned" (or people who have been subjected to acts committed by Moroccans). According to Italian victims associations, a total of more than 7,000 civilians, including children, were raped by Goumiers.
In Japan
In the first few weeks of the American military occupation of Japan, rape and other violent crime was widespread in naval ports like Yokohama and Yokosuka but declined shortly afterward. There were 1,336 reported rapes during the first 10 days of the occupation of Kanagawa prefecture. Historian Toshiyuki Tanaka relates that in Yokohama, the capital of the prefecture, there were 119 known rapes in September 1945.
Historians Eiji Takemae and Robert Ricketts state that "When U.S. paratroopers landed in Sapporo, an orgy of looting, sexual violence, and drunken brawling ensued. Gang rapes and other sex atrocities were not infrequent" and some of the rape victims committed suicide.
General Robert L. Eichelberger, the commander of the U.S. Eighth Army, recorded that in the one instance when the Japanese formed a self-help vigilante guard to protect women from off-duty GIs, the Eighth Army ordered armored vehicles in battle array into the streets and arrested the leaders, who received long prison terms.
According to Takemae and Ricketts, members of the British Commonwealth Occupation Force (BCOF) were also involved in rapes.
Rape committed by U.S. soldiers occupying Okinawa was also a notable phenomenon, as documented by the Okinawan historian Oshiro Masayasu, a former director of the Okinawa Prefectural Historical Archives.
According to Toshiyuki Tanaka, 76 cases of rape or rape-murder were reported during the first five years of the American occupation of Okinawa. However, he claims this is probably not the true figure, as most cases were unreported.
Comfort women for Japanese soldiers
During World War II the Japanese military established brothels filled with "comfort women", a euphemism for the 200,000 girls and women who were forced into sexual slavery for Japanese soldiers. In Confucian nations like Korea and China, where premarital sex is considered shameful, the subject of the "comfort women" was ignored for decades after 1945 as the victims were considered pariahs. Dutch comfort women brought a successful case before the Batavia Military Tribunal in 1948.
Post-war tensions
Europe
The alliance between the Western Allies and the Soviet Union began to deteriorate even before the war was over, when Stalin, Roosevelt, and Churchill exchanged a heated correspondence over whether the Polish government-in-exile, backed by Roosevelt and Churchill, or the Provisional Government, backed by Stalin, should be recognised. Stalin won.
Many Allied leaders felt that war between the United States and the Soviet Union was likely. On 19 May 1945, the American Under-Secretary of State Joseph Grew went so far as to say that it was inevitable.
On 5 March 1946, in his "Sinews of Peace" (Iron Curtain) speech at Westminster College in Fulton, Missouri, Winston Churchill said "a shadow" had fallen over Europe. He described Stalin as having dropped an "Iron Curtain" between East and West. Stalin responded by charging that co-existence between communist countries and the West was impossible. In mid-1948 the Soviet Union imposed a blockade on the Western zone of occupation in Berlin.
Due to the rising tension in Europe and concerns over further Soviet expansion, American planners came up with a contingency plan code-named Operation Dropshot in 1949. It considered possible nuclear and conventional war with the Soviet Union and its allies to counter a Soviet takeover of Western Europe, the Near East, and parts of Eastern Asia that they anticipated would begin around 1957. In response, the U.S. would saturate the Soviet Union with atomic and high-explosive bombs, and then invade and occupy the country. In later years, to reduce military expenditures while countering Soviet conventional strength, President Dwight Eisenhower would adopt a strategy of massive retaliation, relying on the threat of a U.S. nuclear strike to prevent non-nuclear incursions by the Soviet Union in Europe and elsewhere. The approach entailed a major buildup of U.S. nuclear forces and a corresponding reduction in America's non-nuclear ground and naval strength. The Soviet Union viewed these developments as "atomic blackmail".
In Greece, civil war broke out in 1946 between Anglo-American-supported royalist forces and communist-led forces, with the royalist forces emerging as the victors. The U.S. launched a massive programme of military and economic aid to Greece and to neighbouring Turkey, arising from a fear that the Soviet Union stood on the verge of breaking through the NATO defence line to the oil-rich Middle East. On 12 March 1947, to gain Congressional support for the aid, President Truman described the aid as promoting democracy in defence of the "Free World", a principle that became known as the Truman Doctrine.
The U.S. sought to promote an economically strong and politically united Western Europe to counter the threat posed by the Soviet Union. This was done openly using tools such as the European Recovery Program, which encouraged European economic integration. The International Authority for the Ruhr, designed to keep German industry down and controlled, evolved into the European Coal and Steel Community, a founding pillar of the European Union. The United States also worked covertly to promote European integration, for example using the American Committee on United Europe to funnel funds to European federalist movements. To ensure that Western Europe could withstand the Soviet military threat, the Western European Union was founded in 1948 and NATO in 1949. The first NATO Secretary General, Lord Ismay, famously stated the organisation's goal was "to keep the Russians out, the Americans in, and the Germans down". However, without the manpower and industrial output of West Germany no conventional defence of Western Europe had any hope of succeeding. To remedy this, in 1950 the US sought to promote the European Defence Community, which would have included a rearmed West Germany. The attempt was dashed when the French Parliament rejected it. On 9 May 1955, West Germany was instead admitted to NATO; the immediate result was the creation of the Warsaw Pact five days later.
The Cold War also saw the creation of propaganda and espionage organisations such as Radio Free Europe, the Information Research Department, the Gehlen Organization, the Central Intelligence Agency, the Special Activities Division, and the Ministry for State Security, as well as the radicalization and proliferation of numerous far-left and far-right terrorist organizations in Western European countries (Italy, France, West Germany, Belgium, Francoist Spain, and the Netherlands), with spillovers in Northern and Southeastern Europe.
Asia
In Asia, the surrender of Japanese forces was complicated by the split between East and West as well as by the movement toward national self-determination in European colonial territories.
India
Decisions to decolonize British India led to an agreement to partition the country along religious lines into two independent dominions: India and Pakistan. The partition resulted in communal violence and massive displacements of the population. It is often described as the largest mass human migration and one of the largest refugee crises in history.
China
As agreed at the Yalta Conference, the Soviet Union declared war on Japan. Soviet forces invaded Manchuria, which led to the collapse of Manchukuo and the expulsion of all Japanese settlers from the puppet state. The Soviet Union dismantled the industrial base the Japanese had built up in Manchuria, and the region, being under Soviet occupation, subsequently became a base for the Chinese Communist forces.
Following the end of the war, the Kuomintang (KMT) party (led by Generalissimo Chiang Kai-shek) and the Communist Chinese forces (CCF) resumed fighting each other, having temporarily suspended their conflict in order to fight Japan. The fight against the Japanese occupiers had strengthened popular support among the Chinese people for the Communist forces, while it weakened the KMT, which had depleted its strength fighting the Japanese. Full-scale war between the KMT and the CCF broke out in June 1946. Despite U.S. support for the Kuomintang, the Communist forces ultimately prevailed and established the People's Republic of China (PRC) on the mainland. The KMT forces retreated to the island of Taiwan in 1949, where they relocated the government of the Republic of China (ROC).
With the Communist victory in the civil war, the Soviet Union gave up its claim to military bases in China that were given to it by its Western Allies at the end of World War II.
While major hostilities had largely ceased by 1950, intermittent clashes between the two sides occurred from 1950 to 1979. Taiwan unilaterally declared the civil war over in 1991, but no formal peace treaty or truce has been signed, and the PRC continues to officially regard Taiwan as a breakaway province that rightfully belongs to it.
The outbreak of the Korean War a few months after the conclusion of the Chinese Civil War and continued U.S. support for the KMT were the main reasons that prevented the PRC from invading Taiwan.
Korea
At the Yalta Conference, the Allies agreed that an undivided post-war Korea would be placed under four-power multinational trusteeship. After Japan's surrender, this agreement was modified to a joint Soviet-American occupation of Korea. The agreement was that Korea would be divided and occupied by the Soviets from the north and the Americans from the south.
Korea, formerly under Japanese rule, and which had been partially occupied by the Red Army following the Soviet Union's entry into the war against Japan, was divided at the 38th parallel on the orders of the U.S. Department of War. A U.S. military government in southern Korea was established in the capital city of Seoul. The American military commander, Lt. Gen. John R. Hodge, enlisted many former Japanese administrative officials to serve in this government. North of the military line, the Soviets administered the disarming and demobilisation of repatriated Korean nationalist guerrillas who had fought on the side of Chinese nationalists against the Japanese in Manchuria during World War II. Simultaneously, the Soviets enabled a build-up of heavy armaments by pro-communist forces in the north. The military line became a political line in 1948, when separate republics emerged on both sides of the 38th parallel, each claiming to be the legitimate government of Korea. The division culminated in the North invading the South two years later, starting the Korean War.
Malaya
Labour and civil unrest broke out in the British colony of Malaya in 1946. A state of emergency was declared by the colonial authorities in 1948 after an outbreak of acts of terrorism. The situation deteriorated into a full-scale anti-colonial insurgency, or Anti-British National Liberation War as the insurgents referred to it, led by the Malayan National Liberation Army (MNLA), the military wing of the Malayan Communist Party. The Malayan Emergency would endure for the next 12 years, ending in 1960. In 1967, communist leader Chin Peng reopened hostilities, culminating in a second emergency that lasted until 1989.
French Indochina (Vietnam, Laos, and Cambodia)
Events during WWII in the colony of French Indochina (consisting of modern-day Vietnam, Laos and Cambodia) set the stage for the First Indochina War which in turn led to the Vietnam War.
During WWII, the Vichy-aligned French colonial authorities cooperated with the Japanese invaders. The communist-controlled common front Viet Minh (supported by the Allies) was formed among the Vietnamese in the colony in 1941 to fight for the independence of Vietnam, against both the Japanese and the prewar French authorities. After the Vietnamese famine of 1945, support for the Viet Minh was bolstered as the front launched a rebellion, sacking rice warehouses and urging the Vietnamese to refuse to pay taxes. Because the French colonial authorities had started to hold secret talks with the Free French, the Japanese interned them on 9 March 1945. When Japan surrendered in August, this created a power vacuum, and the Viet Minh took power in the August Revolution, declaring the Democratic Republic of Vietnam (DRV). However, the Allies (including the USSR) all agreed that the area belonged to the French. Chinese forces moved in from the north and British forces from the south (as the French were unable to do so immediately themselves) and then handed power to the French, a process completed by March 1946. Attempts to integrate the Democratic Republic of Vietnam with French rule failed, and the Viet Minh launched their rebellion against French rule that same year, starting the First Indochina War (the Viet Minh organized common fronts to fight the French in Laos and Cambodia as well). In March 1949, France recognized Vietnam's independence within the French Union and gradually gave it autonomy to counter the communists, in the international context of decolonization.
The war ended in July 1954 with French withdrawal and a partition of Vietnam that was intended to be temporary until elections could be held. The DRV held the north, while South Vietnam formed a separate republic under the control of Ngo Dinh Diem, who was backed by the U.S. in his refusal to hold elections. The communist party of the south eventually organized the common front NLF to fight to unite south and north under the Democratic Republic of Vietnam, and thus began the Vietnam War, which ended with the North conquering the South in 1975.
Dutch East Indies (Indonesia)
Japan invaded and occupied the Dutch East Indies during the war and replaced the colonial government with a new administration. Although the top positions were held by Japanese officers, the internment of all Dutch citizens meant that Indonesians filled many leadership and administrative positions. Following the Japanese surrender in August 1945, Indonesian nationalist leaders such as Sukarno and Mohammad Hatta declared Indonesia independent. A four-and-a-half-year struggle followed as the Dutch tried to re-establish their rule in the colony, using a significant portion of their Marshall Plan aid to this end. The Dutch were aided by British forces during the first phase of the conflict, until the United Kingdom withdrew. The British also initially used 35,000 Japanese Surrendered Personnel to support their military operations in Indonesia. Although Dutch forces re-occupied most of Indonesia, a guerrilla campaign supported by the majority of Indonesians ensued, and international opinion ultimately favoured independence. In December 1949, the Netherlands formally recognised Indonesian sovereignty.
Wartime criminals recruited as Cold War assets
Covert operations and espionage
British covert operations in the Baltic States, which began in 1944 against the Nazis, escalated following the war. In Operation Jungle, the Secret Intelligence Service (known as MI6) recruited and trained Estonians, Latvians, and Lithuanians for clandestine work in the Baltic states between 1948 and 1955. Leaders of the operation included Alfons Rebane, Stasys Žymantas, and Rūdolfs Silarājs. The agents were transported under the cover of the "British Baltic Fishery Protection Service". They launched from British-occupied Germany, using a converted World War II E-boat captained and crewed by former members of the wartime German navy. British intelligence also trained and infiltrated anti-communist agents into the Soviet Union from across the Finnish border, with orders to assassinate Soviet officials. In the end, counter-intelligence supplied to the KGB by Kim Philby allowed the KGB to penetrate and ultimately gain control of MI6's entire intelligence network in the Baltic states.
Vietnam and the Middle East would later damage the reputation gained by the U.S. during its successes in Europe.
The KGB believed that the Third World rather than Europe was the arena in which it could win the Cold War. Moscow would in later years fuel an arms buildup in Africa. In later years, African countries used as proxies in the Cold War would often become "failed states" of their own.
In 2014, The New York Times reported that "In the decades after World War II, the Central Intelligence Agency (CIA) and other United States agencies employed at least a thousand Nazis as Cold War spies and informants and, as recently as the 1990s, concealed the government's ties to some still living in America, newly disclosed records and interviews show." According to Timothy Naftali, "The CIA's central concern [in recruiting former Nazi collaborators] was not so much the extent of the criminal's guilt as the likelihood that the agent's criminal past could remain a secret."
Recruitment of former enemy scientists
When the divisions of postwar Europe began to emerge, the war crimes programmes and denazification policies of Britain and the United States were relaxed in favour of recruiting German scientists, especially nuclear and long-range rocket scientists. Many of these, prior to their capture, had worked on developing the German V-2 long-range rocket at the Baltic coast German Army Research Center Peenemünde. Western Allied occupation force officers in Germany were ordered to refuse to cooperate with the Soviets in sharing captured wartime secret weapons. To recover advanced German aviation technology and personnel, the British sent the Fedden Mission into Germany to contact its aviation technology centers and key personnel; the United States ran a parallel effort with its own Operation Lusty aviation technology and knowledge recovery program.
In Operation Paperclip, beginning in 1945, the United States imported 1,600 German scientists and technicians, as part of the intellectual reparations owed to the U.S. and the UK, including about $10 billion in patents and industrial processes. In late 1945, three German rocket-scientist groups arrived in the U.S. for duty at Fort Bliss, Texas, and at White Sands Proving Grounds, New Mexico, as "War Department Special Employees".
The wartime activities of some Operation Paperclip scientists would later be investigated. Arthur Rudolph left the United States in 1984 in order to avoid prosecution. Similarly, Georg Rickhey, who had come to the United States under Operation Paperclip in 1946, was returned to Germany to stand trial at the Mittelbau-Dora war crimes trial in 1947. Following his acquittal, he returned to the United States in 1948 and eventually became a U.S. citizen.
The Soviets began Operation Osoaviakhim in 1946. NKVD and Soviet army units effectively deported thousands of military-related technical specialists from the Soviet occupation zone of post-war Germany to the Soviet Union. The Soviets used 92 trains to transport the specialists and their families, an estimated 10,000–15,000 people. Much related equipment was also moved, the aim being to virtually transplant research and production centres, such as the relocated V-2 rocket centre at Mittelwerk Nordhausen, from Germany to the Soviet Union. Among the people moved were Helmut Gröttrup and about two hundred scientists and technicians from Mittelwerk. Personnel were also taken from AEG, BMW's Stassfurt jet propulsion group, IG Farben's Leuna chemical works, Junkers, Schott AG, Siebel, Telefunken, and Carl Zeiss AG.
The operation was commanded by NKVD deputy Colonel General Serov, outside the control of the local Soviet Military Administration. The major reason for the operation was the Soviet fear of being condemned for noncompliance with Allied Control Council agreements on the liquidation of German military installations. Some Western observers thought Operation Osoaviakhim was a retaliation for the failure of the Socialist Unity Party in elections, though Osoaviakhim was clearly planned before that.
Demise of the League of Nations and the founding of the United Nations
As a general consequence of the war and in an effort to maintain international peace, the Allies formed the United Nations (UN), which officially came into existence on 24 October 1945. The UN replaced the defunct League of Nations (LN) as an intergovernmental organization. The LN was formally dissolved on 20 April 1946 but had in practice ceased to function in 1939, being unable to stop the outbreak of World War II. The UN inherited some of the bodies of the LN, such as the International Labour Organization.
League of Nations mandates, mostly territories that had changed hands in World War I, became United Nations trust territories. South West Africa, an exception, was still governed under the terms of the original mandate; as the successor body to the League, the UN assumed a supervisory role over the territory. The Free City of Danzig, a semi-autonomous city-state that had been partly overseen by the League, became part of Poland.
The UN adopted the Universal Declaration of Human Rights in 1948, "as a common standard of achievement for all peoples and all nations." The Soviet Union abstained from voting on adoption of the declaration. The U.S. did not ratify the social and economic rights sections.
The five major Allied powers were given permanent membership in the United Nations Security Council. The permanent members can veto any United Nations Security Council resolution, the only UN decisions that are binding according to international law. The five powers at the time of founding were: the United States of America, the United Kingdom, France, the Soviet Union and the Republic of China. The Republic of China lost the Chinese Civil War and retreated to the island of Taiwan by 1950 but continued to be a permanent member of the Council even though the de facto state in control of mainland China was the People's Republic of China (PRC). This was changed in 1971 when the PRC was given the permanent membership previously held by the Republic of China. Russia inherited the permanent membership of the Soviet Union in 1991 after the dissolution of that state.
Unresolved conflicts
Japanese holdouts persisted on various islands in the Pacific Theatre until at least 1974. Although all hostilities are now resolved, a peace treaty has never been signed between Japan and Russia due to the Kuril Islands dispute.
Economic aftermath
By the end of the war, the European economy had collapsed with some 70% of its industrial infrastructure destroyed. The property damage in the Soviet Union consisted of complete or partial destruction of 1,710 cities and towns, 70,000 villages/hamlets, and 31,850 industrial establishments. The strength of the economic recovery following the war varied throughout the world, though in general, it was quite robust, particularly in the United States.
In Europe, West Germany, after having continued to decline economically during the first years of the Allied occupation, later experienced a remarkable recovery, and had by the end of the 1950s doubled production from its pre-war levels. Italy came out of the war in poor economic condition, but by the 1950s, the Italian economy was marked by stability and high growth. France rebounded quickly and enjoyed rapid economic growth and modernisation under the Monnet Plan. The UK, by contrast, was in a state of economic ruin after the war and continued to experience relative economic decline for decades to follow.
The Soviet Union also experienced a rapid increase in production in the immediate post-war era. Japan experienced rapid economic growth, becoming one of the most powerful economies in the world by the 1980s. China, following the conclusion of its civil war, was essentially bankrupt. By 1953, economic restoration seemed fairly successful, as production had resumed pre-war levels. Although China's growth rate mostly persisted, it was severely disrupted by the economic experiments of the Great Leap Forward and by the resulting famine.
At the end of the war, the United States produced roughly half of the world's industrial output. The US had been spared industrial and civilian devastation, and much of its pre-war industry had been converted to wartime usage. As a result, with its industrial and civilian base in much better shape than most of the world, the U.S. embarked on an economic expansion unseen in human history. U.S. gross domestic product increased from $228 billion in 1945 to just under $1.7 trillion in 1975.
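As a rough check on the scale of that expansion (a back-of-the-envelope calculation on the two endpoint figures above, in nominal dollars and not adjusted for inflation):

    \left( \frac{1{,}700\ \text{billion}}{228\ \text{billion}} \right)^{1/30} \approx 1.069

That is, the two figures imply an average nominal growth rate of roughly 6.9% per year over the three decades.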
Denazification
In 1951, several laws were passed ending denazification. As a result, many people with a Nazi past ended up again in the political apparatus of West Germany. West German President Walter Scheel and Chancellor Kurt Georg Kiesinger were both former members of the Nazi Party. In 1957, 77% of the West German Ministry of Justice's senior officials were former Nazi Party members. Konrad Adenauer's State Secretary Hans Globke had played a major role in drafting the antisemitic Nuremberg Race Laws in Nazi Germany.
Unexploded ordnance
Unexploded ordnance continues to pose a danger in the present day. In 2017, fifty thousand people were evacuated from Hanover so that World War II-era bombs could be defused. As of 2023, it is still thought that thousands of unexploded bombs remain from World War II.
Environment
When World War II ended, scientists did not have procedures for the safe disposal of chemical arsenals. At the direction of the UK, US and Russia, chemical weapons were loaded onto ships by the metric ton and dumped into the sea. The exact locations of the dumping are not known due to poor record keeping, but it is estimated that 1 million metric tons of chemical weapons remain on the ocean floor, where they are rusting and pose the risk of leaks. Sulfur mustard exposure has been reported in some parts of coastal Italy, and sulfur mustard bombs have been found as far away as Delaware, likely brought in with shellfish cargo.
See also
Aftermath of the Holocaust
Bretton Woods system
Western Union
Demobilization of United States Armed Forces after World War II
Danube River Conference of 1948
Operation Unthinkable
Neo-fascism
Post-fascism
Aftermath of World War II in Bavaria
References
Bibliography
Further reading
Black, Monica. A Demon-Haunted Land: Witches, Wonder Doctors, and the Ghosts of the Past in Post–WWII Germany (Metropolitan Books, 2020).
Gatrell, Peter. The unsettling of Europe: the great migration, 1945 to the present (Penguin UK, 2019).
Hilton, Laura J. "Who was 'worthy'? How empathy drove policy decisions about the uprooted in occupied Germany, 1945–1948". Holocaust and Genocide Studies 32.1 (2018): 8–28.
Hoffmann, Steven A. "Japan: Foreign Occupation and Democratic Transition". in Establishing Democracies (Routledge, 2021) pp. 115–148.
Kehoe, Thomas J., and Elizabeth M. Greenhalgh. "Bias in the Treatment of Non-Germans in the British and American Military Government Courts in Occupied Germany, 1945–46". Social Science History 44.4 (2020): 641–666.
Konrád, Ota, Boris Barth, and Jaromír Mrňka. "Reshaping the Nation: An Introduction to the Collective Identities and Post-war Violence in Europe, 1944–1948". in Collective Identities and Post-War Violence in Europe, 1944–48 (Palgrave Macmillan, Cham, 2022) pp. 1–16.
Lundtofte, Henrik. "Purges, Patriotism, and Political Violence: The Danish Case 1944–1945". in Collective Identities and Post-War Violence in Europe, 1944–48 (Palgrave Macmillan, Cham, 2022) pp. 129–164.
McClellan, Dorothy. S., and Knez, Nikola. "Post-World War II Forced Repatriations to Yugoslavia: Genocide's Legacy for Democratic Nation Building". International Journal of Social Sciences 7.2 (2018): 62–91.
Mayers, David. America and the postwar world: Remaking international society, 1945–1956 (Routledge, 2018).
Naimark, Norman M. "Violence in the European Interregnum, 1944–1947". in Collective Identities and Post-War Violence in Europe, 1944–48 (Palgrave Macmillan, Cham, 2022) pp. 17–33.
Piketty, Guillaume. "From the Capitoline Hill to the Tarpeian Rock? Free French coming out of war". European Review of History: Revue européenne d'histoire 25.2 (2018): 354–373.
Pritchard, Gareth. "East-Central Europe: From Nazi rule to communism, 1943–1948". in The Routledge History of the Second World War (Routledge, 2021) pp. 671–686.
Strupp, Christoph. "The Port of Hamburg in the 1940s and 1950s: Physical Reconstruction and Political Restructuring in the Aftermath of World War II". Journal of Urban History 47.2 (2021): 354–372.
Szulc, Tad (1990). Then and Now: How the World Has Changed since W.W. II. New York: W. Morrow & Co. 515 p.
Tippner, Anja. "Postcatastrophic entanglement? Contemporary Czech writers remember the holocaust and post-war ethnic cleansing". Memory Studies 14.1 (2021): 80–94.
Ward, Robert E., and Yoshikazu Sakamoto, eds. Democratizing Japan: The Allied Occupation (University of Hawaii Press, 2019).
External links
Nuclear warfare
World War II | Aftermath of World War II | [
"Chemistry"
] | 11,219 | [
"Radioactivity",
"Nuclear warfare"
] |
7,062,469 | https://en.wikipedia.org/wiki/Earthquake%20scenario | Earthquake scenario is a planning tool to determine the appropriate emergency responses or building systems in areas exposed to earthquake hazards. It uses the basics of seismic hazard studies, but usually places a specified earthquake on a specific fault, most likely near a high-population area. Most scenarios relate directly to urban seismic risk, and to seismic risk in general.
Some earthquake scenarios follow some of the latest methodologies from the nuclear industry, namely a Seismic Margin Assessment (SMA). In the process, a Review Level Earthquake (RLE) is chosen that challenges the system, has a reasonable probability, and is not totally overwhelming.
Scenarios have been developed for Seattle, New York City, and many of the faults in California. In general, areas west of the Rockies use urban earthquakes of M7 (moment magnitude), and eastern cities use an M6.
Some eastern cities do not have an earthquake scenario. As an example, the Greater Toronto Area in Ontario, Canada has local seismicity with about as much chance of an M6 as most of the moderate earthquake zones of Eastern North America (ENA), including New York City. For such an area, the RLE would be an M6 located at the western end of Lake Ontario. The damage could be expected to follow the New York City scenario, with extensive damage to lifelines and to brick buildings on soft ground.
Notes
External links
Infrastructure Risk Research Project at The University of British Columbia, Vancouver, Canada
Earthquake and seismic risk mitigation
Disaster preparedness | Earthquake scenario | [
"Engineering"
] | 306 | [
"Structural engineering",
"Earthquake and seismic risk mitigation"
] |
7,062,509 | https://en.wikipedia.org/wiki/Biosignal | A biosignal is any signal in living beings that can be continually measured and monitored. The term biosignal is often used to refer to bioelectrical signals, but it may refer to both electrical and non-electrical signals. The usual understanding is to refer only to time-varying signals, although spatial parameter variations (e.g. the nucleotide sequence determining the genetic code) are sometimes subsumed as well.
Electrical biosignals
Electrical biosignals, or bioelectrical time signals, usually refer to the change in electric current produced by the sum of electrical potential differences across a specialized tissue, organ or cell system such as the nervous system. Thus, among the best-known bioelectrical signals are:
Electroencephalogram (EEG)
Electrocardiogram (ECG)
Electromyogram (EMG)
Electrooculogram (EOG)
Electroretinogram (ERG)
Electrogastrogram (EGG)
Galvanic skin response (GSR) or electrodermal activity (EDA)
EEG, ECG, EOG and EMG are measured with a differential amplifier, which registers the difference between two electrodes attached to the skin. By contrast, the galvanic skin response measures electrical resistance, and magnetoencephalography (MEG) measures the magnetic field induced by the brain's electrical currents (the same currents that underlie the electroencephalogram).
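To make the differential principle concrete, here is a minimal C sketch that models it in software; the gain value, sample data, and function names are invented for illustration, and a real amplifier performs this subtraction in analogue hardware before digitisation:

    #include <stdio.h>

    #define GAIN 1000.0  /* illustrative amplifier gain; real instruments vary */

    /* A differential amplifier outputs the amplified difference between two
       electrode voltages, so interference that appears identically on both
       electrodes (common-mode noise, e.g. mains hum) cancels out. */
    static double differential(double v_plus, double v_minus)
    {
        return GAIN * (v_plus - v_minus);
    }

    int main(void)
    {
        /* Hypothetical electrode readings in volts: a microvolt-scale biosignal
           on electrode A, with 0.05 V of mains interference on both channels. */
        double electrode_a[] = { 0.050010, 0.050020, 0.050005 };
        double electrode_b[] = { 0.050000, 0.050000, 0.050000 };
        for (int i = 0; i < 3; i++) {
            printf("sample %d: %.1f mV after amplification\n",
                   i, differential(electrode_a[i], electrode_b[i]) * 1000.0);
        }
        return 0;
    }

The 0.05 V of interference common to both electrodes cancels in the subtraction; only the small between-electrode difference is amplified, which is why skin-contact recordings can tolerate substantial common-mode noise.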
With the development of methods for remote measurement of electric fields using new sensor technology, electric biosignals such as EEG and ECG can be measured without electric contact with the skin. This can be applied, for example, for remote monitoring of brain waves and heart beat of patients who must not be touched, in particular patients with serious burns.
Electrical currents and changes in electrical resistances across tissues can also be measured from plants.
Biosignals may also refer to any non-electrical signal that is capable of being monitored from biological beings, such as mechanical signals (e.g. the mechanomyogram or MMG), acoustic signals (e.g. phonetic and non-phonetic utterances, breathing), chemical signals (e.g. pH, oxygenation) and optical signals (e.g. movements).
Use in artistic contexts
In recent years, the use of biosignals has gained interest amongst an international artistic community of performers and composers who use biosignals to produce and control sound. Research and practice in the field go back decades in various forms and have lately been enjoying a resurgence, thanks to the increasing availability of more affordable and less cumbersome technologies. An entire issue of eContact!, published by the Canadian Electroacoustic Community in July 2012, was dedicated to this subject, with contributions from the key figures in the domain.
See also
Bioindicator
Biomarker
Biosignature
Molecular marker
Multimedia information retrieval
References
Bibliography
Donnarumma, Marco. "Proprioception, Effort and Strain in "Hypo Chrysos": Action art for vexed body and the Xth Sense." eContact! 14.2 — Biotechnological Performance Practice / Pratiques de performance biotechnologique (July 2012). Montréal: CEC.
Tanaka, Atau. "The Use of Electromyogram Signals (EMG) in Musical Performance: A Personal survey of two decades of practice." eContact! 14.2 — Biotechnological Performance Practice / Pratiques de performance biotechnologique (July 2012). Montréal: CEC.
External links
Applications
Using electroencephalograph signals for task classification and activity recognition (Microsoft)
NASA scientists use hands-off approach to land passenger jet
Hardware
University of Vienna: Biomedical Engineering course, Electromyography (EMG)
Wireless electroencephalograph (EEG) (Cornell University, Ithaca, NY, USA)
Biology terminology
Electrophysiology | Biosignal | [
"Biology"
] | 807 | [
"nan"
] |
7,062,643 | https://en.wikipedia.org/wiki/Oil%20burner%20%28engine%29 | An oil burner engine is a steam engine that uses oil as its fuel. The term is usually applied to a locomotive or ship engine that burns oil to heat water, to produce the steam which drives the pistons, or turbines, from which the power is derived.
This is mechanically very different from diesel engines, which use internal combustion, although they are sometimes colloquially referred to as oil burners.
History
A variety of experimental oil-powered steam boilers were patented in the 1860s. Most of the early patents used steam to spray atomized oil into the steam boiler's furnace. Attempts to burn oil from a free surface were unsuccessful due to the inherently low rates of combustion from the available surface area.
On 21 April 1868, the steam yacht Henrietta made a voyage down the river Clyde powered by an oil fired boiler designed and patented by a Mr Donald of George Miller & Co. Donald's design used a jet of dry steam to spray oil into a furnace lined with fireproof bricks. Prior to the Henrietta’s oil burner conversion, George Miller & Co was recorded as having used oil to power their works in Glasgow for a “considerable time”.
During the late 19th century numerous burner designs were patented using combinations of steam, compressed air and injection pumps to spray oil into boiler furnaces. Most of the early oil burner designs were commercial failures due to the high cost of oil (relative to coal) rather than any technical issues with the burners themselves.
During the early 20th century, marine and large oil-burning steam engines generally used electric motor or steam driven injection pumps. Oil would be drawn from a storage tank through suction strainers and across viscosity-reducing oil heaters. The oil would then be pumped through discharge strainers before entering the burners as a whirling mist. Combustion air was introduced through special furnace-fronts, which were fitted with dampers to regulate the supply. Smaller land-based oil-burning steam engines typically used steam jets fed from the main boiler to blast atomized oil into the burner nozzles.
Steam ships
In the 1870s, Caspian steamships began using mazut, a residual fuel oil which at that time was produced as a waste stream by the many oil refineries located in the Absheron peninsula. During the late 19th century, mazut remained cheap and plentiful in the Caspian region.
In 1870, either the SS Iran or SS Constantine (depending on source) became the first ship to convert to burning fuel oil; both were Caspian-based merchant steamships.
During the 1870s, the Imperial Russian Navy converted the ships of the Caspian fleet to oil burners starting with the Khivenets in 1874.
In 1894, the oil tanker SS Baku Standard became the first oil burning vessel to cross the Atlantic Ocean. In 1903, the Red Star Liner SS Kensington became the first passenger liner to make the Atlantic crossing with boilers fired by fuel oil.
Fuel oil has a higher energy density than coal, and oil-powered ships did not need to employ stokers; however, coal remained the dominant power source for marine boilers throughout the 19th century, primarily due to the relatively high cost of fuel oil. Oil was used in marine boilers to a greater extent during the early 20th century. By 1939, about half the world's ships burned fuel oil; of these, about half had steam engines and the other half used diesel engines.
Steam locomotives
Oil burners designed by Thomas Urquhart were fitted to the locomotives of the Gryazi-Tsaritsyn railway in southern Russia. Thomas Urquhart, who was employed as a Locomotive Superintendent by the Gryazi-Tsaritsyn Railway Company, began his experiments in 1874. By 1885 all the locomotives of the Gryazi-Tsaritsyn Railway had been converted to run on fuel oil.
In Great Britain, an early pioneer of oil-burning railway locomotives was James Holden of the Great Eastern Railway. In James Holden's system, steam was raised by burning coal before the oil fuel was turned on. Holden's first oil-burning locomotive, Petrolea, was a class T19 2-4-0. Built in 1893, Petrolea burned waste oil that the railway had previously been discharging into the River Lea. Due to the relatively low cost of coal, oil was rarely used on Britain's steam trains, and in most cases only where there was a shortage of coal.
In the United States, the first oil-burning steam locomotive was in service on the Southern Pacific railroad by 1900. By 1915 there were 4,259 oil-burning steam locomotives in the United States, representing 6.5% of all the locomotives then in service. Most oil burners were operated in areas west of the Mississippi, where oil was abundant. American usage of oil-burning steam locomotives peaked in 1945, when they were responsible for around 20% of all the fuel consumed (measured by energy content) during rail freight operations. After World War II, both oil- and coal-burning steam locomotives were replaced by more efficient diesel engines and had been almost entirely phased out of service by 1960.
Notable early oil-fired steamships
Passenger liners
SS Kensington
NMS Regele Carol I (one oil fired + one coal fired boiler)
SS Tenyo Maru
SS George Washington
Warships
Re Umberto-class - Italian ironclad battleships equipped to burn a mix of coal and oil
Rostislav - Russian battleship
HMS Spiteful - British Royal Navy destroyer
Paulding-class destroyers - US Navy
Notable oil-fired steam locomotives
General
Most cab forward locomotives
Some Fairlie locomotives
Some steam locomotives used on heritage railways
Advanced steam technology locomotives
Australia
NSWGR D55 Class
NSWGR D59 Class
VR J Class
VR R Class
WAGR U Class
WAGR Pr Class
India
Darjeeling Himalayan Railway
Nilgiri Mountain Railway
Great Britain
GER Class T19
GER Class P43
WD Austerity 2-10-0 (3672 converted in preservation).
GWR oil burning steam locomotives (4965 Rood Ashton Hall to be converted during overhaul in preservation).
New Zealand
NZR JA class (North British-built locomotives only)
NZR JB class
NZR K class (1932) - converted from coal 1947-53
NZR KA class - converted from coal 1947-53
North America
('*' symbol indicates locomotive was converted or is being converted from coal-burning to oil-burning in either revenue service or excursion service)
Sierra Railway 3 - Part of Railtown 1897 State Historic Park (Jamestown, CA)
Sierra Railway 28 - Part of Railtown 1897 State Historic Park (Jamestown, CA)
McCloud Railway 25 - Oregon Coast Scenic Railroad (Garibaldi, OR)
Polson Logging Co. 2 - Albany & Eastern Railroad (Albany, OR)
California Western 45 - California Western Railroad (Fort Bragg, CA)
US Army Transportation Corps 1702* - Great Smoky Mountains Railroad (Bryson City, NC)
Southern Railway 722* - Great Smoky Mountains Railroad (Bryson City, NC)
Union Pacific 844* - UP Heritage Fleet (Cheyenne, WY)
Union Pacific 4014* - UP Heritage Fleet (Cheyenne, WY)
Union Pacific 3985* - Railroading Heritage of Midwest America (Silvis, IL)
Union Pacific 5511 - Railroading Heritage of Midwest America (Silvis, IL)
Union Pacific 737* - Double-T Agricultural Museum (Stevinson, CA)
White Pass & Yukon Route 73 - White Pass & Yukon Route (Skagway, AK)
Alaska Railroad 557* - Engine 557 Restoration Company (Anchorage, AK)
Santa Fe 5000 - Amarillo, TX
Santa Fe 3759* - Locomotive Park (Kingman, AZ)
Santa Fe 3751* - San Bernardino Railroad Historical Society (San Bernardino, CA)
Santa Fe 3450* - RailGiants Train Museum (Pomona, CA)
Santa Fe 3415* - Abilene & Smoky Valley Railroad (Abilene, KS)
Santa Fe 2926 - New Mexico Steam Locomotive & Railroad Historical Society (Albuquerque, NM)
Santa Fe 1316* - Texas State Railroad (Palestine, TX)
Texas & Pacific 610 - Texas State Railroad (Palestine, TX)
Southern Pine Lumber Co. 28 - Texas State Railroad (Palestine, TX)
Tremont & Gulf 30/Magma Arizona 7 - Texas State Railroad (Palestine, TX)
Lake Superior & Ishpeming 18* - Colebrookdale Railroad (Boyertown, PA)
Florida East Coast 148 - US Sugar Corporation (Clewiston, FL)
Atlantic Coast Line 1504* - US Sugar Corporation (Clewiston, FL)
Grand Canyon Railway 29* - Grand Canyon Railway (Williams, AZ)
Grand Canyon Railway 4960* - Grand Canyon Railway (Williams, AZ)
Oregon Railroad & Navigation 197 - Oregon Rail Heritage Center (Portland, OR)
Spokane, Portland & Seattle 700 - Oregon Rail Heritage Center (Portland, OR)
Southern Pacific 4449 - Oregon Rail Heritage Center (Portland, OR)
Southern Pacific 4460 - National Museum of Transportation (Kirkwood, MO)
Southern Pacific 4294 - California State Railroad Museum (Sacramento, CA)
Southern Pacific 2479 - Niles Canyon Railway (Sunol, CA)
Southern Pacific 2472 - Golden Gate Railroad Museum
Southern Pacific 2467 - Pacific Locomotive Association
Southern Pacific 2353 - Pacific Southwest Railway Museum (Campo, CA)
Southern Pacific 1744 - Niles Canyon Railway (Sunol, CA)
Southern Pacific 786 - Austin Steam Train Association, Inc. (Cedar Park, TX)
Southern Pacific 745 - Louisiana Steam Train Association, Inc. (Jefferson, LA)
Southern Pacific 18 - Eastern California Museum (Independence, CA)
Cotton Belt 819 - Arkansas Railroad Museum (Pine Bluff, AR)
Frisco 1522 - National Museum of Transportation (Kirkwood, MO)
Southern Railway 401* - Monticello Railway Museum (Monticello, IL)
Reading 2100* - American Steam Railroad Preservation Association (Cleveland, OH)
See also
Oil refinery
Steam power during the Industrial Revolution
Timeline of steam power
References
External links
Fuel energy & steam traction
Engine technology
Energy conversion
Combustion engineering
Steam engine technology | Oil burner (engine) | [
"Technology",
"Engineering"
] | 2,022 | [
"Engine technology",
"Industrial engineering",
"Combustion engineering",
"Engines"
] |
7,063,485 | https://en.wikipedia.org/wiki/IAR%20Systems | IAR Systems is a Swedish computer software company that offers development tools for embedded systems. IAR Systems was founded in 1983, and is listed on Nasdaq Nordic in Stockholm. IAR is an abbreviation of Ingenjörsfirma Anders Rundgren, which means Anders Rundgren Engineering Company.
IAR Systems develops C and C++ language compilers, debuggers, and other tools for developing and debugging firmware for 8-, 16-, 32-, and 64-bit processors. The firm began in the 8-bit market, later moved into the expanding 32-bit market and, in more recent years, added 64-bit support to its Arm (2021) and RISC-V (2022) toolchains.
IAR Systems is headquartered in Uppsala, Sweden, and has more than 200 employees globally. The company operates subsidiaries in Germany, France, India, Japan, South Korea, China, United States, Taiwan, and United Kingdom and reaches the rest of the world through distributors. IAR Systems is a subsidiary of IAR Systems Group.
Products
IAR Embedded Workbench – an integrated development environment including a C/C++ compiler, the code analysis tools C-STAT and C-RUN, and the C-SPY debugger.
IAR Build Tools - the command line version of IAR Embedded Workbench, tailored for Continuous Integration, available for Windows and Linux hosts.
IAR Visual State – a design tool for developing event-driven programming systems based on the event-driven finite-state machine paradigm. IAR Visual State presents the developer with the finite-state machine subset of Unified Modeling Language (UML) for C, C++, C# or Java code generation. By restricting the design abilities to state machines, it is possible to employ formal model checking to find and flag unwanted properties like state dead-ends and unreachable parts of the design. It is not a full UML editor. A minimal C sketch of the event-driven state-machine pattern that such tools target is given after this list.
Functional Safety certified options are available for IAR Embedded Workbench and IAR Build Tools.
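As a rough illustration of the event-driven finite-state machine pattern referred to above, here is a minimal C sketch; the states, events, and function names are invented for this example and are not IAR Visual State output:

    #include <stdio.h>

    typedef enum { STATE_IDLE, STATE_RUNNING } State;
    typedef enum { EVENT_START, EVENT_STOP } Event;

    /* Core of an event-driven FSM: the next state is a pure function of the
       current state and the incoming event. */
    static State step(State s, Event e)
    {
        switch (s) {
        case STATE_IDLE:
            if (e == EVENT_START) { return STATE_RUNNING; }
            break;
        case STATE_RUNNING:
            if (e == EVENT_STOP) { return STATE_IDLE; }
            break;
        }
        return s; /* unhandled events leave the state unchanged */
    }

    int main(void)
    {
        State s = STATE_IDLE;
        const Event script[] = { EVENT_START, EVENT_START, EVENT_STOP };
        for (unsigned i = 0u; i < sizeof script / sizeof script[0]; i++) {
            s = step(s, script[i]);
            printf("after event %u: state %d\n", i, (int)s);
        }
        return 0;
    }

Because the whole behaviour reduces to a finite transition table, a tool can exhaustively enumerate every reachable state, which is what makes the formal model checking mentioned above tractable for this restricted design subset.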
IAR Embedded Workbench
The toolchain IAR Embedded Workbench, which supports more than 30 different processor families, is a complete integrated development environment (IDE) with compiler, analysis tools, debugger, functional safety, and security. The development tools support these targets: 78K, 8051, ARM, AVR, AVR32, CR16C, Coldfire, H8, HCS12, M16C, M32C, MSP430, Maxim MAXQ, RISC-V RV32, R32C, R8C, RH850, RL78, RX, S08, SAM8, STM8, SuperH, V850. Supported ARM core families are: ARM7, ARM9, ARM10, ARM11, Cortex: M0, M0+, M1, M3, M4, M7, M23, M33; R4, R5, R7; A5, A7, A8, A9, A15, A17. RISC-V tools support the RV32I, RV32E and RV64I base integer instruction sets and a wide range of standard and non-standard extensions.
ISO/ANSI C Compliance; as of March 2017:
ANSI X3.159-1989 (known as C89).
ISO/IEC 9899:1990 (known as C89 or C90) including all technical corrigenda and addenda.
ISO/IEC 9899:1999 (known as C99) including up to technical corrigendum No3.
ISO/IEC 9899:2011 (known as C11). (first available in ARM v8.10 tools)
ISO/IEC 9899:2018 (known as C17). (first available in ARM v8.40 tools)
ISO/ANSI C++ Compliance; as of March 2017:
ISO/IEC 14882:2003 (known as C++03).
ISO/IEC 14882:2014 (known as C++14). (first available in ARM v8.10 tools)
ISO/IEC 14882:2017 (known as C++17). (first available in ARM v8.30 tools)
Embedded C++ Compliance; as of February 2015:
C++ as defined by ISO/IEC 14882:2003.
Embedded C++ (EC++) as defined by Embedded C++ Technical Committee Draft, Version WP-AM-0003, 13 October 1999.
Extended Embedded C++, defined by IAR Systems.
MISRA C Rule Checking Conformance (see the illustrative sketch after this list):
MISRA C:2004
MISRA C:2012 Amendment 3
MISRA C++:2008
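As a generic illustration of the kind of construct MISRA-style rule checkers flag, consider the following C sketch; it reflects common MISRA themes (fixed-width types, explicit conversions) rather than any specific numbered rule, and it is not output from IAR's tools:

    #include <stdint.h>

    /* MISRA-style guidelines generally favour fixed-width integer types and
       explicit conversions over implicit narrowing. */

    uint16_t scale_implicit(int x)
    {
        return x * 100;   /* implicit int -> uint16_t narrowing: the kind of
                             conversion a rule checker typically flags */
    }

    uint16_t scale_explicit(uint16_t x)
    {
        uint32_t wide = (uint32_t)x * 100u;       /* widen first: no overflow
                                                     for any 16-bit input */
        return (wide > 0xFFFFu) ? (uint16_t)0xFFFFu
                                : (uint16_t)wide; /* saturate explicitly */
    }

A checker would report the implicit narrowing in the first function as a deviation to be fixed or formally justified; the second function makes every change of width explicit and visible to a reviewer.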
References
External links
Software companies of Sweden
Companies based in Uppsala County
Companies established in 1983
Embedded systems
Companies listed on Nasdaq Stockholm | IAR Systems | [
"Technology",
"Engineering"
] | 1,022 | [
"Embedded systems",
"Computer science",
"Computer engineering",
"Computer systems"
] |
7,064,233 | https://en.wikipedia.org/wiki/History%20of%20nanotechnology | The history of nanotechnology traces the development of the concepts and experimental work falling under the broad category of nanotechnology. Although nanotechnology is a relatively recent development in scientific research, the development of its central concepts happened over a longer period of time. The emergence of nanotechnology in the 1980s was caused by the convergence of experimental advances such as the invention of the scanning tunneling microscope in 1981 and the discovery of fullerenes in 1985, with the elucidation and popularization of a conceptual framework for the goals of nanotechnology beginning with the 1986 publication of the book Engines of Creation. The field was subject to growing public awareness and controversy in the early 2000s, with prominent debates about both its potential implications as well as the feasibility of the applications envisioned by advocates of molecular nanotechnology, and with governments moving to promote and fund research into nanotechnology. The early 2000s also saw the beginnings of commercial applications of nanotechnology, although these were limited to bulk applications of nanomaterials rather than the transformative applications envisioned by the field.
Early uses of nanomaterials
Carbon nanotubes have been found in pottery from Keeladi, India, dating to c. 600–300 BC, though it is not known how they formed or whether the substance containing them was employed deliberately. Cementite nanowires have been observed in Damascus steel, a material dating back to c. 900 AD, their origin and means of manufacture also unknown.
Although nanoparticles are associated with modern science, they were used by artisans as far back as the ninth century in Mesopotamia for creating a glittering effect on the surface of pots.
In modern times, pottery from the Middle Ages and Renaissance often retains a distinct gold- or copper-colored metallic glitter. This luster is caused by a metallic film that was applied to the transparent surface of a glazing, which contains silver and copper nanoparticles dispersed homogeneously in the glassy matrix of the ceramic glaze. These nanoparticles are created by the artisans by adding copper and silver salts and oxides together with vinegar, ochre, and clay on the surface of previously glazed pottery. The technique originated in the Muslim world. As Muslims were not allowed to use gold in artistic representations, they sought a way to create a similar effect without using real gold. The solution they found was using luster.
Conceptual origins
Richard Feynman
The American physicist Richard Feynman lectured, "There's Plenty of Room at the Bottom," at an American Physical Society meeting at Caltech on December 29, 1959, which is often held to have provided inspiration for the field of nanotechnology. Feynman had described a process by which the ability to manipulate individual atoms and molecules might be developed, using one set of precise tools to build and operate another proportionally smaller set, so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, surface tension and Van der Waals attraction would become more important.
After Feynman's death, a scholar studying the historical development of nanotechnology has concluded that his actual role in catalyzing nanotechnology research was limited, based on recollections from many of the people active in the nascent field in the 1980s and 1990s. Chris Toumey, a cultural anthropologist at the University of South Carolina, found that the published versions of Feynman's talk had a negligible influence in the twenty years after it was first published, as measured by citations in the scientific literature, and not much more influence in the decade after the Scanning Tunneling Microscope was invented in 1981. Subsequently, interest in “Plenty of Room” in the scientific literature greatly increased in the early 1990s. This is probably because the term “nanotechnology” gained serious attention just before that time, following its use by K. Eric Drexler in his 1986 book, Engines of Creation: The Coming Era of Nanotechnology, which took the Feynman concept of a billion tiny factories and added the idea that they could make more copies of themselves via computer control instead of control by a human operator; and in a cover article headlined "Nanotechnology", published later that year in a mass-circulation science-oriented magazine, Omni. Toumey's analysis also includes comments from distinguished scientists in nanotechnology who say that “Plenty of Room” did not influence their early work, and in fact most of them had not read it until a later date.
These and other developments hint that the retroactive rediscovery of Feynman's “Plenty of Room” gave nanotechnology a packaged history that provided an early date of December 1959, plus a connection to the charisma and genius of Richard Feynman. Feynman's stature as a Nobel laureate and as an iconic figure in 20th century science surely helped advocates of nanotechnology and provided a valuable intellectual link to the past.
Norio Taniguchi
Japanese scientist Norio Taniguchi of Tokyo University of Science was the first to use the term "nano-technology", at a 1974 conference, to describe semiconductor processes such as thin film deposition and ion beam milling exhibiting characteristic control on the order of a nanometer. His definition was, "'Nano-technology' mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or one molecule." However, the term was not used again until 1981, when K. Eric Drexler, who was unaware of Taniguchi's prior use of the term, published his first paper on nanotechnology.
K. Eric Drexler
In the 1980s the idea of nanotechnology as a deterministic, rather than stochastic, handling of individual atoms and molecules was conceptually explored in depth by K. Eric Drexler, who promoted the technological significance of nano-scale phenomena and devices through speeches and two influential books.
In 1980, Drexler encountered Feynman's provocative 1959 talk "There's Plenty of Room at the Bottom" while preparing his initial scientific paper on the subject, “Molecular Engineering: An approach to the development of general capabilities for molecular manipulation,” published in the Proceedings of the National Academy of Sciences in 1981. The term "nanotechnology" (which paralleled Taniguchi's "nano-technology") was independently applied by Drexler in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity. He also first published the term "grey goo" to describe what might happen if a hypothetical self-replicating machine, capable of independent operation, were constructed and released. Drexler's vision of nanotechnology is often called "Molecular Nanotechnology" (MNT) or "molecular manufacturing."
His 1991 Ph.D. work at the MIT Media Lab was the first doctoral degree on the topic of molecular nanotechnology and (after some editing) his thesis, "Molecular Machinery and Manufacturing with Applications to Computation," was published as Nanosystems: Molecular Machinery, Manufacturing, and Computation, which received the Association of American Publishers award for Best Computer Science Book of 1992. Drexler founded the Foresight Institute in 1986 with the mission of "Preparing for nanotechnology.” Drexler is no longer a member of the Foresight Institute.
Experimental research and advances
In nanoelectronics, nanoscale thickness was demonstrated in the gate oxide and thin films used in transistors as early as the 1960s, but it was not until the late 1990s that MOSFETs (metal–oxide–semiconductor field-effect transistors) with nanoscale gate length were demonstrated. Nanotechnology and nanoscience got a boost in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). These developments led to the discovery of fullerenes in 1985 and the structural assignment of carbon nanotubes in 1991. The development of the FinFET in the 1990s also laid the foundations for modern nanoelectronic semiconductor device fabrication.
Invention of scanning probe microscopy
The scanning tunneling microscope, an instrument for imaging surfaces at the atomic level, was developed in 1981 by Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory, for which they were awarded the Nobel Prize in Physics in 1986. Binnig, Calvin Quate and Christoph Gerber invented the first atomic force microscope in 1986. The first commercially available atomic force microscope was introduced in 1989.
IBM researcher Don Eigler was the first to manipulate atoms using a scanning tunneling microscope, in 1989. He used 35 xenon atoms to spell out the IBM logo. He shared the 2010 Kavli Prize in Nanoscience for this work.
Advances in interface and colloid science
Interface and colloid science had existed for nearly a century before they became associated with nanotechnology. The first observations and size measurements of nanoparticles had been made during the first decade of the 20th century by Richard Adolf Zsigmondy, winner of the 1925 Nobel Prize in Chemistry, who made a detailed study of gold sols and other nanomaterials with sizes down to 10 nm using an ultramicroscope which was capable of visualizing particles much smaller than the light wavelength. Zsigmondy was also the first to use the term "nanometer" explicitly for characterizing particle size. In the 1920s, Irving Langmuir, winner of the 1932 Nobel Prize in Chemistry, and Katharine B. Blodgett introduced the concept of a monolayer, a layer of material one molecule thick. In the early 1950s, Derjaguin and Abrikosova conducted the first measurement of surface forces.
In 1974 the process of atomic layer deposition for depositing uniform thin films one atomic layer at a time was developed and patented by Tuomo Suntola and co-workers in Finland.
In another development, the synthesis and properties of semiconductor nanocrystals were studied; this led to a rapidly increasing number of studies of semiconductor nanoparticles, or quantum dots.
Discovery of fullerenes
Fullerenes were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry. Smalley's research in physical chemistry investigated the formation of inorganic and semiconductor clusters using pulsed molecular beams and time-of-flight mass spectrometry. As a consequence of this expertise, Curl introduced him to Kroto in order to investigate a question about the constituents of astronomical dust: carbon-rich grains expelled by old stars such as R Coronae Borealis. The result of this collaboration was the discovery of C60 and the fullerenes as the third allotropic form of carbon. Subsequent discoveries included the endohedral fullerenes and, the following year, the larger family of fullerenes.
The discovery of carbon nanotubes is largely attributed to Sumio Iijima of NEC in 1991, although carbon nanotubes had been produced and observed under a variety of conditions prior to 1991. Iijima's discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods in 1991, and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made they would exhibit remarkable conducting properties, helped create the initial buzz that is now associated with carbon nanotubes. Nanotube research accelerated greatly following the independent discoveries by Bethune at IBM and Iijima at NEC of single-walled carbon nanotubes and of methods to specifically produce them by adding transition-metal catalysts to the carbon in an arc discharge.
In the early 1990s Huffman and Krätschmer, of the University of Arizona, discovered how to synthesize and purify large quantities of fullerenes. This opened the door to their characterization and functionalization by hundreds of investigators in government and industrial laboratories. Shortly after, rubidium-doped C60 was found to be a mid-temperature (Tc = 32 K) superconductor. At a meeting of the Materials Research Society in 1992, Dr. Thomas Ebbesen (NEC) described to a spellbound audience his discovery and characterization of carbon nanotubes. This event sent those in attendance and others downwind of his presentation into their laboratories to reproduce and push those discoveries forward. Using the same or similar tools as those used by Huffman and Krätschmer, hundreds of researchers further developed the field of nanotube-based nanotechnology.
Government and corporate support
National Nanotechnology Initiative
The National Nanotechnology Initiative is a United States federal nanotechnology research and development program. “The NNI serves as the central point of communication, cooperation, and collaboration for all Federal agencies engaged in nanotechnology research, bringing together the expertise needed to advance this broad and complex field." Its goals are to advance a world-class nanotechnology research and development (R&D) program, foster the transfer of new technologies into products for commercial and public benefit, develop and sustain educational resources, a skilled workforce, and the supporting infrastructure and tools to advance nanotechnology, and support responsible development of nanotechnology. The initiative was spearheaded by Mihail Roco, who formally proposed the National Nanotechnology Initiative to the Office of Science and Technology Policy during the Clinton administration in 1999, and was a key architect in its development. He is currently the Senior Advisor for Nanotechnology at the National Science Foundation, as well as the founding chair of the National Science and Technology Council subcommittee on Nanoscale Science, Engineering and Technology.
President Bill Clinton advocated nanotechnology development. In a 21 January 2000 speech at the California Institute of Technology, Clinton said, "Some of our research goals may take twenty or more years to achieve, but that is precisely why there is an important role for the federal government." Feynman's stature and concept of atomically precise fabrication played a role in securing funding for nanotechnology research, and were invoked in Clinton's speech.
President George W. Bush further increased funding for nanotechnology. On December 3, 2003, Bush signed into law the 21st Century Nanotechnology Research and Development Act, which authorizes expenditures for five of the participating agencies totaling US$3.63 billion over four years. The NNI budget supplement for Fiscal Year 2009 provides $1.5 billion to the NNI, reflecting steady growth in the nanotechnology investment.
Growing public awareness and controversy
"Why the future doesn't need us"
"Why the future doesn't need us" is an article written by Bill Joy, then Chief Scientist at Sun Microsystems, in the April 2000 issue of Wired magazine. In the article, he argues that "Our most powerful 21st-century technologies — robotics, genetic engineering, and nanotech — are threatening to make humans an endangered species." Joy argues that developing technologies provide a much greater danger to humanity than any technology before it has ever presented. In particular, he focuses on genetics, nanotechnology and robotics. He argues that 20th-century technologies of destruction, such as the nuclear bomb, were limited to large governments, due to the complexity and cost of such devices, as well as the difficulty in acquiring the required materials. He also voices concern about increasing computer power. His worry is that computers will eventually become more intelligent than we are, leading to such dystopian scenarios as robot rebellion. He notably quotes the Unabomber on this topic. After the publication of the article, Bill Joy suggested assessing technologies to gauge their implicit dangers, as well as having scientists refuse to work on technologies that have the potential to cause harm.
In the AAAS Science and Technology Policy Yearbook 2001 article titled A Response to Bill Joy and the Doom-and-Gloom Technofuturists, Bill Joy was criticized for technological tunnel vision in his predictions and for failing to consider social factors. In Ray Kurzweil's The Singularity Is Near, Kurzweil questioned the regulation of potentially dangerous technology, asking "Should we tell the millions of people afflicted with cancer and other devastating conditions that we are canceling the development of all bioengineered treatments because there is a risk that these same technologies may someday be used for malevolent purposes?"
Prey
Prey is a 2002 novel by Michael Crichton which features an artificial swarm of nanorobots that develop intelligence and threaten their human inventors. The novel generated concern within the nanotechnology community that it could negatively affect public perception of nanotechnology by creating fear of a similar scenario in real life.
Drexler–Smalley debate
Richard Smalley, best known for co-discovering the soccer ball-shaped "buckyball" molecule and a leading advocate of nanotechnology and its many applications, was an outspoken critic of the idea of molecular assemblers, as advocated by Eric Drexler. He attacked the notion of universal assemblers in a 2001 Scientific American article, leading to a rebuttal later that year from Drexler and colleagues, and eventually to an exchange of open letters in 2003.
Smalley criticized Drexler's work on nanotechnology as naive, arguing that chemistry is extremely complicated, that reactions are hard to control, and that a universal assembler is science fiction; he believed such assemblers were not physically possible. His two principal technical objections, which he termed the "fat fingers problem" and the "sticky fingers problem", argued against the feasibility of molecular assemblers being able to precisely select and place individual atoms.
Smalley first argued that "fat fingers" made MNT impossible. He later argued that nanomachines would have to resemble chemical enzymes more than Drexler's assemblers and could only work in water. He believed these would exclude the possibility of "molecular assemblers" that worked by precision picking and placing of individual atoms. Also, Smalley argued that nearly all of modern chemistry involves reactions that take place in a solvent (usually water), because the small molecules of a solvent contribute many things, such as lowering binding energies for transition states. Since nearly all known chemistry requires a solvent, Smalley felt that Drexler's proposal to use a high vacuum environment was not feasible.
Smalley also believed that Drexler's speculations about apocalyptic dangers of self-replicating machines that have been equated with "molecular assemblers" would threaten the public support for development of nanotechnology. To address the debate between Drexler and Smalley regarding molecular assemblers, Chemical & Engineering News published a point-counterpoint consisting of an exchange of letters that addressed the issues.
Drexler and coworkers responded to these two issues in a 2001 publication. Drexler and colleagues noted that Drexler never proposed universal assemblers able to make absolutely anything, but instead proposed more limited assemblers able to make a very wide variety of things. They challenged the relevance of Smalley's arguments to the more specific proposals advanced in Nanosystems. Drexler maintained that both were straw man arguments, and in the case of enzymes, Prof. Klibanov wrote in 1994, "...using an enzyme in organic solvents eliminates several obstacles..." Drexler also addressed this in Nanosystems by showing mathematically that well-designed catalysts can provide the effects of a solvent and can fundamentally be made even more efficient than a solvent/enzyme reaction could ever be. Drexler had difficulty in getting Smalley to respond, but in December 2003, Chemical & Engineering News carried a four-part debate.
Ray Kurzweil devotes four pages of his book The Singularity Is Near to showing that Richard Smalley's arguments are not valid, disputing them point by point. Kurzweil ends by stating that Drexler's visions are practicable and even already being realized.
Royal Society report on the implications of nanotechnology
The Royal Society and Royal Academy of Engineering's 2004 report on the implications of nanoscience and nanotechnologies was inspired by Prince Charles' concerns about nanotechnology, including molecular manufacturing. However, the report spent almost no time on molecular manufacturing. In fact, the word "Drexler" appears only once in the body of the report (in passing), and "molecular manufacturing" or "molecular nanotechnology" not at all. The report covers various risks of nanoscale technologies, such as nanoparticle toxicology. It also provides a useful overview of several nanoscale fields. The report contains an annex (appendix) on grey goo, which cites a weaker variation of Richard Smalley's contested argument against molecular manufacturing. It concludes that there is no evidence that autonomous, self-replicating nanomachines will be developed in the foreseeable future, and suggests that regulators should be more concerned with issues of nanoparticle toxicology.
Initial commercial applications
The early 2000s saw the beginnings of the use of nanotechnology in commercial products, although most applications were limited to the bulk use of passive nanomaterials. Examples include titanium dioxide and zinc oxide nanoparticles in sunscreen, cosmetics and some food products; silver nanoparticles in food packaging, clothing, disinfectants and household appliances such as Silver Nano; carbon nanotubes for stain-resistant textiles; and cerium oxide as a fuel catalyst. As of March 10, 2011, the Project on Emerging Nanotechnologies estimated that over 1,300 manufacturer-identified nanotech products were publicly available, with new ones hitting the market at a pace of 3–4 per week.
The National Science Foundation funded researcher David Berube to study the field of nanotechnology. His findings are published in the monograph Nano-Hype: The Truth Behind the Nanotechnology Buzz. This study concludes that much of what is sold as “nanotechnology” is in fact a recasting of straightforward materials science, which is leading to a “nanotech industry built solely on selling nanotubes, nanowires, and the like” which will “end up with a few suppliers selling low margin products in huge volumes." Further applications which require actual manipulation or arrangement of nanoscale components await further research. Though technologies branded with the term 'nano' are sometimes little related to and fall far short of the most ambitious and transformative technological goals of the sort in molecular manufacturing proposals, the term still connotes such ideas. According to Berube, there may be a danger that a "nano bubble" will form, or is forming already, from the use of the term by scientists and entrepreneurs to garner funding, regardless of interest in the transformative possibilities of more ambitious and far-sighted work.
Invention of ionizable cationic lipids at the turn of the 21st century allowed subsequent development of solid lipid nanoparticles, which in the 2020s became the most successful and well-known non-viral nanoparticle drug delivery system due to their use in several mRNA vaccines during the COVID-19 pandemic.
See also
Timeline of carbon nanotubes
Discovery of graphene
History of DNA nanotechnology
References
External links
Who Invented Nanotechnology
Nanotechnology | History of nanotechnology | [
"Materials_science",
"Technology",
"Engineering"
] | 4,828 | [
"Science and technology studies",
"Materials science",
"History of technology",
"Nanotechnology",
"History of science and technology"
] |
7,064,416 | https://en.wikipedia.org/wiki/Fuel%20Price%20Escalator | The Fuel Price Escalator (later Fuel Duty Stabiliser), a United Kingdom policy of increasing fuel duty ahead of inflation, was introduced in March 1993 as a measure to stem the increase in pollution from road transport and cut the need for new road building at a time of major road protests, at Twyford Down and other locations. Set initially at 3% above inflation, it was increased in two stages to 6% before being suspended and then, in 2011, replaced by a 'fuel duty stabiliser' (also known as the 'fuel price stabiliser' and 'fair fuel stabiliser') following further increases in the price of oil.
History
Fuel Price Escalator
At a time of rapidly rising concerns about the effect of road transport on the environment, and in particular about the program of road building which had resulted in major road protests at Twyford Down and other locations, the Conservatives under John Major introduced a 'Fuel Price Escalator' in March 1993, set initially at 3% ahead of inflation per year; it was increased to 5% later in the same year, and then to 6% in 1997 by the Blair ministry after Labour won power.
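To illustrate how quickly an above-inflation escalator compounds, the sketch below applies the 5% escalator rate mentioned above to a hypothetical duty; the 25p/litre starting value and 2.5% inflation rate are illustrative assumptions, not historical figures.

```python
# Illustrative compounding of fuel duty under an escalator set a fixed
# number of percentage points above inflation. The starting duty
# (25p/litre) and inflation rate (2.5%) are assumed for illustration.
duty = 25.0          # pence per litre (assumed)
inflation = 0.025    # annual inflation (assumed)
escalator = 0.05     # escalator: 5 percentage points above inflation

for year in range(1, 7):
    duty *= 1 + inflation + escalator
    print(f"Year {year}: duty = {duty:.1f}p/litre")

# After six years duty is ~54% higher, versus ~16% from inflation alone.
```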
The last rise due to the escalator took place following the budget on 9 March 1999, at a time of rapidly increasing oil prices. In 2000, at a time of rising protests over the cost of fuel, Gordon Brown announced that duty would only be increased in line with inflation, given the high price of oil.
Fuel Duty Stabiliser
Increases were deferred for a number of budgets and then in 2011, at a time of rapidly increasing oil prices, George Osborne cut 1p from the tax, increased the Petroleum Revenue Tax to raise an additional £2bn from North Sea oil firms, and announced that the escalator would be replaced with a 'fuel price stabiliser': duty would rise by more than inflation only if oil prices fell below $75 per barrel.
In the 2011 budget the Chancellor had also announced a rise of 1p in January 2012 and then 5p in August 2012, but he later cancelled the 1p rise and reduced the 5p August rise to 3p in November 2011. In the budget of 2012 Osborne confirmed the 3p August rise, before first postponing it and then cancelling it in December 2012. A further proposed inflation-based increase in fuel duty was cancelled by the chancellor in March 2013.
In March 2016, with oil prices at about $40 a barrel, and following widespread speculation that the duty would be increased at a time of record low oil prices, the chancellor froze fuel duty for the sixth year running, and reduced the tax on North Sea oil firms.
See also
Energy policy of the United Kingdom
Energy use and conservation in the United Kingdom
Elasticity (economics)
References
Further reading
A Fuel Duty Stabiliser – is it really that complicated?
Energy conservation in the United Kingdom
Environment of the United Kingdom
Politics of the United Kingdom
Taxation in the United Kingdom
History of transport in the United Kingdom
Petroleum politics
Energy conservation
Energy economics
Inflation in the United Kingdom | Fuel Price Escalator | [
"Chemistry",
"Environmental_science"
] | 612 | [
"Petroleum",
"Environmental social science",
"Energy economics",
"Petroleum politics"
] |
7,064,918 | https://en.wikipedia.org/wiki/History%20of%20hard%20disk%20drives | In 1953, IBM recognized the immediate application for what it termed a "Random Access File" having high capacity and rapid random access at a relatively low cost. After considering technologies such as wire matrices, rod arrays, drums, drum arrays, etc., the engineers at IBM's San Jose California laboratory invented the hard disk drive. The disk drive created a new level in the computer data hierarchy, then termed Random Access Storage but today known as secondary storage, less expensive and slower than main memory (then typically drums and later core memory) but faster and more expensive than tape drives.
The commercial usage of hard disk drives (HDD) began in 1957, with the shipment of a production IBM 305 RAMAC system including IBM Model 350 disk storage. US Patent 3,503,060 issued March 24, 1970, and arising from the IBM RAMAC program is generally considered to be the fundamental patent for disk drives.
Each generation of disk drives replaced larger, more sensitive and more cumbersome devices. The earliest drives were usable only in the protected environment of a data center. Later generations progressively reached factories, offices and homes, eventually becoming ubiquitous.
Disk media diameter was initially 24 inches, but over time it has been reduced to today's 3.5-inch and 2.5-inch standard sizes. Drives with the larger 24-inch- and 14-inch-diameter media were typically mounted in standalone boxes (resembling washing machines) or large equipment rack enclosures. Individual drives often required high-current AC power due to the large motors required to spin the large disks. Drives with smaller media generally conformed to de facto standard form factors.
The capacity of hard drives has grown exponentially over time. When hard drives became available for personal computers, they offered 5-megabyte capacity. During the mid-1990s the typical hard disk drive for a PC had a capacity in the range of 500 megabytes to 1 gigabyte; by 2024, hard disk drives with capacities of up to 32 TB were available.
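Using two capacities cited in this article — the 5 MB PC drives of 1980 (see the timeline below) and the 32 TB drives of 2024 — a rough compound annual growth rate can be computed. This is a back-of-the-envelope sketch between two data points, not a fit to the full capacity history.

```python
# Rough compound annual growth rate (CAGR) of HDD capacity between two
# data points taken from this article (decimal/SI units assumed).
start_bytes = 5e6    # 5 MB, first PC hard drives (1980)
end_bytes = 32e12    # 32 TB drives (2024)
years = 2024 - 1980

cagr = (end_bytes / start_bytes) ** (1 / years) - 1
print(f"Growth factor: {end_bytes / start_bytes:,.0f}x over {years} years")
print(f"CAGR: {cagr:.1%}")   # roughly 43% per year
```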
Unit production peaked in 2010 at about 650 million units, and has been in a slow decline since then.
1950s–1970s
The IBM 350 Disk File was developed under the code-name RAMAC by an IBM San Jose team led by Reynold Johnson. It was announced in 1956 with the then new IBM 305 RAMAC computer. A variant, the IBM 355 Disk File, was simultaneously announced with the IBM 650 RAMAC computer, an enhancement to the IBM 650.
The IBM 350 drive had fifty platters, with a total capacity of five million 6-bit characters (3.75 megabytes). A single head assembly having two heads was used for access to all the platters, yielding an average access time of just under 1 second.
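The character-to-megabyte conversion in the figure above is simple arithmetic, shown here as a minimal check (using decimal megabytes, as is conventional for drive capacities):

```python
# Check the IBM 350 figure: five million 6-bit characters in megabytes.
characters = 5_000_000
bits_per_character = 6

total_bytes = characters * bits_per_character / 8   # 3,750,000 bytes
print(f"{total_bytes / 1e6:.2f} MB")                # 3.75 MB, as stated
```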
The RAMAC disk drive created a new level in the computer data hierarchy, today known as secondary storage, less expensive and slower than main memory (then typically core or drum) but faster and more expensive than tape drives. Subsequently, there was a period of about 20 years in which other technologies competed with disks in the secondary storage marketplace, for example tape strips, e.g., NCR CRAM, tape cartridges, e.g., IBM 3850, and drums, e.g., Burroughs B430, UNIVAC FASTRAND, but all ultimately were displaced by HDDs.
The IBM 1301 Disk Storage Unit, announced in 1961, introduced the usage of heads having self-acting air bearings (self-flying heads), with one head per surface of the disks. It was followed in 1963 by the IBM 1302, with 4 times the capacity.
Also in 1961, Bryant Computer Products introduced its 4000 series disk drives. These massive units had up to 26 large-diameter platters rotating at up to 1,200 rpm. Access times were from 50 to 205 milliseconds (ms). The drive's total capacity, depending on the number of platters installed, was up to 205,377,600 bytes (205 MB).
The first disk drive to use removable media was the IBM 1311 drive. It was introduced in 1962 using the IBM 1316 disk pack to store two million characters. It was followed by the IBM 2311 (1964) using the IBM 1316 disk pack to store 5 megabytes, the IBM 2314 (1965) using the IBM 2316 disk pack to store 29 megabytes, the IBM 3330 using 3336 disk packs to store 100 megabytes, and the 3330-11 using the 3336-11 to store 200 megabytes.
Memorex shipped its first HDD, the Memorex 630, in 1968; plug-compatible with the IBM 2311, it marked the beginning of independent competition (Plug Compatible Manufacturers, or PCMs) for HDDs attached to IBM systems. It was followed in 1969 by the Memorex 660, an IBM 2314 compatible, which was OEM'ed to DEC and resold as the RP02.
In 1964, Burroughs introduced the B-475 disk drive, with a head per track, as part of the B5500.
In 1970, IBM introduced the 2305 disk drive, with a head per track. In June it introduced the 3330 Direct Access Storage Facility, code-named Merlin. Its removable disk packs could hold 100 MB. A major advance introduced with the 3330 was the use of error correction, which made this and most subsequent drives more reliable and less expensive, because small imperfections in the disk surface can be tolerated.
In 1973, Control Data Corporation introduced the first of its series of SMD disk drives using conventional disk pack technology. The SMD family became the predominant disk drive in the minicomputer market into the 1980s.
Also in 1973, IBM introduced the IBM 3340 "Winchester" disk drive and the 3348 data module, the first significant commercial use of low mass and low load heads with lubricated platters and the last IBM disk drive with removable media. This technology and its derivatives remained the standard through 2011. Project head Kenneth Haughton named it after the Winchester 30-30 rifle because it was planned to have two 30 MB spindles; however, the actual product shipped with two spindles for data modules of either 35 MB or 70 MB. The name 'Winchester' and some derivatives are still common in some non-English speaking countries to generally refer to any hard disks (e.g. Hungary, Russia).
In 1975 the "Swinging arm" actuator was introduced by both IBM and StorageTek. The simple design of the IBM 62GV (Gulliver) drive, invented at IBM's UK Hursley Labs, became IBM's most licensed electro-mechanical invention. The swing arm was adopted in the 1980s for all HDDs and is still used nearly 50 years and 10 billion arms later.
Smaller diameter media came into usage during the 1970s and by the end of the decade standard form factors had been established for drives using nominally 8-inch media (e.g., Shugart SA1000) and nominally 5.25-inch media (e.g., Seagate ST-506).
During the 1970s, captive production, dominated by IBM's production for its own use, remained the largest revenue channel for HDDs, though the relative importance of the OEM channel grew. Led by Control Data, Diablo Systems, CalComp and Memorex, the OEM segment reached $631 million in 1979, still well below the $2.8 billion associated with captive production.
1980s, the transition to the PC era
The 1980s saw the minicomputer age plateau as PCs were introduced. Manufacturers such as IBM, DEC and Hewlett-Packard continued to manufacture 14-inch hard drive systems as industry demanded higher storage; one such drive is the 1980 2.52 GB IBM 3380. But it was clear that smaller Winchester storage systems were eclipsing large platter hard drives.
In the 1980s the capacity of 8-inch drives used with some mid-range systems increased from a low of about 30 MB in 1980 to a top-of-the-line 3 GB in 1989.
Hard disk drives for personal computers (PCs) were initially a rare and very expensive optional feature; systems typically had only the less expensive floppy disk drives or even cassette tape drives as both secondary storage and transport media. However, by the late 1980s, hard disk drives were standard on all but the cheapest PC and floppy disks were used almost solely as transport media.
Most hard disk drives in the early 1980s were sold to PC end users by systems integrators, such as with the Corvus Disk System, or by the systems manufacturer, such as with the Apple ProFile. The IBM PC XT in 1983 included an internal standard 10 MB hard disk drive and IBM's version of Xebec's hard disk drive controller, and soon thereafter internal hard disk drives proliferated on personal computers; one popular combination was the ST506/ST412 hard drive with the MFM interface.
HDDs continued to get smaller with the introduction of the 3.5-inch form factor by Rodime in 1983 and the 2.5-inch form factor by PrairieTek in 1988.
Industry participation peaked with about 75 active manufacturers in 1985 and then declined thereafter even though volume continued to climb: by 1989 reaching 22 million units and US$23 billion in revenue.
1990s
Even though there were a number of new entrants, industry participants continued to decline in total to 15 in 1999. Unit volume and industry revenue monotonically increased during the 1990s to 174 million units and $26 billion.
In the mid-90s, PC Card Type II hard disk drive cards became available. These cards were the first to be 5 mm thick as it is the thickness of a Type II PC Card.
The industry production consolidated around the 3.5-inch and 2.5-inch form factors; the larger form factors dying off while several smaller form factors were offered but achieved limited success, e.g. HP 1.3-inch Kittyhawk, IBM 1-inch Microdrive, etc.
2001 to present
In 2001, the HDD industry experienced its first ever decline in units and revenue.
The number of industry participants decreased to six in 2009 and to three in 2013.
In 2009 – Fujitsu exits by selling HDD business to Toshiba
In 2011 – Floods hit many hard drive factories. Predictions of a worldwide shortage of hard disk drives cause prices to double.
In 2012, Western Digital announced the first 2.5-inch, 5 mm thick drive, and the first 2.5-inch, 7 mm thick drive with two platters
Unit production peaked in 2010 at about 650 million units. Unit shipment has been in a slow decline since then, shipping about 276 million units in 2018 with a somewhat slower decline projected thereafter.
By 2020, SSDs were competing strongly with HDDs.
As of January 2024, the largest hard drive is 32 TB (while SSDs can be much bigger at 100 TB, mainstream consumer SSDs cap at 8 TB). Smaller 2.5-inch drives are available at up to 2 TB for laptops and 6 TB as external drives.
Timeline
1956 – IBM 350A, shipment of prototype disk drive to Zellerbach, SF CA, USA
1957 – IBM 350, first production disk drive, 5 million characters (6-bit), equivalent to 3.75 megabytes.
1961 – IBM 1301 Disk Storage Unit introduced with one head per surface and aerodynamic flying heads, 28 million characters (6-bit) per module.
1961 – Bryant 4000 (Bryant Computer Products division of Ex-Cell-O) up to 205 MB on up to 26 29-inch diameter platters.
1962 – IBM 1311 introduced removable disk packs containing 6 disks, storing 2 million characters per pack
1964 – IBM 2311 with 7.25 megabytes per disk pack
1964 – IBM 2310 removable cartridge disk drive with 1.02 MB on one disk
1965 – IBM 2314 with 11 disks and 29 MB per disk pack
1968 – Memorex is first to ship an IBM-plug-compatible disk drive
1970 – IBM 3330 Merlin, introduced error correction, 100 MB per disk pack
1973 – IBM 3340 Winchester introduced removable sealed disk packs that included head and arm assembly, 35 or 70 MB per pack
1973 – CDC SMD announced and shipped, 40 MB disk pack
1975 - IBM 62GV Gulliver and STC 8000 introduced the Swinging Arm Actuator, adopted for all HDD in the 1980s.
1976 – IBM 3350 "Madrid" – 317.5 megabytes, eight 14-inch disks, re-introduction of disk drive with fixed disk media
1979 – IBM 3370 introduced thin-film heads, 571 MB, non-removable
1979 – IBM 0680 Piccolo – 64.5 megabytes, six 8-inch disks, first 8-inch HDD
1980 – The IBM 3380 was the world's first gigabyte-capacity disk drive. Two head disk assemblies (essentially two HDDs) were packaged in a cabinet the size of a refrigerator that weighed 1,000 lb (about 450 kg).
1980 – Seagate releases the first 5.25-inch hard drive, the ST-506; it had a 5-megabyte capacity, weighed 5 pounds (2.3 kilograms), and cost US$1,500
1982 – HP 7935 404 megabyte, 7-platter hard drive for minicomputers, HP-IB bus, $27,000
1983 – RO351/RO352, the first 3-inch drive, released with a capacity of 10 megabytes
1986 – Standardization of SCSI
1988 – PrairieTek 220 – 20 megabytes, two 2.5-inch disks, first 2.5-inch HDD
1989 – Jimmy Zhu and H. Neal Bertram from UCSD proposed exchange decoupled granular microstructure for thin-film disk storage media, still used today.
1990 – IBM 0681 "Redwing" – 857 megabytes, twelve 5.25-inch disks. First HDD with PRML Technology (Digital Read Channel with 'partial-response maximum-likelihood' algorithm).
1991 – Areal Technology MD-2060 – 60 megabytes, one 2.5-inch disk platter. First commercial hard drive with platters made from glass.
1991 – IBM 0663 "Corsair" – 1,004 megabytes, eight 3.5-inch disks; first HDD using magnetoresistive heads
1991 – Intégral Peripherals 1820 "Mustang" – 21.4 megabytes, one 1.8-inch disk, first 1.8-inch HDD
1992 – HP Kittyhawk – 20 MB, first 1.3-inch hard-disk drive
1992 – Seagate ships the first 7,200-rpm hard drive, the Barracuda
1993 – IBM 3390 model 9, the last Single Large Expensive Disk drive announced by IBM
1994 – IBM introduces Laser Textured Landing Zones (LZT)
1994 – Maxtor introduces the first 5 mm thick hard drive.
1996 – Seagate ships the first 10,000-rpm hard drive, the Cheetah
1997 – IBM Deskstar 16 GB "Titan" – 16,800 megabytes, five 3.5-inch disks; first Giant Magnetoresistance (GMR) heads
1997 – Seagate introduces the first hard drive with fluid bearings
1997 – Seagate introduces the first 7,200-rpm ATA hard drive, Medalist Pro 6530 and Medalist Pro 9140
1998 – UltraDMA/33 and ATAPI standardized
1999 – IBM releases the Microdrive in 170 MB and 340 MB capacities
2000 – Seagate ships the first 15,000-rpm hard drive, the Cheetah X15
2002 – (Parallel) ATA breaks 137 GB (128 GiB) addressing space barrier
2002 – Seagate ships the first Serial ATA hard drives
2003 – IBM sells disk drive division to Hitachi
2004 – MK2001MTN, the first 0.85-inch drive, released by Toshiba with a capacity of 2 gigabytes
2005 – Serial ATA 3 Gbit/s standardized
2005 – Seagate introduces Tunnel MagnetoResistive Read Sensor (TMR) and Thermal Spacing Control
2005 – Introduction of faster SAS (Serial Attached SCSI)
2005 – First perpendicular magnetic recording (PMR) HDD shipped: Toshiba 1.8-inch 40/80 GB
2006 – First 200 GB 2.5-inch hard drive utilizing perpendicular recording (Toshiba)
2006 - Samsung announces a hybrid hard drive developed jointly with Microsoft, which has large amounts of flash memory as cache to improve performance, meant as a stopgap between hard drives and solid-state drives. In 2010, Seagate announced its version of the drive, calling it an SSHD.
2007 - Hitachi is the first to offer a 1 TB hard drive in a 3.5 inch form factor.
2009 - Western Digital is the first to offer a 1 TB hard drive in a 2.5 inch form factor.
2009 – Western Digital ships first HDD with dual stage piezoelectric actuator
2010 – First hard drive manufactured using the Advanced Format of 4,096-byte sectors instead of 512-byte sectors.
2011 - Samsung announces 1 TB of capacity per 3.5 inch hard drive platter.
2012 – TDK demonstrates 2 TB on a single 3.5-inch platter
2012 – WDC acquires HGST operating it as a wholly owned subsidiary. WDC then provides rights to Toshiba, allowing it to re-enter the 3.5-inch desktop hard disk drive market.
2013 – HGST ships first modern helium-filled hard disk drive; He6 with 6 TB on 7 platters (announced in 2012).
2013 – Seagate claims first to ship shingled magnetic recording (SMR) HDDs
2017 – Seagate claims data transfer speeds of 480 MB/s out of a conventional hard drive rotating at 7200 rpm using two independent actuator arms, each holding eight read-write heads (two per platter), and announces plans for launch in 2019 under the Mach.2 trademark. This is similar to the read speeds of low-end SSDs (write speed on an SSD is lower than the read speed). Products with this technology were released in 2021.
2020 - The first EAMR drive, the Ultrastar HC550, shipped in late 2020.
2021 – 20 TB HAMR drives were released by Seagate in January 2021.
2021 - Toshiba releases the first FC-MAMR hard drives.
2022 - Western Digital and Seagate release HDDs with 10 platters.
2024 - Seagate releases the Exos Mozaic 3+ product line with HAMR heads, in sizes up to 32 TB.
Manufacturing history
Manufacturing began in California's Silicon Valley in 1957 with IBM's production shipment of the first HDD, the IBM RAMAC 350. The industry grew slowly at first with three additional companies in the market by 1964: Anelex, Bryant Computer Products and Data Products. The industry grew rapidly in the late 1960s and again in the 1980s reaching a peak of 75 manufacturers in 1984. There have been at least 221 companies manufacturing hard disk drives but most of that industry has vanished through bankruptcy or mergers and acquisitions. Surviving manufacturers are Seagate, Toshiba and Western Digital (WD) with Toshiba as the senior participant having entered the market in 1977, twenty years after IBM started the market.
From beginning and into the early 1980s manufacturing was mainly by US firms in the United States at locations such as Silicon Valley, Los Angeles, Minnesota and Oklahoma City. In the 1980s US firms, beginning with Seagate, began to shift production to Singapore and then other locations in southeast Asia. In a span of seven years, 1983 to 1990, Singapore became the single largest location of HDD production, amounting to 55% of worldwide production. Japanese HDD companies later also moved their production to southeast Asia. Today the three remaining firms all produce their units in the Pacific Rim.
By the 1990s the dollar value of magnetic recording devices produced by companies located in California's "Silicon Valley" exceeded the dollar value of semiconductor devices produced there leading some to suggest that a more appropriate name for this area would be "Iron Oxide Valley," after the magnetic material coating the disks. All three remaining firms still have significant activities in Silicon Valley, but no HDD manufacturing. Western Digital still manufactures its read-write head wafers in Fremont CA.
See also
History of the floppy disk
History of IBM magnetic disk drives
References
Further reading
External links
A brief history of hard drives, retrieved 2014 Jan 11
Timeline, 50 years of hard drives retrieved 2010 Nov 25
HDD Price History.
Hard disk drives
"Technology"
] | 4,281 | [
"History of computing hardware",
"History of computing"
] |
7,065,238 | https://en.wikipedia.org/wiki/Ibandronic%20acid | Ibandronic acid is a bisphosphonate medication used in the prevention and treatment of osteoporosis and metastasis-associated skeletal fractures in people with cancer. It may also be used to treat hypercalcemia (elevated blood calcium levels). It is typically formulated as its sodium salt ibandronate sodium.
It was patented in 1986 by Boehringer Mannheim and approved for medical use in 1996.
Medical uses
Ibandronate is indicated for the treatment and prevention of osteoporosis in post-menopausal women. In May 2003, the US Food and Drug Administration (FDA) approved ibandronate as a daily treatment for post-menopausal osteoporosis. The basis for this approval was a three-year, randomized, double-blind, placebo-controlled trial of women with post-menopausal osteoporosis. Each participant also received daily oral doses of calcium and 400 IU (international units) of vitamin D. At the study's conclusion, both ibandronate dosing regimens significantly reduced the risk of new vertebral fractures by 50–52 percent compared with placebo.
Ibandronate is efficacious for the prevention of metastasis-related bone fractures in multiple myeloma, breast cancer, and certain other cancers.
Adverse effects
In 2008, the US Food and Drug Administration (FDA) issued a communication warning of the possibility of severe and sometimes incapacitating bone, joint or muscle pain. A study conducted by the American Society of Bone and Mineral Research concluded that long-term use of bisphosphonates, including Boniva, may increase the risk of a rare but serious fracture of the femur. The drug also has been associated with osteonecrosis of the jaw, a relatively rare but serious condition.
Pharmacology
Mechanism of action
Nitrogen-containing bisphosphonates, which include ibandronate, pamidronate and alendronate, exert their effects on osteoclasts mainly by inhibiting the synthesis of isoprenoid lipids such as isopentenyl diphosphate (IPP), farnesyl diphosphate (FPP), and geranylgeranyl diphosphate (GGPP) via the mevalonate pathway. These isoprenoids are used in the post-translational modification (prenylation) of small GTPases such as Ras, Rho, and Rac. These prenylated GTPases are necessary for various cellular processes including osteoclast morphology, endosome trafficking, and apoptosis.
Society and culture
Brand names
Ibandronic acid is sold under the brand names Boniva, Bondronat, Bonviva, Bandrone, Ibandrix, Adronil, Bondrova, Bonprove, and Fosfonat.
References
Amines
Bisphosphonates
Farnesyl pyrophosphate synthase inhibitors
Drugs developed by Hoffmann-La Roche
Drugs developed by Genentech
Drugs developed by GSK plc | Ibandronic acid | [
"Chemistry"
] | 643 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
7,065,636 | https://en.wikipedia.org/wiki/Pipefitter | A pipefitter or steamfitter is a tradesman who installs, assembles, fabricates, maintains, and repairs mechanical piping systems. Pipefitters usually begin as helpers or apprentices. Journeyman pipefitters deal with industrial/commercial/marine piping and heating/cooling systems. Typical industrial process pipe is under high pressure, which requires metals such as carbon steel, stainless steel, and many different alloy metals fused together through precise cutting, threading, grooving, bending, and welding. A plumber concentrates on lower pressure piping systems for sewage and potable tap water in the industrial, commercial, institutional, or residential atmosphere. Utility piping typically consists of copper, PVC, CPVC, polyethylene, and galvanized pipe, which is typically glued, soldered, or threaded. Other types of piping systems include steam, ventilation, hydraulics, chemicals, fuel, and oil.
In Canada, pipefitting is classified as a compulsory trade, and carries a voluntary "red seal" inter-provincial standards endorsement. Pipefitter apprenticeships are controlled and regulated provincially, and in some cases allow for advance standing in similar trades upon completion.
In the United States, many states require pipefitters to be licensed. Requirements differ from state to state, but most include a four- to five-year apprenticeship. Union pipefitters are required to pass an apprenticeship test (often called a "turn-out exam") before becoming a licensed journeyman. Others can be certified by NCCER (formerly the National Center for Construction Education and Research).
Occupational summary
Pipefitters install, assemble, fabricate, maintain, repair, and troubleshoot pipe carrying fuel, chemicals, water, steam, and air in heating, cooling, lubricating, and various other process piping systems. Pipefitters are employed in the maintenance departments of power stations, refineries, offshore installations, factories, and similar establishments, or by pipefitting contractors.
Scope of work
Blueprint reading
Detailing
CAD drawing coordinators
Layout
Pipe threading
Pipe grinding
Plasma cutting
Gas arc cutting
Rigging
Brazing
Soldering
Mitering
Tube bending
Valve installation and repair
Mechanical pipe cutting and grooving
Supports and hanger installation
Preparation and installation of medical gas piping
Welding (GMAW, TIG, SMAW, orbital)
Trade groups
In North America, union pipefitters are members of the United Association. Wages vary from area to area, based on demands for experienced personnel and existing contracts between local unions and contractors. The United Association is also affiliated with the piping trades unions in Ireland and Australia.
Differences between pipefitting and pipelaying
Pipefitters should not be confused with pipelayers. Both trades involve pipe and valves and both use some of the same tools. However, pipelayers usually work outside, laying pipe underground or on the seabed, whereas pipefitters typically work inside, installing piping in buildings, aeroplanes, or ships.
Occupational hazards
Pipefitters are often exposed to hazardous or dangerous materials, such as asbestos, lead, ammonia, steam, flammable gases, various resins and solvents including benzene, and various refrigerants. Much progress was made in the 20th century toward eliminating or reducing hazardous-materials exposures. Many aspects of hazardous materials are now regulated by law in most countries, including asbestos usage and removal and refrigerant selection and handling. Major occupational hazards that pipefitters face include welding fumes, ultraviolet light, heavy metals, and chlorinated compounds during welding or torch cutting; contact with solvents, adhesives, and epoxies during repair or installation of PVC/ABS pipes; and exposure to materials and liquids in old pipes during repair or removal.
Other occupational hazards include exposure to the weather, heavy lifting, crushing hazards, lacerations, and other risks normal to the construction industry.
See also
Piping and plumbing fitting
References
External links
Pipe Hangers & Support
Construction trades workers
Industrial occupations
Fitter | Pipefitter | [
"Chemistry",
"Engineering"
] | 833 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
7,065,702 | https://en.wikipedia.org/wiki/List%20of%20hyperaccumulators | This article covers known hyperaccumulators, accumulators or species tolerant to the following: Aluminium (Al), Silver (Ag), Arsenic (As), Beryllium (Be), Chromium (Cr), Copper (Cu), Manganese (Mn), Mercury (Hg), Molybdenum (Mo), Naphthalene, Lead (Pb), Selenium (Se) and Zinc (Zn).
See also:
Hyperaccumulators table – 2: Nickel
Hyperaccumulators table – 3: Cd, Cs, Co, Pu, Ra, Sr, U, radionuclides, hydrocarbons, organic solvents, etc.
Hyperaccumulators table – 1
Cs-137 activity was much smaller in leaves of larch and sycamore maple than of spruce: spruce > larch > sycamore maple.
References
Hyperaccumulators
Pollution control technologies
Lists of plants
Science-related lists
Pollution-related lists
Botany | List of hyperaccumulators | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 218 | [
"Lists of plants",
"Plants",
"Lists of biota",
"Phytoremediation plants",
"Pollution control technologies",
"Biodegradation",
"Ecological techniques",
"Botany",
"Environmental engineering",
"Bioremediation",
"Environmental soil science"
] |
7,065,881 | https://en.wikipedia.org/wiki/Ebastine | Ebastine is a H1 antihistamine with low potential for causing drowsiness.
It does not penetrate the blood–brain barrier to a significant amount and thus combines an effective block of the H1 receptor in peripheral tissue with a low incidence of central side effects, i.e. seldom causing sedation or drowsiness.
It was patented in 1983 by Almirall S.A. and came into medical use in 1990. The substance is often provided in micronised form due to its poor water solubility.
Uses
Ebastine is a second-generation H1 receptor antagonist that is indicated mainly for allergic rhinitis and chronic idiopathic urticaria. It is available in 10 and 20 mg tablets and as fast-dissolving tablets, as well as in pediatric syrup. It has a recommended flexible daily dose of 10 or 20 mg, depending on disease severity.
Data from over 8,000 patients in more than 40 clinical trials and studies suggest efficacy of ebastine in the treatment of intermittent allergic rhinitis, persistent allergic rhinitis and other indications.
Safety
Ebastine has shown an overall favourable safety and tolerability profile, with no cognitive/psychomotor impairment, no sedation worse than placebo, and cardiac safety (that is, no QT prolongation). The incidence of the most commonly reported adverse events was comparable between the ebastine and placebo groups, consistent with a favourable safety profile.
While experiments in pregnant animals showed no risk for the unborn, no such data are available in humans. It is not known whether ebastine passes into the breast milk.
Pharmacokinetic profile
After oral administration, ebastine undergoes extensive first-pass metabolism by hepatic cytochrome P450 3A4 into its active carboxylic acid metabolite, carebastine. This conversion is practically complete.
Brand names
Ebastine is available in different formulations (tablets, fast-dissolving tablets and syrup) and commercialized under different brand names around the world, including Ebast, Ebatin, Ebatin Fast, Ebatrol, Atmos, Ebet, Ebastel FLAS, Kestine, KestineLIO, KestinLYO, EstivanLYO, Evastel Z, Eteen (EURO Pharma Ltd.), Tebast (SQUARE), and Ebasten (ACI).
See also
Desloratadine
References
External links
H1 receptor antagonists
Ethers
Piperidines
Aromatic ketones
Peripherally selective drugs | Ebastine | [
"Chemistry"
] | 527 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
7,066,173 | https://en.wikipedia.org/wiki/Gliquidone | Gliquidone (INN, sold under the trade name Glurenorm) is an anti-diabetic medication in the sulfonylurea class. It is classified as a second-generation sulfonylurea. It is used in the treatment of diabetes mellitus type 2. It is marketed by the pharmaceutical company Boehringer Ingelheim (Germany).
Contraindications
Allergy to sulfonylureas or sulfonamides
Diabetes mellitus type 1
Diabetic ketoacidosis
Patients that underwent removal of the pancreas
Acute porphyria
Severe liver disease accompanied by hepatic insufficiency
Conditions in which insulin administration is required (e.g., infectious diseases or major surgical interventions)
Pregnancy or breastfeeding
Pharmacokinetics
Gliquidone is fully metabolized by the liver. Its metabolites are excreted virtually completely with bile (even with long-term administration), thus allowing the use of medication in diabetic patients with kidney disease and diabetic nephropathy.
References
Potassium channel blockers
Imides
Phenol ethers
1-(Benzenesulfonyl)-3-cyclohexylureas
Tetrahydroisoquinolines | Gliquidone | [
"Chemistry"
] | 268 | [
"Imides",
"Functional groups"
] |
7,066,200 | https://en.wikipedia.org/wiki/Contact%20binary%20%28small%20Solar%20System%20body%29 | A contact binary is a small Solar System body, such as a minor planet or comet, that is composed of two bodies that have gravitated toward each other until they touch, resulting in a bilobated, peanut-like overall shape. Contact binaries are distinct from true binary systems such as binary asteroids where both components are separated. The term is also used for stellar contact binaries.
An example of a contact binary is the Kuiper belt object 486958 Arrokoth, which was imaged by the New Horizons spacecraft during its flyby in January 2019.
History
The existence of contact binary asteroids was first speculated upon by planetary scientist Allan F. Cook in 1971, who sought potential explanations for the extremely elongated shape of the Jupiter trojan asteroid 624 Hektor, whose longest axis, according to light curve measurements, is twice as long as its shorter axes. Astronomers William K. Hartmann and Dale P. Cruikshank performed further investigation into Cook's contact binary hypothesis in 1978 and found it to be a plausible explanation for Hektor's elongated shape. They argued that since Hektor is the largest Jupiter trojan, its elongated shape could not have originated from the fragmentation of a larger asteroid. Rather, Hektor is more likely a "compound asteroid" consisting of two similarly-sized primitive asteroids, or planetesimals, that are in contact with each other as a result of a very low-speed collision. Hartmann theorized in 1979 that Jupiter trojan planetesimals formed close together with similar motions in Jupiter's Lagrange points, which allowed for low-speed collisions between planetesimals to take place and form contact binaries. The hypothesis of Hektor's contact binary nature contributed to the growing evidence of the existence of binary asteroids and asteroid satellites, which were not discovered until the Galileo spacecraft's flyby of 243 Ida and its moon Dactyl in 1993.
Until 1989, contact binary asteroids had only been inferred from the high-amplitude U-shape of their light curves. The first visually confirmed contact binary was the near-Earth asteroid 4769 Castalia (formerly 1989 PB), whose double-lobed shape was revealed in high-resolution delay-Doppler radar imaging by the Arecibo Observatory and Goldstone Solar System Radar in August 1989. These radar observations were led by Steven J. Ostro and his team of radar astronomers, who published the results in 1990. In 1994, Ostro and his colleague R. Scott Hudson developed and published a three-dimensional shape model of Castalia reconstructed from the 1989 radar images, providing the first radar shape model of a contact binary asteroid.
In 1992, the Kuiper belt was discovered and astronomers subsequently began observing and measuring light curves of Kuiper belt objects (KBOs) to determine their shapes and rotational properties. In 2002–2003, then-graduate student Scott S. Sheppard and his advisor David C. Jewitt observed the plutino 2001 QG298 with the University of Hawaiʻi's 2.24-m telescope at Mauna Kea, as part of a survey dedicated to measuring the light curves of KBOs. With their results published in 2004, they discovered that 2001 QG298 exhibits a large, U-shaped light curve amplitude characteristic of contact binaries, providing the first evidence of contact binary KBOs. Sheppard and Jewitt identified additional contact binary candidates from other KBOs known to exhibit large light curve amplitudes, hinting that contact binaries are abundant in the Kuiper belt.
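The link between a large light curve amplitude and a contact binary shape rests on a standard relation: for a triaxial ellipsoid with axes a ≥ b ≥ c rotating about c and viewed equator-on, the peak-to-peak amplitude is Δm = 2.5 log₁₀(a/b). The sketch below applies this relation; the 1.14-magnitude input is the amplitude commonly quoted for 2001 QG298, used here purely for illustration.

```python
def axis_ratio_from_amplitude(delta_m: float) -> float:
    """a/b axis ratio implied by a peak-to-peak rotational light curve
    amplitude delta_m (magnitudes), for a triaxial ellipsoid viewed
    equator-on: delta_m = 2.5 * log10(a/b)."""
    return 10 ** (delta_m / 2.5)

# An amplitude of ~1.14 mag implies a projected elongation of ~2.9 --
# hard to sustain as a single equilibrium body, hence the contact
# binary interpretation.
print(f"a/b = {axis_ratio_from_amplitude(1.14):.2f}")
```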
The contact binary nature of comets was first suspected after the Deep Space 1 spacecraft's flyby of 19P/Borrelly in 2001, which revealed a bilobate, peanut-shaped nucleus with a thick neck connecting the two lobes. The nucleus of 1P/Halley has also been described as peanut-shaped by researchers in 2004, based on imagery from the Giotto and Vega probes in 1986. However, the low bifurcation and thick-necked shapes of both of these comet nuclei made it unclear whether they are truly contact binaries. In 2008, the Arecibo Observatory imaged the Halley-type comet 8P/Tuttle in radar and revealed a highly bifurcated nucleus consisting of two distinct spheroidal lobes, providing the first unambiguous evidence of a contact binary comet nucleus. Later radar imaging and spacecraft exploration of the Jupiter-family comet 103P/Hartley in 2010 also revealed a thick-necked, peanut-shaped nucleus similar to 19P/Borrelly's. By that time, half of the comets that had been imaged in detail were known to be bilobate, which implied that contact binaries in the comet population are similarly abundant as contact binaries in other minor planet populations.
Formation and evolution
In the Solar System, contact binary objects typically form when two objects collide at speeds slow enough to prevent disruption of their shapes. However, the mechanisms leading to this formation vary depending on the size and orbital location of the objects.
Near-Earth asteroids
Collisional fragments
Due to their close proximity to the Sun, the evolution of near-Earth asteroid (NEA) shapes and binary systems is dominated by the uneven reflection of sunlight off their surfaces, which causes gradual orbital acceleration by the Yarkovsky effect and gradual rotational acceleration by the Yarkovsky–O'Keefe–Radzievskii–Paddack (YORP) effect.
High-mass ratio and doubly-synchronous binary systems such as 69230 Hermes are plausible sources for contact binaries in the NEA population, since they are subject to the binary YORP effect, which acts over timescales of 1,000–10,000 years to either contract the components' orbits until they contact, or expand their orbits until they become gravitationally detached asteroid pairs. The origin of contact binaries from doubly-synchronous binaries in the NEA population is evident from the fact that very few doubly-synchronous binary NEAs are known, whereas contact binary NEAs are much more common. For such doubly-synchronous binary systems, the tangential and radial impact velocities when the components collide are low enough not to disrupt the shapes of the two bodies.
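For a sense of scale, a natural benchmark for a "slow enough" collision is the mutual escape velocity, v_esc = √(2GM/R). The sketch below evaluates it for an assumed 1-km-diameter rubble pile of density 1 g/cm³; both values are illustrative assumptions, not properties of any specific NEA.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(radius_m: float, density_kg_m3: float) -> float:
    """Surface escape velocity of a homogeneous sphere."""
    mass = (4.0 / 3.0) * math.pi * radius_m**3 * density_kg_m3
    return math.sqrt(2.0 * G * mass / radius_m)

# A 1-km-diameter body (500 m radius) at an assumed 1 g/cm^3:
print(f"{escape_velocity(500.0, 1000.0):.2f} m/s")  # ~0.37 m/s
# Sub-metre-per-second impacts are gentle enough to leave both
# components' shapes intact, merging them into a contact binary.
```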
In 2007, Daniel J. Scheeres proposed that contact binary asteroids in the NEA population can undergo rotational fissioning after being rotationally accelerated by the YORP effect. Depending on the relative sizes and shapes of the fissioned components, there are three possible evolutionary pathways for contact binary NEAs. Firstly, if the primary component is elongated and dominates the mass of the system, the secondary will either escape the system or collide with the primary since the orbits of the fissioned components are unstable. Secondly, if the primary component is elongated and accounts for roughly half of the system's mass, the secondary can temporarily orbit the primary before it will collide with the primary, reforming the contact binary but with a different distribution of the system mass. Thirdly, if the primary is spheroidal and dominates the mass of the system, the fissioned components can remain in long-lasting orbits as a stable binary system. As shown by these cases, it is unlikely that fissioned contact binaries can form stable binaries.
In 2011, Seth A. Jacobson and Scheeres expanded upon their 2007 theory of binary fission and proposed that NEAs can go through repeated cycles of fissioning and reimpacting through the YORP effect.
Trans-Neptunian objects
In the trans-Neptunian region and especially the Kuiper belt, binary systems are thought to have formed from the direct collapse of gas and dust from the surrounding protoplanetary nebula due to streaming instability. Through impacts and gravitational perturbations by the outer planets, the mutual orbits of binary trans-Neptunian objects contract and eventually destabilize to form contact binaries.
Geophysical properties
Impacts on one of the lobes of contact binary rubble pile asteroids do not cause significant disruption to the asteroid as the shockwave produced by the impact is damped by the asteroid's rubble pile structure and then blocked by the discontinuity between the two lobes.
Occurrence
Near-Earth asteroids
In 2022, Anne Virkki and colleagues published an analysis of 191 near-Earth asteroids (NEAs) observed by the Arecibo Observatory radar from December 2017 to 2019. From this sample, they found that 10 of the 33 (~30%) NEAs larger than in diameter were contact binaries, double the previously estimated 14% share of contact binaries of this diameter in the NEA population. Although the sample is small and therefore not statistically significant, it suggests that contact binaries may be more common than previously thought.
Kuiper belt
In 2015–2019, Audrey Thirouin and Scott Sheppard performed a survey of Kuiper belt objects (KBOs) from the plutino (2:3 Neptune resonance) and cold classical (low inclination and eccentricity) populations with the Lowell Discovery Telescope and the Magellan-Baade Telescope. They found that 40–50% of the population of plutinos smaller than in diameter (H ≥ 6) are contact binaries consisting of nearly equal-mass components, whereas at least 10–25% of the population of cold classical KBOs in the same size range are contact binaries. The differing contact binary fractions of these two populations imply that they underwent different formation and evolution mechanisms.
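For reference, the conversion between absolute magnitude H and diameter used in such surveys is the standard relation (not stated in this article), which assumes a geometric albedo pV:

D (km) ≈ (1329 / √pV) × 10^(−H/5),

so H ≥ 6 corresponds to diameters below roughly 265 km for an assumed albedo of pV = 0.1.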
Thirouin and Sheppard continued their survey of KBOs in 2019–2021, focusing on the twotino population in the 1:2 orbital resonance with Neptune. They found that 7–14% of twotinos are contact binaries, which is relatively low albeit similar to the contact binary fraction of the cold classical population. Thirouin and Sheppard noted that the twotinos' contact binary fraction is consistent with predictions by David Nesvorný and David Vokrouhlický in 2019, who suggested that 10–30% of dynamically excited and resonant Kuiper belt populations are contact binaries.
486958 Arrokoth is the first confirmed example of a contact binary KBO, seen through stellar occultations in 2018 and spacecraft imaging in 2019.
A stellar occultation by the KBO 19521 Chaos on 29 March 2023 revealed that it had an apparently bilobate shape across, which could potentially make it the largest known contact binary object in the Solar System. However, the bilobate shape seen in the occultation could well be two binary components transiting each other during the event; this is supported by the smaller-than-expected size of Chaos measured in the occultation.
Comets
Irregular moons
The Cassini spacecraft observed several irregular moons of Saturn at various phase angles while it was in orbit around Saturn from 2004 to 2017, which allowed for the determination of the rotation periods and shapes of the Saturnian irregular moons. In 2018–2019, researchers Tilmann Denk and Stefan Mottola investigated Cassini's irregular moon observations and found that Kiviuq, Erriapus, Bestla, and Bebhionn exhibited exceptionally large light curve amplitudes that may indicate contact binary shapes, or potentially binary (or subsatellite) systems. In particular, the light curve amplitude of Kiviuq is the largest of the irregular moons observed by Cassini, which makes it the most likely candidate for a contact binary or binary moon. Considering that the irregular moons have most likely undergone or were formed by disruptive collisions in the past, it is possible that the fragments of disrupted irregular moons could remain gravitationally bound in orbit around each other, forming a binary system that would eventually become a contact binary.
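As background (a standard rule of thumb, not from this article): for a triaxial ellipsoid with equatorial axes a ≥ b viewed equator-on, the peak-to-peak light curve amplitude is approximately

Δm = 2.5 log10(a/b),

so an amplitude near 0.9 magnitudes already implies an elongation a/b ≈ 2.3, which is difficult to sustain for a single strengthless body and hence suggestive of a contact binary or binary configuration.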
Examples
Comet Churyumov–Gerasimenko and Comet Tuttle are most likely contact binaries, while asteroids suspected of being contact binaries include the unusually elongated 624 Hektor and the bilobate 216 Kleopatra and 4769 Castalia. 25143 Itokawa, which was photographed by the Hayabusa probe, also appears to be a contact binary that has resulted in an elongated, bent body. Asteroid 4179 Toutatis, with its elongated shape as photographed by Chang'e-2, is a contact binary candidate as well. Among the distant minor planets, the icy Kuiper belt object Arrokoth was confirmed to be a contact binary when the New Horizons spacecraft flew past it in 2019. The small main-belt asteroid 152830 Dinkinesh was confirmed to have the first known contact binary satellite after the Lucy probe flew by it on November 1, 2023.
See also
Contact binary (star)
Binary asteroid
Asteroid pair
References
Contact binary (small Solar System body)
Bodies of the Solar System | Contact binary (small Solar System body) | [
"Astronomy"
] | 2,611 | [
"Bodies of the Solar System",
"Astronomical objects",
"Solar System"
] |
7,066,452 | https://en.wikipedia.org/wiki/Casson%20invariant | In 3-dimensional topology, a part of the mathematical field of geometric topology, the Casson invariant is an integer-valued invariant of oriented integral homology 3-spheres, introduced by Andrew Casson.
Kevin Walker (1992) found an extension to rational homology 3-spheres, called the Casson–Walker invariant, and Christine Lescop (1995) extended the invariant to all closed oriented 3-manifolds.
Definition
A Casson invariant is a surjective map λ from oriented integral homology 3-spheres to Z satisfying the following properties:
λ(S³) = 0.
Let Σ be an integral homology 3-sphere. Then for any knot K and for any integer n, the difference

λ(Σ + (1/(n+1))·K) − λ(Σ + (1/n)·K)

is independent of n. Here Σ + (1/n)·K denotes the result of 1/n Dehn surgery on Σ along K.
For any boundary link K ∪ L in Σ, the following expression is zero:

λ(Σ + (1/(m+1))·K + (1/(n+1))·L) − λ(Σ + (1/m)·K + (1/(n+1))·L) − λ(Σ + (1/(m+1))·K + (1/n)·L) + λ(Σ + (1/m)·K + (1/n)·L)
The Casson invariant is unique (with respect to the above properties) up to an overall multiplicative constant.
Properties
If K is the trefoil then

λ(Σ + (1/(n+1))·K) − λ(Σ + (1/n)·K) = 1.
The Casson invariant is 1 (or −1) for the Poincaré homology sphere.
The Casson invariant changes sign if the orientation of M is reversed.
The Rokhlin invariant of M is equal to the Casson invariant mod 2.
The Casson invariant is additive with respect to connected summing of homology 3-spheres.
The Casson invariant is a sort of Euler characteristic for Floer homology.
For any integer n,

λ(Σ + (1/(n+1))·K) − λ(Σ + (1/n)·K) = φ₁(K),

where φ₁(K) is the coefficient of z² in the Alexander–Conway polynomial ∇_K(z), and is congruent (mod 2) to the Arf invariant of K.
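As a quick consistency check (the Conway polynomial of the trefoil is standard background, not taken from this article): the trefoil has ∇(z) = 1 + z², so the coefficient of z² is 1, in agreement with the trefoil property stated above, where the surgery difference equals 1.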
The Casson invariant is the degree 1 part of the Le–Murakami–Ohtsuki invariant.
The Casson invariant for the Seifert manifold is given by the formula:
where
The Casson invariant as a count of representations
Informally speaking, the Casson invariant counts half the number of conjugacy classes of representations of the fundamental group of a homology 3-sphere M into the group SU(2). This can be made precise as follows.
The representation space of a compact oriented 3-manifold M is defined as R(M) = R^irr(M)/PU(2), where R^irr(M) denotes the space of irreducible SU(2) representations of π1(M). For a Heegaard splitting M = W1 ∪ W2 of genus g, the Casson invariant equals (−1)^g/2 times the algebraic intersection of R(W1) with R(W2).
Generalizations
Rational homology 3-spheres
Kevin Walker found an extension of the Casson invariant to rational homology 3-spheres. A Casson-Walker invariant is a surjective map λCW from oriented rational homology 3-spheres to Q satisfying the following properties:
1. λCW(S³) = 0.
2. For every 1-component Dehn surgery presentation (K, μ) of an oriented rational homology sphere M′ in an oriented rational homology sphere M:
where:
m is an oriented meridian of a knot K and μ is the characteristic curve of the surgery.
ν is a generator of the kernel of the natural map H1(∂N(K), Z) → H1(M−K, Z).
is the intersection form on the tubular neighbourhood of the knot, N(K).
Δ is the Alexander polynomial normalized so that the action of t corresponds to an action of the generator of in the infinite cyclic cover of M−K, and is symmetric and evaluates to 1 at 1.
where x, y are generators of H1(∂N(K), Z) such that ⟨x, y⟩ = 1 and ν = δy for an integer δ, and s(p, q) is the Dedekind sum.
Note that for integer homology spheres, Walker's normalization is twice that of Casson's: λCW(M) = 2λ(M).
Compact oriented 3-manifolds
Christine Lescop defined an extension λCWL of the Casson-Walker invariant to oriented compact 3-manifolds. It is uniquely characterized by the following properties:
If the first Betti number of M is zero, λCWL(M) = ½|H1(M)|·λCW(M).
If the first Betti number of M is one,
where Δ is the Alexander polynomial normalized to be symmetric and take a positive value at 1.
If the first Betti number of M is two,
where γ is the oriented curve given by the intersection of two generators of and is the parallel curve to γ induced by the trivialization of the tubular neighbourhood of γ determined by .
If the first Betti number of M is three, then for a, b, c a basis for H1(M, Z),
.
If the first Betti number of M is greater than three, λCWL(M) = 0.
The Casson–Walker–Lescop invariant has the following properties:
When the orientation of M changes, the behavior of λCWL depends on the first Betti number of M: if −M is M with the opposite orientation, then

λCWL(−M) = (−1)^(b1(M)+1) λCWL(M).

That is, if the first Betti number of M is odd the Casson–Walker–Lescop invariant is unchanged, while if it is even it changes sign.
For connect-sums of manifolds
SU(N)
In 1990, C. Taubes showed that the SU(2) Casson invariant of a homology 3-sphere M has a gauge-theoretic interpretation as the Euler characteristic of A/G, where A is the space of SU(2) connections on M and G is the group of gauge transformations. He regarded the Chern–Simons invariant as a circle-valued Morse function on A/G and used invariance under perturbations to define an invariant which he equated with the SU(2) Casson invariant.
H. Boden and C. Herald (1998) used a similar approach to define an SU(3) Casson invariant for integral homology 3-spheres.
References
Selman Akbulut and John McCarthy, Casson's invariant for oriented homology 3-spheres— an exposition. Mathematical Notes, 36. Princeton University Press, Princeton, NJ, 1990.
Michael Atiyah, New invariants of 3- and 4-dimensional manifolds. The mathematical heritage of Hermann Weyl (Durham, NC, 1987), 285–299, Proc. Sympos. Pure Math., 48, Amer. Math. Soc., Providence, RI, 1988.
Hans Boden and Christopher Herald, The SU(3) Casson invariant for integral homology 3-spheres. Journal of Differential Geometry 50 (1998), 147–206.
Christine Lescop, Global Surgery Formula for the Casson-Walker Invariant. 1995,
Nikolai Saveliev, Lectures on the topology of 3-manifolds: An introduction to the Casson Invariant. de Gruyter, Berlin, 1999.
Kevin Walker, An extension of Casson's invariant. Annals of Mathematics Studies, 126. Princeton University Press, Princeton, NJ, 1992.
Geometric topology | Casson invariant | [
"Mathematics"
] | 1,368 | [
"Topology",
"Geometric topology"
] |
6,899,545 | https://en.wikipedia.org/wiki/Secondary%20calculus%20and%20cohomological%20physics | In mathematics, secondary calculus is a proposed expansion of classical differential calculus on manifolds, to the "space" of solutions of a (nonlinear) partial differential equation. It is a sophisticated theory at the level of jet spaces and employing algebraic methods.
Secondary calculus
Secondary calculus acts on the space of solutions of a system of partial differential equations (usually nonlinear equations). When the number of independent variables is zero (i.e. the equations are all algebraic) secondary calculus reduces to classical differential calculus.
All objects in secondary calculus are cohomology classes of differential complexes growing on diffieties. The latter are, in the framework of secondary calculus, the analog of smooth manifolds.
Cohomological physics
Cohomological physics was born with Gauss's theorem, which describes the electric charge contained inside a given surface in terms of the flux of the electric field through the surface itself. Flux is the integral of a differential form and, consequently, a de Rham cohomology class. It is no accident that formulas of this kind, such as the well-known Stokes formula, though a natural part of classical differential calculus, entered modern mathematics from physics.
Classical analogues
All the constructions in classical differential calculus have an analog in secondary calculus. For instance, higher symmetries of a system of partial differential equations are the analog of vector fields on differentiable manifolds. The Euler operator, which associates to each variational problem the corresponding Euler–Lagrange equation, is the analog of the classical differential associating to a function on a variety its differential. The Euler operator is a secondary differential operator of first order, even if, according to its expression in local coordinates, it looks like one of infinite order. More generally, the analog of differential forms in secondary calculus are the elements of the first term of the so-called C-spectral sequence, and so on.
The simplest diffieties are infinite prolongations of partial differential equations, which are subvarieties of infinite jet spaces. The latter are infinite-dimensional varieties that cannot be studied by means of standard functional analysis. On the contrary, the most natural language in which to study these objects is differential calculus over commutative algebras, which must therefore be regarded as a fundamental tool of secondary calculus. On the other hand, differential calculus over commutative algebras makes it possible to develop algebraic geometry as if it were differential geometry.
Theoretical physics
Recent developments of particle physics, based on quantum field theories and its generalizations, have led to understand the deep cohomological nature of the quantities describing both classical and quantum fields. The turning point was the discovery of the famous BRST transformation. For instance, it was understood that observables in field theory are classes in horizontal de Rham cohomology which are invariant under the corresponding gauge group and so on. This current in modern theoretical physics is called Cohomological Physics.
It is notable that secondary calculus and cohomological physics, which developed independently of each other for twenty years, arrived at the same results. Their confluence took place at the international conference Secondary Calculus and Cohomological Physics (Moscow, August 24–30, 1997).
Prospects
A large number of modern mathematical theories harmoniously converges in the framework of secondary calculus, for instance: commutative algebra and algebraic geometry, homological algebra and differential topology, Lie group and Lie algebra theory, differential geometry, etc.
References
I. S. Krasil'shchik, Calculus over Commutative Algebras: a concise user's guide, Acta Appl. Math. 49 (1997) 235–248; DIPS-01/98
I. S. Krasil'shchik, A. M. Verbovetsky, Homological Methods in Equations of Mathematical Physics, Open Ed. and Sciences, Opava (Czech Rep.), 1998; DIPS-07/98.
I. S. Krasil'shchik, A. M. Vinogradov (eds.), Symmetries and conservation laws for differential equations of mathematical physics, Translations of Math. Monographs 182, Amer. Math. Soc., 1999.
J. Nestruev, Smooth Manifolds and Observables, Graduate Texts in Mathematics 220, Springer, 2002, .
A. M. Vinogradov, The C-spectral sequence, Lagrangian formalism, and conservation laws I. The linear theory, J. Math. Anal. Appl. 100 (1984) 1—40; Diffiety Inst. Library.
A. M. Vinogradov, The C-spectral sequence, Lagrangian formalism, and conservation laws II. The nonlinear theory, J. Math. Anal. Appl. 100 (1984) 41–129; Diffiety Inst. Library.
A. M. Vinogradov, From symmetries of partial differential equations towards secondary (`quantized') calculus, J. Geom. Phys. 14 (1994) 146–194; Diffiety Inst. Library.
A. M. Vinogradov, Introduction to Secondary Calculus, Proc. Conf. Secondary Calculus and Cohomology Physics (M. Henneaux, I. S. Krasil'shchik, and A. M. Vinogradov, eds.), Contemporary Mathematics, Amer. Math. Soc., Providence, Rhode Island, 1998; DIPS-05/98.
A. M. Vinogradov, Cohomological Analysis of Partial Differential Equations and Secondary Calculus, Translations of Math. Monographs 204, Amer. Math. Soc., 2001.
External links
The Diffiety Institute
Diffiety School
Homological algebra
Partial differential equations | Secondary calculus and cohomological physics | [
"Mathematics"
] | 1,187 | [
"Fields of abstract algebra",
"Mathematical structures",
"Category theory",
"Homological algebra"
] |
6,899,646 | https://en.wikipedia.org/wiki/Lead-bismuth%20eutectic | Lead-Bismuth Eutectic or LBE is a eutectic alloy of lead (44.5 at%) and bismuth (55.5 at%) used as a coolant in some nuclear reactors, and is a proposed coolant for the lead-cooled fast reactor, part of the Generation IV reactor initiative.
It has a melting point of 123.5 °C/254.3 °F (pure lead melts at 327 °C/621 °F, pure bismuth at 271 °C/520 °F) and a boiling point of 1,670 °C/3,038 °F.
Lead-bismuth alloys with between 30% and 75% bismuth all have melting points below 200 °C/392 °F.
Alloys with between 48% and 63% bismuth have melting points below 150 °C/302 °F.
While lead expands slightly on melting and bismuth contracts slightly on melting, LBE has negligible change in volume on melting.
History
The Soviet Alfa-class submarines used LBE as a coolant for their nuclear reactors throughout the Cold War.
OKB Gidropress (the Russian developers of the VVER-type Light-water reactors) has expertise in LBE reactors. The SVBR-75/100, a modern design of this type, is one example of the extensive Russian experience with this technology.
Gen4 Energy (formerly Hyperion Power Generation), a United States firm connected with Los Alamos National Laboratory, announced plans in 2008 to design and deploy a uranium nitride fueled small modular reactor cooled by lead-bismuth eutectic for commercial power generation, district heating, and desalinization. The proposed reactor, called the Gen4 Module, was planned as a 70 MWth reactor of the sealed modular type, factory assembled and transported to site for installation, and transported back to the factory for refuelling. Gen4 Energy ceased operations in 2018.
Advantages
As compared to sodium-based liquid metal coolants such as liquid sodium or NaK, lead-based coolants have significantly higher boiling points, meaning a reactor can be operated without risk of coolant boiling at much higher temperatures. This improves thermal efficiency and could potentially allow hydrogen production through thermochemical processes.
Lead and LBE also do not react readily with water or air, in contrast to sodium and NaK which ignite spontaneously in air and react explosively with water. This means that lead- or LBE-cooled reactors, unlike sodium-cooled designs, would not need an intermediate coolant loop, which reduces the capital investment required for a plant.
Both lead and bismuth are also excellent radiation shields, absorbing gamma radiation while being virtually transparent to neutrons. In contrast, sodium forms the potent gamma emitter sodium-24 (half-life 15 hours) following intense neutron radiation, requiring a large radiation shield for the primary cooling loop.
As heavy nuclei, lead and bismuth can be used as spallation targets for non-fission neutron production, as in accelerator transmutation of waste (see energy amplifier).
Both lead-based and sodium-based coolants have the advantage of relatively high boiling points compared to water, meaning it is not necessary to pressurise the reactor even at high temperatures. This improves safety as it reduces the probability of a loss-of-coolant accident (LOCA), and allows for passively safe designs. The thermodynamic (Carnot) cycle is also more efficient with a larger temperature difference. However, a disadvantage of higher temperatures is the higher corrosion rate of metallic structural components in LBE, due to their increased solubility in liquid LBE at higher temperatures and to liquid metal embrittlement.
Limitations
Lead and LBE coolant are more corrosive to steel than sodium, and this puts an upper limit on the velocity of coolant flow through the reactor due to safety considerations. Furthermore, the higher melting points of lead and LBE (327 °C and 123.5 °C respectively) may mean that solidification of the coolant may be a greater problem when the reactor is operated at lower temperatures.
Finally, upon neutron irradiation, bismuth-209, the main isotope of bismuth present in LBE coolant, undergoes neutron capture and subsequent beta decay to form polonium-210, a potent alpha emitter. The presence of radioactive polonium in the coolant would require special precautions to control alpha contamination during refuelling of the reactor and handling of components in contact with LBE.
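For reference, the reaction chain is (half-lives from standard nuclear data, not from this article):

209Bi + n → 210Bi (neutron capture)
210Bi → 210Po + β− (half-life ≈ 5.0 days)
210Po → 206Pb + α (half-life ≈ 138 days)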
See also
Subcritical reactor (accelerator-driven system)
References
External links
NEA 2015 LBE Handbook
Fusible alloys
Nuclear reactor coolants
Nuclear materials
Bismuth | Lead-bismuth eutectic | [
"Physics",
"Chemistry",
"Materials_science"
] | 958 | [
"Lead alloys",
"Metallurgy",
"Fusible alloys",
"Materials",
"Nuclear materials",
"Alloys",
"Matter"
] |
6,899,692 | https://en.wikipedia.org/wiki/Tachytrope | A tachytrope is a curve in which the law of the velocity is given. It was first used by American mathematician Benjamin Peirce in A System of Analytic Mechanics, first published in 1855.
References
Sources
Velocity | Tachytrope | [
"Physics",
"Mathematics"
] | 45 | [
"Physical phenomena",
"Physical quantities",
"Applied mathematics",
"Motion (physics)",
"Applied mathematics stubs",
"Vector physical quantities",
"Velocity",
"Wikipedia categories named after physical quantities"
] |
6,899,907 | https://en.wikipedia.org/wiki/Paley%20construction | In mathematics, the Paley construction is a method for constructing Hadamard matrices using finite fields. The construction was described in 1933 by the English mathematician Raymond Paley.
The Paley construction uses quadratic residues in a finite field GF(q) where q is a power of an odd prime number. There are two versions of the construction depending on whether q is congruent to 1 or 3 modulo 4.
Quadratic character and Jacobsthal matrix
Let q be a power of an odd prime. In the finite field GF(q) the quadratic character χ(a) indicates whether the element a is zero, a non-zero square, or a non-square: χ(a) = 0 if a = 0, χ(a) = 1 if a is a non-zero square, and χ(a) = −1 if a is a non-square.
For example, in GF(7) the non-zero squares are 1 = 1² = 6², 4 = 2² = 5², and 2 = 3² = 4². Hence χ(0) = 0, χ(1) = χ(2) = χ(4) = 1, and χ(3) = χ(5) = χ(6) = −1.
The Jacobsthal matrix Q for GF(q) is the q × q matrix with rows and columns indexed by elements of GF(q) such that the entry in row a and column b is χ(a − b). For example, in GF(7), if the rows and columns of the Jacobsthal matrix are indexed by the field elements 0, 1, 2, 3, 4, 5, 6, then

Q = [  0 −1 −1  1 −1  1  1 ]
    [  1  0 −1 −1  1 −1  1 ]
    [  1  1  0 −1 −1  1 −1 ]
    [ −1  1  1  0 −1 −1  1 ]
    [  1 −1  1  1  0 −1 −1 ]
    [ −1  1 −1  1  1  0 −1 ]
    [ −1 −1  1 −1  1  1  0 ]
The Jacobsthal matrix has the properties QQᵀ = qI − J and QJ = JQ = 0, where I is the q × q identity matrix and J is the q × q all-1 matrix. If q is congruent to 1 mod 4 then −1 is a square in GF(q), which implies that Q is a symmetric matrix. If q is congruent to 3 mod 4 then −1 is not a square, and Q is a skew-symmetric matrix. When q is a prime number and rows and columns are indexed by field elements in the usual 0, 1, 2, … order, Q is a circulant matrix. That is, each row is obtained from the row above by cyclic permutation.
Paley construction I
If q is congruent to 3 mod 4 then

H = I + S,  where S = [ 0   jᵀ ]
                      [ −j  Q  ],

is a Hadamard matrix of size q + 1. Here j is the all-1 column vector of length q and I is the (q+1) × (q+1) identity matrix. The matrix H is a skew Hadamard matrix, which means it satisfies H + Hᵀ = 2I.
Paley construction II
If q is congruent to 1 mod 4 then the matrix obtained by replacing all 0 entries in

[ 0  jᵀ ]
[ j  Q  ]

with the matrix

[  1  −1 ]
[ −1  −1 ]

and all entries ±1 with the matrix

±[ 1   1 ]
 [ 1  −1 ]

is a Hadamard matrix of size 2(q + 1). It is a symmetric Hadamard matrix.
Examples
Applying Paley Construction I to the Jacobsthal matrix for GF(7) produces an 8 × 8 Hadamard matrix.
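The construction is easy to check computationally. The following minimal sketch (assuming the block form H = I + S reconstructed above, and valid only for a prime q, since prime powers would need full finite-field arithmetic) builds the Jacobsthal matrix and verifies the Hadamard property HHᵀ = (q+1)I for q = 7:

import numpy as np

def jacobsthal(q):
    # chi(a): 0 if a = 0, +1 if a is a non-zero square mod q, -1 otherwise
    squares = {(a * a) % q for a in range(1, q)}
    chi = lambda a: 0 if a % q == 0 else (1 if a % q in squares else -1)
    return np.array([[chi(a - b) for b in range(q)] for a in range(q)])

def paley_construction_I(q):
    # H = I + S, with S = [[0, j^T], [-j, Q]]; requires q = 3 (mod 4)
    Q = jacobsthal(q)
    S = np.zeros((q + 1, q + 1), dtype=int)
    S[0, 1:] = 1        # top row: j^T
    S[1:, 0] = -1       # left column below the corner: -j
    S[1:, 1:] = Q
    return np.eye(q + 1, dtype=int) + S

H = paley_construction_I(7)
assert (H @ H.T == 8 * np.eye(8, dtype=int)).all()  # Hadamard property holds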
For an example of the Paley II construction when q is a prime power rather than a prime number, consider GF(9). This is an extension field of GF(3) obtained
by adjoining a root of an irreducible quadratic. Different irreducible quadratics produce equivalent fields. Choosing x² + x − 1 and letting a be a root of this polynomial, the nine elements of GF(9) may be written 0, 1, −1, a, a+1, a−1, −a, −a+1, −a−1. The non-zero squares are 1 = (±1)², −a+1 = (±a)², a−1 = (±(a+1))², and −1 = (±(a−1))². The Jacobsthal matrix is
It is a symmetric matrix consisting of nine 3 × 3 circulant blocks. Paley Construction II produces the symmetric 20 × 20 Hadamard matrix,
1- 111111 111111 111111
-- 1-1-1- 1-1-1- 1-1-1-
11 1-1111 ----11 --11--
1- --1-1- -1-11- -11--1
11 111-11 11---- ----11
1- 1---1- 1--1-1 -1-11-
11 11111- --11-- 11----
1- 1-1--- -11--1 1--1-1
11 --11-- 1-1111 ----11
1- -11--1 --1-1- -1-11-
11 ----11 111-11 11----
1- -1-11- 1---1- 1--1-1
11 11---- 11111- --11--
1- 1--1-1 1-1--- -11--1
11 ----11 --11-- 1-1111
1- -1-11- -11--1 --1-1-
11 11---- ----11 111-11
1- 1--1-1 -1-11- 1---1-
11 --11-- 11---- 11111-
1- -11--1 1--1-1 1-1---.
The Hadamard conjecture
The size of a Hadamard matrix must be 1, 2, or a multiple of 4. The Kronecker product of two Hadamard matrices of sizes m and n is a Hadamard matrix of size mn. By forming Kronecker products of matrices from the Paley construction and the 2 × 2 matrix

H₂ = [ 1   1 ]
     [ 1  −1 ],

Hadamard matrices of every permissible size up to 100 except for 92 are produced. In his 1933 paper, Paley says “It seems probable that, whenever m is divisible by 4, it is possible to construct an orthogonal matrix of order m composed of ±1, but the general theorem has every appearance of difficulty.” This appears to be the first published statement of the Hadamard conjecture. A matrix of size 92 was eventually constructed by Baumert, Golomb, and Hall, using a construction due to Williamson combined with a computer search. Currently, Hadamard matrices have been shown to exist for all orders m divisible by 4 with m < 668.
See also
Paley biplane
Paley graph
References
Matrices | Paley construction | [
"Mathematics"
] | 1,384 | [
"Matrices (mathematics)",
"Mathematical objects"
] |
6,900,166 | https://en.wikipedia.org/wiki/Utility%20submeter | Utility sub-metering is a system that allows a landlord, property management firm, condominium association, homeowners association, or other multi-tenant property to bill tenants for individual measured utility usage. The approach makes use of individual water meters, gas meters, or electricity meters.
Sub-metering may also refer to the monitoring of the electrical consumption of individual equipment within a building, such as HVAC, indoor and outdoor lighting, refrigeration, kitchen equipment and more. In addition to the "main load" meter used by utilities to determine overall building consumption, submetering utilizes individual "submeters" that allow building and facility managers to have visibility into the energy use and performance of their equipment, creating opportunities for energy and capital expenditure savings.
Overview
Typically a multi-tenant dwelling has either one master meter for the entire property or a meter for each building, and the property is responsible for the entire utility bill. Submetering allows property owners who supply utilities to their tenants to account for each tenant's usage in measurable terms. By billing each tenant fairly for their portion, submetering promotes conservation and offsets the expense of bills generated from a master meter, as well as maintenance and improvements for well water, lagoon, or septic systems. Submetering is legally allowable in most states and municipalities, but owners should consult a utility management vendor for assistance with local and state compliance and regulations.
Typical users of submetering are mobile home parks, apartments, condominiums, townhouses, student housing, and commercial areas. Usually, utility submetering is used in situations where the local utility cannot or will not individually meter the utility in question. Municipal utility companies are often reluctant to take on metering individual spaces for several reasons. One reason is that rental-space tenants tend to be more transient and are more difficult to collect from. By billing only the owner, utilities can place liens on real property if not paid (as opposed to tenants, whom they may not know exist or who have little to lose if they move without paying). Utilities also generally prefer not to have water meters beyond their easement (i.e., the property boundary), since leaks in a service line would be before the meter and could be of less concern to a property owner. Other reasons include difficulty in getting access to meters for reading, or electrical systems and plumbing not suitable for submetering.
Before submetering, many landlords either included the utility cost in the bulk price of the rent or lease, or divided the utility usage among the tenants in some way, such as equally, by square footage (via allocation methods often called RUBS, Ratio Utility Billing System), or by some other means. Without a meter to measure individual usage, there is less incentive to identify building inefficiencies, since the other tenants or the landlord may pay all or part of those costs. Submetering creates awareness of water and energy conservation because landlords and tenants alike know what they will pay for these inefficiencies if they are not attended to. Conservation also allows property owners to keep the cost of rent reasonable and fair for all units regardless of how much water or energy each consumes.
On the other hand, submetering provides an opportunity for building owners to shift their rising electricity costs to tenants who lack ownership or control over thermal efficiency of the structure, its insulation, windows, and major energy consuming appliances. Landlords may attempt to deem their charges for electric service as "additional rent" making tenants subject to eviction for nonpayment of electric bills, which would not be possible if they were direct customers of the utility. The Ontario Energy Board in August 2009 nullified all landlord submetering and allowed future submetering only upon informed tenant consent, including provision of third party energy audits to tenants to enable them to judge the total cost of rent plus electricity.
Some submetering products connect with software that provides consumption data. This data provides users with the information to locate leaks and high-consumption areas. Users can apply this data to implement conservation or renovation projects to lower usage & costs, meet government mandates, or participate in green building programs such as LEED and green globes.
System design
A submetering system typically includes a "master meter", which is owned by the utility supplying the water, electricity, or gas, with overall usage billed directly to the property owner. The property owner or manager then places their own private meters on individual tenant spaces to determine individual usage levels and bill each tenant for their share. In some cases, the landlord might add the usage cost to the regular rent or lease bill. In other cases, a third party might read, bill, and possibly even collect for the service. Some of these companies also install and maintain meters and reading systems.
Panel or circuit submeters are used to measure resource use of the same system for added security, economic, reliability, and behavioral benefits. These provide important insights into resource consumption of building systems and equipment working in the same series. Submeters can measure use of a single panel, or multiple points within a panel system using single-point, multi-point, and branch circuit submeters.
The latest trend in submetering is Automatic Meter Reading, or AMR. This technology is used to get from meter reading to billing by an automated electronic means. This can be by handheld computers that collect data using touch wands, walk or drive-by radio, fixed network systems where the meter has a transmitter or transceiver that sends the data to a central location, or transmission via Wi-Fi, cellular, or Internet connections.
Although not technically submetering, an alternate method of utility cost allocation called RUBS (Ratio Utility Billing Systems) is sometimes used to allocate costs to tenants when true submetering is not practical or not possible due to plumbing or wiring constraints. This method divides utility costs by square footage, number of occupants, or some other combination of cost ratios.
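As an illustration of the ratio idea (a minimal sketch with invented numbers and names, not any vendor's actual method), a square-footage RUBS allocation can be computed as follows:

def allocate_by_sqft(total_bill, units):
    # units: mapping of unit id -> square footage
    total_sqft = sum(units.values())
    return {uid: round(total_bill * sqft / total_sqft, 2)
            for uid, sqft in units.items()}

# A $1200 master-meter bill split across three units by floor area
print(allocate_by_sqft(1200.00, {"A": 850, "B": 600, "C": 1050}))
# {'A': 408.0, 'B': 288.0, 'C': 504.0}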
Submetering in the world
Submeters take many forms. For example, central heating in apartment blocks in Belgium, Germany and Switzerland is sometimes submetered with liquid filled calibrated vials, known as heat cost allocators, attached to each of the heating radiators. The metering company visits the apartments about once a year and reads the liquid level and replaces the vials. Some apartment owners have replaced the vials with electronic submeters that transmit temperature readings via radio to a master unit in each apartment. The master unit in turn transmits collated readings to the utility company, thereby saving both labour costs and inconvenience to both tenant and landlord. The master unit displays a number representing the current total of "heating value".
Submetering history and laws
The concept of submetering was effectively "invented" sometime in the 1920s, when many laws currently affecting submetering were written. Submetering was not widespread until the energy crisis in the mid-1970s, which prompted an increase in submetering for gas and electric usage. Water submetering began increasing nationally in the mid-1990s when water and wastewater prices started rising. However, submetering did not take hold in the property management world until the late 1980s, driven by the ever-increasing costs associated with utilities and a society more aware of environmental conservation.
Utility submetering has its roots in Denmark. In 1902 two Danish brothers, Axel and Odin Clorius, established Clorius Controls. The company commenced work on developing and producing a range of self-acting temperature controllers. In 1924 Clorius received its first patent for a heat cost allocator. The device was meant to measure energy usage in apartments built with a common boiler heating system. The device was attached to each radiator in an apartment unit. By measuring energy usage at each radiator, a consumption-based utility bill could be prepared for each unit.
Utilities submetered
Natural Gas
Water (potable or non-potable)
Hot water (for space heating or domestic service)
Electricity
HVAC (few companies offer this technology)
Cable television
Steam
Solar Thermal Generation
Onsite Power Generation
See also
Automatic meter reading
Distributed generation
Feed-in Tariff
Flow measurement
Net metering
Smart meter
References
Public utilities
Flow meters
Water supply | Utility submeter | [
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 1,678 | [
"Hydrology",
"Measuring instruments",
"Flow meters",
"Environmental engineering",
"Water supply",
"Fluid dynamics"
] |
6,900,318 | https://en.wikipedia.org/wiki/Building%20insulation | Building insulation is material used in a building (specifically the building envelope) to reduce the flow of thermal energy. While the majority of insulation in buildings is for thermal purposes, the term also applies to acoustic insulation, fire insulation, and impact insulation (e.g. for vibrations caused by industrial applications). Often an insulation material will be chosen for its ability to perform several of these functions at once.
Since prehistoric times, humans have created thermal insulation with materials such as animal fur and plants. With the agricultural development, earth, stone, and cave shelters arose. In the 19th century, people started to produce insulated panels and other artificial materials. Now, insulation is divided into two main categories: bulk insulation and reflective insulation. Buildings typically use a combination.
Insulation is an important economic and environmental investment for buildings. By installing insulation, buildings use less energy for heating and cooling and occupants experience less thermal variability. Retrofitting buildings with further insulation is an important climate change mitigation tactic, especially when buildings are heated by oil, natural gas, or coal-based electricity. Local and national governments and utilities often have a mix of incentives and regulations to encourage insulation efforts on new and renovated buildings as part of efficiency programs in order to reduce grid energy use and its related environmental impacts and infrastructure costs.
Insulation
The definition of thermal insulation
Thermal insulation usually refers to the use of appropriate insulation materials and design adaptations for buildings to slow the transfer of heat through the enclosure and so reduce heat loss and gain. Heat transfer is driven by the temperature difference between indoors and outdoors, and heat may be transferred by conduction, convection, or radiation; the rate of transmission is closely related to the propagating medium. Heat is lost or gained by transmission through the ceilings, walls, floors, windows, and doors. This heat loss and gain is usually unwelcome: it not only increases the load on the HVAC system, wasting energy, but also reduces the thermal comfort of people in the building. Thermal insulation in buildings is an important factor in achieving thermal comfort for its occupants. Insulation reduces unwanted heat loss or gain and can decrease the energy demands of heating and cooling systems. It does not necessarily deal with issues of adequate ventilation and may or may not affect the level of sound insulation. In a narrow sense, insulation can refer just to the insulation materials employed to slow heat loss, such as cellulose, glass wool, rock wool, polystyrene, polyurethane foam, vermiculite, perlite, wood fiber, plant fiber (cannabis, flax, cotton, cork, etc.), recycled cotton denim, straw, animal fiber (sheep's wool), cement, and earth or soil, as well as reflective insulation (also known as a radiant barrier); but it can also involve a range of designs and techniques that address the main modes of heat transfer: conduction, radiation, and convection.
Most of the materials in the above list only retain a large amount of air or other gases between the molecules of the material. The gas conducts heat much less than the solids. These materials can form gas cavities, which can be used to insulate heat with low heat transfer efficiency. This situation also occurs in the fur of animals and birds feathers, animal hair can employ the low thermal conductivity of small pockets of gas, so as to achieve the purpose of reducing heat loss.
The effectiveness of reflective insulation (radiant barrier) is commonly evaluated by the reflectivity (emittance) of the surface with airspace facing to the heat source.
The effectiveness of bulk insulation is commonly evaluated by its R-value, of which there are two versions: metric (SI), with unit K⋅m²/W, and US customary, with unit °F⋅ft²⋅h/BTU; the former is 0.176 times the latter numerically. The reciprocal quantity, the thermal transmittance or U-value, has unit W/(m²⋅K).
For example, in the US the recommended insulation standard for attics is at least R-38 in US units (equivalent to R-6.7, or a U-value of 0.15, in SI units). The equivalent standards in the UK are technically comparable: Approved Document L would normally require an average U-value over the roof area of 0.11 to 0.18 depending on the age of the property and the type of roof construction. Newer buildings have to meet a higher standard than those built under previous versions of the regulations.
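A small sketch of the arithmetic behind these figures (the conversion factor comes from the definition above; the function names are illustrative):

US_TO_SI = 0.176  # multiply a US R-value (°F·ft²·h/BTU) to get SI (K·m²/W)

def r_us_to_si(r_us):
    return r_us * US_TO_SI

def u_value(r_si):
    return 1.0 / r_si  # thermal transmittance in W/(m²·K)

r_si = r_us_to_si(38)          # the attic example: R-38 US
print(round(r_si, 1))          # 6.7  (R-value in SI units)
print(round(u_value(r_si), 2)) # 0.15 (U-value)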
It is important to realise a single R-value or U-value does not take into account the quality of construction or local environmental factors for each building. Construction quality issues can include inadequate vapor barriers and problems with draft-proofing. In addition, the properties and density of the insulation material itself are critical. Most countries have some regime of either inspections or certification of approved installers to make sure that good standards are maintained.
History of thermal insulation
The history of thermal insulation is short compared with that of other building materials, but human beings have long been aware of its importance. In prehistoric times, humans built shelters against wild animals and harsh weather, and in doing so began their exploration of thermal insulation. Prehistoric peoples built their dwellings using animal skins, fur, and plant materials such as reed, flax, and straw. These materials had first been used as clothing, and because their dwellings were temporary, people used the same easily obtained and easily worked materials for shelter. Animal furs and plant products hold a large amount of air between their fibers, creating air cavities that reduce heat exchange.
Later, longer life spans and the development of agriculture meant that people needed fixed places of residence, and earth-sheltered houses, stone houses, and cave dwellings began to emerge. The high density of these materials causes a time-lag effect in thermal transfer, which makes the inside temperature change slowly, keeping the inside of the buildings warm in winter and cool in summer. Because materials like earth and stone are easy to obtain, this design was popular in many places, such as Russia, Iceland, and Greenland.
Organic materials were the first available for building shelters to protect people from bad weather and to help keep them warm. But organic materials like animal and plant fiber do not last long, so these natural materials could not satisfy people's long-term need for thermal insulation, and people began to search for more durable substitutes. In the 19th century, people were no longer satisfied with using natural materials for thermal insulation; they processed organic materials to produce the first insulated panels. At the same time, more and more artificial materials started to emerge, and a large range of artificial thermal insulation materials was developed, e.g. rock wool, fiberglass, foam glass, and hollow bricks.
Significance of thermal insulation
Thermal insulation can play a significant role in buildings: high demands for thermal comfort result in large amounts of energy being consumed to heat all rooms. Around 40% of energy consumption can be attributed to buildings, mainly for heating or cooling. Sufficient thermal insulation is fundamental to ensuring a healthy indoor environment and protecting against structural damage. It is also a key factor in dealing with high energy consumption, as it reduces the heat flow through the building envelope. Good thermal insulation can also bring the following benefits to the building:
Preventing building damage caused by the formation of moisture on the inside of the building envelope. Thermal insulation ensures that room surface temperatures do not fall below a critical level, which avoids condensation and the formation of mould. According to building damage reports, 12.7% and 14% of building damage was caused by mould problems. Without sufficient thermal insulation, high relative humidity inside the building leads to condensation and eventually to mould problems.
Producing a comfortable thermal environment for people living in the building. Good thermal insulation allows sufficiently high temperatures inside the building during the winter, and it also achieves the same level of thermal comfort by offering relatively low air temperature in the summer.
Reducing unwanted heating or cooling energy input. Thermal insulation reduces the heat exchange through the building envelope, which allows the heating and cooling machines to achieve the same indoor air temperature with less energy input.
Planning and examples
How much insulation a house should have depends on building design, climate, energy costs, budget, and personal preference. Regional climates make for different requirements. Building codes often set minimum standards for fire safety and energy efficiency, which can be voluntarily exceeded within the context of sustainable architecture for green certifications such as LEED.
The insulation strategy of a building needs to be based on a careful consideration of the mode of energy transfer and the direction and intensity in which it moves. This may alter throughout the day and from season to season. It is important to choose an appropriate design, the correct combination of materials, and building techniques to suit the particular situation.
United States
The thermal insulation requirements in the USA follow ASHRAE 90.1, the U.S. energy standard for all commercial and some residential buildings. ASHRAE 90.1 considers multiple perspectives, such as prescriptive requirements, building envelope types, and energy cost budget, and it includes some mandatory thermal insulation requirements. All thermal insulation requirements in ASHRAE 90.1 are divided by climate zone, meaning that the amount of insulation needed for a building is determined by the climate zone in which the building is located. The thermal insulation requirements are given as R-values, with the continuous insulation R-value as a second index, and are specified separately for different types of walls (wood-framed, steel-framed, and mass walls).
To determine whether you should add insulation, you first need to find out how much insulation you already have in your home, and where. A qualified home energy auditor will include an insulation check as a routine part of a whole-house energy audit. However, you can sometimes perform a self-assessment in certain areas of the home, such as attics. Here, a visual inspection, along with the use of a ruler, can give you a sense of whether you may benefit from additional insulation. Residential energy audits are often initiated when homeowners notice a gradual increase in their utility bills, which often reflects a poorly insulated attic.
An initial estimate of insulation needs in the United States can be determined by the US Department of Energy's ZIP code insulation calculator.
Russia
In Russia, the availability of abundant and cheap gas has led to poorly insulated, overheated buildings and inefficient consumption of energy. The Russian Center for Energy Efficiency found that Russian buildings are either over- or under-heated, and often consume up to 50 percent more heat and hot water than needed. 53 percent of all carbon dioxide (CO2) emissions in Russia are produced through heating and generating electricity for buildings. However, greenhouse gas emissions from the former Soviet Bloc are still below their 1990 levels.
Energy codes in the Soviet Union began to be established in 1955; the norms and rules of that period first addressed the performance of the building envelope and heat losses, and they formed the basis of norms regulating the energy characteristics of the building envelope. The most recent version of the Russian energy code (SP 50.13330.2012) was published in 2003. The energy codes of Russia were established by experts of government institutes or nongovernmental organizations such as ABOK. The energy code has been revised several times since 1955: the 1995 version reduced energy use per square meter for heating by 20%, and the 2000 version reduced it by 40%. The code also has mandatory requirements on thermal insulation of buildings, accompanied by some voluntary provisions, mainly focused on heat loss from the building shell.
Australia
The thermal insulation requirements of Australia depend on the climate of the building location; minimum insulation requirements are determined by the Building Code of Australia (BCA). Buildings in Australia apply insulation in roofs, ceilings, external walls, and various components of the building (such as veranda roofs in hot climates, bulkheads, and floors). Bulkheads (wall sections between ceilings at different heights) should be insulated to the same level as the ceilings, since they experience the same temperatures. The external walls of Australian buildings should be insulated to decrease all kinds of heat transfer. Besides walls and ceilings, the Australian energy code also requires insulation for some floors. Raised timber floors must have around 400 mm of soil clearance below the lowest timbers to provide sufficient space for insulation, and concrete slabs such as suspended slabs and slab-on-ground should be insulated in the same way.
China
China has varied climatic characteristics, divided by geographical area. There are five climate zones in China that determine building design, including thermal insulation: the very cold zone, the cold zone, the hot summer and cold winter zone, the hot summer and warm winter zone, and the mild zone.
Germany
Germany established its requirements for building energy efficiency in 1977, and the first energy code based on building performance, the Energy Saving Ordinance (EnEV), was introduced in 2002. The 2009 version of the Energy Saving Ordinance increased the minimum R-values of the thermal insulation of the building shell and introduced requirements for air-tightness tests. The Energy Saving Ordinance (EnEV) 2013 clarified the requirement for thermal insulation of the ceiling: if the requirement is not fulfilled at the roof, thermal insulation is needed in accessible ceilings over the upper floor's heated rooms (the U-value must be under 0.24 W/(m²⋅K)).
Netherlands
The building decree (Bouwbesluit) of the Netherlands makes a clear distinction between renovation and newly built houses. New builds count as completely new homes, but new additions and extensions are also considered new builds, as are renovations in which at least 25% of the surface of the integral building is changed or enlarged. Therefore, during thorough renovations, there is a chance that the new construction must meet the new-build insulation requirements of the Netherlands. If the renovation is of a smaller nature, the renovation directive applies. Examples of renovation are post-insulation of a cavity wall and post-insulation of a sloping roof against the roof boarding or under the tiles. Note that every renovation must meet a minimum Rc value of 1.3 m²⋅K/W. If the current insulation has a higher insulation value (the legally obtained level), then that value counts as the lower limit.
New Zealand
Insulation requirements for new houses and small buildings in New Zealand are set out in the Building Code and standard NZS 4128:2009.
Zones 1 and 2 include most of the North Island, including Waiheke Island and Great Barrier Island. Zone 3 includes the Taupo District, Ruapehu District, and the Rangitikei District north of 39°50′ latitude south (i.e. north of and including Mangaweka) in the North Island, the South Island, Stewart Island, and the Chatham Islands.
United Kingdom
Insulation requirements are specified in the Building regulations and in England and Wales the technical content is published as Approved Documents
Document L defines thermal requirements, and while setting minimum standards can allow for the U values for elements such as roofs and walls to be traded off against other factors such as the type of heating system in a whole building energy use assessment.
Scotland and Northern Ireland have similar systems but the detail technical standards are not identical.
The standards have been revised several times in recent years, requiring more efficient use of energy as the UK moves towards a low-carbon economy.
Technologies and strategies in different climates
Cold climates
Strategies in cold climate
In cold conditions, the main aim is to reduce heat flow out of the building. The components of the building envelope (windows, doors, roofs, floors/foundations, walls, and air infiltration barriers) are all important sources of heat loss; in an otherwise well insulated home, windows then become an important source of heat transfer. The resistance to conducted heat loss for standard single glazing corresponds to an R-value of about 0.17 m²⋅K/W, or more than twice that for typical double glazing (compared to 2–4 m²⋅K/W for glass wool batts). Losses can be reduced by good weatherisation, bulk insulation, and minimising the amount of non-insulative (particularly non-solar-facing) glazing. Indoor thermal radiation can also be a disadvantage with spectrally selective (low-e, low-emissivity) glazing. Some insulated glazing systems can double to triple R-values.
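A rough sketch of how these R-values translate into heat flow (steady-state conduction only, Q = A·ΔT/R; the window area and temperature difference are invented for illustration):

def heat_loss_watts(area_m2, delta_t_k, r_si):
    # steady-state conductive loss through an element of resistance r_si
    return area_m2 * delta_t_k / r_si

A, dT = 2.0, 20.0  # a 2 m² window, 20 K indoor-outdoor difference
print(heat_loss_watts(A, dT, 0.17))  # single glazing: ~235 W
print(heat_loss_watts(A, dT, 0.35))  # double glazing (about 2x R): ~114 W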
Technologies in cold climate
The vacuum panels and aerogel wall surface insulation are two technologies that can enhance the energy performance and thermal insulating effectiveness of the residential buildings and commercial buildings in cold climate regions such as New England and Boston. In the past time, the price of thermal insulation materials that displayed high insulated performance was very expensive. With the development of material industry and the booming of science technologies, more and more insulation materials and insulated technologies have emerged during the 20th century, which gives us various options for building insulation. Especially in the cold climate areas, a large amount of thermal insulation is needed to deal with the heat losses caused by cold weather (infiltration, ventilation, and radiation). There are two technologies that are worth discussing:
Exterior insulation system (EIFS) based on Vacuum insulation panels (VIP)
VIPs are noted for their ultra-high thermal resistance: their thermal resistance is four to eight times that of conventional foam insulation materials, which allows a thinner layer of thermal insulation on the building shell compared with traditional materials. VIPs are usually composed of core panels and metallic enclosures. Common materials used to produce core panels are fumed and precipitated silica, open-cell polyurethane (PU), and different types of fiberglass. The core panel is covered by the metallic enclosure, which keeps the core panel in a vacuum environment. Although this material has high thermal performance, its price has remained high over the last twenty years.
Aerogel exterior and interior wall surface insulation
Aerogel was first discovered by Samuel Stephens Kistler in 1931. It is a kind of gel in which the liquid component of the material is replaced by a gas, creating a material that is 99% air. This material has a relatively high R-value of around R-10 per inch, considerably higher than conventional plastic foam insulation materials, due to its designed high porosity. But difficulties in processing and low productivity have limited the development of aerogels, and the cost of this material remains high. Only two companies in the United States offer commercial aerogel products for wall insulation purposes.
Aerogels for glazing
The DOE estimates thermal losses nearing 30% through windows, and thermal gains from sunlight leading to unwanted heating. Due to the high R associated with aerogels, their use for glazing applications has become an area of interest explored by many research institutions. Their implementation, however, must not hinder the primary function of windows: transparency. Typically, aerogels have low transmission and appear hazy, even amongst those considered transparent, which is why they have generally been reserved to wall insulation applications. Eldho Abraham, a researcher at the University of Colorado Boulder, recently demonstrated the capabilities of aerogels by designing a silanized cellulose aerogel (SiCellA) which offers near 99% visible transmission in addition to thermal conductivities which effectively reject or retain heat depending on the interior environment, akin to heating/cooling alterations. This is due to the designed 97.5% porosity of the SiCellA: pores are smaller than the wavelength of visible light, leading to transmission; the pores also minimize contact between the cellulose fibers, leading to lower thermal conductivities. The use of cellulose fibers lends itself to sustainability, as this is a naturally derived fiber sourced from wood pulps. This opens the door not only to aerogels, but also more general wood-based materials implementation in an effort to assist sustainable design alternatives with compounding energy saving effects.
Hot climates
Strategies in hot climate
In hot conditions, the greatest source of heat energy is solar radiation. This can enter buildings directly through windows or it can heat the building shell to a higher temperature than the ambient air, increasing the heat transfer through the building envelope. The Solar Heat Gain Coefficient (SHGC) (a measure of solar heat transmittance) of standard single glazing can be around 78–85%. Solar gain can be reduced by adequate shading from the sun, light-coloured roofing, spectrally selective (heat-reflective) paints and coatings, and various types of insulation for the rest of the envelope. Specially coated glazing can reduce the SHGC to around 10%. Radiant barriers are highly effective for attic spaces in hot climates. In this application, they are much more effective in hot climates than cold climates. For downward heat flow, convection is weak and radiation dominates heat transfer across an air space. Radiant barriers must face an adequate air-gap to be effective.
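As a rough illustration of what these SHGC figures mean, the sketch below estimates solar gain through a window as SHGC × irradiance × area; the irradiance and window size are assumed example values, not figures from any standard.

```python
# Minimal sketch: solar heat gain through glazing, Q = SHGC * G * A.
# G (solar irradiance) and A (window area) are illustrative assumptions.

def solar_gain_watts(shgc: float, irradiance_w_m2: float, area_m2: float) -> float:
    """Estimate solar heat gain (W) transmitted through a window."""
    return shgc * irradiance_w_m2 * area_m2

G = 800.0  # W/m^2, assumed bright-sun irradiance on the glazing
A = 2.0    # m^2, assumed window area

standard = solar_gain_watts(0.80, G, A)  # standard single glazing, ~78-85%
coated = solar_gain_watts(0.10, G, A)    # specially coated glazing, ~10%
print(f"standard glazing: {standard:.0f} W, coated glazing: {coated:.0f} W")
# standard: 1280 W vs coated: 160 W -- roughly an 8x reduction in solar gain
```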
If refrigerative air-conditioning is employed in a hot, humid climate, then it is particularly important to seal the building envelope. Dehumidification of humid air infiltration can waste significant energy. On the other hand, some building designs are based on effective cross-ventilation instead of refrigerative air-conditioning to provide convective cooling from prevailing breezes.
Technologies in hot climate
In hot dry climate regions such as Egypt and other parts of Africa, thermal comfort in summer is the main concern. Nearly half of the energy consumption in urban areas is used by air conditioning systems to satisfy people's demand for thermal comfort, and many developing countries in hot dry regions suffer electricity shortages in summer owing to the increasing use of cooling machines. A technology called the cool roof has been introduced to ameliorate this situation. In the past, architects used thermal mass to improve thermal comfort: heavy construction causes a time-lag effect that slows the transfer of heat during the daytime and keeps the indoor temperature within a certain range (hot dry climate regions usually have a large temperature difference between day and night).
The cool roof is a low-cost technology based on solar reflectance and thermal emittance, using reflective materials and light colors to reflect solar radiation. Solar reflectance and thermal emittance are the two key factors that determine the thermal performance of the roof, and they can also improve the effectiveness of the thermal insulation, since around 30% of the incident solar radiation is reflected back to the sky. The shape of the roof is also a consideration: a curved roof receives less solar energy than conventional shapes. The technology has drawbacks, however: the high reflectivity can cause visual discomfort, and in the heating season the high reflectance and thermal emittance of the roof increase the building's heating load.
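The effect of reflectance on roof heat gain can be sketched with a simple absorbed-flux estimate; the irradiance and reflectance values below are assumed for illustration only.

```python
# Minimal sketch: solar flux absorbed by a roof surface,
# absorbed = (1 - solar reflectance) * incident irradiance.
# Irradiance and reflectance values are illustrative assumptions.

def absorbed_flux(reflectance: float, irradiance_w_m2: float) -> float:
    """Solar flux (W/m^2) absorbed by the roof surface."""
    return (1.0 - reflectance) * irradiance_w_m2

G = 1000.0  # W/m^2, assumed peak solar irradiance on the roof
dark_roof = absorbed_flux(0.10, G)  # assumed dark roof: reflects ~10%
cool_roof = absorbed_flux(0.70, G)  # assumed cool roof: reflects ~70%
print(f"dark roof absorbs {dark_roof:.0f} W/m^2, cool roof {cool_roof:.0f} W/m^2")
# The cool roof absorbs a third as much, cutting the cooling load it drives.
```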
Orientation – passive solar design
Optimal placement of building elements (e.g. windows, doors, heaters) can play a significant role in insulation by considering the impact of solar radiation on the building and the prevailing breezes. Reflective laminates can help reduce passive solar heat in pole barns, garages, and metal buildings.
Construction
See insulated glass and quadruple glazing for discussion of windows.
Building envelope
The thermal envelope defines the conditioned or living space in a house. The attic or basement may or may not be included in this area. Reducing airflow from inside to outside can help to reduce convective heat transfer significantly.
Ensuring low convective heat transfer also requires attention to building construction (weatherization) and the correct installation of insulative materials.
The less natural airflow into a building, the more mechanical ventilation will be required to support human comfort. High humidity can be a significant issue associated with lack of airflow, causing condensation, rotting construction materials, and encouraging microbial growth such as mould and bacteria. Moisture can also drastically reduce the effectiveness of insulation by creating a thermal bridge (see below). Air exchange systems can be actively or passively incorporated to address these problems.
Thermal bridge
Thermal bridges are points in the building envelope that allow heat conduction to occur. Since heat flows through the path of least resistance, thermal bridges can contribute to poor energy performance. A thermal bridge is created when materials create a continuous path across a temperature difference, in which the heat flow is not interrupted by thermal insulation. Common building materials that are poor insulators include glass and metal.
A building design may have limited capacity for insulation in some areas of the structure. A common construction design is based on stud walls, in which thermal bridges are common in wood or steel studs and joists, which are typically fastened with metal. Notable areas that most commonly lack sufficient insulation are the corners of buildings, and areas where insulation has been removed or displaced to make room for system infrastructure, such as electrical boxes (outlets and light switches), plumbing, fire alarm equipment, etc.
Thermal bridges can also be created by uncoordinated construction, for example by closing off parts of external walls before they are fully insulated.
The existence of inaccessible voids within the wall cavity which are devoid of insulation can be a source of thermal bridging.
Some forms of insulation transfer heat more readily when wet, and can therefore also form a thermal bridge in this state.
Heat conduction can be minimized by any of the following: reducing the cross-sectional area of the bridges, increasing the bridge length, or decreasing the number of thermal bridges.
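These three levers fall straight out of the one-dimensional conduction relation Q = k·A·ΔT/L; the sketch below uses assumed values for a steel stud acting as a bridge.

```python
# Minimal sketch: steady-state conduction through a thermal bridge,
# Q = k * A * dT / L (Fourier's law, one-dimensional).
# Material property and geometry values are assumed for illustration.

def conduction_watts(k_w_mk: float, area_m2: float, dT_k: float, length_m: float) -> float:
    """Heat flow (W) through a bridge of conductivity k, area A, length L."""
    return k_w_mk * area_m2 * dT_k / length_m

k_steel = 50.0  # W/(m*K), assumed value for a steel stud
A = 0.0005      # m^2, assumed stud cross-section exposed as a bridge
dT = 20.0       # K, indoor-outdoor temperature difference
L = 0.1         # m, path length through the wall

print(f"bridge heat flow: {conduction_watts(k_steel, A, dT, L):.1f} W")
# Halving A halves Q; doubling L halves Q -- the levers named above.
print(f"halved area: {conduction_watts(k_steel, A / 2, dT, L):.1f} W")
print(f"doubled length: {conduction_watts(k_steel, A, dT, 2 * L):.1f} W")
```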
One method of reducing thermal bridge effects is the installation of an insulation board (e.g. EPS or XPS foam board, wood fibre board, etc.) over the exterior wall. Another method is using insulated lumber framing as a thermal break inside the wall.
Installation
Insulating a building during construction is much easier than retrofitting, because insulation is generally hidden and parts of the building must be deconstructed to reach it.
Depending on the country, there are different regulations as to which type of insulation is the best alternative for buildings, considering energy efficiency and environmental factors. Geographical location also affects the type of insulation needed, as colder climates require a larger investment in installation costs than warmer ones.
Materials
There are essentially two types of building insulation - bulk insulation and reflective insulation. Most buildings use a combination of both types to make up a total building insulation system. The type of insulation used is matched to create maximum resistance to each of the three forms of building heat transfer - conduction, convection, and radiation.
The classification of thermal insulation materials
Corresponding to the three modes of heat exchange, most thermal insulation used in buildings can be divided into two categories: conductive/convective insulators and radiant heat barriers, with more detailed classifications distinguishing individual materials. Many thermal insulation materials work by creating tiny air cavities within the material; these cavities greatly reduce heat exchange through it. There are two exceptions that do not use air cavities as their functional element. One is reflective insulation, which forms a radiation barrier by attaching metal foil to one or both sides of a material facing an air space; it mainly reduces radiative heat transfer, and although the polished foil blocks only radiation, its effect on overall heat transfer can be dramatic. The other is vacuum insulation: vacuum-insulated panels stop all convection and conduction and also greatly reduce radiative heat transfer. The effectiveness of vacuum insulation is limited at the edges of the panels, which can form thermal bridges, and it also depends on the area of the panels.
Conductive and convective insulators
Bulk insulators block conductive heat transfer and convective flow either into or out of a building. Air is a very poor conductor of heat and therefore makes a good insulator. Insulation to resist conductive heat transfer uses air spaces between fibers, inside foam or plastic bubbles and in building cavities like the attic. This is beneficial in an actively cooled or heated building, but can be a liability in a passively cooled building; adequate provisions for cooling by ventilation or radiation are needed.
Fibrous insulation materials
Fibrous materials are made of small-diameter fibers that evenly divide the air space. Commonly used materials are silica, glass, rock wool, and slag wool. Glass fiber and mineral wool are the two most widely used insulation materials of this type.
Cellular insulation materials
Cellular insulation is composed of small cells that are separated from each other. Common cellular materials are glass and foamed plastics such as polystyrene, polyolefin, and polyurethane.
Radiant heat barriers
Radiant barriers work in conjunction with an air space to reduce radiant heat transfer across the air space. Radiant or reflective insulation reflects heat instead of either absorbing it or letting it pass through. Radiant barriers are most often used to reduce downward heat flow, because upward heat flow tends to be dominated by convection. This means that for attics, ceilings, and roofs, they are most effective in hot climates.
They also have a role in reducing heat losses in cool climates. However, much greater insulation can be achieved through the addition of bulk insulators (see above).
Some radiant barriers are spectrally selective and will preferentially reduce the flow of infra-red radiation in comparison to other wavelengths. For instance, low-emissivity (low-e) windows will transmit light and short-wave infra-red energy into a building but reflect the long-wave infra-red radiation generated by interior furnishings. Similarly, special heat-reflective paints are able to reflect more heat than visible light, or vice versa.
Thermal emissivity values probably best reflect the effectiveness of radiant barriers. Some manufacturers quote an 'equivalent' R-value for these products but these figures can be difficult to interpret, or even misleading, since R-value testing measures total heat loss in a laboratory setting and does not control the type of heat loss responsible for the net result (radiation, conduction, convection).
A film of dirt or moisture can alter the emissivity and hence the performance of radiant barriers.
Eco-friendly insulation
Eco-friendly insulation is a term used for insulating products with limited environmental impact. The commonly accepted approach to determining whether an insulation product, or indeed any product or service, is eco-friendly is a life-cycle assessment (LCA). A number of studies have compared the environmental impact of insulation materials in their applications. These comparisons show that the most important factor is that the insulation value of the product meets the technical requirements of the application; only as a second-order step does differentiation between materials become relevant. The report commissioned by the Belgian government from VITO is a good example of such a study.
See also
External wall insulation
R-value (insulation) – includes a list of insulations with R-values
Thermal insulation
Thermal mass
Materials
Building insulation materials
Hempcrete
Insulated glazing
Mineral wool
Packing (firestopping)
Quadruple glazing
Window insulation film
Wool insulation
Design
Cool roof
Green roof
Low-energy building
Passive house
Passive solar building design
Passive solar design
Solar architecture
Superinsulation
Zero energy building
Zero heating building
Construction
Building construction
Building envelope
Building performance
Deep energy retrofit
Weatherization
Other
Condensation
Draught excluder
HVAC
Ventilation
References
External links
Tips for Selecting Roof Insulation
Best Practice Guide Air Sealing & Insulation Retrofits for Single Family Homes
insulate surfaces from water, heat and moisture
Sustainable building
Insulators
Thermal protection
Energy conservation
Heat transfer
Building materials | Building insulation | [
"Physics",
"Chemistry",
"Engineering"
] | 6,554 | [
"Transport phenomena",
"Sustainable building",
"Physical phenomena",
"Heat transfer",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Thermodynamics",
"Matter",
"Building materials"
] |
6,900,845 | https://en.wikipedia.org/wiki/Biodiversity%20informatics | Biodiversity informatics is the application of informatics techniques to biodiversity information, such as taxonomy, biogeography or ecology. It is defined as the application of information technologies to the management, algorithmic exploration, analysis and interpretation of primary data regarding life, particularly at the species level of organization. Modern computer techniques can yield new ways to view and analyze existing information, as well as predict future situations (see niche modelling). The term was coined only around 1992 but, with rapidly increasing data sets, biodiversity informatics has become useful in numerous studies and applications, such as the construction of taxonomic databases or geographic information systems. Biodiversity informatics contrasts with "bioinformatics", which is often used synonymously with the computerized handling of data in the specialized area of molecular biology.
Overview
Biodiversity informatics (different from but linked to bioinformatics) is the application of information technology methods to the problems of organizing, accessing, visualizing and analyzing primary biodiversity data. Primary biodiversity data is composed of names, observations and records of specimens, and genetic and morphological data associated with specimens. Biodiversity informatics may also have to cope with managing information from unnamed taxa, such as that produced by environmental sampling and sequencing of mixed-field samples. The term biodiversity informatics is also used to cover the computational problems specific to the names of biological entities, such as the development of algorithms to cope with variant representations of identifiers such as species names and authorities, and the multiple classification schemes within which these entities may reside according to the preferences of different workers in the field, as well as the syntax and semantics by which the content in taxonomic databases can be made machine queryable and interoperable for biodiversity informatics purposes.
History of the discipline
Biodiversity informatics can be considered to have commenced with the construction of the first computerized taxonomic databases in the early 1970s, and progressed through the subsequent development of distributed search tools towards the late 1990s, including the Species Analyst from the University of Kansas, the North American Biodiversity Information Network (NABIN), CONABIO in Mexico, INBio in Costa Rica, and others; the establishment of the Global Biodiversity Information Facility in 2001; and the parallel development of a variety of niche modelling and other tools to operate on digitized biodiversity data from the mid-1980s onwards. In September 2000, the U.S. journal Science devoted a special issue to "Bioinformatics for Biodiversity", the journal Biodiversity Informatics commenced publication in 2004, and several international conferences through the 2000s have brought together biodiversity informatics practitioners, including the London e-Biosphere conference in June 2009. A supplement to the journal BMC Bioinformatics (Volume 10 Suppl 14) published in November 2009 also deals with biodiversity informatics.
History of the term
According to correspondence reproduced by Walter Berendsohn, the term "Biodiversity Informatics" was coined by John Whiting in 1992 to cover the activities of an entity known as the Canadian Biodiversity Informatics Consortium, a group involved with fusing basic biodiversity information with environmental economics and geospatial information in the form of GPS and GIS. Subsequently, it appears to have lost any obligate connection with the GPS/GIS world and to have become associated with the computerized management of any aspect of biodiversity information.
Digital taxonomy (systematics)
Global list of all species
One major goal for biodiversity informatics is the creation of a complete master list of currently recognised species of the world. This goal has been achieved to a large extent by the Catalogue of Life project which lists >2 million species in its 2022 Annual Checklist. A similar effort for fossil taxa, the Paleobiology Database documents some 100,000+ names for fossil species, out of an unknown total number.
Genus and species scientific names as unique identifiers
Application of the Linnaean system of binomial nomenclature for species, and uninomials for genera and higher ranks, has led to many advantages but also problems with homonyms (the same name being used for multiple taxa, either inadvertently or legitimately across multiple kingdoms), synonyms (multiple names for the same taxon), as well as variant representations of the same name due to orthographic differences, minor spelling errors, variation in the manner of citation of author names and dates, and more. In addition, names can change through time on account of changing taxonomic opinions (for example, the correct generic placement of a species, or the elevation of a subspecies to species rank or vice versa), and also the circumscription of a taxon can change according to different authors' taxonomic concepts. One proposed solution to this problem is the usage of Life Science Identifiers (LSIDs) for machine-machine communication purposes, although there are both proponents and opponents of this approach.
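As a toy illustration of the variant-representation problem, the sketch below normalizes a few spellings of the same binomial. The normalization rules are invented for this example and are far simpler than what production name-reconciliation services use.

```python
# Minimal sketch: collapsing variant citations of the same species name.
# Real reconciliation (authorities, synonyms, homonyms) is far harder;
# these rules are illustrative only.
import re

def normalize_name(raw: str) -> str:
    """Reduce a name string to 'Genus species', dropping author/year."""
    cleaned = re.sub(r"[()]", " ", raw)   # strip parentheses around authorities
    parts = cleaned.split()
    genus, species = parts[0].capitalize(), parts[1].lower()
    return f"{genus} {species}"

variants = [
    "Homo sapiens Linnaeus, 1758",
    "HOMO SAPIENS",
    "Homo sapiens (Linnaeus 1758)",
]
print({normalize_name(v) for v in variants})  # {'Homo sapiens'}
```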
A consensus classification of organisms
Organisms can be classified in a multitude of ways (see main page Biological classification), which can create design problems for biodiversity informatics systems aimed at incorporating either a single or multiple classifications to suit the needs of users, or to guide them towards a single "preferred" system. Whether a single consensus classification system can ever be achieved is probably an open question, but the Catalogue of Life has commissioned activity in this area, which led to a published system proposed in 2015 by M. Ruggiero and co-workers.
Biodiversity Maps
Biodiversity maps provide a cartographic representation of spatial biodiversity data. This data can be used in conjunction with Species Checklists to help with biodiversity conservation efforts. Biodiversity maps can help reveal patterns of species distribution and range changes. This may reflect biodiversity loss, habitat degradation, or changes in species composition. Combined with urban development data, maps can inform land management by modeling scenarios which might impact biodiversity.
Biodiversity maps can be produced in a variety of ways: traditionally range maps were hand-drawn based on literature reports but increasingly large-scale data, e.g. from citizen science projects (e.g. iNaturalist) and digitized museum collections (e.g. VertNet) are used. GIS tools such as ArcGIS or R packages such as dismo can specifically aid in species distribution modeling (ecological niche modeling) and even predict impacts of ecological change on biodiversity. GBIF, OBIS, and IUCN are large web-based repositories of species spatial-temporal data that source many existing biodiversity maps.
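A very small sketch of the mapping idea: given occurrence records as (species, latitude, longitude) tuples, bin them into grid cells and count distinct species per cell, which is the kind of summary underlying many richness maps. The records and grid size here are invented for illustration.

```python
# Minimal sketch: species richness on a coarse lat/lon grid.
# Occurrence records are invented; real data would come from GBIF, OBIS, etc.
from collections import defaultdict

records = [  # (species, latitude, longitude)
    ("Puma concolor", 40.1, -105.3),
    ("Ursus americanus", 40.4, -105.6),
    ("Puma concolor", 40.2, -105.4),
    ("Canis latrans", 35.7, -106.0),
]

CELL = 1.0  # grid cell size in degrees (illustrative)
richness = defaultdict(set)
for species, lat, lon in records:
    cell = (int(lat // CELL), int(lon // CELL))  # floor to cell indices
    richness[cell].add(species)

for cell, species_set in sorted(richness.items()):
    print(cell, len(species_set), sorted(species_set))
# Cell (40, -106) holds 2 species; cell (35, -106) holds 1.
```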
Mobilizing primary biodiversity information
"Primary" biodiversity information can be considered the basic data on the occurrence and diversity of species (or indeed, any recognizable taxa), commonly in association with information regarding their distribution in either space, time, or both. Such information may be in the form of retained specimens and associated information, for example as assembled in the natural history collections of museums and herbaria, or as observational records, for example either from formal faunal or floristic surveys undertaken by professional biologists and students, or as amateur and other planned or unplanned observations including those increasingly coming under the scope of citizen science. Providing online, coherent digital access to this vast collection of disparate primary data is a core Biodiversity Informatics function that is at the heart of regional and global biodiversity data networks, examples of the latter including OBIS and GBIF.
As a secondary source of biodiversity data, relevant scientific literature can be parsed either by humans or (potentially) by specialized information retrieval algorithms to extract the relevant primary biodiversity information that is reported therein, sometimes in aggregated / summary form but frequently as primary observations in narrative or tabular form. Elements of such activity (such as extracting key taxonomic identifiers, keywording / index terms, etc.) have been practiced for many years at a higher level by selected academic databases and search engines. However, for the maximum Biodiversity Informatics value, the actual primary occurrence data should ideally be retrieved and then made available in a standardized form or forms; for example both the Plazi and INOTAXA projects are transforming taxonomic literature into XML formats that can then be read by client applications, the former using TaxonX-XML and the latter using the taXMLit format. The Biodiversity Heritage Library is also making significant progress in its aim to digitize substantial portions of the out-of-copyright taxonomic literature, which is then subjected to optical character recognition (OCR) so as to be amenable to further processing using biodiversity informatics tools.
Standards and protocols
In common with other data-related disciplines, Biodiversity Informatics benefits from the adoption of appropriate standards and protocols in order to support machine-machine transmission and interoperability of information within its particular domain. Examples of relevant standards include the Darwin Core XML schema for specimen- and observation-based biodiversity data developed from 1998 onwards, plus extensions of the same, Taxonomic Concept Transfer Schema, plus standards for Structured Descriptive Data, and Access to Biological Collection Data (ABCD); while data retrieval and transfer protocols include DiGIR (now mostly superseded) and TAPIR (TDWG Access Protocol for Information Retrieval). Many of these standards and protocols are currently maintained, and their development overseen, by Biodiversity Information Standards (TDWG).
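To give a flavour of what a standards-based record looks like, here is a minimal occurrence record keyed by a handful of genuine Darwin Core term names; the values are invented, and a real Darwin Core archive would carry many more fields.

```python
# Minimal sketch: one occurrence record keyed by Darwin Core terms.
# Term names (scientificName, decimalLatitude, ...) are real Darwin Core;
# the record contents are invented for illustration.
occurrence = {
    "occurrenceID": "urn:example:occ:0001",  # invented identifier
    "basisOfRecord": "HumanObservation",
    "scientificName": "Puma concolor",
    "eventDate": "2009-06-01",
    "decimalLatitude": 40.1,
    "decimalLongitude": -105.3,
    "countryCode": "US",
}

# Serialize as one line of a simple CSV-style exchange file.
print(",".join(str(occurrence[k]) for k in sorted(occurrence)))
```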
Current activities
At the 2009 e-Biosphere conference in the U.K., the following themes were adopted, which is indicative of a broad range of current Biodiversity Informatics activities and how they might be categorized:
Application: Conservation / Agriculture / Fisheries / Industry / Forestry
Application: Invasive Alien Species
Application: Systematic and Evolutionary Biology
Application: Taxonomy and Identification Systems
New Tools, Services and Standards for Data Management and Access
New Modeling Tools
New Tools for Data Integration
New Approaches to Biodiversity Infrastructure
New Approaches to Species Identification
New Approaches to Mapping Biodiversity
National and Regional Biodiversity Databases and Networks
A post-conference workshop of key persons with current significant Biodiversity Informatics roles also resulted in a Workshop Resolution that stressed, among other aspects, the need to create durable, global registries for the resources that are basic to biodiversity informatics (e.g., repositories, collections); complete the construction of a solid taxonomic infrastructure; and create ontologies for biodiversity data.
Example projects
Global:
The Global Biodiversity Information Facility (GBIF), and the Ocean Biogeographic Information System (OBIS) (for marine species)
The Species 2000, ITIS (Integrated Taxonomic Information System), and Catalogue of Life projects
Global Names
EOL, The Encyclopedia of Life project
The Consortium for the Barcode of Life project
The Map of Life project
The Reptile Database project
The AmphibiaWeb project
The uBio Universal Biological Indexer and Organizer, from the Woods Hole Marine Biological Laboratory
The Index to Organism Names (ION) from Clarivate Analytics, providing access to scientific names of taxa from numerous journals as indexed in the Zoological Record
The Interim Register of Marine and Nonmarine Genera (IRMNG)
ZooBank, the registry for nomenclatural acts and relevant systematic literature in zoology
The Index Nominum Genericorum, compilation of generic names published for organisms covered by the International Code of Botanical Nomenclature, maintained at the Smithsonian Institution in the U.S.A.
The International Plant Names Index
MycoBank, documenting new names and combinations for fungi
The List of Prokaryotic names with Standing in Nomenclature (LPSN) - Official register of valid names for bacteria and archaea, as governed by the International Code of Nomenclature of Bacteria
The Biodiversity Heritage Library project - digitising biodiversity literature
Wikispecies, open source (community-editable) compilation of taxonomic information, companion project to Wikipedia
TaxonConcept.org, a Linked Data project that connects disparate species databases
Instituto de Ciencias Naturales. Universidad Nacional de Colombia. Virtual Collections and Biodiversity Informatics Unit
ANTABIF. The Antarctic Biodiversity Information Facility gives free and open access to Antarctic Biodiversity data, in the spirit of the Antarctic Treaty.
Genesys, database of plant genetic resources maintained in national, regional and international gene banks
VertNet, Access to vertebrate primary occurrence data from data sets worldwide.
Regional / national projects:
Fauna Europaea
Atlas of Living Australia
Pan-European Species directories Infrastructure (PESI)
Symbiota
iDigBio, Integrated Digitized Biocollections (USA)
i4Life project
Sistema de Información sobre Biodiversidad de Colombia
India Biodiversity Portal (IBP)
Bhutan Biodiversity Portal (BBP)
Weed Identification and Knowledge in the Western Indian Ocean (WIKWIO)
LifeWatch is proposed by ESFRI as a pan-European research (e-)infrastructure to support Biodiversity research and policy-making.
Vermont Atlas of Life
A listing of over 600 current biodiversity informatics related activities can be found at the TDWG "Biodiversity Information Projects of the World" database.
See also
Web-based taxonomy
List of biodiversity databases
References
Further reading
External links
Biodiversity Informatics (journal)
Information science by discipline
Taxonomy (biology)
Computational fields of study | Biodiversity informatics | [
"Technology",
"Biology"
] | 2,606 | [
"Computational fields of study",
"Computing and society",
"Taxonomy (biology)"
] |
6,901,456 | https://en.wikipedia.org/wiki/Gpsd | gpsd is a computer software program that collects data from a Global Positioning System (GPS) receiver and provides the data via an Internet Protocol (IP) network to potentially multiple client applications in a server-client application architecture. Gpsd may be run as a daemon to operate transparently as a background task of the server. The network interface provides a standardized data format for multiple concurrent client applications, such as Kismet or GPS navigation software.
Gpsd is commonly used on Unix-like operating systems. It is distributed as free software under the 2-clause BSD license.
Design
gpsd provides a TCP/IP service by binding to port 2947 by default. It communicates via that socket by accepting commands, and returning results. These commands use a JSON-based syntax and provide JSON responses. Multiple clients can access the service concurrently.
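A minimal client interaction, assuming a gpsd instance is listening on localhost:2947 with a receiver attached, might look like the sketch below; it uses gpsd's ?WATCH command to enable the JSON stream and prints position (TPV) reports. Error handling and reconnection are omitted.

```python
# Minimal sketch: reading position reports from a local gpsd over TCP.
# Assumes gpsd is running on localhost:2947 with a receiver attached.
import json
import socket

with socket.create_connection(("localhost", 2947)) as sock:
    # Ask gpsd to start streaming reports as JSON objects.
    sock.sendall(b'?WATCH={"enable":true,"json":true}\n')
    stream = sock.makefile("r")
    for line in stream:
        report = json.loads(line)
        # TPV (time-position-velocity) reports carry the fix itself.
        if report.get("class") == "TPV" and "lat" in report:
            print(report["lat"], report["lon"], report.get("mode"))
            break  # one fix is enough for the sketch
```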
The application supports many types of GPS receivers with connections via serial ports, USB, and Bluetooth. Starting in 2009, gpsd also supports AIS receivers.
gpsd supports interfacing with the Network Time Protocol (NTP) server ntpd via shared memory to enable setting the host platform's time via the GPS clock.
Authors
gpsd was originally written by Remco Treffkorn with Derrick Brashear, then maintained by Russell Nelson. It is now maintained by Eric S. Raymond.
References
External links
Global Positioning System
Free software programmed in C
Free software programmed in Python
Software using the BSD license | Gpsd | [
"Technology",
"Engineering"
] | 298 | [
"Global Positioning System",
"Wireless locating",
"Aircraft instruments",
"Aerospace engineering"
] |
6,901,481 | https://en.wikipedia.org/wiki/Global%20distribution%20system | A global distribution system (GDS) is a computerised network system owned or operated by a company that enables transactions between travel industry service providers, mainly airlines, hotels, car rental companies, and travel agencies. The GDS mainly uses real-time inventory (e.g. number of hotel rooms available, number of flight seats available, or number of cars available) from the service providers. Travel agencies traditionally relied on GDS for services, products and rates in order to provide travel-related services to the end consumers. Thus, a GDS can link services, rates and bookings consolidating products and services across all three travel sectors: i.e., airline reservations, hotel reservations, car rentals.
GDS is different from a computer reservation system, which is a reservation system used by the service providers (also known as vendors). Primary customers of GDS are travel agents (both online and office-based) who make reservations on various reservation systems run by the vendors. GDS holds no inventory; the inventory is held on the vendor's reservation system itself. A GDS system will have a real-time link to the vendor's database. For example, when a travel agency requests a reservation on the service of a particular airline company, the GDS system routes the request to the appropriate airline's computer reservations system.
Example of a booking facilitation done by an airline GDS
A mirror image of the passenger name record (PNR) in the airline reservations system is maintained in the GDS system. If a passenger books an itinerary containing air segments of multiple airlines through a travel agency, the passenger name record in the GDS system would hold information on their entire itinerary, while each airline they fly on would only have a portion of the itinerary that is relevant to them. This would contain flight segments on their own services and inbound and onward connecting flights (known as info segments) of other airlines in the itinerary. For example, if a passenger books a journey from Amsterdam to London on KLM, London to New York on British Airways, and New York to Frankfurt on Lufthansa through a travel agent and if the travel agent is connected to Amadeus GDS, the PNR in the Amadeus GDS would contain the full itinerary, while the PNR in KLM would show the Amsterdam to London segment along with the British Airways flight as an onward info segment. Likewise, the PNR in the Lufthansa system would show the New York to Frankfurt segment with the British Airways flight as an arrival information segment. Finally, the PNR in British Airways' system would show all three segments, one as a live segment and the other two as arrival and onward info segments.
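The sketch below models this mirroring in a few lines of code: a GDS-side PNR holds every segment, while each airline-side PNR holds its own segments as live segments and adjacent ones as info segments. It is a simplified illustration of the data flow described above, not any real GDS or airline API.

```python
# Minimal sketch: GDS PNR vs per-airline PNR views of one itinerary.
# A simplified model of the mirroring described above, not a real API.
itinerary = [  # (carrier, origin, destination)
    ("KLM", "AMS", "LON"),
    ("BA",  "LON", "NYC"),
    ("LH",  "NYC", "FRA"),
]

gds_pnr = {"segments": itinerary}  # the GDS holds the full itinerary
print("GDS PNR:", gds_pnr["segments"])

def airline_view(carrier: str):
    """Segments as seen in one airline's reservation system."""
    view = []
    for i, (c, o, d) in enumerate(itinerary):
        if c == carrier:
            view.append(("live", c, o, d))
        elif i > 0 and itinerary[i - 1][0] == carrier:
            view.append(("onward info", c, o, d))
        elif i + 1 < len(itinerary) and itinerary[i + 1][0] == carrier:
            view.append(("inbound info", c, o, d))
    return view

for carrier in ("KLM", "BA", "LH"):
    print(carrier, airline_view(carrier))
# BA sees all three segments (one live, two info); KLM and LH see two each.
```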
Some GDS systems also have a dual-use capability for hosting multiple computer reservation systems; in such situations functionally the computer reservations system and the GDS partition of the system behave as if they were separate systems.
Mid-office travel automation
Mid-office automation captures passenger name record (PNR) data from a variety of global distribution system sources (Sabre, Galileo, Amadeus, and Worldspan) and lets travel agencies create custom business rules to validate reservation accuracy, monitor travel policies, perform file finishing, prepare itineraries/invoices and process ticketing.
Quality control software is used for such functions as ensuring reservations are formatted properly, checking for lower fares, and watching for seat availability, upgrades, waitlist clearance, and back-to-back ticketing opportunities. When customized, such tools allow agencies and corporate accounts to monitor virtually any information in global distribution system passenger name records. Such tools also create opportunities for customer relationship management.
Mid-office automation is key to increasing the touchless rate of online adoption.
Future of GDS systems and companies
Global distribution systems in the travel industry originated from a traditional legacy business model that existed to interoperate between airline vendors and travel agents. During the early days of computerized reservation systems, flight ticket reservations were not possible without a GDS. As time progressed, many airline vendors (including budget and mainstream operators) adopted a strategy of 'direct selling' to their wholesale and retail customers (passengers). They invested heavily in their own reservation and direct-distribution channels and partner systems. This minimizes direct dependency on GDS systems to meet sales and revenue targets and allows a more dynamic response to market needs. These technology advancements facilitate cross-selling to partner airlines and via travel agents, reducing the dependency on a dedicated global GDS federating between systems. Multiple price-comparison websites also eliminate the need for a dedicated GDS to supply point-in-time prices and inventory to travel agents and end-customers. Hence, some experts argued that these changes in business models might lead to the complete phasing out of GDS in the airline space by the year 2020. On the other hand, some travel professionals argue that GDSs continue to offer flexibility and bulk-buying capacity that individual airline systems cannot provide, giving customer segments wider choice, since individual airline distribution systems are not designed to interoperate with competitors' systems.
Lufthansa Group announced in June 2015 that it was imposing an additional charge of €16 when booking through an external global distribution system rather than its own systems. It stated that this choice was based on the costs of using external systems being several times higher than those of its own. Several other airlines, including Air France–KLM and Emirates, stated that they were following the development.
However, the hotel and car rental industries continue to benefit from GDSs, especially for last-minute inventory disposal, which brings additional operational revenue. A GDS is useful here because it offers global reach through an existing network at low marginal cost, compared with online air travel bookings. Some GDS companies are also investing in and establishing significant offshore capability to reduce costs, improve their profit margins, and serve their customers directly as business models change.
See also
References
Bibliography
"COMPLEAT mid-office automation software" by Concur Technologies Inc.
"Mid-Office Software solution of the next generation for all companies sizes" by Boenso Travel IT Solutions
"XChange Mid-office" by QuadLabs Technologies Pvt ltd
"The Myth of Online Adoption" at Cornerstone Info Sys
"Fusion of Oracle Mid-Office Travel Management with Online Corporate Travel Tools" at MP Travel Pty Ltd (Australia)
What is the Best Fare in Today's Market" by RightRez, Inc.
"Mid-office / ERP software features for travel companies" by Midoco GmbH.
Travel technology
Business software
Airline tickets | Global distribution system | [
"Technology"
] | 1,426 | [
"Computer reservation systems",
"Computer systems"
] |
6,901,703 | https://en.wikipedia.org/wiki/Algorithm%20characterizations | Algorithm characterizations are attempts to formalize the word algorithm. Algorithm does not have a generally accepted formal definition. Researchers are actively working on this problem. This article will present some of the "characterizations" of the notion of "algorithm" in more detail.
The problem of definition
Over the last 200 years, the definition of the algorithm has become more complicated and detailed as researchers have tried to pin down the term. Indeed, there may be more than one type of "algorithm". But most agree that algorithm has something to do with defining generalized processes for the creation of "output" integers from other "input" integers – "input parameters" arbitrary and infinite in extent, or limited in extent but still variable—by the manipulation of distinguishable symbols (counting numbers) with finite collections of rules that a person can perform with paper and pencil.
The most common number-manipulation schemes—both in formal mathematics and in routine life—are: (1) the recursive functions calculated by a person with paper and pencil, and (2) the Turing machine or its Turing equivalents—the primitive register-machine or "counter-machine" model, the random-access machine model (RAM), the random-access stored-program machine model (RASP) and its functional equivalent "the computer".
When we are doing "arithmetic" we are really calculating by the use of "recursive functions" in the shorthand algorithms we learned in grade school, for example, adding and subtracting.
The proofs that every "recursive function" we can calculate by hand we can compute by machine and vice versa—note the usage of the words calculate versus compute—is remarkable. But this equivalence together with the thesis (unproven assertion) that this includes every calculation/computation indicates why so much emphasis has been placed upon the use of Turing-equivalent machines in the definition of specific algorithms, and why the definition of "algorithm" itself often refers back to "the Turing machine". This is discussed in more detail under Stephen Kleene's characterization.
The following are summaries of the more famous characterizations (Kleene, Markov, Knuth) together with those that introduce novel elements—elements that further expand the definition or contribute to a more precise definition.
[A mathematical problem and its result can be considered as two points in a space, and the solution consists of a sequence of steps or a path linking them. Quality of the solution is a function of the path. There might be more than one attribute defined for the path, e.g. length, complexity of shape, an ease of generalizing, difficulty, and so on.]
Chomsky hierarchy
There is more consensus on the "characterization" of the notion of "simple algorithm".
All algorithms need to be specified in a formal language, and the "simplicity notion" arises from the simplicity of the language. The Chomsky (1956) hierarchy is a containment hierarchy of classes of formal grammars that generate formal languages. It is used for classifying programming languages and abstract machines.
From the Chomsky hierarchy perspective, if the algorithm can be specified on a simpler language (than unrestricted), it can be characterized by this kind of language, else it is a typical "unrestricted algorithm".
Examples: a "general purpose" macro language, like M4 is unrestricted (Turing complete), but the C preprocessor macro language is not, so any algorithm expressed in C preprocessor is a "simple algorithm".
See also Relationships between complexity classes.
Features of a good algorithm
The following are desirable features of a well-defined algorithm, as discussed in Schneider and Gersting (1995):
Unambiguous Operations: an algorithm must have specific, outlined steps. The steps should be exact enough to precisely specify what to do at each step.
Well-Ordered: The exact order of operations performed in an algorithm should be concretely defined.
Feasibility: All steps of an algorithm should be possible (also known as effectively computable).
Input: an algorithm should be able to accept a well-defined set of inputs.
Output: an algorithm should produce some result as an output, so that its correctness can be reasoned about.
Finiteness: an algorithm should terminate after a finite number of instructions.
Properties of specific algorithms that may be desirable include space and time efficiency, generality (i.e. being able to handle many inputs), or determinism.
1881 John Venn's negative reaction to W. Stanley Jevons's Logical Machine of 1870
In early 1870 W. Stanley Jevons presented a "Logical Machine" (Jevons 1880:200) for analyzing a syllogism or other logical form e.g. an argument reduced to a Boolean equation. By means of what Couturat (1914) called a "sort of logical piano [,] ... the equalities which represent the premises ... are "played" on a keyboard like that of a typewriter. ... When all the premises have been "played", the panel shows only those constituents whose sum is equal to 1, that is, ... its logical whole. This mechanical method has the advantage over VENN's geometrical method..." (Couturat 1914:75).
For his part John Venn, a logician contemporary to Jevons, was less than thrilled, opining that "it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines" (italics added, Venn 1881:120). But of historical use to the developing notion of "algorithm" is his explanation for his negative reaction with respect to a machine that "may subserve a really valuable purpose by enabling us to avoid otherwise inevitable labor":
(1) "There is, first, the statement of our data in accurate logical language",
(2) "Then secondly, we have to throw these statements into a form fit for the engine to work with – in this case the reduction of each proposition to its elementary denials",
(3) "Thirdly, there is the combination or further treatment of our premises after such reduction,"
(4) "Finally, the results have to be interpreted or read off. This last generally gives rise to much opening for skill and sagacity."
He concludes that "I cannot see that any machine can hope to help us except in the third of these steps; so that it seems very doubtful whether any thing of this sort really deserves the name of a logical engine."(Venn 1881:119–121).
1943, 1952 Stephen Kleene's characterization
This section is longer and more detailed than the others because of its importance to the topic: Kleene was the first to propose that all calculations/computations—of every sort, the totality of—can equivalently be (i) calculated by use of five "primitive recursive operators" plus one special operator called the mu-operator, or be (ii) computed by the actions of a Turing machine or an equivalent model.
Furthermore, he opined that either of these would stand as a definition of algorithm.
A reader first confronting the words that follow may well be confused, so a brief explanation is in order. Calculation means done by hand, computation means done by Turing machine (or equivalent). (Sometimes an author slips and interchanges the words). A "function" can be thought of as an "input-output box" into which a person puts natural numbers called "arguments" or "parameters" (but only the counting numbers including 0—the nonnegative integers) and gets out a single nonnegative integer (conventionally called "the answer"). Think of the "function-box" as a little man either calculating by hand using "general recursion" or computing by Turing machine (or an equivalent machine).
"Effectively calculable/computable" is more generic and means "calculable/computable by some procedure, method, technique ... whatever...". "General recursive" was Kleene's way of writing what today is called just "recursion"; however, "primitive recursion"—calculation by use of the five recursive operators—is a lesser form of recursion that lacks access to the sixth, additional, mu-operator that is needed only in rare instances. Thus most of life goes on requiring only the "primitive recursive functions."
1943 "Thesis I", 1952 "Church's Thesis"
In 1943 Kleene proposed what has come to be known as Church's thesis:
"Thesis I. Every effectively calculable function (effectively decidable predicate) is general recursive" (First stated by Kleene in 1943 (reprinted page 274 in Davis, ed. The Undecidable; appears also verbatim in Kleene (1952) p.300)
In a nutshell: to calculate any function the only operations a person needs (technically, formally) are the 6 primitive operators of "general" recursion (nowadays called the operators of the mu recursive functions).
Kleene's first statement of this was under the section title "12. Algorithmic theories". He would later amplify it in his text (1952) as follows:
"Thesis I and its converse provide the exact definition of the notion of a calculation (decision) procedure or algorithm, for the case of a function (predicate) of natural numbers" (p. 301, boldface added for emphasis)
(His use of the word "decision" and "predicate" extends the notion of calculability to the more general manipulation of symbols such as occurs in mathematical "proofs".)
This is not as daunting as it may sound – "general" recursion is just a way of making our everyday arithmetic operations from the five "operators" of the primitive recursive functions together with the additional mu-operator as needed. Indeed, Kleene gives 13 examples of primitive recursive functions and Boolos–Burgess–Jeffrey add some more, most of which will be familiar to the reader—e.g. addition, subtraction, multiplication and division, exponentiation, the CASE function, concatenation, etc., etc.; for a list see Some common primitive recursive functions.
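To make the flavour of these operators concrete, the sketch below builds addition and multiplication from zero and successor using the primitive recursion scheme; it is an illustrative encoding in a modern language, not Kleene's formal notation.

```python
# Minimal sketch: addition and multiplication as primitive recursive
# functions, built from the successor operator. Illustrative encoding only.

def succ(n: int) -> int:  # the successor operator
    return n + 1

def add(m: int, n: int) -> int:
    # Primitive recursion on n: add(m, 0) = m; add(m, n+1) = succ(add(m, n)).
    result = m
    for _ in range(n):
        result = succ(result)
    return result

def mul(m: int, n: int) -> int:
    # mul(m, 0) = 0; mul(m, n+1) = add(mul(m, n), m).
    result = 0
    for _ in range(n):
        result = add(result, m)
    return result

assert add(3, 4) == 7 and mul(3, 4) == 12
```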
Why general-recursive functions rather than primitive-recursive functions?
Kleene et al. (cf §55 General recursive functions p. 270 in Kleene 1952) had to add a sixth recursion operator called the minimization-operator (written as μ-operator or mu-operator) because Ackermann (1925) produced a hugely growing function—the Ackermann function—and Rózsa Péter (1935) produced a general method of creating recursive functions using Cantor's diagonal argument, neither of which could be described by the 5 primitive-recursive-function operators. With respect to the Ackermann function:
"...in a certain sense, the length of the computation algorithm of a recursive function which is not also primitive recursive grows faster with the arguments than the value of any primitive recursive function" (Kleene (1935) reprinted p. 246 in The Undecidable, plus footnote 13 with regards to the need for an additional operator, boldface added).
But the need for the mu-operator is a rarity. As indicated above by Kleene's list of common calculations, a person goes about their life happily computing primitive recursive functions without fear of encountering the monster numbers created by Ackermann's function (e.g. super-exponentiation).
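For concreteness, here is the two-argument form of the Ackermann function usually attributed to Rózsa Péter, a later simplification of Ackermann's original three-argument function. Even tiny inputs make the recursion explode, which is the growth behaviour that defeats the five primitive recursive operators.

```python
# Minimal sketch: the textbook two-argument Ackermann function.
# Total and computable, but not primitive recursive -- it grows too fast.

def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
# ackermann(4, 2) already has 19,729 decimal digits; larger arguments
# would overflow Python's recursion limit long before finishing.
```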
1952 "Turing's thesis"
Turing's Thesis hypothesizes the computability of "all computable functions" by the Turing machine model and its equivalents.
To do this in an effective manner, Kleene extended the notion of "computable" by casting the net wider—by allowing into the notion of "functions" both "total functions" and "partial functions". A total function is one that is defined for all natural numbers (positive integers including 0). A partial function is defined for some natural numbers but not all—the specification of "some" has to come "up front". Thus the inclusion of "partial function" extends the notion of function to "less-perfect" functions. Total- and partial-functions may either be calculated by hand or computed by machine.
Examples:
"Functions": include "common subtraction m − n" and "addition m + n"
"Partial function": "Common subtraction" m − n is undefined when only natural numbers (positive integers and zero) are allowed as input – e.g. 6 − 7 is undefined
Total function: "Addition" m + n is defined for all positive integers and zero.
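A direct rendering of these two examples: addition is total on the naturals, while common subtraction is partial and must signal the inputs on which it is undefined. The sketch below uses `None` to mark undefinedness, which is just one illustrative convention.

```python
# Minimal sketch: a total function (addition) and a partial function
# (common subtraction) over the natural numbers. `None` marks "undefined".
from typing import Optional

def add(m: int, n: int) -> int:
    """Total: defined for every pair of naturals."""
    return m + n

def common_subtract(m: int, n: int) -> Optional[int]:
    """Partial: m - n is undefined over the naturals when m < n."""
    if m < n:
        return None  # e.g. 6 - 7 has no value among the naturals
    return m - n

print(add(6, 7))              # 13
print(common_subtract(7, 6))  # 1
print(common_subtract(6, 7))  # None -- undefined
```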
We now observe Kleene's definition of "computable" in a formal sense:
Definition: "A partial function φ is computable, if there is a machine M which computes it" (Kleene (1952) p. 360)
"Definition 2.5. An n-ary function f(x1, ..., xn) is partially computable if there exists a Turing machine Z such that
f(x1, ..., xn) = ΨZ(n)(x1, ..., xn)
In this case we say that [machine] Z computes f. If, in addition, f(x1, ..., xn) is a total function, then it is called computable" (Davis (1958) p. 10)
Thus we have arrived at Turing's Thesis:
"Every function which would naturally be regarded as computable is computable ... by one of his machines..." (Kleene (1952) p.376)
Although Kleene did not give examples of "computable functions" others have. For example, Davis (1958) gives Turing tables for the Constant, Successor and Identity functions, three of the five operators of the primitive recursive functions:
Computable by Turing machine:
Addition (also is the Constant function if one operand is 0)
Increment (Successor function)
Common subtraction (defined only if x ≥ y). Thus "x − y" is an example of a partially computable function.
Proper subtraction x┴y (as defined above)
The identity function: for each i, a function Ui(n)(x1, ..., xn) = xi exists that plucks xi out of the set of arguments (x1, ..., xn)
Multiplication
Boolos–Burgess–Jeffrey (2002) give the following as prose descriptions of Turing machines for:
Doubling: 2p
Parity
Addition
Multiplication
With regards to the counter machine, an abstract machine model equivalent to the Turing machine:
Examples Computable by Abacus machine (cf Boolos–Burgess–Jeffrey (2002))
Addition
Multiplication
Exponentiation (a flow-chart/block diagram description of the algorithm)
Demonstrations of computability by abacus machine (Boolos–Burgess–Jeffrey (2002)) and by counter machine (Minsky 1967):
The six recursive function operators:
Zero function
Successor function
Identity function
Composition function
Primitive recursion (induction)
Minimization (see the sketch below)
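Of the six operators, only minimization requires an unbounded search. In the usual reading, μy[f(y, ...) = 0] returns the least y making f zero, and simply fails to halt when no such y exists, which is exactly how partial functions arise. A minimal sketch:

```python
# Minimal sketch: the minimization (mu) operator.
# mu(f, *args) returns the least y with f(y, *args) == 0; if no such y
# exists, the loop never terminates -- this is where partiality enters.

def mu(f, *args):
    y = 0
    while f(y, *args) != 0:
        y += 1
    return y

# Example: recover integer division m // d as the least y with
# (y + 1) * d > m, encoded as a 0/1-valued test function.
def quotient_test(y: int, m: int, d: int) -> int:
    return 0 if (y + 1) * d > m else 1

print(mu(quotient_test, 17, 5))  # 3, since 17 // 5 == 3
```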
The fact that the abacus/counter-machine models can simulate the recursive functions provides the proof that: If a function is "machine computable" then it is "hand-calculable by partial recursion". Kleene's Theorem XXIX :
"Theorem XXIX: "Every computable partial function φ is partial recursive..." (italics in original, p. 374).
The converse appears as his Theorem XXVIII. Together these form the proof of their equivalence, Kleene's Theorem XXX.
1952 Church–Turing Thesis
With his Theorem XXX Kleene proves the equivalence of the two "Theses"—the Church Thesis and the Turing Thesis. (Kleene can only hypothesize (conjecture) the truth of both theses – these he has not proven):
THEOREM XXX: The following classes of partial functions ... have the same members: (a) the partial recursive functions, (b) the computable functions ..."(p. 376)
Definition of "partial recursive function": "A partial function φ is partial recursive in [the partial functions] ψ1, ... ψn if there is a system of equations E which defines φ recursively from [partial functions] ψ1, ... ψn" (p. 326)
Thus by Kleene's Theorem XXX: either method of making numbers from input-numbers—recursive functions calculated by hand or computed by Turing-machine or equivalent—results in an "effectively calculable/computable function". If we accept the hypothesis that every calculation/computation can be done by either method equivalently we have accepted both Kleene's Theorem XXX (the equivalence) and the Church–Turing Thesis (the hypothesis of "every").
A note of dissent: "There's more to algorithm..." Blass and Gurevich (2003)
The notion of separating out Church's and Turing's theses from the "Church–Turing thesis" appears not only in Kleene (1952) but in Blass-Gurevich (2003) as well. But while there are agreements, there are disagreements too:
"...we disagree with Kleene that the notion of algorithm is that well understood. In fact the notion of algorithm is richer these days than it was in Turing's days. And there are algorithms, of modern and classical varieties, not covered directly by Turing's analysis, for example, algorithms that interact with their environments, algorithms whose inputs are abstract structures, and geometric or, more generally, non-discrete algorithms" (Blass-Gurevich (2003) p. 8, boldface added)
1954 A. A. Markov Jr.'s characterization
Andrey Markov Jr. (1954) provided the following definition of algorithm:
"1. In mathematics, "algorithm" is commonly understood to be an exact prescription, defining a computational process, leading from various initial data to the desired result...."
"The following three features are characteristic of algorithms and determine their role in mathematics:
"a) the precision of the prescription, leaving no place to arbitrariness, and its universal comprehensibility -- the definiteness of the algorithm;
"b) the possibility of starting out with initial data, which may vary within given limits -- the generality of the algorithm;
"c) the orientation of the algorithm toward obtaining some desired result, which is indeed obtained in the end with proper initial data -- the conclusiveness of the algorithm." (p.1)
He admitted that this definition "does not pretend to mathematical precision" (p. 1). His 1954 monograph was his attempt to define algorithm more accurately; he saw his resulting definition—his "normal" algorithm—as "equivalent to the concept of a recursive function" (p. 3). His definition included four major components (Chapter II.3 pp. 63ff):
"1. Separate elementary steps, each of which will be performed according to one of [the substitution] rules... [rules given at the outset]
"2. ... steps of local nature ... [Thus the algorithm won't change more than a certain number of symbols to the left or right of the observed word/symbol]
"3. Rules for the substitution formulas ... [he called the list of these "the scheme" of the algorithm]
"4. ...a means to distinguish a "concluding substitution" [i.e. a distinguishable "terminal/final" state or states]
In his Introduction Markov observed that "the entire significance for mathematics" of efforts to define algorithm more precisely would be "in connection with the problem of a constructive foundation for mathematics" (p. 2). Ian Stewart (cf Encyclopædia Britannica) shares a similar belief: "...constructive analysis is very much in the same algorithmic spirit as computer science...". For more see constructive mathematics and Intuitionism.
Distinguishability and Locality: Both notions first appeared with Turing (1936–1937) --
"The new observed squares must be immediately recognizable by the computer [sic: a computer was a person in 1936]. I think it reasonable to suppose that they can only be squares whose distance from the closest of the immediately observed squares does not exceed a certain fixed amount. Let us stay that each of the new observed squares is within L squares of one of the previously observed squares." (Turing (1936) p. 136 in Davis ed. Undecidable)
Locality appears prominently in the work of Gurevich and Gandy (1980) (whom Gurevich cites). Gandy's "Fourth Principle for Mechanisms" is "The Principle of Local Causality":
"We now come to the most important of our principles. In Turing's analysis the requirement that the action depend only on a bounded portion of the record was based on a human limitiation. We replace this by a physical limitation which we call the principle of local causation. Its justification lies in the finite velocity of propagation of effects and signals: contemporary physics rejects the possibility of instantaneous action at a distance." (Gandy (1980) p. 135 in J. Barwise et al.)
1936, 1963, 1964 Gödel's characterization
1936: A rather famous quote from Kurt Gödel appears in a "Remark added in proof [of the original German publication]" in his paper "On the Length of Proofs", translated by Martin Davis and appearing on pp. 82–83 of The Undecidable. A number of authors—Kleene, Gurevich, Gandy etc. -- have quoted the following:
"Thus, the concept of "computable" is in a certain definite sense "absolute," while practically all other familiar metamathematical concepts (e.g. provable, definable, etc.) depend quite essentially on the system with respect to which they are defined." (p. 83)
1963: In a "Note" dated 28 August 1963 added to his famous paper On Formally Undecidable Propositions (1931) Gödel states (in a footnote) his belief that "formal systems" have "the characteristic property that reasoning in them, in principle, can be completely replaced by mechanical devices" (p. 616 in van Heijenoort). ". . . due to "A. M. Turing's work a precise and unquestionably adequate definition of the general notion of formal system can now be given [and] a completely general version of Theorems VI and XI is now possible." (p. 616). In a 1964 note to another work he expresses the same opinion more strongly and in more detail.
1964: In a Postscriptum, dated 1964, to a paper presented to the Institute for Advanced Study in spring 1934, Gödel amplified his conviction that "formal systems" are those that can be mechanized:
"In consequence of later advances, in particular of the fact that, due to A. M. Turing's work, a precise and unquestionably adequate definition of the general concept of formal system can now be given . . . Turing's work gives an analysis of the concept of "mechanical procedure" (alias "algorithm" or "computational procedure" or "finite combinatorial procedure"). This concept is shown to be equivalent with that of a "Turing machine".* A formal system can simply be defined to be any mechanical procedure for producing formulas, called provable formulas . . . ." (p. 72 in Martin Davis ed. The Undecidable: "Postscriptum" to "On Undecidable Propositions of Formal Mathematical Systems" appearing on p. 39, loc. cit.)
The * indicates a footnote in which Gödel cites the papers by Alan Turing (1937) and Emil Post (1936) and then goes on to make the following intriguing statement:
"As for previous equivalent definitions of computability, which however, are much less suitable for our purpose, see Alonzo Church, Am. J. Math., vol. 58 (1936) [appearing in The Undecidable pp. 100-102]).
Church's definitions encompass so-called "recursion" and the "lambda calculus" (i.e. the λ-definable functions). His footnote 18 says that he discussed the relationship of "effective calculability" and "recursiveness" with Gödel, but that he independently questioned "effective calculability" and "λ-definability":
"We now define the notion . . . of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers18 (or of a λ-definable function of positive integers.
"It has already been pointed out that, for every function of positive integers which is effectively calculable in the sense just defined, there exists an algorithm for the calculation of its value.
"Conversely it is true . . ." (p. 100, The Undecidable).
It would appear from this, and the following, that, as far as Gödel was concerned, the Turing machine was sufficient and the lambda calculus was "much less suitable." He goes on to make the point that, with regards to limitations on human reason, the jury is still out:
("Note that the question of whether there exist finite non-mechanical procedures** not equivalent with any algorithm, has nothing whatsoever to do with the adequacy of the definition of "formal system" and of "mechanical procedure.") (p. 72, loc. cit.)
"(For theories and procedures in the more general sense indicated in footnote ** the situation may be different. Note that the results mentioned in the postscript do not establish any bounds for the powers of human reason, but rather for the potentialities of pure formalism in mathematics.) (p. 73 loc. cit.)
Footnote **: "I.e., such as involve the use of abstract terms on the basis of their meaning. See my paper in Dial. 12(1958), p. 280." (this footnote appears on p. 72, loc. cit).
1967 Minsky's characterization
Minsky (1967) baldly asserts that an algorithm is "an effective procedure" and declines to use the word "algorithm" further in his text; in fact his index entry makes his view plain: "Algorithm, synonym for Effective procedure" (p. 311):
"We will use the latter term [an effective procedure] in the sequel. The terms are roughly synonymous, but there are a number of shades of meaning used in different contexts, especially for 'algorithm'" (italics in original, p. 105)
Other writers (see Knuth below) use the word "effective procedure". This leads one to wonder: What is Minsky's notion of "an effective procedure"? He starts off with:
"...a set of rules which tell us, from moment to moment, precisely how to behave" (p. 106)
But he recognizes that this is subject to a criticism:
"... the criticism that the interpretation of the rules is left to depend on some person or agent" (p. 106)
His refinement? To "specify, along with the statement of the rules, the details of the mechanism that is to interpret them". To avoid the "cumbersome" process of "having to do this over again for each individual procedure" he hopes to identify a "reasonably uniform family of rule-obeying mechanisms". His "formulation":
"(1) a language in which sets of behavioral rules are to be expressed, and
"(2) a single machine which can interpret statements in the language and thus carry out the steps of each specified process." (italics in original, all quotes this para. p. 107)
In the end, though, he still worries that "there remains a subjective aspect to the matter. Different people may not agree on whether a certain procedure should be called effective" (p. 107)
But Minsky is undeterred. He immediately introduces "Turing's Analysis of Computation Process" (his chapter 5.2). He quotes what he calls "Turing's thesis":
"Any process which could naturally be called an effective procedure can be realized by a Turing machine" (p. 108. (Minsky comments that in a more general form this is called "Church's thesis").
After an analysis of "Turing's Argument" (his chapter 5.3)
he observes that "equivalence of many intuitive formulations" of Turing, Church, Kleene, Post, and Smullyan "...leads us to suppose that there is really here an 'objective' or 'absolute' notion. As Rogers [1959] put it:
"In this sense, the notion of effectively computable function is one of the few 'absolute' concepts produced by modern work in the foundations of mathematics'" (Minsky p. 111 quoting Rogers, Hartley Jr (1959) The present theory of Turing machine computability, J. SIAM 7, 114-130.)
1967 Rogers' characterization
In his 1967 Theory of Recursive Functions and Effective Computability Hartley Rogers characterizes "algorithm" roughly as "a clerical (i.e., deterministic, bookkeeping) procedure . . . applied to . . . symbolic inputs and which will eventually yield, for each such input, a corresponding symbolic output" (p. 1). He then goes on to describe the notion "in approximate and intuitive terms" as having 10 "features", 5 of which he asserts that "virtually all mathematicians would agree [to]" (p. 2). The remaining 5 he asserts "are less obvious than *1 to *5 and about which we might find less general agreement" (p. 3).
The 5 "obvious" are:
1 An algorithm is a set of instructions of finite size,
2 There is a capable computing agent,
3 "There are facilities for making, storing, and retrieving steps in a computation"
4 Given #1 and #2 the agent computes in "discrete stepwise fashion" without use of continuous methods or analogue devices,
5 The computing agent carries the computation forward "without resort to random methods or devices, e.g., dice" (in a footnote Rogers wonders if #4 and #5 are really the same)
The remaining 5, which he opens to debate, are:
6 No fixed bound on the size of the inputs,
7 No fixed bound on the size of the set of instructions,
8 No fixed bound on the amount of memory storage available,
9 A fixed finite bound on the capacity or ability of the computing agent (Rogers illustrates with example simple mechanisms similar to a Post–Turing machine or a counter machine),
10 A bound on the length of the computation -- "should we have some idea, 'ahead of time', how long the computation will take?" (p. 5). Rogers requires "only that a computation terminate after some finite number of steps; we do not insist on an a priori ability to estimate this number." (p. 5).
1968, 1973 Knuth's characterization
Knuth (1968, 1973) has given a list of five properties that are widely accepted as requirements for an algorithm:
Finiteness: "An algorithm must always terminate after a finite number of steps ... a very finite number, a reasonable number"
Definiteness: "Each step of an algorithm must be precisely defined; the actions to be carried out must be rigorously and unambiguously specified for each case"
Input: "...quantities which are given to it initially before the algorithm begins. These inputs are taken from specified sets of objects"
Output: "...quantities which have a specified relation to the inputs"
Effectiveness: "... all of the operations to be performed in the algorithm must be sufficiently basic that they can in principle be done exactly and in a finite length of time by a man using paper and pencil"
Knuth offers as an example the Euclidean algorithm for determining the greatest common divisor of two natural numbers (cf. Knuth Vol. 1 p. 2).
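Knuth's five properties are easy to check against this example. The following Python sketch (an illustration for this article, not Knuth's own MIX rendering; the names and test values here are hypothetical) computes the greatest common divisor by repeated division:

def gcd(m, n):
    # Input: two positive integers m and n (Knuth's "Input").
    # Each pass divides and swaps -- every step is precisely defined
    # ("Definiteness") and elementary ("Effectiveness"), and the remainder
    # strictly decreases, so the loop halts ("Finiteness").
    while n != 0:
        m, n = n, m % n
    return m  # the greatest common divisor (Knuth's "Output")

print(gcd(119, 544))  # prints 17

Knuth's own presentation labels the corresponding steps E1 (find the remainder), E2 (is it zero?), and E3 (reduce), and his "algorithmic analysis" would then ask, for instance, how many division steps the loop performs for given inputs.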
Knuth admits that, while his description of an algorithm may be intuitively clear, it lacks formal rigor, since it is not exactly clear what "precisely defined" means, or "rigorously and unambiguously specified" means, or "sufficiently basic", and so forth. He makes an effort in this direction in his first volume where he defines in detail what he calls the "machine language" for his "mythical MIX...the world's first polyunsaturated computer" (pp. 120ff). Many of the algorithms in his books are written in the MIX language. He also uses tree diagrams, flow diagrams and state diagrams.
"Goodness" of an algorithm, "best" algorithms: Knuth states that "In practice, we not only want algorithms, we want good algorithms...." He suggests that some criteria of an algorithm's goodness are the number of steps to perform the algorithm, its "adaptability to computers, its simplicity and elegance, etc." Given a number of algorithms to perform the same computation, which one is "best"? He calls this sort of inquiry "algorithmic analysis: given an algorithm, to determine its performance characteristcis" (all quotes this paragraph: Knuth Vol. 1 p. 7)
1972 Stone's characterization
Stone (1972) and Knuth (1968, 1973) were professors at Stanford University at the same time so it is not surprising if there are similarities in their definitions (boldface added for emphasis):
"To summarize ... we define an algorithm to be a set of rules that precisely defines a sequence of operations such that each rule is effective and definite and such that the sequence terminates in a finite time." (boldface added, p. 8)
Stone is noteworthy because of his detailed discussion of what constitutes an “effective” rule: his robot, or person-acting-as-robot, must have some information and abilities within them, and if they do not, the information and the abilities must be provided in "the algorithm":
"For people to follow the rules of an algorithm, the rules must be formulated so that they can be followed in a robot-like manner, that is, without the need for thought... however, if the instructions [to solve the quadratic equation, his example] are to be obeyed by someone who knows how to perform arithmetic operations but does not know how to extract a square root, then we must also provide a set of rules for extracting a square root in order to satisfy the definition of algorithm" (p. 4-5)
Furthermore, "...not all instructions are acceptable, because they may require the robot to have abilities beyond those that we consider reasonable.” He gives the example of a robot confronted with the question is “Henry VIII a King of England?” and to print 1 if yes and 0 if no, but the robot has not been previously provided with this information. And worse, if the robot is asked if Aristotle was a King of England and the robot only had been provided with five names, it would not know how to answer. Thus:
“an intuitive definition of an acceptable sequence of instructions is one in which each instruction is precisely defined so that the robot is guaranteed to be able to obey it” (p. 6)
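Stone's square-root point lends itself to a short sketch. In the following Python fragment (an illustration for this article, not Stone's own formulation), the quadratic-formula instructions satisfy his definition of algorithm only because the square-root rules are themselves spelled out in operations the robot is assumed to already have (addition, multiplication, division, comparison):

def square_root(x, tolerance=1e-12):
    # The square-root rules written out explicitly, as Stone requires,
    # using only arithmetic the robot already knows.
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0  # Heron/Newton iteration
    return guess

def solve_quadratic(a, b, c):
    # Roots of a*x^2 + b*x + c = 0, assuming two real roots.
    d = b * b - 4.0 * a * c
    r = square_root(d)
    return (-b + r) / (2.0 * a), (-b - r) / (2.0 * a)

print(solve_quadratic(1.0, -3.0, 2.0))  # roots of x^2 - 3x + 2: (2.0, 1.0)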
After providing us with his definition, Stone introduces the Turing machine model and states that the set of five-tuples that are the machine's instructions are “an algorithm ... known as a Turing machine program” (p. 9). Immediately thereafter he goes on to say that a “computation of a Turing machine is described by stating:
"1. The tape alphabet
"2. The form in which the [input] parameters are presented on the tape
"3. The initial state of the Turing machine
"4. The form in which answers [output] will be represented on the tape when the Turing machine halts
"5. The machine program" (italics added, p. 10)
This precise prescription of what is required for "a computation" is in the spirit of what will follow in the work of Blass and Gurevich.
1995 Soare's characterization
"A computation is a process whereby we proceed from initially given objects, called inputs, according to a fixed set of rules, called a program, procedure, or algorithm, through a series of steps and arrive at the end of these steps with a final result, called the output. The algorithm, as a set of rules proceeding from inputs to output, must be precise and definite with each successive step clearly determined. The concept of computability concerns those objects which may be specified in principle by computations . . ."(italics in original, boldface added p. 3)
2000 Berlinski's characterization
While a student at Princeton in the mid-1960s, David Berlinski was a student of Alonzo Church (cf p. 160). His year-2000 book The Advent of the Algorithm: The 300-year Journey from an Idea to the Computer contains the following definition of algorithm:
"In the logician's voice:
"an algorithm is
a finite procedure,
written in a fixed symbolic vocabulary,
governed by precise instructions,
moving in discrete steps, 1, 2, 3, . . .,
whose execution requires no insight, cleverness,
intuition, intelligence, or perspicuity,
and that sooner or later comes to an end." (boldface and italics in the original, p. xviii)
2000, 2002 Gurevich's characterization
A careful reading of Gurevich 2000 leads one to conclude (infer?) that he believes that "an algorithm" is actually "a Turing machine" or "a pointer machine" doing a computation. An "algorithm" is not just the symbol-table that guides the behavior of the machine, nor is it just one instance of a machine doing a computation given a particular set of input parameters, nor is it a suitably programmed machine with the power off; rather an algorithm is the machine actually doing any computation of which it is capable. Gurevich does not come right out and say this, so as worded above this conclusion (inference?) is certainly open to debate:
" . . . every algorithm can be simulated by a Turing machine . . . a program can be simulated and therefore given a precise meaning by a Turing machine." (p. 1)
" It is often thought that the problem of formalizing the notion of sequential algorithm was solved by Church [1936] and Turing [1936]. For example, according to Savage [1987], an algorithm is a computational process defined by a Turing machine. Church and Turing did not solve the problem of formalizing the notion of sequential algorithm. Instead they gave (different but equivalent) formalizations of the notion of computable function, and there is more to an algorithm than the function it computes. (italics added p. 3)
"Of course, the notions of algorithm and computable function are intimately related: by definition, a computable function is a function computable by an algorithm. . . . (p. 4)
In Blass and Gurevich 2002 the authors invoke a dialog between "Quisani" ("Q") and "Authors" (A), using Yiannis Moschovakis as a foil, where they come right out and flatly state:
"A: To localize the disagreement, let's first mention two points of agreement. First, there are some things that are obviously algorithms by anyone's definition -- Turing machines, sequential-time ASMs [Abstract State Machines], and the like. . . .Second, at the other extreme are specifications that would not be regarded as algorithms under anyone's definition, since they give no indication of how to compute anything . . . The issue is how detailed the information has to be in order to count as an algorithm. . . . Moshovakis allows some things that we would call only declarative specifications, and he would probably use the word "implementation" for things that we call algorithms." (paragraphs joined for ease of readability, 2002:22)
This use of the word "implementation" cuts straight to the heart of the question. Early in the paper, Q states his reading of Moshovakis:
"...[H]e would probably think that your practical work [Gurevich works for Microsoft] forces you to think of implementations more than of algorithms. He is quite willing to identify implementations with machines, but he says that algorithms are something more general. What it boils down to is that you say an algorithm is a machine and Moschovakis says it is not." (2002:3)
But the authors waffle here, saying "[L]et's stick to 'algorithm' and 'machine'", and the reader is left, again, confused. We have to wait until Dershowitz and Gurevich 2007 to get the following footnote comment:
" . . . Nevertheless, if one accepts Moshovakis's point of view, then it is the "implementation" of algorithms that we have set out to characterize."(cf Footnote 9 2007:6)
2003 Blass and Gurevich's characterization
Blass and Gurevich describe their work as evolved from consideration of Turing machines and pointer machines, specifically Kolmogorov-Uspensky machines (KU machines), Schönhage Storage Modification Machines (SMM), and linking automata as defined by Knuth. The works of Gandy and Markov are also described as influential precursors.
Gurevich offers a 'strong' definition of an algorithm (boldface added):
"...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine....In practice, it would be ridiculous...[Nevertheless,] [c]an one generalize Turing machines so that any algorithm, never mind how abstract, can be modeled by a generalized machine?...But suppose such generalized Turing machines exist. What would their states be?...a first-order structure ... a particular small instruction set suffices in all cases ... computation as an evolution of the state ... could be nondeterministic... can interact with their environment ... [could be] parallel and multi-agent ... [could have] dynamic semantics ... [the two underpinings of their work are:] Turing's thesis ...[and] the notion of (first order) structure of [Tarski 1933]" (Gurevich 2000, p. 1-2)
The above phrase computation as an evolution of the state differs markedly from the definition of Knuth and Stone—the "algorithm" as a Turing machine program. Rather, it corresponds to what Turing called the complete configuration (cf Turing's definition in Undecidable, p. 118) -- and includes both the current instruction (state) and the status of the tape. [cf Kleene (1952) p. 375 where he shows an example of a tape with 6 symbols on it—all other squares are blank—and how to Gödelize its combined table-tape status].
In Algorithm examples we see the evolution of the state first-hand.
1995 – Daniel Dennett: evolution as an algorithmic process
Philosopher Daniel Dennett analyses the importance of evolution as an algorithmic process in his 1995 book Darwin's Dangerous Idea. Dennett identifies three key features of an algorithm:
Substrate neutrality: an algorithm relies on its logical structure. Thus, the particular form in which an algorithm is manifested is not important (Dennett's example is long division: it works equally well on paper, on parchment, on a computer screen, or using neon lights or in skywriting). (p. 51)
Underlying mindlessness: no matter how complicated the end-product of the algorithmic process may be, each step in the algorithm is sufficiently simple to be performed by a non-sentient, mechanical device. The algorithm does not require a "brain" to maintain or operate it. "The standard textbook analogy notes that algorithms are recipes of sorts, designed to be followed by novice cooks."(p. 51)
Guaranteed results: If the algorithm is executed correctly, it will always produce the same results. "An algorithm is a foolproof recipe." (p. 51)
It is on the basis of this analysis that Dennett concludes that "According to Darwin, evolution is an algorithmic process" (p. 60).
However, on the previous page he has gone out on a much further limb. In the context of his chapter titled "Processes as Algorithms", he states:
"But then . . are there any limits at all on what may be considered an algorithmic process? I guess the answer is NO; if you wanted to, you can treat any process at the abstract level as an algorithmic process. . . If what strikes you as puzzling is the uniformity of the [ocean's] sand grains or the strength of the [tempered-steel] blade, an algorithmic explanation is what will satisfy your curiosity -- and it will be the truth. . . .
"No matter how impressive the products of an algorithm, the underlying process always consists of nothing but a set of mindless steps succeeding each other without the help of any intelligent supervision; they are 'automatic' by definition: the workings of an automaton." (p. 59)
It is unclear from the above whether Dennett is stating that the physical world by itself and without observers is intrinsically algorithmic (computational) or whether a symbol-processing observer is what is adding "meaning" to the observations.
2002 John Searle adds a clarifying caveat to Dennett's characterization
Daniel Dennett is a proponent of strong artificial intelligence: the idea that the logical structure of an algorithm is sufficient to explain mind. John Searle, the creator of the Chinese room thought experiment, claims that "syntax [that is, logical structure] is by itself not sufficient for semantic content [that is, meaning]". In other words, the "meaning" of symbols is relative to the mind that is using them; an algorithm—a logical construct—by itself is insufficient for a mind.
Searle cautions those who claim that algorithmic (computational) processes are intrinsic to nature (for example, cosmologists, physicists, chemists, etc.).
2002: Boolos-Burgess-Jeffrey specification of Turing machine calculation
For examples of this specification-method applied to the addition algorithm "m+n" see Algorithm examples.
An example in Boolos-Burgess-Jeffrey (2002) (pp. 31–32) demonstrates the precision required in a complete specification of an algorithm, in this case to add two numbers: m+n. It is similar to the Stone requirements above.
(i) They have discussed the role of "number format" in the computation and selected the "tally notation" to represent numbers:
"Certainly computation can be harder in practice with some notations than others... But... it is possible in principle to do in any other notation, simply by translating the data... For purposes of framing a rigorously defined notion of computability, it is convenient to use monadic or tally notation" (p. 25-26)
(ii) At the outset of their example they specify the machine to be used in the computation as a Turing machine. They have previously specified (p. 26) that the Turing-machine will be of the 4-tuple, rather than 5-tuple, variety. For more on this convention see Turing machine.
(iii) Previously the authors have specified that the tape-head's position will be indicated by a subscript to the right of the scanned symbol. For more on this convention see Turing machine. (In the following, boldface is added for emphasis):
"We have not given an official definition of what it is for a numerical function to be computable by a Turing machine, specifying how inputs or arguments are to be represented on the machine, and how outputs or values represented. Our specifications for a k-place function from positive integers to positive integers are as follows:
"(a) [Initial number format:] The arguments m1, ... mk, ... will be represented in monadic [unary] notation by blocks of those numbers of strokes, each block separated from the next by a single blank, on an otherwise blank tape.
Example: 3+2, 111B11
"(b) [Initial head location, initial state:] Initially, the machine will be scanning the leftmost 1 on the tape, and will be in its initial state, state 1.
Example: 3+2, 1₁11B11
"(c) [Successful computation -- number format at Halt:] If the function to be computed assigns a value n to the arguments that are represented initially on the tape, then the machine will eventually halt on a tape containing a block of strokes, and otherwise blank...
Example: 3+2, 11111
"(d) [Successful computation -- head location at Halt:] In this case [c] the machine will halt scanning the left-most 1 on the tape...
Example: 3+2, 1ₙ1111
"(e) [Unsuccessful computation -- failure to Halt or Halt with non-standard number format:] If the function that is to be computed assigns no value to the arguments that are represented initially on the tape, then the machine either will never halt, or will halt in some nonstandard configuration..."(ibid)
Example: Bₙ11111 or B11ₙ111 or B11111ₙ
This specification is incomplete: it also requires specifying where the instructions are to be placed and in what format --
(iv) in the finite-state machine's TABLE or, in the case of a Universal Turing machine, on the tape, and
(v) the Table of instructions in a specified format
This latter point is important. Boolos-Burgess-Jeffrey give a demonstration (p. 36) that the predictability of the entries in the table allows one to "shrink" the table by putting the entries in sequence and omitting the input state and symbol. Indeed, the example Turing machine computation required only four columns, presented to the machine as rows.
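The flavor of such a complete specification can be conveyed by a short simulation. The following Python sketch (an illustration for this article, not from the book; it uses the more common quintuple convention rather than Boolos-Burgess-Jeffrey's quadruples) runs a Turing machine that adds m + n in tally notation per items (a)-(d) above: it erases the leftmost stroke, fills the blank separating the two blocks, and halts scanning the leftmost stroke of the answer:

def run_turing_machine(program, tape, position=0, state="q1"):
    # program maps (state, scanned symbol) -> (symbol to write, move, next state);
    # "B" denotes a blank square.
    cells = dict(enumerate(tape))
    while state != "halt":
        write, move, state = program[(state, cells.get(position, "B"))]
        cells[position] = write
        position += 1 if move == "R" else -1
    tape_out = "".join(cells.get(i, "B") for i in range(min(cells), max(cells) + 1))
    return tape_out, position

ADD = {
    ("q1", "1"): ("B", "R", "q2"),   # erase the leftmost stroke
    ("q2", "1"): ("1", "R", "q2"),   # run right through the first block
    ("q2", "B"): ("1", "L", "q3"),   # fill the separating blank
    ("q3", "1"): ("1", "L", "q3"),   # run back left
    ("q3", "B"): ("B", "R", "halt"), # step onto the leftmost stroke and halt
}

print(run_turing_machine(ADD, "111B11"))  # ('B11111', 1): five strokes, head on the leftmost 1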
2006: Sipser's assertion and his three levels of description
For examples of this specification-method applied to the addition algorithm "m+n" see Algorithm examples.
Sipser begins by defining "algorithm" as follows:
"Informally speaking, an algorithm is a collection of simple instructions for carrying out some task. Commonplace in everyday life, algorithms sometimes are called procedures or recipes (italics in original, p. 154)
"...our real focus from now on is on algorithms. That is, the Turing machine merely serves as a precise model for the definition of algorithm .... we need only to be comfortable enough with Turing machines to believe that they capture all algorithms" ( p. 156)
Does Sipser mean that "algorithm" is just "instructions" for a Turing machine, or the combination of "instructions + a (specific variety of) Turing machine"? For example, he defines the two standard variants (multi-tape and non-deterministic) of his particular variant (not the same as Turing's original) and goes on, in his Problems (pages 160–161), to describe four more variants (write-once, doubly infinite tape (i.e. left- and right-infinite), left reset, and "stay put" instead of left). In addition, he imposes some constraints: first, the input must be encoded as a string (p. 157), and of numeric encodings in the context of complexity theory he says:
"But note that unary notation for encoding numbers (as in the number 17 encoded by the unary number 11111111111111111) isn't reasonable because it is exponentially larger than truly reasonable encodings, such as base k notation for any k ≥ 2." (p. 259)
Van Emde Boas comments on a similar problem with respect to the random-access machine (RAM) abstract model of computation sometimes used in place of the Turing machine when doing "analysis of algorithms":
"The absence or presence of multiplicative and parallel bit manipulation operations is of relevance for the correct understanding of some results in the analysis of algorithms.
". . . [T]here hardly exists such as a thing as an "innocent" extension of the standard RAM model in the uniform time measures; either one only has additive arithmetic or one might as well include all reasonable multiplicative and/or bitwise Boolean instructions on small operands." (Van Emde Boas, 1990:26)
With regard to a "description language" for algorithms Sipser finishes the job that Stone and Boolos-Burgess-Jeffrey started (boldface added). He offers us three levels of description of Turing machine algorithms (p. 157):
High-level description: "wherein we use ... prose to describe an algorithm, ignoring the implementation details. At this level we do not need to mention how the machine manages its tape or head."
Implementation description: "in which we use ... prose to describe the way that the Turing machine moves its head and the way that it stores data on its tape. At this level we do not give details of states or transition function."
Formal description: "... the lowest, most detailed, level of description... that spells out in full the Turing machine's states, transition function, and so on."
2011: Yanofsky
In Yanofsky (2011) an algorithm is defined to be the set of programs that implement that algorithm: the set of all programs is partitioned into equivalence classes. Although the set of programs does not form a category, the set of algorithms forms a category with extra structure. The conditions that describe when two programs are equivalent turn out to be coherence relations which give the extra structure to the category of algorithms.
2024: Seiller
In Seiller (2024) an algorithm is defined as an edge-labelled graph, together with an interpretation of labels as maps in an abstract data structure. This definition is given together with a formal definition of programs (and models of computation), making it possible to formally define the notion of implementation, that is, when a program implements an algorithm. The notion of algorithm thus obtained avoids some known issues, and is understood as a specification of some kind. In particular, a given program can (and in fact always does) implement several algorithms. Another important feature of the approach is that it takes into account the fact that a given algorithm can be implemented in different (and possibly unrelated) computational models.
Notes
References
David Berlinski (2000), The Advent of the Algorithm: The 300-Year Journey from an Idea to the Computer, Harcourt, Inc., San Diego, (pbk.)
George Boolos, John P. Burgess, Richard Jeffrey (2002), Computability and Logic: Fourth Edition, Cambridge University Press, Cambridge, UK. (pbk).
Andreas Blass and Yuri Gurevich (2003), Algorithms: A Quest for Absolute Definitions, Bulletin of European Association for Theoretical Computer Science 81, 2003. Includes an excellent bibliography of 56 references.
Burgin, M. Super-recursive algorithms, Monographs in computer science, Springer, 2005.
Martin Davis, ed. (1965), The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, Raven Press, New York. A source of important definitions and some Turing machine-based algorithms for a few recursive functions. Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included.
Gandy, Robin, Church's Thesis and principles for Mechanisms, in J. Barwise, H. J. Keisler and K. Kunen, eds., The Kleene Symposium, North-Holland Publishing Company 1980) pp. 123–148. Gandy's famous "4 principles of [computational] mechanisms" includes "Principle IV -- The Principle of Local Causality".
Gurevich, Yuri, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pages 77–111. Includes bibliography of 33 sources.
Kleene, Stephen C. (1943), "Recursive Predicates and Quantifiers", Transactions of the American Mathematical Society, vol. 53, pp. 41–73. Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis" (Kleene 1952:317) (i.e., the Church Thesis).
Kleene, Stephen C. (1952), Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam. Excellent — accessible, readable — reference source for mathematical "foundations".
Knuth, Donald E. (1968, 2nd ed. 1973), The Art of Computer Programming, Volume 1: Fundamental Algorithms, Addison-Wesley, Reading, MA. The first of Knuth's famous series of three texts.
Lewis, H.R. and Papadimitriou, C.H. Elements of the Theory of Computation, Prentice-Hall, Upper Saddle River, N.J., 1998
Markov, A. A. (1954) Theory of algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e. Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS 60-51085.]
Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Prentice-Hall, Englewood Cliffs, NJ. Minsky expands his "...idea of an algorithm — an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines.
Rogers, Hartley Jr, (1967), Theory of Recursive Functions and Effective Computability, MIT Press (1987), Cambridge MA, (pbk.)
Seiller, Thomas (2024), Mathematical Informatics, Habilitation thesis, Université Sorbonne Paris Nord.
Sipser, Michael (2006), Introduction to the Theory of Computation: Second Edition, Thompson Course Technology div. of Thompson Learning, Inc., Boston, MA.
Soare, Robert (1995), "Computability and Recursion", presented at the 10th International Congress of Logic, Methodology, and Philosophy of Science, August 19–25, 1995, Florence, Italy.
Ian Stewart, Algorithm, Encyclopædia Britannica 2006.
Stone, Harold S. (1972), Introduction to Computer Organization and Data Structures, McGraw-Hill, New York. Cf in particular the first chapter titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4).
van Emde Boas, Peter (1990), "Machine Models and Simulations" pp 3–66, appearing in Jan van Leeuwen (1990), Handbook of Theoretical Computer Science. Volume A: Algorithms & Complexity, The MIT Press/Elsevier, 1990, (Volume A)
Computability theory
Models of computation
Formal methods
Algorithms | Algorithm characterizations | [
"Mathematics",
"Engineering"
] | 12,584 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Software engineering",
"Computability theory",
"Formal methods"
] |
6,902,101 | https://en.wikipedia.org/wiki/Billings%20Refinery%20%28Phillips%2066%29 | The Billings Refinery is an oil refinery located in Billings, Montana. The refinery is currently owned and operated by Phillips 66. Completed in 1947, the refinery covers . It is capable of producing 450 million gallons of gasoline per year.
See also
List of oil refineries
References
External links
Phillips 66 website
Buildings and structures in Billings, Montana
Energy infrastructure completed in 1947
Energy infrastructure in Montana
Oil refineries in the United States
Phillips 66
1947 establishments in Montana | Billings Refinery (Phillips 66) | [
"Chemistry"
] | 91 | [
"Petroleum",
"Petroleum stubs"
] |
6,902,178 | https://en.wikipedia.org/wiki/Arcturus%20moving%20group | In astronomy, the Arcturus moving group or Arcturus stream is a moving group or stellar stream, discovered by Olin J. Eggen (1971), comprising 53 stars moving at 275,000 miles per hour, which includes the nearby bright star Arcturus. It comprises many stars which share similar proper motion and so appear to be physically associated.
This group of stars is not in the plane of the Milky Way galaxy, and has been proposed as a remnant of an ancient dwarf satellite galaxy, long since disrupted and assimilated into the Milky Way. It consists of old stars deficient in heavy elements. However, Bensby and colleagues, in analysing chemical composition of F and G dwarf stars in the solar neighbourhood, found there was no difference in chemical makeup of stars from the stream, suggesting an intragalactic rather than extragalactic origin. One possibility is that the stream appeared in a manner similar to the Hercules group, which is hypothesized to have formed due to Outer Lindblad Resonance with the Galactic bar. However, it is unclear how this could produce an overdensity of stars in the thick disk.
Research from the RAdial Velocity Experiment (RAVE) at the Australian Astronomical Observatory, headed by Quentin Parker, was the first to quantify the nature of the group, though astronomers had known of its existence since its discovery in 1971.
Other members include the red giant Kappa Gruis and the M-class stars 27 Cancri, Alpha Vulpeculae and RT Hydrae.
See also
List of stellar streams
References
External links
Stellar streams
Boötes
Milky Way | Arcturus moving group | [
"Astronomy"
] | 331 | [
"Boötes",
"Constellations"
] |
6,903,437 | https://en.wikipedia.org/wiki/Temporal%20analysis%20of%20products | Temporal Analysis of Products (TAP), (TAP-2), (TAP-3) is an experimental technique for studying
the kinetics of physico-chemical interactions
between gases and complex solid materials, primarily heterogeneous catalysts.
The TAP methodology is based on short pulse-response experiments at low background pressure (10−6-102 Pa),
which are used to probe different steps in a catalytic process on the surface of a
porous material including diffusion, adsorption,
surface reactions, and desorption.
History
Since its invention by Dr. John T. Gleaves (then at Monsanto Company) in the late 1980s, TAP has been used to study a variety of industrially and academically relevant catalytic reactions, bridging the gap between surface science experiments and applied catalysis. The state-of-the-art TAP installations (TAP-3) not only provide a better signal-to-noise ratio than the first-generation TAP machines (TAP-1), but also allow for advanced automation and direct coupling with other techniques.
Hardware
A TAP instrument consists of a heated packed-bed microreactor connected to a high-throughput vacuum system, a pulsing manifold with fast electromagnetically-driven gas injectors, and a Quadrupole Mass Spectrometer (QMS) located in the vacuum system below the microreactor outlet.
Experiments
In a typical TAP pulse-response experiment, very small (~10⁻⁹ mol) and narrow (~100 μs) gas pulses are introduced into the evacuated (~10⁻⁶ torr) microreactor containing a catalytic sample. While the injected gas molecules traverse the microreactor packing through the interstitial voids, they encounter the catalyst on which they may undergo chemical transformations. Unconverted and newly formed gas molecules eventually reach the reactor's outlet and escape into an adjacent vacuum chamber, where they are detected with millisecond time resolution by the QMS. The exit-flow rates of reactants, products and inert molecules recorded by the QMS are then used to quantify catalytic properties and deduce reaction mechanisms. The same TAP instrument can typically accommodate other types of kinetic measurements, including atmospheric-pressure flow experiments (10⁵ Pa), Temperature-Programmed Desorption (TPD), and Steady-State Isotopic Transient Kinetic Analysis (SSITKA).
Data analysis
The general methodology of TAP data analysis, developed in a series of papers by Grigoriy (Gregory) Yablonsky, is based on comparing an inert gas response, which is controlled only by Knudsen diffusion, with a reactive gas response, which is controlled by diffusion as well as adsorption and chemical reactions on the catalyst sample. TAP pulse-response experiments can be effectively modeled by a one-dimensional (1D) diffusion equation with a uniquely simple combination of boundary conditions.
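A sketch of that model (notation assumed here, following conventions common in the TAP literature rather than any single paper): writing C for the gas concentration, εb for the bed porosity, De for the effective Knudsen diffusivity, A for the bed cross-section, L for its length, and Np for the number of molecules per pulse,

\varepsilon_b \frac{\partial C}{\partial t} = D_e \frac{\partial^2 C}{\partial x^2},
\qquad C(x,0) = \frac{N_p}{\varepsilon_b A}\,\delta(x),
\qquad C(L,t) = 0,

with the measured exit flow F(t) = -D_e A \, \partial C/\partial x |_{x=L}. For a reactive gas, an uptake term (for example a first-order -kC) is added to the right-hand side, and comparison with the inert-gas response isolates the kinetic contribution.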
References
Inorganic reactions | Temporal analysis of products | [
"Chemistry"
] | 587 | [
"Inorganic reactions"
] |
6,903,459 | https://en.wikipedia.org/wiki/Contractor%20ratings | Contractor rating systems, also known as contractor prequalifications, are one of the larger cost-saving practices available and more routinely applied by governmental organizations as a means of avoiding the high cost and inflated pricing that results from reduced competition on public work by using bonding and surety to guarantee performance of public work.
Years ago, public purchasing officials began applying prequalification and short-listing of pre-selected contractors for bidding on public procurement contracts. This subjective process is in many places the exclusive means of getting on a bidders list for public contract work.
These ratings and processes arguably make bonding and surety (used since the late 19th century to guarantee performance, at the cost of large premiums) obsolete and redundant, since public officials have already reduced risk, and already pay the premium of reduced competition, by using the prequalification process and rating systems.
References
Construction | Contractor ratings | [
"Engineering"
] | 182 | [
"Construction"
] |
6,903,593 | https://en.wikipedia.org/wiki/Maritime%20simulator | A maritime simulator or ship simulator is a system that simulates ships and maritime environments for training, research and other purposes. Today, simulator training given by maritime schools and academies is part of the basic training of maritime professionals.
At minimum, a maritime simulator consists of software that realistically simulates the dynamic behavior of a vessel and its systems in a simulated maritime environment, and an interface that allows the person using the simulator to control the vessel and interact with its simulated surroundings. In the case of so-called full mission bridge simulators, this interface consists of a realistic mock-up of the vessel's bridge and control consoles, and screens or projectors providing up to a 360-degree virtual view of the ship's surroundings, similar to flight simulators in the aviation industry. Without the real-time visualization, the simulation software can also be used for "fast time" simulations in which the vessels are controlled by autopilot. In addition, there are maritime simulators for operations such as ECDIS, engine room, and cargo handling, as well as shore-side operations such as Vessel Traffic Service (VTS).
Maritime simulation games such as Ship Simulator and Virtual Sailor are also available for home users.
See also
Simulation
References
Virtual reality
Training ships
Maritime education | Maritime simulator | [
"Physics"
] | 253 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
6,903,943 | https://en.wikipedia.org/wiki/Deoxyguanosine%20triphosphate | Deoxyguanosine triphosphate (dGTP) is a nucleoside triphosphate, and a nucleotide precursor used in cells for DNA synthesis. The substance is used in the polymerase chain reaction technique, in sequencing, and in cloning. It is also the competitor of inhibition onset by acyclovir in the treatment of HSV virus.
References
Nucleotides
Phosphate esters | Deoxyguanosine triphosphate | [
"Chemistry"
] | 89 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
6,904,103 | https://en.wikipedia.org/wiki/ZINC%20database | The ZINC database (recursive acronym: ZINC is not commercial) is a curated collection of commercially available chemical compounds prepared especially for virtual screening. ZINC is used by investigators (generally people with training as biologists or chemists) in pharmaceutical companies, biotechnology companies, and research universities.
Scope and access
ZINC differs from other chemical databases in that it aims to represent the biologically relevant, three-dimensional form of each molecule.
Curation and updates
ZINC is updated regularly and may be downloaded and used free of charge. It is developed by John Irwin in the Shoichet Laboratory in the Department of Pharmaceutical Chemistry at the University of California, San Francisco.
Version
The latest release of the website interface is "ZINC-22". The database is continuously updated and is claimed to contain over 37 billion commercially available molecules.
Uses
The database is typically used for molecule mining, a process in which Quantitative structure–activity relationships are used to find new compounds with improved biological activity, given a known starting point found, for example, by high-throughput screening.
See also
PubChem a database of small molecules from the chemical and biological literature, hosted by NCBI
ChEMBL, a database of information about medicinal chemistry and biological activities of small molecules.
References
External links
ZINC database
Chemical databases
Biological databases | ZINC database | [
"Chemistry",
"Biology"
] | 258 | [
"Bioinformatics",
"Chemical databases",
"Biological databases"
] |
6,904,158 | https://en.wikipedia.org/wiki/Space%20Power%20Facility | Space Power Facility (SPF) is a NASA facility used to test spaceflight hardware under simulated launch and spaceflight conditions. The SPF is part of NASA's Neil A. Armstrong Test Facility, which in turn is part of the Glenn Research Center. The Neil A. Armstrong Test Facility and the SPF are located near Sandusky, Ohio (Oxford Township, Erie County, Ohio).
The SPF is able to simulate a spacecraft's launch environment, as well as in-space environments. NASA has developed these capabilities under one roof to optimize testing of spaceflight hardware while minimizing transportation issues. Space Power Facility has become a "One Stop Shop" to qualify flight hardware for crewed space flight. This facility provides the capability to perform the following environmental testing:
Thermal-vacuum testing
Reverberation acoustic testing
Mechanical vibration testing
Modal testing
Electromagnetic interference and compatibility testing
Thermal-vacuum test chamber
This is a vacuum chamber built by NASA in 1969. It stands high and in diameter, enclosing a bullet-shaped space. It is the world's largest thermal vacuum chamber. It was originally commissioned for nuclear-electric power studies under vacuum conditions, but was later decommissioned. It was subsequently recommissioned for use in testing spacecraft propulsion systems. Recent uses include testing the airbag landing systems for the Mars Pathfinder and the Mars Exploration Rovers Spirit and Opportunity, under simulated Mars atmospheric conditions.
The facility was designed and constructed to test both nuclear and non-nuclear space hardware in a simulated low-Earth-orbiting environment. Although the facility was designed for testing nuclear hardware, only non-nuclear tests have been performed throughout its history. Test programs performed at the facility include high-energy experiments, rocket-fairing separation tests, Mars Lander system tests, deployable solar sail tests, and International Space Station hardware tests. The facility can sustain a high vacuum (10⁻⁶ torr, 130 μPa), and simulate solar radiation via a 4 MW quartz heat lamp array, solar spectrum by a 400 kW arc lamp, and cold environments with a variable geometry cryogenic cold shroud.
The facility is available on a full-cost reimbursable basis to government, universities, and the private sector.
Aluminum test chamber
The aluminum test chamber is a vacuum-tight aluminum plate vessel that is in diameter and high. Designed for an external pressure of and internal pressure of , the chamber is constructed of Type 5083 aluminum which is clad on the interior surface with a thick type 3003 aluminum for corrosion resistance. This material was selected because of its low neutron absorption cross-section. The floor plate and vertical shell are (total) thick, while the dome shell is . Welded circumferentially to the exterior surface are aluminum structural T-section members that are deep and wide. The doors of the test chamber are in size and have double door seals to prevent leakage. The chamber floor was designed for a load of 300 tons.
Concrete chamber enclosure
The concrete chamber enclosure serves not only as a radiological shield but also as a primary vacuum barrier from atmospheric pressure. in diameter and in height, the chamber was designed to withstand atmospheric pressure outside of the chamber at the same time vacuum conditions are occurring within. The concrete thickness varies from and contains a leak-tight steel containment barrier embedded within. The chamber's doors are and have inflatable seals. The space between the concrete enclosure and the aluminum test chamber is pumped down to a pressure of during a test.
Brian Cox of the BBC's Human Universe filmed a rock and feather drop episode at the Space Power Facility.
Electromagnetic interference/compatibility (EMI/EMC) functionality
Although the SPF interior aluminum vacuum chamber was designed specifically as a large-scale thermal-vacuum test chamber for qualification testing of vehicles and equipment in outer-space conditions, it was discovered in the late 2000s that its unique construction also makes it an extremely large and electrically complex microwave or radio frequency cavity with excellent reverberant electromagnetic characteristics. In 2009 these characteristics were measured by the National Institute of Standards and Technology and others, after which the facility was understood to be not only the world's largest vacuum chamber, but also the world's largest EMI/EMC test facility. In 2011, the Glenn Research Center successfully performed a calibration of the aluminum vacuum chamber using IEC 61000-4-21 methodologies. As a result of these activities, the SPF can perform radiated susceptibility EMI tests for vehicles and equipment per MIL-STD-461, and can achieve MIL-STD-461F limits above approximately 80 MHz. In the spring of 2017 the low-power characterizations and calibrations from 2009 and 2011 were proven correct in a series of high-power tests performed in the chamber to validate its capabilities. The SPF chamber is currently being prepared for EMI radiated susceptibility testing of the Orion spacecraft crew module for NASA's Artemis 1 mission.
Reverberant Acoustic Test Facility
The Reverberant Acoustic Test Facility has 36 nitrogen-driven horns to simulate the high noise levels that are experienced during a space vehicle launch and supersonic ascent conditions. The RATF is capable of an overall sound pressure level of 163 dB within a chamber.
Mechanical Vibration Test Facility
The Mechanical Vibration Test Facility (MVF) is a three-axis vibration system. It will apply vibration in each of the three orthogonal axes (not simultaneously) with one direction in parallel to the Earth-launch thrust axis (X) at 5–150 Hz, 0-1.25 g-pk vertical, and 5–150 Hz 0-1.0 g-pk for the horizontal axes.
Vertical, or the thrust axis, shaking is accomplished by using 16 vertical actuators manufactured by TEAM Corporation, each capable of . The 16 vertical actuators allow for testing of up to a article at the previously stated frequency and amplitude limits.
Horizontal shaking is accomplished by four TEAM Corporation Horizontal Actuators. The horizontal actuators are used during vertical testing to counteract cross axis forces and overturning moments.
Modal test facility
In addition to the sine vibe table, a fixed-base modal floor sufficient for the diameter test article is available. The fixed-base modal test facility is a thick steel floor on top of of concrete, that is tied to the earth using deep tensioned rock anchors.
There were over of rock anchors, and of concrete used in the construction of the fixed-base modal test facility and mechanical vibration test facility.
Assembly area
The SPF layout is ideal for performing multiple test programs. The facility has two large high bay areas adjacent to either side of the vacuum chamber. The advantage of having both areas available is that it allows for two complex tests to be prepared simultaneously. One can be prepared in a high bay while another is being conducted in the vacuum chamber. Large chamber doors provide access to the test chamber from either high bay.
References
External links
Neil Armstrong Test Facility - official NASA website
Skylab Shroud in Plum Brook Space Power Facility
NASA image gallery, featuring the SPF
Detailed facility capabilities
"Space Power Facility Construction" at Youtube
Aerospace engineering
Glenn Research Center
NASA facilities
Buildings and structures in Erie County, Ohio | Space Power Facility | [
"Engineering"
] | 1,468 | [
"Aerospace engineering"
] |
6,904,406 | https://en.wikipedia.org/wiki/MHC%20restriction | MHC-restricted antigen recognition, or MHC restriction, refers to the fact that a T cell can interact with a self-major histocompatibility complex molecule and a foreign peptide bound to it, but will only respond to the antigen when it is bound to a particular MHC molecule.
When foreign proteins enter a cell, they are broken into smaller pieces called peptides. These peptides, also known as antigens, can derive from pathogens such as viruses or intracellular bacteria. Foreign peptides are brought to the surface of the cell and presented to T cells by proteins called the major histocompatibility complex (MHC). During T cell development, T cells go through a selection process in the thymus to ensure that the T cell receptor (TCR) will not recognize MHC molecules presenting self-antigens, i.e. that its affinity is not too high. High affinity means it will be autoreactive, while no affinity means it will not bind strongly enough to the MHC. The selection process results in developed T cells with specific TCRs that might only respond to certain MHC molecules but not others. The fact that the TCR will recognize only some MHC molecules but not others contributes to "MHC restriction". The biological function of MHC restriction is to prevent the generation of superfluous wandering lymphocytes, saving energy and cell-building materials.
T cells are a type of lymphocyte that plays a central role in the immune system by activating other immune cells. T cells recognize foreign peptides through T cell receptors (TCRs) on their surface, and then perform different roles, depending on the type of T cell, to defend the host from the foreign peptide, which may have come from pathogens such as bacteria, viruses or parasites. By enforcing the restriction that T cells are activated by peptide antigens only when the antigens are bound to self-MHC molecules, MHC restriction adds another dimension to the specificity of T cell receptors, so that an antigen is recognized only as a peptide-MHC complex.
MHC restriction in T cells occurs during their development in the thymus, specifically during positive selection. Only the thymocytes (developing T cells in the thymus) that are capable of binding, with an appropriate affinity, to MHC molecules receive a survival signal and go on to the next level of selection. MHC restriction is significant for T cells to function properly when they leave the thymus, because it allows T cell receptors to bind to MHC and detect cells that are infected by intracellular pathogens, express viral proteins, or bear genetic defects. Two models explaining how restriction arose are the germline model and the selection model.
The germline model suggests that MHC restriction is a result of evolutionary pressure favoring T cell receptors that are capable of binding to MHC. The selection model suggests that not all T cell receptors show MHC restriction; however, only the T cell receptors with MHC restriction are expressed after thymus selection. In fact, both hypotheses are reflected in the determination of TCR restriction: germline-encoded interactions between TCR and MHC, and co-receptor interactions with CD4 or CD8 that signal T cell maturation, both occur during selection.
Introduction
The TCRs of T cells recognize linear peptide antigens only if coupled with a MHC molecule. In other words, the ligands of TCRs are specific peptide-MHC complexes. MHC restriction is particularly important for self-tolerance, which makes sure that the immune system does not target self-antigens. When primary lymphocytes are developing and differentiating in the thymus or bone marrow, T cells die by apoptosis if they express high affinity for self-antigens presented by an MHC molecule or express too low an affinity for self MHC.
T cell maturation involves two distinct developmental stages: positive selection and negative selection. Positive selection ensures that any T cells with a high enough affinity for MHC-bound peptide survive and go on to negative selection, while negative selection induces death in T cells that bind self-peptide-MHC complexes too strongly. Ultimately, the T cells differentiate and mature to become either T helper cells or T cytotoxic cells. At this point the T cells leave the primary lymphoid organ and enter the bloodstream.
The interaction between TCRs and peptide-MHC complexes is significant in maintaining the immune system against foreign antigens. MHC restriction allows TCRs to detect host cells that are infected by pathogens, contain non-self proteins, or bear foreign DNA. However, MHC restriction is also responsible for chronic autoimmune diseases and hypersensitivity.
Structural specificity
The peptide-MHC complex presents a surface that looks like an altered self to the TCR. The surface consisting of two α helices from the MHC and a bound peptide sequence is projected away from the host cell to the T cells, whose TCRs are projected away from the T cells towards the host cells. In contrast with T cell receptors which recognize linear peptide epitopes, B cell receptors recognize a variety of conformational epitopes (including peptide, carbohydrate, lipid and DNA) with specific three-dimensional structures.
Imposition
The imposition of MHC restriction on the highly variable TCR has caused heated debate. Two models have been proposed to explain it. The germline model proposes that MHC restriction is hard-wired in the TCR germline sequence, owing to the co-evolution of TCR and MHC to interact with each other. The selection model suggests that MHC restriction is not a hard-wired property of the germline sequences of TCRs, but is imposed on them by the CD4 and CD8 co-receptors during positive selection. The relative importance of the two models is not yet determined.
Germline model
The germline hypothesis suggests that the ability to bind MHC is intrinsic and encoded within the germline DNA coding for TCRs, because evolutionary pressure selects for TCRs that are capable of binding to MHC and against those that are not. Since the emergence of TCR and MHC roughly 500 million years ago, there has been ample opportunity for TCR and MHC to coevolve to recognize each other. It is therefore proposed that evolutionary pressure would lead to conserved amino acid sequences at the regions of TCRs that contact MHC.
Evidence from X-ray crystallography has shown comparable binding topologies between various TCR and MHC-peptide complexes. In addition, conserved interactions between TCR and specific MHCs support the hypothesis that MHC restriction is related to the co-evolution of TCR and MHC to some extent.
Selection model
The selection hypothesis argues that instead of being an intrinsic property, MHC restriction is imposed on the T cells during positive thymic selection after random TCRs are produced. According to this model, T cells are capable of recognizing a variety of peptide epitopes independent of MHC molecules before undergoing thymic selection. During thymic selection, only the T cells with affinity to MHC are signaled to survive after the CD4 or CD8 co-receptors also bind to the MHC molecule. This is called positive selection.
During positive selection, the co-receptors CD4 and CD8 initiate a signaling cascade following MHC binding. This involves the recruitment of Lck, a tyrosine kinase essential for T cell maturation that is associated with the cytoplasmic tails of the CD4 and CD8 co-receptors. The selection model argues that Lck is directed to TCRs by the CD4 and CD8 co-receptors when they recognize MHC molecules. Since a TCR interacts with Lck most effectively when it binds the same MHC molecule as a co-receptor, forming a ternary complex, only T cells whose TCRs engage co-receptor-bound MHC can activate the Lck kinase and receive a survival signal.
Supporting this argument, genetically modified T cells without CD4 and CD8 co-receptors express MHC-independent TCRs. It follows that MHC restriction is imposed by CD4 and CD8 co-receptors during positive selection of T cell selection.
Reconciliation
A reconciliation of the two models was offered later on suggesting that both co-receptor and germline predisposition to MHC binding play significant roles in imposing MHC restriction. Since only those T cells that are capable of binding to MHCs are selected for during positive selection in the thymus, to some extent evolutionary pressure selects for germline TCR sequences that bind MHC molecules. On the other hand, as suggested by the selection model, T cell maturation requires the TCRs to bind to the same MHC molecules as the CD4 or CD8 co-receptor during T cell selection, thus imposing MHC restriction.
References
External links
Immune system | MHC restriction | [
"Biology"
] | 1,848 | [
"Immune system",
"Organ systems"
] |
6,904,597 | https://en.wikipedia.org/wiki/Static%20wick | Static wicks, also called static dischargers or static discharge wicks, are devices used to remove static electricity from aircraft in flight. They take the form of small sticks pointing backwards from the wings, and are fitted on almost all civilian aircraft.
Function
Precipitation static is an electrical charge on an airplane caused by flying through rain, snow, ice, or dust particles. Charge also accumulates through friction between the aircraft hull and the air. When the aircraft charge is great enough, it discharges into the surrounding air. Without static dischargers, the charge discharges in large batches through pointed aircraft extremities, such as antennas, wing tips, vertical and horizontal stabilizers, and other protrusions. The discharge creates broadband radio-frequency noise from DC to 1000 MHz, which can affect aircraft communication.
To control this discharge, so as to allow the continuous operation of navigation and radio communication systems, static wicks are installed on the trailing edges of aircraft, including those of the (electrically grounded) ailerons, elevators and rudder, and on the wing, horizontal and vertical stabilizer tips. Static wicks are high-resistance (6–200 megohm) devices with a lower corona voltage and sharper points than the surrounding aircraft structure. This means that the corona discharge into the atmosphere flows through them, and occurs gradually.
Static wicks are not lightning arresters and do not affect the likelihood of an aircraft being struck by lightning. They will not function if they are not properly bonded to the aircraft. There must be a conductive path from all parts of the airplane to the dischargers, otherwise they will be useless. Access panels, doors, cowls, navigation lights, antenna mounting hardware, control surfaces, etc., can create static noise if they cannot discharge through the static wick.
History
The first static wicks were developed by a joint Army-Navy team led by Dr. Ross Gunn of the Naval Research Laboratory and fitted onto military aircraft during World War II. They were shown to be effective even in extreme weather conditions in 1946 by a United States Army Air Corps team led by Capt. Ernest Lynn Cleveland.
Dayton Granger, an inventor from Florida, received a patent on static wicks in 1950.
See also
Pan Am Flight 214
Precipitation (meteorology)
Electrostatic discharge
Triboelectric effect
Ground loop (electricity)
References
Electrical engineering
Electrodes | Static wick | [
"Chemistry",
"Engineering"
] | 478 | [
"Electrical engineering",
"Electrochemistry",
"Electrodes"
] |
6,904,737 | https://en.wikipedia.org/wiki/Rachitomi | The Rachitomi were a group of extinct Palaeozoic labyrinthodont amphibians, according to an earlier classification system. They are defined by the structure of the vertebrae, having large semi-circular intercentra below the notochord and smaller paired though prominent pleurocentra on each side above and behind, forming anchoring points for the ribs.
This form of complex backbone was found in some crossopterygian fish, the Ichthyostegalia, most Temnospondyli and some Reptiliomorpha. Primitive reptiles kept the complex rachitomous vertebrae, but with the pleurocentra being the more dominant elements. As a phylogenetic group, the Rachitomi are thus paraphyletic.
References
Stegocephalians
Paraphyletic groups | Rachitomi | [
"Biology"
] | 172 | [
"Phylogenetics",
"Paraphyletic groups"
] |
6,904,987 | https://en.wikipedia.org/wiki/Virtual%20heritage | Virtual heritage or cultural heritage and technology is the body of works dealing with information and communication technologies and their application to cultural heritage, such as virtual archaeology. It aims to restore ancient cultures as real (virtual) environments where users can immerse.
Virtual heritage and cultural heritage have independent meanings: cultural heritage refers to sites, monuments, buildings and objects "with historical, aesthetic, archaeological, scientific, ethnological or anthropological value", whereas virtual heritage refers to instances of these within a technological domain, usually involving computer visualization of artefacts or virtual reality environments.
First use
The first use of virtual heritage as a museum exhibit, and the derivation of the name virtual tour, was in 1994 as a museum visitor interpretation, providing a 'walk-through' of a 3D reconstruction of Dudley Castle in England as it was in 1550.
This consisted of a computer-controlled, laserdisc-based system designed by British-based engineer Colin Johnson. One of the first users of virtual heritage was Queen Elizabeth II, when she officially opened the visitor centre in June 1994.
Because the Queen's officials had requested titles, descriptions and instructions of all activities, the system was named 'Virtual Tour', being a cross between virtual reality and royal tour.
Projects
One technology that is frequently employed in virtual heritage applications is augmented reality (AR), which is used to provide on-site reconstructions of archaeological sites or artefacts. An example is the lifeClipper project, a Swiss commercial tourism and mixed reality urban heritage project. Using HMD technology, users walking the streets of Basel can see cultured AR video characters and objects as well as oddly-shaped stencils.
Many virtual heritage projects focus on the tangible aspects of cultural heritage, for example 3D modelling, graphics and animation. In doing so, they often overlook the intangible aspects of cultural heritage associated with objects and sites, such as stories, performances and dances. The tangible aspects of cultural heritage are, however, inseparable from the intangible, and one method for combining them is the use of virtual heritage serious games, such as 'Digital Songlines' and 'Virtual Songlines', which modified computer game technology to preserve, protect and present the cultural heritage of Aboriginal Australian peoples. There have been numerous applications of digital models being used to engage the public and encourage involvement in built heritage activities and discourse.
Place-Hampi is another example of a virtual heritage project. It applies co-evolutionary systems to convey a cultural presence using stereoscopic rendering of the landscape of Hampi, a UNESCO World Heritage Site in Karnataka, India.
See also
CyArk
Computational archaeology
Digital heritage
References
Further reading
Michael Falser, Monica Juneja (eds.). 'Archaeologizing' Heritage? Transcultural Entanglements between Local Social Practices and Global Virtual Realities. Heidelberg, New York: Springer (2013).
External links
Cultural heritage
Virtual reality
Digital humanities | Virtual heritage | [
"Technology"
] | 589 | [
"Digital humanities",
"Computing and society"
] |
6,905,037 | https://en.wikipedia.org/wiki/Demidov%20Prize | The Demidov Prize () is a national scientific prize in Russia awarded annually to the members of the Russian Academy of Sciences. Originally awarded from 1832 to 1866 in the Russian Empire, it was revived by the government of Russia's Sverdlovsk Oblast in 1993. In its original incarnation it was one of the first annual scientific awards, and its traditions influenced other awards of this kind including the Nobel Prize.
History
In 1831 Count Pavel Nikolaievich Demidov, representative of the famous Demidov family, established a scientific prize in his name. The Saint Petersburg Academy of Sciences (now the Russian Academy of Sciences) was chosen as the awarding institution. In 1832 the president of the Petersburg Academy of Sciences, Sergei Uvarov, awarded the first prizes.
From 1832 to 1866 the Academy awarded 55 full prizes (5,000 rubles) and 220 part prizes. Among the winners were many prominent Russian scientists: the founder of field surgery and inventor of the plaster immobilisation method in the treatment of fractures, Nikolai Pirogov; the seafarer and geographer Adam Johann von Krusenstern, who led the first Russian circumnavigation of the globe; Dmitri Mendeleev, the creator of the periodic table of elements; Boris Jacobi, pioneer of the first usable electric motors; and many others. One of the recipients was the founder's younger brother, Count Anatoly Nikolaievich Demidov, 1st Prince of San Donato, in 1847; Pavel had died in 1840, making Anatoly the Count Demidov (note that Russia did not recognize Anatoly's Italian title of prince).
From 1866, 25 years after Count Demidov's death and in accordance with the terms of his bequest, there were no more awards.
In 1993, on the initiative of the vice-president of the Russian Academy of Sciences Gennady Mesyats and the governor of the Sverdlovsk Oblast Eduard Rossel, the Demidov Prize traditions were restored. The prize is awarded for outstanding achievements in natural sciences and humanities. The winners are elected annually among the members of the Russian Academy of Sciences. According to the tradition every year the Demidov Scientific Foundation chooses three or four academicians to receive the award. The prize includes a medal, a diploma and $10,000. The awards ceremony takes place every year at the Governor's Palace of Sverdlovsk Oblast, in Yekaterinburg, Russia. The recipients of the Prize also give lectures at the Ural State University (Demidov Lecture).
Winners (1832-1866)
Winners (from 1993)
See also
List of general science and technology awards
List of biology awards
List of chemistry awards
List of mathematics awards
List of physics awards
References
Bibliography
(in Russian) N. A. Mezenin: Лауреаты Демидовских премий Петербургской Академии наук. Л., Наука, 1987.
(in Russian) Yuri Alexandrovich Sokolov, Zoya Antonovna Bessudnova, L. T. Prizhdetskaya: Отечественные действительные и почетные члены Российской академии наук 18-20 вв. Геология и горные науки.- М.: Научный мир, 2000.
External links
Demidov Foundation short history
List of all the winners of the full Demidov Prize
Demidov Prize and Demidov Lecturing at Lebedev Physical Institute web site
Physics awards
Chemistry awards
Mathematics awards
Biology awards
Awards established in 1993 | Demidov Prize | [
"Technology"
] | 825 | [
"Biology awards",
"Chemistry awards",
"Mathematics awards",
"Science and technology awards",
"Physics awards"
] |
6,905,066 | https://en.wikipedia.org/wiki/Pholiota%20microspora | Pholiota microspora, commonly known as Pholiota nameko or simply , is a small, amber-brown mushroom with a slightly gelatinous coating that is used as an ingredient in miso soup and nabemono. In some countries this mushroom is available in kit form and can be grown at home. It is one of Japan's most popular cultivated mushrooms, tastes slightly nutty and is often used in stir-fries. They are also sold dried. Nameko is a cold triggered mushroom that typically fruits in the fall months when the temperature drops below 10°C for the first time, and flushes twice a few weeks apart.
In Mandarin Chinese the mushroom is known as 滑子蘑 (Pinyin: huá zi mó) or 滑菇 (Pinyin: huá gū).
In Russia it is also consumed widely, and is known as (often sold as) "opyonok" (опёнок) or plural "opyata" (опята).
In America the mushroom is sometimes called a "butterscotch mushroom".
See also
List of Pholiota species
References
Fungi described in 1929
Japanese cuisine
Strophariaceae
Fungi in cultivation
Fungi of Japan
Fungi of China
Russian cuisine
Fungus species | Pholiota microspora | [
"Biology"
] | 258 | [
"Fungi",
"Fungus species"
] |
6,905,141 | https://en.wikipedia.org/wiki/Spur%20%28botany%29 | The botanical term “spur” is given to outgrowths of tissue on different plant organs. The most common usage of the term in botany refers to nectar spurs in flowers.
nectar spur
spur (stem)
spur (leaf)
See also
Fascicle
Sepal
Petal
Tepal
Calyx
Corolla
Plant anatomy
Plant morphology | Spur (botany) | [
"Biology"
] | 67 | [
"Plant morphology",
"Plants"
] |
6,905,166 | https://en.wikipedia.org/wiki/Shimeji | Shimeji (Japanese: , or ) is a group of edible mushrooms native to East Asia, but also found in northern Europe. Hon-shimeji (Lyophyllum shimeji) is a mycorrhizal fungus and difficult to cultivate. Other species are saprotrophs, and buna-shimeji (Hypsizygus tessulatus) is now widely cultivated. Shimeji is rich in umami-tasting compounds such as guanylic acid, glutamic acid, and aspartic acid.
Species
Several species are sold as shimeji mushrooms. All are saprotrophic except Lyophyllum shimeji.
Mycorrhizal
Hon-shimeji (), Lyophyllum shimeji
The cultivation methods have been patented by several groups, such as Takara Bio and Yamasa, and the cultivated hon-shimeji is available from several manufacturers in Japan.
Saprotrophic
Buna-shimeji (, lit. beech shimeji), Hypsizygus tessulatus, also known in English as the brown beech or brown clamshell mushroom.
Hypsizygus marmoreus is a synonym of Hypsizygus tessulatus. Cultivation of buna-shimeji was first patented by Takara Shuzo Co., Ltd. in 1972 as hon-shimeji, and production started in 1973 in Japan. Now, several breeds are widely cultivated and sold fresh in markets.
Bunapi-shimeji (), known in English as the white beech or white clamshell mushroom.
Bunapi was selected from UV-irradiated buna-shimeji ('hokuto #8' x 'hokuto #12') and the breed was registered as 'hokuto shiro #1' by Hokuto Corporation.
Hatake-shimeji (), Lyophyllum decastes.
Shirotamogidake (), Hypsizygus ulmarius.
These two species had been also sold as hon-shimeji.
Velvet pioppino (alias velvet pioppini, black poplar mushroom, Chinese: /), Agrocybe aegerita.
Shimeji health benefits
Shimeji mushrooms contain minerals such as potassium, phosphorus, magnesium, zinc, and copper. Shimeji mushrooms lower the cholesterol level of the body. The mushroom is rich in glycoprotein (HM-3A), marmorin, beta-(1-3)-glucan, hypsiziprenol, and hypsin, and is therefore a potential natural anticancer agent. Shimeji mushrooms contain an angiotensin I-converting enzyme (ACE) inhibitor, an oligopeptide that may be helpful in lowering blood pressure and reducing the risk of stroke in persons with hypertension. They are also rich in polysaccharides, phenolic compounds, and flavonoids, and therefore inhibit inflammatory cytokines and oxidative stress and protect against lung failure. These compounds also help in reducing oxidative stress-mediated disease through radical-scavenging activity, so these mushrooms act as antioxidants as well.
Culinary use
Shimeji should always be cooked: it is not a good mushroom to serve raw due to a somewhat bitter taste, but the bitterness disappears completely upon cooking. The cooked mushroom has a pleasant, firm, slightly crunchy texture and a slightly nutty flavor. Cooking also makes this mushroom easier to digest. It works well in stir-fried dishes, including stir-fried vegetables, as well as with wild game or seafood. It can also be used in soups, stews, and sauces. When cooked alone, shimeji mushrooms can be sautéed whole, including the stem or stalk (with only the very end cut off), at a higher temperature, or slow-roasted at a low temperature with a small amount of butter or cooking oil. Shimeji is used in soups, nabe and takikomi gohan.
See also
List of Japanese ingredients
References
External links
Honshimeji Mushroom, RecipeTips.com. Brown Beech (Buna shimeji), White Beech (Bunapi shimeji), and the Pioppino (Agrocybe aegerita) mushrooms.
Edible fungi
Fungi in cultivation
Japanese cuisine terms
Fungi of Asia
Fungus common names | Shimeji | [
"Biology"
] | 926 | [
"Fungus common names",
"Fungi",
"Common names of organisms"
] |
6,905,183 | https://en.wikipedia.org/wiki/Timelike%20homotopy | On a Lorentzian manifold, certain curves are distinguished as timelike. A timelike homotopy between two timelike curves is a homotopy such that each intermediate curve is timelike. No closed timelike curve (CTC) on a Lorentzian manifold is timelike homotopic to a point (that is, null timelike homotopic); such a manifold is therefore said to be multiply connected by timelike curves (or timelike multiply connected). A manifold such as the 3-sphere can be simply connected (by any type of curve), and at the same time be timelike multiply connected. Equivalence classes of timelike homotopic curves define their own fundamental group, as noted by Smith (1967). A smooth topological feature which prevents a CTC from being deformed to a point may be called a timelike topological feature.
References
Algebraic topology
Homotopy theory
Lorentzian manifolds | Timelike homotopy | [
"Mathematics"
] | 195 | [
"Topology stubs",
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
6,905,266 | https://en.wikipedia.org/wiki/Flight%20director%20%28aeronautics%29 | In aviation, a flight director (FD) is a flight instrument that is overlaid on the attitude indicator that shows the pilot of an aircraft the attitude required to execute the desired flight path. Flight directors are mostly commonly used during approach and landing. They can be used with or without autopilot systems.
Description
Flight director (FD) modes integrated with autopilot systems perform calculations for more advanced automation, such as "selected course (intercepting), changing altitudes, and tracking navigation sources with cross winds." The FD computes and displays the proper pitch and bank angles required for the aircraft to follow the selected flight path.
The flight director usually receives input from an Air Data Computer (ADC) and a flight data computer. The ADC supplies altitude, airspeed and temperature data; heading data comes from magnetic sources such as flux valves and from the heading selected on the Horizontal Situation Indicator (HSI) (or Primary flight display (PFD)/multi-function display (MFD)/electronic horizontal situation indicator (EHSI)); navigation data comes from the Flight Management System (FMS), VHF omnidirectional range (VOR)/distance measuring equipment (DME), and RNAV sources. The flight data computer integrates all of the data, such as speed, position, closure, drift, track, desired course, and altitude, into a command signal. The command signal is displayed on the attitude indicator in the form of command bars, which show the pitch and roll inputs necessary to achieve the selected targets.
The pilot simply keeps the aircraft symbol on the attitude indicator aligned with the command bars, or allows the autopilot to make the actual control movements to fly the selected track and altitude. A simple example: the aircraft flies level on a 045° heading at flight level FL150 at a constant indicated airspeed, so the FD bars are centered. Then the flight director is set to heading 090° and a new flight level, FL200. The aircraft must thus turn to the right and climb.
This is done by banking to the right while climbing. The roll bar will deflect to the right and the pitch bar will deflect upwards. The pilot will then pull back on the control column while banking to the right. Once the aircraft reaches the proper bank angle, the FD vertical bar will center and remain centered until it is time to roll back to wings level (when the heading approaches 090°).
When the aircraft approaches FL200 the FD horizontal bar will deflect downwards thus commanding the pilot to lower the nose in order to level off at FL200.
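The logic of this worked example can be made concrete in code. The following is a minimal illustrative sketch in Java, not taken from any real avionics system; the gain constants, limits and class names are assumptions invented for the example:

public class FlightDirectorSketch {
    // Illustrative gains and limits; real avionics use far more elaborate control laws
    static final double HEADING_GAIN = 1.5;   // degrees of bank per degree of heading error
    static final double MAX_BANK = 25.0;      // degrees
    static final double ALTITUDE_GAIN = 0.01; // degrees of pitch per foot of altitude error
    static final double MAX_PITCH = 10.0;     // degrees

    // Smallest signed angle from current to target heading, in degrees
    static double headingError(double targetDeg, double currentDeg) {
        double e = (targetDeg - currentDeg) % 360.0;
        if (e > 180.0) e -= 360.0;
        if (e <= -180.0) e += 360.0;
        return e;
    }

    static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    public static void main(String[] args) {
        // The worked example from the text: heading 045 -> 090, FL150 -> FL200
        double currentHeading = 45.0, targetHeading = 90.0;         // degrees
        double currentAltitude = 15000.0, targetAltitude = 20000.0; // feet

        // Roll bar: bank proportional to heading error, limited to MAX_BANK
        double bankCmd = clamp(HEADING_GAIN * headingError(targetHeading, currentHeading),
                -MAX_BANK, MAX_BANK);
        // Pitch bar: pitch proportional to altitude error, limited to MAX_PITCH
        double pitchCmd = clamp(ALTITUDE_GAIN * (targetAltitude - currentAltitude),
                -MAX_PITCH, MAX_PITCH);

        // Positive values deflect the bars right and up; as the errors shrink,
        // the commands fade to zero and the bars re-center.
        System.out.printf("Commanded bank: %.1f deg, commanded pitch: %.1f deg%n",
                bankCmd, pitchCmd);
    }
}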
A flight director can be used with or without automation of the flight control surfaces. The FD is commonly used in direct connection with the autopilot (AP), where the FD commands the AP to put the aircraft in the attitude necessary to follow a trajectory. In some aircraft, the autopilot cannot function without the flight director engaged. Without a flight director, the autopilot will be limited to simpler modes such as following a heading or maintaining an altitude. With a flight director, the autopilot can follow a flight plan programmed into the flight computer.
The exact form of the flight director's display varies with the instrument type, either crosshair or command bars (so-called "cue").
See also
Acronyms and abbreviations in avionics
Attitude indicator
Flight instruments
Head-up display (HUD)
References
Avionics
Navigational flight instruments
Aircraft automation | Flight director (aeronautics) | [
"Technology",
"Engineering"
] | 707 | [
"Avionics",
"Automation",
"Aircraft instruments",
"Navigational flight instruments",
"Aircraft automation"
] |
6,905,282 | https://en.wikipedia.org/wiki/Quaternary%20carbon | A quaternary carbon is a carbon atom bound to four other carbon atoms. For this reason, quaternary carbon atoms are found only in hydrocarbons having at least five carbon atoms. Quaternary carbon atoms can occur in branched alkanes, but not in linear alkanes.
Synthesis
The formation of chiral quaternary carbon centers has been a synthetic challenge. Chemists have developed asymmetric Diels–Alder reactions, Heck reactions, enyne cyclizations, cycloaddition reactions, C–H activation, allylic substitutions, Pauson–Khand reactions, and other methods to construct asymmetric quaternary carbon atoms.
One of the most industrially important compounds containing a quaternary carbon is bisphenol A (BPA), whose central atom is a quaternary carbon. Retrosynthetically, that carbon is the central atom of an acetone molecule before condensation with two equivalents of phenol.
References
Chemical nomenclature
Organic chemistry | Quaternary carbon | [
"Chemistry"
] | 213 | [
"nan"
] |
6,905,345 | https://en.wikipedia.org/wiki/Greenhouse%20gas%20inventory | Greenhouse gas inventories are emission inventories of greenhouse gas emissions that are developed for a variety of reasons. Scientists use inventories of natural and anthropogenic (human-caused) emissions as tools when developing atmospheric models. Policy makers use inventories to develop strategies and policies for emissions reductions and to track the progress of those policies.
Regulatory agencies and corporations also rely on inventories to establish compliance records with allowable emission rates. Businesses, the public, and other interest groups use inventories to better understand the sources and trends in emissions.
Unlike some other air emission inventories, greenhouse gas inventories include not only emissions from source categories, but also removals by carbon sinks. These removals are typically referred to as carbon sequestration.
Greenhouse gas inventories typically use global warming potential (GWP) values to combine emissions of various greenhouse gases into a single weighted value of emissions.
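For example, the weighted value is simply the sum over gases of mass emitted times GWP. The Java sketch below uses the 100-year GWP values reported in the IPCC Fifth Assessment Report (28 for methane and 265 for nitrous oxide); the emission quantities are hypothetical, and a real inventory should take its GWP values from the reference it has adopted:

public class Co2Equivalents {
    public static void main(String[] args) {
        // Hypothetical annual emissions, in tonnes of each gas
        double co2 = 1000.0, ch4 = 10.0, n2o = 1.0;
        // 100-year GWPs as reported in IPCC AR5; other references differ
        double gwpCo2 = 1.0, gwpCh4 = 28.0, gwpN2o = 265.0;
        // CO2-equivalent = sum of (mass emitted x GWP) over all gases
        double co2e = co2 * gwpCo2 + ch4 * gwpCh4 + n2o * gwpN2o;
        System.out.printf("Total: %.0f t CO2e%n", co2e); // 1000 + 280 + 265 = 1545
    }
}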
Examples
Some of the key examples of greenhouse gas inventories include:
All Annex I countries are required to report annual emissions and sinks of greenhouse gases under the United Nations Framework Convention on Climate Change (UNFCCC)
National governments that are Parties to the UNFCCC and/or the Kyoto Protocol are required to submit annual inventories of all anthropogenic greenhouse gas emissions from sources and removals from sinks.
The Kyoto Protocol includes additional requirements for national inventory systems, inventory reporting, and annual inventory review for determining compliance with Articles 5 and 8 of the Protocol.
Project developers under the Clean Development Mechanism of the Kyoto Protocol prepare inventories as part of their project baselines.
Greenhouse gas emissions accounting
Greenhouse gas emissions accounting is measuring the amount of greenhouse gases (GHG) emitted during a given period of time by a polity, usually a country but sometimes a region or city. Such measures are used to conduct climate science and climate policy.
There are two main, conflicting ways of measuring GHG emissions: production-based (also known as territorial-based) and consumption-based. The Intergovernmental Panel on Climate Change defines production-based emissions as taking place “within national territory and offshore areas over which the country has jurisdiction”. Consumption-based emissions take into account the effects of trade, encompassing the emissions from domestic final consumption and those caused by the production of its imports. From the perspective of trade, consumption-based emissions accounting is thus the reverse of production-based emissions accounting, which includes exports but excludes imports (Table 1).
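In symbols, where "embodied" emissions are those generated anywhere in producing the traded goods, the two measures differ only by the trade terms:

    E_consumption = E_production − E_embodied_in_exports + E_embodied_in_imports

so the two accounts coincide for a country whose embodied-emission trade is balanced, and the global totals under the two methods are identical.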
The choice of accounting method can have very important effects on policymaking, as each measure can generate a very different result. Different values for a national greenhouse gas emissions inventory (NEI) could thus lead a country to choose different optimal mitigation activities, and a wrong choice based on wrong information is potentially damaging. The application of production-based emissions accounting is currently favoured in policy terms, as it is easier to measure, but it is criticised in the literature principally for its inability to allocate emissions embodied in international trade and transportation, and for the potential for carbon leakage.
Almost all countries in the world are parties to the Paris Agreement, which requires them to provide regular production-based GHG emissions inventories to the United Nations Framework Convention on Climate Change (UNFCCC), in order to track both countries achievement of their nationally determined contributions and climate policies as well as regional climate policies such as the EU Emissions Trading Scheme (ETS), and the world's progress in limiting global warming.
Comparison of production based and consumption-based accounting
Over the last few decades emissions have grown at an increasing rate, from 1.0% per year throughout the 1990s to 3.4% per year between 2000 and 2008. These increases have been driven not only by a growing global population and per-capita GDP, but also by global increases in the energy intensity of GDP (energy per unit GDP) and the carbon intensity of energy (emissions per unit energy). These drivers are most apparent in developing markets (Kyoto non-Annex B countries), but what is less apparent is that a substantial fraction of the growth in these countries serves the demand of consumers in developed countries (Kyoto Annex B countries). This is exaggerated by a process known as carbon leakage, whereby Annex B countries decrease domestic production in favour of increased importation of products from non-Annex B countries where emission policies are less strict. Although this may seem the rational choice for consumers when considering local pollutants, consumers are inescapably affected by global pollutants such as GHG, irrespective of where production occurs. Although emissions have slowed since 2007 as a result of the global financial crisis, the longer-term trend of increased emissions is likely to resume.
Today, much international effort is put into slowing the anthropogenic release of GHG and resulting climate change. In order to set benchmarks and emissions targets for - as well as monitor and evaluate the progress of - international and regional policies, the accurate measurement of each country's NEI becomes imperative.
Production-based accounting
As production-based emissions accounting is currently favoured in policy terms, its methodology is well established. Emissions are calculated not directly but indirectly from fossil fuel usage and other relevant processes such as industry and agriculture according to 2006 guidelines issued by the IPCC for GHG reporting. The guidelines span numerous methodologies dependent on the level of sophistication (Tiers 1–3 in Table 2). The simplest methodology combines the extent of human activity with a coefficient quantifying the emissions from that activity, known as an ‘emission factor’. For example, to estimate emissions from the energy sector (typically contributing over 90% of emissions and 75% of all GHG emissions in developed countries) the quantity of fuels combusted is combined with an emission factor - the level of sophistication increasing with the accuracy and complexity of the emission factor. Table 2 outlines how the UK implements these guidelines to estimate some of its emissions-producing activities.
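A minimal Java sketch of this simplest (Tier 1) calculation follows; the fuel quantities and emission factors are invented placeholders, not official IPCC default values:

import java.util.*;

public class Tier1Inventory {
    public static void main(String[] args) {
        // Activity data: energy from fuel combusted, in terajoules (hypothetical)
        Map<String, Double> fuelTJ = Map.of("coal", 500.0, "naturalGas", 300.0);
        // Emission factors in tonnes CO2 per TJ (placeholders; a real Tier 1
        // inventory takes these from the 2006 IPCC guidelines per fuel and sector)
        Map<String, Double> factor = Map.of("coal", 96.0, "naturalGas", 56.0);

        double total = 0.0;
        for (String fuel : fuelTJ.keySet()) {
            // Tier 1: emissions = activity data x emission factor
            double tonnesCo2 = fuelTJ.get(fuel) * factor.get(fuel);
            System.out.printf("%s: %.0f t CO2%n", fuel, tonnesCo2);
            total += tonnesCo2;
        }
        System.out.printf("Total: %.0f t CO2%n", total);
    }
}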
Emissions from burning wood are counted against the country where the trees were felled rather than the country where they are burnt.
Consumption-based accounting
Consumption-based emissions accounting has an equally established methodology using Input-Output Tables. These "display the interconnection between different sectors of production and allow for a tracing of the production and consumption in an economy" and were originally created for national economies. However, as production has become increasingly international and the import/export market between nations has flourished, Multi-Regional Input-Output (MRIO) models have been developed. The unique feature of MRIO is allowing a product to be traced across its production cycle, "quantifying the contributions to the value of the product from different economic sectors in various countries represented in the model. It hence offers a description of the global supply chains of products consumed". From this, assuming regional- and industry-specific data for CO2 emissions per unit of output are available, the total amount of emissions for the product can be calculated, and therefore the amount of emissions the final consumer is allocated responsibility for.
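The mechanics can be illustrated with a toy two-sector version. Writing A for the matrix of technical coefficients, f for the vector of direct emission intensities and y for final demand, consumption-based emissions are f(I − A)⁻¹y, where (I − A)⁻¹ is the Leontief inverse. All numbers in the Java sketch below are invented for the example:

public class MrioSketch {
    public static void main(String[] args) {
        // A[i][j]: input from sector i required per unit of output of sector j
        double[][] A = {{0.1, 0.2}, {0.3, 0.1}};
        // f[i]: direct emissions per unit of output of sector i (kt CO2 per unit)
        double[] f = {2.0, 0.5};
        // y[i]: final demand for sector i's output
        double[] y = {100.0, 50.0};

        // Leontief inverse L = (I - A)^(-1), in closed form for the 2x2 case
        double a = 1 - A[0][0], b = -A[0][1], c = -A[1][0], d = 1 - A[1][1];
        double det = a * d - b * c;
        double[][] L = {{d / det, -b / det}, {-c / det, a / det}};

        // Total output x = L y; consumption-based emissions = f . x
        double emissions = 0.0;
        for (int i = 0; i < 2; i++) {
            double xi = L[i][0] * y[0] + L[i][1] * y[1];
            emissions += f[i] * xi;
        }
        System.out.printf("Consumption-based emissions: %.1f kt CO2%n", emissions);
    }
}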
The two methodologies of emissions accounting begin to expose their key differences. Production-based accounting is transparently consistent with GDP, whereas consumption-based accounting (more complex and uncertain) is consistent with national consumption and trade. However, the most important difference is that the latter covers global emissions - including those ‘embodied’ emissions that are omitted in production-based accounting - and offers globally based mitigation options. Thus the attribution of emissions embodied in international trade is the crux of the matter.
Emissions embodied in international trade
Figure 1 and Table 3 show extent of emissions embodied in international trade and thus their importance when attempting emissions reductions. Figure 1 shows the international trade flows of the top 10 countries with largest trade fluxes in 2004 and illustrates the dominance of trade from developing countries (principally China, Russia and India) to developed countries (principally USA, EU and Japan). Table 3 supports this showing that the traded emissions in 2008 total 7.8 gigatonnes (Gt) with a net CO2 emissions trade from developing to developed countries of 1.6 Gt.
Table 3 also shows how these processes of production, consumption and trade have changed from 1990 (commonly chosen for baseline levels) to 2008. Global emissions have risen 39%, but in the same period developed countries seem to have stabilized their domestic emissions, whereas developing countries’ domestic emissions have doubled. This ‘stabilization’ is arguably misleading, however, if the increased trade from developing to developed countries is considered. This has increased from 0.4 Gt CO2 to 1.6 Gt CO2 - a 17%/year average growth meaning 16 Gt CO2 have been traded from developing to developed countries between 1990 and 2008. Assuming a proportion of the increased production in developing countries is to fulfil the consumption demands of developed countries, the process known as carbon leakage becomes evident. Thus, including international trade (i.e. the methodology of consumption-based accounting) reverses the apparent decreasing trend in emissions in developed countries, changing a 2% decrease (as calculated by production-based accounting) into a 7% increase across the time period. This point is only further emphasized when these trends are studied at a less aggregated scale.
Figure 2 shows the percentage surplus of emissions as calculated by production-based accounting over consumption-based accounting. In general, production-based accounting proposes lower emissions for the EU and OECD countries (developed countries) and higher emissions for BRIC and rest of the world (developing countries). However, consumption-based accounting proposes the reverse with lower emissions in BRIC and RoW, and higher emissions in EU and OECD countries. This led Boitier to term EU and OECD ‘CO2 consumers’ and BRIC and RoW ‘CO2 producers’.
The large difference in these results is corroborated by further analysis. The EU-27 in 1994 counted emissions using the consumption-based approach at 11% higher than those counted using the production-based approach, this difference rising to 24% in 2008. Similarly OECD countries reached a peak variance of 16% in 2006 whilst dropping to 14% in 2008. In contrast, although RoW starts and ends relatively equal, in the intervening years it is a clear CO2 producer, as are BRIC with an average consumption-based emissions deficit of 18.5% compared to production-based emissions.
Peters and Hertwich completed an MRIO study to calculate emissions embodied in international trade using data from the 2001 Global Trade Analysis Project (GTAP). After manipulation, although their numbers are slightly more conservative (EU 14%; OECD 3%; BRIC 16%; RoW 6%) than Boitier's, the same trend is evident - developed countries are CO2 consumers and developing countries are CO2 producers. This trend is seen across the literature, supporting the use of consumption-based emissions accounting in policy-making decisions.
Tools and standards
ISO 14064
The ISO 14064 standards (published in 2006 and early 2007) are the most recent additions to the ISO 14000 series of international standards for environmental management. The ISO 14064 standards provide governments, businesses, regions and other organisations with an integrated set of tools for programs aimed at measuring, quantifying and reducing greenhouse gas emissions. These standards allow organisations to take part in emissions trading schemes using a globally recognised standard.
Local Government Operations Protocol
The Local Government Operations Protocol (LGOP) is a tool for accounting and reporting greenhouse gas emissions across a local government's operations. Adopted by the California Air Resources Board (ARB) in September 2008 for local governments to develop and report consistent GHG inventories to help meet California's AB 32 GHG reduction obligations, it was developed in partnership with California Climate Action Registry, The Climate Registry, ICLEI and dozens of stakeholders.
The California Sustainability Alliance also created the Local Government Operations Protocol Toolkit, which breaks down the complexities of the LGOP manual and provides an area by area summary of the recommended inventory protocols.
IPCC format for GHG emissions inventories
The data in the GHG emissions inventory is presented using the IPCC format (seven sectors presented using the Common Reporting Format, or CRF) as is all communication between Member States and the Secretariat of the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol.
Advantages of consumption-based accounting
Consumption-based emissions accounting may be deemed superior as it incorporates embodied emissions currently ignored by the UNFCCC-preferred production-based accounting. Other key advantages include: extending mitigation options, covering more global emissions through increased participation, and inherently encompassing policies such as the Clean Development Mechanism (CDM).
Extending mitigation options
Under the production-based system, a country is punished for having a pollution-intensive resource base. If this country has pollution-intensive exports, as in Norway, where 69% of CO2 emissions are the result of production for export, a simple way to meet its emissions reductions set out under Kyoto would be to reduce its exports. Although this would be environmentally advantageous, it would be economically and politically harmful, as exports are an important part of a country's GDP. However, with appropriate mechanisms in place, such as a harmonized global tax, border-tax adjustments or quotas, a consumption-based accounting system could shift the comparative advantage towards decisions that include environmental factors. The tax most discussed is based on the carbon content of the fossil fuels used to produce and transport a product: the greater the amount of carbon used, the more tax charged. If a country did not voluntarily participate, a border tax could be imposed on it. This system would have the effect of embedding the cost of environmental load in the price of the product, so market forces would shift production to where it is economically and environmentally preferable, thus reducing GHG emissions.
Increasing participation
In addition to reducing emissions directly, this system may also alleviate competitiveness concerns in two ways: firstly, domestic and foreign producers are exposed to the same carbon tax; and secondly, if multiple countries are competing for the same export market they can promote environmental performance as a marketing tool. A loss of competitiveness resulting from the absence of legally binding commitments for non-Annex B countries was the principal reason the US and Australia, two heavily emitting countries, did not originally ratify the Kyoto Protocol (Australia later ratified in 2007). By alleviating such concerns, more countries may participate in future climate policies, resulting in a greater percentage of global emissions being covered by legally binding reduction policies. Furthermore, as developed countries are currently expected to reduce their emissions more than developing countries, the more emissions are (fairly) attributed to developed countries, the more emissions become covered by legally binding reduction policies. Peters argues that this last prediction means that consumption-based accounting would advantageously result in greater emissions reductions irrespective of increased participation.
Encompassing policies such as the CDM
The CDM is a flexible mechanism set up under the Kyoto Protocol with the aim of creating ‘Carbon Credits’ for trade in trading schemes such as the EU ETS. Despite coming under heavy criticism (see Evans, p134-135; and Burniaux et al., p58-65), the theory is that as the marginal cost of environmental abatement is lower in non-Annex B countries a scheme like this will promote technology transfer from Annex B to non-Annex B countries resulting in cheaper emissions reductions. Because under consumption-based emissions accounting a country is responsible for the emissions caused by its imports, it is important for the importing country to encourage good environmental behaviour and promote the cleanest production technologies available in the exporting country. Therefore, unlike the Kyoto Protocol where the CDM was added later, consumption-based emissions accounting inherently promotes clean development in the foreign country because of the way it allocates emissions. One loophole that remains relevant is carbon colonialism whereby developed countries do not mitigate the underlying problem but simply continue to increase consumption offsetting this by exploiting the abatement potential of developing countries.
Disadvantages of consumption-based accounting
Despite its advantages consumption-based emissions accounting is not without its drawbacks. These were highlighted above and in Table 1 and are principally: greater uncertainty, greater complexity requiring more data not always available, and requiring greater international collaboration.
Greater uncertainty and complexity
Uncertainty derives from three main sources: production-based accounting is much closer to statistical sources and GDP, which are more assured; the methodology behind consumption-based accounting requires an extra step beyond production-based accounting, and this step inherently incurs further doubt; and consumption-based accounting includes data from all trading partners of a particular country, which will contain different levels of accuracy. The bulk of data required is its second pitfall, as in some countries the lack of data means consumption-based accounting is not possible. However, levels and accuracy of data will improve as more and better techniques are developed and the scientific community produces more data sets - examples include the recently launched global databases: EORA from the University of Sydney, the EXIOPOL and WIOD databases from European consortia, and the Asian IDE-JETRO. In the short term it will be important to attempt to quantify the level of uncertainty more accurately.
Greater international co-operation
The third problem is that consumption-based accounting requires greater international collaboration to deliver effective results. A Government has the authority to implement policies only over emissions it directly generates. In consumption-based accounting emissions from different geo-political territories are allocated to the importing country. Although the importing country can indirectly oppose this by changing its importing habits or by applying a border tax as discussed, only by greater international collaboration, through an international dialogue such as the UNFCCC, can direct and meaningful emissions reductions be enforced.
Sharing emissions responsibility
Thus far it has been implied that one must implement either production-based accounting or consumption-based accounting. However, there are arguments that the answer lies somewhere in the middle, i.e. emissions should be shared between the importing and exporting countries. This approach asserts that although it is the final consumer that ultimately initiates the production, the activities that create the product and associated pollution also contribute to the producing country's GDP. This topic is still developing in the literature, principally through works by Rodrigues et al., Lenzen et al. and Marques et al., as well as through empirical studies such as those by Andrew and Forgie. Crucially, it proposes that at each stage of the supply chain the emissions are shared by some pre-defined criteria between the different actors involved.
Whilst this approach of sharing emissions responsibility seems advantageous, the controversy arises over what these pre-defined criteria should be. Two of the current front runners are Lenzen et al., who say “the share of responsibility allocated to each agent should be proportional to its value added”, and Rodrigues et al., who say it should be based on “the average between an agent's consumption-based responsibility and income-based responsibility” (quoted in Marques et al.). However, no criteria set has been adequately developed, and further work is needed to produce a finished methodology for this potentially valuable concept.
Trends
Measures of regions' GHG emissions are critical to climate policy. It is clear that production-based emissions accounting, the currently favoured method for policy-making, significantly understates the emissions attributable to many countries because it excludes emissions embodied in international trade. Under consumption-based accounting, which includes such emissions, developed countries take a greater share of GHG emissions, and consequently the low level of emissions commitments for developing countries is not as important. Not only does consumption-based accounting encompass global emissions, it promotes good environmental behaviour and increases participation by reducing competitiveness concerns.
Despite these advantages the shift from production-based to consumption-based accounting arguably represents a shift from one extreme to another. The third option of sharing responsibility between importing and exporting countries represents a compromise between the two systems. However, as yet no adequately developed methodology exists for this third way, so further study is required before it can be implemented for policy-making decisions.
Today, given its lower uncertainty, established methodology and reporting, consistency between political and environmental boundaries, and widespread implementation, it is hard to see any movement away from the favoured production-based accounting. However, because of its key disadvantage of omitting emissions embodied in international trade, it is clear that consumption-based accounting provides invaluable information and should at least be used as a ‘shadow’ to production-based accounting. With further work into the methodologies of consumption-based accounting and sharing emissions responsibility, both can play greater roles in the future of climate policy.
See also
Carbon footprint
Environmental economics
Greenhouse gas monitoring
Greenhouse Gases Observing Satellite (GOSAT) (Ibuki)
References
External links
Intergovernmental Panel on Climate Change (IPCC) national greenhouse gas inventory guidance manuals
UNFCCC National Inventory process
The GHG Protocol (WRI/WBCSD) - A corporate accounting and reporting standard
ISO 14064 standards for greenhouse gas accounting and verification
IPCC National Greenhouse Gas Inventories Programme
The Climate Registry
National inventories of GHG emitted in 2021 (received by the UNFCCC in 2023)
Greenhouse Gas Inventory Data – Flexible Queries Annex I Parties
Greenhouse gas emissions | Greenhouse gas inventory | [
"Chemistry"
] | 4,290 | [
"Greenhouse gases",
"Greenhouse gas emissions"
] |
6,905,370 | https://en.wikipedia.org/wiki/Jakarta%20Mail | Jakarta Mail (formerly JavaMail) is a Jakarta EE API used to send and receive email via SMTP, POP3 and IMAP. Jakarta Mail is built into the Jakarta EE platform, but also provides an optional package for use in Java SE.
The current version is 2.1.3, released on February 29, 2024. Another open source Jakarta Mail implementation exists (GNU JavaMail), which, while supporting only the obsolete JavaMail 1.3 specification, provides the only free NNTP backend, making it possible to use this technology to read and send news group articles.
As of 2019, the software is known as Jakarta Mail, and is part of the Jakarta EE brand (formerly known as Java EE). The reference implementation is part of the Eclipse Angus project.
Maven coordinates of the relevant projects required for operation are:
mail API: jakarta.mail:jakarta.mail-api:2.1.3
mail implementation: org.eclipse.angus:angus-mail:2.0.3
multimedia extensions: jakarta.activation:jakarta.activation-api:2.1.3
Licensing
Jakarta Mail is hosted as an open source project on Eclipse.org under its new name Jakarta Mail.
Most of the Jakarta Mail source code is licensed under the following licences:
EPL-2.0
GPL-2.0 with Classpath Exception license
The source code for the demo programs is licensed under the BSD license
Examples
import jakarta.mail.*;
import jakarta.mail.internet.*;
import java.time.*;
import java.util.*;
// Send a simple, single part, text/plain e-mail
public class TestEmail {
static Clock clock = Clock.systemUTC();
public static void main(String[] args) {
// SUBSTITUTE YOUR EMAIL ADDRESSES HERE!
String to = "sendToMailAddress";
String from = "sendFromMailAddress";
// SUBSTITUTE YOUR ISP'S MAIL SERVER HERE!
String host = "smtp.yourisp.invalid";
// Create properties, get Session
Properties props = new Properties();
// If using static Transport.send(),
// need to specify which host to send it to
props.put("mail.smtp.host", host);
// To see what is going on behind the scene
props.put("mail.debug", "true");
Session session = Session.getInstance(props);
try {
// Instantiate a message
Message msg = new MimeMessage(session);
//Set message attributes
msg.setFrom(new InternetAddress(from));
InternetAddress[] address = {new InternetAddress(to)};
msg.setRecipients(Message.RecipientType.TO, address);
msg.setSubject("Test E-Mail through Java");
Date now = Date.from(LocalDateTime.now(clock).toInstant(ZoneOffset.UTC));
msg.setSentDate(now);
// Set message content
msg.setText("This is a test of sending a " +
"plain text e-mail through Java.\n" +
"Here is line 2.");
//Send the message
Transport.send(msg);
} catch (MessagingException mex) {
// Prints all nested (chained) exceptions as well
mex.printStackTrace();
}
}
}
Sample Code to Send Multipart E-Mail, HTML E-Mail and File Attachments
package org.example;
import jakarta.activation.*;
import jakarta.mail.*;
import jakarta.activation.*;
import jakarta.mail.*;
import jakarta.mail.internet.*;
import java.io.*;
import java.time.*;
import java.util.*;

public class SendMailUsage {
    static Clock clock = Clock.systemUTC();

    public static void main(String[] args) {
        // SUBSTITUTE YOUR EMAIL ADDRESSES HERE!!!
        String to = "sendToMailAddress";
        String from = "sendFromMailAddress";
        // SUBSTITUTE YOUR ISP'S MAIL SERVER HERE!!!
        String host = "smtpserver.yourisp.invalid";
        // Create properties for the Session
        Properties props = new Properties();
        // If using the static Transport.send(),
        // the mail server must be specified here
        props.put("mail.smtp.host", host);
        // To see what is going on behind the scenes
        props.put("mail.debug", "true");
        // Get a session
        Session session = Session.getInstance(props);
        try {
            // Get a Transport object to send e-mail
            Transport bus = session.getTransport("smtp");
            // Connect only once here;
            // Transport.send() disconnects after each send.
            // Usually, no username and password are required for SMTP.
            bus.connect();
            //bus.connect("smtpserver.yourisp.net", "username", "password");
            // Instantiate a message
            Message msg = new MimeMessage(session);
            // Set message attributes
            msg.setFrom(new InternetAddress(from));
            InternetAddress[] address = {new InternetAddress(to)};
            msg.setRecipients(Message.RecipientType.TO, address);
            // Parse a comma-separated list of email addresses. Be strict.
            msg.setRecipients(Message.RecipientType.CC,
                    InternetAddress.parse(to, true));
            // Parse a comma/space-separated list. Cut some slack.
            msg.setRecipients(Message.RecipientType.BCC,
                    InternetAddress.parse(to, false));
            msg.setSubject("Test E-Mail through Java");
            msg.setSentDate(Date.from(Instant.now(clock)));
            // Set message content and send
            setTextContent(msg);
            msg.saveChanges();
            bus.sendMessage(msg, address);
            setMultipartContent(msg);
            msg.saveChanges();
            bus.sendMessage(msg, address);
            setFileAsAttachment(msg, "C:/WINDOWS/CLOUD.GIF");
            msg.saveChanges();
            bus.sendMessage(msg, address);
            setHTMLContent(msg);
            msg.saveChanges();
            bus.sendMessage(msg, address);
            bus.close();
        } catch (MessagingException mex) {
            // Prints all nested (chained) exceptions as well
            mex.printStackTrace();
            // How to access nested exceptions
            while (mex.getNextException() != null) {
                // Get the next exception in the chain
                Exception ex = mex.getNextException();
                ex.printStackTrace();
                if (!(ex instanceof MessagingException)) break;
                mex = (MessagingException) ex;
            }
        }
    }

    // A simple, single-part text/plain e-mail.
    public static void setTextContent(Message msg) throws MessagingException {
        // Set message content
        String mytxt = "This is a test of sending a " +
                "plain text e-mail through Java.\n" +
                "Here is line 2.";
        msg.setText(mytxt);
        // Alternate form
        msg.setContent(mytxt, "text/plain");
    }

    // A simple multipart/mixed e-mail. Both body parts are text/plain.
    public static void setMultipartContent(Message msg) throws MessagingException {
        // Create and fill the first part
        MimeBodyPart p1 = new MimeBodyPart();
        p1.setText("This is part one of a test multipart e-mail.");
        // Create and fill the second part
        MimeBodyPart p2 = new MimeBodyPart();
        // Here is how to set a charset on textual content
        p2.setText("This is the second part", "us-ascii");
        // Create the Multipart and add the BodyParts to it.
        Multipart mp = new MimeMultipart();
        mp.addBodyPart(p1);
        mp.addBodyPart(p2);
        // Set the Multipart as the message's content
        msg.setContent(mp);
    }

    // Set a file as an attachment. Uses the JAF FileDataSource.
    public static void setFileAsAttachment(Message msg, String filename)
            throws MessagingException {
        // Create and fill the first part
        MimeBodyPart p1 = new MimeBodyPart();
        p1.setText("This is part one of a test multipart e-mail. " +
                "The second part is a file attachment.");
        // Create the second part
        MimeBodyPart p2 = new MimeBodyPart();
        // Put a file in the second part
        FileDataSource fds = new FileDataSource(filename);
        p2.setDataHandler(new DataHandler(fds));
        p2.setFileName(fds.getName());
        // Create the Multipart and add the BodyParts to it.
        Multipart mp = new MimeMultipart();
        mp.addBodyPart(p1);
        mp.addBodyPart(p2);
        // Set the Multipart as the message's content
        msg.setContent(mp);
    }

    // Set single-part HTML content.
    // Sending data of any type is similar.
    public static void setHTMLContent(Message msg) throws MessagingException {
        String html = "<html><head><title>" +
                msg.getSubject() +
                "</title></head><body><h1>" +
                msg.getSubject() +
                "</h1><p>This is a test of sending an HTML e-mail" +
                " through Java.</p></body></html>";
        // HTMLDataSource is a static nested class, defined below
        msg.setDataHandler(new DataHandler(new HTMLDataSource(html)));
    }

    /*
     * Static nested class that acts as a JAF DataSource for HTML e-mail content.
     */
    static class HTMLDataSource implements DataSource {
        private final String html;

        public HTMLDataSource(String htmlString) {
            html = htmlString;
        }

        // Return the HTML string in an InputStream.
        // A new stream must be returned each time.
        public InputStream getInputStream() throws IOException {
            if (html == null) throw new IOException("Null HTML");
            return new ByteArrayInputStream(html.getBytes());
        }

        public OutputStream getOutputStream() throws IOException {
            throw new IOException("This DataSource cannot write HTML");
        }

        public String getContentType() {
            return "text/html";
        }

        public String getName() {
            return "JAF text/html dataSource to send e-mail only";
        }
    }
}
References
External links
Jakarta Mail EE4J project page
FAQ
GNU JavaMail – obsolete, but contains code for an NNTP backend
Email
Java platform
Java enterprise platform | Jakarta Mail | [
"Technology"
] | 2,618 | [
"Computing platforms",
"Java platform"
] |
6,905,518 | https://en.wikipedia.org/wiki/Electron%20beam-induced%20current | Electron-beam-induced current (EBIC) is a semiconductor analysis technique performed in a scanning electron microscope (SEM) or scanning transmission electron microscope (STEM). It is most commonly used to identify buried junctions or defects in semiconductors, or to examine minority carrier properties. EBIC is similar to cathodoluminescence in that it depends on the creation of electron–hole pairs in the semiconductor sample by the microscope's electron beam. This technique is used in semiconductor failure analysis and solid-state physics.
Physics of the technique
If the semiconductor sample contains an internal electric field, as will be present in the depletion region at a p-n junction or Schottky junction, the electron–hole pairs will be separated by drift due to the electric field. If the p- and n-sides (or semiconductor and Schottky contact, in the case of a Schottky device) are connected through a picoammeter, a current will flow.
EBIC is best understood by analogy: in a solar cell, photons of light fall on the entire cell, delivering energy, creating electron–hole pairs and causing a current to flow. In EBIC, energetic electrons take the role of the photons, causing the EBIC current to flow. However, because the electron beam of an SEM or STEM is very small, it is scanned across the sample, and variations in the induced EBIC are used to map the electronic activity of the sample.
By using the signal from the picoammeter as the imaging signal, an EBIC image is formed on the screen of the SEM or STEM. When a semiconductor device is imaged in cross-section, the depletion region will show bright EBIC contrast. The shape of the contrast can be treated mathematically to determine the minority carrier properties of the semiconductor, such as diffusion length and surface recombination velocity. In plan view, areas with good crystal quality will show bright contrast, and areas containing defects will show dark EBIC contrast.
As such, EBIC is a semiconductor analysis technique useful for evaluating minority carrier properties and defect populations.
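To make the mathematical treatment mentioned above concrete, the following sketch fits the commonly assumed exponential decay of the EBIC signal with distance from a junction, $I(x) = I_0 e^{-x/L}$, to estimate the minority-carrier diffusion length $L$. This is a simplified model, and the class name, data and units are invented for illustration; real analyses must also account for surface recombination and the beam's generation volume.

// Illustrative only: estimates the minority-carrier diffusion length L from an
// EBIC line scan, assuming the simple planar decay model I(x) = I0 * exp(-x/L).
// A linear least-squares fit of ln(I) versus x gives slope = -1/L.
public class EbicDiffusionLength {
    static double diffusionLength(double[] x, double[] current) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            double y = Math.log(current[i]);    // linearise the exponential decay
            sx += x[i]; sy += y;
            sxx += x[i] * x[i]; sxy += x[i] * y;
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        return -1.0 / slope;                     // L = -1 / slope
    }

    public static void main(String[] args) {
        // Synthetic line scan: positions in micrometres, currents in nanoamps,
        // generated from I(x) ~ exp(-x/2), so the expected answer is L ~ 2 um.
        double[] x = {0, 1, 2, 3, 4, 5};
        double[] i = {10.0, 6.07, 3.68, 2.23, 1.35, 0.82};
        System.out.printf("Estimated diffusion length: %.2f um%n",
                diffusionLength(x, i));
    }
}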
EBIC can be used to probe subsurface hetero-junctions of nanowires and the properties of minority carriers.
EBIC has also been extended to the study of local defects in insulators. For example, W.S. Lau (Lau Wai Shing) developed "true oxide electron beam induced current" in the 1990s. Thus, besides p-n junctions and Schottky junctions, EBIC can also be applied to MOS diodes. Local defects in the semiconductor and local defects in the insulator can be distinguished. There is also a kind of defect that originates in the silicon substrate and extends into the insulator on top of it (see the references below).
Recently, EBIC has been applied to the high-k dielectrics used in advanced CMOS technology.
SEEBIC
A related STEM EBIC technique, called secondary electron emission EBIC, or SEEBIC, measures the positive current produced by the emission of secondary electrons from a sample. SEEBIC was not demonstrated until 2018, likely because its signal is much smaller than that of the standard EBIC mode (electron–hole pair separation). The smaller interaction volume of secondary-electron generation compared to electron–hole pair production makes SEEBIC accessible at much higher spatial resolution. The SEEBIC signal is sensitive to a number of electronic properties and is, most notably, the only high-resolution electrical conductivity mapping technique for the electron microscope.
Quantitative SEM EBIC
Most EBIC images acquired in the SEM are qualitative, showing the EBIC signal only as image display contrast. Use of an external scan control generator on the SEM and a dedicated data acquisition system allows sub-picoamp measurements and can give quantitative results. Commercial systems are available that do this and that enable functional imaging by biasing and applying gate voltages to semiconductor devices.
References
(Review Article)
(Note: EBIC was performed on an advanced high-k gate stack even though this is not obvious from the title of the paper.)
Electron beam
Scientific techniques
Semiconductor device fabrication
Semiconductor analysis | Electron beam-induced current | [
"Chemistry",
"Materials_science"
] | 852 | [
"Electron",
"Electron beam",
"Semiconductor device fabrication",
"Microtechnology"
] |
6,905,556 | https://en.wikipedia.org/wiki/Infrared%20cirrus | Infrared cirrus or galactic cirrus are galactic filamentary structures seen in space over most of the sky that emit far-infrared light. The name is given because the structures are cloud-like in appearance. These structures were first detected by the Infrared Astronomical Satellite at wavelengths of 60 and 100 micrometres.
See also
Galaxy filament
Cosmic infrared background
References
External links
Molecular Hydrogen in Infrared Cirrus, Kristen Gillmon, J. Michael Shull, 2006 Abstract
PDF Paper
The Physics of Infrared Cirrus, C. Darren Dowell, Roger H. Hildebrand, Alexandre Lazarian, Michael W. Werner, Ellen Zweibel
Interstellar media | Infrared cirrus | [
"Physics",
"Astronomy"
] | 140 | [
"Interstellar media",
"Outer space",
"Plasma physics",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Plasma physics stubs"
] |
6,906,155 | https://en.wikipedia.org/wiki/Road%20speed%20limit%20enforcement%20in%20Australia | Road speed limit enforcement in Australia constitutes the actions taken by the authorities to force road users to comply with the speed limits in force on Australia's roads. Speed limit enforcement equipment such as speed cameras and other technologies such as radar and LIDAR are widely used by the authorities. In some regions, aircraft equipped with VASCAR devices are also used.
Each of the Australian states have their own speed limit enforcement policies and strategies and approved enforcement devices.
Methods
Mobile Gatso speed camera
This mobile camera or speed camera is used in Victoria and Queensland and can be operated in various manners. Without a flash, the only external evidence of a speed camera is a black rectangular box, about 30 cm by 10 cm, mounted on the front of the car, which emits the radar beam. On the older models of the camera, and on rainy days or in bad light, a cable is used to link it to a box with a flash placed just in front of the vehicle. The operator sits in the car and takes the pictures, which are then uploaded to a laptop computer. In both states unmarked cars are used. In Victoria these cameras are operated by Serco contractors, while in Queensland uniformed police officers operate them.
Many modern Gatso cameras now feature fully flashless operation. The advent of infra-red flash technology has given Gatsos the capacity to capture vehicles exceeding the limit in varying conditions without emitting a bright flash, which in many cases can be distracting to the driver, especially if taken head-on. Infra-red light is invisible to the human eye, but when paired with a camera with an infra-red sensor, it can be used as a flash to produce a clear image in low-light conditions.
Mobile Multanova speed camera
Used only in Western Australia, this Doppler RADAR-based camera is usually mounted on a tripod on the side of the road. It is sometimes covered by a black sheet, and there is usually an "anywhere anytime" sign following it, chained onto a pole or tree. It is sometimes incorrectly referred to as a "Multinova". Multanovas are manufactured by a Swiss company of the same name - Western Australia utilises the 6F and 9F models.
During the daytime, the Multanova unit uses a standard "white" flash, but in low light or night time, a red filter is added to the flash so as to not dazzle the driver.
The camera is always accompanied by a white station wagon or by a black or, more commonly, a white, silver or brown Nissan X-Trail, staffed by an un-sworn police officer (not a contractor) who is responsible for assembling and disassembling the unit, supervising it and operating the accompanying laptop in the car for the few hours that it is deployed at a location. The Nissan X-Trail usually has a bull bar, spotlights and a large, thick antenna. The camera usually stays for about four to five hours. There were 25 in use in Perth at the beginning of 2008.
As of late 2011 Multanova use in WA has been discontinued in favour of LIDAR exclusively.
Fixed speed-only camera
These cameras come in many forms: some are free-standing on poles, while others are mounted on bridges or overhead gantries. The cameras may consist of a box for taking photographs plus a smaller box for the flash, or a single box containing all the instruments. Recently introduced infrared cameras do not emit a blinding flash and can therefore be used to take front-on photographs showing the driver's face.
Most states are now starting to replace older analogue film fixed cameras with modern digital variants.
Fixed speed cameras can use Doppler RADAR or Piezo strips embedded in the road to measure a vehicle's speed as it passes the camera.
However, ANPR technology is also used to time vehicles between two or more fixed cameras that are a known distance apart (typically at least several kilometres). The average speed is then calculated using the formula $v = d/t$, where $d$ is the distance between the cameras and $t$ is the elapsed time. The longer distance over which the speed is measured prevents drivers from slowing down momentarily for a camera before speeding up again. The SAFE-T-CAM system uses this technology but was designed to target only heavy vehicles. Newer ANPR cameras in Victoria are able to target any vehicle.
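As a minimal sketch of the point-to-point timing just described (in Java, to match the code used elsewhere in this document): two cameras a known distance apart timestamp the same number plate, and the average speed follows from $v = d/t$. The class name, timestamps, distance and limit are all hypothetical, not taken from any actual enforcement system.

import java.time.Duration;
import java.time.Instant;

public class AverageSpeedCheck {
    public static void main(String[] args) {
        double distanceKm = 12.0;  // known separation between the two cameras
        Instant first  = Instant.parse("2024-01-01T10:00:00Z");  // first sighting
        Instant second = Instant.parse("2024-01-01T10:06:00Z");  // second sighting

        double hours = Duration.between(first, second).toSeconds() / 3600.0;
        double averageKmh = distanceKm / hours;  // v = d / t

        double limitKmh = 100.0;
        System.out.printf("Average speed: %.1f km/h%n", averageKmh);
        if (averageKmh > limitKmh) {
            System.out.println("Exceeds the posted limit of " + limitKmh + " km/h");
        }
    }
}

Here 12 km covered in 6 minutes gives an average of 120 km/h, so slowing momentarily for either camera cannot hide sustained speeding between them.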
Fixed dual speed and red light camera
These cameras are used in the Northern Territory, South Australia, the Australian Capital Territory, Victoria, New South Wales and Western Australia. They detect speeding at the intersection as well as running a red light. They look the same as red light cameras, except they are digital and look slightly more modern. Some of the Victorian cameras are Traffipax brand.
In New South Wales and South Australia dual redlight/speed cameras are identified by a "Safety Camera" sign.
Queensland is in the process of investigating conversion to dual redlight/speed cameras as the current system is reaching end-of-life.
Other speed checking devices
Police also use other technology that does not rely on photographs of an offence being taken, typically where officers enforce the speed limit in person.
'Silver Eagle'
New South Wales police used the Silver Eagle vehicle-mounted unit. This radar device is typically mounted on the right hand side of the vehicle just behind the driver, and is operated from inside the vehicle. The units are approved for use only in rural areas where traffic is sparse, and may be used from a stationary or moving vehicle.
'Stalker'
Police vehicles in New South Wales have recently been fitted with a dual-radar known as the Stalker DSR 2X, which is able to monitor vehicles moving in two different directions at the same time.
Other
NSW police also use LIDAR devices as well as vehicle speedometers and speed estimates to prosecute speeding motorists.
The TIRTL device is deployed as a speed measurement sensor in Victoria and New South Wales. The device consists of a pair of sensors embedded in the curb that use a series of infrared beams to monitor vehicles at wheel height. Although the sensors themselves are very difficult to see, they are accompanied by a standard Traffipax camera to capture images of the offence. The state of New South Wales approved the device in November 2008 for use in the state as dual red light / speed cameras (named "safety cameras" under the Roads & Traffic Authority's terminology).
Motorcycle and bicycle-mounted police in New South Wales are equipped with the binocular-styled "Pro-Lite+" LIDAR device.
History
Victoria
Victoria started with a small trial of signed cameras in 1985, with minimal effect. The major introduction came at the end of 1989 with hidden speed cameras, starting at around 500 hours of operation per month and increasing to 4,000 hours/month by 1992. During the testing of the cameras, the percentage of drivers speeding (over the speed camera thresholds) was 24%; by the end of 1992 this had dropped to 4%. The revenue collected by each camera dropped from $2,000/hour to $1,000/hour over 18 months. The road toll dropped from 776 in 1989 (no cameras) to 396 in 1992, a 49% drop.
New South Wales
Mobile speed cameras were first used in New South Wales in 1991. In 1999 the authorities began to install fixed cameras, and signs warning of their presence, at crash black spots.
Western Australia
The government of Western Australia started using speed cameras in 1988.
See also
Point system
References
External links
State-published speed camera locations
Transport for NSW page of speed camera locations - NSW
Red Light and Speed Camera Locations Victoria - Victoria
Transport Department - Red Light / Speed Camera Locations - South Australia
SA Police Speed Camera Locations - South Australia
Speedometers and Speeding Fines
Driving in Australia
Road transport in Australia
Road safety in Australia
Traffic enforcement cameras | Road speed limit enforcement in Australia | [
"Technology",
"Engineering"
] | 1,579 | [
"Measuring instruments",
"Traffic enforcement cameras"
] |
6,906,718 | https://en.wikipedia.org/wiki/Subsun | A subsun (also spelled sub-sun) is an optical phenomenon that appears as a glowing spot visible within clouds or mist when observed from above. The subsun appears directly below the actual Sun, and is caused by sunlight reflecting off numerous tiny ice crystals suspended in the atmosphere. As such, the effect belongs to the family of halos.
Formation
The subsun appears when a region of hexagonal ice crystals acts as a large mirror, creating a virtual image of the Sun below the horizon. As they fall through the air, the ice forms plate crystals which orient horizontally, i.e., with their hexagonal surfaces parallel to the Earth's surface. When they are disturbed by turbulence, the plates "wobble", causing their surfaces to deviate a few degrees from the ideal horizontal orientation and causing the reflection (i.e., the subsun) to become elongated vertically.
Deformations
When the subsun is stretched far enough vertically, it can become a vertical column known as a lower sun pillar. A sun pillar is a form of light pillar.
Examples (Images)
See also
Crown flash
Sun dog
Halo
Sun Pillar/Light Pillar
References
Atmospheric optical phenomena | Subsun | [
"Physics"
] | 240 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
6,906,913 | https://en.wikipedia.org/wiki/Townsend%20discharge | In electromagnetism, the Townsend discharge or Townsend avalanche is an ionisation process for gases where free electrons are accelerated by an electric field, collide with gas molecules, and consequently free additional electrons. Those electrons are in turn accelerated and free additional electrons. The result is an avalanche multiplication that permits significantly increased electrical conduction through the gas. The discharge requires a source of free electrons and a significant electric field; without both, the phenomenon does not occur.
The Townsend discharge is named after John Sealy Townsend, who discovered the fundamental ionisation mechanism by his work circa 1897 at the Cavendish Laboratory, Cambridge.
General description
The avalanche occurs in a gaseous medium that can be ionised (such as air). The electric field and the mean free path of the electron must allow free electrons to acquire an energy level (velocity) that can cause impact ionisation. If the electric field is too small, then the electrons do not acquire enough energy. If the mean free path is too short, then the electron gives up its acquired energy in a series of non-ionising collisions. If the mean free path is too long, then the electron reaches the anode before colliding with another molecule.
The avalanche mechanism is shown in the accompanying diagram. The electric field is applied across a gaseous medium; initial ions are created with ionising radiation (for example, cosmic rays). An original ionisation event produces an ion pair; the positive ion accelerates towards the cathode while the free electron accelerates towards the anode. If the electric field is strong enough, then the free electron can gain sufficient velocity (energy) to liberate another electron when it next collides with a molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause further impact ionisations, and so on. This process is effectively a chain reaction that generates free electrons. Initially, the number of collisions grows exponentially, but eventually, this relationship will break down—the limit to the multiplication in an electron avalanche is known as the Raether limit.
The Townsend avalanche can have a large range of current densities. In common gas-filled tubes, such as those used as gaseous ionisation detectors, magnitudes of currents flowing during this process can range from about $10^{-18}$ to $10^{-5}$ amperes.
Quantitative description
Townsend's early experimental apparatus consisted of planar parallel plates forming two sides of a chamber filled with a gas. A direct-current high-voltage source was connected between the plates, the lower-voltage plate being the cathode and the upper-voltage the anode. He forced the cathode to emit electrons using the photoelectric effect by irradiating it with x-rays, and he found that the current flowing through the chamber depended on the electric field between the plates. However, this current showed an exponential increase as the plate gaps became small, leading to the conclusion that the gas ions were multiplying as they moved between the plates due to the high electric field.
Townsend observed currents varying exponentially over ten or more orders of magnitude with a constant applied voltage when the distance between the plates was varied. He also discovered that gas pressure influenced conduction: he was able to generate ions in gases at low pressure with a much lower voltage than that required to generate a spark. This observation overturned conventional thinking about the amount of current that an irradiated gas could conduct.
The experimental data obtained from his experiments are described by the formula

$I = I_0 e^{\alpha d},$

where
$I$ is the current flowing in the device,
$I_0$ is the photoelectric current generated at the cathode surface,
$e$ is Euler's number,
$\alpha$ is the first Townsend ionisation coefficient, expressing the number of ion pairs generated per unit length (e.g. meter) by a negative ion (anion) moving from cathode to anode, and
$d$ is the distance between the plates of the device.
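As a quick numeric illustration of this growth law (a sketch, with arbitrary example values for $I_0$, $\alpha$ and $d$ rather than measured data), the following prints the current for a few plate gaps; the values are chosen so the results stay within the $10^{-18}$ to $10^{-5}$ ampere range quoted earlier.

public class TownsendGrowth {
    public static void main(String[] args) {
        double i0 = 1e-12;      // photoelectric seed current at the cathode, in A
        double alpha = 1500.0;  // first Townsend coefficient, ion pairs per metre
        for (int k = 1; k <= 5; k++) {
            double d = 0.002 * k;                       // plate gap in metres
            double current = i0 * Math.exp(alpha * d);  // I = I0 * e^(alpha*d)
            System.out.printf("d = %5.3f m  ->  I = %.3e A%n", d, current);
        }
    }
}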
The almost-constant voltage between the plates is equal to the breakdown voltage needed to create a self-sustaining avalanche: it decreases when the current reaches the glow discharge regime. Subsequent experiments revealed that the current rises faster than predicted by the above formula as the distance increases; two different effects were considered in order to better model the discharge: positive ions and cathode emission.
Gas ionisation caused by motion of positive ions
Townsend put forward the hypothesis that positive ions also produce ion pairs, introducing a coefficient $\beta$ expressing the number of ion pairs generated per unit length by a positive ion (cation) moving from anode to cathode. The following formula was found:

$I = I_0 \frac{(\alpha - \beta)\, e^{(\alpha - \beta) d}}{\alpha - \beta\, e^{(\alpha - \beta) d}},$

since $\beta \ll \alpha$, in very good agreement with experiments.
The first Townsend coefficient ($\alpha$), also known as the first Townsend avalanche coefficient, is a term used where secondary ionisation occurs because the primary ionisation electrons gain sufficient energy from the accelerating electric field, or from the original ionising particle. The coefficient gives the number of secondary electrons produced by a primary electron per unit path length.
Cathode emission caused by impact of ions
Townsend, Holst and Oosterhuis also put forward an alternative hypothesis, considering the augmented emission of electrons by the cathode caused by the impact of positive ions. This introduced Townsend's second ionisation coefficient $\gamma$, the average number of electrons released from a surface by an incident positive ion, according to the formula

$I = I_0 \frac{e^{\alpha d}}{1 - \gamma \left( e^{\alpha d} - 1 \right)}.$
These two formulas may be thought of as describing limiting cases of the effective behavior of the process: either can be used to describe the same experimental results. Other formulas describing various intermediate behaviors are found in the literature, particularly in reference 1 and citations therein.
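One consequence worth spelling out: in the cathode-emission formula the current diverges when the denominator vanishes, i.e. when $\gamma\,(e^{\alpha d} - 1) = 1$, which is the usual criterion for a self-sustaining discharge. The sketch below solves this condition for the critical gap $d = \ln(1 + 1/\gamma)/\alpha$; the coefficient values are illustrative assumptions only.

public class TownsendBreakdown {
    public static void main(String[] args) {
        double alpha = 1500.0;  // first Townsend coefficient, per metre
        double gamma = 0.01;    // second Townsend coefficient, electrons per incident ion
        // Denominator 1 - gamma*(e^(alpha*d) - 1) vanishes at alpha*d = ln(1 + 1/gamma)
        double dCritical = Math.log(1.0 + 1.0 / gamma) / alpha;
        System.out.printf("Self-sustaining discharge for gaps of about %.4f m or more%n",
                dCritical);
    }
}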
Conditions
A Townsend discharge can be sustained only over a limited range of gas pressure and electric field intensity. The accompanying plot shows the variation of voltage drop and the different operating regions for a gas-filled tube with a constant pressure, but a varying current between its electrodes. The Townsend avalanche phenomenon occurs on the sloping plateau B-D. Beyond D, the ionisation is sustained.
At higher pressures, discharges occur more rapidly than the calculated time for ions to traverse the gap between electrodes, and the streamer theory of spark discharge of Raether, Meek, and Loeb is applicable. In highly non-uniform electric fields, the corona discharge process is applicable. See Electron avalanche for further description of these mechanisms.
Discharges in vacuum require vaporization and ionisation of electrode atoms. An arc can be initiated without a preliminary Townsend discharge, for example when electrodes touch and are then separated.
Penning discharge
In the presence of a magnetic field, the likelihood of an avalanche discharge occurring under high vacuum conditions can be increased through a phenomenon known as Penning discharge. This occurs when electrons can become trapped within a potential minimum, thereby extending the mean free path of the electrons [Fränkle 2014].
Applications
Gas-discharge tubes
The onset of Townsend discharge sets the upper limit to the blocking voltage a glow discharge gas-filled tube can withstand. This limit is the Townsend discharge breakdown voltage, also called the ignition voltage of the tube.
The occurrence of Townsend discharge, leading to glow discharge breakdown, shapes the current–voltage characteristic of a gas-discharge tube such as a neon lamp in such a way that it has a negative differential resistance region of the S-type. The negative resistance can be used to generate electrical oscillations and waveforms, as in the relaxation oscillator whose schematic is shown in the picture on the right. The sawtooth-shaped oscillation generated has frequency

$f \simeq \frac{1}{RC \ln \frac{V_s - V_{\mathrm{GLOW}}}{V_s - V_{\mathrm{TWN}}}},$

where
$V_{\mathrm{GLOW}}$ is the glow discharge breakdown voltage,
$V_{\mathrm{TWN}}$ is the Townsend discharge breakdown voltage, and
$C$, $R$ and $V_s$ are respectively the capacitance, the resistance and the supply voltage of the circuit.
Since temperature and time stability of the characteristics of gas diodes and neon lamps is low, and also the statistical dispersion of breakdown voltages is high, the above formula can only give a qualitative indication of what the real frequency of oscillation is.
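The sketch below simply evaluates the frequency formula above for one set of example component values; as just noted, the result is only a qualitative estimate, and the voltages and RC values are invented for illustration.

public class NeonRelaxationOscillator {
    public static void main(String[] args) {
        double vs = 120.0;     // supply voltage, V
        double vTwn = 90.0;    // Townsend (ignition) breakdown voltage, V
        double vGlow = 60.0;   // glow discharge (extinction) voltage, V
        double r = 1.0e6;      // series resistance, ohms
        double c = 100e-9;     // capacitance, farads

        // f ~ 1 / (R*C * ln((Vs - V_GLOW) / (Vs - V_TWN)))
        double f = 1.0 / (r * c * Math.log((vs - vGlow) / (vs - vTwn)));
        System.out.printf("Approximate sawtooth frequency: %.1f Hz%n", f);
    }
}

With these values the logarithm is ln 2 and RC is 0.1 s, so the estimate comes out near 14 Hz.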
Gas phototubes
Avalanche multiplication during Townsend discharge is naturally used in gas phototubes to amplify the photoelectric charge generated by incident radiation (visible light or not) on the cathode: the achievable current is typically 10–20 times greater than that generated by vacuum phototubes.
Ionising radiation detectors
Townsend avalanche discharges are fundamental to the operation of gaseous ionisation detectors such as the Geiger–Müller tube and the proportional counter in either detecting ionising radiation or measuring its energy. The incident radiation will ionise atoms or molecules in the gaseous medium to produce ion pairs, but different use is made by each detector type of the resultant avalanche effects.
In the case of a GM tube, the high electric field strength is sufficient to cause complete ionisation of the fill gas surrounding the anode from the initial creation of just one ion pair. The GM tube output carries information that the event has occurred, but no information about the energy of the incident radiation.
In the case of proportional counters, multiple creation of ion pairs occurs in the "ion drift" region near the cathode. The electric field and chamber geometries are selected so that an "avalanche region" is created in the immediate proximity of the anode. A negative ion drifting towards the anode enters this region and creates a localised avalanche that is independent of those from other ion pairs, but which can still provide a multiplication effect. In this way, spectroscopic information on the energy of the incident radiation is available by the magnitude of the output pulse from each initiating event.
The accompanying plot shows the variation of ionisation current for a co-axial cylinder system. In the ion chamber region, there are no avalanches and the applied voltage only serves to move the ions towards the electrodes to prevent recombination. In the proportional region, localised avalanches occur in the gas space immediately around the anode, numerically proportional to the number of original ionising events. Increasing the voltage further increases the number of avalanches until the Geiger region is reached, where the full volume of the fill gas around the anode is ionised and all proportional energy information is lost. Beyond the Geiger region, the gas is in continuous discharge owing to the high electric field strength.
See also
Avalanche breakdown
Electric arc
Electric discharge in gases
Field electron emission
Paschen's law
Photoelectric effect
Townsend (unit)
Notes
References
Chapter 11 "Electrical conduction in gases" and chapter 12 "Glow- and Arc-discharge tubes and circuits".
External links
Simulation showing electron paths during avalanche
Electrical discharge in gases
Ionization
Ions
Molecular physics
Electron | Townsend discharge | [
"Physics",
"Chemistry"
] | 2,130 | [
"Ionization",
"Electron",
"Physical phenomena",
"Matter",
"Electrical discharge in gases",
"Molecular physics",
"Plasma phenomena",
" molecular",
"nan",
"Atomic",
"Ions",
" and optical physics"
] |
6,907,330 | https://en.wikipedia.org/wiki/Conformational%20entropy | In chemical thermodynamics, conformational entropy is the entropy associated with the number of conformations of a molecule. The concept is most commonly applied to biological macromolecules such as proteins and RNA, but it can also be used for polysaccharides and other molecules. To calculate the conformational entropy, the possible conformations of the molecule may first be discretized into a finite number of states, usually characterized by unique combinations of certain structural parameters, each of which has been assigned an energy. In proteins, backbone dihedral angles and side chain rotamers are commonly used as parameters, and in RNA the base pairing pattern may be used. These characteristics are used to define the degrees of freedom (in the statistical mechanics sense of a possible "microstate"). The conformational entropy associated with a particular structure or state, such as an alpha-helix, a folded or an unfolded protein structure, is then dependent on the probability of the occupancy of that structure.
The entropy of heterogeneous random coil or denatured proteins is significantly higher than that of the tertiary structure of its folded native state. In particular, the conformational entropy of the amino acid side chains in a protein is thought to be a major contributor to the energetic stabilization of the denatured state and thus a barrier to protein folding. However, a recent study has shown that side-chain conformational entropy can stabilize native structures among alternative compact structures. The conformational entropy of RNA and proteins can be estimated; for example, empirical methods to estimate the loss of conformational entropy in a particular side chain on incorporation into a folded protein can roughly predict the effects of particular point mutations in a protein. Side-chain conformational entropies can be defined as Boltzmann sampling over all possible rotameric states:
$S = -R \sum_i p_i \ln p_i,$

where $R$ is the gas constant and $p_i$ is the probability of a residue being in rotamer $i$.
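A minimal sketch evaluating this Boltzmann sampling formula for a single residue (in Java, matching the code used elsewhere in this document); the rotamer probabilities below are invented for illustration, where real values would come from a rotamer library or from conformational sampling.

public class RotamerEntropy {
    static final double R = 8.314;  // gas constant, J/(mol*K)

    // S = -R * sum_i p_i * ln(p_i), summed over rotameric states
    static double conformationalEntropy(double[] p) {
        double s = 0.0;
        for (double pi : p) {
            if (pi > 0) s -= pi * Math.log(pi);  // treat 0 * ln 0 as 0
        }
        return R * s;
    }

    public static void main(String[] args) {
        double[] rotamers = {0.5, 0.3, 0.2};  // probabilities must sum to 1
        System.out.printf("S = %.2f J/(mol K)%n", conformationalEntropy(rotamers));
    }
}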
The limited conformational range of proline residues lowers the conformational entropy of the denatured state and thus stabilizes the native states. A correlation has been observed between the thermostability of a protein and its proline residue content.
See also
Configuration entropy
Folding funnel
Loop entropy
Molten globule
Protein folding
References
Protein structure
Thermodynamic entropy | Conformational entropy | [
"Physics",
"Chemistry"
] | 472 | [
"Thermodynamics stubs",
"Physical quantities",
"Thermodynamic entropy",
"Entropy",
"Thermodynamics",
"Structural biology",
"Statistical mechanics",
"Protein structure",
"Physical chemistry stubs"
] |
6,907,406 | https://en.wikipedia.org/wiki/Amanin | Amanin is a cyclic peptide. It is one of the amatoxins, all of which are found in several members of the mushroom genus Amanita.
Toxicology
Like other amatoxins, amanin is an inhibitor of RNA polymerase II. Upon ingestion, it binds to the RNA polymerase II enzyme which completely prevents mRNA synthesis, effectively causing cytolysis of hepatocytes (liver cells) and kidney cells.
See also
Mushroom poisoning
References
Peptides
Amatoxins
Hepatotoxins
Tryptamines | Amanin | [
"Chemistry"
] | 114 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |