Dataset columns — id: int64 (39 to 79M); url: string (32 to 168 chars); text: string (7 to 145k chars); source: string (2 to 105 chars); categories: list (1 to 6 items); token_count: int64 (3 to 32.2k); subcategories: list (0 to 27 items).
5,539,197
https://en.wikipedia.org/wiki/Excluded%20point%20topology
In mathematics, the excluded point topology is a topology where exclusion of a particular point defines openness. Formally, let X be any non-empty set and p ∈ X. The collection T = {S ⊆ X : p ∉ S} ∪ {X} of subsets of X is then the excluded point topology on X. There are a variety of cases which are individually named: If X has two points, it is called the Sierpiński space. This case is somewhat special and is handled separately. If X is finite (with at least 3 points), the topology on X is called the finite excluded point topology. If X is countably infinite, the topology on X is called the countable excluded point topology. If X is uncountable, the topology on X is called the uncountable excluded point topology. A generalization is the open extension topology; if X \ {p} has the discrete topology, then the open extension topology on X is the excluded point topology. This topology is used to provide interesting examples and counterexamples. Properties Let X be a space with the excluded point topology with special point p. The space is compact, as the only neighborhood of p is the whole space. The topology is an Alexandrov topology. The smallest neighborhood of p is the whole space X; the smallest neighborhood of a point x ≠ p is the singleton {x}. These smallest neighborhoods are compact. Their closures are respectively X and {x, p}, which are also compact. So the space is locally relatively compact (each point admits a local base of relatively compact neighborhoods) and locally compact in the sense that each point has a local base of compact neighborhoods. But the points x ≠ p do not admit a local base of closed compact neighborhoods. The space is ultraconnected, as any nonempty closed set contains the point p. Therefore the space is also connected and path-connected. See also Finite topological space Fort space List of topologies Particular point topology References Topological spaces
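As a quick illustration of the definition above, the following sketch (an addition, not part of the original article) builds the finite excluded point topology on a four-element set and checks the topology axioms directly; the set X, the excluded point p and the helper names are arbitrary choices made here.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, returned as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def excluded_point_topology(X, p):
    """Open sets: every subset of X not containing p, together with X itself."""
    X = frozenset(X)
    return {S for S in powerset(X) if p not in S} | {X}

def is_topology(X, opens):
    """Check the axioms; on a finite set, pairwise unions/intersections suffice."""
    X = frozenset(X)
    if frozenset() not in opens or X not in opens:
        return False
    for A in opens:
        for B in opens:
            if A & B not in opens or A | B not in opens:
                return False
    return True

X, p = {1, 2, 3, 4}, 4
T = excluded_point_topology(X, p)
print(is_topology(X, T))                                 # True
print(all(p not in S for S in T if S != frozenset(X)))   # True: p lies only in the open set X
```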
Excluded point topology
[ "Mathematics" ]
359
[ "Topological spaces", "Mathematical structures", "Topology", "Space (mathematics)" ]
5,539,282
https://en.wikipedia.org/wiki/Borel%20hierarchy
In mathematical logic, the Borel hierarchy is a stratification of the Borel algebra generated by the open subsets of a Polish space; elements of this algebra are called Borel sets. Each Borel set is assigned a unique countable ordinal number called the rank of the Borel set. The Borel hierarchy is of particular interest in descriptive set theory. One common use of the Borel hierarchy is to prove facts about the Borel sets using transfinite induction on rank. Properties of sets of small finite ranks are important in measure theory and analysis. Borel sets The Borel algebra in an arbitrary topological space is the smallest collection of subsets of the space that contains the open sets and is closed under countable unions and complementation. It can be shown that the Borel algebra is closed under countable intersections as well. A short proof that the Borel algebra is well-defined proceeds by showing that the entire powerset of the space is closed under complements and countable unions, and thus the Borel algebra is the intersection of all families of subsets of the space that have these closure properties. This proof does not give a simple procedure for determining whether a set is Borel. A motivation for the Borel hierarchy is to provide a more explicit characterization of the Borel sets. Boldface Borel hierarchy The Borel hierarchy or boldface Borel hierarchy on a space X consists of classes , , and for every countable ordinal greater than zero. Each of these classes consists of subsets of X. The classes are defined inductively from the following rules: A set is in if and only if it is open. A set is in if and only if its complement is in . A set is in for if and only if there is a sequence of sets such that each is in for some and . A set is in if and only if it is both in and in . The motivation for the hierarchy is to follow the way in which a Borel set could be constructed from open sets using complementation and countable unions. A Borel set is said to have finite rank if it is in for some finite ordinal ; otherwise it has infinite rank. If then the hierarchy can be shown to have the following properties: For every α, . Thus, once a set is in or , that set will be in all classes in the hierarchy corresponding to ordinals greater than α . Moreover, a set is in this union if and only if it is Borel. If is an uncountable Polish space, it can be shown that is not contained in for any , and thus the hierarchy does not collapse. Borel sets of small rank The classes of small rank are known by alternate names in classical descriptive set theory. The sets are the open sets. The sets are the closed sets. The sets are countable unions of closed sets, and are called Fσ sets. The sets are the dual class, and can be written as a countable intersection of open sets. These sets are called Gδ sets. Lightface hierarchy The lightface Borel hierarchy (also called the effective Borel hierarchypp.163--164) is an effective version of the boldface Borel hierarchy. It is important in effective descriptive set theory and recursion theory. The lightface Borel hierarchy extends the arithmetical hierarchy of subsets of an effective Polish space. It is closely related to the hyperarithmetical hierarchy. The lightface Borel hierarchy can be defined on any effective Polish space. It consists of classes , and for each nonzero countable ordinal less than the Church–Kleene ordinal . Each class consists of subsets of the space. 
The classes, and codes for elements of the classes, are inductively defined as follows: A set is if and only if it is effectively open, that is, an open set which is the union of a computably enumerable sequence of basic open sets. A code for such a set is a pair (0,e), where e is the index of a program enumerating the sequence of basic open sets. A set is if and only if its complement is . A code for one of these sets is a pair (1,c) where c is a code for the complementary set. A set is if there is a computably enumerable sequence of codes for a sequence of sets such that each is for some and . A code for a set is a pair (2,e), where e is an index of a program enumerating the codes of the sequence . A code for a lightface Borel set gives complete information about how to recover the set from sets of smaller rank. This contrasts with the boldface hierarchy, where no such effectivity is required. Each lightface Borel set has infinitely many distinct codes. Other coding systems are possible; the crucial idea is that a code must effectively distinguish between effectively open sets, complements of sets represented by previous codes, and computable enumerations of sequences of codes. It can be shown that for each there are sets in , and thus the hierarchy does not collapse. No new sets would be added at stage , however. A famous theorem due to Spector and Kleene states that a set is in the lightface Borel hierarchy if and only if it is at level of the analytical hierarchy. These sets are also called hyperarithmetic. Additionally, for all natural numbers , the classes and of the effective Borel hierarchy are the same as the classes and of the arithmetical hierarchy of the same name.p.168 The code for a lightface Borel set A can be used to inductively define a tree whose nodes are labeled by codes. The root of the tree is labeled by the code for A. If a node is labeled by a code of the form (1,c) then it has a child node whose code is c. If a node is labeled by a code of the form (2,e) then it has one child for each code enumerated by the program with index e. If a node is labeled with a code of the form (0,e) then it has no children. This tree describes how A is built from sets of smaller rank. The ordinals used in the construction of A ensure that this tree has no infinite path, because any infinite path through the tree would have to include infinitely many codes starting with 2, and thus would give an infinite decreasing sequence of ordinals. Conversely, if an arbitrary subtree of has its nodes labeled by codes in a consistent way, and the tree has no infinite paths, then the code at the root of the tree is a code for a lightface Borel set. The rank of this set is bounded by the order type of the tree in the Kleene–Brouwer order. Because the tree is arithmetically definable, this rank must be less than . This is the origin of the Church–Kleene ordinal in the definition of the lightface hierarchy. Relation to other hierarchies See also Projective hierarchy Wadge hierarchy Veblen hierarchy References Sources Kechris, Alexander. Classical Descriptive Set Theory. Graduate Texts in Mathematics v. 156, Springer-Verlag, 1995. . Jech, Thomas. Set Theory, 3rd edition. Springer, 2003. . Descriptive set theory Mathematical logic hierarchies
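The coding scheme described above lends itself to a small illustration. The sketch below is an addition, not from the article: it represents lightface Borel codes as nested Python tuples of the form (0, ...), (1, c) and (2, [...]). As a simplification, finite lists stand in for the computably enumerated sequences that real codes reference by program index, so only codes of finite rank can be expressed.

```python
# Hedged sketch: lightface Borel codes as nested tuples. In the actual definition the
# second component of (0, e) and (2, e) is an index e of a program enumerating basic
# open sets or further codes; finite lists are used here instead, for illustration.

def rank(code):
    """Rank of a (finite-rank) Borel code: effectively open sets have rank 1, a complement
    keeps the rank of its argument, and a union is one more than the largest rank among
    the codes it combines (a supremum-plus-one in the transfinite case)."""
    tag = code[0]
    if tag == 0:            # (0, basic_open_sets): an effectively open set
        return 1
    if tag == 1:            # (1, c): complement of the set coded by c
        return rank(code[1])
    if tag == 2:            # (2, [c1, c2, ...]): union of the coded sets
        return 1 + max(rank(c) for c in code[1])
    raise ValueError("not a Borel code")

open_code   = (0, ["(0,1)", "(2,3)"])                     # an effectively open set
closed_code = (1, open_code)                              # its complement
f_sigma     = (2, [closed_code, (1, (0, ["(5,6)"]))])     # a union of closed sets (an F-sigma set)
print(rank(open_code), rank(closed_code), rank(f_sigma))  # 1 1 2
```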
Borel hierarchy
[ "Mathematics" ]
1,518
[ "Mathematical logic", "Mathematical logic hierarchies" ]
5,539,917
https://en.wikipedia.org/wiki/Membrane%20glycoproteins
Membrane glycoproteins are membrane proteins which help in cell recognition; examples include fibronectin, laminin and osteonectin. See also Glycocalyx, a glycoprotein-rich coating which surrounds the membranes of bacterial, epithelial and other cells External links Glycoproteins
Membrane glycoproteins
[ "Chemistry" ]
69
[ "Glycoproteins", "Glycobiology" ]
5,540,117
https://en.wikipedia.org/wiki/Azinphos-methyl
Azinphos-methyl (Guthion) (also spelled azinophos-methyl) is a broad spectrum organophosphate insecticide manufactured by Bayer CropScience, Gowan Co., and Makhteshim Agan. Like other pesticides in this class, it owes its insecticidal properties (and human toxicity) to the fact that it is an acetylcholinesterase inhibitor (the same mechanism is responsible for the toxic effects of the V-series nerve agent chemical weapons). It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. History and uses Azinphos-methyl is a neurotoxin derived from nerve agents developed during World War II. It was first registered in the US in 1959 as an insecticide and is also used as active ingredient in organophosphate (OP) pesticides. It is not registered for consumer or residential use. It has been linked to health problems of farmers who apply it, and the U.S. Environmental Protection Agency (EPA) considered a denial of reregistration, citing, “concern to farm workers, pesticide applicators, and aquatic ecosystems. The use of AZM has been fully banned in the USA since 30 September 2013, ending a phase-out period of twelve years. Azinphos-methyl has been banned in the European Union since 2006 and in Turkey since 2013. The New Zealand Environmental Risk Management Authority made a decision to phase out azinphos-methyl over a five-year period starting from 2009. In 2014, it was still used in Australia and partly in New Zealand. Available forms AzM is often used as active ingredient in organophosphate pesticides like Guthion, Gusathion (GUS), Gusathion-M, Crysthyron, Cotnion, Cotnion-methyl, Metriltrizotion, Carfene, Bay 9027, Bay 17147, and R-1852. This is why Guthion is often used as a nickname for AzM. Studies have shown that pure AzM is less toxic than GUS. This increased toxicity can be explained by the interactions between the different compounds in the mixture. Synthesis The synthesis (in this case, of carbon-14-labelled material) can be seen in figure 1. In the first step, o-nitroaniline (compound 1) is purified through dissolution in hot water-ethanol mixture in relation 2:1. [Activated carbon] is added and the result is filtrated for clarifying. The filtrate is chilled while kept in movement to generate crystals, usually at 4 °C, but if needed it can also be cooled to -10 °C. The crystals are then collected, washed and dried. If it is pure enough it is used for the following steps, which take place at 0 till 5 °C. To produce o-Nitrobenzonitrile-14C (compound 2), the first component o-nitroaniline and (concentrated reagent grade) hydrochloric acid are put together with ice and water. Sodium nitrite, dissolved in water, is added to this thin slurry. After the formation of a pale-yellow solution, which indicates the completion of the diazotization reaction, the pH should be adjusted to 6. After this, the solution is introduced to a mixture of cuprous cyanide and toluene. At room temperature the toluene layer is removed. The aqueous layer is washed and dried and the purified product is isolated by crystallization. The third product is Anthranilamide-14C (compound 3). It is formed out of o-Nitrobenzonitrile-14C, which is first solved in ethanol and hydrazine hydrate. 
The solvent is heated subsequently, treated in a well-ventilated hood with small periodic charges, smaller than 10 mg, of Raney nickel. Under nitrogen atmosphere the ethanolic solution is clarified and dried. The next step is to form 1,2,3-Benzotriazin-4(3H)-one-14C (compound 4). In water dissolved sodium nitrite is added to anthranilamide and hydrochloric acid in ice water. Because this is a diazotization reaction, the product is pale-yellow again. After this the pH is adjusted to 8,5. This causes the ring closure to form 1,2,3-Benzotriazin-4(3H)-one-14C. This results in a sodium salt slurry that can be treated with hydrochloric acid, what lowers the pH down to 2 till 4. The 1,2,3-Benzotriazin-4(3H)-one-14C is collected, washed and dried. In the following step 1,2,3-Benzotriazin-4-(3-chloromethyl)-one-14C has to be formed. Therefore, 1,2,3-Benzotriazin-4(3H)-one-14C and paraformaldehyde are added to ethylene dichloride and heated to 40 °C. Then thionyl chloride is added and the whole solvent is further heated to 65 °C. After four hours of heating the solution is cooled down to room temperature. Water is added and the solution is neutralized. The ethylene dichloride layer is removed and put together with the result of the washed aqueous layer. The solvent was filtered and dried. The last step is the actual synthesis of Azinphos methyl. Ethylene dichloride is added to the compound resulting from the fifth step, 1,2,3-Benzotriazin-4-(3-chloromethyl)-one-14C. This mixture is heated to 50 °C and sodium bicarbonate and O,O-dimethyl phosphorodithioate sodium salt in water are added. The ethylene dichloride layer is removed, reextracted with ethylene dichloride and purified by filtration. The pure filtrate is dried. This product is once again purified by recrystallization from methanol. What is left is pure azinphos-methyl in form of white crystals. Absorption Azinphos-methyl can enter the body via inhalation, ingestion and dermal contact. Ingestion of azinphos-methyl is responsible for the low-dose exposure to a large part of the population, due to their presence as residues in food and drinking water. After ingestion it can be absorbed from the digestive tract. By skin contact, AzM can also enter the body through dermal cells. Absorption through the skin is responsible for the occupational exposure to relatively high doses, mainly in agriculture workers. Mechanism of toxicity Once azinphos-methyl is absorbed it can cause neurotoxic effects, like other organophosphate insecticides. At high concentrations AzM itself can be toxic because it can function as an acetylcholinesterase (AChE) inhibitor. But its toxicity is mainly due to the bioactivation by a cytochrome P450 (CYP450)-mediated desulfuration to its phosphate triester or oxon(gutoxon) (see figure 2). Gutoxon can react with a serine hydroxyl group at the active site of the AChE. The active site is then blocked and AChE is inactivated. Under normal circumstances acetylcholinesterase rapidly and efficiently degrades the neurotransmitter acetylcholine (ACh) and thereby terminates the biological activity of acetylcholine. Inhibition of AChE results in an immediate accumulation of free unbound ACh at the ending of all cholinergic nerves, which leads to overstimulation of the nervous system. Efficacy and side effects Cholinergic nerves play an important role in the normal function of the central nervous, endocrine, neuromuscular, immunological, and respiratory system. 
As all cholinergic fibers contain high concentrations of ACh and AChE at their terminals, inhibition of AChE can impair their function. So exposure to azinphosmethyl, whereas it inhibits AChEs, may disturb a lot of important systems and may have various effects. In the autonomic nervous system, accumulation of acetylcholine leads to the overstimulation of muscarinic receptors of the parasympathetic nervous system. This can affect exocrine glands (increased salivation, perspiration, lacrimation), the respiratory system (excessive bronchial secretions, tightness of the chest, and wheezing), the gastrointestinal tract (nausea, vomiting, diarrhea), the eyes (miosis, blurred vision) and the cardiovascular system (decrease in blood pressure, and bradycardia). Overstimulation of the nicotinic receptors in the para- or sympathetic nervous system may also cause adverse effects on the cardiovascular system, such as pallor, tachycardia and increased blood pressure. In the somatic nervous system, accumulation of acetylcholine may cause muscle fasciculation, paralysis, cramps, and flaccid or rigid tone. Overstimulation of the nerves in the central nervous system, specifically in the brain, may result in drowsiness, mental confusion and lethargy. More severe effects on the central nervous system include a state of coma without reflexes, cyanosis and depression of the respiratory centers. Thus the inhibition of the enzyme AChE may have a lot of different effects. Detoxification To prevent the toxic effects, AzM can be biotransformed. Although AzM (in figure 2 named guthion) can be bioactivated by a cytochrome P450 (CYP450)-mediated desulfuration to its phosphate triester or oxon (gutoxon), it may also be detoxified by CYP itself (reaction 2 in figure 2). CYP450 is namely able to catalyze the oxidative cleavage of the P-S-C bond in AzM to yield DMTP and MMBA. The other pathways of detoxification involves glutathione (GSH)-mediated dealkylation via cleavage of the P-O-CH3 bond, which than forms mono-demethylated AzM and GS-CH3 (reaction 3 in figure 2). This mono-demethylated AzM may be further demethylated to di-demethylated AzM and again GS-CH3 (reaction 4 in figure 2). AzM also may undergo glutathione-catalyzed dearylation which forms DMPDT and glutathione-conjugated mercaptomethyl benzazimide (reaction 5 in figure 2) Gutoxon, the compound that mainly causes AzM to be toxic, can also be detoxified. Gutoxon can again be detoxified with the help of CYP450. CYP450 catalyzes the oxidative cleavage of gutoxon, which than yields DMP and MMBA (reaction 6 in figure 2). Other detoxification pathways of gutoxon are via glutathione-mediated dealkylation, which goes via cleavage of the P-O-CH3 bond to form demethylated AzM and GS-CH3 (reaction 7 in figure 2), and via glutathione-catalyzed dearylation to yield DMTP and glutathione-conjugated mercaptomethyl benzazimide (reaction 8 in figure 2). Treatment There are two different main mechanism of treatment for toxification with AzM. One possibility is to treat the patient before exposure to AzM and the other one is to treat the patient after poisoning. Competitive antagonists of AChE can be used for pre-treatment. They can reduce mortality, which is caused by exposure to AzM. Organophosphorus AChE inhibitors can bind temporally to the catalytic site of the enzyme. Because of this binding, AzM cannot phosphorylate the enzyme anymore and the enzyme is shorter inhibited. The mechanism for treatment after exposure is to block the muscarinic receptor activation. 
Anticonvulsants are used to control the seizures and oximes are used to reactivate the inhibited AChE. Oximes remove the phosphoryl group bound to the active site of the AChE by binding to it. There are a few oximes that are the most efficacious by AzM poisoning, namely oxime K-27 and physostigmine. These two treatments are also used together, some patients are namely treated with atropine (a competitive antagonist of AChE) and reactivating oximes. When patients are resistant to atropine, the patients can be treated with low doses of anisodamine, a cholinergic and alpha-1 adrenergic antagonist, to achieve a shorter recovery time. Treatment with a combination of different alkaloids or synergistically with atropine is safer than using high antroponine concentrations, which can be toxic. Another possibility is to use membrane bioreactor technology. When this technology is used, no other chemical compounds need to be added. In general, pretreatment is much more efficient than post-treatment. Indications (biomarkers) The most common biomarker for exposure to AzM is the inhibition of AChE. Also other esterase enzymes as CaE and BChE are inhibited by AzM. In general AzM exposure can be better detected by AChE inhibition than CaE inhibition. In amphibians and also zebrafish, AChE is a more sensitive biomarker for low AzM exposure-levels. As already mentioned in paragraph 7 “detoxification”, AzM can be metabolized into nontoxic dimethylated alkylphosphates (AP), with the help of CYP450 and glutathione. These APs are: dimethylphosphate (DM), dimethylthiophosphate (DMTP) and dimethyldithiophosphate (DMDTP). These three metabolites may be excreted into the urine and can be used as reliable biomarkers of exposure to AzM. However these metabolites are not specific to AzM, because other organophosphate pesticides might also be metabolized into the three alkylphosphates. The amount of erythrocyte acetylcholinesterase (RBE-AChE) in the blood can also be used as a biomarker of effect for AzM. According to Zavon (1965) RBC-AChE is the best indicator of AChE activity at the nerve synapse, because this closely parallels the level of AChE in the CNS and PNS. A depression of RBC-AChE will correlate with effects due to a rapid depression of AChE enzymes found in other tissues, this is due to the fact that both enzymes can be inhibited by AzM. Environmental degradation AzM is very stable when dissolved in acidic, neutral or slightly alkaline water but above pH11 it is rapidly hydrolyzed to anthranilic acid, benzamide, and other chemicals. In natural water-rich environments microorganisms and sunlight cause AzM to break down faster, the half-life is highly variable depending on the condition, from several days to several months. Under the normal conditions, biodegradation and evaporation are the main routes of disappearance, after evaporation AzM has more exposure to UV-light, which causes photodecomposition. With little bioactivity and no exposure to UV light, it can reach half-lives of roughly a year. Effect on animals Possible effects on animals are endocrine disruption, reproductive and immune dysfunction and cancer. A remarkable phenomenon that has been demonstrated in numerous animal studies is that repeated exposure to organophosphates causes the mammals to be less susceptible to the toxic effects of the AChE inhibitors, even though cholinesterase activities are not normal. 
This phenomenon is caused by the excess of agonist (ACh) within the synapse, ultimately leading to a down-regulation of cholinergic receptors. Consequently, with fewer receptors available, a given concentration of ACh within the synapse produces a lower response. Studies have shown that the AChEs in fish brains are more sensitive to organophosphates than those in amphibian brains. This can be explained by the affinity for AzM and the rate of phosphorylation of the enzymes. Frog brain AChE, for example, has a lower affinity for AzM and a slower rate of phosphorylation than fish brain AChE. The effects on amphibians are "reduced size, notochord bending, abnormal pigmentation, defective gut and gills, swimming in circles, body shortening, and impaired growth". In sea urchins, specifically Paracentrotus lividus, AzM modifies cytoskeleton assembly at high concentrations and can alter the deposition of the larval skeleton at low concentrations. In mice, AzM causes weight loss, inhibits brain cholinesterase (ChE) and lowers food consumption. A decrease of 45–50% of brain ChE is lethal in mice. In earthworms and rats, too, AzM decreases AChE activity. Additional animal studies (with references) cover the following species: the zebrafish; the amphipod Hyalella curvispina and the earthworm Eisenia andrei; the tilapia Oreochromis mossambicus; the frog Pseudacris regilla and the salamander Ambystoma gracile; the toad Rhinella arenarum; the rainbow trout Oncorhynchus mykiss; a comparison between the toad Rhinella arenarum and the rainbow trout Oncorhynchus mykiss; and a comparison between the mysid Mysidopsis bahia and the fish Cyprinodon variegatus. See also Azinphos-ethyl Colony collapse disorder References External links Compendium of Pesticide Common Names EPA's Azinphos-methyl Page CDC - NIOSH Pocket Guide to Chemical Hazards - Azinphos-methyl Extoxnet - Azinphos-methyl Acetylcholinesterase inhibitors Pesticides Organophosphate insecticides Phosphorodithioates Benzotriazines
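The environmental degradation section above quotes half-lives ranging from several days to roughly a year. As a purely arithmetical illustration, assuming simple first-order decay and a placeholder 30-day half-life (neither value is a measured figure from this article), the remaining fraction after a given time can be computed as follows.

```python
def remaining_fraction(t_days, half_life_days):
    """First-order decay: fraction remaining after t_days, given the half-life.
    Equivalent to exp(-ln(2) * t / t_half)."""
    return 0.5 ** (t_days / half_life_days)

# Illustrative only: a 30-day half-life, within the "several days to several months" range
for t in (7, 30, 90, 180):
    print(f"after {t:3d} days: {remaining_fraction(t, 30):.2f} of the applied amount remains")
```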
Azinphos-methyl
[ "Chemistry", "Biology", "Environmental_science" ]
3,940
[ "Toxicology", "Pesticides", "Functional groups", "Phosphorodithioates", "Biocides" ]
996,828
https://en.wikipedia.org/wiki/Orbiting%20body
In astrodynamics, an orbiting body is any physical body that orbits a more massive one, called the primary body. The orbiting body is properly referred to as the secondary body (m2), which is less massive than the primary body (m1). Thus, m2 < m1, or equivalently m1 > m2. Under standard assumptions in astrodynamics, the barycenter of the two bodies is a focus of both orbits. An orbiting body may be a spacecraft (i.e. an artificial satellite) or a natural satellite, such as a planet, dwarf planet, moon, moonlet, asteroid, or comet. A system of two orbiting bodies is modeled by the two-body problem and a system of three orbiting bodies is modeled by the three-body problem. These problems can be generalized to an N-body problem. While only a few special-case analytical solutions to the n-body problem are known, it can be reduced to a two-body system if the secondary body stays out of other bodies' spheres of influence and remains in the primary body's sphere of influence. See also Barycenter Double planet Primary (astronomy) Satellite Two-body problem Three-body problem N-body problem References Orbits Astrodynamics Physical objects
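To make the barycenter statement above concrete, here is a minimal sketch (an addition, using rounded Earth–Moon values chosen for illustration) that locates the barycenter of a primary–secondary pair.

```python
def barycenter_distance(m1, m2, separation):
    """Distance of the barycenter from the centre of the primary (mass m1),
    for two bodies of mass m1 >= m2 separated by `separation`."""
    return separation * m2 / (m1 + m2)

# Approximate Earth-Moon values (rounded, illustrative):
m_earth = 5.97e24      # kg
m_moon  = 7.35e22      # kg
d       = 384_400.0    # km, mean separation

r_bary = barycenter_distance(m_earth, m_moon, d)
print(f"barycenter ≈ {r_bary:.0f} km from Earth's centre")
# ≈ 4,700 km, i.e. inside the Earth (mean radius ≈ 6,371 km)
```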
Orbiting body
[ "Physics", "Astronomy", "Engineering" ]
244
[ "Astrodynamics", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Physical objects", "Aerospace engineering", "Matter" ]
996,955
https://en.wikipedia.org/wiki/Characteristic%20energy
In astrodynamics, the characteristic energy (C3) is a measure of the excess specific energy over that required to just barely escape from a massive body. The units are length²·time⁻², i.e. velocity squared, or energy per mass. Every object in a 2-body ballistic trajectory has a constant specific orbital energy equal to the sum of its specific kinetic and specific potential energy: ε = v²/2 − μ/r = constant, where μ is the standard gravitational parameter of the massive body with mass M, and r is the radial distance from its center. As an object in an escape trajectory moves outward, its kinetic energy decreases as its potential energy (which is always negative) increases, maintaining a constant sum. Note that C3 is twice the specific orbital energy of the escaping object. Non-escape trajectory A spacecraft with insufficient energy to escape will remain in a closed orbit (unless it intersects the central body), with C3 = −μ/a < 0, where μ is the standard gravitational parameter and a is the semi-major axis of the orbit's ellipse. If the orbit is circular, of radius r, then C3 = −μ/r. Parabolic trajectory A spacecraft leaving the central body on a parabolic trajectory has exactly the energy needed to escape and no more: C3 = 0. Hyperbolic trajectory A spacecraft that is leaving the central body on a hyperbolic trajectory has more than enough energy to escape: C3 = μ/|a| > 0, where μ is the standard gravitational parameter and a is the semi-major axis of the orbit's hyperbola (which may be negative in some convention). Also, C3 = v∞², where v∞ is the asymptotic velocity at infinite distance. The spacecraft's velocity approaches v∞ as it moves further away from the central object's gravity. History of the notation According to Chauncey Uphoff, the ultimate source of the notation C3 is Forest Ray Moulton's textbook An Introduction to Celestial Mechanics. In the second edition (1914) of this book, Moulton solves the problem of the motion of two bodies under an attractive gravitational force in chapter 5. After reducing the problem to the relative motion of the bodies in the plane, he defines the constant of the motion c₃ by the equation ẋ² + ẏ² = 2k²M/r + c₃, where M is the total mass of the two bodies and k² is Moulton's notation for the gravitational constant. He defines c₁, c₂, and c₄ to be other constants of the motion. The notation C3 probably became popularized via the JPL technical report TR-32-30 ("Design of Lunar and Interplanetary Ascent Trajectories", Victor C. Clarke, Jr., March 15, 1962), which used Moulton's terminology. Examples MAVEN, a Mars-bound spacecraft, was launched into a trajectory with a characteristic energy of 12.2 km²/s² with respect to the Earth. When simplified to a two-body problem, this would mean that MAVEN escaped Earth on a hyperbolic trajectory, slowly decreasing its speed towards v∞ = √12.2 ≈ 3.5 km/s. However, since the Sun's gravitational field is much stronger than Earth's, the two-body solution is insufficient. The characteristic energy with respect to the Sun was negative, and MAVEN – instead of heading to infinity – entered an elliptical orbit around the Sun. But the maximal velocity on the new orbit could be approximated to 33.5 km/s by assuming that it reached practical "infinity" at 3.5 km/s and that such Earth-bound "infinity" also moves with Earth's orbital velocity of about 30 km/s. The InSight mission to Mars launched with a C3 of 8.19 km²/s². The Parker Solar Probe (via Venus) plans a maximum C3 of 154 km²/s². Typical ballistic C3 (km²/s²) to get from Earth to various planets: Mars 8–16, Jupiter 80, Saturn or Uranus 147. Reaching Pluto (with its orbital inclination) needs about 160–164 km²/s². 
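A short numerical sketch (added here, with a rounded standard value for Earth's gravitational parameter) reproduces the figures quoted in the examples above using the relations C3 = v∞² and, for a bound orbit, C3 = −μ/a.

```python
import math

MU_EARTH = 398_600.0          # km^3/s^2, Earth's standard gravitational parameter (rounded)

def v_infinity(c3):
    """Hyperbolic excess speed from the characteristic energy: v_inf = sqrt(C3)."""
    return math.sqrt(c3)

def c3_elliptic(mu, a):
    """Characteristic energy of a bound (elliptic) orbit of semi-major axis a: C3 = -mu/a < 0."""
    return -mu / a

# MAVEN left Earth with C3 = 12.2 km^2/s^2 ...
print(v_infinity(12.2))              # ~3.49 km/s, the ~3.5 km/s quoted above
# ... and, adding Earth's ~30 km/s heliocentric speed, reached roughly
print(v_infinity(12.2) + 30.0)       # ~33.5 km/s near Earth's orbit

# A circular low Earth orbit at r = 6,700 km has negative characteristic energy:
print(c3_elliptic(MU_EARTH, 6_700.0))   # ~ -59.5 km^2/s^2
```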
See also Specific orbital energy Orbit Parabolic trajectory Hyperbolic trajectory References Footnotes Astrodynamics Orbits Energy (physics)
Characteristic energy
[ "Physics", "Mathematics", "Engineering" ]
819
[ "Astrodynamics", "Physical quantities", "Quantity", "Energy (physics)", "Aerospace engineering", "Wikipedia categories named after physical quantities" ]
997,021
https://en.wikipedia.org/wiki/Asymptotic%20gain%20model
The asymptotic gain model (also known as the Rosenstark method) is a representation of the gain of negative feedback amplifiers given by the asymptotic gain relation: where is the return ratio with the input source disabled (equal to the negative of the loop gain in the case of a single-loop system composed of unilateral blocks), G∞ is the asymptotic gain and G0 is the direct transmission term. This form for the gain can provide intuitive insight into the circuit and often is easier to derive than a direct attack on the gain. Figure 1 shows a block diagram that leads to the asymptotic gain expression. The asymptotic gain relation also can be expressed as a signal flow graph. See Figure 2. The asymptotic gain model is a special case of the extra element theorem. As follows directly from limiting cases of the gain expression, the asymptotic gain G∞ is simply the gain of the system when the return ratio approaches infinity: while the direct transmission term G0 is the gain of the system when the return ratio is zero: Advantages This model is useful because it completely characterizes feedback amplifiers, including loading effects and the bilateral properties of amplifiers and feedback networks. Often feedback amplifiers are designed such that the return ratio T is much greater than unity. In this case, and assuming the direct transmission term G0 is small (as it often is), the gain G of the system is approximately equal to the asymptotic gain G∞. The asymptotic gain is (usually) only a function of passive elements in a circuit, and can often be found by inspection. The feedback topology (series-series, series-shunt, etc.) need not be identified beforehand as the analysis is the same in all cases. Implementation Direct application of the model involves these steps: Select a dependent source in the circuit. Find the return ratio for that source. Find the gain G∞ directly from the circuit by replacing the circuit with one corresponding to T = ∞. Find the gain G0 directly from the circuit by replacing the circuit with one corresponding to T = 0. Substitute the values for T, G∞ and G0 into the asymptotic gain formula. These steps can be implemented directly in SPICE using the small-signal circuit of hand analysis. In this approach the dependent sources of the devices are readily accessed. In contrast, for experimental measurements using real devices or SPICE simulations using numerically generated device models with inaccessible dependent sources, evaluating the return ratio requires special methods. Connection with classical feedback theory Classical feedback theory neglects feedforward (G0). If feedforward is dropped, the gain from the asymptotic gain model becomes while in classical feedback theory, in terms of the open loop gain A, the gain with feedback (closed loop gain) is: Comparison of the two expressions indicates the feedback factor βFB is: while the open-loop gain is: If the accuracy is adequate (usually it is), these formulas suggest an alternative evaluation of T: evaluate the open-loop gain and G∞ and use these expressions to find T. Often these two evaluations are easier than evaluation of T directly. Examples The steps in deriving the gain using the asymptotic gain formula are outlined below for two negative feedback amplifiers. The single transistor example shows how the method works in principle for a transconductance amplifier, while the second two-transistor example shows the approach to more complex cases using a current amplifier. 
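The asymptotic gain relation, G = G∞·T/(1 + T) + G0/(1 + T), and its mapping onto the classical closed-loop form A/(1 + βA) can be tried numerically. The sketch below is an illustration added here; the values of T, G∞ and G0 are arbitrary placeholders, not taken from any circuit in the article.

```python
def asymptotic_gain(T, G_inf, G_0):
    """Rosenstark's asymptotic gain relation: G = G_inf * T/(1+T) + G_0 / (1+T)."""
    return G_inf * T / (1 + T) + G_0 / (1 + T)

def classical_gain(A, beta):
    """Classical single-loop result, neglecting feedforward: G = A / (1 + beta*A)."""
    return A / (1 + beta * A)

# Illustrative placeholder values: a large return ratio and a small feedthrough term
T, G_inf, G_0 = 50.0, -20_000.0, 9.5
print(asymptotic_gain(T, G_inf, G_0))     # close to G_inf when T >> 1 and G_0 is small

# Mapping onto classical feedback theory: beta_FB = 1/G_inf and A = G_inf * T
beta_FB, A = 1.0 / G_inf, G_inf * T
print(classical_gain(A, beta_FB))         # matches the asymptotic result once G_0 is dropped
```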
Single-stage transistor amplifier Consider the simple FET feedback amplifier in Figure 3. The aim is to find the low-frequency, open-circuit, transresistance gain of this circuit G = vout / iin using the asymptotic gain model. The small-signal equivalent circuit is shown in Figure 4, where the transistor is replaced by its hybrid-pi model. Return ratio It is most straightforward to begin by finding the return ratio T, because G0 and G∞ are defined as limiting forms of the gain as T tends to either zero or infinity. To take these limits, it is necessary to know what parameters T depends upon. There is only one dependent source in this circuit, so as a starting point the return ratio related to this source is determined as outlined in the article on return ratio. The return ratio is found using Figure 5. In Figure 5, the input current source is set to zero, By cutting the dependent source out of the output side of the circuit, and short-circuiting its terminals, the output side of the circuit is isolated from the input and the feedback loop is broken. A test current it replaces the dependent source. Then the return current generated in the dependent source by the test current is found. The return ratio is then T = −ir / it. Using this method, and noticing that RD is in parallel with rO, T is determined as: where the approximation is accurate in the common case where rO >> RD. With this relationship it is clear that the limits T → 0, or ∞ are realized if we let transconductance gm → 0, or ∞. Asymptotic gain Finding the asymptotic gain G∞ provides insight, and usually can be done by inspection. To find G∞ we let gm → ∞ and find the resulting gain. The drain current, iD = gm vGS, must be finite. Hence, as gm approaches infinity, vGS also must approach zero. As the source is grounded, vGS = 0 implies vG = 0 as well. With vG = 0 and the fact that all the input current flows through Rf (as the FET has an infinite input impedance), the output voltage is simply −iin Rf. Hence Alternatively G∞ is the gain found by replacing the transistor by an ideal amplifier with infinite gain - a nullor. Direct feedthrough To find the direct feedthrough we simply let gm → 0 and compute the resulting gain. The currents through Rf and the parallel combination of RD || rO must therefore be the same and equal to iin. The output voltage is therefore iin (RD || rO). Hence where the approximation is accurate in the common case where rO >> RD. Overall gain The overall transresistance gain of this amplifier is therefore: Examining this equation, it appears to be advantageous to make RD large in order make the overall gain approach the asymptotic gain, which makes the gain insensitive to amplifier parameters (gm and RD). In addition, a large first term reduces the importance of the direct feedthrough factor, which degrades the amplifier. One way to increase RD is to replace this resistor by an active load, for example, a current mirror. Two-stage transistor amplifier Figure 6 shows a two-transistor amplifier with a feedback resistor Rf. This amplifier is often referred to as a shunt-series feedback amplifier, and analyzed on the basis that resistor R2 is in series with the output and samples output current, while Rf is in shunt (parallel) with the input and subtracts from the input current. See the article on negative feedback amplifier and references by Meyer or Sedra. That is, the amplifier uses current feedback. 
It frequently is ambiguous just what type of feedback is involved in an amplifier, and the asymptotic gain approach has the advantage/disadvantage that it works whether or not you understand the circuit. Figure 6 indicates the output node, but does not indicate the choice of output variable. In what follows, the output variable is selected as the short-circuit current of the amplifier, that is, the collector current of the output transistor. Other choices for output are discussed later. To implement the asymptotic gain model, the dependent source associated with either transistor can be used. Here the first transistor is chosen. Return ratio The circuit to determine the return ratio is shown in the top panel of Figure 7. Labels show the currents in the various branches as found using a combination of Ohm's law and Kirchhoff's laws. Resistor R1 = RB // rπ1 and R3 = RC2 // RL. KVL from the ground of R1 to the ground of R2 provides: KVL provides the collector voltage at the top of RC as Finally, KCL at this collector provides Substituting the first equation into the second and the second into the third, the return ratio is found as Gain G0 with T = 0 The circuit to determine G0 is shown in the center panel of Figure 7. In Figure 7, the output variable is the output current βiB (the short-circuit load current), which leads to the short-circuit current gain of the amplifier, namely βiB / iS: Using Ohm's law, the voltage at the top of R1 is found as or, rearranging terms, Using KCL at the top of R2: Emitter voltage vE already is known in terms of iB from the diagram of Figure 7. Substituting the second equation in the first, iB is determined in terms of iS alone, and G0 becomes: Gain G0 represents feedforward through the feedback network, and commonly is negligible. Gain G∞ with T → ∞ The circuit to determine G∞ is shown in the bottom panel of Figure 7. The introduction of the ideal op amp (a nullor) in this circuit is explained as follows. When T → ∞, the gain of the amplifier goes to infinity as well, and in such a case the differential voltage driving the amplifier (the voltage across the input transistor rπ1) is driven to zero and (according to Ohm's law when there is no voltage) it draws no input current. On the other hand, the output current and output voltage are whatever the circuit demands. This behavior is like a nullor, so a nullor can be introduced to represent the infinite gain transistor. The current gain is read directly off the schematic: Comparison with classical feedback theory Using the classical model, the feed-forward is neglected and the feedback factor βFB is (assuming transistor β >> 1): and the open-loop gain A is: Overall gain The above expressions can be substituted into the asymptotic gain model equation to find the overall gain G. The resulting gain is the current gain of the amplifier with a short-circuit load. Gain using alternative output variables In the amplifier of Figure 6, RL and RC2 are in parallel. To obtain the transresistance gain, say Aρ, that is, the gain using voltage as output variable, the short-circuit current gain G is multiplied by RC2 // RL in accordance with Ohm's law: The open-circuit voltage gain is found from Aρ by setting RL → ∞. 
To obtain the current gain when load current iL in load resistor RL is the output variable, say Ai, the formula for current division is used: iL = iout × RC2 / ( RC2 + RL ) and the short-circuit current gain G is multiplied by this loading factor: Of course, the short-circuit current gain is recovered by setting RL = 0 Ω. References and notes See also Blackman's theorem Extra element theorem Mason's gain formula Feedback amplifiers Return ratio Signal-flow graph External links Lecture notes on the asymptotic gain model Electronic feedback Electronic amplifiers Control theory Signal processing Analog circuits
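Putting numbers into the single-stage example discussed earlier: the article gives G∞ = −Rf and G0 = RD ∥ rO, and for the return ratio the sketch below assumes the commonly quoted approximation T ≈ gm·(RD ∥ rO). The component values (gm, RD, rO, Rf) are placeholders chosen here for illustration, not values from the article.

```python
def parallel(a, b):
    return a * b / (a + b)

# Assumed small-signal values (placeholders)
gm = 5e-3        # S,   FET transconductance
RD = 10e3        # ohm, drain resistor
rO = 100e3       # ohm, FET output resistance
Rf = 20e3        # ohm, feedback resistor

RDrO  = parallel(RD, rO)
T     = gm * RDrO          # assumed return ratio, T ~ gm*(RD || rO)
G_inf = -Rf                # asymptotic transresistance gain (gm -> infinity), from the article
G_0   = RDrO               # direct feedthrough (gm -> 0), from the article

G = G_inf * T / (1 + T) + G_0 / (1 + T)
print(f"T = {T:.1f}, G_inf = {G_inf:.0f} ohm, G_0 = {G_0:.0f} ohm, G = {G:.0f} ohm")
# With T of roughly 45, the overall transresistance is about -19.4 kohm,
# close to the asymptotic value of -20 kohm, as the text suggests it should be.
```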
Asymptotic gain model
[ "Mathematics", "Technology", "Engineering" ]
2,371
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Applied mathematics", "Control theory", "Analog circuits", "Electronic engineering", "Electronic amplifiers", "Amplifiers", "Dynamical systems" ]
997,260
https://en.wikipedia.org/wiki/Specific%20angular%20momentum
In celestial mechanics, the specific relative angular momentum (often denoted or ) of a body is the angular momentum of that body divided by its mass. In the case of two orbiting bodies it is the vector product of their relative position and relative linear momentum, divided by the mass of the body in question. Specific relative angular momentum plays a pivotal role in the analysis of the two-body problem, as it remains constant for a given orbit under ideal conditions. "Specific" in this context indicates angular momentum per unit mass. The SI unit for specific relative angular momentum is square meter per second. Definition The specific relative angular momentum is defined as the cross product of the relative position vector and the relative velocity vector . where is the angular momentum vector, defined as . The vector is always perpendicular to the instantaneous osculating orbital plane, which coincides with the instantaneous perturbed orbit. It is not necessarily perpendicular to the average orbital plane over time. Proof of constancy in the two body case Under certain conditions, it can be proven that the specific angular momentum is constant. The conditions for this proof include: The mass of one object is much greater than the mass of the other one. () The coordinate system is inertial. Each object can be treated as a spherically symmetrical point mass. No other forces act on the system other than the gravitational force that connects the two bodies. Proof The proof starts with the two body equation of motion, derived from Newton's law of universal gravitation: where: is the position vector from to with scalar magnitude . is the second time derivative of . (the acceleration) is the Gravitational constant. The cross product of the position vector with the equation of motion is: Because the second term vanishes: It can also be derived that: Combining these two equations gives: Since the time derivative is equal to zero, the quantity is constant. Using the velocity vector in place of the rate of change of position, and for the specific angular momentum: is constant. This is different from the normal construction of momentum, , because it does not include the mass of the object in question. Kepler's laws of planetary motion Kepler's laws of planetary motion can be proved almost directly with the above relationships. First law The proof starts again with the equation of the two-body problem. This time the cross product is multiplied with the specific relative angular momentum The left hand side is equal to the derivative because the angular momentum is constant. After some steps (which includes using the vector triple product and defining the scalar to be the radial velocity, as opposed to the norm of the vector ) the right hand side becomes: Setting these two expression equal and integrating over time leads to (with the constant of integration ) Now this equation is multiplied (dot product) with and rearranged Finally one gets the orbit equation which is the equation of a conic section in polar coordinates with semi-latus rectum and eccentricity . Second law The second law follows instantly from the second of the three equations to calculate the absolute value of the specific relative angular momentum. If one connects this form of the equation with the relationship for the area of a sector with an infinitesimal small angle (triangle with one very small side), the equation Third law Kepler's third is a direct consequence of the second law. 
Integrating over one revolution gives the orbital period T = 2πab/h for the area πab of an ellipse. Replacing the semi-minor axis with b = √(ap) and the specific relative angular momentum with h = √(μp), one gets T² = (4π²/μ)·a³. There is thus a relationship between the semi-major axis and the orbital period of a satellite that can be reduced to a constant of the central body. See also Specific orbital energy, another conserved quantity in the two-body problem. References Angular momentum Astrodynamics Orbits
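As a hedged numerical check of the quantities above (the state vector and the rounded value of Earth's gravitational parameter are illustrative inputs chosen here), the sketch below computes h = r × v for one sample state, confirms it is perpendicular to both r and v, and extracts the semi-latus rectum p = h²/μ and the eccentricity via the standard eccentricity-vector relation.

```python
import numpy as np

mu = 398_600.0                     # km^3/s^2, Earth's standard gravitational parameter (rounded)

# Illustrative state vector of a satellite (position in km, velocity in km/s)
r = np.array([7_000.0, 0.0, 0.0])
v = np.array([0.0, 7.8, 1.0])

h = np.cross(r, v)                 # specific relative angular momentum, km^2/s
p = np.dot(h, h) / mu              # semi-latus rectum of the conic, km

# Eccentricity vector (points toward periapsis); its norm is the orbital eccentricity
e_vec = np.cross(v, h) / mu - r / np.linalg.norm(r)
e = np.linalg.norm(e_vec)

print(np.dot(h, r), np.dot(h, v))  # both 0: h is perpendicular to the orbital plane
print(p, e)                        # the orbit equation is then r = p / (1 + e*cos(theta))
```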
Specific angular momentum
[ "Physics", "Mathematics", "Engineering" ]
761
[ "Astrodynamics", "Physical quantities", "Quantity", "Aerospace engineering", "Angular momentum", "Momentum", "Moment (physics)" ]
997,387
https://en.wikipedia.org/wiki/Specific%20orbital%20energy
In the gravitational two-body problem, the specific orbital energy (or vis-viva energy) of two orbiting bodies is the constant sum of their mutual potential energy () and their kinetic energy (), divided by the reduced mass. According to the orbital energy conservation equation (also referred to as vis-viva equation), it does not vary with time: where is the relative orbital speed; is the orbital distance between the bodies; is the sum of the standard gravitational parameters of the bodies; is the specific relative angular momentum in the sense of relative angular momentum divided by the reduced mass; is the orbital eccentricity; is the semi-major axis. It is typically expressed in (megajoule per kilogram) or (squared kilometer per squared second). For an elliptic orbit the specific orbital energy is the negative of the additional energy required to accelerate a mass of one kilogram to escape velocity (parabolic orbit). For a hyperbolic orbit, it is equal to the excess energy compared to that of a parabolic orbit. In this case the specific orbital energy is also referred to as characteristic energy. Equation forms for different orbits For an elliptic orbit, the specific orbital energy equation, when combined with conservation of specific angular momentum at one of the orbit's apsides, simplifies to: where is the standard gravitational parameter; is semi-major axis of the orbit. For a parabolic orbit this equation simplifies to For a hyperbolic trajectory this specific orbital energy is either given by or the same as for an ellipse, depending on the convention for the sign of a. In this case the specific orbital energy is also referred to as characteristic energy (or ) and is equal to the excess specific energy compared to that for a parabolic orbit. It is related to the hyperbolic excess velocity (the orbital velocity at infinity) by It is relevant for interplanetary missions. Thus, if orbital position vector () and orbital velocity vector () are known at one position, and is known, then the energy can be computed and from that, for any other position, the orbital speed. Rate of change For an elliptic orbit the rate of change of the specific orbital energy with respect to a change in the semi-major axis is where is the standard gravitational parameter; is semi-major axis of the orbit. In the case of circular orbits, this rate is one half of the gravitation at the orbit. This corresponds to the fact that for such orbits the total energy is one half of the potential energy, because the kinetic energy is minus one half of the potential energy. Additional energy If the central body has radius R, then the additional specific energy of an elliptic orbit compared to being stationary at the surface is The quantity is the height the ellipse extends above the surface, plus the periapsis distance (the distance the ellipse extends beyond the center of the Earth). For the Earth and just little more than the additional specific energy is ; which is the kinetic energy of the horizontal component of the velocity, i.e. , . Examples ISS The International Space Station has an orbital period of 91.74 minutes (5504s), hence by Kepler's Third Law the semi-major axis of its orbit is 6,738km. The specific orbital energy associated with this orbit is −29.6MJ/kg: the potential energy is −59.2MJ/kg, and the kinetic energy 29.6MJ/kg. Compared with the potential energy at the surface, which is −62.6MJ/kg., the extra potential energy is 3.4MJ/kg, and the total extra energy is 33.0MJ/kg. 
The average speed is 7.7km/s, the net delta-v to reach this orbit is 8.1km/s (the actual delta-v is typically 1.5–2.0km/s more for atmospheric drag and gravity drag). The increase per meter would be 4.4J/kg; this rate corresponds to one half of the local gravity of 8.8m/s2. For an altitude of 100km (radius is 6471km): The energy is −30.8MJ/kg: the potential energy is −61.6MJ/kg, and the kinetic energy 30.8MJ/kg. Compare with the potential energy at the surface, which is −62.6MJ/kg. The extra potential energy is 1.0MJ/kg, the total extra energy is 31.8MJ/kg. The increase per meter would be 4.8J/kg; this rate corresponds to one half of the local gravity of 9.5m/s2. The speed is 7.8km/s, the net delta-v to reach this orbit is 8.0km/s. Taking into account the rotation of the Earth, the delta-v is up to 0.46km/s less (starting at the equator and going east) or more (if going west). Voyager 1 For Voyager 1, with respect to the Sun: = 132,712,440,018 km3⋅s−2 is the standard gravitational parameter of the Sun r = 17 billion kilometers v = 17.1 km/s Hence: Thus the hyperbolic excess velocity (the theoretical orbital velocity at infinity) is given by However, Voyager 1 does not have enough velocity to leave the Milky Way. The computed speed applies far away from the Sun, but at such a position that the potential energy with respect to the Milky Way as a whole has changed negligibly, and only if there is no strong interaction with celestial bodies other than the Sun. Applying thrust Assume: a is the acceleration due to thrust (the time-rate at which delta-v is spent) g is the gravitational field strength v is the velocity of the rocket Then the time-rate of change of the specific energy of the rocket is : an amount for the kinetic energy and an amount for the potential energy. The change of the specific energy of the rocket per unit change of delta-v is which is |v| times the cosine of the angle between v and a. Thus, when applying delta-v to increase specific orbital energy, this is done most efficiently if a is applied in the direction of v, and when |v| is large. If the angle between v and g is obtuse, for example in a launch and in a transfer to a higher orbit, this means applying the delta-v as early as possible and at full capacity. See also gravity drag. When passing by a celestial body it means applying thrust when nearest to the body. When gradually making an elliptic orbit larger, it means applying thrust each time when near the periapsis. Such maneuver is called an Oberth maneuver or powered flyby. When applying delta-v to decrease specific orbital energy, this is done most efficiently if a is applied in the direction opposite to that of v, and again when |v| is large. If the angle between v and g is acute, for example in a landing (on a celestial body without atmosphere) and in a transfer to a circular orbit around a celestial body when arriving from outside, this means applying the delta-v as late as possible. When passing by a planet it means applying thrust when nearest to the planet. When gradually making an elliptic orbit smaller, it means applying thrust each time when near the periapsis. If a is in the direction of v: See also Specific energy change of rockets Characteristic energy C3 (Double the specific orbital energy) References Astrodynamics Orbits Physical quantities Mass-specific quantities
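The ISS numbers quoted above can be checked in a few lines. The sketch below (added here, using the rounded standard value of Earth's gravitational parameter) derives the semi-major axis from the 91.74-minute period via Kepler's third law and evaluates ε = −μ/(2a).

```python
import math

mu = 3.986e5                 # km^3/s^2, Earth's standard gravitational parameter (rounded)

T = 91.74 * 60               # ISS orbital period from the text, in seconds
a = (mu * (T / (2 * math.pi)) ** 2) ** (1 / 3)    # Kepler's third law: a^3 = mu*(T/2pi)^2

eps = -mu / (2 * a)          # specific orbital energy, km^2/s^2, numerically equal to MJ/kg
v_circ = math.sqrt(mu / a)   # speed of a circular orbit at radius a

print(f"a ≈ {a:.0f} km")        # ≈ 6,740 km, matching the 6,738 km quoted above
print(f"ε ≈ {eps:.1f} MJ/kg")   # ≈ -29.6 MJ/kg
print(f"v ≈ {v_circ:.1f} km/s") # ≈ 7.7 km/s average speed
```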
Specific orbital energy
[ "Physics", "Mathematics", "Engineering" ]
1,576
[ "Physical phenomena", "Astrodynamics", "Physical quantities", "Quantity", "Mass", "Intensive quantities", "Aerospace engineering", "Mass-specific quantities", "Physical properties", "Matter" ]
997,416
https://en.wikipedia.org/wiki/FTP%20bounce%20attack
FTP bounce attack is an exploit of the FTP protocol whereby an attacker is able to use the PORT command to request access to ports indirectly through the use of the victim machine, which serves as a proxy for the request, similar to an open mail relay using SMTP. This technique can be used to port scan hosts discreetly, and to potentially bypass a network's access-control list to access specific ports that the attacker cannot access through a direct connection, for example with the nmap port scanner. Nearly all modern FTP server programs are configured by default to refuse PORT commands that would connect to any host but the originating host, thwarting FTP bounce attacks. See also Confused deputy problem References External links CERT Advisory on FTP Bounce Attack CERT Article on FTP Bounce Attack Original posting describing the attack File Transfer Protocol Computer network security
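The command being abused is FTP's PORT command, whose argument encodes an IP address and TCP port as six comma-separated decimal bytes (RFC 959). The sketch below, added for illustration only, shows that encoding and decoding; it contains no exploit logic, but it makes clear why a server willing to accept an arbitrary address in this field can be used as a scanning proxy.

```python
def port_argument(ip: str, port: int) -> str:
    """Build the argument of an FTP PORT command (RFC 959): h1,h2,h3,h4,p1,p2,
    where the port is split into two bytes, port = p1*256 + p2."""
    h1, h2, h3, h4 = ip.split(".")
    return f"{h1},{h2},{h3},{h4},{port // 256},{port % 256}"

def parse_port_argument(arg: str) -> tuple[str, int]:
    """Recover the (ip, port) pair a PORT argument points at."""
    h1, h2, h3, h4, p1, p2 = (int(x) for x in arg.split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

arg = port_argument("192.0.2.10", 8080)
print(arg)                        # 192,0,2,10,31,144
print(parse_port_argument(arg))   # ('192.0.2.10', 8080)
# A bounce-susceptible server accepts an address here other than the client's own,
# which is exactly what modern servers refuse by default.
```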
FTP bounce attack
[ "Technology", "Engineering" ]
171
[ "Cybersecurity engineering", "Computer network stubs", "Computer networks engineering", "Computer network security", "Computing stubs" ]
998,070
https://en.wikipedia.org/wiki/Node%20%28physics%29
A node is a point along a standing wave where the wave has minimum amplitude. For instance, in a vibrating guitar string, the ends of the string are nodes. By changing the position of the end node through frets, the guitarist changes the effective length of the vibrating string and thereby the note played. The opposite of a node is an anti-node, a point where the amplitude of the standing wave is at maximum. These occur midway between the nodes. Explanation Standing waves result when two sinusoidal wave trains of the same frequency are moving in opposite directions in the same space and interfere with each other. They occur when waves are reflected at a boundary, such as sound waves reflected from a wall or electromagnetic waves reflected from the end of a transmission line, and particularly when waves are confined in a resonator at resonance, bouncing back and forth between two boundaries, such as in an organ pipe or guitar string. In a standing wave the nodes are a series of locations at equally spaced intervals where the wave amplitude (motion) is zero (see animation above). At these points the two waves add with opposite phase and cancel each other out. They occur at intervals of half a wavelength (λ/2). Midway between each pair of nodes are locations where the amplitude is maximum. These are called the antinodes. At these points the two waves add with the same phase and reinforce each other. In cases where the two opposite wave trains are not the same amplitude, they do not cancel perfectly, so the amplitude of the standing wave at the nodes is not zero but merely a minimum. This occurs when the reflection at the boundary is imperfect. This is indicated by a finite standing wave ratio (SWR), the ratio of the amplitude of the wave at the antinode to the amplitude at the node. In resonance of a two dimensional surface or membrane, such as a drumhead or vibrating metal plate, the nodes become nodal lines, lines on the surface where the surface is motionless, dividing the surface into separate regions vibrating with opposite phase. These can be made visible by sprinkling sand on the surface, and the intricate patterns of lines resulting are called Chladni figures. In transmission lines a voltage node is a current antinode, and a voltage antinode is a current node. Nodes are the points of zero displacement, not the points where two constituent waves intersect. Boundary conditions Where the nodes occur in relation to the boundary reflecting the waves depends on the end conditions or boundary conditions. Although there are many types of end conditions, the ends of resonators are usually one of two types that cause total reflection: Fixed boundary: Examples of this type of boundary are the attachment point of a guitar string, the closed end of an open pipe like an organ pipe, or a woodwind pipe, the periphery of a drumhead, a transmission line with the end short circuited, or the mirrors at the ends of a laser cavity. In this type, the amplitude of the wave is forced to zero at the boundary, so there is a node at the boundary, and the other nodes occur at multiples of half a wavelength from it: Free boundary: Examples of this type are an open-ended organ or woodwind pipe, the ends of the vibrating resonator bars in a xylophone, glockenspiel or tuning fork, the ends of an antenna, or a transmission line with an open end. In this type the derivative (slope) of the wave's amplitude (in sound waves the pressure, in electromagnetic waves, the current) is forced to zero at the boundary. 
So there is an amplitude maximum (antinode) at the boundary, the first node occurs a quarter wavelength from the end, and the other nodes are at half wavelength intervals from there: Examples Sound A sound wave consists of alternating cycles of compression and expansion of the wave medium. During compression, the molecules of the medium are forced together, resulting in the increased pressure and density. During expansion the molecules are forced apart, resulting in the decreased pressure and density. The number of nodes in a specified length is directly proportional to the frequency of the wave. Occasionally on a guitar, violin, or other stringed instrument, nodes are used to create harmonics. When the finger is placed on top of the string at a certain point, but does not push the string all the way down to the fretboard, a third node is created (in addition to the bridge and nut) and a harmonic is sounded. During normal play when the frets are used, the harmonics are always present, although they are quieter. With the artificial node method, the overtone is louder and the fundamental tone is quieter. If the finger is placed at the midpoint of the string, the first overtone is heard, which is an octave above the fundamental note which would be played, had the harmonic not been sounded. When two additional nodes divide the string into thirds, this creates an octave and a perfect fifth (twelfth). When three additional nodes divide the string into quarters, this creates a double octave. When four additional nodes divide the string into fifths, this creates a double-octave and a major third (17th). The octave, major third and perfect fifth are the three notes present in a major chord. The characteristic sound that allows the listener to identify a particular instrument is largely due to the relative magnitude of the harmonics created by the instrument. Waves in two or three dimensions In two dimensional standing waves, nodes are curves (often straight lines or circles when displayed on simple geometries.) For example, sand collects along the nodes of a vibrating Chladni plate to indicate regions where the plate is not moving. In chemistry, quantum mechanical waves, or "orbitals", are used to describe the wave-like properties of electrons. Many of these quantum waves have nodes and antinodes as well. The number and position of these nodes and antinodes give rise to many of the properties of an atom or covalent bond. Atomic orbitals are classified according to the number of radial and angular nodes. A radial node for the hydrogen atom is a sphere that occurs where the wavefunction for an atomic orbital is equal to zero, while the angular node is a flat plane. Molecular orbitals are classified according to bonding character. Molecular orbitals with an antinode between nuclei are very stable, and are known as "bonding orbitals" which strengthen the bond. In contrast, molecular orbitals with a node between nuclei will not be stable due to electrostatic repulsion and are known as "anti-bonding orbitals" which weaken the bond. Another such quantum mechanical concept is the particle in a box where the number of nodes of the wavefunction can help determine the quantum energy state—zero nodes corresponds to the ground state, one node corresponds to the 1st excited state, etc. 
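A minimal Python sketch of the node counting described above and generalized below, for an ideal string fixed at both ends; the string length and fundamental frequency are arbitrary assumed values, not taken from the article. Touching the string at an interior node position of mode n sounds the nth harmonic.

# Node positions and harmonic frequencies of an ideal string fixed at both ends.
# The nth mode has wavelength 2*length/n, so nodes sit every half wavelength,
# at k*length/n for k = 0..n; the two endpoints are nodes (fixed boundaries),
# and the n-1 points in between are the interior nodes.

def mode_nodes(n, length):
    """Positions (in metres) of all nodes of the nth standing-wave mode."""
    return [k * length / n for k in range(n + 1)]

def harmonic_frequency(n, fundamental):
    """Frequency of the nth harmonic of an ideal string."""
    return n * fundamental

if __name__ == "__main__":
    L = 0.65      # metres, a typical guitar scale length (assumed value)
    f1 = 110.0    # Hz, fundamental of an open A string (assumed value)
    for n in range(1, 6):
        nodes = mode_nodes(n, L)
        print(f"mode {n}: f = {harmonic_frequency(n, f1):6.1f} Hz, "
              f"interior nodes = {len(nodes) - 2} at "
              f"{[round(x, 3) for x in nodes[1:-1]]} m")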
In general, if one arranges the eigenstates in the order of increasing energies, the eigenfunctions likewise fall in the order of increasing number of nodes; the nth eigenfunction has n−1 nodes, between each of which the following eigenfunctions have at least one node. References Concepts in physics Sound Musical tuning Waves
Node (physics)
[ "Physics" ]
1,469
[ "Waves", "Physical phenomena", "Motion (physics)", "nan" ]
998,103
https://en.wikipedia.org/wiki/Bioequivalence
Bioequivalence is a term in pharmacokinetics used to assess the expected in vivo biological equivalence of two proprietary preparations of a drug. If two products are said to be bioequivalent, it means that they would be expected to be, for all intents and purposes, the same. One article defined bioequivalence by stating that "two pharmaceutical products are bioequivalent if they are pharmaceutically equivalent and their bioavailabilities (rate and extent of availability) after administration in the same molar dose are similar to such a degree that their effects, with respect to both efficacy and safety, can be expected to be essentially the same. Pharmaceutical equivalence implies the same amount of the same active substance(s), in the same dosage form, for the same route of administration and meeting the same or comparable standards." For the World Health Organization (WHO), "two pharmaceutical products are bioequivalent if they are pharmaceutically equivalent or pharmaceutical alternatives, and their bioavailabilities, in terms of rate (Cmax and tmax) and extent of absorption (area under the curve), after administration of the same molar dose under the same conditions, are similar to such a degree that their effects can be expected to be essentially the same". The United States Food and Drug Administration (FDA) has defined bioequivalence as "the absence of a significant difference in the rate and extent to which the active ingredient or active moiety in pharmaceutical equivalents or pharmaceutical alternatives becomes available at the site of drug action when administered at the same molar dose under similar conditions in an appropriately designed study." Bioequivalence In determining bioequivalence between two products, such as a commercially available branded product and a potential to-be-marketed generic product, pharmacokinetic studies are conducted whereby each of the preparations is administered in a cross-over study (sometimes a parallel study, when a cross-over study is not feasible) to volunteer subjects, generally healthy individuals but occasionally patients. Serum/plasma samples are obtained at prescribed times and assayed for parent drug (or occasionally metabolite) concentration. Occasionally, when it is neither feasible nor possible to compare blood concentration levels of the two products (e.g. inhaled corticosteroids), pharmacodynamic endpoints rather than pharmacokinetic endpoints (see below) are used for comparison. For a pharmacokinetic comparison, the plasma concentration data are used to assess key pharmacokinetic parameters such as area under the curve (AUC), peak concentration (Cmax), time to peak concentration (tmax), and absorption lag time (tlag). Testing should be conducted at several different doses, especially when the drug displays non-linear pharmacokinetics. In addition to data from bioequivalence studies, other data may need to be submitted to meet regulatory requirements for bioequivalence. Such evidence may include: analytical method validation in vitro-in vivo correlation studies (IVIVC) Regulatory definition The World Health Organization The World Health Organization considers two formulations bioequivalent if the 90% confidence interval for the ratio of the multisource (generic) product to the comparator lies within the 80.00–125.00% acceptance range for AUC0–t and Cmax. For highly variable finished pharmaceutical products, the applicable acceptance range for Cmax can be expanded (up to 69.84–143.19%).
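A minimal Python sketch of this kind of acceptance test on a single pharmacokinetic parameter such as AUC or Cmax. It uses made-up subject data and a simplified paired analysis of log-transformed values in place of the full crossover ANOVA used in practice, so it illustrates the logic of the 80.00–125.00% criterion rather than any regulatory procedure.

# Simplified bioequivalence check on log-transformed AUC values from a paired
# test/reference comparison; all numbers below are made-up illustrative data.
import math
from statistics import mean, stdev

test_auc      = [1020.0,  880.0, 1310.0,  950.0, 1150.0,  990.0, 1210.0, 1040.0]
reference_auc = [1000.0,  910.0, 1250.0, 1010.0, 1100.0, 1030.0, 1180.0, 1000.0]

# Work on the log scale: the difference of logs is the log of the ratio.
log_diffs = [math.log(x) - math.log(y) for x, y in zip(test_auc, reference_auc)]
n = len(log_diffs)
d_bar = mean(log_diffs)
se = stdev(log_diffs) / math.sqrt(n)

# t critical value for n-1 = 7 degrees of freedom at the 95th percentile
# (two-sided 90% CI); in practice this would come from a t-table or scipy.stats.
t_crit = 1.895
ci_low, ci_high = d_bar - t_crit * se, d_bar + t_crit * se

gmr = math.exp(d_bar)                         # geometric mean ratio test/reference
lo, hi = math.exp(ci_low), math.exp(ci_high)  # back-transform the 90% CI
print(f"GMR = {gmr:.3f}, 90% CI = ({lo:.3f}, {hi:.3f})")
print("bioequivalent" if lo >= 0.80 and hi <= 1.25 else "not demonstrated")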
Australia In Australia, the Therapeutic Goods Administration (TGA) considers preparations to be bioequivalent if the 90% confidence intervals (90% CI) of the rate ratios, between the two preparations, of Cmax and AUC lie in the range 0.80–1.25. Tmax should also be similar between the products. There are tighter requirements for drugs with a narrow therapeutic index and/or saturable metabolism – thus no generic products exist on the Australian market for digoxin or phenytoin, for instance. Europe According to regulations applicable in the European Economic Area two medicinal products are bioequivalent if they are pharmaceutically equivalent or pharmaceutical alternatives and if their bioavailabilities after administration in the same molar dose are similar to such a degree that their effects, with respect to both efficacy and safety, will be essentially the same. This is considered demonstrated if the 90% confidence intervals (90% CI) of the ratios for AUC0–t and Cmax between the two preparations lie in the range 80–125%. United States The FDA considers two products bioequivalent if the 90% CI of the relative mean Cmax, AUC(0–t) and AUC(0–∞) of the test (e.g. generic formulation) to reference (e.g. innovator brand formulation) is within 80% to 125% in the fasting state. Although there are a few exceptions, generally a bioequivalent comparison of Test to Reference formulations also requires administration after an appropriate meal at a specified time before taking the drug, a so-called "fed" or "food-effect" study. A food-effect study requires the same statistical evaluation as the fasting study, described above. China There were no requirements for bioequivalence in generic medications in China until the 2016 Opinion on Conducting Consistent Evaluation of the Quality and Efficacy of Generic Drugs (), which established basic rules for future bioequivalence work. Since July 2020, all newly approved generics must pass bioequivalence checks; previous drugs may apply to be checked. Since 2019, National Centralized Volume-Based Procurement uses "passes generic-consistency evaluation" as one of the bidding criteria. The Chinese definition of "bioequivalence" entails having the test drug's geometric mean Cmax, AUC(0–t), and AUC(0–∞) fall into 80%–125% of the reference drug in both fasting and fed states. The reference drug should preferably be the original brand-name drug, then (if not available) an internationally-recognized generic approved by a developed country, then (if still not available) an internationally-recognized generic approved domestically – this is to avoid deviation from the original drug by serial use of generics as reference. If pharmacokinetic values such as Cmax do not apply to the type of drug (e.g. if the drug is not absorbed orally), comparisons can be made using other means such as dose-response curves. According to Wei et al. (2022), the Consistency Evaluation Policy increased R&D spending for Chinese pharmaceutical companies, especially among private and high-yielding ones. Liu et al. (2023) argue that the Policy increased the innovation quality of the Chinese pharmaceutical industry. Bioequivalence issues While the FDA maintains that approved generic drugs are equivalent to their branded counterparts, bioequivalence problems have been reported by physicians and patients for many drugs. Certain classes of drugs are suspected to be particularly problematic because of their chemistry. Some of these include chiral drugs, poorly absorbed drugs, and cytotoxic drugs.
In addition, complex delivery mechanisms can cause bioequivalence variances. Physicians are cautioned to avoid switching patients from branded to generic, or between different generic manufacturers, when prescribing anti-epileptic drugs, warfarin, and levothyroxine. Major issues were raised in the verification of bioequivalence when multiple generic versions of an FDA-approved generic drug were found not to be equivalent in efficacy and side effect profiles. In 2007, two providers of consumer information on nutritional products and supplements, ConsumerLab.com and The People's Pharmacy, released the results of comparative tests of different brands of bupropion. The People's Pharmacy received multiple reports of increased side effects and decreased efficacy of generic bupropion, which prompted it to ask ConsumerLab.com to test the products in question. The tests showed that some generic versions of Wellbutrin XL 300 mg did not perform the same as the brand-name pill in laboratory tests. The FDA investigated these complaints and concluded that the generic version is equivalent to Wellbutrin XL in regard to bioavailability of bupropion and its main active metabolite hydroxybupropion. The FDA also said that coincidental natural mood variation is the most likely explanation for the apparent worsening of depression after the switch from Wellbutrin XL to Budeprion XL. After several years of denying patient reports, in 2012 the FDA reversed this opinion, announcing that "Budeprion XL 300 mg fails to demonstrate therapeutic equivalence to Wellbutrin XL 300 mg." The FDA did not test the bioequivalence of any of the other generic versions of Wellbutrin XL 300 mg, but requested that the four manufacturers submit data on this question to the FDA by March 2013. As of October 2013, the FDA had determined that the formulations from some manufacturers were not bioequivalent. In 2004, Ranbaxy was revealed to have been falsifying data regarding the generic drugs it was manufacturing. As a result, 30 products were removed from US markets and Ranbaxy paid $500 million in fines. The FDA investigated many Indian drug manufacturers after this was discovered, and as a result at least 12 companies have been banned from shipping drugs to the US. In 2017, the European Medicines Agency recommended suspension of a number of nationally approved medicines for which bioequivalence studies were conducted by Micro Therapeutic Research Labs in India, due to inspections identifying misrepresentation of study data and deficiencies in documentation and data handling. See also Generic drug Pharmacokinetics Clinical trial Abbreviated New Drug Application References External links Hussain AS, et al. The Biopharmaceutics Classification System: Highlights of the FDA's Draft Guidance Office of Pharmaceutical Science, Center for Drug Evaluation and Research, Food and Drug Administration. Mills D (2005). Regulatory Agencies Do Not Require Clinical Trials To Be Expensive International Biopharmaceutical Association: IBPA Publications. FDA CDER Office of Generic Drugs – further U.S. information on bioequivalence testing and generic drugs Proposal to waive in vivo bioequivalence requirements for WHO Model List of Essential Medicines immediate-release, solid oral dosage forms. WHO Technical Report Series, No. 937, 2006, Annex 8. Guidance for organizations performing in vivo bioequivalence studies (revision). WHO Technical Report Series 996, 2016, Annex 9.
General background notes and list of international comparator pharmaceutical products. WHO Technical Report Series 1003, 2017, Annex 5. WHO List of International Comparator products (September 2016) Pharmacokinetics Clinical research Life sciences industry
Bioequivalence
[ "Chemistry", "Biology" ]
2,243
[ "Pharmacology", "Life sciences industry", "Pharmacokinetics" ]
998,156
https://en.wikipedia.org/wiki/Haze
Haze is traditionally an atmospheric phenomenon in which dust, smoke, and other dry particulates suspended in air obscure visibility and the clarity of the sky. The World Meteorological Organization manual of codes includes a classification of particulates causing horizontal obscuration into categories of fog, ice fog, steam fog, mist, haze, smoke, volcanic ash, dust, sand, and snow. Sources for particles that cause haze include farming (stubble burning, ploughing in dry weather), traffic, industry, windy weather, volcanic activity and wildfires. Seen from afar (e.g. from an approaching airplane) and depending on the direction of view with respect to the Sun, haze may appear brownish or bluish, while mist tends to be bluish grey instead. Whereas haze often is considered a phenomenon occurring in dry air, mist formation is a phenomenon in saturated, humid air. However, haze particles may act as condensation nuclei that lead to the subsequent vapor condensation and formation of mist droplets; such forms of haze are known as "wet haze". In meteorological literature, the word haze is generally used to denote visibility-reducing aerosols of the wet type suspended in the atmosphere. Such aerosols commonly arise from complex chemical reactions that occur as sulfur dioxide gases emitted during combustion are converted into small droplets of sulfuric acid. The reactions are enhanced in the presence of sunlight, high relative humidity, and an absence of air flow (wind). A small component of wet-haze aerosols appears to be derived from compounds released by trees when burning, such as terpenes. For all these reasons, wet haze tends to be primarily a warm-season phenomenon. Large areas of haze covering many thousands of kilometers may be produced under extensive favorable conditions each summer. Air pollution Haze often occurs when suspended dust and smoke particles accumulate in relatively dry air. When weather conditions block the dispersal of smoke and other pollutants they concentrate and form a usually low-hanging shroud that impairs visibility and may become a respiratory health threat if excessively inhaled. Industrial pollution can result in dense haze, which is known as smog. Since 1991, haze has been a particularly acute problem in Southeast Asia. The main source of the haze has been smoke from fires occurring in Sumatra and Borneo which dispersed over a wide area. In response to the 1997 Southeast Asian haze, the ASEAN countries agreed on a Regional Haze Action Plan (1997) as an attempt to reduce haze. In 2002, all ASEAN countries signed the Agreement on Transboundary Haze Pollution, but the pollution is still a problem there today. Under the agreement, the ASEAN secretariat hosts a co-ordination and support unit. During the 2013 Southeast Asian haze, Singapore experienced a record high pollution level, with the 3-hour Pollutant Standards Index reaching a record high of 401. In the United States, the Interagency Monitoring of Protected Visual Environments (IMPROVE) program was developed as a collaborative effort between the US EPA and the National Park Service in order to establish the chemical composition of haze in National Parks and establish air pollution control measures to restore the visibility of the air to pre-industrial levels. Additionally, the Clean Air Act requires that any current visibility problems be addressed and remedied, and future visibility problems be prevented, in 156 Class I Federal areas located throughout the United States.
A full list of these areas is available on EPA's website. In addition to the severe health issues caused by haze from air pollution, dust storm particles, and bush fire smoke, reduction in irradiance is the most dominant impact of these sources of haze and a growing issue for photovoltaic production as the solar industry grows. Smog also lowers agricultural yield and it has been proposed that pollution controls could increase agricultural production in China. These effects are negative for both sides of agrivoltaics (the combination of photovoltaic electricity production and food from agriculture). International disputes Transboundary haze Haze is no longer just a domestic problem. It has become one of the causes of international disputes among neighboring countries. Haze can migrate to adjacent countries in the path of the wind and thereby pollute other countries as well, even if the haze does not originate there. One of the most recent problems has occurred in Southeast Asia, largely affecting the nations of Indonesia, Malaysia and Singapore. In 2013, due to forest fires in Indonesia, Kuala Lumpur and surrounding areas became shrouded in a pall of noxious fumes dispersed from Indonesia, bringing a smell of ash and coal for more than a week, in the country's worst environmental crisis since 1997. The main sources of the haze are Indonesia's Sumatra Island, Indonesian areas of Borneo, and Riau, where farmers, plantation owners and miners have set hundreds of fires in the forests to clear land during dry weather. Winds blew most of the particulates and fumes across the narrow Strait of Malacca to Malaysia, although parts of Indonesia in the path were also affected. The 2015 Southeast Asian haze was another major crisis of air quality, although there were occasions such as the 2006 and 2019 haze events which were less impactful than the three major Southeast Asian haze events of 1997, 2013 and 2015. Obscuration Haze causes issues in the area of terrestrial photography and imaging, where the penetration of large amounts of dense atmosphere may be necessary to image distant subjects. This results in the visual effect of a loss of contrast in the subject, due to the effect of light scattering and reflection through the haze particles. For these reasons, sunrise and sunset colors and possibly the sun itself appear subdued on hazy days, and stars may be obscured by haze at night. In some cases, attenuation by haze is so great that, toward sunset, the sun disappears altogether before even reaching the horizon. Haze can be defined as an aerial form of the Tyndall effect; therefore, unlike other atmospheric effects such as cloud, mist and fog, haze is spectrally selective: shorter (blue) wavelengths are scattered more, and longer (red/infrared) wavelengths are scattered less. For this reason, many super-telephoto lenses often incorporate yellow light filters or coatings to enhance image contrast. Infrared (IR) imaging may also be used to penetrate haze over long distances, with a combination of IR-pass optical filters and IR-sensitive detectors at the intended destination.
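A simplified numerical illustration of this wavelength selectivity, assuming pure Rayleigh-like 1/λ^4 scaling; real haze particles are larger than air molecules, so the true wavelength dependence is weaker, but the direction of the bias that favors red and infrared penetration is the same.

# Relative scattering strength under a Rayleigh-like 1/wavelength^4 model
# (a simplification; actual haze scattering depends on particle size).
wavelengths_nm = {"blue": 450, "green": 550, "red": 650, "near-IR": 850}

ref = wavelengths_nm["red"]
for name, lam in wavelengths_nm.items():
    relative = (ref / lam) ** 4   # scattering strength relative to red light
    print(f"{name:8s} ({lam} nm): scatters {relative:5.2f}x as strongly as red")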
See also Arctic haze ASEAN Agreement on Transboundary Haze Pollution Asian brown cloud Asian Dust Coefficient of haze Convention on Long-Range Transboundary Air Pollution Fog Mist Saharan Air Layer Southeast Asian haze Smog Trail Smelter dispute Notes External links National Pollutant Inventory - Particulate matter fact sheet Those hazy days of summer Haze over the central and eastern United States Chemical Composition of Haze in US National Parks: Views Visibility Database Visibility Air pollution Atmospheric optical phenomena Psychrometrics Pollution Fog
Haze
[ "Physics", "Mathematics" ]
1,390
[ "Visibility", "Physical phenomena", "Earth phenomena", "Fog", "Physical quantities", "Quantity", "Optical phenomena", "Wikipedia categories named after physical quantities", "Atmospheric optical phenomena" ]
999,516
https://en.wikipedia.org/wiki/Gustav%20Zeuner
Gustav Anton Zeuner (30 November 1828 – 17 October 1907) was a German physicist, engineer and epistemologist, considered the founder of technical thermodynamics and of the Dresden School of Thermodynamics. Life University and Revolution Zeuner was born in Chemnitz, Saxony. His first training in the subject of engineering was at the Chemnitz Königliche Gewerbeschule (Royal Vocational School), today Chemnitz University of Technology, where he studied from 1843 to 1848. In 1848 he moved the short distance to the Bergakademie (Mining Academy) in Freiberg, today also a university of technology, where he studied mining and metallurgy. He developed close links with one of his professors, the famous mineralogist Albin Julius Weisbach, with whom he worked on several projects. The university course was disrupted, however, during the revolutions which took place all over Germany. Large popular assemblies and mass demonstrations took place, primarily demanding freedom of the press, freedom of assembly, arming of the people, and a national German parliament. Zeuner joined the revolutionaries on the barricades in Dresden during the May Uprising in 1849. Unlike many of his compatriots, some of whom were sentenced to death or sent to the workhouse, Zeuner was pardoned. He was able to complete his course, and even completed his PhD at the University of Leipzig in 1853, but was banned from ever teaching at any Saxon university. Escape to Zürich In 1853, Zeuner took over as the editor of the engineering magazine "Der Civilingenieur. Zeitschrift für das Ingenieurwesen", the first German magazine specialising in mechanics, which ran until 1896. He continued in this position until 1857, even after moving to Zürich in 1855 to work as a professor of technical mechanics at the ETH Zürich, the Swiss Federal Institute of Technology in Zürich. There he worked alongside famous engineers such as Franz Reuleaux. Other Dresden revolutionaries had fled their home country for Zürich (Richard Wagner, Gottfried Semper, Theodor Mommsen). It was in Zürich that Zeuner made his model of a locomotive front end in 1858; he recognised its potential for creating momentum but was only interested in the theory and did not develop the design any further. Also in Zürich (in 1869) Zeuner invented the three-dimensional population graph now sometimes known as a Zeuner diagram but more often as a Lexis diagram after Wilhelm Lexis who modified the idea slightly. From 1859 Zeuner worked as the stand-in director of the ETH Zürich, and in May 1865 he took over the position officially. His former professor, Albin Weisbach, commemorated his friend's acquisition of the post by naming a mineral after him - the transparent green crystal zeunerite. Return to Germany In 1871 Zeuner returned to Germany and was once again able to work with Weisbach when he succeeded his old friend as director of the Freiberg Mining Academy. He also taught there until 1875 as a professor of mechanics and the study of mining machinery. This was now possible, despite the teaching ban which had been placed on him, because of the amnesty granted to all the revolutionaries in 1862. In 1873, while still director of Freiberg Mining Academy, Zeuner also took on the post of director at the Royal Saxon Polytechnicum in Dresden (now Technische Universität Dresden). Zeuner's efforts there led to the introduction of the humanities; the extension of the range of subjects taught resulted in the polytechnic's rise to a full-scale polytechnic university in 1890.
In 1889, aged 61, Zeuner gave up his position as director of the polytechnic to work as a lecturer until his retirement in 1897. On retiring he was made an emeritus professor. Zeuner died in Dresden in 1907. Gustav Zeuner Award Since 1993, the German Association of Engineers (Verein Deutscher Ingenieure or VDI) has presented students with the Gustav Zeuner Award for the best engineering thesis in Germany; Zeuner supported the Dresden branch of the VDI at its foundation in 1897. Publications Die Schiebersteuerungen mit besonderer Berücksichtigung der Lokomotivsteuerungen (Slide-valve controls with particular emphasis on locomotive controls) Freiberg 1858 Grundzüge der mechanischen Wärmetheorie (Basics of mechanical heat theory) 1860 Technische Thermodynamik (Technical Thermodynamics) 1887; translated into English in 1907 as Technical Thermodynamics See also Piston valve (steam engine) Zeuner water turbine References Further reading Das Leben und Wirken von Gustav Anton Zeuner by Gerd Grabow, published 1984 by Deutscher Verlag für Grundstoffanalyse. German mechanical engineers 19th-century German physicists People of the Revolutions of 1848 People from the Kingdom of Saxony Engineers from Chemnitz Leipzig University alumni 1828 births 1907 deaths Recipients of German royal pardons Academic staff of ETH Zurich Thermodynamicists
Gustav Zeuner
[ "Physics", "Chemistry" ]
1,052
[ "Thermodynamics", "Thermodynamicists" ]
999,701
https://en.wikipedia.org/wiki/Rate%20of%20convergence
In mathematical analysis, particularly numerical analysis, the rate of convergence and order of convergence of a sequence that converges to a limit are any of several characterizations of how quickly that sequence approaches its limit. These are broadly divided into rates and orders of convergence that describe how quickly a sequence further approaches its limit once it is already close to it, called asymptotic rates and orders of convergence, and those that describe how quickly sequences approach their limits from starting points that are not necessarily close to their limits, called non-asymptotic rates and orders of convergence. Asymptotic behavior is particularly useful for deciding when to stop a sequence of numerical computations, for instance once a target precision has been reached with an iterative root-finding algorithm, but pre-asymptotic behavior is often crucial for determining whether to begin a sequence of computations at all, since it may be impossible or impractical to ever reach a target precision with a poorly chosen approach. Asymptotic rates and orders of convergence are the focus of this article. In practical numerical computations, asymptotic rates and orders of convergence follow two common conventions for two types of sequences: the first for sequences of iterations of an iterative numerical method and the second for sequences of successively more accurate numerical discretizations of a target. In formal mathematics, rates of convergence and orders of convergence are often described comparatively using asymptotic notation commonly called "big O notation," which can be used to encompass both of the prior conventions; this is an application of asymptotic analysis. For iterative methods, a sequence that converges to is said to have asymptotic order of convergence and asymptotic rate of convergence if Where methodological precision is required, these rates and orders of convergence are known specifically as the rates and orders of Q-convergence, short for quotient-convergence, since the limit in question is a quotient of error terms. The rate of convergence may also be called the asymptotic error constant, and some authors will use rate where this article uses order. Series acceleration methods are techniques for improving the rate of convergence of the sequence of partial sums of a series and possibly its order of convergence, also. Similar concepts are used for sequences of discretizations. For instance, ideally the solution of a differential equation discretized via a regular grid will converge to the solution of the continuous equation as the grid spacing goes to zero, and if so the asymptotic rate and order of that convergence are important properties of the gridding method. A sequence of approximate grid solutions of some problem that converges to a true solution with a corresponding sequence of regular grid spacings that converge to 0 is said to have asymptotic order of convergence and asymptotic rate of convergence if where the absolute value symbols stand for a metric for the space of solutions such as the uniform norm. Similar definitions also apply for non-grid discretization schemes such as the polygon meshes of a finite element method or the basis sets in computational chemistry: in general, the appropriate definition of the asymptotic rate will involve the asymptotic limit of the ratio of an approximation error term above to an asymptotic order power of a discretization scale parameter below. 
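In standard notation, writing the iterates as x_k with limit L, and a discretized solution as u_h with scale parameter h, the two defining limits described above are usually stated as follows; this is a restatement in standard notation consistent with the surrounding prose, not a quotation of the article's own formulas.

% q is the asymptotic order of convergence and mu the asymptotic rate.
\[
  \lim_{k \to \infty} \frac{| x_{k+1} - L |}{| x_k - L |^{q}} = \mu
  \qquad \text{(iterative method of order } q \text{ and rate } \mu\text{)},
\]
\[
  \lim_{h \to 0} \frac{\| u_h - u \|}{h^{q}} = \mu
  \qquad \text{(discretization method of order } q \text{ and rate } \mu\text{)}.
\]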
In general, comparatively, one sequence that converges to a limit is said to asymptotically converge more quickly than another sequence that converges to a limit if and the two are said to asymptotically converge with the same order of convergence if the limit is any positive finite value. The two are said to be asymptotically equivalent if the limit is equal to one. These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis and find wide application in mathematical analysis as a whole, including numerical analysis, real analysis, complex analysis, and functional analysis. Asymptotic rates of convergence for iterative methods Definitions Suppose that the sequence of iterates of an iterative method converges to the limit number as . The sequence is said to converge with order to and with a rate of convergence if the limit of quotients of absolute differences of sequential iterates from their limit satisfies for some positive constant if and if . Other more technical rate definitions are needed if the sequence converges but or the limit does not exist. This definition is technically called Q-convergence, short for quotient-convergence, and the rates and orders are called rates and orders of Q-convergence when that technical specificity is needed. , below, is an appropriate alternative when this limit does not exist. Sequences with larger orders converge more quickly than those with smaller order, and those with smaller rates converge more quickly than those with larger rates for a given order. This "smaller rates converge more quickly" behavior among sequences of the same order is standard but it can be counterintuitive. Therefore it is also common to define as the rate; this is the "number of extra decimals of precision per iterate" for sequences that converge with order 1. Integer powers of are common and are given common names. Convergence with order and is called linear convergence and the sequence is said to converge linearly to . Convergence with and any is called quadratic convergence and the sequence is said to converge quadratically. Convergence with and any is called cubic convergence. However, it is not necessary that be an integer. For example, the secant method, when converging to a regular, simple root, has an order of the golden ratio φ ≈ 1.618. The common names for integer orders of convergence connect to asymptotic big O notation, where the convergence of the quotient implies These are linear, quadratic, and cubic polynomial expressions when is 1, 2, and 3, respectively. More precisely, the limits imply the leading order error is exactly which can be expressed using asymptotic small o notation as In general, when for a sequence or for any sequence that satisfies those sequences are said to converge superlinearly (i.e., faster than linearly). A sequence is said to converge sublinearly (i.e., slower than linearly) if it converges and Importantly, it is incorrect to say that these sublinear-order sequences converge linearly with an asymptotic rate of convergence of 1. A sequence converges logarithmically to if the sequence converges sublinearly and also R-convergence The definitions of Q-convergence rates have the shortcoming that they do not naturally capture the convergence behavior of sequences that do converge, but do not converge with an asymptotically constant rate with every step, so that the Q-convergence limit does not exist. 
One class of examples is the staggered geometric progressions that get closer to their limits only every other step or every several steps, for instance the example detailed below (where is the floor function applied to ). The defining Q-linear convergence limits do not exist for this sequence because one subsequence of error quotients starting from odd steps converges to 1 and another subsequence of quotients starting from even steps converges to 1/4. When two subsequences of a sequence converge to different limits, the sequence does not itself converge to a limit. In cases like these, a closely related but more technical definition of rate of convergence called R-convergence is more appropriate. The "R-" prefix stands for "root." A sequence that converges to is said to converge at least R-linearly if there exists an error-bounding sequence such that and converges Q-linearly to zero; analogous definitions hold for R-superlinear convergence, R-sublinear convergence, R-quadratic convergence, and so on. Any error bounding sequence provides a lower bound on the rate and order of R-convergence and the greatest lower bound gives the exact rate and order of R-convergence. As for Q-convergence, sequences with larger orders converge more quickly and those with smaller rates converge more quickly for a given order, so these greatest-rate-lower-bound error-upper-bound sequences are those that have the greatest possible and the smallest possible given that . For the example given above, the tight bounding sequence converges Q-linearly with rate 1/2, so converges R-linearly with rate 1/2. Generally, for any staggered geometric progression , the sequence will not converge Q-linearly but will converge R-linearly with rate These examples demonstrate why the "R" in R-linear convergence is short for "root." Examples The geometric progression converges to . Plugging the sequence into the definition of Q-linear convergence (i.e., order of convergence 1) shows that Thus converges Q-linearly with a convergence rate of ; see the first plot of the figure below. More generally, for any initial value in the real numbers and a real number common ratio between -1 and 1, a geometric progression converges linearly with rate and the sequence of partial sums of a geometric series also converges linearly with rate . The same holds also for geometric progressions and geometric series parameterized by any complex numbers The staggered geometric progression using the floor function that gives the largest integer that is less than or equal to converges R-linearly to 0 with rate 1/2, but it does not converge Q-linearly; see the second plot of the figure below. The defining Q-linear convergence limits do not exist for this sequence because one subsequence of error quotients starting from odd steps converges to 1 and another subsequence of quotients starting from even steps converges to 1/4. When two subsequences of a sequence converge to different limits, the sequence does not itself converge to a limit. Generally, for any staggered geometric progression , the sequence will not converge Q-linearly but will converge R-linearly with rate these examples demonstrate why the "R" in R-linear convergence is short for "root." The sequence converges to zero Q-superlinearly. In fact, it is quadratically convergent with a quadratic convergence rate of 1. It is shown in the third plot of the figure below. 
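A short Python check of the behaviors just described: for a geometric progression with ratio 1/2 the error quotients |e_{k+1}|/|e_k| settle at 1/2 (Q-linear convergence), while for a sequence in which each error is the square of the previous one the quotients |e_{k+1}|/|e_k|^2 settle at a constant (Q-quadratic convergence). The starting values are arbitrary illustrative choices.

# Numerical check of Q-linear and Q-quadratic convergence via error quotients.
def quotients(errors, order):
    """|e_{k+1}| / |e_k|^order for consecutive error terms."""
    return [abs(e1) / abs(e0) ** order for e0, e1 in zip(errors, errors[1:])]

# Q-linear example: a geometric progression with ratio 1/2 and limit 0.
linear = [1.0]
for _ in range(10):
    linear.append(linear[-1] / 2)

# Q-quadratic example: each error is the square of the previous one.
quadratic = [0.5]
for _ in range(5):
    quadratic.append(quadratic[-1] ** 2)

print("linear    quotients (order 1):", [round(q, 3) for q in quotients(linear, 1)])
print("quadratic quotients (order 2):", [round(q, 3) for q in quotients(quadratic, 2)])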
Finally, the sequence converges to zero Q-sublinearly and logarithmically and its convergence is shown as the fourth plot of the figure below. Convergence rates to fixed points of recurrent sequences Recurrent sequences , called fixed point iterations, define discrete time autonomous dynamical systems and have important general applications in mathematics through various fixed-point theorems about their convergence behavior. When f is continuously differentiable, given a fixed point p, such that , the fixed point is an attractive fixed point and the recurrent sequence will converge at least linearly to p for any starting value sufficiently close to p. If and , then the recurrent sequence will converge at least quadratically, and so on. If , then the fixed point is a repulsive fixed point and sequences cannot converge to p from its immediate neighborhoods, though they may still jump to p directly from outside of its local neighborhoods. Order estimation A practical method to calculate the order of convergence for a sequence generated by a fixed point iteration is to calculate the following sequence, which converges to the order : For numerical approximation of an exact value through a numerical method of order see. Accelerating convergence rates Many methods exist to accelerate the convergence of a given sequence, i.e., to transform one sequence into a second sequence that converges more quickly to the same limit. Such techniques are in general known as "series acceleration" methods. These may reduce the computational costs of approximating the limits of the original sequences. One example of series acceleration by sequence transformation is Aitken's delta-squared process. These methods in general, and in particular Aitken's method, do not typically increase the order of convergence and thus they are useful only if initially the convergence is not faster than linear: if converges linearly, Aitken's method transforms it into a sequence that still converges linearly (except for pathologically designed special cases), but faster in the sense that . On the other hand, if the convergence is already of order ≥ 2, Aitken's method will bring no improvement. Asymptotic rates of convergence for discretization methods Definitions A sequence of discretized approximations of some continuous-domain function that converges to this target, together with a corresponding sequence of discretization scale parameters that converge to 0, is said to have asymptotic order of convergence and asymptotic rate of convergence if for some positive constants and and using to stand for an appropriate distance metric on the space of solutions, most often either the uniform norm, the absolute difference, or the Euclidean distance. Discretization scale parameters may be spacings of a regular grid in space or in time, the inverse of the number of points of a grid in one dimension, an average or maximum distance between points in a polygon mesh, the single-dimension spacings of an irregular sparse grid, or a characteristic quantum of energy or momentum in a quantum mechanical basis set. When all the discretizations are generated using a single common method, it is common to discuss the asymptotic rate and order of convergence for the method itself rather than any particular discrete sequences of discretized solutions. 
In these cases one considers a single abstract discretized solution generated using the method with a scale parameter and then the method is said to have asymptotic order of convergence and asymptotic rate of convergence if again for some positive constants and and an appropriate metric This implies that the error of a discretization asymptotically scales like the discretization's scale parameter to the power, or using asymptotic big O notation. More precisely, it implies the leading order error is which can be expressed using asymptotic small o notation as In some cases multiple rates and orders for the same method but with different choices of scale parameter may be important, for instance for finite difference methods based on multidimensional grids where the different dimensions have different grid spacings or for finite element methods based on polygon meshes where choosing either average distance between mesh points or maximum distance between mesh points as scale parameters may imply different orders of convergence. In some especially technical contexts, discretization methods' asymptotic rates and orders of convergence will be characterized by several scale parameters at once with the value of each scale parameter possibly affecting the asymptotic rate and order of convergence of the method with respect to the other scale parameters. Example Consider the ordinary differential equation with initial condition . We can approximate a solution to this one-dimensional equation using a sequence applying the forward Euler method for numerical discretization using any regular grid spacing and grid points indexed by as follows: which implies the first-order linear recurrence with constant coefficients Given , the sequence satisfying that recurrence is the geometric progression The exact analytical solution to the differential equation is , corresponding to the following Taylor expansion in : Therefore the error of the discrete approximation at each discrete point is For any specific , given a sequence of forward Euler approximations , each using grid spacings that divide so that , one has for any sequence of grids with successively smaller grid spacings . Thus converges to pointwise with a convergence order and asymptotic error constant at each point Similarly, the sequence converges uniformly with the same order and with rate on any bounded interval of , but it does not converge uniformly on the unbounded set of all positive real values, Comparing asymptotic rates of convergence Definitions In asymptotic analysis in general, one sequence that converges to a limit is said to asymptotically converge to with a faster order of convergence than another sequence that converges to in a shared metric space with distance metric such as the real numbers or complex numbers with the ordinary absolute difference metrics, if the two are said to asymptotically converge to with the same order of convergence if for some positive finite constant and the two are said to asymptotically converge to with the same rate and order of convergence if These comparative definitions of rate and order of asymptotic convergence are fundamental in asymptotic analysis. For the first two of these there are associated expressions in asymptotic O notation: the first is that in small o notation and the second is that in Knuth notation. 
The third is also called asymptotic equivalence, expressed Examples For any two geometric progressions and with shared limit zero, the two sequences are asymptotically equivalent if and only if both and They converge with the same order if and only if converges with a faster order than if and only if The convergence of any geometric series to its limit has error terms that are equal to a geometric progression, so similar relationships hold among geometric series as well. Any sequence that is asymptotically equivalent to a convergent geometric sequence may be either be said to "converge geometrically" or "converge exponentially" with respect to the absolute difference from its limit, or it may be said to "converge linearly" relative to a logarithm of the absolute difference such as the "number of decimals of precision." The latter is standard in numerical analysis. For any two sequences of elements proportional to an inverse power of and with shared limit zero, the two sequences are asymptotically equivalent if and only if both and They converge with the same order if and only if converges with a faster order than if and only if For any sequence with a limit of zero, its convergence can be compared to the convergence of the shifted sequence rescalings of the shifted sequence by a constant and scaled -powers of the shifted sequence, These comparisons are the basis for the Q-convergence classifications for iterative numerical methods as described above: when a sequence of iterate errors from a numerical method is asymptotically equivalent to the shifted, exponentiated, and rescaled sequence of iterate errors it is said to converge with order and rate Non-asymptotic rates of convergence Non-asymptotic rates of convergence do not have the common, standard definitions that asymptotic rates of convergence have. Among formal techniques, Lyapunov theory is one of the most powerful and widely applied frameworks for characterizing and analyzing non-asymptotic convergence behavior. For iterative methods, one common practical approach is to discuss these rates in terms of the number of iterates or the computer time required to reach close neighborhoods of a limit from starting points far from the limit. The non-asymptotic rate is then an inverse of that number of iterates or computer time. In practical applications, an iterative method that required fewer steps or less computer time than another to reach target accuracy will be said to have converged faster than the other, even if its asymptotic convergence is slower. These rates will generally be different for different starting points and different error thresholds for defining the neighborhoods. It is most common to discuss summaries of statistical distributions of these single point rates corresponding to distributions of possible starting points, such as the "average non-asymptotic rate," the "median non-asymptotic rate," or the "worst-case non-asymptotic rate" for some method applied to some problem with some fixed error threshold. These ensembles of starting points can be chosen according to parameters like initial distance from the eventual limit in order to define quantities like "average non-asymptotic rate of convergence from a given distance." For discretized approximation methods, similar approaches can be used with a discretization scale parameter such as an inverse of a number of grid or mesh points or a Fourier series cutoff frequency playing the role of inverse iterate number, though it is not especially common. 
For any problem, there is a greatest discretization scale parameter compatible with a desired accuracy of approximation, and it may not be as small as required for the asymptotic rate and order of convergence to provide accurate estimates of the error. In practical applications, when one discretization method gives a desired accuracy with a larger discretization scale parameter than another it will often be said to converge faster than the other, even if its eventual asymptotic convergence is slower. References Numerical analysis Convergence
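The forward Euler example worked through above can be checked numerically. The following Python sketch, with arbitrarily chosen step sizes, solves y' = -y, y(0) = 1 on [0, 1] for successively halved grid spacings and estimates the observed order of convergence from the errors at t = 1, in the spirit of the order-estimation idea mentioned above.

# Empirical estimate of the order of convergence of forward Euler on y' = -y.
import math

def euler_error(h):
    """|y_N - e^{-1}| for forward Euler applied to y' = -y, y(0) = 1 on [0, 1]."""
    steps = round(1.0 / h)
    y = 1.0
    for _ in range(steps):
        y += h * (-y)              # forward Euler update y_{n+1} = y_n + h f(y_n)
    return abs(y - math.exp(-1.0))

hs = [0.1 / 2 ** k for k in range(5)]
errs = [euler_error(h) for h in hs]
for (h0, e0), (h1, e1) in zip(zip(hs, errs), zip(hs[1:], errs[1:])):
    order = math.log(e0 / e1) / math.log(h0 / h1)   # observed order, close to 1
    print(f"h = {h1:7.5f}  error = {e1:.2e}  observed order = {order:.3f}")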
Rate of convergence
[ "Mathematics" ]
4,227
[ "Computational mathematics", "Mathematical relations", "Approximations", "Numerical analysis" ]
590,920
https://en.wikipedia.org/wiki/Hemodialysis
Hemodialysis, also spelled haemodialysis, or simply dialysis, is a process of filtering the blood of a person whose kidneys are not working normally. This type of dialysis achieves the extracorporeal removal of waste products such as creatinine and urea, and free water, from the blood when the kidneys are in a state of kidney failure. Hemodialysis is one of three renal replacement therapies (the other two being kidney transplant and peritoneal dialysis). An alternative method for extracorporeal separation of blood components such as plasma or cells is apheresis. Hemodialysis can be an outpatient or inpatient therapy. Routine hemodialysis is conducted in a dialysis outpatient facility, either a purpose-built room in a hospital or a dedicated, stand-alone clinic. Less frequently, hemodialysis is done at home. Dialysis treatments in a clinic are initiated and managed by specialized staff made up of nurses and technicians; dialysis treatments at home can be self-initiated and managed or done jointly with the assistance of a trained helper who is usually a family member. Medical uses Hemodialysis is the choice of renal replacement therapy for patients who need dialysis acutely, and for many patients as maintenance therapy. It provides excellent, rapid clearance of solutes. A nephrologist (a medical kidney specialist) decides when hemodialysis is needed and the various parameters for a dialysis treatment. These include frequency (how many treatments per week), length of each treatment, and the blood and dialysis solution flow rates, as well as the size of the dialyzer. The composition of the dialysis solution is also sometimes adjusted in terms of its sodium, potassium, and bicarbonate levels. In general, the larger the body size of an individual, the more dialysis they will need. In North America and the UK, 3–4 hour treatments (sometimes up to 5 hours for larger patients) given 3 times a week are typical. Twice-a-week sessions are limited to patients who have substantial residual kidney function. Four sessions per week are often prescribed for larger patients, as well as patients who have trouble with fluid overload. Finally, there is growing interest in short daily home hemodialysis, which involves 1.5–4 hour sessions given 5–7 times per week, usually at home. There is also interest in nocturnal dialysis, which involves dialyzing a patient, usually at home, for 8–10 hours per night, 3–6 nights per week. Nocturnal in-center dialysis, 3–4 times per week, is also offered at a handful of dialysis units in the United States. Adverse effects Disadvantages Restricts independence, as people undergoing this procedure cannot travel around because of supplies' availability Requires more supplies such as high water quality and electricity Requires reliable technology like dialysis machines The procedure is complicated and requires that care givers have more knowledge Requires time to set up and clean dialysis machines, and expense with machines and associated staff Complications Fluid shifts Hemodialysis often involves fluid removal (through ultrafiltration), because most patients with renal failure pass little or no urine. Side effects caused by removing too much fluid and/or removing fluid too rapidly include low blood pressure, fatigue, chest pains, leg cramps, nausea and headaches. These symptoms can occur during the treatment and can persist post treatment; they are sometimes collectively referred to as the dialysis hangover or dialysis washout.
The severity of these symptoms is usually proportionate to the amount and speed of fluid removal. However, the impact of a given amount or rate of fluid removal can vary greatly from person to person and day to day. These side effects can be avoided and/or their severity lessened by limiting fluid intake between treatments or increasing the dose of dialysis, e.g. dialyzing more often or longer per treatment than the standard three times a week, 3–4 hours per treatment schedule. Access-related Since hemodialysis requires access to the circulatory system, patients undergoing hemodialysis may expose their circulatory system to microbes, which can lead to bacteremia, an infection affecting the heart valves (endocarditis) or an infection affecting the bones (osteomyelitis). The risk of infection varies depending on the type of access used (see below). Bleeding may also occur; again, the risk varies depending on the type of access used. Infections can be minimized by strictly adhering to infection control best practices. Venous needle dislodgement Venous needle dislodgement (VND) is a potentially fatal complication of hemodialysis where the patient experiences rapid blood loss due to a faltering attachment of the needle to the venous access point. Anticoagulation-related Unfractionated heparin (UFH) is the most commonly used anticoagulant in hemodialysis, as it is generally well tolerated and can be quickly reversed with protamine sulfate. Low-molecular-weight heparin (LMWH) is, however, becoming increasingly popular and is now the norm in western Europe. Compared to UFH, LMWH has the advantage of an easier mode of administration and reduced bleeding, but the effect cannot be easily reversed. Heparin can infrequently cause a low platelet count due to a reaction called heparin-induced thrombocytopenia (HIT). The risk of HIT is lower with LMWH compared to UFH. In such patients, alternative anticoagulants may be used. Even though HIT causes a low platelet count, it can paradoxically predispose to thrombosis. When comparing UFH to LMWH for the risk of adverse effects, the evidence is uncertain as to which anticoagulation approach has the fewest side effects and what is the ideal treatment strategy for preventing blood clots during hemodialysis. In patients at high risk of bleeding, dialysis can be done without anticoagulation. First-use syndrome First-use syndrome is a rare but severe anaphylactic reaction to the artificial kidney. Its symptoms include sneezing, wheezing, shortness of breath, back pain, chest pain, or sudden death. It can be caused by residual sterilant in the artificial kidney or the material of the membrane itself. In recent years, the incidence of first-use syndrome has decreased, due to an increased use of gamma irradiation, steam sterilization, or electron-beam radiation instead of chemical sterilants, and the development of new semipermeable membranes of higher biocompatibility. New methods of processing previously acceptable components of dialysis must always be considered. For example, in 2008, a series of first-use type of reactions, including deaths, occurred due to heparin contaminated during the manufacturing process with oversulfated chondroitin sulfate. Cardiovascular Long-term complications of hemodialysis include hemodialysis-associated amyloidosis, neuropathy and various forms of heart disease. Increasing the frequency and length of treatments has been shown to improve fluid overload and enlargement of the heart that is commonly seen in such patients.
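A minimal sketch of the arithmetic behind this point: spreading the same weekly fluid removal over longer or more frequent sessions reduces the volume that must be removed per session and per hour. The weekly fluid gain and the schedules below are made-up illustrative values, not clinical recommendations.

# Per-session volume and average hourly fluid removal rate for several schedules.
def ultrafiltration_rate(fluid_gain_litres, session_hours):
    """Average fluid removal rate in litres per hour for one session."""
    return fluid_gain_litres / session_hours

weekly_gain = 6.0  # litres of fluid gained per week (assumed value)

schedules = {
    "3 x 4 h (conventional)":    (3, 4.0),
    "3 x 5 h (longer sessions)": (3, 5.0),
    "6 x 2 h (short daily)":     (6, 2.0),
    "5 x 8 h (nocturnal)":       (5, 8.0),
}

for name, (sessions, hours) in schedules.items():
    per_session = weekly_gain / sessions
    rate = ultrafiltration_rate(per_session, hours)
    print(f"{name:26s}: {per_session:.2f} L per session, {rate:.2f} L/h")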
Vitamin deficiency Folate deficiency can occur in some patients having hemodialysis. Electrolyte imbalances Although a dialysate fluid, which is a solution containing diluted electrolytes, is employed for the filtration of blood, haemodialysis can cause an electrolyte imbalance. These imbalances can derive from abnormal concentrations of potassium (hypokalemia, hyperkalemia), and sodium (hyponatremia, hypernatremia). These electrolyte imbalances are associated with increased cardiovascular mortality. Mechanism and technique The principle of hemodialysis is the same as other methods of dialysis; it involves diffusion of solutes across a semipermeable membrane. Hemodialysis utilizes counter current flow, where the dialysate is flowing in the opposite direction to blood flow in the extracorporeal circuit. Counter-current flow maintains the concentration gradient across the membrane at a maximum and increases the efficiency of the dialysis. Fluid removal (ultrafiltration) is achieved by altering the hydrostatic pressure of the dialysate compartment, causing free water and some dissolved solutes to move across the membrane along a created pressure gradient. The dialysis solution that is used may be a sterilized solution of mineral ions and is called dialysate. Urea and other waste products including potassium, and phosphate diffuse into the dialysis solution. However, concentrations of sodium and chloride are similar to those of normal plasma to prevent loss. Sodium bicarbonate is added in a higher concentration than plasma to correct blood acidity. A small amount of glucose is also commonly used. The concentration of electrolytes in the dialysate is adjusted depending on the patient's status before the dialysis. If a high concentration of sodium is added to the dialysate, the patient can become thirsty and end up accumulating body fluids, which can lead to heart damage. On the contrary, low concentrations of sodium in the dialysate solution have been associated with a low blood pressure and intradialytic weight gain, which are markers of improved outcomes. However, the benefits of using a low concentration of sodium have not been demonstrated yet, since these patients can also develop cramps, intradialytic hypotension and low sodium in serum, which are symptoms associated with a high mortality risk. Note that this is a different process to the related technique of hemofiltration. Access Three primary methods are used to gain access to the blood for hemodialysis: an intravenous catheter, an arteriovenous fistula (AV) and a synthetic graft. The type of access is influenced by factors such as the expected time course of a patient's renal failure and the condition of their vasculature. Patients may have multiple access procedures, usually because an AV fistula or graft is maturing and a catheter is still being used. The placement of a catheter is usually done under light sedation, while fistulas and grafts require an operation. Types There are three types of hemodialysis: conventional hemodialysis, daily hemodialysis, and nocturnal hemodialysis. Below is an adaptation and summary from a brochure of The Ottawa Hospital. Conventional hemodialysis Conventional hemodialysis is usually done three times per week, for about three to four hours for each treatment (Sometimes five hours for larger patients), during which the patient's blood is drawn out through a tube at a rate of 200–400 mL/min. 
The tube is connected to a 15-, 16-, or 17-gauge needle inserted in the dialysis fistula or graft, or connected to one port of a dialysis catheter. The blood is then pumped through the dialyzer, and then the processed blood is pumped back into the patient's bloodstream through another tube (connected to a second needle or port). During the procedure, the patient's blood pressure is closely monitored, and if it becomes low, or the patient develops any other signs of low blood volume such as nausea, the dialysis attendant can administer extra fluid through the machine. During the treatment, the patient's entire blood volume (about 5 L) circulates through the machine every 15 minutes. During this process, the dialysis patient's blood is exposed to roughly a week's worth of the average person's water intake. Daily hemodialysis Daily hemodialysis is typically used by those patients who do their own dialysis at home. It is less stressful (more gentle) but does require more frequent access. This is simple with catheters, but more problematic with fistulas or grafts. The "buttonhole technique" can be used for fistulas requiring frequent access, but not for grafts. Daily hemodialysis is usually done for 2 hours six days a week. Nocturnal hemodialysis The procedure of nocturnal hemodialysis is similar to conventional hemodialysis except it is performed three to six nights a week and between six and ten hours per session while the patient sleeps. Equipment The hemodialysis machine pumps the patient's blood and the dialysate through the dialyzer. The newest dialysis machines on the market are highly computerized and continuously monitor an array of safety-critical parameters, including blood (QB) and dialysate (QD) flow rates; dialysis solution conductivity, temperature, and pH; and analysis of the dialysate for evidence of blood leakage or presence of air. Any reading that is out of normal range triggers an audible alarm to alert the patient-care technician who is monitoring the patient. Manufacturers of dialysis machines include companies such as Nipro, Fresenius, Gambro, Baxter, B. Braun, NxStage and Bellco. QB to QD flow rates have to reach a 1:2 ratio, with QB set around 250 mL/min and QD around 500 mL/min, to ensure good dialysis efficiency. Water system An extensive water purification system is critical for hemodialysis. Since dialysis patients are exposed to vast quantities of water, which is mixed with dialysate concentrate to form the dialysate, even trace mineral contaminants or bacterial endotoxins can filter into the patient's blood. Because the damaged kidneys cannot perform their intended function of removing impurities, molecules introduced into the bloodstream from improperly purified water can build up to hazardous levels, causing numerous symptoms or death. Aluminum, chlorine and/or chloramines, fluoride, copper, and zinc, as well as bacterial fragments and endotoxins, have all caused problems in this regard. For this reason, water used in hemodialysis is carefully purified before use. A common approach is a multi-stage purification system. The water is first softened. Next, the water is run through a tank containing activated charcoal to adsorb organic contaminants, chlorine, and chloramines. The water may then be temperature-adjusted if needed. Primary purification is then done by forcing water through a membrane with very tiny pores, a so-called reverse osmosis membrane. This lets the water pass, but holds back even very small solutes such as electrolytes.
Final removal of leftover electrolytes is done in some water systems by passing the water through an electrodeionization (EDI) device, which removes any leftover anions or cations and replaces them with hydroxide and hydrogen ions, respectively, leaving ultrapure water. Even this degree of water purification may be insufficient. The trend lately is to pass this final purified water (after mixing with dialysate concentrate) through an ultrafiltration membrane or absolute filter. This provides another layer of protection by removing impurities, especially those of bacterial origin, that may have accumulated in the water after its passage through the original water purification system. Dialysate The purified water is mixed with a dialysate concentrate (also called dialysis fluid concentrate) consisting of sodium, potassium, calcium, magnesium and dextrose in an acid solution, and with a chemical buffer. This forms the dialysate solution, which contains the basic electrolytes found in human blood. This dialysate solution contains charged ions that conduct electricity. During dialysis, the conductivity of the dialysis solution is continuously monitored to ensure that the water and dialysate concentrate are being mixed in the proper proportions. Both excessively concentrated dialysis solution and excessively dilute solution can cause severe clinical problems. Either bicarbonate or lactate can be added as the chemical buffer to regulate the pH of the dialysate. Both buffers can stabilize the pH of the solution at a physiological level with no negative impacts on the patient. There is some evidence of a reduction in the incidence of heart and blood problems and high blood pressure events when using bicarbonate as the pH buffer compared to lactate. However, the mortality rates after using both buffers do not show a significant difference. Dialyzer The dialyzer is the piece of equipment that filters the blood. Almost all dialyzers in use today are of the hollow-fiber variety. A cylindrical bundle of hollow fibers, whose walls are composed of semi-permeable membrane, is anchored at each end into potting compound (a sort of glue). This assembly is then put into a clear plastic cylindrical shell with four openings. One opening or blood port at each end of the cylinder communicates with each end of the bundle of hollow fibers. This forms the "blood compartment" of the dialyzer. Two other ports are cut into the side of the cylinder. These communicate with the space around the hollow fibers, the "dialysate compartment." Blood is pumped via the blood ports through this bundle of very thin capillary-like tubes, and the dialysate is pumped through the space surrounding the fibers. Pressure gradients are applied when necessary to move fluid from the blood to the dialysate compartment. Membrane and flux Dialyzer membranes come with different pore sizes. Those with smaller pore size are called "low-flux" and those with larger pore sizes are called "high-flux." Some larger molecules, such as beta-2-microglobulin, are not removed at all with low-flux dialyzers; lately, the trend has been to use high-flux dialyzers. However, such dialyzers require newer dialysis machines and high-quality dialysis solution to control the rate of fluid removal properly and to prevent backflow of dialysis solution impurities into the patient through the membrane. Dialyzer membranes used to be made primarily of cellulose (derived from cotton linter).
The surface of such membranes was not very biocompatible, because exposed hydroxyl groups would activate complement in the blood passing by the membrane. Therefore, the basic, "unsubstituted" cellulose membrane was modified. One change was to cover these hydroxyl groups with acetate groups (cellulose acetate); another was to mix in some compounds that would inhibit complement activation at the membrane surface (modified cellulose). The original "unsubstituted cellulose" membranes are no longer in wide use, whereas cellulose acetate and modified cellulose dialyzers are still used. Cellulosic membranes can be made in either low-flux or high-flux configuration, depending on their pore size. Another group of membranes is made from synthetic materials, using polymers such as polyarylethersulfone, polyamide, polyvinylpyrrolidone, polycarbonate, and polyacrylonitrile. These synthetic membranes activate complement to a lesser degree than unsubstituted cellulose membranes. However, they are in general more hydrophobic which leads to increased adsorption of proteins to the membrane surface which in turn can lead to complement system activation. Synthetic membranes can be made in either low- or high-flux configuration, but most are high-flux. Nanotechnology is being used in some of the most recent high-flux membranes to create a uniform pore size. The goal of high-flux membranes is to pass relatively large molecules such as beta-2-microglobulin (MW 11,600 daltons), but not to pass albumin (MW ~66,400 daltons). Every membrane has pores in a range of sizes. As pore size increases, some high-flux dialyzers begin to let albumin pass out of the blood into the dialysate. This is thought to be undesirable, although one school of thought holds that removing some albumin may be beneficial in terms of removing protein-bound uremic toxins. Membrane flux and outcome Whether using a high-flux dialyzer improves patient outcomes is somewhat controversial, but several important studies have suggested that it has clinical benefits. The NIH-funded HEMO trial compared survival and hospitalizations in patients randomized to dialysis with either low-flux or high-flux membranes. Although the primary outcome (all-cause mortality) did not reach statistical significance in the group randomized to use high-flux membranes, several secondary outcomes were better in the high-flux group. A recent Cochrane analysis concluded that benefit of membrane choice on outcomes has not yet been demonstrated. A collaborative randomized trial from Europe, the MPO (Membrane Permeabilities Outcomes) study, comparing mortality in patients just starting dialysis using either high-flux or low-flux membranes, found a nonsignificant trend to improved survival in those using high-flux membranes, and a survival benefit in patients with lower serum albumin levels or in diabetics. Membrane flux and beta-2-microglobulin amyloidosis High-flux dialysis membranes and/or intermittent internal on-line hemodiafiltration (iHDF) may also be beneficial in reducing complications of beta-2-microglobulin accumulation. Because beta-2-microglobulin is a large molecule, with a molecular weight of about 11,600 daltons, it does not pass at all through low-flux dialysis membranes. Beta-2-M is removed with high-flux dialysis, but is removed even more efficiently with IHDF. 
After several years (usually at least 5–7), patients on hemodialysis begin to develop complications from beta-2-M accumulation, including carpal tunnel syndrome, bone cysts, and deposits of this amyloid in joints and other tissues. Beta-2-M amyloidosis can cause very serious complications, including spondyloarthropathy, and often is associated with shoulder joint problems. Observational studies from Europe and Japan have suggested that using high-flux membranes in dialysis mode, or IHDF, reduces beta-2-M complications in comparison to regular dialysis using a low-flux membrane (KDOQI Clinical Practice Guidelines for Hemodialysis Adequacy, 2006 Updates, CPR 5). Dialyzers and efficiency Dialyzers come in many different sizes. A larger dialyzer with a larger membrane area (A) will usually remove more solutes than a smaller dialyzer, especially at high blood flow rates. This also depends on the membrane permeability coefficient K0 for the solute in question. So dialyzer efficiency is usually expressed as the K0A – the product of permeability coefficient and area. Most dialyzers have membrane surface areas of 0.8 to 2.2 square meters, and values of K0A ranging from about 500 to 1500 mL/min. K0A, expressed in mL/min, can be thought of as the maximum clearance of a dialyzer at very high blood and dialysate flow rates. Reuse of dialyzers The dialyzer may either be discarded after each treatment or be reused. Reuse requires an extensive procedure of high-level disinfection. Reused dialyzers are not shared between patients. There was an initial controversy about whether reusing dialyzers worsened patient outcomes. The consensus today is that reuse of dialyzers, if done carefully and properly, produces similar outcomes to single use of dialyzers. Dialyzer reuse is a practice that has been around since the invention of the product. It involves cleaning a used dialyzer so that it can be reused multiple times for the same patient. Dialysis clinics reuse dialyzers to be more economical and to reduce the high costs of "single-use" dialysis, which can be extremely expensive and wasteful. Single-use dialyzers are used just once and then thrown out, creating a large amount of biomedical waste and offering no opportunity for cost savings. If done correctly, dialyzer reuse can be very safe for dialysis patients. There are two ways of reusing dialyzers, manual and automated. Manual reuse involves the cleaning of a dialyzer by hand. The dialyzer is semi-disassembled, then flushed repeatedly before being rinsed with water. It is then stored with a liquid disinfectant (peracetic acid, PAA) for 18+ hours until its next use. Although many clinics outside the USA use this method, some clinics are switching toward a more automated/streamlined process as the dialysis practice advances. The newer method of automated reuse is achieved by means of a medical device introduced in the early 1980s. These devices are beneficial to dialysis clinics that practice reuse – especially for large dialysis clinical entities – because they allow for several back-to-back cycles per day. The dialyzer is first pre-cleaned by a technician, then automatically cleaned by machine through a series of staged cycles until it is eventually filled with liquid disinfectant for storage. Although automated reuse is more effective than manual reuse, newer technology has sparked even more advancement in the process of reuse.
When reused over 15 times with current methodology, the dialyzer can lose B2m clearance, middle molecule clearance, and fiber pore structure integrity, which has the potential to reduce the effectiveness of the patient's dialysis session. As of 2010, newer, more advanced reprocessing technology has proven the ability to eliminate the manual pre-cleaning process altogether and has also proven the potential to regenerate (fully restore) all functions of a dialyzer to levels that are approximately equivalent to single-use for more than 40 cycles. As medical reimbursement rates begin to fall even more, many dialysis clinics are continuing to operate effectively with reuse programs, especially since the process is easier and more streamlined than before. Epidemiology Hemodialysis was one of the most common procedures performed in U.S. hospitals in 2011, occurring in 909,000 stays (a rate of 29 stays per 10,000 population). This was an increase of 68 percent from 1997, when there were 473,000 stays. It was the fifth most common procedure for patients aged 45–64 years. History Many have played a role in developing dialysis as a practical treatment for renal failure, starting with Thomas Graham of Glasgow, who first presented the principles of solute transport across a semipermeable membrane in 1854. The artificial kidney was first developed by Abel, Rowntree, and Turner in 1913; the first hemodialysis in a human being was performed by Haas (February 28, 1924); and the artificial kidney was developed into a clinically useful apparatus by Kolff from 1943 to 1945. This research showed that life could be prolonged in patients dying of kidney failure. Willem Kolff was the first to construct a working dialyzer in 1943. The first successfully treated patient was a 67-year-old woman in uremic coma who regained consciousness after 11 hours of hemodialysis with Kolff's dialyzer in 1945. At the time of its creation, Kolff's goal was to provide life support during recovery from acute renal failure. After World War II ended, Kolff donated the five dialyzers he had made to hospitals around the world, including Mount Sinai Hospital, New York. Kolff gave a set of blueprints for his hemodialysis machine to George Thorn at the Peter Bent Brigham Hospital in Boston. This led to the manufacture of the next generation of Kolff's dialyzer, a stainless steel Kolff-Brigham dialysis machine. According to McKellar (1999), a significant contribution to renal therapies was made by Canadian surgeon Gordon Murray with the assistance of two doctors, an undergraduate chemistry student, and research staff. Murray's work was conducted simultaneously with, and independently of, that of Kolff. Murray's work led to the first successful artificial kidney built in North America in 1945–46, which was successfully used to treat a 26-year-old woman out of a uremic coma in Toronto. The less-crude, more compact, second-generation "Murray-Roschlau" dialyzer was invented in 1952–53, whose designs were stolen by German immigrant Erwin Halstrup and passed off as his own (the "Halstrup–Baumann artificial kidney"). By the 1950s, Willem Kolff's invention of the dialyzer was used for acute renal failure, but it was not seen as a viable treatment for patients with stage 5 chronic kidney disease (CKD). At the time, doctors believed it was impossible for patients to have dialysis indefinitely for two reasons. First, they thought no man-made device could replace the function of kidneys over the long term.
Second, a patient undergoing dialysis developed damaged veins and arteries, so that after several treatments, it became difficult to find a vessel to access the patient's blood. The original Kolff kidney was not very useful clinically, because it did not allow for removal of excess fluid. Swedish professor Nils Alwall encased a modified version of this kidney inside a stainless steel canister, to which a negative pressure could be applied, in this way effecting the first truly practical application of hemodialysis, which was done in 1946 at the University of Lund. Alwall also was arguably the inventor of the arteriovenous shunt for dialysis. He reported this first in 1948, when he used such an arteriovenous shunt in rabbits. Subsequently, he used such shunts, made of glass, as well as his canister-enclosed dialyzer, to treat 1500 patients in renal failure between 1946 and 1960, as reported to the First International Congress of Nephrology held in Evian in September 1960. Alwall was appointed to a newly created Chair of Nephrology at the University of Lund in 1957. Subsequently, he collaborated with Swedish businessman Holger Crafoord to found one of the key companies that would manufacture dialysis equipment in the past 50 years, Gambro. The early history of dialysis has been reviewed by Stanley Shaldon. Belding H. Scribner, working with the biomechanical engineer Wayne Quinton, modified the glass shunts used by Alwall by making them from Teflon. Another key improvement was to connect them to a short piece of silicone elastomer tubing. This formed the basis of the so-called Scribner shunt, perhaps more properly called the Quinton-Scribner shunt. After treatment, the circulatory access would be kept open by connecting the two tubes outside the body using a small U-shaped Teflon tube, which would shunt the blood from the tube in the artery back to the tube in the vein. In 1962, Scribner started the world's first outpatient dialysis facility, the Seattle Artificial Kidney Center, later renamed the Northwest Kidney Centers. Immediately the problem arose of who should be given dialysis, since demand far exceeded the capacity of the six dialysis machines at the center. Scribner decided that he would not make the decision about who would receive dialysis and who would not. Instead, the choices would be made by an anonymous committee, which could be viewed as one of the first bioethics committees. For a detailed history of successful and unsuccessful attempts at dialysis, including pioneers such as Abel and Rowntree, Haas, and Necheles, see the review by Kjellstrand. See also Aluminium toxicity in people on dialysis Dialysis disequilibrium syndrome References External links Your Kidneys and How They Work – (American) National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), NIH. Treatment Methods for Kidney Failure – (American) National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), NIH. Treatment Methods for Kidney Failure: Hemodialysis – (American) National Kidney and Urologic Diseases Information Clearinghouse, NIH. Membrane technology Renal dialysis Toxicology treatments
Hemodialysis
[ "Chemistry", "Environmental_science" ]
6,671
[ "Toxicology treatments", "Membrane technology", "Toxicology", "Separation processes" ]
590,971
https://en.wikipedia.org/wiki/Haversine%20formula
The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes. Important in navigation, it is a special case of a more general formula in spherical trigonometry, the law of haversines, that relates the sides and angles of spherical triangles. The first table of haversines in English was published by James Andrew in 1805, but Florian Cajori credits an earlier use by José de Mendoza y Ríos in 1801. The term haversine was coined in 1835 by James Inman. These names follow from the fact that they are customarily written in terms of the haversine function, given by hav(θ) = sin²(θ/2). The formulas could equally be written in terms of any multiple of the haversine, such as the older versine function (twice the haversine). Prior to the advent of computers, the elimination of division and multiplication by factors of two proved convenient enough that tables of haversine values and logarithms were included in 19th- and early 20th-century navigation and trigonometric texts. These days, the haversine form is also convenient in that it has no coefficient in front of the sin² function. Formulation Let the central angle Θ between any two points on a sphere be

Θ = d / r

where d is the distance between the two points along a great circle of the sphere (see spherical distance) and r is the radius of the sphere. The haversine formula allows the haversine of Θ to be computed directly from the latitude (represented by φ) and longitude (represented by λ) of the two points:

hav(Θ) = hav(Δφ) + cos(φ1) cos(φ2) hav(Δλ)

where φ1, φ2 are the latitude of point 1 and latitude of point 2, λ1, λ2 are the longitude of point 1 and longitude of point 2, Δφ = φ2 − φ1, and Δλ = λ2 − λ1. Finally, the haversine function hav(θ), applied above to both the central angle Θ and the differences in latitude and longitude, is

hav(θ) = sin²(θ/2) = (1 − cos θ) / 2

The haversine function computes half the versine of the angle θ, or equivalently the square of half the chord subtending the angle on a unit circle (sphere). To solve for the distance d, apply the archaversine (inverse haversine) to h = hav(Θ) or use the arcsine (inverse sine) function:

d = r archav(h) = 2r arcsin(√h)

or more explicitly:

d = 2r arcsin( √( hav(φ2 − φ1) + cos(φ1) cos(φ2) hav(λ2 − λ1) ) )
  = 2r arcsin( √( sin²((φ2 − φ1)/2) + cos(φ1) cos(φ2) sin²((λ2 − λ1)/2) ) )

When using these formulae, one must ensure that h does not exceed 1 due to a floating point error (d is real only for 0 ≤ h ≤ 1). h only approaches 1 for antipodal points (on opposite sides of the sphere)—in this region, relatively large numerical errors tend to arise in the formula when finite precision is used. Because d is then large (approaching πr, half the circumference) a small error is often not a major concern in this unusual case (although there are other great-circle distance formulas that avoid this problem). (The formula above is sometimes written in terms of the arctangent function, but this suffers from similar numerical problems near h = 1.) As described below, a similar formula can be written using cosines (sometimes called the spherical law of cosines, not to be confused with the law of cosines for plane geometry) instead of haversines, but if the two points are close together (e.g. a kilometer apart, on the Earth) one might end up with cos(d/r) = 0.99999999, leading to an inaccurate answer. Since the haversine formula uses sines, it avoids that problem. Either formula is only an approximation when applied to the Earth, which is not a perfect sphere: the "Earth radius" varies from 6356.752 km at the poles to 6378.137 km at the equator. More importantly, the radius of curvature of a north-south line on the earth's surface is 1% greater at the poles (≈6399.594 km) than at the equator (≈6335.439 km)—so the haversine formula and law of cosines cannot be guaranteed correct to better than 0.5%.
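A direct implementation of the formula is short. The sketch below is plain Python with a mean Earth radius of 6371 km assumed; it also evaluates the spherical law of cosines for two nearby points to illustrate the conditioning problem just described.

```python
from math import radians, sin, cos, sqrt, asin, acos

EARTH_RADIUS_KM = 6371.0  # mean radius; see the spherical-approximation caveats above

def haversine(lat1, lon1, lat2, lon2, r=EARTH_RADIUS_KM):
    """Great-circle distance between two (latitude, longitude) points, in km."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    h = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    h = min(1.0, h)  # guard against floating-point values slightly above 1
    return 2 * r * asin(sqrt(h))

def law_of_cosines(lat1, lon1, lat2, lon2, r=EARTH_RADIUS_KM):
    """Spherical law of cosines; poorly conditioned when the points are close together."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dlam = radians(lon2 - lon1)
    c = sin(phi1) * sin(phi2) + cos(phi1) * cos(phi2) * cos(dlam)
    return r * acos(max(-1.0, min(1.0, c)))

# Paris to Tokyo (approximate coordinates): about 9,700 km.
print(haversine(48.8566, 2.3522, 35.6762, 139.6503))

# Two points well under a metre apart: the haversine result stays meaningful,
# while the law of cosines loses most of its significant digits because
# cos(d/r) is indistinguishable from 1 in double precision.
print(haversine(48.856600, 2.352200, 48.856601, 2.352201))
print(law_of_cosines(48.856600, 2.352200, 48.856601, 2.352201))
```

Where the roughly 0.5% spherical error matters, the ellipsoidal methods mentioned below (Vincenty's formulae and related geographical-distance formulas) are the usual next step.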
More accurate methods that consider the Earth's ellipticity are given by Vincenty's formulae and the other formulas in the geographical distance article. The law of haversines Given a unit sphere, a "triangle" on the surface of the sphere is defined by the great circles connecting three points u, v, and w on the sphere. If the lengths of these three sides are a (from u to v), b (from u to w), and c (from v to w), and the angle of the corner opposite c is C, then the law of haversines states:

hav(c) = hav(a − b) + sin(a) sin(b) hav(C)

Since this is a unit sphere, the lengths a, b, and c are simply equal to the angles (in radians) subtended by those sides from the center of the sphere (for a non-unit sphere, each of these arc lengths is equal to its central angle multiplied by the radius of the sphere). In order to obtain the haversine formula of the previous section from this law, one simply considers the special case where u is the north pole, while v and w are the two points whose separation d is to be determined. In that case, a and b are π/2 − φ1 and π/2 − φ2 (that is, the co-latitudes), C is the longitude separation Δλ, and c is the desired d/r. Noting that sin(π/2 − φ) = cos(φ) and that hav(a − b) = hav(φ2 − φ1) = hav(Δφ), the haversine formula immediately follows. To derive the law of haversines, one starts with the spherical law of cosines:

cos(c) = cos(a) cos(b) + sin(a) sin(b) cos(C)

As mentioned above, this formula is an ill-conditioned way of solving for c when c is small. Instead, we substitute the identity cos(θ) = 1 − 2 hav(θ), and also employ the addition identity cos(a − b) = cos(a) cos(b) + sin(a) sin(b), to obtain the law of haversines, above. Proof One can prove the formula

hav(Θ) = hav(φ2 − φ1) + cos(φ1) cos(φ2) hav(λ2 − λ1)

by transforming the points given by their latitude and longitude into cartesian coordinates, then taking their dot product. Consider two points on the unit sphere, given by their latitude φ and longitude λ: P1 = (φ1, λ1) and P2 = (φ2, λ2). These representations are very similar to spherical coordinates, however latitude is measured as angle from the equator and not the north pole. These points have the following representations in cartesian coordinates:

P1 = (cos φ1 cos λ1, cos φ1 sin λ1, sin φ1)
P2 = (cos φ2 cos λ2, cos φ2 sin λ2, sin φ2)

From here we could directly attempt to calculate the dot product and proceed, however the formulas become significantly simpler when we consider the following fact: the distance between the two points will not change if we rotate the sphere along the z-axis. This will in effect add a constant to λ1 and λ2. Note that similar considerations do not apply to transforming the latitudes - adding a constant to the latitudes may change the distance between the points. By choosing our constant to be −λ1, and setting λ = λ2 − λ1, our new points become:

P1 = (cos φ1, 0, sin φ1)
P2 = (cos φ2 cos λ, cos φ2 sin λ, sin φ2)

With Θ denoting the angle between P1 and P2, we now have that:

cos(Θ) = P1 · P2 = cos(φ1) cos(φ2) cos(λ) + sin(φ1) sin(φ2)

and, applying the identities cos(λ) = 1 − 2 hav(λ) and cos(φ2 − φ1) = cos(φ1) cos(φ2) + sin(φ1) sin(φ2),

hav(Θ) = (1 − cos Θ) / 2 = hav(φ2 − φ1) + cos(φ1) cos(φ2) hav(λ)

which is the haversine formula, since λ = λ2 − λ1. See also Sight reduction Vincenty's formulae Cosine distance References Further reading U. S. Census Bureau Geographic Information Systems FAQ, (content has been moved to What is the best way to calculate the distance between 2 points?) R. W. Sinnott, "Virtues of the Haversine", Sky and Telescope 68 (2), 159 (1984). W. Gellert, S. Gottwald, M. Hellwich, H. Kästner, and H. Küstner, The VNR Concise Encyclopedia of Mathematics, 2nd ed., ch. 12 (Van Nostrand Reinhold: New York, 1989). External links Implementations of the haversine formula in 91 languages at rosettacode.org and in 17 languages on codecodex.com Other implementations in C++, C (MacOS), Pascal, Python, Ruby, JavaScript, PHP, Matlab, MySQL Spherical trigonometry Geodesy Distance
Haversine formula
[ "Physics", "Mathematics" ]
1,505
[ "Distance", "Physical quantities", "Applied mathematics", "Quantity", "Size", "Space", "Spacetime", "Wikipedia categories named after physical quantities", "Geodesy" ]
591,253
https://en.wikipedia.org/wiki/Kirchhoff%27s%20circuit%20laws
Kirchhoff's circuit laws are two equalities that deal with the current and potential difference (commonly known as voltage) in the lumped element model of electrical circuits. They were first described in 1845 by German physicist Gustav Kirchhoff. This generalized the work of Georg Ohm and preceded the work of James Clerk Maxwell. Widely used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's laws. These laws can be applied in time and frequency domains and form the basis for network analysis. Both of Kirchhoff's laws can be understood as corollaries of Maxwell's equations in the low-frequency limit. They are accurate for DC circuits, and for AC circuits at frequencies where the wavelengths of electromagnetic radiation are very large compared to the circuits. Kirchhoff's current law This law, also called Kirchhoff's first law, or Kirchhoff's junction rule, states that, for any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node; or equivalently: The algebraic sum of currents in a network of conductors meeting at a point is zero. Recalling that current is a signed (positive or negative) quantity reflecting direction towards or away from a node, this principle can be succinctly stated as:

I1 + I2 + ... + In = 0

where n is the total number of branches with currents flowing towards or away from the node. Kirchhoff's circuit laws were originally obtained from experimental results. However, the current law can be viewed as an extension of the conservation of charge, since charge is the product of current and the time the current has been flowing. If the net charge in a region is constant, the current law will hold on the boundaries of the region. This means that the current law relies on the fact that the net charge in the wires and components is constant. Uses A matrix version of Kirchhoff's current law is the basis of most circuit simulation software, such as SPICE. The current law is used with Ohm's law to perform nodal analysis. The current law is applicable to any lumped network irrespective of the nature of the network; whether unilateral or bilateral, active or passive, linear or non-linear. Kirchhoff's voltage law This law, also called Kirchhoff's second law, or Kirchhoff's loop rule, states the following: The directed sum of the potential differences (voltages) around any closed loop is zero. Similarly to Kirchhoff's current law, the voltage law can be stated as:

V1 + V2 + ... + Vn = 0

Here, n is the total number of voltages measured. Generalization In the low-frequency limit, the voltage drop around any loop is zero. This includes imaginary loops arranged arbitrarily in space – not limited to the loops delineated by the circuit elements and conductors. In the low-frequency limit, this is a corollary of Faraday's law of induction (which is one of Maxwell's equations). This has practical application in situations involving "static electricity". Limitations Kirchhoff's circuit laws are the result of the lumped-element model and both depend on the model being applicable to the circuit in question. When the model is not applicable, the laws do not apply. The current law is dependent on the assumption that the net charge in any wire, junction or lumped component is constant. Whenever the electric field between parts of the circuit is non-negligible, such as when two wires are capacitively coupled, this may not be the case.
This occurs in high-frequency AC circuits, where the lumped element model is no longer applicable. For example, in a transmission line, the charge density in the conductor may be constantly changing. On the other hand, the voltage law relies on the fact that the actions of time-varying magnetic fields are confined to individual components, such as inductors. In reality, the induced electric field produced by an inductor is not confined, but the leaked fields are often negligible. Modelling real circuits with lumped elements The lumped element approximation for a circuit is accurate at low frequencies. At higher frequencies, leaked fluxes and varying charge densities in conductors become significant. To an extent, it is possible to still model such circuits using parasitic components. If frequencies are too high, it may be more appropriate to simulate the fields directly using finite element modelling or other techniques. To model circuits so that both laws can still be used, it is important to understand the distinction between physical circuit elements and the ideal lumped elements. For example, a wire is not an ideal conductor. Unlike an ideal conductor, wires can inductively and capacitively couple to each other (and to themselves), and have a finite propagation delay. Real conductors can be modeled in terms of lumped elements by considering parasitic capacitances distributed between the conductors to model capacitive coupling, or parasitic (mutual) inductances to model inductive coupling. Wires also have some self-inductance. Example Assume an electric network consisting of two voltage sources and three resistors, with the three branch currents meeting at a single node and forming two closed loops. According to the first law, the branch currents at the node sum to zero. Applying the second law to the first closed loop, and substituting for each voltage using Ohm's law, gives one loop equation; the second law, again combined with Ohm's law, applied to the second closed loop gives another. Together these yield a system of three linear equations in the three unknown branch currents, which can be solved once values are assumed for the sources and resistors. If a computed current comes out negative, the direction assumed for that current was incorrect, and it actually flows opposite to the assumed direction. See also Duality (electrical circuits) Faraday's law of induction Lumped matter discipline Tellegen's Theorem References External links Divider Circuits and Kirchhoff's Laws chapter from Lessons In Electric Circuits Vol 1 DC free ebook and Lessons In Electric Circuits series Circuit theorems Conservation equations Eponymous laws of physics Linear electronic circuits Voltage 1845 in science Gustav Kirchhoff
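To make the example concrete, here is a minimal sketch of the same kind of calculation in Python. The two-loop, two-source, three-resistor topology and all component values below are hypothetical choices for illustration (they are not taken from the original figure); the point is only how the two laws produce a solvable linear system.

```python
import numpy as np

# Hypothetical network: branch 1 is a source V1 in series with R1, branch 2 is a
# resistor R2, branch 3 is a source V2 in series with R3; all three branches meet
# at one node. Current i1 is assumed to flow into the node, i2 and i3 out of it.
R1, R2, R3 = 100.0, 200.0, 300.0  # ohms (assumed values)
V1, V2 = 3.0, 4.0                 # volts (assumed values)

# Current law at the node:      i1 - i2 - i3 = 0
# Voltage law around loop 1:    R1*i1 + R2*i2 = V1
# Voltage law around loop 2:    R2*i2 - R3*i3 = V2
A = np.array([[1.0, -1.0, -1.0],
              [R1,   R2,   0.0],
              [0.0,  R2,  -R3]])
b = np.array([0.0, V1, V2])

i1, i2, i3 = np.linalg.solve(A, b)
print(i1, i2, i3)  # ~0.00636 A, ~0.01182 A, ~-0.00545 A

# i3 comes out negative: the direction assumed for i3 was wrong, and the actual
# current in that branch flows the other way.
```

This is essentially the bookkeeping that circuit simulators such as SPICE perform in matrix form, as noted in the Uses section above.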
Kirchhoff's circuit laws
[ "Physics", "Mathematics" ]
1,243
[ "Equations of physics", "Physical quantities", "Electrical systems", "Conservation laws", "Quantity", "Mathematical objects", "Equations", "Physical systems", "Circuit theorems", "Voltage", "Conservation equations", "Wikipedia categories named after physical quantities", "Symmetry", "Physi...
591,280
https://en.wikipedia.org/wiki/Kirchhoff%27s%20law%20of%20thermal%20radiation
In heat transfer, Kirchhoff's law of thermal radiation refers to wavelength-specific radiative emission and absorption by a material body in thermodynamic equilibrium, including radiative exchange equilibrium. It is a special case of Onsager reciprocal relations as a consequence of the time reversibility of microscopic dynamics, also known as microscopic reversibility. A body at temperature T radiates electromagnetic energy. A perfect black body in thermodynamic equilibrium absorbs all light that strikes it, and radiates energy according to a unique law of radiative emissive power for temperature T (Stefan–Boltzmann law), universal for all perfect black bodies. Kirchhoff's law states that: for a body of any arbitrary material emitting and absorbing thermal radiation at every wavelength in thermodynamic equilibrium, the ratio of its emissive power to its coefficient of absorption is equal to a universal function of radiative wavelength and temperature only, namely the emissive power of a perfect black body. Here, the dimensionless coefficient of absorption (or the absorptivity) is the fraction of incident light (power) at each spectral frequency that is absorbed by the body when it is radiating and absorbing in thermodynamic equilibrium. In slightly different terms, the emissive power of an arbitrary opaque body of fixed size and shape at a definite temperature can be described by a dimensionless ratio, sometimes called the emissivity: the ratio of the emissive power of the body to the emissive power of a black body of the same size and shape at the same fixed temperature. With this definition, Kirchhoff's law states, in simpler language: for an arbitrary body emitting and absorbing thermal radiation in thermodynamic equilibrium, the emissivity is equal to the absorptivity. In some cases, emissive power and absorptivity may be defined to depend on angle, as described below. The condition of thermodynamic equilibrium is necessary in the statement, because the equality of emissivity and absorptivity often does not hold when the material of the body is not in thermodynamic equilibrium. Kirchhoff's law has another corollary: the emissivity cannot exceed one (because the absorptivity cannot, by conservation of energy), so it is not possible to thermally radiate more energy than a black body, at equilibrium. In negative luminescence the angle- and wavelength-integrated absorption exceeds the material's emission; however, such systems are powered by an external source and are therefore not in thermodynamic equilibrium. Principle of detailed balance Kirchhoff's law of thermal radiation has a refinement in that not only is thermal emissivity equal to absorptivity, it is equal in detail. Consider a leaf. It is a poor absorber of green light (around 550 nm), which is why it looks green. By the principle of detailed balance, it is also a poor emitter of green light. In other words, if a material, illuminated by black-body radiation of temperature T, is dark at a certain frequency ν, then its thermal radiation will also be dark at the same frequency ν and the same temperature T. More generally, all intensive properties are balanced in detail. So for example, the absorptivity at a certain incidence direction, for a certain frequency, of a certain polarization, is the same as the emissivity at the same direction, for the same frequency, of the same polarization. This is the principle of detailed balance. History Before Kirchhoff's law was recognized, it had been experimentally established that a good absorber is a good emitter, and a poor absorber is a poor emitter. Naturally, a good reflector must be a poor absorber. This is why, for example, lightweight emergency thermal blankets are based on reflective metallic coatings: they lose little heat by radiation. Kirchhoff's great insight was to recognize the universality and uniqueness of the function that describes the black body emissive power.
But he did not know the precise form or character of that universal function. Attempts were made by Lord Rayleigh and Sir James Jeans in 1900–1905 to describe it in classical terms, resulting in the Rayleigh–Jeans law. This law turned out to be inconsistent, yielding the ultraviolet catastrophe. The correct form of the law was found by Max Planck in 1900, assuming quantized emission of radiation, and is termed Planck's law. This marks the advent of quantum mechanics. Theory In a blackbody enclosure that contains electromagnetic radiation with a certain amount of energy at thermodynamic equilibrium, this "photon gas" will have a Planck distribution of energies. One may suppose a second system, a cavity with walls that are opaque, rigid, and not perfectly reflective to any wavelength, to be brought into connection, through an optical filter, with the blackbody enclosure, both at the same temperature. Radiation can pass from one system to the other. For example, suppose that in the second system the density of photons in a narrow frequency band around wavelength λ were higher than that of the first system. If the optical filter passed only that frequency band, then there would be a net transfer of photons, and their energy, from the second system to the first. This is in violation of the second law of thermodynamics, which requires that there can be no net transfer of heat between two bodies at the same temperature. In the second system, therefore, at each frequency, the walls must absorb and emit energy in such a way as to maintain the black body distribution. Hence absorptivity and emissivity must be equal. The absorptivity α(λ) of the wall is the ratio of the energy absorbed by the wall to the energy incident on the wall, for a particular wavelength. Thus the absorbed energy is α(λ) E_b(λ, T), where E_b(λ, T) is the intensity of black-body radiation at wavelength λ and temperature T. Independent of the condition of thermal equilibrium, the emissivity of the wall is defined as the ratio of emitted energy to the amount that would be radiated if the wall were a perfect black body. The emitted energy is thus ε(λ) E_b(λ, T), where ε(λ) is the emissivity at wavelength λ. For the maintenance of thermal equilibrium, these two quantities must be equal, or else the distribution of photon energies in the cavity will deviate from that of a black body. This yields Kirchhoff's law:

α(λ) = ε(λ)

By a similar, but more complicated argument, it can be shown that, since black-body radiation is equal in every direction (isotropic), the emissivity and the absorptivity, if they happen to be dependent on direction, must again be equal for any given direction. Average and overall absorptivity and emissivity data are often given for materials with values which differ from each other. For example, white paint is quoted as having an absorptivity of 0.16, while having an emissivity of 0.93. This is because the absorptivity is averaged with weighting for the solar spectrum, while the emissivity is weighted for the emission of the paint itself at normal ambient temperatures. The absorptivity quoted in such cases is calculated by

α_avg = ∫ α(λ) I_sun(λ) dλ / ∫ I_sun(λ) dλ

while the average emissivity is given by

ε_avg = ∫ ε(λ) I_paint(λ) dλ / ∫ I_paint(λ) dλ

where I_sun(λ) is the emission spectrum of the sun and I_paint(λ) is the emission spectrum of the paint. Although, by Kirchhoff's law, α(λ) = ε(λ) in the above equations, the averages α_avg and ε_avg are not generally equal to each other.
The white paint will serve as a very good insulator against solar radiation, because it is very reflective of the solar radiation, and although it therefore emits poorly in the solar band, its temperature will be around room temperature, and it will emit whatever radiation it has absorbed in the infrared, where its emission coefficient is high. Planck's derivation Historically, Planck derived the black body radiation law and detailed balance according to a classical thermodynamic argument, with a single heuristic step, which was later interpreted as a quantization hypothesis. In Planck's set up, he started with a large Hohlraum at a fixed temperature T. At thermal equilibrium, the Hohlraum is filled with a distribution of EM waves at thermal equilibrium with the walls of the Hohlraum. Next, he considered connecting the Hohlraum to a single small resonator, such as a Hertzian resonator. The resonator reaches a certain form of thermal equilibrium with the Hohlraum, when the spectral input into the resonator equals the spectral output at the resonance frequency. Next, suppose there are two Hohlraums at the same fixed temperature T; then Planck argued that the thermal equilibrium of the small resonator is the same when connected to either Hohlraum. For, we can disconnect the resonator from one Hohlraum and connect it to another. If the thermal equilibrium were different, then we have just transported energy from one to another, violating the second law. Therefore, the spectra of all black bodies are identical at the same temperature. Using a heuristic of quantization, which he gleaned from Boltzmann, Planck argued that a resonator tuned to frequency ν, with average energy U, would contain entropy

S = k [ (1 + U/(hν)) ln(1 + U/(hν)) − (U/(hν)) ln(U/(hν)) ]

for some constant h (later termed the Planck constant). Then applying 1/T = dS/dU, Planck obtained the black body radiation law. Another argument, which does not depend on the precise form of the entropy function, can be given as follows. Next, suppose we have a material that violates Kirchhoff's law when integrated, such that the total coefficient of absorption is not equal to the total coefficient of emission at a certain temperature T; then if the material at temperature T is placed into a Hohlraum at temperature T, it would spontaneously emit more than it absorbs, or conversely, thus spontaneously creating a temperature difference, violating the second law. Finally, suppose we have a material that violates Kirchhoff's law in detail, such that the coefficient of absorption is not equal to the coefficient of emission at a certain temperature T and at a certain frequency ν; then, since it does not violate Kirchhoff's law when integrated, there must exist two frequencies ν1 and ν2 such that the material absorbs more than it emits at ν1, and conversely at ν2. Now, place this material in one Hohlraum. It would spontaneously create a shift in the spectrum, making it higher at ν2 than at ν1. However, this then allows us to tap energy from one Hohlraum with a resonator tuned to ν2, then detach it and attach it to another Hohlraum at the same temperature, thus transporting energy from one to another, violating the second law. We may apply the same argument for polarization and direction of radiation, obtaining the full principle of detailed balance. Black bodies Near-black materials It has long been known that a lamp-black coating will make a body nearly black. Some other materials are nearly black in particular wavelength bands. Such materials do not survive all the very high temperatures that are of interest. An improvement on lamp-black is found in manufactured carbon nanotubes.
Nano-porous materials can achieve refractive indices nearly that of vacuum, in one case obtaining an average reflectance of 0.045%. Opaque bodies Bodies that are opaque to thermal radiation that falls on them are valuable in the study of heat radiation. Planck analyzed such bodies with the approximation that they be considered topologically to have an interior and to share an interface. They share the interface with their contiguous medium, which may be rarefied material such as air, or transparent material, through which observations can be made. The interface is not a material body and can neither emit nor absorb. It is a mathematical surface belonging jointly to the two media that touch it. It is the site of refraction of radiation that penetrates it and of reflection of radiation that does not. As such it obeys the Helmholtz reciprocity principle. The opaque body is considered to have a material interior that absorbs all and scatters or transmits none of the radiation that reaches it through refraction at the interface. In this sense the material of the opaque body is black to radiation that reaches it, while the whole phenomenon, including the interior and the interface, does not show perfect blackness. In Planck's model, perfectly black bodies, which he noted do not exist in nature, besides their opaque interior, have interfaces that are perfectly transmitting and non-reflective. Cavity radiation The walls of a cavity can be made of opaque materials that absorb significant amounts of radiation at all wavelengths. It is not necessary that every part of the interior walls be a good absorber at every wavelength. The effective range of absorbing wavelengths can be extended by the use of patches of several differently absorbing materials in parts of the interior walls of the cavity. In thermodynamic equilibrium the cavity radiation will precisely obey Planck's law. In this sense, thermodynamic equilibrium cavity radiation may be regarded as thermodynamic equilibrium black-body radiation to which Kirchhoff's law applies exactly, though no perfectly black body in Kirchhoff's sense is present. A theoretical model considered by Planck consists of a cavity with perfectly reflecting walls, initially with no material contents, into which is then put a small piece of carbon. Without the small piece of carbon, there is no way for non-equilibrium radiation initially in the cavity to drift towards thermodynamic equilibrium. When the small piece of carbon is put in, it absorbs and re-emits energy across radiation frequencies, so that the cavity radiation comes to thermodynamic equilibrium. A hole in the wall of a cavity For experimental purposes, a hole in a cavity can be devised to provide a good approximation to a black surface, but will not be perfectly Lambertian, and must be viewed from nearly right angles to get the best properties. The construction of such devices was an important step in the empirical measurements that led to the precise mathematical identification of Kirchhoff's universal function, now known as Planck's law. Kirchhoff's perfect black bodies Planck also noted that the perfect black bodies of Kirchhoff do not occur in physical reality. They are theoretical fictions. Kirchhoff's perfect black bodies absorb all the radiation that falls on them, right in an infinitely thin surface layer, with no reflection and no scattering. They emit radiation in perfect accord with Lambert's cosine law.
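Returning to the white-paint example above, the difference between the two spectrum-weighted averages is easy to reproduce numerically. The sketch below uses a toy step-function absorptivity (low at short wavelengths, high in the infrared) and weights it with a Planck spectrum once at a solar-like temperature and once at room temperature; the step values and temperatures are assumptions chosen only for illustration.

```python
import numpy as np

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Black-body spectral radiance B(lambda, T); the overall scale cancels in the ratio."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / np.expm1(x)

def alpha(wavelength_m):
    """Toy 'white paint': weakly absorbing below 3 um, strongly absorbing above."""
    return np.where(wavelength_m < 3e-6, 0.15, 0.92)

lam = np.linspace(0.2e-6, 100e-6, 200_000)  # wavelength grid, 0.2 um to 100 um

def weighted_average(weight_temp_k):
    w = planck(lam, weight_temp_k)
    return float(np.sum(alpha(lam) * w) / np.sum(w))

print("solar-weighted absorptivity:", round(weighted_average(5778.0), 2))  # ~0.17
print("room-temperature emissivity:", round(weighted_average(300.0), 2))   # ~0.92
```

Even though the same alpha(lambda) is used for both averages, so that Kirchhoff's equality holds at every wavelength, the solar-weighted figure comes out low and the room-temperature-weighted figure high, which is the sense in which quoted absorptivity and emissivity values for the same material can legitimately differ.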
Original statements Gustav Kirchhoff stated his law in several papers in 1859 and 1860, and then in 1862 in an appendix to his collected reprints of those and some related papers. Prior to Kirchhoff's studies, it was known that for total heat radiation, the ratio of emissive power to absorptivity was the same for all bodies emitting and absorbing thermal radiation in thermodynamic equilibrium. This means that a good absorber is a good emitter. Naturally, a good reflector is a poor absorber. For wavelength specificity, prior to Kirchhoff, the ratio was shown experimentally by Balfour Stewart to be the same for all bodies, but the universal value of the ratio had not been explicitly considered in its own right as a function of wavelength and temperature. Kirchhoff's original contribution to the physics of thermal radiation was his postulate of a perfect black body radiating and absorbing thermal radiation in an enclosure opaque to thermal radiation and with walls that absorb at all wavelengths. Kirchhoff's perfect black body absorbs all the radiation that falls upon it. Every such black body emits from its surface with a spectral radiance that Kirchhoff labeled I (for specific intensity, the traditional name for spectral radiance). The precise mathematical expression for that universal function I was very much unknown to Kirchhoff, and it was just postulated to exist, until its precise mathematical expression was found in 1900 by Max Planck. It is nowadays referred to as Planck's law. Then, at each wavelength, for thermodynamic equilibrium in an enclosure, opaque to heat rays, with walls that absorb some radiation at every wavelength: the ratio E(λ, T) / A(λ, T) of the emissive power E to the absorptivity A is one and the same for all bodies, and is equal to the spectral radiance I(λ, T) of the perfect black body at that wavelength and temperature. See also Kirchhoff's laws (disambiguation) Sakuma–Hattori equation Wien's displacement law Stefan–Boltzmann law, which states that the power of emission is proportional to the fourth power of the black body's temperature References Citations Bibliography General references Evgeny Lifshitz and L. P. Pitaevskii, Statistical Physics: Part 2, 3rd edition (Elsevier, 1980). F. Reif, Fundamentals of Statistical and Thermal Physics (McGraw-Hill: Boston, 1965). Heat transfer Electromagnetic radiation Eponymous laws of physics Gustav Kirchhoff 1859 in science
Kirchhoff's law of thermal radiation
[ "Physics", "Chemistry" ]
3,340
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Electromagnetic radiation", "Radiation", "Thermodynamics" ]
591,492
https://en.wikipedia.org/wiki/Complementary%20good
In economics, a complementary good is a good whose appeal increases with the popularity of its complement. Technically, it displays a negative cross elasticity of demand, meaning that demand for it increases when the price of another good decreases. If A is a complement to B, an increase in the price of B will result in a negative movement along the demand curve of B and cause the demand curve for A to shift inward; less of each good will be demanded. Conversely, a decrease in the price of B will result in a positive movement along the demand curve of B and cause the demand curve of A to shift outward; more of each good will be demanded. This is in contrast to a substitute good, whose demand decreases when its substitute's price decreases. When two goods are complements, they experience joint demand - the demand of one good is linked to the demand for another good. Therefore, if a higher quantity is demanded of one good, a higher quantity will also be demanded of the other, and vice versa. For example, the demand for razor blades may depend on the number of razors in use; this is why razors have sometimes been sold as loss leaders, to increase demand for the associated blades. Another example is that sometimes a toothbrush is packaged free with toothpaste. The toothbrush is a complement to the toothpaste; the cost of producing a toothbrush may be higher than that of toothpaste, but its sales depend on the demand for toothpaste. All non-complementary goods can be considered substitutes. If A and B are rough complements in an everyday sense, then consumers are willing to pay more for each marginal unit of good A as they accumulate more of good B. The opposite is true for substitutes: the consumer is willing to pay less for each marginal unit of good A as they accumulate more of good B. Complementarity may be driven by psychological processes in which the consumption of one good (e.g., cola) stimulates demand for its complements (e.g., a cheeseburger). Consumption of a food or beverage activates a goal to consume its complements: foods that consumers believe would taste better together. Drinking cola increases consumers' willingness to pay for a cheeseburger. This effect appears to be contingent on consumer perceptions of these relationships rather than their sensory properties. Examples An example of this would be the demand for cars and petrol. Suppose the initial demand for cars is D1, with an initial price P1 and a quantity demanded of Q1. If the price of petrol were to decrease by some amount, this would result in a higher quantity of cars demanded. This higher quantity demanded would cause the demand curve to shift rightward to a new position D2. Assuming a constant supply curve of cars, the new increased quantity demanded will be Q2, with a new increased price P2. Other examples include automobiles and fuel, mobile phones and cellular service, printers and cartridges, among others. Perfect complement A perfect complement is a good that must be consumed with another good. The indifference curves of a perfect complement exhibit a right angle. Such preferences can be represented by a Leontief utility function. Few goods behave as perfect complements. One example is a left shoe and a right; shoes are naturally sold in pairs, and the ratio between sales of left and right shoes will never shift noticeably from 1:1.
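Since perfect-complement preferences are represented by a Leontief utility function, the corresponding demand has a simple closed form: the consumer buys the goods in a fixed proportion and the budget determines how many such bundles are bought. The sketch below is illustrative; the utility U(x, y) = min(x/a, y/b) and the prices and income are assumptions for the example.

```python
def leontief_demand(income, price_x, price_y, a=1.0, b=1.0):
    """Optimal bundle for U(x, y) = min(x / a, y / b).

    The consumer buys x and y in the fixed ratio a : b, so at the optimum
    x / a = y / b = k, and the budget constraint pins down k.
    """
    k = income / (a * price_x + b * price_y)
    return a * k, b * k  # (x, y)

# Left and right shoes (a = b = 1): always purchased one-for-one.
print(leontief_demand(income=100, price_x=20, price_y=30))  # (2.0, 2.0)

# Raising the price of right shoes lowers the quantity demanded of both goods.
print(leontief_demand(income=100, price_x=20, price_y=80))  # (1.0, 1.0)
```

The second call shows the defining feature of complements in its most extreme form: a price increase for one good reduces demand for both.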
The degree of complementarity, however, does not have to be mutual; it can be measured by the cross price elasticity of demand. In the case of video games, a specific video game (the complement good) has to be consumed with a video game console (the base good). It does not work the other way: a video game console does not have to be consumed with that game. Example In marketing, complementary goods give additional market power to the producer. They allow vendor lock-in by increasing switching costs. A few types of pricing strategy exist for a complementary good and its base good: Pricing the base good at a relatively low price - this approach allows easy entry by consumers (e.g. low-price consumer printer vs. high-price cartridge) Pricing the base good at a high price relative to the complementary good - this approach creates a barrier to entry and exit (e.g., a costly car vs inexpensive gas) Gross complements Sometimes the complement-relationship between two goods is not intuitive and must be verified by inspecting the cross-elasticity of demand using market data. Mosak's definition states that a good x is a gross complement of a good y if the derivative of the ordinary (Marshallian) individual demand for x with respect to the price of y is negative. In fact, in Mosak's case, one good may fail to be a gross complement of the other even while the other is a gross complement of it. The elasticity does not need to be symmetrical: one good can be a gross complement of a second good while the second is simultaneously a gross substitute for the first. Proof The standard Hicks decomposition of the effect on the ordinary demand for a good x of a simple price change in a good y, at utility level u and chosen bundle (x, y), is

∂x/∂p_y = ∂h_x/∂p_y − y ∂x/∂m

where h_x is the Hicksian (compensated) demand for x and m is income. If x is a gross substitute for y, the left-hand side of the equation and the first term of the right-hand side are positive. By the symmetry of Mosak's perspective, evaluating the corresponding equation for ∂y/∂p_x, the first term of the right-hand side stays the same (the compensated cross effects are symmetric), while some extreme cases exist where the income term x ∂y/∂m is large enough to make the whole right-hand side negative. In this case, y is a gross complement of x. Overall, ∂x/∂p_y and ∂y/∂p_x are not symmetrical. Effect of price change of complementary goods Substitute good References Goods (economics) Utility function types
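The cross-price elasticity test described above is straightforward to compute from two price and quantity observations. In the sketch below, the observations are made-up data for a hypothetical base good and complement; only the sign of the result matters for classifying the pair.

```python
def cross_price_elasticity(q1_a, q2_a, p1_b, p2_b):
    """Arc cross-price elasticity of demand for good A with respect to the price of good B."""
    pct_change_q = (q2_a - q1_a) / ((q1_a + q2_a) / 2)
    pct_change_p = (p2_b - p1_b) / ((p1_b + p2_b) / 2)
    return pct_change_q / pct_change_p

# Hypothetical observations: the price of petrol rises from 1.50 to 1.80,
# and the quantity of cars demanded falls from 1000 to 900.
e = cross_price_elasticity(q1_a=1000, q2_a=900, p1_b=1.50, p2_b=1.80)
print(round(e, 2))  # about -0.58

if e < 0:
    print("negative cross-price elasticity: the goods behave as gross complements")
elif e > 0:
    print("positive cross-price elasticity: the goods behave as gross substitutes")
```

As the Mosak discussion notes, the elasticity computed in one direction (quantity of A against the price of B) need not match the one computed in the other direction, so in principle both should be checked.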
Complementary good
[ "Physics" ]
1,130
[ "Materials", "Goods (economics)", "Matter" ]
591,513
https://en.wikipedia.org/wiki/Optical%20cavity
An optical cavity, resonating cavity or optical resonator is an arrangement of mirrors or other optical elements that confines light waves similarly to how a cavity resonator confines microwaves. Optical cavities are a major component of lasers, surrounding the gain medium and providing feedback of the laser light. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times, producing modes with certain resonance frequencies. Modes can be decomposed into longitudinal modes that differ only in frequency and transverse modes that have different intensity patterns across the cross section of the beam. Many types of optical cavities produce standing wave modes. Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them. Flat mirrors are not often used because of the difficulty of aligning them to the needed precision. The geometry (resonator type) must be chosen so that the beam remains stable, i.e. the size of the beam does not continually grow with multiple reflections. Resonator types are also designed to meet other criteria such as a minimum beam waist or having no focal point (and therefore no intense light at a single point) inside the cavity. Optical cavities are designed to have a large Q factor, meaning a beam undergoes many oscillation cycles with little attenuation. In the regime of high Q values, this is equivalent to the frequency line width being small compared to the resonant frequency of the cavity. Resonator modes Light confined in a resonator will reflect multiple times from the mirrors, and due to the effects of interference, only certain patterns and frequencies of radiation will be sustained by the resonator, with the others being suppressed by destructive interference. In general, radiation patterns which are reproduced on every round-trip of the light through the resonator are the most stable. These are known as the modes of the resonator. Resonator modes can be divided into two types: longitudinal modes, which differ in frequency from each other; and transverse modes, which may differ in both frequency and the intensity pattern of the light. The basic, or fundamental transverse mode of a resonator is a Gaussian beam. Resonator types The most common types of optical cavities consist of two facing plane (flat) or spherical mirrors. The simplest of these is the plane-parallel or Fabry–Pérot cavity, consisting of two opposing flat mirrors. While simple, this arrangement is rarely used in large-scale lasers due to the difficulty of alignment; the mirrors must be aligned parallel within a few seconds of arc, or "walkoff" of the intracavity beam will result in it spilling out of the sides of the cavity. However, this problem is much reduced for very short cavities with a small mirror separation distance (L < 1 cm). Plane-parallel resonators are therefore commonly used in microchip and microcavity lasers and semiconductor lasers. In these cases, rather than using separate mirrors, a reflective optical coating may be directly applied to the laser medium itself. The plane-parallel resonator is also the basis of the Fabry–Pérot interferometer. For a resonator with two mirrors with radii of curvature R1 and R2, there are a number of common cavity configurations. If the two radii are equal to half the cavity length (R1 = R2 = L / 2), a concentric or spherical resonator results. 
This type of cavity produces a diffraction-limited beam waist in the centre of the cavity, with large beam diameters at the mirrors, filling the whole mirror aperture. Similar to this is the hemispherical cavity, with one plane mirror and one mirror of radius equal to the cavity length. A common and important design is the confocal resonator, with mirrors of equal radii to the cavity length (R1 = R2 = L). This design produces the smallest possible beam diameter at the cavity mirrors for a given cavity length, and is often used in lasers where the purity of the transverse mode pattern is important. A concave-convex cavity has one convex mirror with a negative radius of curvature. This design produces no intracavity focus of the beam, and is thus useful in very high-power lasers where the intensity of the light might be damaging to the intracavity medium if brought to a focus. Less common resonator types include optical ring resonators and whispering-gallery mode resonators, in which a resonance is formed by waves moving in a closed loop rather than reflecting between two mirrors. Stability Only certain ranges of values for R1, R2, and L produce stable resonators in which periodic refocussing of the intracavity beam is produced. If the cavity is unstable, the beam size will grow without limit, eventually growing larger than the size of the cavity mirrors and being lost. By using methods such as ray transfer matrix analysis, it is possible to calculate a stability criterion: Values which satisfy the inequality correspond to stable resonators. The stability can be shown graphically by defining a stability parameter, g for each mirror: , and plotting g1 against g2 as shown. Areas bounded by the line g1 g2 = 1 and the axes are stable. Cavities at points exactly on the line are marginally stable; small variations in cavity length can cause the resonator to become unstable, and so lasers using these cavities are in practice often operated just inside the stability line. A simple geometric statement describes the regions of stability: A cavity is stable if the line segments between the mirrors and their centers of curvature overlap, but one does not lie entirely within the other. In the confocal cavity, if a ray is deviated from its original direction in the middle of the cavity, its displacement after reflecting from one of the mirrors is larger than in any other cavity design. This prevents amplified spontaneous emission and is important for designing high power amplifiers with good beam quality. Practical resonators If the optical cavity is not empty (e.g., a laser cavity which contains the gain medium), the value of L needs to be adjusted to account for the index of refraction of the medium. Optical elements such as lenses placed in the cavity alter the stability and mode size. In addition, for most gain media, thermal and other inhomogeneities create a variable lensing effect in the medium, which must be considered in the design of the laser resonator. Practical laser resonators may contain more than two mirrors; three- and four-mirror arrangements are common, producing a "folded cavity". Commonly, a pair of curved mirrors form one or more confocal sections, with the rest of the cavity being quasi-collimated and using plane mirrors. The shape of the laser beam depends on the type of resonator: The beam produced by stable, paraxial resonators can be well modeled by a Gaussian beam. 
In special cases the beam can be described as a single transverse mode and the spatial properties can be well described by the Gaussian beam itself. More generally, this beam may be described as a superposition of transverse modes. Accurate description of such a beam involves expansion over some complete, orthogonal set of functions (over two dimensions) such as Hermite polynomials or the Ince polynomials. Unstable laser resonators, on the other hand, have been shown to produce fractal shaped beams. Some intracavity elements are usually placed at a beam waist between folded sections. Examples include acousto-optic modulators for cavity dumping and vacuum spatial filters for transverse mode control. For some low power lasers, the laser gain medium itself may be positioned at a beam waist. Other elements, such as filters, prisms and diffraction gratings often need large quasi-collimated beams. These designs allow compensation of the cavity beam's astigmatism, which is produced by Brewster-cut elements in the cavity. A Z-shaped arrangement of the cavity also compensates for coma while the 'delta' or X-shaped cavity does not. Out of plane resonators lead to rotation of the beam profile and more stability. The heat generated in the gain medium leads to frequency drift of the cavity; the frequency can therefore be actively stabilized by locking it to an unpowered cavity. Similarly, the pointing stability of a laser may be further improved by spatial filtering with an optical fibre. Alignment Precise alignment is important when assembling an optical cavity. For best output power and beam quality, optical elements must be aligned such that the path followed by the beam is centered through each element. Simple cavities are often aligned with an alignment laser—a well-collimated visible laser that can be directed along the axis of the cavity. Observation of the path of the beam and its reflections from various optical elements allows the elements' positions and tilts to be adjusted. More complex cavities may be aligned using devices such as electronic autocollimators and laser beam profilers. Optical delay lines Optical cavities can also be used as multipass optical delay lines, folding a light beam so that a long path-length may be achieved in a small space. A plane-parallel cavity with flat mirrors produces a flat zigzag light path, but as discussed above, these designs are very sensitive to mechanical disturbances and walk-off. When curved mirrors are used in a nearly confocal configuration, the beam travels on a circular zigzag path. The latter is called a Herriott-type delay line. A fixed insertion mirror is placed off-axis near one of the curved mirrors, and a mobile pickup mirror is similarly placed near the other curved mirror. A flat linear stage with one pickup mirror is used in the case of flat mirrors, and a rotational stage with two mirrors is used for the Herriott-type delay line. The rotation of the beam inside the cavity alters the polarization state of the beam. To compensate for this, a single-pass delay line is also needed, made of either three or two mirrors in a 3D or 2D retro-reflection configuration, respectively, on top of a linear stage. To adjust for beam divergence a second carriage on the linear stage with two lenses can be used. The two lenses act as a telescope producing a flat phase front of a Gaussian beam on a virtual end mirror.
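As a rough illustration of the stability condition discussed above, the following sketch computes the two g-parameters of a two-mirror cavity and tests whether their product lies in the stable range 0 ≤ g1 g2 ≤ 1; the function and variable names here are illustrative choices of ours, not part of any standard optics library.

def cavity_is_stable(L, R1, R2):
    # L: mirror separation; R1, R2: mirror radii of curvature
    # (use float('inf') for a flat mirror).
    g1 = 1.0 - L / R1
    g2 = 1.0 - L / R2
    return g1, g2, 0.0 <= g1 * g2 <= 1.0

# A confocal cavity (R1 = R2 = L) sits on the edge of the stability diagram:
print(cavity_is_stable(L=0.30, R1=0.30, R2=0.30))    # g1 = g2 = 0, marginally stable
# A concave-convex cavity can still be stable if the product stays within [0, 1]:
print(cavity_is_stable(L=0.30, R1=0.50, R2=-1.00))   # g1 g2 = 0.52, stable

In practice, designs are usually kept safely inside the stability region rather than exactly on its boundary, for the reasons noted above.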
See also Optical feedback Multiple-prism grating laser oscillator (or Multiple-prism grating laser cavity) Coupled mode theory Vertical-cavity surface-emitting laser References Further reading Koechner, William. Solid-state laser engineering, 2nd ed. Springer Verlag (1988). An excellent two-part review of the history of optical cavities: Cavity, optical Laser science
Optical cavity
[ "Materials_science", "Engineering" ]
2,222
[ "Glass engineering and science", "Optical devices" ]
591,587
https://en.wikipedia.org/wiki/Hurewicz%20theorem
In mathematics, the Hurewicz theorem is a basic result of algebraic topology, connecting homotopy theory with homology theory via a map known as the Hurewicz homomorphism. The theorem is named after Witold Hurewicz, and generalizes earlier results of Henri Poincaré. Statement of the theorems The Hurewicz theorems are a key link between homotopy groups and homology groups. Absolute version For any path-connected space X and positive integer n there exists a group homomorphism called the Hurewicz homomorphism, from the n-th homotopy group to the n-th homology group (with integer coefficients). It is given in the following way: choose a canonical generator , then a homotopy class of maps is taken to . The Hurewicz theorem states cases in which the Hurewicz homomorphism is an isomorphism. For , if X is -connected (that is: for all ), then for all , and the Hurewicz map is an isomorphism. This implies, in particular, that the homological connectivity equals the homotopical connectivity when the latter is at least 1. In addition, the Hurewicz map is an epimorphism in this case. For , the Hurewicz homomorphism induces an isomorphism , between the abelianization of the first homotopy group (the fundamental group) and the first homology group. Relative version For any pair of spaces and integer there exists a homomorphism from relative homotopy groups to relative homology groups. The Relative Hurewicz Theorem states that if both and are connected and the pair is -connected then for and is obtained from by factoring out the action of . This is proved in, for example, by induction, proving in turn the absolute version and the Homotopy Addition Lemma. This relative Hurewicz theorem is reformulated by as a statement about the morphism where denotes the cone of . This statement is a special case of a homotopical excision theorem, involving induced modules for (crossed modules if ), which itself is deduced from a higher homotopy van Kampen theorem for relative homotopy groups, whose proof requires development of techniques of a cubical higher homotopy groupoid of a filtered space. Triadic version For any triad of spaces (i.e., a space X and subspaces A, B) and integer there exists a homomorphism from triad homotopy groups to triad homology groups. Note that The Triadic Hurewicz Theorem states that if X, A, B, and are connected, the pairs and are -connected and -connected, respectively, and the triad is -connected, then for and is obtained from by factoring out the action of and the generalised Whitehead products. The proof of this theorem uses a higher homotopy van Kampen type theorem for triadic homotopy groups, which requires a notion of the fundamental -group of an n-cube of spaces. Simplicial set version The Hurewicz theorem for topological spaces can also be stated for n-connected simplicial sets satisfying the Kan condition. Rational Hurewicz theorem Rational Hurewicz theorem: Let X be a simply connected topological space with for . Then the Hurewicz map induces an isomorphism for and a surjection for . Notes References Theorems in homotopy theory Homology theory Theorems in algebraic topology
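For reference, the absolute version above can be written out explicitly in standard notation (conventional symbols, stated here for convenience): the Hurewicz homomorphism is

h_* : \pi_n(X) \to H_n(X; \mathbb{Z}), \qquad h_*([f]) = f_*(u_n),

where u_n \in H_n(S^n; \mathbb{Z}) \cong \mathbb{Z} is a chosen generator and f : S^n \to X. If X is (n−1)-connected with n ≥ 2, then \tilde{H}_i(X) = 0 for i < n and h_* : \pi_n(X) \to H_n(X) is an isomorphism; for n = 1 it induces an isomorphism from the abelianization \pi_1(X)^{ab} onto H_1(X).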
Hurewicz theorem
[ "Mathematics" ]
705
[ "Theorems in algebraic topology", "Theorems in topology" ]
591,703
https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s%20theorem
In arithmetic combinatorics, Szemerédi's theorem is a result concerning arithmetic progressions in subsets of the integers. In 1936, Erdős and Turán conjectured that every set of integers A with positive natural density contains a k-term arithmetic progression for every k. Endre Szemerédi proved the conjecture in 1975. Statement A subset A of the natural numbers is said to have positive upper density if Szemerédi's theorem asserts that a subset of the natural numbers with positive upper density contains an arithmetic progression of length k for all positive integers k. An often-used equivalent finitary version of the theorem states that for every positive integer k and real number , there exists a positive integer such that every subset of {1, 2, ..., N} of size at least contains an arithmetic progression of length k. Another formulation uses the function rk(N), the size of the largest subset of {1, 2, ..., N} without an arithmetic progression of length k. Szemerédi's theorem is equivalent to the asymptotic bound That is, rk(N) grows less than linearly with N. History Van der Waerden's theorem, a precursor of Szemerédi's theorem, was proved in 1927. The cases k = 1 and k = 2 of Szemerédi's theorem are trivial. The case k = 3, known as Roth's theorem, was established in 1953 by Klaus Roth via an adaptation of the Hardy–Littlewood circle method. Szemerédi next proved the case k = 4 through combinatorics. Using an approach similar to the one he used for the case k = 3, Roth gave a second proof for k = 4 in 1972. The general case was settled in 1975, also by Szemerédi, who developed an ingenious and complicated extension of his previous combinatorial argument for k = 4 (called "a masterpiece of combinatorial reasoning" by Erdős). Several other proofs are now known, the most important being those by Hillel Furstenberg in 1977, using ergodic theory, and by Timothy Gowers in 2001, using both Fourier analysis and combinatorics while also introducing what is now called the Gowers norm. Terence Tao has called the various proofs of Szemerédi's theorem a "Rosetta stone" for connecting disparate fields of mathematics. Quantitative bounds It is an open problem to determine the exact growth rate of rk(N). The best known general bounds are where . The lower bound is due to O'Bryant building on the work of Behrend, Rankin, and Elkin. The upper bound is due to Gowers. For small k, there are tighter bounds than the general case. When k = 3, Bourgain, Heath-Brown, Szemerédi, Sanders, and Bloom established progressively smaller upper bounds, and Bloom and Sisask then proved the first bound that broke the so-called "logarithmic barrier". The current best bounds are , for some constant , respectively due to O'Bryant, and Bloom and Sisask (the latter built upon the breakthrough result of Kelley and Meka, who obtained the same upper bound, with "1/9" replaced by "1/12"). For k = 4, Green and Tao proved that For k=5 in 2023 and k≥5 in 2024 Leng, Sah and Sawhney proved in preprints that: Extensions and generalizations A multidimensional generalization of Szemerédi's theorem was first proven by Hillel Furstenberg and Yitzhak Katznelson using ergodic theory. Timothy Gowers, Vojtěch Rödl and Jozef Skokan with Brendan Nagle, Rödl, and Mathias Schacht, and Terence Tao provided combinatorial proofs. 
Alexander Leibman and Vitaly Bergelson generalized Szemerédi's to polynomial progressions: If is a set with positive upper density and are integer-valued polynomials such that , then there are infinitely many such that for all . Leibman and Bergelson's result also holds in a multidimensional setting. The finitary version of Szemerédi's theorem can be generalized to finite additive groups including vector spaces over finite fields. The finite field analog can be used as a model for understanding the theorem in the natural numbers. The problem of obtaining bounds in the k=3 case of Szemerédi's theorem in the vector space is known as the cap set problem. The Green–Tao theorem asserts the prime numbers contain arbitrarily long arithmetic progressions. It is not implied by Szemerédi's theorem because the primes have density 0 in the natural numbers. As part of their proof, Ben Green and Tao introduced a "relative" Szemerédi theorem which applies to subsets of the integers (even those with 0 density) satisfying certain pseudorandomness conditions. A more general relative Szemerédi theorem has since been given by David Conlon, Jacob Fox, and Yufei Zhao. The Erdős conjecture on arithmetic progressions would imply both Szemerédi's theorem and the Green–Tao theorem. See also Problems involving arithmetic progressions Ergodic Ramsey theory Arithmetic combinatorics Szemerédi regularity lemma Van der Waerden's theorem Notes Further reading External links PlanetMath source for initial version of this page Announcement by Ben Green and Terence Tao – the preprint is available at math.NT/0404188 Discussion of Szemerédi's theorem (part 1 of 5) Ben Green and Terence Tao: Szemerédi's theorem on Scholarpedia Additive combinatorics Ramsey theory Theorems in combinatorics Theorems in number theory
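To make the finitary statement above concrete, the following brute-force sketch (illustrative code of our own, not an efficient algorithm) checks whether a finite set of integers contains a k-term arithmetic progression; r_k(N) is then the largest size of a subset of {1, ..., N} for which such a check fails.

def has_k_term_ap(A, k):
    # Return True if the set A of integers contains a k-term arithmetic progression.
    elements = set(A)
    for a in elements:
        for b in elements:
            d = b - a
            if d <= 0:
                continue
            # check a, a + d, a + 2d, ..., a + (k - 1)d
            if all(a + i * d in elements for i in range(k)):
                return True
    return False

# {1, 2, 4, 8, 16} avoids 3-term progressions; adding 3 creates the progression 1, 2, 3:
print(has_k_term_ap({1, 2, 4, 8, 16}, 3))       # False
print(has_k_term_ap({1, 2, 3, 4, 8, 16}, 3))    # True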
Szemerédi's theorem
[ "Mathematics" ]
1,216
[ "Mathematical theorems", "Theorems in combinatorics", "Additive combinatorics", "Combinatorics", "Theorems in discrete mathematics", "Theorems in number theory", "Mathematical problems", "Ramsey theory", "Number theory" ]
591,768
https://en.wikipedia.org/wiki/Index%20of%20information%20theory%20articles
This is a list of information theory topics. A Mathematical Theory of Communication algorithmic information theory arithmetic coding channel capacity Communication Theory of Secrecy Systems conditional entropy conditional quantum entropy confusion and diffusion cross-entropy data compression entropic uncertainty (Hirchman uncertainty) entropy encoding entropy (information theory) Fisher information Hick's law Huffman coding information bottleneck method information theoretic security information theory joint entropy Kullback–Leibler divergence lossless compression negentropy noisy-channel coding theorem (Shannon's theorem) principle of maximum entropy quantum information science range encoding redundancy (information theory) Rényi entropy self-information Shannon–Hartley theorem Information theory Information theory topics
Index of information theory articles
[ "Mathematics", "Technology", "Engineering" ]
139
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
592,172
https://en.wikipedia.org/wiki/FU%20Orionis%20star
In stellar evolution, an FU Orionis star (also FU Orionis object, or FUor) is a pre–main-sequence star which displays an extreme change in magnitude and spectral type. One example is the star V1057 Cyg, which became six magnitudes brighter and went from spectral type dKe to F-type supergiant during 1969–1970. These stars are named after their type-star, FU Orionis. The current model, developed primarily by Lee Hartmann and Scott Jay Kenyon, associates the FU Orionis flare with abrupt mass transfer from an accretion disc onto a young, low mass T Tauri star. Mass accretion rates for these objects are estimated to be around 10^−4 solar masses per year. The rise time of these eruptions is typically on the order of 1 year, but can be much longer. The lifetime of this high-accretion, high-luminosity phase is on the order of decades. However, even with such a relatively short timespan, no FU Orionis object has been observed shutting off. By comparing the number of FUor outbursts to the rate of star formation in the solar neighborhood, it is estimated that the average young star undergoes approximately 10–20 FUor eruptions over its lifetime. The spectra of FU Orionis stars are dominated by absorption features produced in the inner accretion disc. The inner part of the disc produces the spectrum of an F–G supergiant, while the outer, slightly colder parts of the disk produce a K–M type supergiant spectrum that can be observed in the near-infrared. Because the disk radiation dominates in FU Orionis stars, this emission can be used to study the inner parts of the disk. The prototypes of this class are: FU Orionis, V1057 Cygni, V1515 Cygni, and the embedded protostar V1647 Orionis, which erupted in January 2004. See also Orion variable T Tauri star EX Lup variable star (also called an EXor) References Juhan Frank, Andrew King, Derek Raine (2002). Accretion power in astrophysics, Third Edition, Cambridge University Press. External links The Furor over FUOrs (15 November 2010) Discovery of possible FU-Ori and UX-Ori type objects (18 November 2009) https://web.archive.org/web/20060831060814/http://www.aavso.org/vstar/vsots/0202.shtml Star types Stellar evolution Articles containing video clips
FU Orionis star
[ "Physics", "Astronomy" ]
530
[ "Astronomical classification systems", "Star types", "Astrophysics", "Stellar evolution" ]
592,198
https://en.wikipedia.org/wiki/Stretch%20rule
In classical mechanics, the stretch rule (sometimes referred to as Routh's rule) states that the moment of inertia of a rigid object is unchanged when the object is stretched parallel to an axis of rotation that is a principal axis, provided that the distribution of mass remains unchanged except in the direction parallel to the axis. This operation leaves cylinders oriented parallel to the axis unchanged in radius. This rule can be applied with the parallel axis theorem and the perpendicular axis theorem to find moments of inertia for a variety of shapes. Derivation The (scalar) moment of inertia of a rigid body around the z-axis is given by: I_z = \int_V \rho(x, y, z) \, r^2 \, dV, where r = \sqrt{x^2 + y^2} is the distance of a point from the z-axis. We can expand as follows, since we are dealing with stretching over the z-axis only: I_z = \int_0^L \int_A \rho(x, y, z) \, (x^2 + y^2) \, dx \, dy \, dz. Here, L is the body's height. Stretching the object by a factor of a along the z-axis is equivalent to dividing the mass density by a (meaning \rho'(x, y, z) = \rho(x, y, z/a)/a), as well as integrating over new limits 0 and aL (the new height of the object), thus leaving the total mass unchanged. This means the new moment of inertia will be: I'_z = \int_0^{aL} \int_A \frac{\rho(x, y, z/a)}{a} \, (x^2 + y^2) \, dx \, dy \, dz = \int_0^L \int_A \rho(x, y, u) \, (x^2 + y^2) \, dx \, dy \, du = I_z, where the substitution u = z/a was made in the second step. References Classical mechanics Moment (physics)
Stretch rule
[ "Physics", "Mathematics" ]
241
[ "Physical quantities", "Quantity", "Classical mechanics", "Mechanics", "Moment (physics)" ]
592,505
https://en.wikipedia.org/wiki/Padding%20%28cryptography%29
In cryptography, padding is any of a number of distinct practices which all include adding data to the beginning, middle, or end of a message prior to encryption. In classical cryptography, padding may include adding nonsense phrases to a message to obscure the fact that many messages end in predictable ways, e.g. sincerely yours. Classical cryptography Official messages often start and end in predictable ways: My dear ambassador, Weather report, Sincerely yours, etc. The primary use of padding with classical ciphers is to prevent the cryptanalyst from using that predictability to find known plaintext that aids in breaking the encryption. Random length padding also prevents an attacker from knowing the exact length of the plaintext message. A famous example of classical padding which caused a great misunderstanding is "the world wonders" incident, which nearly caused an Allied loss at the World War II Battle off Samar, part of the larger Battle of Leyte Gulf. In that example, Admiral Chester Nimitz, the Commander in Chief, U.S. Pacific Fleet in WWII, sent the following message to Admiral Bull Halsey, commander of Task Force Thirty Four (the main Allied fleet) at the Battle of Leyte Gulf, on October 25, 1944: With padding (bolded) and metadata added, the message became: Halsey's radio operator mistook some of the padding for the message and so Admiral Halsey ended up reading the following message: Admiral Halsey interpreted the padding phrase "the world wonders" as a sarcastic reprimand, which caused him to have an emotional outburst and then lock himself in his bridge and sulk for an hour before he moved his forces to assist at the Battle off Samar. Halsey's radio operator should have been tipped off by the letters RR that "the world wonders" was padding; all other radio operators who received Admiral Nimitz's message correctly removed both padding phrases. Many classical ciphers arrange the plaintext into particular patterns (e.g., squares, rectangles, etc.) and if the plaintext does not exactly fit, it is often necessary to supply additional letters to fill out the pattern. Using nonsense letters for this purpose has a side benefit of making some kinds of cryptanalysis more difficult. Symmetric cryptography Hash functions Most modern cryptographic hash functions process messages in fixed-length blocks; all but the earliest hash functions include some sort of padding scheme. It is critical for cryptographic hash functions to employ termination schemes that prevent a hash from being vulnerable to length extension attacks. Many padding schemes are based on appending predictable data to the final block. For example, the pad could be derived from the total length of the message. This kind of padding scheme is commonly applied to hash algorithms that use the Merkle–Damgård construction such as MD-5, SHA-1, and SHA-2 family such as SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256 Block cipher mode of operation Cipher-block chaining (CBC) mode is an example of block cipher mode of operation. Some block cipher modes (CBC and PCBC essentially) for symmetric-key encryption algorithms require plain text input that is a multiple of the block size, so messages may have to be padded to bring them to this length. There is currently a shift to use streaming mode of operation instead of block mode of operation. An example of streaming mode encryption is the counter mode of operation. 
Streaming modes of operation can encrypt and decrypt messages of any size and therefore do not require padding. More intricate ways of ending a message such as ciphertext stealing or residual block termination avoid the need for padding. A disadvantage of padding is that it makes the plain text of the message susceptible to padding oracle attacks. Padding oracle attacks allow the attacker to gain knowledge of the plain text without attacking the block cipher primitive itself. Padding oracle attacks can be avoided by making sure that an attacker cannot gain knowledge about the removal of the padding bytes. This can be accomplished by verifying a message authentication code (MAC) or digital signature before removal of the padding bytes, or by switching to a streaming mode of operation. Bit padding Bit padding can be applied to messages of any size. A single '1' bit is added to the message and then as many '0' bits as required (possibly none) are added. The number of '0' bits added will depend on the block boundary to which the message needs to be extended. In bit terms this is "1000 ... 0000". This method can be used to pad messages which are any number of bits long, not necessarily a whole number of bytes long. For example, a message of 23 bits that is padded with 9 bits in order to fill a 32-bit block: ... | 1011 1001 1101 0100 0010 0111 0000 0000 | This padding is the first step of a two-step padding scheme used in many hash functions including MD5 and SHA. In this context, it is specified by RFC1321 step 3.1. This padding scheme is defined by ISO/IEC 9797-1 as Padding Method 2. Byte padding Byte padding can be applied to messages that can be encoded as an integral number of bytes. ANSI X9.23 In ANSI X9.23, between 1 and 8 bytes are always added as padding. The block is padded with random bytes (although many implementations use 00) and the last byte of the block is set to the number of bytes added. Example: In the following example the block size is 8 bytes, and padding is required for 4 bytes (in hexadecimal format) ... | DD DD DD DD DD DD DD DD | DD DD DD DD 00 00 00 04 | ISO 10126 ISO 10126 (withdrawn, 2007) specifies that the padding should be done at the end of that last block with random bytes, and the padding boundary should be specified by the last byte. Example: In the following example the block size is 8 bytes and padding is required for 4 bytes ... | DD DD DD DD DD DD DD DD | DD DD DD DD 81 A6 23 04 | PKCS#5 and PKCS#7 PKCS#7 is described in RFC 5652. Padding is in whole bytes. The value of each added byte is the number of bytes that are added, i.e. bytes, each of value are added. The number of bytes added will depend on the block boundary to which the message needs to be extended. The padding will be one of: 01 02 02 03 03 03 04 04 04 04 05 05 05 05 05 06 06 06 06 06 06 etc. This padding method (as well as the previous two) is well-defined if and only if is less than 256. Example: In the following example, the block size is 8 bytes and padding is required for 4 bytes ... | DD DD DD DD DD DD DD DD | DD DD DD DD 04 04 04 04 | If the length of the original data is an integer multiple of the block size , then an extra block of bytes with value is added. This is necessary so the deciphering algorithm can determine with certainty whether the last byte of the last block is a pad byte indicating the number of padding bytes added or part of the plaintext message. Consider a plaintext message that is an integer multiple of bytes with the last byte of plaintext being 01. 
With no additional information, the deciphering algorithm will not be able to determine whether the last byte is a plaintext byte or a pad byte. However, by adding bytes each of value after the 01 plaintext byte, the deciphering algorithm can always treat the last byte as a pad byte and strip the appropriate number of pad bytes off the end of the ciphertext; said number of bytes to be stripped based on the value of the last byte. PKCS#5 padding is identical to PKCS#7 padding, except that it has only been defined for block ciphers that use a 64-bit (8-byte) block size. In practice, the two can be used interchangeably. The maximum block size is 255, as it is the biggest number a byte can contain. ISO/IEC 7816-4 ISO/IEC 7816-4:2005 is identical to the bit padding scheme, applied to a plain text of N bytes. This means in practice that the first byte is a mandatory byte valued '80' (Hexadecimal) followed, if needed, by 0 to N − 1 bytes set to '00', until the end of the block is reached. ISO/IEC 7816-4 itself is a communication standard for smart cards containing a file system, and in itself does not contain any cryptographic specifications. Example: In the following example the block size is 8 bytes and padding is required for 4 bytes ... | DD DD DD DD DD DD DD DD | DD DD DD DD 80 00 00 00 | The next example shows a padding of just one byte ... | DD DD DD DD DD DD DD DD | DD DD DD DD DD DD DD 80 | Zero padding All the bytes that are required to be padded are padded with zero. The zero padding scheme has not been standardized for encryption, although it is specified for hashes and MACs as Padding Method 1 in ISO/IEC 10118-1 and ISO/IEC 9797-1. Example: In the following example the block size is 8 bytes and padding is required for 4 bytes ... | DD DD DD DD DD DD DD DD | DD DD DD DD 00 00 00 00 | Zero padding may not be reversible if the original file ends with one or more zero bytes, making it impossible to distinguish between plaintext data bytes and padding bytes. It may be used when the length of the message can be derived out-of-band. It is often applied to binary encoded strings (null-terminated string) as the null character can usually be stripped off as whitespace. Zero padding is sometimes also referred to as "null padding" or "zero byte padding". Some implementations may add an additional block of zero bytes if the plaintext is already divisible by the block size. Public key cryptography In public key cryptography, padding is the process of preparing a message for encryption or signing using a specification or scheme such as PKCS#1 v2.2, OAEP, PSS, PSSR, IEEE P1363 EMSA2 and EMSA5. A modern form of padding for asymmetric primitives is OAEP applied to the RSA algorithm, when it is used to encrypt a limited number of bytes. The operation is referred to as "padding" because originally, random material was simply appended to the message to make it long enough for the primitive. This form of padding is not secure and is therefore no longer applied. A modern padding scheme aims to ensure that the attacker cannot manipulate the plaintext to exploit the mathematical structure of the primitive and will usually be accompanied by a proof, often in the random oracle model, that breaking the padding scheme is as hard as solving the hard problem underlying the primitive. Traffic analysis and protection via padding Even if perfect cryptographic routines are used, the attacker can gain knowledge of the amount of traffic that was generated. 
The attacker might not know what Alice and Bob were talking about, but can know that they were talking and how much they talked. In some circumstances this leakage can be highly compromising. Consider for example when a military is organising a secret attack against another nation: it may suffice to alert the other nation for them to know merely that there is a lot of secret activity going on. As another example, when encrypting Voice Over IP streams that use variable bit rate encoding, the number of bits per unit of time is not obscured, and this can be exploited to guess spoken phrases. Similarly, the burst patterns that common video encoders produce are often sufficient to identify the streaming video a user is watching uniquely. Even the total size of an object alone, such as a website, file, software package download, or online video, can uniquely identify an object, if the attacker knows or can guess a known set the object comes from. The side-channel of encrypted content length was used to extract passwords from HTTPS communications in the well-known CRIME and BREACH attacks. Padding an encrypted message can make traffic analysis harder by obscuring the true length of its payload. The choice of length to pad a message to may be made either deterministically or randomly; each approach has strengths and weaknesses that apply in different contexts. Randomized padding A random number of additional padding bits or bytes may be appended to the end of a message, together with an indication at the end how much padding was added. If the amount of padding is chosen as a uniform random number between 0 and some maximum M, for example, then an eavesdropper will be unable to determine the message's length precisely within that range. If the maximum padding M is small compared to the message's total size, then this padding will not add much overhead, but the padding will obscure only the least-significant bits of the object's total length, leaving the approximate length of large objects readily observable and hence still potentially uniquely identifiable by their length. If the maximum padding M is comparable to the size of the payload, in contrast, an eavesdropper's uncertainty about the message's true payload size is much larger, at the cost that padding may add up to 100% overhead ( blow-up) to the message. In addition, in common scenarios in which an eavesdropper has the opportunity to see many successive messages from the same sender, and those messages are similar in ways the attacker knows or can guess, then the eavesdropper can use statistical techniques to decrease and eventually even eliminate the benefit of randomized padding. For example, suppose a user's application regularly sends messages of the same length, and the eavesdropper knows or can guess fact based on fingerprinting the user's application for example. Alternatively, an active attacker might be able to induce an endpoint to send messages regularly, such as if the victim is a public server. In such cases, the eavesdropper can simply compute the average over many observations to determine the length of the regular message's payload. Deterministic padding A deterministic padding scheme always pads a message payload of a given length to form an encrypted message of a particular corresponding output length. 
When many payload lengths map to the same padded output length, an eavesdropper cannot distinguish or learn any information about the payload's true length within one of these length buckets, even after many observations of the identical-length messages being transmitted. In this respect, deterministic padding schemes have the advantage of not leaking any additional information with each successive message of the same payload size. On the other hand, suppose an eavesdropper can benefit from learning about small variations in payload size, such as plus or minus just one byte in a password-guessing attack for example. If the message sender is unlucky enough to send many messages whose payload lengths vary by only one byte, and that length is exactly on the border between two of the deterministic padding classes, then these plus-or-minus one payload lengths will consistently yield different padded lengths as well (plus-or-minus one block for example), leaking exactly the fine-grained information the attacker desires. Against such risks, randomized padding can offer more protection by independently obscuring the least-significant bits of message lengths. Common deterministic padding methods include padding to a constant block size and padding to the next-larger power of two. Like randomized padding with a small maximum amount M, however, padding deterministically to a block size much smaller than the message payload obscures only the least-significant bits of the messages true length, leaving the messages's true approximate length largely unprotected. Padding messages to a power of two (or any other fixed base) reduces the maximum amount of information that the message can leak via its length from to . Padding to a power of two increases message size overhead by up to 100%, however, and padding to powers of larger integer bases increase maximum overhead further. The PADMÉ scheme, proposed for padded uniform random blobs or PURBs, deterministically pads messages to lengths representable as a floating point number whose mantissa is no longer (i.e., contains no more significant bits) than its exponent. This length constraint ensures that a message leaks at most bits of information via its length, like padding to a power of two, but incurs much less overhead of at most 12% for tiny messages and decreasing gradually with message size. See also Chaffing and winnowing, mixing in large amounts of nonsense before sending Ciphertext stealing, another approach to deal with messages that are not a multiple of the block length Initialization vector, salt (cryptography), which are sometimes confused with padding Key encapsulation, an alternative to padding for public key systems used to exchange symmetric keys PURB or padded uniform random blob, an encryption discipline that minimizes leakage from either metadata or length Russian copulation, another technique to prevent cribs References Further reading XCBC: csrc.nist.gov/groups/ST/toolkit/BCM/documents/workshop2/presentations/xcbc.pdf Cryptography Padding algorithms
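As an illustration of the PKCS#7 byte-padding scheme described above, here is a minimal sketch in Python; the helper names are ours, and real applications should rely on a vetted cryptographic library rather than hand-rolled padding.

def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    # Append n bytes, each of value n, so the result is a multiple of block_size.
    if not 1 <= block_size <= 255:
        raise ValueError("block_size must be between 1 and 255")
    n = block_size - (len(data) % block_size)   # always between 1 and block_size
    return data + bytes([n]) * n

def pkcs7_unpad(padded: bytes) -> bytes:
    # Strip PKCS#7 padding; reject malformed input.
    n = padded[-1]
    if n == 0 or n > len(padded) or padded[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return padded[:-n]

# A 12-byte message padded to a 16-byte block gains four bytes of value 0x04:
print(pkcs7_pad(b"\xDD" * 12).hex())                           # dddddddddddddddddddddddd04040404
print(pkcs7_unpad(pkcs7_pad(b"\xDD" * 12)) == b"\xDD" * 12)    # True

As the padding oracle discussion above notes, in a real protocol the unpadding check must not reveal, through error messages or timing, whether the padding was valid.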
Padding (cryptography)
[ "Mathematics", "Engineering" ]
3,709
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
592,513
https://en.wikipedia.org/wiki/Superlubricity
Superlubricity is a regime of relative motion in which friction vanishes or very nearly vanishes. However, the definition of "vanishing" friction level is not clear, which makes the term vague. As an ad hoc definition, a kinetic coefficient of friction less than 0.01 can be adopted. This definition also requires further discussion and clarification. Superlubricity may occur when two crystalline surfaces slide over each other in dry incommensurate contact. This was first described in the early 1980s for Frenkel–Kontorova models and is called the Aubry transition. It has been extensively studied as a mathematical model, in atomistic simulations and in a range of experimental systems. This effect, also called structural lubricity, was verified between two graphite surfaces in 2004. The atoms in graphite are oriented in a hexagonal manner and form an atomic hill-and-valley landscape, which looks like an egg-crate. When the two graphite surfaces are in registry (every 60 degrees), the friction force is high. When the two surfaces are rotated out of registry, the friction is greatly reduced. This is like two egg-crates which can slide over each other more easily when they are "twisted" with respect to each other. Observation of superlubricity in microscale graphite structures was reported in 2012, by shearing a square graphite mesa a few micrometers across, and observing the self-retraction of the sheared layer. Such effects were also theoretically described for a model of graphene and nickel layers. This observation, which is reproducible even under ambient conditions, shifts interest in superlubricity from a primarily academic topic, accessible only under highly idealized conditions, to one with practical implications for micro and nanomechanical devices. A state of ultralow friction can also be achieved when a sharp tip slides over a flat surface and the applied load is below a certain threshold. Such a "superlubric" threshold depends on the tip-surface interaction and the stiffness of the materials in contact, as described by the Tomlinson model. The threshold can be significantly increased by exciting the sliding system at its resonance frequency, which suggests a practical way to limit wear in nanoelectromechanical systems. Superlubricity was also observed between a gold AFM tip and Teflon substrate due to repulsive Van der Waals forces and hydrogen-bonded layer formed by glycerol on the steel surfaces. Formation of the hydrogen-bonded layer was also shown to lead to superlubricity between quartz glass surfaces lubricated by biological liquid obtained from mucilage of Brasenia schreberi. Other mechanisms of superlubricity may include: (a) thermodynamic repulsion due to a layer of free or grafted macromolecules between the bodies so that the entropy of the intermediate layer decreases at small distances due to stronger confinement; (b) electrical repulsion due to external electrical voltage; (c) repulsion due to electrical double layer; (d) repulsion due to thermal fluctuations. The similarity of the term superlubricity with terms such as superconductivity and superfluidity is misleading; other energy dissipation mechanisms can lead to a finite (normally small) friction force. 
Superlubricity is more analogous to phenomena such as superelasticity, in which substances such as Nitinol have very low, but nonzero, elastic moduli; supercooling, in which substances remain liquid until a lower-than-normal temperature; super black, which reflects very little light; giant magnetoresistance, in which very large but finite magnetoresistance effects are observed in alternating nonmagnetic and ferromagnetic layers; superhard materials, which are as hard as, or nearly as hard as, diamond; and superlensing, which achieves a resolution that, while finer than the diffraction limit, is still finite. Macroscale In 2015, researchers first obtained evidence for superlubricity at the macroscale. The experiments were supported by computational studies. The Mira supercomputer simulated up to 1.2 million atoms for dry environments and up to 10 million atoms for humid environments. The researchers used LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) code to carry out reactive molecular dynamics simulations. The researchers optimized LAMMPS and its implementation of ReaxFF by adding OpenMP threading, replacing MPI point-to-point communication with MPI collectives in key algorithms, and leveraging MPI I/O. These enhancements doubled performance. Applications Friction is known to be a major consumer of energy; for instance, in a detailed study it was found that it may lead to one third of the energy losses in new automobile engines. Superlubricious coatings could reduce this. Potential applications include computer hard drives, wind turbine gears, and mechanical rotating seals for microelectromechanical and nanoelectromechanical systems. See also Friction force microscopy Tomlinson model References Condensed matter physics
Superlubricity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,047
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
592,897
https://en.wikipedia.org/wiki/Hellinger%E2%80%93Toeplitz%20theorem
In functional analysis, a branch of mathematics, the Hellinger–Toeplitz theorem states that an everywhere-defined symmetric operator on a Hilbert space with inner product is bounded. By definition, an operator A is symmetric if for all x, y in the domain of A. Note that symmetric everywhere-defined operators are necessarily self-adjoint, so this theorem can also be stated as follows: an everywhere-defined self-adjoint operator is bounded. The theorem is named after Ernst David Hellinger and Otto Toeplitz. This theorem can be viewed as an immediate corollary of the closed graph theorem, as self-adjoint operators are closed. Alternatively, it can be argued using the uniform boundedness principle. One relies on the symmetric assumption, therefore the inner product structure, in proving the theorem. Also crucial is the fact that the given operator A is defined everywhere (and, in turn, the completeness of Hilbert spaces). The Hellinger–Toeplitz theorem reveals certain technical difficulties in the mathematical formulation of quantum mechanics. Observables in quantum mechanics correspond to self-adjoint operators on some Hilbert space, but some observables (like energy) are unbounded. By Hellinger–Toeplitz, such operators cannot be everywhere defined (but they may be defined on a dense subset). Take for instance the quantum harmonic oscillator. Here the Hilbert space is L2(R), the space of square integrable functions on R, and the energy operator H is defined by (assuming the units are chosen such that ℏ = m = ω = 1) This operator is self-adjoint and unbounded (its eigenvalues are 1/2, 3/2, 5/2, ...), so it cannot be defined on the whole of L2(R). References Reed, Michael and Simon, Barry: Methods of Mathematical Physics, Volume 1: Functional Analysis. Academic Press, 1980. See Section III.5. Theorems in functional analysis Hilbert spaces
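Written out in standard notation (our rendering of the formulas referred to above), the symmetry condition is

\langle Ax, y \rangle = \langle x, Ay \rangle \quad \text{for all } x, y \in \operatorname{dom}(A),

and, with ℏ = m = ω = 1, the harmonic-oscillator Hamiltonian acts as

(Hf)(x) = -\tfrac{1}{2} f''(x) + \tfrac{1}{2} x^2 f(x),

whose eigenvalues n + 1/2 for n = 0, 1, 2, ... are the values 1/2, 3/2, 5/2, ... quoted above.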
Hellinger–Toeplitz theorem
[ "Physics", "Mathematics" ]
422
[ "Hilbert spaces", "Theorems in mathematical analysis", "Theorems in functional analysis", "Quantum mechanics" ]
593,255
https://en.wikipedia.org/wiki/Venturi%20effect
The Venturi effect is the reduction in fluid pressure that results when a moving fluid speeds up as it flows from one section of a pipe to a smaller section. The Venturi effect is named after its discoverer, the 18th-century Italian physicist Giovanni Battista Venturi. The effect has various engineering applications, as the reduction in pressure inside the constriction can be used both for measuring the fluid flow and for moving other fluids (e.g. in a vacuum ejector). Background In inviscid fluid dynamics, an incompressible fluid's velocity must increase as it passes through a constriction in accord with the principle of mass continuity, while its static pressure must decrease in accord with the principle of conservation of mechanical energy (Bernoulli's principle) or according to the Euler equations. Thus, any gain in kinetic energy a fluid may attain by its increased velocity through a constriction is balanced by a drop in pressure because of its loss in potential energy. By measuring pressure, the flow rate can be determined, as in various flow measurement devices such as Venturi meters, Venturi nozzles and orifice plates. Referring to the adjacent diagram, using Bernoulli's equation in the special case of steady, incompressible, inviscid flows (such as the flow of water or other liquid, or low-speed flow of gas) along a streamline, the theoretical static pressure drop at the constriction is given by where is the density of the fluid, is the (slower) fluid velocity where the pipe is wider, and is the (faster) fluid velocity where the pipe is narrower (as seen in the figure). The static pressure at each position is measured using a small tube either outside and ending at the wall or into the pipe where the small tube is perpendicular to the flow direction. Choked flow The limiting case of the Venturi effect is when a fluid reaches the state of choked flow, where the fluid velocity approaches the local speed of sound. When a fluid system is in a state of choked flow, a further decrease in the downstream pressure environment will not lead to an increase in velocity, unless the fluid is compressed. The mass flow rate for a compressible fluid will increase with increased upstream pressure, which will increase the density of the fluid through the constriction (though the velocity will remain constant). This is the principle of operation of a de Laval nozzle. Increasing source temperature will also increase the local sonic velocity, thus allowing increased mass flow rate, but only if the nozzle area is also increased to compensate for the resulting decrease in density. Expansion of the section The Bernoulli equation is invertible, and pressure should rise when a fluid slows down. Nevertheless, if there is an expansion of the tube section, turbulence will appear, and the theorem will not hold. In all experimental Venturi tubes, the pressure in the entrance is compared to the pressure in the middle section; the output section is never compared with them. Experimental apparatus Venturi tubes The simplest apparatus is a tubular setup known as a Venturi tube or simply a Venturi (plural: "Venturis" or occasionally "Venturies"). Fluid flows through a length of pipe of varying diameter. To avoid undue aerodynamic drag, a Venturi tube typically has an entry cone of 30 degrees and an exit cone of 5 degrees. Venturi tubes are often used in processes where permanent pressure loss is not tolerable and where maximum accuracy is needed in case of highly viscous liquids. 
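In explicit form (standard notation, stated here for convenience), the pressure drop mentioned above follows from Bernoulli's equation together with mass continuity:

p_1 - p_2 = \frac{\rho}{2}\left(v_2^2 - v_1^2\right), \qquad A_1 v_1 = A_2 v_2,

where A_1 and A_2 are the cross-sectional areas of the wide section and of the constriction. Solving for the throat velocity and multiplying by the throat area gives the ideal volumetric flow rate

Q = A_2 \sqrt{\frac{2\,(p_1 - p_2)}{\rho\left(1 - (A_2/A_1)^2\right)}},

which is the relation exploited by the flow-measurement devices discussed below; practical meters multiply this by an empirically determined discharge coefficient.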
Orifice plate Venturi tubes are more expensive to construct than simple orifice plates, and both function on the same basic principle. However, for any given differential pressure, orifice plates cause significantly more permanent energy loss. Instrumentation and measurement Both Venturi tubes and orifice plates are used in industrial applications and in scientific laboratories for measuring the flow rate of liquids. Flow rate A Venturi can be used to measure the volumetric flow rate, , using Bernoulli's principle. Since then A Venturi can also be used to mix a liquid with a gas. If a pump forces the liquid through a tube connected to a system consisting of a Venturi to increase the liquid speed (the diameter decreases), a short piece of tube with a small hole in it, and last a Venturi that decreases speed (so the pipe gets wider again), the gas will be sucked in through the small hole because of changes in pressure. At the end of the system, a mixture of liquid and gas will appear. See aspirator and pressure head for discussion of this type of siphon. Differential pressure As fluid flows through a Venturi, the expansion and compression of the fluids cause the pressure inside the Venturi to change. This principle can be used in metrology for gauges calibrated for differential pressures. This type of pressure measurement may be more convenient, for example, to measure fuel or combustion pressures in jet or rocket engines. The first large-scale Venturi meters to measure liquid flows were developed by Clemens Herschel who used them to measure small and large flows of water and wastewater beginning at the end of the 19th century. While working for the Holyoke Water Power Company, Herschel would develop the means for measuring these flows to determine the water power consumption of different mills on the Holyoke Canal System, first beginning development of the device in 1886, two years later he would describe his invention of the Venturi meter to William Unwin in a letter dated June 5, 1888. Compensation for temperature, pressure, and mass Fundamentally, pressure-based meters measure kinetic energy density. Bernoulli's equation (used above) relates this to mass density and volumetric flow: where constant terms are absorbed into k. Using the definitions of density (), molar concentration (), and molar mass (), one can also derive mass flow or molar flow (i.e. standard volume flow): However, measurements outside the design point must compensate for the effects of temperature, pressure, and molar mass on density and concentration. The ideal gas law is used to relate actual values to design values: Substituting these two relations into the pressure-flow equations above yields the fully compensated flows: Q, m, or n are easily isolated by dividing and taking the square root. Note that pressure-, temperature-, and mass-compensation is required for every flow, regardless of the end units or dimensions. Also we see the relations: Examples The Venturi effect may be observed or used in the following: Machines During Underway replenishment the helmsman of each ship must constantly steer away from the other ship due to the Venturi effect, otherwise they will collide. 
Cargo eductors on oil product and chemical ship tankers Inspirators mix air and flammable gas in grills, gas stoves and Bunsen burners Water aspirators produce a partial vacuum using the kinetic energy from the faucet water pressure Steam siphons use the kinetic energy from the steam pressure to create a partial vacuum Atomizers disperse perfume or spray paint (i.e. from a spray gun or airbrush) Carburetors often use the effect to suck gasoline into an engine's intake air stream, where the upstream air pressure is fed to the float bowl as an alternative to designs in which ambient air pressure is in the float bowl and the effect comes from Bernoulli's principle Cylinder heads in piston engines have multiple Venturi areas like the valve seat and the port entrance, although these are not part of the design intent, merely a byproduct, and any venturi effect is without specific function. Wine aerators infuse air into wine as it is poured into a glass Protein skimmers filter saltwater aquaria Automated pool cleaners use pressure-side water flow to collect sediment and debris Clarinets use a reverse taper to speed the air down the tube, enabling better tone, response and intonation The leadpipe of a trombone, affecting the timbre Industrial vacuum cleaners use compressed air Venturi scrubbers are used to clean flue gas emissions Injectors (also called ejectors) are used to add chlorine gas to water treatment chlorination systems Steam injectors use the Venturi effect and the latent heat of evaporation to deliver feed water to a steam locomotive boiler. Sandblasting nozzles accelerate an air and media mixture Bilge water can be emptied from a moving boat through a small waste gate in the hull. The air pressure inside the moving boat is greater than the water sliding by beneath. A scuba diving regulator uses the Venturi effect to assist in maintaining the flow of gas once it starts flowing In recoilless rifles to decrease the recoil of firing The diffuser on an automobile Race cars utilising ground effect to increase downforce and thus become capable of higher cornering speeds Foam proportioners used to induct fire fighting foam concentrate into fire protection systems Trompe air compressors entrain air into a falling column of water The bolts in some brands of paintball markers Low-speed wind tunnels can be considered very large Venturi tubes because they take advantage of the Venturi effect to increase velocity and decrease pressure to simulate expected flight conditions. Architecture The Hawa Mahal of Jaipur also utilizes the Venturi effect, by allowing cool air to pass through, thus making the whole area more pleasant during the high temperatures in summer. Large cities where wind is forced between buildings: the gap between the Twin Towers of the original World Trade Center was an extreme example of the phenomenon, which made the ground level plaza notoriously windswept. In fact, some gusts were so high that pedestrian travel had to be aided by ropes. In the south of Iraq, near the modern town of Nasiriyah, a 4000-year-old flume structure has been discovered at the ancient site of Girsu. This construction by the ancient Sumerians forced the contents of a nineteen-kilometre canal through a constriction to enable the side-channeling of water off to agricultural lands from a higher origin than would have been the case without the flume. A recent dig by archaeologists from the British Museum confirmed the finding.
Nature In windy mountain passes, resulting in erroneous pressure altimeter readings The mistral wind in southern France increases in speed through the Rhone valley. See also Joule–Thomson effect Venturi flume Parshall flume References External links 3D animation of the Differential Pressure Flow Measuring Principle (Venturi meter) Use of the Venturi effect for gas pumps to know when to turn off (video) Fluid dynamics
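As a numerical companion to the flow-rate relation in the measurement sections above, the following sketch (illustrative only; the function and variable names are ours) computes the ideal volumetric flow through a Venturi from the measured pressure difference, neglecting viscous losses and the discharge coefficient a real meter would include.

import math

def venturi_flow_rate(dp, rho, d1, d2):
    # dp: static pressure drop p1 - p2 in Pa; rho: fluid density in kg/m^3;
    # d1, d2: pipe and throat diameters in m. Returns Q in m^3/s.
    a1 = math.pi * d1**2 / 4.0
    a2 = math.pi * d2**2 / 4.0
    v2 = math.sqrt(2.0 * dp / (rho * (1.0 - (a2 / a1) ** 2)))
    return a2 * v2

# Water (1000 kg/m^3) in a 50 mm pipe with a 25 mm throat and a 2 kPa drop:
print(round(venturi_flow_rate(dp=2000.0, rho=1000.0, d1=0.050, d2=0.025), 5))   # 0.00101 m^3/s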
Venturi effect
[ "Chemistry", "Engineering" ]
2,167
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
14,535,189
https://en.wikipedia.org/wiki/Fisher%27s%20inequality
Fisher's inequality is a necessary condition for the existence of a balanced incomplete block design, that is, a system of subsets that satisfy certain prescribed conditions in combinatorial mathematics. It was outlined by Ronald Fisher, a population geneticist and statistician, who was concerned with the design of experiments such as studying the differences among several different varieties of plants, under each of a number of different growing conditions, called blocks. Let: v be the number of varieties of plants; b be the number of blocks. To be a balanced incomplete block design it is required that: k different varieties are in each block, k < v; no variety occurs twice in any one block; any two varieties occur together in exactly λ blocks; each variety occurs in exactly r blocks. Fisher's inequality states simply that b ≥ v. Proof Let the incidence matrix N be a v × b matrix defined so that N_{ij} is 1 if element i is in block j and 0 otherwise. Then B = NN^T is a v × v matrix such that B_{ii} = r and B_{ij} = λ for i ≠ j. Since r > λ, det(B) = (r − λ)^{v−1}(r + (v − 1)λ) ≠ 0, so rank(B) = v; on the other hand, rank(B) ≤ rank(N) ≤ b, so v ≤ b. Generalization Fisher's inequality is valid for more general classes of designs. A pairwise balanced design (or PBD) is a set X together with a family of non-empty subsets of X (which need not have the same size and may contain repeats) such that every pair of distinct elements of X is contained in exactly λ (a positive integer) subsets. The set X is allowed to be one of the subsets, and if all the subsets are copies of X, the PBD is called "trivial". The size of X is v and the number of subsets in the family (counted with multiplicity) is b. Theorem: For any non-trivial PBD, v ≤ b. This result also generalizes the Erdős–De Bruijn theorem: For a PBD with λ = 1 having no blocks of size 1 or size v, v ≤ b, with equality if and only if the PBD is a projective plane or a near-pencil (meaning that exactly v − 1 of the points are collinear). In another direction, Ray-Chaudhuri and Wilson proved in 1975 that in a 2s-design, the number of blocks is at least \binom{v}{s}. Notes References R. C. Bose, "A Note on Fisher's Inequality for Balanced Incomplete Block Designs", Annals of Mathematical Statistics, 1949, pages 619–620. R. A. Fisher, "An examination of the different possible solutions of a problem in incomplete blocks", Annals of Eugenics, volume 10, 1940, pages 52–75. Combinatorial design Design of experiments Families of sets Statistical inequalities Extremal combinatorics Ronald Fisher
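As a concrete check of the proof above, the following snippet (illustrative code of ours, assuming NumPy is available) builds the incidence matrix of the Fano plane, a balanced incomplete block design with v = b = 7, k = 3, r = 3 and λ = 1, and verifies that NN^T = (r − λ)I + λJ and that b ≥ v.

import numpy as np

# Blocks (lines) of the Fano plane on the points 0..6.
blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]
v, b, r, lam = 7, len(blocks), 3, 1

# v x b incidence matrix: N[i, j] = 1 if point i lies in block j.
N = np.zeros((v, b), dtype=int)
for j, blk in enumerate(blocks):
    for i in blk:
        N[i, j] = 1

expected = (r - lam) * np.eye(v, dtype=int) + lam * np.ones((v, v), dtype=int)
print(np.array_equal(N @ N.T, expected))   # True: diagonal entries r, off-diagonal entries lambda
print(b >= v)                              # True: Fisher's inequality holds, here with equality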
Fisher's inequality
[ "Mathematics" ]
523
[ "Theorems in statistics", "Extremal combinatorics", "Statistical inequalities", "Combinatorial design", "Combinatorics", "Basic concepts in set theory", "Families of sets", "Inequalities (mathematics)" ]
14,536,459
https://en.wikipedia.org/wiki/VIPR1
Vasoactive intestinal polypeptide receptor 1, also known as VPAC1, is a protein that in humans is encoded by the VIPR1 gene. VPAC1 is expressed in the brain (cerebral cortex, hippocampus, amygdala), lung, prostate, peripheral blood leukocytes, liver, small intestine, heart, spleen, placenta, kidney, thymus and testis. Function VPAC1 is a receptor for vasoactive intestinal peptide (VIP), a small neuropeptide. Vasoactive intestinal peptide is involved in smooth muscle relaxation, exocrine and endocrine secretion, and water and ion flux in lung and intestinal epithelia. Its actions are effected through integral membrane receptors associated with a guanine nucleotide binding protein which activates adenylate cyclase. VIP acts in an autocrine fashion via VPAC1 to inhibit megakaryocyte proliferation and induce proplatelet formation. Clinical significance Patients with idiopathic achalasia show a significant difference in the distribution of SNPs affecting VIPR1. VIP and PACAP levels were decreased in the anterior vaginal wall of stress urinary incontinence and pelvic organ prolapse patients; they may participate in the pathophysiology of these diseases. See also Vasoactive intestinal peptide receptor References Further reading G protein-coupled receptors
VIPR1
[ "Chemistry" ]
300
[ "G protein-coupled receptors", "Signal transduction" ]
14,537,991
https://en.wikipedia.org/wiki/Glass%20casting
Glass casting is the process in which glass objects are cast by directing molten glass into a mould where it solidifies. The technique has been used since the 15th century BCE in both Ancient Egypt and Mesopotamia. Modern cast glass is formed by a variety of processes such as kiln casting or casting into sand, graphite or metal moulds. History Roman period During the Roman period, moulds consisting of two or more interlocking parts were used to create blank glass dishes. Glass could be added to the mould either by frit casting, where the mould was filled with chips of glass (called frit) and then heated to melt the glass, or by pouring molten glass into the mould. Evidence from Pompeii suggests that molten hot glass may have been introduced as early as the mid-1st century CE. Blank vessels were then annealed, fixed to lathes and cut and polished on all surfaces to achieve their final shape. Pliny the Elder indicates in his Natural History (36.193) that lathes were used in the production of most glass of the mid-1st century. Italy is believed to have been the source of the majority of early Imperial polychrome cast glass, whereas monochrome cast glasses are more predominant elsewhere in the Mediterranean. Forms produced show clear inspiration from the Roman bronze and silver industries, and in the case of carinated bowls and dishes, from the ceramic industry. Cast vessel forms became more limited during the late 1st century, but continued in production into the second or third decade of the 2nd century. Colourless cast bowls were widespread throughout the Roman world in the late 1st and early 2nd century CE, and may have been produced at more than one centre. Some revival of the casting technique appears in the 3rd or 4th century, but appears to have produced relatively small numbers of vessels Modern techniques Sand casting Sand casting involves the use of hot molten glass poured directly into a preformed mould. It is a process similar to casting metal into a mould. The sand mould is typically prepared by using a mixture of clean sand and a small proportion of the water-absorbing clay bentonite. Bentonite acts as a binding material. In the process, a small amount of water is added to the sand-bentonite mixture, and this is well mixed and sifted before addition to an open topped container. A template is prepared (typically made of wood, or a found object or even a body part such as a hand or fist) which is tightly pressed into the sand to make a clean impression. This impression then forms the mould. The surface of the mould can be covered in coloured glass powders or frits to give a surface colour to the sand cast glass object. When the mould preparation is complete hot glass is ladled from the furnace at temperatures of about to allow it to freely pour. The hot glass is poured directly into the mould. During the pouring process, glass or compatible objects may be placed to later give the appearance of floating in the solid glass object. This very immediate and dynamic method was pioneered and perfected in the 1980s by the Swedish artist Bertil Vallien. Kiln casting Kiln casting involves the preparation of a mould which is often made of a mixture of plaster and refractory materials such as silica. A model can be made from any solid material, such as wax, wood, or metal, and after taking a cast of the model (a process called investment) the model is removed from the mould. One method of forming a mould is by the Cire perdue or "lost wax" method. 
Using this method, a model can be made from wax and after investment the wax can be steamed or burned away in a kiln, forming a cavity. The mould is equipped with a funnel-like reservoir filled with solid glass granules or lumps. The heat resistant mould is then placed in a kiln and heated to between and to melt the glass. As the glass melts it runs into and fills the mould. Such kiln cast work can be of very large dimensions, as in the work of Czech artists Stanislav Libenský and Jaroslava Brychtová. Kiln cast glass has become an important material for contemporary artists such as Clifford Rainey, Karen LaMonte and Tomasz Urbanowicz, author of the "United Earth" glass sculpture in the European Parliament in Strasbourg. Pâte de verre Pâte de verre is a form of kiln casting and literally translated means glass paste. In this process, finely crushed glass is mixed with a binding material, such as a mixture of gum arabic and water, and often with colourants and enamels. The resultant paste is applied to the inner surface of a negative mould forming a coating. When the coated mould is fired at the appropriate temperature the glass is fused creating a hollow object that can have thick or thin walls depending on the thickness of the pate de verre layers. Daum, a French commercial crystal manufacturer, produce highly sculptural pieces in pate de verre. Graphite casting Graphite is also used in the hot forming of glass. Graphite moulds are prepared by carving into them, machining them into curved forms, or stacking them into shapes. Molten glass is poured into a mould where it is cooled until hard enough to be removed and placed into an annealing kiln to cool slowly. See also References Further reading Glass art Glass production Casting (manufacturing) Casting
Glass casting
[ "Materials_science", "Engineering" ]
1,120
[ "Glass engineering and science", "Glass production" ]
17,328,425
https://en.wikipedia.org/wiki/Viscoplasticity
Viscoplasticity is a theory in continuum mechanics that describes the rate-dependent inelastic behavior of solids. Rate-dependence in this context means that the deformation of the material depends on the rate at which loads are applied. The inelastic behavior that is the subject of viscoplasticity is plastic deformation which means that the material undergoes unrecoverable deformations when a load level is reached. Rate-dependent plasticity is important for transient plasticity calculations. The main difference between rate-independent plastic and viscoplastic material models is that the latter exhibit not only permanent deformations after the application of loads but continue to undergo a creep flow as a function of time under the influence of the applied load. The elastic response of viscoplastic materials can be represented in one-dimension by Hookean spring elements. Rate-dependence can be represented by nonlinear dashpot elements in a manner similar to viscoelasticity. Plasticity can be accounted for by adding sliding frictional elements as shown in Figure 1. In the figure is the modulus of elasticity, is the viscosity parameter and is a power-law type parameter that represents non-linear dashpot . The sliding element can have a yield stress () that is strain rate dependent, or even constant, as shown in Figure 1c. Viscoplasticity is usually modeled in three-dimensions using overstress models of the Perzyna or Duvaut-Lions types. In these models, the stress is allowed to increase beyond the rate-independent yield surface upon application of a load and then allowed to relax back to the yield surface over time. The yield surface is usually assumed not to be rate-dependent in such models. An alternative approach is to add a strain rate dependence to the yield stress and use the techniques of rate independent plasticity to calculate the response of a material. For metals and alloys, viscoplasticity is the macroscopic behavior caused by a mechanism linked to the movement of dislocations in grains, with superposed effects of inter-crystalline gliding. The mechanism usually becomes dominant at temperatures greater than approximately one third of the absolute melting temperature. However, certain alloys exhibit viscoplasticity at room temperature (300 K). For polymers, wood, and bitumen, the theory of viscoplasticity is required to describe behavior beyond the limit of elasticity or viscoelasticity. In general, viscoplasticity theories are useful in areas such as: the calculation of permanent deformations, the prediction of the plastic collapse of structures, the investigation of stability, crash simulations, systems exposed to high temperatures such as turbines in engines, e.g. a power plant, dynamic problems and systems exposed to high strain rates. History Research on plasticity theories started in 1864 with the work of Henri Tresca, Saint Venant (1870) and Levy (1871) on the maximum shear criterion. An improved plasticity model was presented in 1913 by Von Mises which is now referred to as the von Mises yield criterion. In viscoplasticity, the development of a mathematical model heads back to 1910 with the representation of primary creep by Andrade's law. In 1929, Norton developed a one-dimensional dashpot model which linked the rate of secondary creep to the stress. In 1934, Odqvist generalized Norton's law to the multi-axial case. Concepts such as the normality of plastic flow to the yield surface and flow rules for plasticity were introduced by Prandtl (1924) and Reuss (1930). 
In 1932, Hohenemser and Prager proposed the first model for slow viscoplastic flow. This model provided a relation between the deviatoric stress and the strain rate for an incompressible Bingham solid. However, the application of these theories did not begin before 1950, when limit theorems were discovered. In 1960, the first IUTAM Symposium "Creep in Structures" organized by Hoff provided a major development in viscoplasticity with the works of Hoff, Rabotnov, Perzyna, Hult, and Lemaitre for the isotropic hardening laws, and those of Kratochvil, Malinini and Khadjinsky, Ponter and Leckie, and Chaboche for the kinematic hardening laws. Perzyna, in 1963, introduced a viscosity coefficient that is temperature and time dependent. The formulated models were supported by the thermodynamics of irreversible processes and the phenomenological standpoint. The ideas presented in these works have been the basis for most subsequent research into rate-dependent plasticity. Phenomenology For a qualitative analysis, several characteristic tests are performed to describe the phenomenology of viscoplastic materials. Some examples of these tests are hardening tests at constant stress or strain rate, creep tests at constant force, and stress relaxation at constant elongation. Strain hardening test One consequence of yielding is that as plastic deformation proceeds, an increase in stress is required to produce additional strain. This phenomenon is known as strain (or work) hardening. For a viscoplastic material the hardening curves are not significantly different from those of rate-independent plastic material. Nevertheless, three essential differences can be observed. At the same strain, the higher the rate of strain, the higher the stress. A change in the rate of strain during the test results in an immediate change in the stress–strain curve. The concept of a plastic yield limit is no longer strictly applicable. The hypothesis of partitioning the strains by decoupling the elastic and plastic parts is still applicable where the strains are small, i.e., where is the elastic strain and is the viscoplastic strain. To obtain the stress–strain behavior shown in blue in the figure, the material is initially loaded at a strain rate of 0.1/s. The strain rate is then instantaneously raised to 100/s and held constant at that value for some time. At the end of that time period the strain rate is dropped instantaneously back to 0.1/s and the cycle is continued for increasing values of strain. There is clearly a lag between the strain-rate change and the stress response. This lag is modeled quite accurately by overstress models (such as the Perzyna model) but not by models of rate-independent plasticity that have a rate-dependent yield stress. Creep test Creep is the tendency of a solid material to slowly move or deform permanently under constant stresses. Creep tests measure the strain response due to a constant stress as shown in Figure 3. The classical creep curve represents the evolution of strain as a function of time in a material subjected to uniaxial stress at a constant temperature. The creep test, for instance, is performed by applying a constant force/stress and analyzing the strain response of the system. In general, as shown in Figure 3b, this curve usually shows three phases or periods of behavior: A primary creep stage, also known as transient creep, is the starting stage during which hardening of the material leads to a decrease in the rate of flow which is initially very high.
The secondary creep stage, also known as the steady state, is where the strain rate is constant. A tertiary creep phase in which there is an increase in the strain rate up to the fracture strain. Relaxation test As shown in Figure 4, the relaxation test is defined as the stress response due to a constant strain for a period of time. In viscoplastic materials, relaxation tests demonstrate the stress relaxation in uniaxial loading at a constant strain. In fact, these tests characterize the viscosity and can be used to determine the relation which exists between the stress and the rate of viscoplastic strain. The decomposition of strain rate is The elastic part of the strain rate is given by For the flat region of the strain–time curve, the total strain rate is zero. Hence we have, Therefore, the relaxation curve can be used to determine the rate of viscoplastic strain and hence the viscosity of the dashpot in a one-dimensional viscoplastic material model. The residual value that is reached when the stress has plateaued at the end of a relaxation test corresponds to the upper limit of elasticity. For some materials, such as rock salt, such an upper limit of elasticity occurs at a very small value of stress and relaxation tests can be continued for more than a year without any observable plateau in the stress. It is important to note that relaxation tests are extremely difficult to perform because maintaining the condition in a test requires considerable delicacy. Rheological models of viscoplasticity One-dimensional constitutive models for viscoplasticity based on spring-dashpot-slider elements include the perfectly viscoplastic solid, the elastic perfectly viscoplastic solid, and the elastoviscoplastic hardening solid. The elements may be connected in series or in parallel. In models where the elements are connected in series the strain is additive while the stress is equal in each element. In parallel connections, the stress is additive while the strain is equal in each element. Many of these one-dimensional models can be generalized to three dimensions for the small strain regime. In the subsequent discussion, the time rates of strain and stress are written as and , respectively. Perfectly viscoplastic solid (Norton-Hoff model) In a perfectly viscoplastic solid, also called the Norton-Hoff model of viscoplasticity, the stress (as for viscous fluids) is a function of the rate of permanent strain. The effect of elasticity is neglected in the model, i.e., and hence there is no initial yield stress, i.e., . The viscous dashpot has a response given by where is the viscosity of the dashpot. In the Norton-Hoff model the viscosity is a nonlinear function of the applied stress and is given by where is a fitting parameter, λ is the kinematic viscosity of the material and . Then the viscoplastic strain rate is given by the relation In one-dimensional form, the Norton-Hoff model can be expressed as When the solid is viscoelastic. If we assume that plastic flow is isochoric (volume preserving), then the above relation can be expressed in the more familiar form where is the deviatoric stress tensor, is the von Mises equivalent strain rate, and are material parameters. The equivalent strain rate is defined as These models can be applied in metals and alloys at temperatures higher than two thirds of their absolute melting point (in kelvins) and polymers/asphalt at elevated temperature. The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 6.
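Since the article's formulas were lost in extraction, the following one-dimensional sketch assumes the common power-law (Norton-type) form for the perfectly viscoplastic dashpot, strain rate = (stress / λ)^N; the parameter values, variable names and the explicit time stepping are illustrative only, not taken from the article.

```python
# Minimal 1-D sketch of a perfectly viscoplastic (Norton-type) dashpot, assuming
#   strain_rate = (stress / lam) ** N.
# Parameter values below are placeholders chosen for illustration.

def norton_strain_rate(stress, lam=150.0e6, N=4.0):
    """Viscoplastic strain rate (1/s) for an applied stress (Pa) under the assumed power law."""
    sign = 1.0 if stress >= 0 else -1.0
    return sign * (abs(stress) / lam) ** N

def creep_curve(stress, t_end=1000.0, dt=1.0):
    """Integrate strain under constant stress; with elasticity neglected the rate is
    constant, so the strain grows linearly in time (a secondary-creep-like response)."""
    strain, history, t = 0.0, [], 0.0
    while t <= t_end:
        history.append((t, strain))
        strain += norton_strain_rate(stress) * dt
        t += dt
    return history

if __name__ == "__main__":
    curve = creep_curve(stress=100.0e6)   # 100 MPa held constant
    print("final creep strain:", curve[-1][1])
```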
Elastic perfectly viscoplastic solid (Bingham–Norton model) Two types of elementary approaches can be used to build up an elastic-perfectly viscoplastic model. In the first situation, the sliding friction element and the dashpot are arranged in parallel and then connected in series to the elastic spring as shown in Figure 7. This model is called the Bingham–Maxwell model (by analogy with the Maxwell model and the Bingham model) or the Bingham–Norton model. In the second situation, all three elements are arranged in parallel. Such a model is called a Bingham–Kelvin model by analogy with the Kelvin model. For elastic-perfectly viscoplastic materials, the elastic strain is no longer considered negligible but the rate of plastic strain is only a function of the initial yield stress and there is no influence of hardening. The sliding element represents a constant yielding stress when the elastic limit is exceeded irrespective of the strain. The model can be expressed as where is the viscosity of the dashpot element. If the dashpot element has a response that is of the Norton form, we get the Bingham–Norton model Other expressions for the strain rate can also be observed in the literature with the general form The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 8. Elastoviscoplastic hardening solid An elastic-viscoplastic material with strain hardening is described by equations similar to those for an elastic-viscoplastic material with perfect plasticity. However, in this case the stress depends both on the plastic strain rate and on the plastic strain itself. For an elastoviscoplastic material the stress, after exceeding the yield stress, continues to increase beyond the initial yielding point. This implies that the yield stress in the sliding element increases with strain and the model may be expressed in generic terms as This model is adopted for metals and alloys at medium and higher temperatures and for wood under high loads. The responses for strain hardening, creep, and relaxation tests of such a material are shown in Figure 9. Strain-rate dependent plasticity models Classical phenomenological viscoplasticity models for small strains are usually categorized into two types: the Perzyna formulation the Duvaut–Lions formulation Perzyna formulation In the Perzyna formulation the plastic strain rate is assumed to be given by a constitutive relation of the form where is a yield function, is the Cauchy stress, is a set of internal variables (such as the plastic strain ), is a relaxation time. The notation denotes the Macaulay brackets. The flow rule used in various versions of the Chaboche model is a special case of Perzyna's flow rule and has the form where is the quasistatic value of and is a backstress. Several models for the backstress also go by the name Chaboche model. Duvaut–Lions formulation The Duvaut–Lions formulation is equivalent to the Perzyna formulation and may be expressed as where is the elastic stiffness tensor, is the closest point projection of the stress state onto the boundary of the region that bounds all possible elastic stress states. The quantity is typically found from the rate-independent solution to a plasticity problem. Flow stress models The quantity represents the evolution of the yield surface. The yield function is often expressed as an equation consisting of some invariant of stress and a model for the yield stress (or plastic flow stress). An example is von Mises or J2 plasticity.
In those situations the plastic strain rate is calculated in the same manner as in rate-independent plasticity. In other situations, the yield stress model provides a direct means of computing the plastic strain rate. Numerous empirical and semi-empirical flow stress models are used in computational plasticity. The following temperature and strain-rate dependent models provide a sampling of the models in current use: the Johnson–Cook model the Steinberg–Cochran–Guinan–Lund model. the Zerilli–Armstrong model. the Mechanical threshold stress model. the Preston–Tonks–Wallace model. The Johnson–Cook (JC) model is purely empirical and is the most widely used of the five. However, this model exhibits an unrealistically small strain-rate dependence at high temperatures. The Steinberg–Cochran–Guinan–Lund (SCGL) model is semi-empirical. The model is purely empirical and strain-rate independent at high strain-rates. A dislocation-based extension based on is used at low strain-rates. The SCGL model is used extensively by the shock physics community. The Zerilli–Armstrong (ZA) model is a simple physically based model that has been used extensively. A more complex model that is based on ideas from dislocation dynamics is the Mechanical Threshold Stress (MTS) model. This model has been used to model the plastic deformation of copper, tantalum, alloys of steel, and aluminum alloys. However, the MTS model is limited to strain-rates less than around 10^7/s. The Preston–Tonks–Wallace (PTW) model is also physically based and has a form similar to the MTS model. However, the PTW model has components that can model plastic deformation in the overdriven shock regime (strain-rates greater than 10^7/s). Hence this model is valid for the largest range of strain-rates among the five flow stress models. Johnson–Cook flow stress model The Johnson–Cook (JC) model is purely empirical and gives the following relation for the flow stress () where is the equivalent plastic strain, is the plastic strain-rate, and are material constants. The normalized strain-rate and temperature in equation (1) are defined as where is the effective plastic strain-rate of the quasi-static test used to determine the yield and hardening parameters A, B and n. This is not, as it is often thought, just a parameter to make the strain-rate non-dimensional. is a reference temperature, and is a reference melt temperature. For conditions where , we assume that . Steinberg–Cochran–Guinan–Lund flow stress model The Steinberg–Cochran–Guinan–Lund (SCGL) model is a semi-empirical model that was developed by Steinberg et al. for high strain-rate situations and extended to low strain-rates and bcc materials by Steinberg and Lund. The flow stress in this model is given by where is the athermal component of the flow stress, is a function that represents strain hardening, is the thermally activated component of the flow stress, is the pressure- and temperature-dependent shear modulus, and is the shear modulus at standard temperature and pressure. The saturation value of the athermal stress is . The saturation of the thermally activated stress is the Peierls stress (). The shear modulus for this model is usually computed with the Steinberg–Cochran–Guinan shear modulus model. The strain hardening function () has the form where are work hardening parameters, and is the initial equivalent plastic strain. The thermal component () is computed using a bisection algorithm from the following equation.
where is the energy to form a kink-pair in a dislocation segment of length , is the Boltzmann constant, is the Peierls stress. The constants are given by the relations where is the dislocation density, is the length of a dislocation segment, is the distance between Peierls valleys, is the magnitude of the Burgers vector, is the Debye frequency, is the width of a kink loop, and is the drag coefficient. Zerilli–Armstrong flow stress model The Zerilli–Armstrong (ZA) model is based on simplified dislocation mechanics. The general form of the equation for the flow stress is In this model, is the athermal component of the flow stress given by where is the contribution due to solutes and initial dislocation density, is the microstructural stress intensity, is the average grain diameter, is zero for fcc materials, are material constants. In the thermally activated terms, the functional forms of the exponents and are where are material parameters that depend on the type of material (fcc, bcc, hcp, alloys). The Zerilli–Armstrong model has been modified by for better performance at high temperatures. Mechanical threshold stress flow stress model The Mechanical Threshold Stress (MTS) model has the form where is the athermal component of mechanical threshold stress, is the component of the flow stress due to intrinsic barriers to thermally activated dislocation motion and dislocation-dislocation interactions, is the component of the flow stress due to microstructural evolution with increasing deformation (strain hardening), () are temperature and strain-rate dependent scaling factors, and is the shear modulus at 0 K and ambient pressure. The scaling factors take the Arrhenius form where is the Boltzmann constant, is the magnitude of the Burgers' vector, () are normalized activation energies, () are the strain-rate and reference strain-rate, and () are constants. The strain hardening component of the mechanical threshold stress () is given by an empirical modified Voce law where and is the hardening due to dislocation accumulation, is the contribution due to stage-IV hardening, () are constants, is the stress at zero strain hardening rate, is the saturation threshold stress for deformation at 0 K, is a constant, and is the maximum strain-rate. Note that the maximum strain-rate is usually limited to about /s. Preston–Tonks–Wallace flow stress model The Preston–Tonks–Wallace (PTW) model attempts to provide a model for the flow stress for extreme strain-rates (up to 10^11/s) and temperatures up to melt. A linear Voce hardening law is used in the model. The PTW flow stress is given by with where is a normalized work-hardening saturation stress, is the value of at 0 K, is a normalized yield stress, is the hardening constant in the Voce hardening law, and is a dimensionless material parameter that modifies the Voce hardening law. The saturation stress and the yield stress are given by where is the value of close to the melt temperature, () are the values of at 0 K and close to melt, respectively, are material constants, , () are material parameters for the high strain-rate regime, and where is the density, and is the atomic mass. See also Viscoelasticity Bingham plastic Dashpot Creep (deformation) Plasticity (physics) Continuum mechanics Quasi-solid References Continuum mechanics Plasticity (physics)
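To make the flow stress discussion above concrete, the sketch below evaluates the Johnson–Cook model in its widely quoted form, flow stress = (A + B·eps^n)(1 + C·ln(eps_rate*))(1 − T*^m), with eps_rate* the strain rate normalized by a reference rate and T* the homologous temperature; the article's own equations were stripped during extraction, so this form and all parameter values are assumptions made for illustration only.

```python
# Johnson-Cook flow stress sketch, assuming the standard form
#   sigma = (A + B*eps**n) * (1 + C*ln(eps_rate_star)) * (1 - T_star**m)
# with eps_rate_star = eps_rate / eps_rate_0 and T_star = (T - T_ref) / (T_melt - T_ref).
# All parameter values are placeholders for a generic metal, not from the article.
import math

def johnson_cook_flow_stress(eps, eps_rate, T,
                             A=350e6, B=275e6, n=0.36, C=0.022, m=1.0,
                             eps_rate_0=1.0, T_ref=293.0, T_melt=1800.0):
    eps_rate_star = max(eps_rate / eps_rate_0, 1e-12)            # avoid log(0)
    T_star = min(max((T - T_ref) / (T_melt - T_ref), 0.0), 1.0)
    strain_term = A + B * eps ** n
    rate_term = 1.0 + C * math.log(eps_rate_star)
    thermal_term = 1.0 - T_star ** m
    return strain_term * rate_term * thermal_term

if __name__ == "__main__":
    # Flow stress (Pa) at 5% plastic strain, a strain rate of 1000/s, and 600 K.
    print(johnson_cook_flow_stress(eps=0.05, eps_rate=1.0e3, T=600.0))
```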
Viscoplasticity
[ "Physics", "Materials_science" ]
4,446
[ "Deformation (mechanics)", "Classical mechanics", "Plasticity (physics)", "Continuum mechanics" ]
17,330,825
https://en.wikipedia.org/wiki/Perveance
Perveance is a notion used in the description of charged particle beams. The value of perveance indicates how significant the space charge effect is on the beam's motion. The term is used primarily for electron beams, in which motion is often dominated by the space charge. Origin of the word The word was probably created from the Latin pervenio, "to attain". Definition For an electron gun, the gun perveance is determined as the coefficient of proportionality between the space-charge limited current and the gun anode voltage raised to the three-halves power in the Child–Langmuir law. The same notion is used for non-relativistic beams propagating through a vacuum chamber. In this case, the beam is assumed to have been accelerated in a stationary electric field so that is the potential difference between the emitter and the vacuum chamber, and the ratio of is referred to as a beam perveance. In equations describing the motion of relativistic beams, the contribution of the space charge appears as a dimensionless parameter called the generalized perveance defined as , where (for electrons) is the Budker (or Alfven) current; and are the relativistic factors, and is the neutralization factor. Examples The 6S4A is an example of a high perveance triode. The triode section of a 6AU8A becomes a high-perveance diode when its control grid is employed as the anode. Each section of a 6AL5 is a high-perveance diode as opposed to a 1J3, which requires over 100 V to reach only 2 mA. Perveance does not relate directly to current handling. Another high-perveance diode, the diode section of a 33GY7, shows similar perveance to a 6AL5, but handles 15 times greater current, at almost 13 times maximum peak inverse voltage. References Accelerator physics Experimental particle physics
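As a small numerical illustration (not from the original article), the sketch below assumes the usual Child–Langmuir form I = P·V^(3/2), so the gun perveance follows as P = I / V^(3/2); the example current and voltage are placeholders.

```python
# Gun perveance implied by the assumed Child-Langmuir relation I = P * V**1.5.
# The example current and anode voltage are illustrative placeholders.

def gun_perveance(current_a: float, anode_voltage_v: float) -> float:
    """Perveance in A/V^(3/2) for a space-charge limited current and anode voltage."""
    return current_a / anode_voltage_v ** 1.5

if __name__ == "__main__":
    P = gun_perveance(current_a=0.5, anode_voltage_v=10_000.0)
    print(f"perveance: {P:.3e} A/V^1.5 ({P * 1e6:.3f} microperveance units)")
```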
Perveance
[ "Physics" ]
400
[ "Applied and interdisciplinary physics", "Experimental physics", "Particle physics", "Experimental particle physics", "Accelerator physics" ]
11,864,519
https://en.wikipedia.org/wiki/Approximate%20Bayesian%20computation
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics that can be used to estimate the posterior distributions of model parameters. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences, e.g. in population genetics, ecology, epidemiology, systems biology, and in radio propagation. History The first ABC-related ideas date back to the 1980s. Donald Rubin, when discussing the interpretation of Bayesian statements in 1984, described a hypothetical sampling mechanism that yields a sample from the posterior distribution. This scheme was more of a conceptual thought experiment to demonstrate what type of manipulations are done when inferring the posterior distributions of parameters. The description of the sampling mechanism coincides exactly with that of the ABC-rejection scheme, and this article can be considered to be the first to describe approximate Bayesian computation. However, a two-stage quincunx was constructed by Francis Galton in the late 1800s that can be seen as a physical implementation of an ABC-rejection scheme for a single unknown (parameter) and a single observation. Another prescient point was made by Rubin when he argued that in Bayesian inference, applied statisticians should not settle for analytically tractable models only, but instead consider computational methods that allow them to estimate the posterior distribution of interest. This way, a wider range of models can be considered. These arguments are particularly relevant in the context of ABC. In 1984, Peter Diggle and Richard Gratton suggested using a systematic simulation scheme to approximate the likelihood function in situations where its analytic form is intractable. Their method was based on defining a grid in the parameter space and using it to approximate the likelihood by running several simulations for each grid point. The approximation was then improved by applying smoothing techniques to the outcomes of the simulations. While the idea of using simulation for hypothesis testing was not new, Diggle and Gratton seemingly introduced the first procedure using simulation to do statistical inference under a circumstance where the likelihood is intractable. Although Diggle and Gratton's approach had opened a new frontier, their method was not yet exactly identical to what is now known as ABC, as it aimed at approximating the likelihood rather than the posterior distribution. 
An article by Simon Tavaré and co-authors was the first to propose an ABC algorithm for posterior inference. In their seminal work, inference about the genealogy of DNA sequence data was considered, and in particular the problem of deciding the posterior distribution of the time to the most recent common ancestor of the sampled individuals. Such inference is analytically intractable for many demographic models, but the authors presented ways of simulating coalescent trees under the putative models. A sample from the posterior of model parameters was obtained by accepting/rejecting proposals based on comparing the number of segregating sites in the synthetic and real data. This work was followed by an applied study on modeling the variation in human Y chromosome by Jonathan K. Pritchard and co-authors using the ABC method. Finally, the term approximate Bayesian computation was established by Mark Beaumont and co-authors, extending further the ABC methodology and discussing the suitability of the ABC-approach more specifically for problems in population genetics. Since then, ABC has spread to applications outside population genetics, such as systems biology, epidemiology, and phylogeography. Approximate Bayesian computation can be understood as a kind of Bayesian version of indirect inference. Several efficient Monte Carlo based approaches have been developed to perform sampling from the ABC posterior distribution for purposes of estimation and prediction problems. A popular choice is the SMC Samplers algorithm adapted to the ABC context in the SMC-ABC method. Method Motivation A common incarnation of Bayes’ theorem relates the conditional probability (or density) of a particular parameter value θ given data D to the probability of D given θ by the rule p(θ|D) = p(D|θ)p(θ)/p(D), where p(θ|D) denotes the posterior, p(D|θ) the likelihood, p(θ) the prior, and p(D) the evidence (also referred to as the marginal likelihood or the prior predictive probability of the data). Note that the denominator p(D) normalizes the total probability of the posterior density to one and can be calculated that way. The prior represents beliefs or knowledge (such as physical constraints) about θ before D is available. Since the prior narrows down uncertainty, the posterior estimates have less variance, but might be biased. For convenience the prior is often specified by choosing a particular distribution among a set of well-known and tractable families of distributions, such that both the evaluation of prior probabilities and random generation of values of θ are relatively straightforward. For certain kinds of models, it is more pragmatic to specify the prior using a factorization of the joint distribution of all the elements of θ in terms of a sequence of their conditional distributions. If one is only interested in the relative posterior plausibilities of different values of θ, the evidence p(D) can be ignored, as it constitutes a normalising constant, which cancels for any ratio of posterior probabilities. It remains, however, necessary to evaluate the likelihood p(D|θ) and the prior p(θ). For numerous applications, it is computationally expensive, or even completely infeasible, to evaluate the likelihood, which motivates the use of ABC to circumvent this issue. The ABC rejection algorithm All ABC-based methods approximate the likelihood function by simulations, the outcomes of which are compared with the observed data. More specifically, with the ABC rejection algorithm — the most basic form of ABC — a set of parameter points is first sampled from the prior distribution.
Given a sampled parameter point , a data set is then simulated under the statistical model specified by . If the generated is too different from the observed data , the sampled parameter value is discarded. In precise terms, is accepted with tolerance if: , where the distance measure determines the level of discrepancy between and based on a given metric (e.g. Euclidean distance). A strictly positive tolerance is usually necessary, since the probability that the simulation outcome coincides exactly with the data (event ) is negligible for all but trivial applications of ABC, which would in practice lead to rejection of nearly all sampled parameter points. The outcome of the ABC rejection algorithm is a sample of parameter values approximately distributed according to the desired posterior distribution, and, crucially, obtained without the need to explicitly evaluate the likelihood function. Summary statistics The probability of generating a data set with a small distance to typically decreases as the dimensionality of the data increases. This leads to a substantial decrease in the computational efficiency of the above basic ABC rejection algorithm. A common approach to lessen this problem is to replace with a set of lower-dimensional summary statistics , which are selected to capture the relevant information in . The acceptance criterion in ABC rejection algorithm becomes: . If the summary statistics are sufficient with respect to the model parameters , the efficiency increase obtained in this way does not introduce any error. Indeed, by definition, sufficiency implies that all information in about is captured by . As elaborated below, it is typically impossible, outside the exponential family of distributions, to identify a finite-dimensional set of sufficient statistics. Nevertheless, informative but possibly insufficient summary statistics are often used in applications where inference is performed with ABC methods. Example An illustrative example is a bistable system that can be characterized by a hidden Markov model (HMM) subject to measurement noise. Such models are employed for many biological systems: They have, for example, been used in development, cell signaling, activation/deactivation, logical processing and non-equilibrium thermodynamics. For instance, the behavior of the Sonic hedgehog (Shh) transcription factor in Drosophila melanogaster can be modeled with an HMM. The (biological) dynamical model consists of two states: A and B. If the probability of a transition from one state to the other is defined as in both directions, then the probability to remain in the same state at each time step is . The probability to measure the state correctly is (and conversely, the probability of an incorrect measurement is ). Due to the conditional dependencies between states at different time points, calculation of the likelihood of time series data is somewhat tedious, which illustrates the motivation to use ABC. A computational issue for basic ABC is the large dimensionality of the data in an application like this. The dimensionality can be reduced using the summary statistic , which is the frequency of switches between the two states. The absolute difference is used as a distance measure with tolerance . The posterior inference about the parameter can be done following the five steps presented in. Step 1: Assume that the observed data form the state sequence AAAABAABBAAAAAABAAAA, which is generated using and . 
The associated summary statistic—the number of switches between the states in the experimental data—is . Step 2: Assuming nothing is known about , a uniform prior in the interval is employed. The parameter is assumed to be known and fixed to the data-generating value , but it could in general also be estimated from the observations. A total of parameter points are drawn from the prior, and the model is simulated for each of the parameter points , which results in sequences of simulated data. In this example, , with each drawn parameter and simulated dataset recorded in Table 1, columns 2-3. In practice, would need to be much larger to obtain an appropriate approximation. Step 3: The summary statistic is computed for each sequence of simulated data . Step 4: The distance between the observed and simulated transition frequencies is computed for all parameter points. Parameter points for which the distance is smaller than or equal to are accepted as approximate samples from the posterior. Step 5: The posterior distribution is approximated with the accepted parameter points. The posterior distribution should have a non-negligible probability for parameter values in a region around the true value of in the system if the data are sufficiently informative. In this example, the posterior probability mass is evenly split between the values 0.08 and 0.43. The posterior probabilities are obtained via ABC with large by utilizing the summary statistic (with and ) and the full data sequence (with ). These are compared with the true posterior, which can be computed exactly and efficiently using the Viterbi algorithm. The summary statistic utilized in this example is not sufficient, as the deviation from the theoretical posterior is significant even under the stringent requirement of . A much longer observed data sequence would be needed to obtain a posterior concentrated around , the true value of . This example application of ABC uses simplifications for illustrative purposes. More realistic applications of ABC are available in a growing number of peer-reviewed articles. Model comparison with ABC Outside of parameter estimation, the ABC framework can be used to compute the posterior probabilities of different candidate models. In such applications, one possibility is to use rejection sampling in a hierarchical manner. First, a model is sampled from the prior distribution for the models. Then, parameters are sampled from the prior distribution assigned to that model. Finally, a simulation is performed as in single-model ABC. The relative acceptance frequencies for the different models now approximate the posterior distribution for these models. Again, computational improvements for ABC in the space of models have been proposed, such as constructing a particle filter in the joint space of models and parameters. Once the posterior probabilities of the models have been estimated, one can make full use of the techniques of Bayesian model comparison. For instance, to compare the relative plausibilities of two models and , one can compute their posterior ratio, which is related to the Bayes factor : . If the model priors are equal—that is, —the Bayes factor equals the posterior ratio. In practice, as discussed below, these measures can be highly sensitive to the choice of parameter prior distributions and summary statistics, and thus conclusions of model comparison should be drawn with caution. 
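The toy example above can be reproduced with a short simulation. The sketch below (not part of the original article) implements the two-state noisy hidden Markov model and the ABC rejection step using the number of observed switches as the summary statistic; the data-generating parameter values, the measurement error rate, the tolerance and the number of prior draws are not stated in the text, so the values used here are placeholders chosen for illustration.

```python
# ABC rejection sketch for the two-state noisy HMM example.
# theta is the switching probability with a uniform prior on [0, 1]; the measurement
# error rate, tolerance and number of draws are illustrative placeholders.
import random

def simulate_sequence(theta, measurement_error, length, rng):
    """Simulate a hidden two-state chain (A/B) and its noisy observation."""
    state = rng.choice("AB")
    observed = []
    for _ in range(length):
        if rng.random() < theta:                       # hidden state switches
            state = "B" if state == "A" else "A"
        wrong = "B" if state == "A" else "A"
        observed.append(wrong if rng.random() < measurement_error else state)
    return "".join(observed)

def num_switches(seq):
    """Summary statistic: number of switches between consecutive observed states."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

def abc_rejection(observed, n_draws=10000, tolerance=2, measurement_error=0.2, seed=1):
    rng = random.Random(seed)
    s_obs = num_switches(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.random()                           # draw from the uniform prior
        sim = simulate_sequence(theta, measurement_error, len(observed), rng)
        if abs(num_switches(sim) - s_obs) <= tolerance:
            accepted.append(theta)
    return accepted

if __name__ == "__main__":
    observed = "AAAABAABBAAAAAABAAAA"                  # the sequence from the example
    posterior_sample = abc_rejection(observed)
    print(len(posterior_sample), "accepted draws; posterior mean of theta:",
          sum(posterior_sample) / len(posterior_sample))
```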
Pitfalls and remedies As for all statistical methods, a number of assumptions and approximations are inherently required for the application of ABC-based methods to real modeling problems. For example, setting the tolerance parameter to zero ensures an exact result, but typically makes computations prohibitively expensive. Thus, values of larger than zero are used in practice, which introduces a bias. Likewise, sufficient statistics are typically not available and instead, other summary statistics are used, which introduces an additional bias due to the loss of information. Additional sources of bias- for example, in the context of model selection—may be more subtle. At the same time, some of the criticisms that have been directed at the ABC methods, in particular within the field of phylogeography, are not specific to ABC and apply to all Bayesian methods or even all statistical methods (e.g., the choice of prior distribution and parameter ranges). However, because of the ability of ABC-methods to handle much more complex models, some of these general pitfalls are of particular relevance in the context of ABC analyses. This section discusses these potential risks and reviews possible ways to address them. Approximation of the posterior A non-negligible comes with the price that one samples from instead of the true posterior . With a sufficiently small tolerance, and a sensible distance measure, the resulting distribution should often approximate the actual target distribution reasonably well. On the other hand, a tolerance that is large enough that every point in the parameter space becomes accepted will yield a replica of the prior distribution. There are empirical studies of the difference between and as a function of , and theoretical results for an upper -dependent bound for the error in parameter estimates. The accuracy of the posterior (defined as the expected quadratic loss) delivered by ABC as a function of has also been investigated. However, the convergence of the distributions when approaches zero, and how it depends on the distance measure used, is an important topic that has yet to be investigated in greater detail. In particular, it remains difficult to disentangle errors introduced by this approximation from errors due to model mis-specification. As an attempt to correct some of the error due to a non-zero , the usage of local linear weighted regression with ABC to reduce the variance of the posterior estimates has been suggested. The method assigns weights to the parameters according to how well simulated summaries adhere to the observed ones and performs linear regression between the summaries and the weighted parameters in the vicinity of observed summaries. The obtained regression coefficients are used to correct sampled parameters in the direction of observed summaries. An improvement was suggested in the form of nonlinear regression using a feed-forward neural network model. However, it has been shown that the posterior distributions obtained with these approaches are not always consistent with the prior distribution, which did lead to a reformulation of the regression adjustment that respects the prior distribution. Finally, statistical inference using ABC with a non-zero tolerance is not inherently flawed: under the assumption of measurement errors, the optimal can in fact be shown to be not zero. Indeed, the bias caused by a non-zero tolerance can be characterized and compensated by introducing a specific form of noise to the summary statistics. 
Asymptotic consistency for such “noisy ABC”, has been established, together with formulas for the asymptotic variance of the parameter estimates for a fixed tolerance. Choice and sufficiency of summary statistics Summary statistics may be used to increase the acceptance rate of ABC for high-dimensional data. Low-dimensional sufficient statistics are optimal for this purpose, as they capture all relevant information present in the data in the simplest possible form. However, low-dimensional sufficient statistics are typically unattainable for statistical models where ABC-based inference is most relevant, and consequently, some heuristic is usually necessary to identify useful low-dimensional summary statistics. The use of a set of poorly chosen summary statistics will often lead to inflated credible intervals due to the implied loss of information, which can also bias the discrimination between models. A review of methods for choosing summary statistics is available, which may provide valuable guidance in practice. One approach to capture most of the information present in data would be to use many statistics, but the accuracy and stability of ABC appears to decrease rapidly with an increasing numbers of summary statistics. Instead, a better strategy is to focus on the relevant statistics only—relevancy depending on the whole inference problem, on the model used, and on the data at hand. An algorithm has been proposed for identifying a representative subset of summary statistics, by iteratively assessing whether an additional statistic introduces a meaningful modification of the posterior. One of the challenges here is that a large ABC approximation error may heavily influence the conclusions about the usefulness of a statistic at any stage of the procedure. Another method decomposes into two main steps. First, a reference approximation of the posterior is constructed by minimizing the entropy. Sets of candidate summaries are then evaluated by comparing the ABC-approximated posteriors with the reference posterior. With both of these strategies, a subset of statistics is selected from a large set of candidate statistics. Instead, the partial least squares regression approach uses information from all the candidate statistics, each being weighted appropriately. Recently, a method for constructing summaries in a semi-automatic manner has attained a considerable interest. This method is based on the observation that the optimal choice of summary statistics, when minimizing the quadratic loss of the parameter point estimates, can be obtained through the posterior mean of the parameters, which is approximated by performing a linear regression based on the simulated data. Summary statistics for model selection have been obtained using multinomial logistic regression on simulated data, treating competing models as the label to predict. Methods for the identification of summary statistics that could also simultaneously assess the influence on the approximation of the posterior would be of substantial value. This is because the choice of summary statistics and the choice of tolerance constitute two sources of error in the resulting posterior distribution. These errors may corrupt the ranking of models and may also lead to incorrect model predictions. Bayes factor with ABC and summary statistics It has been shown that the combination of insufficient summary statistics and ABC for model selection can be problematic. 
Indeed, if one lets the Bayes factor based on the summary statistic be denoted by , the relation between and takes the form: . Thus, a summary statistic is sufficient for comparing two models and if and only if: , which results in that . It is also clear from the equation above that there might be a huge difference between and if the condition is not satisfied, as can be demonstrated by toy examples. Crucially, it was shown that sufficiency for or alone, or for both models, does not guarantee sufficiency for ranking the models. However, it was also shown that any sufficient summary statistic for a model in which both and are nested is valid for ranking the nested models. The computation of Bayes factors on may therefore be misleading for model selection purposes, unless the ratio between the Bayes factors on and would be available, or at least could be approximated reasonably well. Alternatively, necessary and sufficient conditions on summary statistics for a consistent Bayesian model choice have recently been derived, which can provide useful guidance. However, this issue is only relevant for model selection when the dimension of the data has been reduced. ABC-based inference, in which the actual data sets are directly compared—as is the case for some systems biology applications (e.g., see )—circumvents this problem. Indispensable quality controls As the above discussion makes clear, any ABC analysis requires choices and trade-offs that can have a considerable impact on its outcomes. Specifically, the choice of competing models/hypotheses, the number of simulations, the choice of summary statistics, or the acceptance threshold cannot currently be based on general rules, but the effect of these choices should be evaluated and tested in each study. A number of heuristic approaches to the quality control of ABC have been proposed, such as the quantification of the fraction of parameter variance explained by the summary statistics. A common class of methods aims at assessing whether or not the inference yields valid results, regardless of the actually observed data. For instance, given a set of parameter values, which are typically drawn from the prior or the posterior distributions for a model, one can generate a large number of artificial datasets. In this way, the quality and robustness of ABC inference can be assessed in a controlled setting, by gauging how well the chosen ABC inference method recovers the true parameter values, and also models if multiple structurally different models are considered simultaneously. Another class of methods assesses whether the inference was successful in light of the given observed data, for example, by comparing the posterior predictive distribution of summary statistics to the summary statistics observed. Beyond that, cross-validation techniques and predictive checks represent promising future strategies to evaluate the stability and out-of-sample predictive validity of ABC inferences. This is particularly important when modeling large data sets, because then the posterior support of a particular model can appear overwhelmingly conclusive, even if all proposed models in fact are poor representations of the stochastic system underlying the observation data. Out-of-sample predictive checks can reveal potential systematic biases within a model and provide clues on to how to improve its structure or parametrization. 
Fundamentally novel approaches for model choice that incorporate quality control as an integral step in the process have recently been proposed. ABC allows, by construction, estimation of the discrepancies between the observed data and the model predictions, with respect to a comprehensive set of statistics. These statistics are not necessarily the same as those used in the acceptance criterion. The resulting discrepancy distributions have been used for selecting models that are in agreement with many aspects of the data simultaneously, and model inconsistency is detected from conflicting and co-dependent summaries. Another quality-control-based method for model selection employs ABC to approximate the effective number of model parameters and the deviance of the posterior predictive distributions of summaries and parameters. The deviance information criterion is then used as measure of model fit. It has also been shown that the models preferred based on this criterion can conflict with those supported by Bayes factors. For this reason, it is useful to combine different methods for model selection to obtain correct conclusions. Quality controls are achievable and indeed performed in many ABC-based works, but for certain problems, the assessment of the impact of the method-related parameters can be challenging. However, the rapidly increasing use of ABC can be expected to provide a more thorough understanding of the limitations and applicability of the method. General risks in statistical inference exacerbated in ABC This section reviews risks that are strictly speaking not specific to ABC, but also relevant for other statistical methods as well. However, the flexibility offered by ABC to analyze very complex models makes them highly relevant to discuss here. Prior distribution and parameter ranges The specification of the range and the prior distribution of parameters strongly benefits from previous knowledge about the properties of the system. One criticism has been that in some studies the “parameter ranges and distributions are only guessed based upon the subjective opinion of the investigators”, which is connected to classical objections of Bayesian approaches. With any computational method, it is typically necessary to constrain the investigated parameter ranges. The parameter ranges should if possible be defined based on known properties of the studied system, but may for practical applications necessitate an educated guess. However, theoretical results regarding objective priors are available, which may for example be based on the principle of indifference or the principle of maximum entropy. On the other hand, automated or semi-automated methods for choosing a prior distribution often yield improper densities. As most ABC procedures require generating samples from the prior, improper priors are not directly applicable to ABC. One should also keep the purpose of the analysis in mind when choosing the prior distribution. In principle, uninformative and flat priors, that exaggerate our subjective ignorance about the parameters, may still yield reasonable parameter estimates. However, Bayes factors are highly sensitive to the prior distribution of parameters. Conclusions on model choice based on Bayes factor can be misleading unless the sensitivity of conclusions to the choice of priors is carefully considered. Small number of models Model-based methods have been criticized for not exhaustively covering the hypothesis space. 
Indeed, model-based studies often revolve around a small number of models, and due to the high computational cost of evaluating a single model, it may in some instances be difficult to cover a large part of the hypothesis space. An upper limit to the number of considered candidate models is typically set by the substantial effort required to define the models and to choose between many alternative options. There is no commonly accepted ABC-specific procedure for model construction, so experience and prior knowledge are used instead. Although more robust procedures for a priori model choice and formulation would be beneficial, there is no one-size-fits-all strategy for model development in statistics: sensible characterization of complex systems will always necessitate a great deal of detective work and use of expert knowledge from the problem domain. Some opponents of ABC contend that since only a few models—subjectively chosen and probably all wrong—can be realistically considered, ABC analyses provide only limited insight. However, there is an important distinction between identifying a plausible null hypothesis and assessing the relative fit of alternative hypotheses. Since useful null hypotheses that could potentially hold true can only rarely be put forward in the context of complex models, the predictive ability of statistical models as explanations of complex phenomena is far more important than the test of a statistical null hypothesis in this context. It is also common to average over the investigated models, weighted based on their relative plausibility, to infer model features (e.g., parameter values) and to make predictions. Large datasets Large data sets may constitute a computational bottleneck for model-based methods. It was, for example, pointed out that in some ABC-based analyses, part of the data have to be omitted. A number of authors have argued that large data sets are not a practical limitation, although the severity of this issue depends strongly on the characteristics of the models. Several aspects of a modeling problem can contribute to the computational complexity, such as the sample size, number of observed variables or features, time or spatial resolution, etc. However, with increasing computing power, this issue will potentially be less important. Instead of sampling parameters for each simulation from the prior, it has alternatively been proposed to combine the Metropolis-Hastings algorithm with ABC, which was reported to result in a higher acceptance rate than for plain ABC. Naturally, such an approach inherits the general burdens of MCMC methods, such as the difficulty of assessing convergence, correlation among the samples from the posterior, and relatively poor parallelizability. Likewise, the ideas of sequential Monte Carlo (SMC) and population Monte Carlo (PMC) methods have been adapted to the ABC setting. The general idea is to iteratively approach the posterior from the prior through a sequence of target distributions. An advantage of such methods, compared to ABC-MCMC, is that the samples from the resulting posterior are independent. In addition, with sequential methods the tolerance levels need not be specified prior to the analysis, but are adjusted adaptively. It is relatively straightforward to parallelize a number of steps in ABC algorithms based on rejection sampling and sequential Monte Carlo methods.
It has also been demonstrated that parallel algorithms may yield significant speedups for MCMC-based inference in phylogenetics, which may be a tractable approach also for ABC-based methods. Yet an adequate model for a complex system is very likely to require intensive computation irrespective of the chosen method of inference, and it is up to the user to select a method that is suitable for the particular application in question. Curse of dimensionality High-dimensional data sets and high-dimensional parameter spaces can require an extremely large number of parameter points to be simulated in ABC-based studies to obtain a reasonable level of accuracy for the posterior inferences. In such situations, the computational cost is severely increased and may in the worst case render the computational analysis intractable. These are examples of well-known phenomena, which are usually referred to with the umbrella term curse of dimensionality. To assess how severely the dimensionality of a data set affects the analysis within the context of ABC, analytical formulas have been derived for the error of the ABC estimators as functions of the dimension of the summary statistics. In addition, Blum and François have investigated how the dimension of the summary statistics is related to the mean squared error for different correction adjustments to the error of ABC estimators. It was also argued that dimension reduction techniques are useful to avoid the curse of dimensionality, due to a potentially lower-dimensional underlying structure of summary statistics. Motivated by minimizing the quadratic loss of ABC estimators, Fearnhead and Prangle have proposed a scheme to project (possibly high-dimensional) data into estimates of the parameter posterior means; these means, now having the same dimension as the parameters, are then used as summary statistics for ABC. ABC can be used to address inference problems in high-dimensional parameter spaces, although one should account for the possibility of overfitting (e.g., see the model selection methods in and ). However, the probability of accepting the simulated values for the parameters under a given tolerance with the ABC rejection algorithm typically decreases exponentially with increasing dimensionality of the parameter space (due to the global acceptance criterion). Although no computational method (based on ABC or not) seems to be able to break the curse of dimensionality, methods have recently been developed to handle high-dimensional parameter spaces under certain assumptions (e.g., based on polynomial approximation on sparse grids, which could potentially heavily reduce the simulation times for ABC). However, the applicability of such methods is problem dependent, and the difficulty of exploring parameter spaces should in general not be underestimated. For example, the introduction of deterministic global parameter estimation led to reports that the global optima obtained in several previous studies of low-dimensional problems were incorrect. For certain problems, it might therefore be difficult to know whether the model is incorrect or, as discussed above, whether the explored region of the parameter space is inappropriate. More pragmatic approaches are to cut the scope of the problem through model reduction, discretisation of variables and the use of canonical models such as noisy models. Noisy models exploit information on the conditional independence between variables.
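As a concrete illustration of the ABC-MCMC idea mentioned above, the following minimal sketch embeds an approximate acceptance step in a Metropolis-Hastings random walk. The observation model (a normal distribution with unknown mean), the standard normal prior, the sample mean as summary statistic, and the tolerance and proposal width are illustrative assumptions rather than recommendations; in practice these choices must be tuned to the problem at hand, and convergence of the chain should be checked as for any MCMC method.

import numpy as np

rng = np.random.default_rng(1)

observed = rng.normal(2.0, 1.0, size=100)    # hypothetical observed data
s_obs = observed.mean()                      # summary statistic

def prior_logpdf(theta):
    # Standard normal prior on the mean, chosen only for illustration
    return -0.5 * theta**2

def simulate(theta, n=100):
    return rng.normal(theta, 1.0, size=n)

eps, n_steps, step_sd = 0.05, 20_000, 0.5
theta = 0.0                                  # starting value of the chain
chain = []
for _ in range(n_steps):
    proposal = theta + step_sd * rng.normal()
    s_sim = simulate(proposal).mean()
    # Move only if the simulated summary is within the tolerance of the observed one,
    # then apply the usual Metropolis ratio (proposal is symmetric, so it cancels)
    if abs(s_sim - s_obs) <= eps:
        log_alpha = prior_logpdf(proposal) - prior_logpdf(theta)
        if np.log(rng.uniform()) < log_alpha:
            theta = proposal
    chain.append(theta)

print("posterior mean estimate:", np.mean(chain[5000:]))   # discard burn-in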
Software A number of software packages are currently available for application of ABC to particular classes of statistical models. The suitability of individual software packages depends on the specific application at hand, the computer system environment, and the algorithms required. See also Markov chain Monte Carlo Empirical Bayes Method of moments (statistics) References External links Bayesian statistics Statistical approximations
Approximate Bayesian computation
[ "Mathematics" ]
6,461
[ "Statistical approximations", "Mathematical relations", "Approximations" ]
11,866,035
https://en.wikipedia.org/wiki/Cottrell%20equation
In electrochemistry, the Cottrell equation describes the change in electric current with respect to time in a controlled potential experiment, such as chronoamperometry. Specifically it describes the current response when the potential is a step function in time. It was derived by Frederick Gardner Cottrell in 1903. For a simple redox event, such as the ferrocene/ferrocenium couple, the current measured depends on the rate at which the analyte diffuses to the electrode. That is, the current is said to be "diffusion controlled". The Cottrell equation describes the case for an electrode that is planar but can also be derived for spherical, cylindrical, and rectangular geometries by using the corresponding Laplace operator and boundary conditions in conjunction with Fick's second law of diffusion: i = n F A c0 (D/(π t))^(1/2), where i = current, in units of A; n = number of electrons (to reduce/oxidize one molecule of analyte, for example); F = Faraday constant, 96485 C/mol; A = area of the (planar) electrode in cm2; c0 = initial concentration of the reducible analyte in mol/cm3; D = diffusion coefficient for the electroactive species in cm2/s; t = time in s. Deviations from linearity in the plot of i vs. t^(-1/2) sometimes indicate that the redox event is associated with other processes, such as association of a ligand, dissociation of a ligand, or a change in geometry. Deviations from linearity can be expected at very short time scales due to non-ideality in the potential step. At long time scales, buildup of the diffusion layer causes a shift from a linearly dominated to a radially dominated diffusion regime, which causes another deviation from linearity. In practice, the Cottrell equation simplifies to i = k t^(-1/2), where k is the collection of constants for a given system (n, F, A, c0, D). See also Voltammetry Electroanalytical methods Limiting current Anson equation References Electrochemical equations
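As a quick numerical illustration of the equation above, the short Python sketch below evaluates the diffusion-limited current at a few times after the potential step. The electrode area, bulk concentration, and diffusion coefficient are hypothetical example values, chosen only to give currents of a realistic (microampere) order of magnitude, not data from any particular experiment.

import numpy as np

F = 96485.0          # Faraday constant, C/mol
n = 1                # electrons transferred per molecule of analyte
A = 0.01             # electrode area, cm^2 (hypothetical)
c0 = 1.0e-6          # bulk analyte concentration, mol/cm^3 (i.e. 1 mM, hypothetical)
D = 1.0e-5           # diffusion coefficient, cm^2/s (typical order of magnitude)

def cottrell_current(t):
    """Diffusion-limited current (A) at time t (s) after the potential step."""
    return n * F * A * c0 * np.sqrt(D / (np.pi * t))

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:5.1f} s  ->  i = {cottrell_current(t):.3e} A")

Plotting cottrell_current(t) against t**(-0.5) for such a diffusion-controlled case gives the straight line through the origin whose slope contains the constants n, F, A, c0 and D, which is the basis of the linearity check described above.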
Cottrell equation
[ "Chemistry", "Mathematics" ]
402
[ "Mathematical objects", "Equations", "Electrochemistry", "Electrochemistry stubs", "Physical chemistry stubs", "Electrochemical equations" ]
11,867,217
https://en.wikipedia.org/wiki/ErbB
The ErbB family of proteins contains four receptor tyrosine kinases, structurally related to the epidermal growth factor receptor (EGFR), its first discovered member. In humans, the family includes Her1 (EGFR, ErbB1), Her2 (ErbB2), Her3 (ErbB3), and Her4 (ErbB4). The gene symbol, ErbB, is derived from the name of a viral oncogene to which these receptors are homologous: erythroblastic leukemia viral oncogene. Insufficient ErbB signaling in humans is associated with the development of neurodegenerative diseases, such as multiple sclerosis and Alzheimer's disease, while excessive ErbB signaling is associated with the development of a wide variety of types of solid tumor. ErbB protein family signaling is important for development. For example, ErbB-2 and ErbB-4 knockout mice die at midgestation due to deficient cardiac function associated with a lack of myocardial ventricular trabeculation, and they display abnormal development of the peripheral nervous system. ErbB-3 receptor mutant mice have less severe defects in the heart and thus are able to survive longer throughout embryogenesis. Lack of Schwann cell maturation leads to degeneration of motor and sensory neurons. Excessive ErbB signaling is associated with the development of a wide variety of types of solid tumor. ErbB-1 and ErbB-2 are found in many human cancers, and their excessive signaling may be critical factors in the development and malignancy of these tumors. Family members The ErbB protein family consists of 4 members ErbB-1, also named epidermal growth factor receptor (EGFR) ErbB-2, also named HER2 in humans and neu in rodents ErbB-3, also named HER3 ErbB-4, also named HER4 v-ErbBs are homologous to EGFR, but lack sequences within the ligand binding ectodomain. Structure All four ErbB receptor family members are nearly identical in structure, each being a single-chain modular glycoprotein. This structure is made up of an extracellular region or ectodomain or ligand binding region that contains approximately 620 amino acids, a single transmembrane-spanning region containing approximately 23 residues, and an intracellular cytoplasmic tyrosine kinase domain containing up to approximately 540 residues. The extracellular region of each family member is made up of 4 subdomains, L1, CR1, L2, and CR2, where "L" signifies a leucine-rich repeat domain and "CR" a cysteine-rich region; these CR domains contain disulfide modules, 8 in the CR1 domain and 7 in the CR2 domain. These subdomains are shown in blue (L1), green (CR1), yellow (L2), and red (CR2) in the figure below. These subdomains are also referred to as domains I-IV, respectively. The intracellular/cytoplasmic region of the ErbB receptor consists mainly of three subdomains: a juxtamembrane segment of approximately 40 residues, a kinase domain containing approximately 260 residues, and a C-terminal domain of 220-350 amino acid residues that becomes activated via phosphorylation of its tyrosine residues and mediates interactions with other ErbB proteins and downstream signaling molecules. The figure below shows the tridimensional structure of the ErbB family proteins, using the pdb files 1NQL (ErbB-1), 1S78 (ErbB-2), 1M6B (ErbB-3) and 2AHX (ErbB-4): ErbB and Kinase activation The four members of the ErbB protein family are capable of forming homodimers, heterodimers, and possibly higher-order oligomers upon activation by a subset of potential growth factor ligands. There are 11 growth factors that activate ErbB receptors.
The ability ('+') or inability ('-') of each growth factor to activate each of the ErbB receptors is shown in the table below: Dimerization occurs after ligand binds to the extracellular domain of the ErbB monomers; the resulting monomer-monomer interaction activates the activation loop in the kinase domain, which in turn drives transphosphorylation of specific tyrosine residues in the kinase domain of the intracellular part of ErbB. It is a complex process due to the domain specificity and nature of the members of the ErbB family. Notably, ErbB1 and ErbB4 are the two most studied and the only fully intact members of the ErbB protein family, as both form functional intracellular tyrosine kinases. ErbB2 has no known binding ligand and ErbB3 lacks an active kinase domain, which makes this duo prone to forming heterodimers and sharing each other's active domains to enable transphosphorylation of the tyrosine residues. The specific tyrosines that are mainly trans- or auto-phosphorylated are located at sites Y992, Y1045, Y1068, Y1148 and Y1173 in the tail region of the ErbB monomer. Activation of the kinase domain in the ErbB dimer requires an asymmetric kinase-domain dimer of the two monomers, with an intact asymmetric (N-lobe to C-lobe) interface between the adjoining monomers. Activation of the tyrosine kinase domain leads to the activation of a whole range of downstream signaling pathways, such as PLCγ, ERK 1/2, p38 MAPK, PI3-K/Akt and more, within the cell. When not bound to a ligand, the extracellular regions of ErbB1, ErbB3, and ErbB4 are found in a tethered conformation in which a 10-amino-acid-long dimerization arm is unable to mediate monomer-monomer interactions. In contrast, in ligand-bound ErbB-1 and unliganded ErbB-2, the dimerization arm becomes untethered and exposed at the receptor surface, making monomer-monomer interactions and dimerisation possible. The consequence of ectodomain dimerization is the positioning of two cytoplasmic domains such that transphosphorylation of specific tyrosine, serine, and threonine amino acids can occur within the cytoplasmic domain of each ErbB. At least 10 specific tyrosines, 7 serines, and 2 threonines have been identified within the cytoplasmic domain of ErbB-1, that may become phosphorylated and in some cases de-phosphorylated (e.g., Tyr 992) upon receptor dimerization. Although a number of potential phosphorylation sites exist, upon dimerization only one or much more rarely two of these sites are phosphorylated at any one time. Role in cancer Phosphorylated tyrosine residues act as binding sites for intracellular signal activators such as Ras. The Ras-Raf-MAPK pathway is a major signalling route for the ErbB family, as is the PI3-K/AKT pathway, both of which lead to increased cell proliferation and inhibition of apoptosis. Genetic Ras mutations are infrequent in breast cancer but Ras may be pathologically activated in breast cancer by overexpression of ErbB receptors. Activation of the receptor tyrosine kinases generates a signaling cascade where the Ras GTPase proteins are activated to a GTP-bound state. The RAS pathway can couple with the mitogen-activated protein kinase pathway or a number of other possible effectors. The PI3K/Akt pathway is dysregulated in many human tumors because of mutations altering proteins in the pathway. In relation to breast tumors, somatic activating mutations in Akt and the p110α subunit of the PI3K have been detected in 3–5% and 20–25% of primary breast tumors, respectively.
Many breast tumors also have lower levels of PTEN, which is a lipid phosphatase that dephosphorylates phosphatidylinositol (3,4,5)-trisphosphate, thereby reversing the action of PI3K. EGFR has been found to be overexpressed in many cancers such as gliomas and non-small-cell lung carcinoma. Drugs such as panitumumab, cetuximab, gefitinib, erlotinib, afatinib, and lapatinib are used to inhibit it. Cetuximab is a chimeric human:murine immunoglobulin G1 mAb that binds EGFR with high affinity and promotes EGFR internalization. It has recently been shown that acquired resistance to cetuximab and gefitinib can be linked to hyperactivity of ErbB-3. This is linked to an acquired overexpression of c-MET, which phosphorylates ErbB-3, which in turn activates the AKT pathway. Panitumumab is a human mAb with high EGFR affinity that blocks ligand-binding to induce EGFR internalization. Panitumumab efficacy has been tested in a variety of advanced cancer patients, including renal carcinomas and metastatic colorectal cancer in clinical trials. ErbB2 overexpression can occur in breast, ovarian, bladder, non-small-cell lung carcinoma, as well as several other tumor types. Trastuzumab (Herceptin) inhibits downstream signaling cascades by selectively binding to the extracellular domain of ErbB-2 receptors. This leads to decreased proliferation of tumor cells. Trastuzumab targets tumor cells and causes apoptosis through the immune system by promoting antibody-dependent cellular cytotoxicity. Two-thirds of women respond to trastuzumab. Although Herceptin works well in most breast cancer cases, it has not yet been elucidated why some HER2-positive breast cancers do not respond well. Research suggests that estrogen receptor-positive breast cancers with a low FISH test ratio are less likely to respond to this drug. ErbB expression has also been linked to cutaneous squamous cell carcinoma (cSCC) development, where the over-expression of these receptors has been found in cSCC tumors. In a study conducted by Cañueto et al. (2017), ErbB over-expression in tumors was linked to lymph node progression and metastasis stage progression in cSCC. References Tyrosine kinase receptors Oncogenes Human genes
ErbB
[ "Chemistry" ]
2,284
[ "Tyrosine kinase receptors", "Signal transduction" ]
1,551,777
https://en.wikipedia.org/wiki/Chemical%20space
Chemical space is a concept in cheminformatics referring to the property space spanned by all possible molecules and chemical compounds adhering to a given set of construction principles and boundary conditions. It contains millions of compounds which are readily accessible and available to researchers. It is a library used in the method of molecular docking. Theoretical spaces A chemical space often referred to in cheminformatics is that of potential pharmacologically active molecules. Its size is estimated to be on the order of 10^60 molecules. There are no rigorous methods for determining the precise size of this space. The assumptions used for estimating the number of potential pharmacologically active molecules, however, use the Lipinski rules, in particular the molecular weight limit of 500. The estimate also restricts the chemical elements used to carbon, hydrogen, oxygen, nitrogen and sulfur. It further makes the assumption of a maximum of 30 atoms to stay below 500 daltons, allows for branching and a maximum of 4 rings, and arrives at an estimate of 10^63. This number is often misquoted in subsequent publications to be the estimated size of the whole organic chemistry space, which would be much larger if including the halogens and other elements. In addition to the drug-like space and lead-like space that are, in part, defined by Lipinski's rule of five, the concept of known drug space (KDS), which is defined by the molecular descriptors of marketed drugs, has also been introduced. KDS can be used to help predict the boundaries of chemical spaces for drug development by comparing the structure of the molecules that are undergoing design and synthesis to the molecular descriptor parameters that are defined by the KDS. Empirical spaces As of October 2024, 219 million molecules had been assigned a Chemical Abstracts Service (CAS) Registry Number. ChEMBL Database version 33 records biological activities for 2,431,025 distinct molecules. Chemical libraries used for laboratory-based screening for compounds with desired properties are examples of real-world chemical libraries of small size (a few hundred to hundreds of thousands of molecules). Generation Systematic exploration of chemical space is possible by creating in silico databases of virtual molecules, which can be visualized by projecting the multidimensional property space of molecules into lower dimensions. Generation of chemical spaces may involve creating stoichiometric combinations of electrons and atomic nuclei to yield all possible topology isomers for the given construction principles. In cheminformatics, software programs called structure generators are used to generate the set of all chemical structures adhering to given boundary conditions. Constitutional isomer generators, for example, can generate all possible constitutional isomers of a given molecular gross formula. In the real world, chemical reactions allow us to move in chemical space. The mapping between chemical space and molecular properties is often not unique, meaning that there can be very different molecules exhibiting very similar properties. Materials design and drug discovery both involve the exploration of chemical space. See also Cheminformatics Drug design Sequence space (evolution) Molecule mining References Cheminformatics Computational chemistry
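To make the drug-like boundary conditions mentioned above concrete, the small Python sketch below screens a few molecules against Lipinski's rule of five, assuming the open-source RDKit toolkit is available. The SMILES strings are illustrative examples, not a curated library, and the cutoffs are the conventional rule-of-five values rather than a recommendation for any specific project.

# Minimal rule-of-five filter, assuming RDKit is installed
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

smiles_list = [
    "CC(=O)Oc1ccccc1C(=O)O",        # aspirin
    "CCO",                          # ethanol
    "CC(C)Cc1ccc(cc1)C(C)C(=O)O",   # ibuprofen
]

for smi in smiles_list:
    mol = Chem.MolFromSmiles(smi)
    if mol is None:                 # skip unparsable structures
        continue
    passes = (
        Descriptors.MolWt(mol) <= 500
        and Descriptors.MolLogP(mol) <= 5
        and Lipinski.NumHDonors(mol) <= 5
        and Lipinski.NumHAcceptors(mol) <= 10
    )
    print(smi, "within rule-of-five bounds" if passes else "outside rule-of-five bounds")

Filters of this kind are what restrict the notionally enormous chemical space to the much smaller drug-like or lead-like subsets discussed above.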
Chemical space
[ "Chemistry" ]
622
[ "Theoretical chemistry", "Computational chemistry", "nan", "Cheminformatics" ]
1,551,873
https://en.wikipedia.org/wiki/ABC%20transporter
The ABC transporters, or ATP-binding cassette (ABC) transporters, are a transport system superfamily that is one of the largest and possibly one of the oldest gene families. It is represented in all extant phyla, from prokaryotes to humans. ABC transporters belong to translocases. ABC transporters often consist of multiple subunits, one or two of which are transmembrane proteins and one or two of which are membrane-associated AAA ATPases. The ATPase subunits utilize the energy of adenosine triphosphate (ATP) binding and hydrolysis to provide the energy needed for the translocation of substrates across membranes, either for uptake or for export of the substrate. Most of the uptake systems also have an extracytoplasmic receptor, a solute binding protein. Some homologous ATPases function in non-transport-related processes such as translation of RNA and DNA repair. ABC transporters are considered to be an ABC superfamily based on the similarities of the sequence and organization of their ATP-binding cassette (ABC) domains, even though the integral membrane proteins appear to have evolved independently several times, and thus comprise different protein families. Like the ABC exporters, it is possible that the integral membrane proteins of ABC uptake systems also evolved at least three times independently, based on their high resolution three-dimensional structures. ABC uptake porters take up a large variety of nutrients, biosynthetic precursors, trace metals and vitamins, while exporters transport lipids, sterols, drugs, and a large variety of primary and secondary metabolites. Some of these exporters in humans are involved in tumor resistance, cystic fibrosis and a range of other inherited human diseases. High level expression of the genes encoding some of these exporters in both prokaryotic and eukaryotic organisms (including human) results in the development of resistance to multiple drugs such as antibiotics and anti-cancer agents. Hundreds of ABC transporters have been characterized from both prokaryotes and eukaryotes. ABC genes are essential for many processes in the cell, and mutations in human genes cause or contribute to several human genetic diseases. Forty-eight ABC genes have been reported in humans. Among these, many have been characterized and shown to be causally related to diseases present in humans such as cystic fibrosis, adrenoleukodystrophy, Stargardt disease, drug-resistant tumors, Dubin–Johnson syndrome, Byler's disease, progressive familial intrahepatic cholestasis, X-linked sideroblastic anemia, ataxia, and persistent hyperinsulinemic hypoglycemia. ABC transporters are also involved in multiple drug resistance, and this is how some of them were first identified. When the ABC transport proteins are overexpressed in cancer cells, they can export anticancer drugs and render tumors resistant. Function ABC transporters utilize the energy of ATP binding and hydrolysis to transport various substrates across cellular membranes. They are divided into three main functional categories. In prokaryotes, importers mediate the uptake of nutrients into the cell. The substrates that can be transported include ions, amino acids, peptides, sugars, and other molecules that are mostly hydrophilic. The membrane-spanning region of the ABC transporter protects hydrophilic substrates from the lipids of the membrane bilayer, thus providing a pathway across the cell membrane. Eukaryotes do not possess any importers.
Exporters or effluxers, which are present both in prokaryotes and eukaryotes, function as pumps that extrude toxins and drugs out of the cell. In gram-negative bacteria, exporters transport lipids and some polysaccharides from the cytoplasm to the periplasm. The third subgroup of ABC proteins do not function as transporters, but are rather involved in translation and DNA repair processes. Prokaryotic Bacterial ABC transporters are essential in cell viability, virulence, and pathogenicity. Iron ABC uptake systems, for example, are important effectors of virulence. Pathogens use siderophores, such as Enterobactin, to scavenge iron that is in complex with high-affinity iron-binding proteins or erythrocytes. These are high-affinity iron-chelating molecules that are secreted by bacteria and reabsorb iron into iron-siderophore complexes. The chvE-gguAB gene in Agrobacterium tumefaciens encodes glucose and galactose importers that are also associated with virulence. Transporters are extremely vital in cell survival such that they function as protein systems that counteract any undesirable change occurring in the cell. For instance, a potential lethal increase in osmotic strength is counterbalanced by activation of osmosensing ABC transporters that mediate uptake of solutes. Other than functioning in transport, some bacterial ABC proteins are also involved in the regulation of several physiological processes. In bacterial efflux systems, certain substances that need to be extruded from the cell include surface components of the bacterial cell (e.g. capsular polysaccharides, lipopolysaccharides, and teichoic acid), proteins involved in bacterial pathogenesis (e.g. hemolysis, heme-binding protein, and alkaline protease), heme, hydrolytic enzymes, S-layer proteins, competence factors, toxins, antibiotics, bacteriocins, peptide antibiotics, drugs and siderophores. They also play important roles in biosynthetic pathways, including extracellular polysaccharide biosynthesis and cytochrome biogenesis. Eukaryotic Although most eukaryotic ABC transporters are effluxers, some are not directly involved in transporting substrates. In the cystic fibrosis transmembrane regulator (CFTR) and in the sulfonylurea receptor (SUR), ATP hydrolysis is associated with the regulation of opening and closing of ion channels carried by the ABC protein itself or other proteins. Human ABC transporters are involved in several diseases that arise from polymorphisms in ABC genes and rarely due to complete loss of function of single ABC proteins. Such diseases include Mendelian diseases and complex genetic disorders such as cystic fibrosis, adrenoleukodystrophy, Stargardt disease, Tangier disease, immune deficiencies, progressive familial intrahepatic cholestasis, Dubin–Johnson syndrome, Pseudoxanthoma elasticum, persistent hyperinsulinemic hypoglycemia of infancy due to focal adenomatous hyperplasia, X-linked sideroblastosis and anemia, age-related macular degeneration, familial hypoapoproteinemia, Retinitis pigmentosum, cone rod dystrophy, and others. The human ABCB (MDR/TAP) family is responsible for multiple drug resistance (MDR) against a variety of structurally unrelated drugs. ABCB1 or MDR1 P-glycoprotein is also involved in other biological processes for which lipid transport is the main function. 
It is found to mediate the secretion of the steroid aldosterone by the adrenals, and its inhibition blocked the migration of dendritic immune cells, possibly related to the outward transport of the lipid platelet activating factor (PAF). It has also been reported that ABCB1 mediates transport of cortisol and dexamethasone, but not of progesterone in ABCB1 transfected cells. MDR1 can also transport cholesterol, short-chain and long-chain analogs of phosphatidylcholine (PC), phosphatidylethanolamine (PE), phosphatidylserine (PS), sphingomyelin (SM), and glucosylceramide (GlcCer). Multispecific transport of diverse endogenous lipids through the MDR1 transporter can possibly affect the transbilayer distribution of lipids, in particular of species normally predominant on the inner plasma membrane leaflet such as PS and PE. More recently, ABC-transporters have been shown to exist within the placenta, indicating they could play a protective role for the developing fetus against xenobiotics. Evidence has shown that placental expression of the ABC-transporters P-glycoprotein (P-gp) and breast cancer resistance protein (BCRP) are increased in preterm compared to term placentae, with P-gp expression further increased in preterm pregnancies with chorioamnionitis. To a lesser extent, increasing maternal BMI also associated with increased placental ABC-transporter expression, but only at preterm. Structure All ABC transport proteins share a structural organization consisting of four core domains. These domains consist of two trans-membrane (T) domains and two cytosolic (A) domains. The two T domains alternate between an inward and outward facing orientation, and the alternation is powered by the hydrolysis of adenosine triphosphate or ATP. ATP binds to the A subunits and it is then hydrolyzed to power the alternation, but the exact process by which this happens is not known. The four domains can be present in four separate polypeptides, which occur mostly in bacteria, or present in one or two multi-domain polypeptides. When the polypeptides are one domain, they can be referred to as a full domain, and when they are two multi-domains they can be referred to as a half domain. The T domains are each built of typically 10 membrane spanning alpha helices, through which the transported substance can cross through the plasma membrane. Also, the structure of the T domains determines the specificity of each ABC protein. In the inward facing conformation, the binding site on the A domain is open directly to the surrounding aqueous solutions. This allows hydrophilic molecules to enter the binding site directly from the inner leaflet of the phospholipid bilayer. In addition, a gap in the protein is accessible directly from the hydrophobic core of the inner leaflet of the membrane bilayer. This allows hydrophobic molecules to enter the binding site directly from the inner leaflet of the phospholipid bilayer. After the ATP powered move to the outward facing conformation, molecules are released from the binding site and allowed to escape into the exoplasmic leaflet or directly into the extracellular medium. The common feature of all ABC transporters is that they consist of two distinct domains, the transmembrane domain (TMD) and the nucleotide-binding domain (NBD). The TMD, also known as membrane-spanning domain (MSD) or integral membrane (IM) domain, consists of alpha helices, embedded in the membrane bilayer. 
It recognizes a variety of substrates and undergoes conformational changes to transport the substrate across the membrane. The sequence and architecture of TMDs is variable, reflecting the chemical diversity of substrates that can be translocated. The NBD or ATP-binding cassette (ABC) domain, on the other hand, is located in the cytoplasm and has a highly conserved sequence. The NBD is the site for ATP binding. In most exporters, the N-terminal transmembrane domain and the C-terminal ABC domains are fused as a single polypeptide chain, arranged as TMD-NBD-TMD-NBD. An example is the E. coli hemolysin exporter HlyB. Importers have an inverted organization, that is, NBD-TMD-NBD-TMD, where the ABC domain is N-terminal whereas the TMD is C-terminal, such as in the E. coli MacB protein responsible for macrolide resistance. The structural architecture of ABC transporters consists minimally of two TMDs and two NBDs. Four individual polypeptide chains including two TMD and two NBD subunits, may combine to form a full transporter such as in the E. coli BtuCD importer involved in the uptake of vitamin B12. Most exporters, such as in the multidrug exporter Sav1866 from Staphylococcus aureus, are made up of a homodimer consisting of two half transporters or monomers of a TMD fused to a nucleotide-binding domain (NBD). A full transporter is often required to gain functionality. Some ABC transporters have additional elements that contribute to the regulatory function of this class of proteins. In particular, importers have a high-affinity binding protein (BP) that specifically associates with the substrate in the periplasm for delivery to the appropriate ABC transporter. Exporters do not have the binding protein but have an intracellular domain (ICD) that joins the membrane-spanning helices and the ABC domain. The ICD is believed to be responsible for communication between the TMD and NBD. Transmembrane domain (TMD) Most transporters have transmembrane domains that consist of a total of 12 α-helices with 6 α-helices per monomer. Since TMDs are structurally diverse, some transporters have varying number of helices (between six and eleven). The TM domains are categorized into three distinct sets of folds: type I ABC importer, type II ABC importer and ABC exporter folds. The classification of importer folds is based on detailed characterization of the sequences. The type I ABC importer fold was originally observed in the ModB TM subunit of the molybdate transporter. This diagnostic fold can also be found in the MalF and MalG TM subunits of MalFGK2 and the Met transporter MetI. In the MetI transporter, a minimal set of 5 transmembrane helices constitute this fold while an additional helix is present for both ModB and MalG. The common organization of the fold is the "up-down" topology of the TM2-5 helices that lines the translocation pathway and the TM1 helix wrapped around the outer, membrane-facing surface and contacts the other TM helices. The type II ABC importer fold is observed in the twenty TM helix-domain of BtuCD and in Hi1471, a homologous transporter from Haemophilus influenzae. In BtuCD, the packing of the helices is complex. The noticeable pattern is that the TM2 helix is positioned through the center of the subunit where it is surrounded in close proximity by the other helices. Meanwhile, the TM5 and TM10 helices are positioned in the TMD interface. 
The membrane spanning region of ABC exporters is organized into two "wings" that are composed of helices TM1 and TM2 from one subunit and TM3-6 of the other, in a domain-swapped arrangement. A prominent pattern is that helices TM1-3 are related to TM4-6 by an approximate twofold rotation around an axis in the plane of the membrane. The exporter fold is originally observed in the Sav1866 structure. It contains 12 TM helices, 6 per monomer. Nucleotide-binding domain (NBD) The ABC domain consists of two domains, the catalytic core domain similar to RecA-like motor ATPases and a smaller, structurally diverse α-helical subdomain that is unique to ABC transporters. The larger domain typically consists of two β-sheets and six α helices, where the catalytic Walker A motif (GXXGXGKS/T where X is any amino acid) or P-loop and Walker B motif (ΦΦΦΦD, of which Φ is a hydrophobic residue) is situated. The helical domain consists of three or four helices and the ABC signature motif, also known as LSGGQ motif, linker peptide or C motif. The ABC domain also has a glutamine residue residing in a flexible loop called Q loop, lid or γ-phosphate switch, that connects the TMD and ABC. The Q loop is presumed to be involved in the interaction of the NBD and TMD, particularly in the coupling of nucleotide hydrolysis to the conformational changes of the TMD during substrate translocation. The H motif or switch region contains a highly conserved histidine residue that is also important in the interaction of the ABC domain with ATP. The name ATP-binding cassette is derived from the diagnostic arrangement of the folds or motifs of this class of proteins upon formation of the ATP sandwich and ATP hydrolysis. ATP binding and hydrolysis Dimer formation of the two ABC domains of transporters requires ATP binding. It is generally observed that the ATP bound state is associated with the most extensive interface between ABC domains, whereas the structures of nucleotide-free transporters exhibit conformations with greater separations between the ABC domains. Structures of the ATP-bound state of isolated NBDs have been reported for importers including HisP, GlcV, MJ1267, E. coli MalK (E.c.MalK), T. litoralis MalK (TlMalK), and exporters such as TAP, HlyB, MJ0796, Sav1866, and MsbA. In these transporters, ATP is bound to the ABC domain. Two molecules of ATP are positioned at the interface of the dimer, sandwiched between the Walker A motif of one subunit and the LSGGQ motif of the other. This was first observed in Rad50 and reported in structures of MJ0796, the NBD subunit of the LolD transporter from Methanococcus jannaschii and E.c.MalK of a maltose transporter. These structures were also consistent with results from biochemical studies revealing that ATP is in close contact with residues in the P-loop and LSGGQ motif during catalysis. Nucleotide binding is required to ensure the electrostatic and/or structural integrity of the active site and contribute to the formation of an active NBD dimer. Binding of ATP is stabilized by the following interactions: (1) ring-stacking interaction of a conserved aromatic residue preceding the Walker A motif and the adenosine ring of ATP, (2) hydrogen-bonds between a conserved lysine residue in the Walker A motif and the oxygen atoms of the β- and γ-phosphates of ATP and coordination of these phosphates and some residues in the Walker A motif with Mg2+ ion, and (3) γ-phosphate coordination with side chain of serine and backbone amide groups of glycine residues in the LSGGQ motif. 
In addition, a residue that suggests the tight coupling of ATP binding and dimerization, is the conserved histidine in the H-loop. This histidine contacts residues across the dimer interface in the Walker A motif and the D loop, a conserved sequence following the Walker B motif. The enzymatic hydrolysis of ATP requires proper binding of the phosphates and positioning of the γ-phosphate to the attacking water. In the nucleotide binding site, the oxygen atoms of the β- and γ-phosphates of ATP are stabilized by residues in the Walker A motif and coordinate with Mg2+. This Mg2+ ion also coordinates with the terminal aspartate residue in the Walker B motif through the attacking H2O. A general base, which may be the glutamate residue adjacent to the Walker B motif, glutamine in the Q-loop, or a histidine in the switch region that forms a hydrogen bond with the γ-phosphate of ATP, is found to catalyze the rate of ATP hydrolysis by promoting the attacking H2O. The precise molecular mechanism of ATP hydrolysis is still controversial. Mechanism of transport ABC transporters are active transporters, that is, they use energy in the form of adenosine triphosphate (ATP) to translocate substrates across cell membranes. These proteins harness the energy of ATP binding and/or hydrolysis to drive conformational changes in the transmembrane domain (TMD) and consequently transport molecules. ABC importers and exporters have a common mechanism for transporting substrates. They are similar in their structures. The model that describes the conformational changes associated with the binding of the substrate is the alternating-access model. In this model, the substrate binding site alternates between outward- and inward-facing conformations. The relative binding affinities of the two conformations for the substrate largely determines the net direction of transport. For importers, since translocation is directed from the periplasm to the cytoplasm, the outward-facing conformation has higher binding affinity for the substrate. In contrast, the substrate binding affinity in exporters is greater in the inward-facing conformation. A model that describes the conformational changes in the nucleotide-binding domain (NBD) as a result of ATP binding and hydrolysis is the ATP-switch model. This model presents two principal conformations of the NBDs: formation of a closed dimer upon binding two ATP molecules and dissociation to an open dimer facilitated by ATP hydrolysis and release of inorganic phosphate (Pi) and adenosine diphosphate (ADP). Switching between the open and closed dimer conformations induces conformational changes in the TMD resulting in substrate translocation. The general mechanism for the transport cycle of ABC transporters has not been fully elucidated, but substantial structural and biochemical data has accumulated to support a model in which ATP binding and hydrolysis is coupled to conformational changes in the transporter. The resting state of all ABC transporters has the NBDs in an open dimer configuration, with low affinity for ATP. This open conformation possesses a chamber accessible to the interior of the transporter. The transport cycle is initiated by binding of substrate to the high-affinity site on the TMDs, which induces conformational changes in the NBDs and enhances the binding of ATP. Two molecules of ATP bind, cooperatively, to form the closed dimer configuration. 
The closed NBD dimer induces a conformational change in the TMDs such that the TMD opens, forming a chamber with an opening opposite to that of the initial state. The affinity of the substrate to the TMD is reduced, thereby releasing the substrate. Hydrolysis of ATP follows and then sequential release of Pi and then ADP restores the transporter to its basal configuration. Although a common mechanism has been suggested, the order of substrate binding, nucleotide binding and hydrolysis, and conformational changes, as well as interactions between the domains is still debated. Several groups studying ABC transporters have differing assumptions on the driving force of transporter function. It is generally assumed that ATP hydrolysis provides the principal energy input or "power stroke" for transport and that the NBDs operate alternately and are possibly involved in different steps in the transport cycle. However, recent structural and biochemical data shows that ATP binding, rather than ATP hydrolysis, provides the "power stroke". It may also be that since ATP binding triggers NBD dimerization, the formation of the dimer may represent the "power stroke." In addition, some transporters have NBDs that do not have similar abilities in binding and hydrolyzing ATP and that the interface of the NBD dimer consists of two ATP binding pockets suggests a concurrent function of the two NBDs in the transport cycle. Some evidence to show that ATP binding is indeed the power stroke of the transport cycle was reported. It has been shown that ATP binding induces changes in the substrate-binding properties of the TMDs. The affinity of ABC transporters for substrates has been difficult to measure directly, and indirect measurements, for instance through stimulation of ATPase activity, often reflects other rate-limiting steps. Recently, direct measurement of vinblastine binding to permease-glycoprotein (P-glycoprotein) in the presence of nonhydrolyzable ATP analogs, e.g. 5'-adenylyl-β-γ-imidodiphosphate (AMP-PNP), showed that ATP binding, in the absence of hydrolysis, is sufficient to reduce substrate-binding affinity. Also, ATP binding induces substantial conformational changes in the TMDs. Spectroscopic, protease accessibility and crosslinking studies have shown that ATP binding to the NBDs induces conformational changes in multidrug resistance-associated protein-1 (MRP1), HisPMQ, LmrA, and Pgp. Two dimensional crystal structures of AMP-PNP-bound Pgp showed that the major conformational change during the transport cycle occurs upon ATP binding and that subsequent ATP hydrolysis introduces more limited changes. Rotation and tilting of transmembrane α-helices may both contribute to these conformational changes. Other studies have focused on confirming that ATP binding induces NBD closed dimer formation. Biochemical studies of intact transport complexes suggest that the conformational changes in the NBDs are relatively small. In the absence of ATP, the NBDs may be relatively flexible, but they do not involve a major reorientation of the NBDs with respect to the other domains. ATP binding induces a rigid body rotation of the two ABC subdomains with respect to each other, which allows the proper alignment of the nucleotide in the active site and interaction with the designated motifs. There is strong biochemical evidence that binding of two ATP molecules can be cooperative, that is, ATP must bind to the two active site pockets before the NBDs can dimerize and form the closed, catalytically active conformation. 
ABC importers Most ABC transporters that mediate the uptake of nutrients and other molecules in bacteria rely on a high-affinity solute binding protein (BP). BPs are soluble proteins located in the periplasmic space between the inner and outer membranes of gram-negative bacteria. Gram-positive microorganisms lack a periplasm such that their binding protein is often a lipoprotein bound to the external face of the cell membrane. Some gram-positive bacteria have BPs fused to the transmembrane domain of the transporter itself. The first successful x-ray crystal structure of an intact ABC importer is the molybdenum transporter (ModBC-A) from Archaeoglobus fulgidus. Atomic-resolution structures of three other bacterial importers, E. coli BtuCD, E. coli maltose transporter (MalFGK2-E), and the putative metal-chelate transporter of Haemophilus influenzae, HI1470/1, have also been determined. The structures provided detailed pictures of the interaction of the transmembrane and ABC domains as well as revealed two different conformations with an opening in two opposite directions. Another common feature of importers is that each NBD is bound to one TMD primarily through a short cytoplasmic helix of the TMD, the "coupling helix". This portion of the EAA loop docks in a surface cleft formed between the RecA-like and helical ABC subdomains and lies approximately parallel to the membrane bilayer. Large ABC importers The BtuCD and HI1470/1 are classified as large (Type II) ABC importers. The transmembrane subunit of the vitamin B12 importer, BtuCD, contains 10 TM helices and the functional unit consists of two copies each of the nucleotide binding domain (NBD) and transmembrane domain (TMD). The TMD and NBD interact with one another via the cytoplasmic loop between two TM helices and the Q loop in the ABC. In the absence of nucleotide, the two ABC domains are folded and the dimer interface is open. A comparison of the structures with (BtuCDF) and without (BtuCD) binding protein reveals that BtuCD has an opening that faces the periplasm whereas in BtuCDF, the outward-facing conformation is closed to both sides of the membrane. The structures of BtuCD and the BtuCD homolog, HI1470/1, represent two different conformational states of an ABC transporter. The predicted translocation pathway in BtuCD is open to the periplasm and closed at the cytoplasmic side of the membrane while that of HI1470/1 faces the opposite direction and open only to the cytoplasm. The difference in the structures is a 9° twist of one TM subunit relative to the other. Small ABC importers Structures of the ModBC-A and MalFGK2-E, which are in complex with their binding protein, correspond to small (Type I) ABC importers. The TMDs of ModBC-A and MalFGK2-E have only six helices per subunit. The homodimer of ModBC-A is in a conformation in which the TM subunits (ModB) orient in an inverted V-shape with a cavity accessible to the cytoplasm. The ABC subunits (ModC), on the other hand, are arranged in an open, nucleotide-free conformation, in which the P-loop of one subunit faces but is detached from the LSGGQ motif of the other. The binding protein ModA is in a closed conformation with substrate bound in a cleft between its two lobes and attached to the extracellular loops of ModB, wherein the substrate is sitting directly above the closed entrance of the transporter. The MalFGK2-E structure resembles the catalytic transition state for ATP hydrolysis. 
It is in a closed conformation where it contains two ATP molecules, sandwiched between the Walker A and B motifs of one subunit and the LSGGQ motif of the other subunit. The maltose binding protein (MBP or MalE) is docked on the periplasmic side of the TM subunits (MalF and MalG) and a large, occluded cavity can be found at the interface of MalF and MalG. The arrangement of the TM helices is in a conformation that is closed toward the cytoplasm but with an opening that faces outward. The structure suggests a possibility that MBP may stimulate the ATPase activity of the transporter upon binding. Mechanism of transport for importers The mechanism of transport for importers supports the alternating-access model. The resting state of importers is inward-facing, where the nucleotide binding domain (NBD) dimer interface is held open by the TMDs and facing outward but occluded from the cytoplasm. Upon docking of the closed, substrate-loaded binding protein towards the periplasmic side of the transmembrane domains, ATP binds and the NBD dimer closes. This switches the resting state of transporter into an outward-facing conformation, in which the TMDs have reoriented to receive substrate from the binding protein. After hydrolysis of ATP, the NBD dimer opens and substrate is released into the cytoplasm. Release of ADP and Pi reverts the transporter into its resting state. The only inconsistency of this mechanism to the ATP-switch model is that the conformation in its resting, nucleotide-free state is different from the expected outward-facing conformation. Although that is the case, the key point is that the NBD does not dimerize unless ATP and binding protein is bound to the transporter. ABC exporters Prokaryotic ABC exporters are abundant and have close homologues in eukaryotes. This class of transporters is studied based on the type of substrate that is transported. One class is involved in the protein (e.g. toxins, hydrolytic enzymes, S-layer proteins, lantibiotics, bacteriocins, and competence factors) export and the other in drug efflux. ABC transporters have gained extensive attention because they contribute to the resistance of cells to antibiotics and anticancer agents by pumping drugs out of the cells. A common mechanism is the overexpression of ABC exporters like P-glycoprotein (P-gp/ABCB1), multidrug resistance-associated protein 1 (MRP1/ABCC1), and breast cancer resistance protein (BCRP/ABCG2) in cancer cells that limit the exposure to anticancer drugs. In gram-negative organisms, ABC transporters mediate secretion of protein substrates across inner and outer membranes simultaneously without passing through the periplasm. This type of secretion is referred to as type I secretion, which involves three components that function in concert: an ABC exporter, a membrane fusion protein (MFP), and an outer membrane factor (OMF). An example is the secretion of hemolysin (HlyA) from E. coli where the inner membrane ABC transporter HlyB interacts with an inner membrane fusion protein HlyD and an outer membrane facilitator TolC. TolC allows hemolysin to be transported across the two membranes, bypassing the periplasm. Bacterial drug resistance has become an increasingly major health problem. One of the mechanisms for drug resistance is associated with an increase in antibiotic efflux from the bacterial cell. Drug resistance associated with drug efflux, mediated by P-glycoprotein, was originally reported in mammalian cells. 
In bacteria, Levy and colleagues presented the first evidence that antibiotic resistance was caused by active efflux of a drug. P-glycoprotein is the best-studied efflux pump and as such has offered important insights into the mechanism of bacterial pumps. Although some exporters transport a specific type of substrate, most transporters extrude a diverse class of drugs with varying structure. These transporters are commonly called multi-drug resistant (MDR) ABC transporters and are sometimes referred to as "hydrophobic vacuum cleaners". Human ABCB1/MDR1 P-glycoprotein P-glycoprotein (3.A.1.201.1) is a well-studied protein associated with multi-drug resistance. It belongs to the human ABCB (MDR/TAP) family and is also known as ABCB1 or MDR1 Pgp. MDR1 consists of a functional monomer with two transmembrane domains (TMD) and two nucleotide-binding domains (NBD). This protein can transport mainly cationic or electrically neutral substrates as well as a broad spectrum of amphiphilic substrates. The structure of the full-size ABCB1 monomer was obtained in the presence and absence of nucleotide using electron cryo-crystallography. Without the nucleotide, the TMDs are approximately parallel and form a barrel surrounding a central pore, with the opening facing towards the extracellular side of the membrane and closed at the intracellular face. In the presence of the nonhydrolyzable ATP analog, AMP-PNP, the TMDs have a substantial reorganization with three clearly segregated domains. A central pore, which is enclosed between the TMDs, is slightly open towards the intracellular face with a gap between two domains allowing access of substrate from the lipid phase. Substantial repacking and possible rotation of the TM helices upon nucleotide binding suggests a helix rotation model for the transport mechanism. Plant transporters The genome of the model plant Arabidopsis thaliana is capable of encoding 120 ABC proteins, compared to the 50-70 ABC proteins that are encoded by the human genome and fruit flies (Drosophila melanogaster). Plant ABC proteins are categorized into 13 subfamilies on the basis of size (full, half or quarter), orientation, and overall amino acid sequence similarity. Multidrug resistant (MDR) homologs, also known as P-glycoproteins, represent the largest subfamily in plants with 22 members and the second largest overall ABC subfamily. The B subfamily of plant ABC transporters (ABCBs) is characterized by localization to the plasma membrane. Plant ABCB transporters are characterized by heterologously expressing them in Escherichia coli, Saccharomyces cerevisiae, Schizosaccharomyces pombe (fission yeast), and HeLa cells to determine substrate specificity. Plant ABCB transporters have been shown to transport the phytohormone indole-3-acetic acid (IAA), also known as auxin, an essential regulator of plant growth and development. The directional polar transport of auxin mediates plant environmental responses through processes such as phototropism and gravitropism. Two of the best studied auxin transporters, ABCB1 and ABCB19, have been characterized as primary auxin exporters. Other ABCB transporters, such as ABCB4, participate in both the export and import of auxin. At low intracellular auxin concentrations, ABCB4 imports auxin until it reaches a certain threshold, at which point it reverses function to export auxin only. Sav1866 The first high-resolution structure reported for an ABC exporter was that of Sav1866 (3.A.1.106.2) from Staphylococcus aureus. Sav1866 is a homolog of multidrug ABC transporters.
It shows significant sequence similarity to human ABC transporters of subfamily B that includes MDR1 and TAP1/TAP2. The ATPase activity of Sav1866 is known to be stimulated by cancer drugs such as doxorubicin, vinblastine and others, which suggests similar substrate specificity to P-glycoprotein and therefore a possible common mechanism of substrate translocation. Sav1866 is a homodimer of half transporters, and each subunit contains an N-terminal TMD with six helices and a C-terminal NBD. The NBDs are similar in structure to those of other ABC transporters, in which the two ATP binding sites are formed at the dimer interface between the Walker A motif of one NBD and the LSGGQ motif of the other. The ADP-bound structure of Sav1866 shows the NBDs in a closed dimer and the TM helices split into two "wings" oriented towards the periplasm, forming the outward-facing conformation. Each wing consists of helices TM1-2 from one subunit and TM3-6 from the other subunit. It contains long intracellular loops (ICLs or ICD) connecting the TMDs that extend beyond the lipid bilayer into the cytoplasm and interact with the NBDs. Whereas the importers contain a short coupling helix that contacts a single NBD, Sav1866 has two intracellular coupling helices, one (ICL1) contacting the NBDs of both subunits and the other (ICL2) interacting with only the opposite NBD subunit. MsbA MsbA (3.A.1.106.1) is a multi-drug resistant (MDR) ABC transporter and possibly a lipid flippase. It is an ATPase that transports lipid A, the hydrophobic moiety of lipopolysaccharide (LPS), a glucosamine-based saccharolipid that makes up the outer monolayer of the outer membranes of most gram-negative bacteria. Lipid A is an endotoxin, and so loss of MsbA from the cell membrane, or mutations that disrupt transport, result in the accumulation of lipid A in the inner cell membrane, leading to cell death. It is a close bacterial homolog of P-glycoprotein (Pgp) by protein sequence homology and has overlapping substrate specificities with the MDR-ABC transporter LmrA from Lactococcus lactis. MsbA from E. coli is 36% identical to the NH2-terminal half of human MDR1, suggesting a common mechanism for transport of amphipathic and hydrophobic substrates. The MsbA gene encodes a half transporter that contains a transmembrane domain (TMD) fused with a nucleotide-binding domain (NBD). It is assembled as a homodimer with a total molecular mass of 129.2 kDa. MsbA contains 6 TMDs on the periplasmic side, an NBD located on the cytoplasmic side of the cell membrane, and an intracellular domain (ICD), bridging the TMD and NBD. This conserved helix extending from the TMD segments into or near the active site of the NBD is largely responsible for crosstalk between TMD and NBD. In particular, ICD1 serves as a conserved pivot about which the NBD can rotate, therefore allowing the NBD to dissociate and dimerize during ATP binding and hydrolysis. Previously published (and now retracted) X-ray structures of MsbA were inconsistent with the bacterial homolog Sav1866. The structures were reexamined and found to have an error in the assignment of the handedness, resulting in incorrect models of MsbA. Recently, the errors have been rectified and new structures have been reported. The resting state of E. coli MsbA exhibits an inverted "V" shape with a chamber accessible to the interior of the transporter, suggesting an open, inward-facing conformation. 
The dimer contacts are concentrated between the extracellular loops and, while the NBDs are ≈50Å apart, the subunits face each other. The distances between the residues at the dimer interface have been verified by cross-linking experiments and EPR spectroscopy studies. The relatively large chamber allows it to accommodate large head groups such as those present in lipid A. Significant conformational changes are required to move the large sugar head groups across the membrane. The difference between the two nucleotide-free (apo) structures is the ≈30° pivot of TM4/TM5 helices relative to the TM3/TM6 helices. In the closed apo state (from V. cholerae MsbA), the NBDs are aligned and although closer, have not formed an ATP sandwich, and the P loops of opposing monomers are positioned next to one another. In comparison to the open conformation, the dimer interface of the TMDs in the closed, inward-facing conformation has extensive contacts. For both apo conformations of MsbA, the chamber opening is facing inward. The structure of MsbA-AMP-PNP (5'-adenylyl-β-γ-imidodiphosphate), obtained from S. typhimurium, is similar to Sav1866. The NBDs in this nucleotide-bound, outward-facing conformation come together to form a canonical ATP dimer sandwich, that is, the nucleotide is situated in between the P-loop and LSGGQ motif. The conformational transition from MsbA-closed-apo to MsbA-AMP-PNP involves two steps, which are more likely concerted: a ≈10° pivot of TM4/TM5 helices towards TM3/TM6, bringing the NBDs closer but not into alignment, followed by tilting of TM4/TM5 helices ≈20° out of plane. The twisting motion results in the separation of TM3/TM6 helices away from TM1/TM2, leading to a change from an inward- to an outward-facing conformation. Thus, changes in both the orientation and spacing of the NBDs dramatically rearrange the packing of transmembrane helices and effectively switch access to the chamber from the inner to the outer leaflet of the membrane. The structures determined for MsbA form the basis for the tilting model of transport. The structures described also highlight the dynamic nature of ABC exporters as also suggested by fluorescence and EPR studies. Recent work has resulted in the discovery of MsbA inhibitors. Mechanism of transport for exporters ABC exporters have a transport mechanism that is consistent with both the alternating-access model and ATP-switch model. In the apo states of exporters, the conformation is inward-facing and the TMDs and NBDs are relatively far apart to accommodate amphiphilic or hydrophobic substrates. For MsbA, in particular, the size of the chamber is large enough to accommodate the sugar groups from lipopolysaccharides (LPS). As has been suggested by several groups, binding of substrate initiates the transport cycle. The "power stroke", that is, ATP binding that induces NBD dimerization and formation of the ATP sandwich, drives the conformational changes in the TMDs. In MsbA, the sugar head groups are sequestered within the chamber during the "power stroke". The cavity is lined with charged and polar residues that are likely solvated, creating an energetically unfavorable environment for hydrophobic substrates and energetically favorable for polar moieties in amphiphilic compounds or sugar groups from LPS. Since the lipid cannot be stable for a long time in the chamber environment, lipid A and other hydrophobic molecules may "flip" into an energetically more favorable position within the outer membrane leaflet. 
The "flipping" may also be driven by the rigid-body shearing of the TMDs while the hydrophobic tails of the LPS are dragged through the lipid bilayer. Repacking of the helices switches the conformation into an outward-facing state. ATP hydrolysis may widen the periplasmic opening and push the substrate towards the outer leaflet of the lipid bilayer. Hydrolysis of the second ATP molecule and release of Pi separates the NBDs followed by restoration of the resting state, opening the chamber towards the cytoplasm for another cycle. Role in multi drug resistance ABC transporters are known to play a crucial role in the development of multidrug resistance (MDR). In MDR, patients that are on medication eventually develop resistance not only to the drug they are taking but also to several different types of drugs. This is caused by several factors, one of which is increased expulsion of the drug from the cell by ABC transporters. For example, the ABCB1 protein (P-glycoprotein) functions in pumping tumor suppression drugs out of the cell. Pgp also called MDR1, ABCB1, is the prototype of ABC transporters and also the most extensively-studied gene. Pgp is known to transport organic cationic or neutral compounds. A few ABCC family members, also known as MRP, have also been demonstrated to confer MDR to organic anion compounds. The most-studied member in ABCG family is ABCG2, also known as BCRP (breast cancer resistance protein) confer resistance to most Topoisomerase I or II inhibitors such as topotecan, irinotecan, and doxorubicin. It is unclear exactly how these proteins can translocate such a wide variety of drugs, however, one model (the hydrophobic vacuum cleaner model) states that, in P-glycoprotein, the drugs are bound indiscriminately from the lipid phase based on their hydrophobicity. The Discovery of the first eukaryotic ABC transporter protein came from studies on tumor cells and cultured cells that exhibited resistance to several drugs with unrelated chemical structures. These cells were shown to express elevated levels of multidrug-resistance (MDR) transport protein which was originally called P-glycoprotein (P-gp), but it is also referred to as multidrug resistance protein 1 (MDR1) or ABCB1. This protein uses ATP hydrolysis, just like the other ABC transporters, to export a large variety of drugs from the cytosol to the extracellular medium. In multidrug-resistant cells, the MDR1 gene is frequently amplified. This results in a large overproduction of the MDR1 protein. The substrates of mammalian ABCB1 are primarily planar, lipid-soluble molecules with one or more positive charges. All of these substrates compete with one another for transport, suggesting that they bind to the same or overlapping sites on the protein. Many of the drugs that are transported out by ABCB1 are small, nonpolar drugs that diffuse across the extracellular medium into the cytosol, where they block various cellular functions. Drugs such as colchicine and vinblastine, which block assembly of microtubules, freely cross the membrane into the cytosol, but the export of these drugs by ABCB1 reduces their concentration in the cell. Therefore, it takes a higher concentration of the drugs is required to kill the cells that express ABCB1 than those that do not express the gene. Other ABC transporters that contribute to multidrug resistance are ABCC1 (MRP1) and ABCG2 (breast cancer resistance protein). 
To solve the problems associated with multidrug resistance mediated by MDR1, different types of drugs can be used or the ABC transporters themselves must be inhibited. For other types of drugs to work, they must bypass the resistance mechanism, which is the ABC transporter. To do this, other anticancer drugs can be utilized, such as alkylating drugs (cyclophosphamide), antimetabolites (5-fluorouracil), and modified anthracycline drugs (annamycin and doxorubicin-peptide). These drugs would not function as a substrate of ABC transporters, and would thus not be transported. The other option is to use a combination of ABC inhibitory drugs and anticancer drugs at the same time. This would reverse the resistance to the anticancer drugs so that they could function as intended. The substrates that reverse the resistance to anticancer drugs are called chemosensitizers. Reversal of multi drug resistance Drug resistance is a common clinical problem that occurs in patients with infectious diseases and in patients with cancer. Prokaryotic and eukaryotic microorganisms as well as neoplastic cells are often found to be resistant to drugs. MDR is frequently associated with overexpression of ABC transporters. Inhibition of ABC transporters by low-molecular weight compounds has been extensively investigated in cancer patients; however, the clinical results have been disappointing. Recently, various RNAi strategies have been applied to reverse MDR in different tumor models, and this technology is effective in reversing ABC-transporter-mediated MDR in cancer cells and is therefore a promising strategy for overcoming MDR by gene therapeutic applications. RNAi technology could also be considered for overcoming MDR in infectious diseases caused by microbial pathogens. Physiological role In addition to conferring MDR in tumor cells, ABC transporters are also expressed in the membranes of healthy cells, where they facilitate the transport of various endogenous substances, as well as of substances foreign to the body. For instance, ABC transporters such as Pgp, the MRPs and BCRP limit the absorption of many drugs from the intestine, and pump drugs from the liver cells to the bile as a means of removing foreign substances from the body. A large number of drugs are either transported by ABC transporters themselves or affect the transport of other drugs. The latter scenario can lead to drug-drug interactions, sometimes resulting in altered effects of the drugs. Methods to characterize ABC transporter interactions There are a number of assay types that allow the detection of ABC transporter interactions with endogenous and xenobiotic compounds. The complexity of the assays ranges from relatively simple membrane assays, like the vesicular transport assay and the ATPase assay, to more complex cell-based assays and intricate in vivo detection methodologies. Membrane assays The vesicular transport assay detects the translocation of molecules by ABC transporters. Membranes prepared under suitable conditions contain inside-out oriented vesicles with the ATP binding site and substrate binding site of the transporter facing the buffer outside. Substrates of the transporter are taken up into the vesicles in an ATP dependent manner. Rapid filtration using glass fiber filters or nitrocellulose membranes is used to separate the vesicles from the incubation solution, and the test compound trapped inside the vesicles is retained on the filter. The quantity of the transported unlabelled molecules is determined by HPLC, LC/MS, or LC/MS/MS. 
Alternatively, the compounds are radiolabeled, fluorescent or have a fluorescent tag so that the radioactivity or fluorescence retained on the filter can be quantified. Various types of membranes from different sources (e.g. insect cells, transfected or selected mammalian cell lines) are used in vesicular transport studies. Membranes are commercially available or can be prepared from various cells or even tissues, e.g. liver canalicular membranes. This assay type has the advantage of measuring the actual disposition of the substrate across the cell membrane. Its disadvantage is that compounds with medium-to-high passive permeability are not retained inside the vesicles, making direct transport measurements with this class of compounds difficult to perform. The vesicular transport assay can be performed in an "indirect" setting, where interacting test drugs modulate the transport rate of a reporter compound. This assay type is particularly suitable for the detection of possible drug-drug interactions and drug-endogenous substrate interactions. It is not sensitive to the passive permeability of the compounds and therefore detects all interacting compounds. Yet, it does not provide information on whether the compound tested is an inhibitor of the transporter, or a substrate of the transporter inhibiting its function in a competitive fashion. A typical example of an indirect vesicular transport assay is the detection of the inhibition of taurocholate transport by ABCB11 (BSEP). Whole cell based assays Efflux transporter-expressing cells actively pump substrates out of the cell, which results in a lower rate of substrate accumulation, lower intracellular concentration at steady state, or a faster rate of substrate elimination from cells loaded with the substrate. Transported radioactive substrates or labeled fluorescent dyes can be directly measured, or in an indirect setup, the modulation of the accumulation of a probe substrate (e.g. fluorescent dyes like rhodamine 123, or calcein) can be determined in the presence of a test drug. Calcein-AM, a highly permeable derivative of calcein, readily penetrates into intact cells, where the endogenous esterases rapidly hydrolyze it to the fluorescent calcein. In contrast to calcein-AM, calcein has low permeability and therefore gets trapped in the cell and accumulates. As calcein-AM is an excellent substrate of the MDR1 and MRP1 efflux transporters, cells expressing MDR1 and/or MRP1 transporters pump the calcein-AM out of the cell before esterases can hydrolyze it. This results in a lower cellular accumulation rate of calcein. The higher the MDR activity is in the cell membrane, the less calcein is accumulated in the cytoplasm. In MDR-expressing cells, the addition of an MDR inhibitor or an MDR substrate in excess dramatically increases the rate of calcein accumulation. The activity of the multidrug transporter is reflected by the difference between the amounts of dye accumulated in the presence and the absence of inhibitor. Using selective inhibitors, transport activity of MDR1 and MRP1 can be easily distinguished. This assay can be used to screen drugs for transporter interactions, and also to quantify the MDR activity of cells. The calcein assay is the proprietary assay of SOLVO Biotechnology. Subfamilies Mammalian subfamilies There are 49 known ABC transporters present in humans, which are classified into seven families by the Human Genome Organization. A full list of human ABC transporters can be found from. 
ABCA The ABCA subfamily is composed of 12 full transporters split into two subgroups. The first subgroup consists of seven genes that map to six different chromosomes. These are ABCA1, ABCA2, ABCA3, ABCA4, ABCA7, ABCA12, and ABCA13. The other subgroup consists of ABCA5, ABCA6, ABCA8, ABCA9 and ABCA10. All of subgroup 2 is organized into a head-to-tail cluster of genes on chromosome 17q24. Genes in this second subgroup are distinguished from ABCA1-like genes by having 37-38 exons as opposed to the 50 exons in ABCA1. The ABCA1 subgroup is implicated in the development of genetic diseases. In recessive Tangier disease, the ABCA1 protein is mutated. Also, ABCA4 maps to a region of chromosome 1p21 that contains the gene for Stargardt's disease. This gene is found to be highly expressed in rod photoreceptors and is mutated in Stargardt's disease, recessive retinitis pigmentosa, and the majority of recessive cone-rod dystrophies. ABCB The ABCB subfamily is composed of four full transporters and two half transporters. This is the only human subfamily to have both half and full types of transporters. ABCB1 was discovered as a protein overexpressed in certain drug resistant tumor cells. It is expressed primarily in the blood–brain barrier and liver and is thought to be involved in protecting cells from toxins. Cells that overexpress this protein exhibit multi-drug resistance. ABCC Subfamily ABCC contains thirteen members and nine of these transporters are referred to as the Multidrug Resistance Proteins (MRPs). The MRP proteins are found throughout nature and they mediate many important functions. They are known to be involved in ion transport, toxin secretion, and signal transduction. Of the nine MRP proteins, four of them, MRP4, 5, 8 and 9 (ABCC4, 5, 11, and 12), have a typical ABC structure with four domains, comprising two membrane spanning domains, with each spanning domain followed by a nucleotide binding domain. These are referred to as short MRPs. The remaining five MRPs (MRP1, 2, 3, 6 and 7) (ABCC1, 2, 3, 6 and 10) are known as long MRPs and feature an additional fifth domain at their N terminus. CFTR, the transporter involved in the disease cystic fibrosis, is also considered part of this subfamily. Cystic fibrosis occurs upon mutation and loss of function of CFTR. The sulfonylurea receptors (SUR), involved in insulin secretion, neuronal function, and muscle function, are also part of this family of proteins. Mutations in SUR proteins are a potential cause of neonatal diabetes mellitus. SUR is also the binding site for drugs such as sulfonylureas and potassium-channel openers such as diazoxide. ABCD The ABCD subfamily consists of four genes that encode half transporters expressed exclusively in the peroxisome. ABCD1 is responsible for the X-linked form of adrenoleukodystrophy (ALD), which is a disease characterized by neurodegeneration and adrenal deficiency that typically is initiated in late childhood. The cells of ALD patients feature accumulation of unbranched saturated fatty acids, but the exact role of ABCD1 in the process is still undetermined. In addition, the functions of the other ABCD genes have yet to be determined, but they are thought to exert related functions in fatty acid metabolism. ABCE and ABCF Both of these subgroups are composed of genes that have ATP binding domains that are closely related to other ABC transporters, but these genes do not encode for trans-membrane domains. 
ABCE consists of only one member, OABP or ABCE1, which is known to recognize certain oligoadenylates produced in response to certain viral infections. Each member of the ABCF subgroup consists of a pair of ATP binding domains. ABCG Six half transporters with ATP binding sites on the N terminus and trans-membrane domains at the C terminus make up the ABCG subfamily. This orientation is the opposite of all other ABC genes. There are only 5 ABCG genes in the human genome, but there are 15 in the Drosophila genome and 10 in yeast. The ABCG2 gene was discovered in cell lines selected for high-level resistance to mitoxantrone and no expression of ABCB1 or ABCC1. ABCG2 can export anthracycline anticancer drugs, as well as topotecan, mitoxantrone, or doxorubicin as substrates. Chromosomal translocations have been found to cause the ABCG2 amplification or rearrangement found in resistant cell lines. Cross-species subfamilies The following classification system for transmembrane solute transporters has been constructed in the TCDB. Three families of ABC exporters are defined by their evolutionary origins. ABC1 exporters evolved by intragenic triplication of a 2 TMS precursor (TMS = transmembrane segment. A "2 TMS" protein has 2 transmembrane segments) to give 6 TMS proteins. ABC2 exporters evolved by intragenic duplication of a 3 TMS precursor, and ABC3 exporters evolved from a 4 TMS precursor which duplicated either extragenically to give two 4 TMS proteins, both required for transport function, or intragenically to give 8 or 10 TMS proteins. The 10 TMS proteins appear to have two extra TMSs between the two 4 TMS repeat units. Most uptake systems (all except 3.A.1.21) are of the ABC2 type, divided into type I and type II by the way they handle nucleotides. A special subfamily of ABC2 importers called ECF uses a separate subunit for substrate recognition. 
ABC1 (): 3.A.1.106 The Lipid Exporter (LipidE) Family 3.A.1.108 The β-Glucan Exporter (GlucanE) Family 3.A.1.109 The Protein-1 Exporter (Prot1E) Family 3.A.1.110 The Protein-2 Exporter (Prot2E) Family 3.A.1.111 The Peptide-1 Exporter (Pep1E) Family 3.A.1.112 The Peptide-2 Exporter (Pep2E) Family 3.A.1.113 The Peptide-3 Exporter (Pep3E) Family 3.A.1.117 The Drug Exporter-2 (DrugE2) Family 3.A.1.118 The Microcin J25 Exporter (McjD) Family 3.A.1.119 The Drug/Siderophore Exporter-3 (DrugE3) Family 3.A.1.123 The Peptide-4 Exporter (Pep4E) Family 3.A.1.127 The AmfS Peptide Exporter (AmfS-E) Family 3.A.1.129 The CydDC Cysteine Exporter (CydDC-E) Family 3.A.1.135 The Drug Exporter-4 (DrugE4) Family 3.A.1.139 The UDP-Glucose Exporter (U-GlcE) Family (UPF0014 Family) 3.A.1.201 The Multidrug Resistance Exporter (MDR) Family (ABCB) 3.A.1.202 The Cystic Fibrosis Transmembrane Conductance Exporter (CFTR) Family (ABCC) 3.A.1.203 The Peroxysomal Fatty Acyl CoA Transporter (P-FAT) Family (ABCD) 3.A.1.206 The a-Factor Sex Pheromone Exporter (STE) Family (ABCB) 3.A.1.208 The Drug Conjugate Transporter (DCT) Family (ABCC) (Dębska et al., 2011) 3.A.1.209 The MHC Peptide Transporter (TAP) Family (ABCB) 3.A.1.210 The Heavy Metal Transporter (HMT) Family (ABCB) 3.A.1.212 The Mitochondrial Peptide Exporter (MPE) Family (ABCB) 3.A.1.21 The Siderophore-Fe3+ Uptake Transporter (SIUT) Family ABC2 ( [partial]): 3.A.1.101 The Capsular Polysaccharide Exporter (CPSE) Family 3.A.1.102 The Lipooligosaccharide Exporter (LOSE) Family 3.A.1.103 The Lipopolysaccharide Exporter (LPSE) Family 3.A.1.104 The Teichoic Acid Exporter (TAE) Family 3.A.1.105 The Drug Exporter-1 (DrugE1) Family 3.A.1.107 The Putative Heme Exporter (HemeE) Family 3.A.1.115 The Na+ Exporter (NatE) Family 3.A.1.116 The Microcin B17 Exporter (McbE) Family 3.A.1.124 The 3-component Peptide-5 Exporter (Pep5E) Family 3.A.1.126 The β-Exotoxin I Exporter (βETE) Family 3.A.1.128 The SkfA Peptide Exporter (SkfA-E) Family 3.A.1.130 The Multidrug/Hemolysin Exporter (MHE) Family 3.A.1.131 The Bacitracin Resistance (Bcr) Family 3.A.1.132 The Gliding Motility ABC Transporter (Gld) Family 3.A.1.133 The Peptide-6 Exporter (Pep6E) Family 3.A.1.138 The Unknown ABC-2-type (ABC2-1) Family 3.A.1.141 The Ethyl Viologen Exporter (EVE) Family (DUF990 Family; ) 3.A.1.142 The Glycolipid Flippase (G.L.Flippase) Family 3.A.1.143 The Exoprotein Secretion System (EcsAB(C)) 3.A.1.144: Functionally Uncharacterized ABC2-1 (ABC2-1) Family 3.A.1.145: Peptidase Fused Functionally Uncharacterized ABC2-2 (ABC2-2) Family 3.A.1.146: The actinorhodin (ACT) and undecylprodigiosin (RED) exporter (ARE) family 3.A.1.147: Functionally Uncharacterized ABC2-2 (ABC2-2) Family 3.A.1.148: Functionally Uncharacterized ABC2-3 (ABC2-3) Family 3.A.1.149: Functionally Uncharacterized ABC2-4 (ABC2-4) Family 3.A.1.150: Functionally Uncharacterized ABC2-5 (ABC2-5) Family 3.A.1.151: Functionally Uncharacterized ABC2-6 (ABC2-6) Family 3.A.1.152: The lipopolysaccharide export (LptBFG) Family () 3.A.1.204 The Eye Pigment Precursor Transporter (EPP) Family (ABCG) 3.A.1.205 The Pleiotropic Drug Resistance (PDR) Family (ABCG) 3.A.1.211 The Cholesterol/Phospholipid/Retinal (CPR) Flippase Family (ABCA) 9.B.74 The Phage Infection Protein (PIP) Family all uptake systems (3.A.1.1 - 3.A.1.34 except 3.A.1.21) 3.A.1.1 Carbohydrate Uptake Transporter-1 (CUT1) 3.A.1.2 Carbohydrate Uptake Transporter-2 (CUT2) 3.A.1.3 Polar Amino Acid Uptake Transporter (PAAT) 3.A.1.4 Hydrophobic Amino Acid Uptake Transporter (HAAT) 3.A.1.5 
Peptide/Opine/Nickel Uptake Transporter (PepT) 3.A.1.6 Sulfate/Tungstate Uptake Transporter (SulT) 3.A.1.7 Phosphate Uptake Transporter (PhoT) 3.A.1.8 Molybdate Uptake Transporter (MolT) 3.A.1.9 Phosphonate Uptake Transporter (PhnT) 3.A.1.10 Ferric Iron Uptake Transporter (FeT) 3.A.1.11 Polyamine/Opine/Phosphonate Uptake Transporter (POPT) 3.A.1.12 Quaternary Amine Uptake Transporter (QAT) 3.A.1.13 Vitamin B12 Uptake Transporter (B12T) 3.A.1.14 Iron Chelate Uptake Transporter (FeCT) 3.A.1.15 Manganese/Zinc/Iron Chelate Uptake Transporter (MZT) 3.A.1.16 Nitrate/Nitrite/Cyanate Uptake Transporter (NitT) 3.A.1.17 Taurine Uptake Transporter (TauT) 3.A.1.19 Thiamin Uptake Transporter (ThiT) 3.A.1.20 Brachyspira Iron Transporter (BIT) 3.A.1.21 Siderophore-Fe3+ Uptake Transporter (SIUT) 3.A.1.24 The Methionine Uptake Transporter (MUT) Family (Similar to 3.A.1.3 and 3.A.1.12) 3.A.1.27 The γ-Hexachlorocyclohexane (HCH) Family (Similar to 3.A.1.24 and 3.A.1.12) 3.A.1.34 The Tryptophan (TrpXYZ) Family ECF uptake systems 3.A.1.18 The Cobalt Uptake Transporter (CoT) Family 3.A.1.22 The Nickel Uptake Transporter (NiT) Family 3.A.1.23 The Nickel/Cobalt Uptake Transporter (NiCoT) Family 3.A.1.25 The Biotin Uptake Transporter (BioMNY) Family 3.A.1.26 The Putative Thiamine Uptake Transporter (ThiW) Family 3.A.1.28 The Queuosine (Queuosine) Family 3.A.1.29 The Methionine Precursor (Met-P) Family 3.A.1.30 The Thiamin Precursor (Thi-P) Family 3.A.1.31 The Unknown-ABC1 (U-ABC1) Family 3.A.1.32 The Cobalamin Precursor (B12-P) Family 3.A.1.33 The Methylthioadenosine (MTA) Family ABC3 (): 3.A.1.114 The Probable Glycolipid Exporter (DevE) Family 3.A.1.122 The Macrolide Exporter (MacB) Family 3.A.1.125 The Lipoprotein Translocase (LPT) Family 3.A.1.134 The Peptide-7 Exporter (Pep7E) Family 3.A.1.136 The Uncharacterized ABC-3-type (U-ABC3-1) Family 3.A.1.137 The Uncharacterized ABC-3-type (U-ABC3-2) Family 3.A.1.140 The FtsX/FtsE Septation (FtsX/FtsE) Family 3.A.1.207 The Eukaryotic ABC3 (E-ABC3) Family Images Many structures of water-soluble domains of ABC proteins have been produced in recent years. See also ATP-binding domain of ABC transporters Transmembrane domain of ABC transporters Elizabeth P. Carpenter, British structural biologist, first to describe structure of human ABC-transporter ABC10 References Further reading External links Classification of ABC transporters in TCDB ABCdb Archaeal and Bacterial ABC Systems database, ABCdb ATP-binding cassette transporters Protein families
ABC transporter
[ "Biology" ]
15,490
[ "Protein families", "Protein classification" ]
1,552,505
https://en.wikipedia.org/wiki/Vickers%20hardness%20test
The Vickers hardness test was developed in 1921 by Robert L. Smith and George E. Sandland at Vickers Ltd as an alternative to the Brinell method to measure the hardness of materials. The Vickers test is often easier to use than other hardness tests since the required calculations are independent of the size of the indenter, and the indenter can be used for all materials irrespective of hardness. The basic principle, as with all common measures of hardness, is to observe a material's ability to resist plastic deformation from a standard source. The Vickers test can be used for all metals and has one of the widest scales among hardness tests. The unit of hardness given by the test is known as the Vickers Pyramid Number (HV) or Diamond Pyramid Hardness (DPH). The hardness number can be converted into units of pascals, but should not be confused with pressure, which uses the same units. The hardness number is determined by the load over the surface area of the indentation and not the area normal to the force, and is therefore not pressure. Implementation It was decided that the indenter shape should be capable of producing geometrically similar impressions, irrespective of size; the impression should have well-defined points of measurement; and the indenter should have high resistance to self-deformation. A diamond in the form of a square-based pyramid satisfied these conditions. It had been established that the ideal size of a Brinell impression was 3/8 of the ball diameter. As two tangents to the circle at the ends of a chord 3d/8 long intersect at 136°, it was decided to use this as the included angle between plane faces of the indenter tip. This gives an angle from each face normal to the horizontal plane normal of 22° on each side. The angle was varied experimentally and it was found that the hardness value obtained on a homogeneous piece of material remained constant, irrespective of load. Accordingly, loads of various magnitudes are applied to a flat surface, depending on the hardness of the material to be measured. The HV number is then determined by the ratio F/A, where F is the force applied to the diamond in kilograms-force and A is the surface area of the resulting indentation in square millimeters. The surface area is A = d²/(2 sin(136°/2)), which can be approximated by evaluating the sine term to give A ≈ d²/1.8544, where d is the average length of the diagonal left by the indenter in millimeters. Hence, HV = F/A ≈ 1.8544 F/d², where F is in kgf and d is in millimeters. The corresponding unit of HV is then the kilogram-force per square millimeter (kgf/mm2) or HV number. In the above equation, F could be in N and d in mm, giving HV in the SI unit of MPa. To calculate Vickers hardness number (VHN) using SI units one needs to convert the force applied from newtons to kilogram-force by dividing by 9.806 65 (standard gravity). This leads to the following equation: HV ≈ 0.1891 F/d², where F is in N and d is in millimeters. A common error is that the above formula to calculate the HV number does not result in a number with the unit newton per square millimeter (N/mm2), but results directly in the Vickers hardness number (usually given without units), which is in fact one kilogram-force per square millimeter (1 kgf/mm2). Vickers hardness numbers are reported as xxxHVyy, e.g. 440HV30, or if duration of force differs from 10 s to 15 s, e.g. 440HV30/20, where: 440 is the hardness number, HV names the hardness scale (Vickers), 30 indicates the load used in kgf. 
20 indicates the loading time if it differs from 10 s to 15 s. Precautions When doing the hardness tests, the minimum distance between indentations and the distance from the indentation to the edge of the specimen must be taken into account to avoid interaction between the work-hardened regions and effects of the edge. These minimum distances are different for ISO 6507-1 and ASTM E384 standards. Vickers values are generally independent of the test force: they will come out the same for 500 gf and 50 kgf, as long as the force is at least 200 gf. However, lower load indents often display a dependence of hardness on indent depth known as the indentation size effect (ISE). Small indent sizes will also have microstructure-dependent hardness values. For thin samples indentation depth can be an issue due to substrate effects. As a rule of thumb the sample thickness should be kept greater than 2.5 times the indent diameter. Alternatively, the indent depth, h, can be calculated according to h = d/(2√2 tan(68°)) ≈ d/7. Conversion to SI units To convert the Vickers hardness number to SI units the hardness number in kilograms-force per square millimeter (kgf/mm2) has to be multiplied with the standard gravity, 9.80665, to get the hardness in MPa (N/mm2) and furthermore divided by 1000 to get the hardness in GPa. Vickers hardness can also be converted to an SI hardness based on the projected area of the indent rather than the surface area. The projected area, Ap, is defined for a Vickers indenter geometry as Ap = d²/2. This hardness is sometimes referred to as the mean contact area or Meyer hardness, and ideally can be directly compared with other hardness tests also defined using projected area. Care must be used when comparing other hardness tests due to various size scale factors which can impact the measured hardness. Estimating tensile strength If HV is first expressed in N/mm2 (MPa), or otherwise by converting from kgf/mm2, then the tensile strength (in MPa) of the material can be approximated as ≈ HV/c, where c is a constant determined by yield strength, Poisson's ratio, work-hardening exponent and geometrical factors, usually ranging between 2 and 4. In other words, if HV is expressed in N/mm2 (i.e. in MPa) then the tensile strength (in MPa) ≈ HV/3. This empirical law depends variably on the work-hardening behavior of the material. Application The fin attachment pins and sleeves in the Convair 580 airliner were specified by the aircraft manufacturer to be hardened to a Vickers Hardness specification of 390HV5, the '5' meaning five kiloponds. However, on the aircraft flying Partnair Flight 394 the pins were later found to have been replaced with sub-standard parts, leading to rapid wear and finally loss of the aircraft. On examination, accident investigators found that the sub-standard pins had a hardness value of only some 200–230HV5. 
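As a brief illustration of the relations above, the following Python sketch computes the Vickers number from a load and a measured mean diagonal, converts it to MPa, and applies the rough HV/3 tensile-strength estimate. The function names and the example load and diagonal are illustrative choices, not part of any standard; the constants follow the formulas quoted in this article.

import math

def vickers_hv(force_kgf, diagonal_mm):
    # Surface area of the pyramidal impression: A = d^2 / (2*sin(136/2 degrees))
    area_mm2 = diagonal_mm ** 2 / (2 * math.sin(math.radians(68)))
    return force_kgf / area_mm2          # HV in kgf/mm^2, i.e. ~1.8544*F/d^2

def hv_to_mpa(hv):
    return hv * 9.80665                  # kgf/mm^2 -> MPa

def tensile_strength_mpa(hv, c=3.0):
    # Rough empirical estimate: sigma_u ~ HV(in MPa)/c, with c typically 2-4
    return hv_to_mpa(hv) / c

hv = vickers_hv(30, 0.36)                # 30 kgf load leaving a 0.36 mm mean diagonal
depth_mm = 0.36 / 7.0                    # indent depth h ~ d/7
print(round(hv), round(hv_to_mpa(hv)), round(tensile_strength_mpa(hv)), depth_mm)

For these example numbers the result is about 429 HV30, roughly 4.2 GPa, and an estimated tensile strength of about 1400 MPa; real reports would follow the xxxHVyy convention described above.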
See also Indentation hardness Leeb Rebound Hardness Test Hardness comparison Knoop hardness test Meyer hardness test Mohs scale Rockwell scale Vickers toughness test of ceramics Superhard material References Further reading ASTM E92: Standard method for Vickers hardness of metallic materials (withdrawn and replaced by E384-10e2) ASTM E384: Standard Test Method for Knoop and Vickers Hardness of Materials ISO 6507-1: Metallic materials – Vickers hardness test – Part 1: Test method ISO 6507-2: Metallic materials – Vickers hardness test – Part 2: Verification and calibration of testing machines ISO 6507-3: Metallic materials – Vickers hardness test – Part 3: Calibration of reference blocks ISO 6507-4: Metallic materials – Vickers hardness test – Part 4: Tables of hardness values ISO 18265: Metallic materials – Conversion of Hardness Values External links Video on the Vickers hardness test Vickers hardness test Conversion table – Vickers, Brinell, and Rockwell scales Hardness tests
Vickers hardness test
[ "Materials_science" ]
1,612
[ "Hardness tests", "Materials testing" ]
1,553,317
https://en.wikipedia.org/wiki/Optical%20medium
In optics, an optical medium is a material through which light and other electromagnetic waves propagate. It is a form of transmission medium. The permittivity and permeability of the medium define how electromagnetic waves propagate in it. Properties The optical medium has an intrinsic impedance, given by η = E/H, where E and H are the electric field and magnetic field, respectively. In a region with no electrical conductivity, the expression simplifies to η = √(μ/ε). For example, in free space the intrinsic impedance is called the characteristic impedance of vacuum, denoted Z0, and Z0 = √(μ0/ε0) ≈ 376.7 Ω. Waves propagate through a medium with velocity v = νλ, where ν is the frequency and λ is the wavelength of the electromagnetic waves. This equation also may be put in the form v = ω/k, where ω is the angular frequency of the wave and k is the wavenumber of the wave. In electrical engineering, the symbol β, called the phase constant, is often used instead of k. The propagation velocity of electromagnetic waves in free space, an idealized standard reference state (like absolute zero for temperature), is conventionally denoted by c0: c0 = 1/√(ε0μ0), where ε0 is the electric constant and μ0 is the magnetic constant. For a general introduction, see Serway. For a discussion of synthetic media, see Joannopoulos. Types Homogeneous medium vs. heterogeneous medium Transparent medium vs. opaque body Translucent medium See also Čerenkov radiation Electromagnetic spectrum Electromagnetic radiation Optics SI units Free space Metamaterial Photonic crystal Photonic crystal fiber Notes and references Optics Electric and magnetic fields in matter
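As a small numeric check of the free-space relations quoted above, the Python sketch below recovers the characteristic impedance of vacuum and the propagation velocity from the electric and magnetic constants; the variable names and the example wavelength are illustrative, and the constant values are the standard CODATA figures rather than anything specific to this article.

import math

EPS0 = 8.8541878128e-12   # electric constant (F/m)
MU0 = 1.25663706212e-6    # magnetic constant (H/m)

Z0 = math.sqrt(MU0 / EPS0)        # characteristic impedance of vacuum, ~376.73 ohm
c0 = 1.0 / math.sqrt(EPS0 * MU0)  # propagation velocity in free space, ~2.998e8 m/s

wavelength = 500e-9               # an illustrative wavelength (green light)
frequency = c0 / wavelength       # from c0 = frequency * wavelength
k = 2 * math.pi / wavelength      # wavenumber; angular frequency omega = c0 * k

print(Z0, c0, frequency, k)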
Optical medium
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
299
[ "Applied and interdisciplinary physics", "Optics", "Electric and magnetic fields in matter", "Materials science", " molecular", "Condensed matter physics", "Atomic", " and optical physics" ]
1,553,856
https://en.wikipedia.org/wiki/Aircraft%20flight%20mechanics
Aircraft flight mechanics are relevant to fixed wing (gliders, aeroplanes) and rotary wing (helicopters) aircraft. An aeroplane (airplane in US usage) is defined in ICAO Document 9110 as "a power-driven heavier than air aircraft, deriving its lift chiefly from aerodynamic reactions on surfaces which remain fixed under given conditions of flight". Note that this definition excludes both dirigibles (because they derive lift from buoyancy rather than from airflow over surfaces), and ballistic rockets (because their lifting force is typically derived directly and entirely from near-vertical thrust). Technically, both of these could be said to experience "flight mechanics" in the more general sense of physical forces acting on a body moving through air; but they operate very differently, and are normally outside the scope of this term. Take-off A heavier-than-air craft (aircraft) can only fly if a series of aerodynamic forces come to bear. In regard to fixed wing aircraft, the fuselage of the craft holds up the wings before takeoff. At the instant of takeoff, the reverse happens and the wings support the plane in flight. Straight and level flight of aircraft In flight a powered aircraft can be considered as being acted on by four forces: lift, weight, thrust, and drag. Thrust is the force generated by the engine (whether that engine be a jet engine, a propeller, or -- in exotic cases such as the X-15 -- a rocket) and acts in a forward direction for the purpose of overcoming drag. Lift acts perpendicular to the vector representing the aircraft's velocity relative to the atmosphere. Drag acts parallel to the aircraft's velocity vector, but in the opposite direction because drag resists motion through the air. Weight acts through the aircraft's centre of gravity, towards the centre of the Earth. In straight and level flight, lift is approximately equal to the weight, and acts in the opposite direction. In addition, if the aircraft is not accelerating, thrust is equal and opposite to drag. In straight climbing flight, lift is less than weight. At first, this seems incorrect because if an aircraft is climbing it seems lift must exceed weight. When an aircraft is climbing at constant speed it is its thrust that enables it to climb and gain extra potential energy. Lift acts perpendicular to the vector representing the velocity of the aircraft relative to the atmosphere, so lift is unable to alter the aircraft's potential energy or kinetic energy. This can be seen by considering an aerobatic aircraft in straight vertical flight (one that is climbing straight upwards or descending straight downwards). Vertical flight requires no lift. When flying straight upwards the aircraft can reach zero airspeed before falling earthwards; the wing is generating no lift and so does not stall. In straight, climbing flight at constant airspeed, thrust exceeds drag. In straight descending flight, lift is less than weight. In addition, if the aircraft is not accelerating, thrust is less than drag. In turning flight, lift exceeds weight and produces a load factor greater than one, determined by the aircraft's angle of bank. Aircraft control and movement There are three primary ways for an aircraft to change its orientation relative to the passing air. Pitch (movement of the nose up or down, rotation around the transversal axis), roll (rotation around the longitudinal axis, that is, the axis which runs along the length of the aircraft) and yaw (movement of the nose to left or right, rotation about the vertical axis). 
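The load factor in turning flight mentioned above can be made concrete with a short worked example, under the usual steady, coordinated, level-turn assumptions: the vertical component of lift must still balance weight, so L = W/cos φ and the load factor is n = 1/cos φ, while the horizontal component of lift supplies the centripetal force, giving a turn radius r = V²/(g tan φ). The Python sketch below is a minimal illustration with invented variable names and example values, not a flight-performance tool.

import math

def level_turn(speed_ms, bank_deg, g=9.81):
    # Steady, level, coordinated turn:
    #   vertical:   L*cos(phi) = W        -> load factor n = L/W = 1/cos(phi)
    #   horizontal: L*sin(phi) = m*v^2/r  -> turn radius r = v^2 / (g*tan(phi))
    phi = math.radians(bank_deg)
    n = 1.0 / math.cos(phi)
    r = speed_ms ** 2 / (g * math.tan(phi))
    return n, r

print(level_turn(100.0, 60.0))   # 100 m/s at a 60-degree bank -> n = 2, r ~ 589 m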
Turning the aircraft (change of heading) requires the aircraft firstly to roll to achieve an angle of bank (in order to produce a centripetal force); when the desired change of heading has been accomplished the aircraft must again be rolled in the opposite direction to reduce the angle of bank to zero. Lift acts vertically up through the centre of pressure, which depends on the position of the wings. The position of the centre of pressure will change with changes in the angle of attack and the aircraft's wing flap setting. Aircraft control surfaces Yaw is induced by a moveable rudder-fin. The movement of the rudder changes the size and orientation of the force the vertical surface produces. Since the force is created at a distance behind the centre of gravity, this sideways force causes a yawing moment and then a yawing motion. On a large aircraft there may be several independent rudders on the single fin for both safety and to control the inter-linked yaw and roll actions. Using yaw alone is not a very efficient way of executing a level turn in an aircraft and will result in some sideslip. A precise combination of bank and lift must be generated to cause the required centripetal forces without producing a sideslip. Pitch is controlled by the rear part of the tailplane's horizontal stabilizer being hinged to create an elevator. By moving the elevator control backwards the pilot moves the elevator up (a position of negative camber) and the downwards force on the horizontal tail is increased. The angle of attack on the wings is increased, so the nose is pitched up and lift is generally increased. In micro-lights and hang gliders the pitch action is reversed—the pitch control system is much simpler so when the pilot moves the elevator control backwards it produces a nose-down pitch and the angle of attack on the wing is reduced. The system of a fixed tail surface and moveable elevators is standard in subsonic aircraft. Craft capable of supersonic flight often have a stabilator, an all-moving tail surface. Pitch is changed in this case by moving the entire horizontal surface of the tail. This seemingly simple innovation was one of the key technologies that made supersonic flight possible. In early attempts, as pilots exceeded the critical Mach number, a strange phenomenon made their control surfaces useless, and their aircraft uncontrollable. It was determined that as an aircraft approaches the speed of sound, the air approaching the aircraft is compressed and shock waves begin to form at all the leading edges and around the hinge lines of the elevator. These shock waves meant that movements of the elevator caused no pressure change on the stabilizer upstream of the elevator. The problem was solved by changing the stabilizer and hinged elevator to an all-moving stabilizer—the entire horizontal surface of the tail became a one-piece control surface. Also, in supersonic flight the change in camber has less effect on lift and a stabilator produces less drag. Aircraft that need control at extreme angles of attack are sometimes fitted with a canard configuration, in which pitching movement is created using a forward foreplane (roughly level with the cockpit). Such a system produces an immediate increase in pitch authority, and therefore a better response to pitch controls. This system is common in delta-wing aircraft (deltaplane), which use a stabilator-type canard foreplane. 
A disadvantage to a canard configuration compared to an aft tail is that the wing cannot use as much extension of flaps to increase wing lift at slow speeds due to stall performance. A combination tri-surface aircraft uses both a canard and an aft tail (in addition to the main wing) to achieve advantages of both configurations. A further design of tailplane is the V-tail, so named because, instead of the standard inverted T or T-tail, there are two fins angled away from each other in a V. The control surfaces then act both as rudders and elevators, moving in the appropriate direction as needed. Roll is controlled by movable sections on the trailing edge of the wings called ailerons. The ailerons move in opposition to one another—one goes up as the other goes down. The difference in camber of the wings causes a difference in lift and thus a rolling movement. As well as ailerons, there are sometimes also spoilers—small hinged plates on the upper surface of the wing, originally used to produce drag to slow the aircraft down and to reduce lift when descending. On modern aircraft, which have the benefit of automation, they can be used in combination with the ailerons to provide roll control. The earliest powered aircraft built by the Wright brothers did not have ailerons. The whole wing was warped using wires. Wing warping is efficient since there is no discontinuity in the wing geometry, but as speeds increased, unintentional warping became a problem, and so ailerons were developed. See also Aerodynamics Flight dynamics (fixed wing aircraft) Steady flight Aircraft Aircraft flight control system Banked turn Departure resistance Flight dynamics Fixed-wing aircraft Longitudinal static stability Mass properties Skid-to-turn References L. J. Clancy (1975). Aerodynamics. Chapter 14 Elementary Mechanics of Flight. Pitman Publishing Limited, London. Aerodynamics Aircraft manufacturing
Aircraft flight mechanics
[ "Chemistry", "Engineering" ]
1,775
[ "Aircraft manufacturing", "Aerodynamics", "Mechanical engineering by discipline", "Aerospace engineering", "Fluid dynamics" ]
1,554,012
https://en.wikipedia.org/wiki/Weigh%20in%20motion
Weigh-in-motion or weighing-in-motion (WIM) devices are designed to capture and record the axle weights and gross vehicle weights as vehicles drive over a measurement site. Unlike static scales, WIM systems are capable of measuring vehicles traveling at a reduced or normal traffic speed and do not require the vehicle to come to a stop. This makes the weighing process more efficient, and, in the case of commercial vehicles, allows for trucks under the weight limit to bypass static scales or inspection. Introduction Weigh-in-motion is a technology that can be used for various private and public purposes (i.e. applications) related to the weights and axle loads of road and rail vehicles. WIM systems are installed on the road or rail track or on a vehicle and measure, store and provide data from the traffic flow and/or the specific vehicle. For WIM systems certain specific conditions apply. These conditions have an impact on the quality and reliability of the data measured by the WIM system and of the durability of the sensors and WIM system itself. WIM systems measure the dynamic axle loads of the vehicles and try to calculate the best possible estimate of the related static values. The WIM systems have to perform unattended, under harsh traffic and environmental conditions, often without any control over the way the vehicle is moving, or the driver is behaving. As a result of these specific measurement conditions, a successful implementation of a WIM system requires specific knowledge and experience. The weight information consists of the gross vehicle weight and axle (group) loads combined with other parameters like: date and time, location, speed and vehicle class. For on-board WIM systems this pertains to the specific vehicle only. For in-road WIM systems this applies to the entire vehicle traffic flow. This weight information provides the user with detailed knowledge of the loading of heavy goods vehicles. This information is better than with older technologies, so, for example, it is easier to match heavy goods vehicles and the road/rail infrastructure. (Moffatt, 2017). Road applications Especially for trucks, gross vehicle and axle weight monitoring is useful in an array of applications including: Pavement design, monitoring, and research Bridge design, monitoring, and research To inform weight overload enforcement policies and to directly facilitate enforcement Planning and freight movement studies Toll by weight Data to facilitate legislation and regulation The most common road application of WIM data is probably pavement design and assessment. In the United States, a histogram of WIM data is used for this purpose. In the absence of WIM data, default histograms are available. Pavements are damaged through a mechanistic-empirical fatigue process that is commonly simplified as the fourth power law. In its original form, the fourth power law states that the rate of pavement damage is proportional to axle weight raised to the fourth power. WIM data provides information on the numbers of axles in each significant weight category which allows these kinds of calculations to be carried out. Weigh in motion scales are often used to facilitate weight overload enforcement, such as the Federal Motor Carrier Safety Administration's Commercial Vehicle Information Systems and Networks program. Weigh-in-motion systems can be used as part of traditional roadside inspection stations, or as part of virtual inspection stations. 
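To make the fourth power law and the basic WIM quantities above concrete, the Python sketch below derives vehicle speed and axle spacings from the times at which each axle crosses two strip sensors a known distance apart, sums the axle loads to a gross vehicle weight, and expresses the pavement-damage contribution of each axle relative to a reference axle load using the fourth power law. The sensor spacing, timestamps, loads and the 8160 kg reference axle are illustrative assumptions for the example, and the function is a sketch rather than any particular system's algorithm.

def wim_record(axle_hit_times, axle_loads_kg, sensor_spacing_m=4.0,
               reference_axle_kg=8160.0):
    """Derive speed, axle spacings, gross weight and fourth-power damage factors.

    axle_hit_times: list of (t_sensor1, t_sensor2) pairs in seconds, one per axle.
    axle_loads_kg:  estimated static load of each axle.
    """
    t1, t2 = axle_hit_times[0]
    speed = sensor_spacing_m / (t2 - t1)                 # vehicle speed, m/s

    first_sensor = [t for t, _ in axle_hit_times]
    spacings = [speed * (b - a) for a, b in zip(first_sensor, first_sensor[1:])]

    gvw = sum(axle_loads_kg)                             # gross vehicle weight
    # Fourth power law: damage contribution ~ (axle load / reference load)^4
    damage = sum((w / reference_axle_kg) ** 4 for w in axle_loads_kg)
    return speed, spacings, gvw, damage

times = [(0.00, 0.16), (0.14, 0.30), (0.19, 0.35)]       # illustrative timestamps
loads = [6000.0, 9000.0, 9000.0]                         # illustrative axle loads
print(wim_record(times, loads))

For these example values the vehicle is travelling at 25 m/s with axle spacings of 3.5 m and 1.25 m, a gross weight of 24 t, and a combined damage factor of roughly 3.3 reference axles, which illustrates how strongly the heavier axles dominate the pavement-damage estimate.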
In most countries, WIM systems are not considered sufficiently accurate for direct enforcement of overloaded vehicles but this may change in the future. The most common bridge application of WIM is the assessment of traffic loading. The intensity of traffic on a bridge varies greatly as some roads are much busier than others. For bridges that have deteriorated, this is important as a less heavily trafficked bridge is safer and more heavily trafficked bridges should be prioritized for maintenance and repair. A great deal of research has been carried out on the subject of traffic loading on bridges, both short-span, including an allowance for dynamics, and long-span. Recent years have seen the rise of several "specialty" Weigh-in-Motion systems. One popular example is the front fork garbage truck scale. In this application, a container is weighed—while it is full—as the driver lifts, and again—while it is empty—as the container is returned to the ground. The difference between the full and empty weights is equal to the weight of the contents. Use Countries using Weigh in motion on highways include: Australia Belgium Brazil Czech Republic France Germany China Italy Japan Poland The Netherlands Ukraine United Arab Emirates United Kingdom United States (Usage varies from state to state) Accuracy The accuracy of weigh-in-motion data is generally much less than for static weigh scales where the environment is better controlled. The European COST 323 group developed an accuracy classification framework in the 1990s. They also coordinated three independently controlled road tests of commercially available and prototype WIM systems, one in Switzerland, one in France (Continental Motorway Test) and one in Northern Sweden (Cold Environment Test). Better accuracy can be achieved with multiple-sensor WIM systems and careful compensation for the effects of temperature. The Federal Highway Administration in the United States has published quality assurance criteria for WIM systems whose data is included in the Long Term Pavement Performance project. System basics of most systems Sensors WIM systems can employ various types of sensors for measurement. The earliest WIM systems, still used in a minority of installations, use an instrumented existing bridge as the weighing platform. Bending plates span a void cut into the pavement and use the flexure as the wheel passes over as a measure of weight. Load cells use strain sensors in the corner supports of a large platform embedded in the road. The majority of systems today are strip sensors - pressure sensitive materials installed in a 2 to 3 cm groove cut into the road pavement. In strip sensors, various sensing materials are used, including piezo-polymer, piezo-ceramic, capacitive and piezo-quartz. Many of these sensing systems are temperature-dependent and algorithms are used to correct for this. Strain transducers are used in bridge WIM systems. Strain gauges are used to measure the flexure in bending plates and the deformation in load cells. The strip sensor systems use piezo-electric materials in the groove. Capacitive systems measure the capacitance between two closely placed charged plates. More recently, weighing sensors using optical fiber grating sensors have been proposed. Charge amplifiers High impedance charge signals are amplified with MOSFET based charge amplifiers and converted to a voltage output, which is connected to analysis system. Inductive loops Inductive loops define the vehicle entry and exit from the WIM station. 
These signals are used as triggering inputs to start and stop the measurement and to initiate totaling of the gross vehicle weight of each vehicle. They also measure total vehicle length and help with vehicle classification. For toll gate or low speed applications, inductive loops may be replaced by other types of vehicle sensors such as light curtains, axle sensors or piezocables. Measurement system The high speed measurement system is programmed to perform calculations of the following parameters: axle distances, individual axle weights, gross vehicle weight, vehicle speed, distance between vehicles, and the GPS synchronized time stamp for each vehicle measurement. The measurement system should be environmentally protected, should have a wide operating temperature range and withstand condensation. Registration plate reading Cameras for automatic number-plate recognition may be part of the system to check the measured weight against the maximum allowable weight for the vehicle and, in case of exceeded limits, inform law enforcement in order to pursue the vehicle or to directly fine the owner. Communications A variety of communication methods can be installed on the measurement system. A modem or cellular modem can be provided. In older installations or where no communication infrastructure exists, WIM systems can be self-operating, saving the data locally so that it can later be retrieved physically. Data archiving A WIM system with any available communication means can be connected to a central monitoring server. Automatic data archiving software is required to retrieve the data from many remote WIM stations so that it is available for any further processing. A central database can be built to link many WIMs to a server for a variety of monitoring and enforcement purposes. Rail applications Weighing in motion is also a common application in rail transport. Known applications are Asset protection (imbalances, overloading) Asset management Maintenance planning Legislation and regulation Administration and planning System basics There are two main parts to the measurement system: the track-side component, which contains hardware for communication, power, computation, and data acquisition, and the rail-mounted component, which consists of sensors and cabling. Known sensor principles include: strain gauges: measuring the strain usually in the web of the rail fiber optical sensors: measuring a change of light intensity caused by the bending of the rail load cells: measuring the strain change in the load cell rather than directly on the rail itself. laser based systems: measuring the displacement of the rail Yards and main line Trains are weighed, either on the main line or at yards. Weighing in Motion systems installed on the main lines measure the complete weight (distribution) of the trains as they pass by at the designated line speed. Weighing in motion on the mainline is therefore also referred to as "coupled-in-motion weighing": all of the railcars are coupled. Weighing in motion at yards often measures individual wagons. It requires that the railcars are uncoupled on both ends in order to be weighed. Weighing in motion at yards is therefore also referred to as "uncoupled-in-motion weighing". Systems installed at yards usually work at lower speeds and are capable of higher accuracies. Airport applications Some airports use airplane weighing, whereby the plane taxis across the scale bed, and its weight is measured. 
The weight may then be used to correlate with the pilot's log entry, to ensure there is just enough fuel, with a little margin for safety. This has been used for some time to conserve jet fuel. Also, the main difference in these platforms, which are basically a "transmission of weight" application, there are checkweighers, also known as dynamic scales or in-motion scales. International cooperation and standards The International Society for Weigh-In-Motion (ISWIM, www.is-wim) is an international non-profit organization, legally established in Switzerland in 2007. ISWIM is an international network of, and for, people and organisations active in the field of weigh-in-motion. The society brings together users, researchers, and vendors of WIM systems. This includes systems installed in or under the road pavements, bridges, rail tracks and on board vehicles. ISWIM organises periodically the international conferences on WIM (ICWIM), regional seminars and workshops as part of other international conferences and exhibitions. In the 1990s, the first WIM standard ASTM-E1318-09 was published in North America, and the COST 323 action provided draft European specifications of WIM as well as reports on Pan-European tests of WIM system. The European research project WAVE and other initiatives delivered improved technologies and new methodologies of WIM. These first tests were done with the combination of WIM systems with video as a tool to assist overloading enforcement controls. In the early 2000s, the accuracy and reliability of WIM systems were significantly improved, and they were used more frequently for overload screening and pre-selection for road side weight enforcement controls (virtual weigh stations). The OIML R134 was published as an international standard of low speed WIM systems for legal applications like tolling by weight and direct weight enforcement. Most recently, the NMi-WIM standard offers a basis for the introduction of high speed WIM systems for direct automatic enforcement and free flow tolling by weight. References External links International Society for Weigh-In-Motion Road infrastructure Rail infrastructure Weighing instruments Trucking industry in the United States
Weigh in motion
[ "Physics", "Technology", "Engineering" ]
2,405
[ "Weighing instruments", "Mass", "Matter", "Measuring instruments" ]
1,554,948
https://en.wikipedia.org/wiki/Huemul%20Project
The Huemul Project () was an early 1950s Argentine effort to develop a fusion power device known as the Thermotron. The concept was invented by Austrian scientist Ronald Richter, who claimed to have a design that would produce effectively unlimited power. Richter was able to pitch the idea to President of Argentina Juan Perón in 1948, and soon received massive funding to build an experimental site on Huemul Island, on a lake just outside the town of San Carlos de Bariloche in Patagonia, near the Andes mountains. Construction began late in 1949, and by 1951 the site was completed and carrying out tests. On 16 February 1951, Richter measured high temperatures that suggested fusion had been achieved. On 24 March, the day before an important international meeting of the leaders of the Americas, Perón publicly announced that Richter had been successful, adding that in the future energy would be sold in packages the size of a milk bottle. A worldwide interest followed, along with significant skepticism on the part of other physicists. Little information was forthcoming: no papers were published on the topic, and over the next year a number of reporters visited the site but were denied access to the buildings. After increasing pressure, Perón arranged for a team to investigate Richter's claims and return individual reports, all of which were negative. A review of these reports was equally negative, and the project was ended in 1952. By this time, the optimism of the earlier news had inspired groups around the world to begin their own research in nuclear fusion. Perón was overthrown in 1955, and in the aftermath, Richter was arrested for fraud. He appears to have spent periods of time abroad, including some time in Libya. Eventually he returned to Argentina, where he died in 1991. Prior to Huemul According to Rainer Karlsch's Hitler's Bomb, during World War II German scientists under Walter Gerlach and Kurt Diebner carried out experiments to explore the possibility of inducing thermonuclear reactions in deuterium using high explosive-driven convergent shock waves, following Karl Gottfried Guderley's convergent shock wave solution. At the same time, Richter proposed in a memorandum to German government officials the induction of nuclear fusion through shock waves by high-velocity particles shot into a highly compressed deuterium plasma contained in an ordinary uranium vessel. The proposal was not carried through. Early Argentine nuclear efforts Shortly after his election in 1946, Perón began a purge of Argentina's universities that eventually resulted in over 1,000 professors being fired or quitting, causing a serious setback in Argentine science and lasting enmity between Perón and Argentine intelligentsia. In response, the Physical Association of Argentina (AFA) began to organize as a community to retain links between Argentine scientists, who now spread to industry. In 1946, the director of the AFA, physicist Enrique Gaviola, wrote a proposal to set up the Comisión Nacional de Investigaciones Científicas (National Scientific Research Commission), arguing that post-World War II friction (leading to the Cold War) would present the opportunity for various Northern Hemisphere scientists to move south to escape limits on their research. In the same paper, Gaviola argued for the formation of a body to explore the peaceful use of atomic power. 
In spite of the poor relations between the scientific community and the Argentine government, the proposal was seriously studied and Congress debated the matter on several occasions before Perón decided to place it under military control. Gaviola objected, starting a long and acrimonious debate over the nature and aims of the program. By 1947, plans to form an atomic study group were progressing slowly when the entire issue was shut down by an article in the U.S. political newsmagazine, New Republic. The 24 February 1947 issue contained an article by William Mizelle on "Peron's Atomic Plans", which claimed: With world famous German atom-splitter Werner Heisenberg invited to come to Argentina by Peron's Government and with a major uranium source discovered in Argentina, that Nation is launching a military nuclear research program to crack Pandora's box of atomic energy wide open. Argentina's determined atomic adventure and its frankly military purposes cannot be dismissed as the impractical dream of a small nation. International pressure on Argentina following the publication was intense, and the plans were soon dropped. This event appears to have made Perón more determined than ever to both develop atomic energy as well as prove its peaceful intentions. Germans in Argentina In 1947, a dossier was provided to Argentina by the Spanish embassy in Buenos Aires listing a number of German aeronautical engineers who were looking to sneak out of Germany. Among them was Kurt Tank, designer of the famed Focke-Wulf Fw 190 and many other successful designs. The dossier was passed to the recently formed Argentine Air Force's Commander in Chief, who passed it to Brigadier César Raúl Ojeda, who was in charge of aerodynamics research. Ojeda and Tank communicated and formulated plans to begin building a jet fighter in Argentina, which would eventually emerge as the FMA IAe 33 Pulqui II. Just before leaving for Argentina, Tank briefly met Richter in London, where Richter told Tank of his ideas for nuclear-powered aircraft. Richter was at that time doing some work in the German chemical industry. Tank had also contacted a number of other engineers and even famed fighter pilot and Luftwaffe general Adolf Galland. Various members of the group made their way to Argentina under false passports during late 1947 and 1948. The Germans were warmly received by Perón, who effectively gave them a blank cheque in an effort to rapidly develop the Argentine economy. Tank set up an aircraft development plant in Córdoba, and continued to contact other German engineers and scientists who might be interested in joining them. A total of 184 German scientists and engineers are known to have moved to Argentina during this period. Richter was invited to join the group and arrived in Argentina on 16 August 1948, travelling under the name "Dr. Pedro Matthies". Tank personally introduced him to Perón on 24 August, and Richter pitched Perón on the idea of a nuclear fusion device which would provide unlimited power, make Argentina a world scientific leader, and be of purely civilian intent. Perón was intrigued, and clearly impressed, later telling reporters that "in half an hour he explained to me all the secrets of nuclear physics and he did it so well that now I have a pretty good idea of the subject". Gaviola, still maintaining pressure to form a nuclear research group, saw all interest evaporate. From that point on he offered his services only as a "member of Richter's firing squad." 
Other German scientists, including Guido Beck, Walter Seelmann-Eggbert, and the now-elderly Richard Gans quickly realized something was amiss in the entire affair, and began to align themselves with the AFA, steering clear of Richter and the government in general. At an AFA meeting in September 1951, Beck publicly resigned from the University of Buenos Aires over the issue. The project Richter was soon given a laboratory at Tank's Córdoba site, but in early 1949 a fire destroyed some of the equipment. Richter claimed it was sabotage, and demanded a more protected location free from spies. When support was not immediately forthcoming, Richter went on a tour, visiting Canada and perhaps the U.S. and Europe as well. A year later, Lise Meitner recalled meeting "a strange Austrian with an Argentine visa" in Vienna, where he demonstrated a device he claimed was a thermonuclear system but which Meitner later dismissed as a chemical effect. Richter's tour was a thinly veiled threat to leave Argentina, which prompted action. Perón handed the problem of selecting a suitable experimental site to Colonel González, a friend from the 1943 Argentine coup d'état. González selected a location deep within the country's interior on Huemul Island, in Nahuel Huapi Lake, where it would be easy to protect from prying eyes. Construction work began in July, causing a nationwide shortage of brick and cement. Richter moved to the site in March 1950 while construction on Laboratory 1, the reactor, was still ongoing. In May 1950, Perón formed the National Atomic Energy Commission (CNEA), bypassing Gaviola's earlier efforts and placing himself in the position of president, with Richter and the minister of technical affairs as the other chairs. A year later, he formed the National Atomic Energy Directorate (DNEA), under González, to provide project assistance and logistics support. When the reactor was finally completed in May, Richter noticed there was no way to access the interior of the wide concrete cylinder, requiring a series of holes to be drilled through the thick walls. But before this could be completed, Richter declared that a crack on the outside rendered the entire reactor useless, and had it torn down. While this was taking place, Richter began experiments in the much smaller reactor in Laboratory 2. The experiments injected lithium and hydrogen into the cylinder and discharged a spark through it. The cylinder was supposed to reflect the energy created by these reactions back into the chamber to keep the reaction going. Diagnostic measurements were provided by taking photographs of the spectrum and using Doppler widening to measure the temperature of the resulting reactions. Announcement On 16 February 1951, Richter claimed he had successfully demonstrated fusion. He re-ran the experiment for members of the CNEA, later claiming that they had witnessed the world's first thermonuclear reaction. On 23 February, a technician working for the project expressed his concerns about the claims, suggesting that the measurement was likely due to the accidental tilting of the spectrograph's photographic plate while the experimental run was being set up. Richter refused to re-run the experiment. Instead, a week later he ordered the reactor to be disassembled so a new one could be built that included a magnetic confinement system. Meanwhile, plans for a new Laboratory 1 were started with this new design, this time to be buried underground. 
A deep hole in hard rock was constructed, but Richter changed the design and had the hole filled in with concrete. On 2 March, Edward Miller, the U.S. Assistant Secretary of Station for Inter-American Affairs, visited Argentina. This was ostensibly to visit the Pan American Games, but in reality was in advance of calling a meeting of American leaders later that month to discuss China's entry into the Korean War. Perón gave Miller an introduction to Richter's work, and Miller filed a memo on it on 6 March. During this period, Perón seized the Argentine newspaper La Prensa, whose editor fled to the U.S. This led to harsh criticism in the U.S. Miller suggested a policy of "masterful inaction", not actively denying support for the project, but simply never providing any. The leadership meeting was to take place between 26 March and 7 April, by which time the Chinese "emergency" had passed and the war was entering a new phase. Perón then took the opportunity to announce Richter's results to the world. On 24 March, Perón held a press conference at Casa Rosada and stated that: On February 16, 1951, in the atomic energy pilot plant on Huemul Island... thermonuclear experiments were carried out under conditions of control on a technical scale. Perón justified the project by noting that Argentina's enormous energy shortage would be addressed by building nuclear plants across the country, and that the energy would be bought and sold in containers the size of a milk bottle. He went on to note that the country was simply unable to afford the cost of developing a uranium-based energy program, or that of a system using tritium, normally generated in special fission plants. Richter's fuel meant the reaction could only take place in a reactor, not a bomb, and he then recommitted the country to exploring only peaceful uses of atomic energy. Richter added that he understood the secret of the hydrogen bomb, but that Perón had forbidden any work on it. The next day Richter held another press conference on the topic, a meeting that became known as the "10,000 word interview". He explained that a hydrogen bomb required a fission trigger, and that the country was unable and unwilling to build such a device. Very little explanation of the Thermotron was mentioned, beyond the announcement that he used the Doppler effect to measure speeds of 3,300 km/s and that the fuel was either lithium hydride or deuterium which was introduced into pre-heated hydrogen. He was careful to explain that these were small-scale experimental results, and refused to state whether it would work well at the industrial scale. On 7 April, Perón awarded Richter the gold Peronista Party Medal in a highly publicized event. With the U.S. refusing any support for the program, Richter turned to other countries for equipment. In April, Prince Bernhard of The Netherlands visited Perón, and offered technical help to the project from Philips. A visit by Cornelis Bakker, later the director of CERN, was arranged and a synchrotron and Cockcroft–Walton generator were suggested as possible products of interest. Perón wrote to Richter to arrange the visit, during which Richter refused to show Bakker any of the reactors. In spite of this, Perón offered to fund the purchase of a Cockcroft–Walton generator and a synchrotron from the company. Public reaction Shortly after Richter's conference, the matter was discussed in the Bulletin of the Atomic Scientists, where it was noted that Richter's announcement had revealed no details of the system of operation. 
They also noted that Richter claimed three key advances during experimentation, but failed to mention any of them during the conference. Finally, although the method for measuring temperature was announced, the temperature itself was not. The United States Atomic Energy Commission's (AEC) comment on the announcement was simply that "the Argentine Government announced more than a year ago that it was planning to engage in nuclear research." American physicists were universally dismissive of the announcement. Among the more famous responses was that of George Gamow, who said "It seemed to be 95% pure propaganda, 4¾% thermonuclear reactions on a very small scale, and the remaining ¼% probably something better." Edward Lawrence was not so dismissive, noting that, "There is a tendency to laugh it off as being a lot of hot air or something. Well it may be, but we don't know all, and we should make every effort to find out." Edward Teller put it succinctly, "Reading one line one has to think he's a genius. Reading the next line, one realizes he's crazy." British scientists, at that time working secretly on the z-pinch fusion concept, did not rule out the possibility of small-scale reactions. George Thomson, at that time leading the United Kingdom Atomic Energy Authority (AEA), suggested it was simply exaggerated. This opinion was mirrored by Mark Oliphant in Australia, and Werner Heisenberg and Otto Hahn in Germany. Perhaps the most biting criticism came from Manfred von Ardenne, a German physicist now working in the Soviet Union. He advised that people should ignore Richter's claims, noting that he had worked with Richter during the war and said he confused fantasy with reality. In May, the United Nations World magazine carried a short article by Hans Thirring, the director of the Institute for Theoretical Physics in Vienna and a well known author on nuclear matters. He stated that "the chances are 99 to 1 that the explosion in Argentina occurred only in the imagination of a crank or a fraud." When Thirring heard the announcement, he had gone searching for anyone that knew Richter from before he arrived in Argentina. He found that Richter had studied under Heinrich Rausch von Traubenberg in the 1930s, who described him as a peculiar eccentric, but von Traubenberg had died in 1944 so there was no way to follow up on the story. Richter's dissertation was never published, and the university in Prague burned during the war. Richter was invited to prepare a rebuttal, which appeared in the July issue. He simply dismissed Thirring as "a typical textbook professor with a strong scientific inferiority complex, probably supported by political hatred." Private reaction Although essentially dismissed by the scientific community, the Richter announcement nevertheless had a major effect on the history of controlled fusion experiments. The most direct outcome of the announcement was its effect on Lyman Spitzer, an astrophysicist at Princeton University. Just prior to leaving for a ski trip to Aspen, Spitzer's father called and mentioned the announcement in The New York Times. Spitzer read the articles and dismissed them, noting the system could not deliver enough energy to heat the gases to fusion temperatures. This led him to begin considering ways to confine a hot plasma for longer periods of time, giving the system enough time to be heated to 10 to 100 million degrees Celsius. 
Considering the problem of confining a plasma in a toroid pointed out by Enrico Fermi, he hit upon the solution now known as the stellarator. Spitzer was able to use the notoriety surrounding Richter's announcement to gain the attention of the U.S. Atomic Energy Commission with the suggestion that the basic idea of controlled fusion was feasible. He eventually managed to arrange a meeting with the director of the AEC to pitch the stellarator concept. Researchers in the UK had been experimenting with fusion since 1947 using a system known today as z-pinch. Small experimental devices had been built at the Atomic Energy Research Establishment (AERE, "Harwell") and Imperial College London, but requests for funding of a larger system were repeatedly refused. Jim Tuck had seen the work while in the UK, and introduced z-pinch to his coworkers at Los Alamos in 1950. When Tuck heard of Spitzer's efforts to gain funding, he immediately applied as well, presenting his concept as the Perhapsatron. He felt that Spitzer's claims to have a fast track to fusion were "incredibly ambitious". Both Spitzer and Tuck met with AEC officials in May 1951; Spitzer was granted $50,000 to build an experimental device, while Tuck was turned away empty-handed. Not to be outdone, Tuck soon arranged to receive $50,000 from the director of Los Alamos instead. When news of the U.S. efforts reached the UK, the researchers there started pushing for funding of a much larger machine. This time they found a much more favorable reaction from the AERE, and both teams soon began construction of larger devices. This work, through fits and starts, led to the ZETA system, the first truly large-scale fusion reactor. Compared to the small tabletop devices built in the U.S., ZETA filled a hangar and operated at energy levels far beyond any other machine. When news of ZETA was made public, the U.S. and Soviet Union were soon demanding funding to build devices of similar scale in order to catch up with the UK. The announcement had a direct effect on research in the USSR as well. Previously, several researchers, notably Igor Kurchatov and I. N. Golovin had put together a development plan similar to the ones being developed in the UK. They too were facing disinterest on the part of the funding groups, which was immediately swept away when Huemul hit the newspapers. Cancellation Argentine physicists were also critical of the announcement, but found little interest on the part of Perón, who was still at odds with the academic mainstream. González was growing increasingly frustrated with Richter, and in February 1952 told Perón that either Richter left the project, or he did. Perón accepted González's resignation and replaced him with his aide, Navy Captain Pedro Iraolagoitía. Iraolagoitía soon began to protest as well, finally convincing Perón to have the project investigated. Instead of calling upon the local physics community, Perón put together a team consisting of Iraolagoitía, a priest, two engineers including Mario Báncora, and young physicist José Antonio Balseiro, who was at that time studying in England and was asked to return with all haste. The team visited the site for a series of demonstrations between 5 and 8 September 1952. The committee analyzed Richter's work and published separate reports on the topic on 15 September. Balseiro, in particular, was convinced nothing nuclear was taking place. 
His report critiqued Richter's claims about how the system was supposed to work, especially the claims that the system was reaching the temperatures needed to demonstrate fusion; he stated that fusion reactions would require something on the order of 40 million kelvin, while the center of the electric arc would be perhaps 4,000 to 100,000 kelvin at most. He then pointed out that Richter's radiation detectors showed large activity whenever the arc was discharged, even if there was no fuel present. Meanwhile, the team's own detectors showed low activity throughout. They reported their findings to Perón on 15 February. Richter was allowed to officially respond to the report. The government appointed physicists Richard Gans and Antonio Rodríguez to review the first report as well as Richter's response to it. This second group endorsed the findings of the first review panel and found Richter's response inadequate. On 22 November, while Richter was in Buenos Aires, a military team occupied the site. They found that many of the instruments were not even connected, and the project was pronounced a fraud. Argentines jokingly referred to the affair as the Huele a mula, or "it smells like a con". After the project In the period immediately after the military takeover, Balseiro wrote a proposal to create a nuclear physics institute on the mainland in nearby Bariloche using the equipment on the island. Originally known as the Instituto de Física de Bariloche, it was renamed the Instituto Balseiro in his honour in 1962. Between 1952 and 1955, Richter was effectively under house arrest in Buenos Aires, with an offer from Perón to "facilitate any travel he might have to make". After Perón was deposed in September 1955, the new government arrested Richter on the night of 4 October 1955. He was accused of fraud, and spent a short time in jail. At the time, it was estimated that 62.5 million Pesos had been spent on the project, about $15 million USD ($ million in ). A more recent estimate places the value closer to $300 million in 2003 dollars ($ million in ). Richter remained in Argentina for a time, but began to travel, eventually landing in Libya. He returned to Argentina and was extensively interviewed by Mario Mariscotti for his book on Huemul, which remains the most detailed account of the project. Mariscotti blames the affair primarily on Richter, who Mariscotti states was capable of great self-delusion, adding an autocratic and paranoid management style, and lack of oversight to the ills. Perón remains a controversial figure to this day, and opinions of Richter tend to be colored by how closely the author associates him with Perón. Argentine accounts often refer to Richter as an outright con man, while accounts written outside Argentina generally describe him as a deluded amateur. Huemul today The island remained closed and under military control until the 1970s, when the Army began using it for artillery target practice. In 1995 a tourist company took control of the island, and began to offer tours by boat from docks in Bariloche. The ruins of the historic facilities (at ), can be visited by tourists by boat from the port of Bariloche. Notes References Citations Bibliography Further reading Mariscotti, Mario, 1985, El Secreto Atómico de Huemul: Crónica del Origen de la Energía Atómica en la Argentina, Sudamericana/Planeta, Buenos Aires, Argentina López Dávalos A., Badino N., 2000 J. A. Balseiro: Crónica de una ilusión, Fondo de Cultura Económica de Argentina, . 
External links El litio: materia prima para la tecnología de la fusión termonuclear (1997) Spanish Guillermo Giménez de Castro: La quimera atómica de Richter (2004) Spanish Fusion power Hoaxes in science Nuclear technology in Argentina Science and technology in Argentina Scientific misconduct incidents
Huemul Project
[ "Physics", "Chemistry" ]
5,017
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
1,554,995
https://en.wikipedia.org/wiki/Polyol
In organic chemistry, a polyol is an organic compound containing multiple hydroxyl groups (). The term "polyol" can have slightly different meanings depending on whether it is used in food science or polymer chemistry. Polyols containing two, three and four hydroxyl groups are diols, triols, and tetrols, respectively. Classification Polyols may be classified according to their chemistry. Some of these chemistries are polyether, polyester, polycarbonate and also acrylic polyols. Polyether polyols may be further subdivided and classified as polyethylene oxide or polyethylene glycol (PEG), polypropylene glycol (PPG) and Polytetrahydrofuran or PTMEG. These have 2, 3 and 4 carbons respectively per oxygen atom in the repeat unit. Polycaprolactone polyols are also commercially available. There is also an increasing trend to use biobased (and hence renewable) polyols. Uses Polyether polyols have numerous uses. As an example, polyurethane foam is a big user of polyether polyols. Polyester polyols can be used to produce rigid foam. They are available in both aromatic and aliphatic versions. They are also available in mixed aliphatic-aromatic versions often made from recycled raw materials, typically polyethylene terephthalate (PET). Acrylic polyols are generally used in higher performance applications where stability to ultraviolet light is required and also lower VOC coatings. Other uses include direct to metal coatings. As they are used where good UV resistance is required, such as automotive coatings, the isocyanate component also tends to be UV resistant and hence isocyanate oligomers or prepolymers based on Isophorone diisocyanate are generally used. Caprolactone-based polyols produce polyurethanes with enhanced hydrolysis resistance. Polycarbonate polyols are more expensive than other polyols and are thus used in more demanding applications. They have been used to make an isophorone diisocyanate based prepolymer which is then used in glass coatings. They may be used in reactive hotmelt adhesives. All polyols may be used to produce polyurethane prepolymers. These then find use in coatings, adhesives, sealants and elastomers. Low molecular weight polyols Low molecular weight polyols are widely used in polymer chemistry where they function as crosslinking agents and chain extenders. Alkyd resins for example, use polyols in their synthesis and are used in paints and in molds for casting. They are the dominant resin or "binder" in most commercial "oil-based" coatings. Approximately 200,000 tons of alkyd resins are produced each year. They are based on linking reactive monomers through ester formation. Polyols used in the production of commercial alkyd resins are glycerol, trimethylolpropane, and pentaerythritol. In polyurethane prepolymer production, a low molecular weight polyol-diol such as 1,4-butanediol may be used as a chain extender to further increase molecular weight though it does increase viscosity because more hydrogen bonding is introduced. Sugar alcohols Sugar alcohols, a class of low molecular weight polyols, are commonly obtained by hydrogenation of sugars. They have the formula (CHOH)nH2, where n = 4–6. Sugar alcohols are added to foods because of their lower caloric content than sugars; however, they are also, in general, less sweet, and are often combined with high-intensity sweeteners. They are also added to chewing gum because they are not broken down by bacteria in the mouth or metabolized to acids, and thus do not contribute to tooth decay. 
Maltitol, sorbitol, xylitol, erythritol, and isomalt are common sugar alcohols. Polymeric polyols The term polyol is used for various chemistries of the molecular backbone. Polyols may be reacted with diisocyanates or polyisocyanates to produce polyurethanes. MDI finds considerable use in PU foam production. Polyurethanes are used to make flexible foam for mattresses and seating, rigid foam insulation for refrigerators and freezers, elastomeric shoe soles, fibers (e.g. Spandex), coatings, sealants and adhesives. The term polyol is also attributed to other molecules containing hydroxyl groups. For instance, polyvinyl alcohol is (CH2CHOH)n with n hydroxyl groups where n can be in the thousands. Cellulose is a polymer with many hydroxyl groups, but it is not referred to as a polyol. Polyols from recycled or renewable sources There are polyols based on renewable sources such as plant-based materials including castor oil and cottonseed oil. Vegetable oils and biomass are also potential renewable polyol raw materials. Seed oil can even be used to produce polyester polyols. Properties Since the generic term polyol is only derived from chemical nomenclature and just indicates the presence of several hydroxyl groups, no common properties can be assigned to all polyols. However, polyols are usually viscous at room temperature due to hydrogen bonding. See also Cyclitol Oligomer Polyurethane References External links Sugar substitutes Organic polymers Commodity chemicals Polymer chemistry Synthetic resins Polyurethanes
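As a worked illustration of the polyol–isocyanate reaction mentioned above, the sketch below applies the standard textbook stoichiometry based on hydroxyl number and %NCO content. The specific numbers, the 1.05 index and the 31.5 %NCO figure are assumptions chosen for illustration, not values taken from this article or a recommended formulation.

```python
# Back-of-the-envelope polyurethane stoichiometry (standard textbook relations):
# hydroxyl equivalents from the OH number (mg KOH per g of polyol, KOH = 56,100 mg/mol)
# and isocyanate equivalent weight from the %NCO content (NCO group = 42 g/mol).
def isocyanate_parts(polyol_mass_g, oh_number, pct_nco, index=1.05):
    oh_equivalents = polyol_mass_g * oh_number / 56100.0
    nco_equivalent_weight = 4200.0 / pct_nco
    return oh_equivalents * nco_equivalent_weight * index

# 100 g of a 56 mg KOH/g polyether diol cured with an isocyanate of 31.5 %NCO:
print(round(isocyanate_parts(100.0, 56.0, 31.5), 1), "g isocyanate")
```

With these assumed inputs the calculation gives roughly 14 g of isocyanate per 100 g of polyol at a 1.05 index.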
Polyol
[ "Chemistry", "Materials_science", "Engineering" ]
1,175
[ "Organic polymers", "Synthetic resins", "Products of chemical industry", "Synthetic materials", "Materials science", "Organic compounds", "Polymer chemistry", "Commodity chemicals" ]
9,394,772
https://en.wikipedia.org/wiki/Barnes%E2%80%93Hut%20simulation
The Barnes–Hut simulation (named after Josh Barnes and Piet Hut) is an approximation algorithm for performing an N-body simulation. It is notable for having order O(n log n) compared to a direct-sum algorithm which would be O(n²). The simulation volume is usually divided up into cubic cells via an octree (in a three-dimensional space), so that only particles from nearby cells need to be treated individually, and particles in distant cells can be treated as a single large particle centered at the cell's center of mass (or as a low-order multipole expansion). This can dramatically reduce the number of particle pair interactions that must be computed. Some of the most demanding high-performance computing projects perform computational astrophysics using the Barnes–Hut treecode algorithm, such as DEGIMA. Algorithm The Barnes–Hut tree In a three-dimensional N-body simulation, the Barnes–Hut algorithm recursively divides the n bodies into groups by storing them in an octree (or a quad-tree in a 2D simulation). Each node in this tree represents a region of the three-dimensional space. The topmost node represents the whole space, and its eight children represent the eight octants of the space. The space is recursively subdivided into octants until each subdivision contains 0 or 1 bodies (some regions do not have bodies in all of their octants). There are two types of nodes in the octree: internal and external nodes. An external node has no children and is either empty or represents a single body. Each internal node represents the group of bodies beneath it, and stores the center of mass and the total mass of all its children bodies. Calculating the force acting on a body To calculate the net force on a particular body, the nodes of the tree are traversed, starting from the root. If the center of mass of an internal node is sufficiently far from the body, the bodies contained in that part of the tree are treated as a single particle whose position and mass are respectively the center of mass and total mass of the internal node. If the internal node is sufficiently close to the body, the process is repeated for each of its children. Whether a node is sufficiently far away from a body depends on the quotient s/d, where s is the width of the region represented by the internal node, and d is the distance between the body and the node's center of mass. The node is sufficiently far away when this ratio is smaller than a threshold value θ. The parameter θ determines the accuracy of the simulation; larger values of θ increase the speed of the simulation but decrease its accuracy. If θ = 0, no internal node is treated as a single body and the algorithm degenerates to a direct-sum algorithm. See also NEMO (Stellar Dynamics Toolbox) Nearest neighbor search Fast multipole method References and sources References Sources External links Treecodes, J. Barnes Parallel TreeCode HTML5/JavaScript Example Graphical Barnes–Hut Simulation PEPC – The Pretty Efficient Parallel Coulomb solver, an open-source parallel Barnes–Hut tree code with exchangeable interaction kernel for a multitude of applications Parallel GPU N-body simulation program with fast stackless particles tree traversal at beltoforion.de Simulation Gravity Physical cosmology Numerical integration (quadrature) Articles containing video clips
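A minimal 2-D sketch of the opening criterion s/d < θ described above. The class layout, the softening constant EPS, the unit gravitational constant and the demo data are choices of this example rather than part of the original algorithm description, and tree construction is omitted for brevity.

```python
# Minimal 2-D illustration of the Barnes-Hut opening criterion (s/d < theta).
import math
from dataclasses import dataclass, field

G = 1.0      # gravitational constant in simulation units (assumption)
EPS = 1e-9   # softening term to avoid division by zero (assumption)

@dataclass
class Node:
    cx: float            # centre of the square region represented by this node
    cy: float
    s: float             # width of that region
    mass: float = 0.0    # total mass of the bodies below this node
    comx: float = 0.0    # centre of mass of those bodies
    comy: float = 0.0
    body: object = None  # (x, y, m) tuple for an external node, else None
    children: list = field(default_factory=list)

def accel_on(body, node, theta=0.5):
    """Acceleration exerted on `body` = (x, y, m) by the subtree rooted at `node`."""
    if node.mass == 0.0 or node.body is body:
        return (0.0, 0.0)
    dx, dy = node.comx - body[0], node.comy - body[1]
    d = math.hypot(dx, dy) + EPS
    # Opening criterion: treat the whole cell as one particle if s/d < theta,
    # or if the node is external (it holds at most a single body anyway).
    if node.body is not None or node.s / d < theta:
        a = G * node.mass / (d * d)
        return (a * dx / d, a * dy / d)
    ax = ay = 0.0
    for child in node.children:
        cax, cay = accel_on(body, child, theta)
        ax += cax
        ay += cay
    return (ax, ay)

if __name__ == "__main__":
    # Two bodies placed by hand in a single cell, just to exercise the routine.
    b1, b2 = (0.0, 0.0, 1.0), (1.0, 0.0, 1.0)
    leaf1 = Node(cx=0.25, cy=0.5, s=0.5, mass=1.0, comx=0.0, comy=0.0, body=b1)
    leaf2 = Node(cx=0.75, cy=0.5, s=0.5, mass=1.0, comx=1.0, comy=0.0, body=b2)
    root = Node(cx=0.5, cy=0.5, s=1.0, mass=2.0, comx=0.5, comy=0.0,
                children=[leaf1, leaf2])
    print(accel_on(b1, root))   # roughly (1.0, 0.0) in these units
```

Setting theta to 0 makes the condition s/d < θ impossible to satisfy, so the traversal always recurses down to the leaves and the routine reduces to direct summation, matching the remark above.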
Barnes–Hut simulation
[ "Physics", "Astronomy" ]
689
[ "Astrophysics", "Theoretical physics", "Physical cosmology", "Astronomical sub-disciplines" ]
9,397,319
https://en.wikipedia.org/wiki/Gauss%E2%80%93Lucas%20theorem
In complex analysis, a branch of mathematics, the Gauss–Lucas theorem gives a geometric relation between the roots of a polynomial P and the roots of its derivative P′. The set of roots of a real or complex polynomial is a set of points in the complex plane. The theorem states that the roots of P′ all lie within the convex hull of the roots of P, that is, the smallest convex polygon containing the roots of P. When P has a single root then this convex hull is a single point and when the roots lie on a line then the convex hull is a segment of this line. The Gauss–Lucas theorem, named after Carl Friedrich Gauss and Félix Lucas, is similar in spirit to Rolle's theorem. Formal statement If P is a (nonconstant) polynomial with complex coefficients, all zeros of P′ belong to the convex hull of the set of zeros of P. Special cases It is easy to see that if P is a second degree polynomial, the zero of P′ is the average of the roots of P. In that case, the convex hull is the line segment with the two roots as endpoints and it is clear that the average of the roots is the middle point of the segment. For a third degree complex polynomial P (cubic function) with three distinct zeros, Marden's theorem states that the zeros of P′ are the foci of the Steiner inellipse, which is the unique ellipse tangent to the sides of the triangle formed by the zeros of P at their midpoints. For a fourth degree complex polynomial P (quartic function) with four distinct zeros forming a concave quadrilateral, one of the zeros of P lies within the convex hull of the other three; all three zeros of P′ lie in two of the three triangles formed by the interior zero of P and two other zeros of P. In addition, if a polynomial of degree n with real coefficients has n distinct real zeros, we see, using Rolle's theorem, that the zeros of the derivative polynomial lie in the interval bounded by the smallest and largest of those zeros, which is the convex hull of the set of roots. The convex hull of the roots of the polynomial p(x) = a_n x^n + a_(n−1) x^(n−1) + … + a_0 in particular includes the point −a_(n−1)/(n·a_n), the arithmetic mean of its roots. Proof See also Marden's theorem Bôcher's theorem Sendov's conjecture Routh–Hurwitz theorem Hurwitz's theorem (complex analysis) Descartes' rule of signs Rouché's theorem Properties of polynomial roots Cauchy interlacing theorem Notes References Craig Smorynski: MVT: A Most Valuable Theorem. Springer, 2017, ISBN 978-3-319-52956-1, pp. 411–414 External links Lucas–Gauss Theorem by Bruce Torrence, the Wolfram Demonstrations Project. Gauss-Lucas theorem - interactive illustration Convex analysis Articles containing proofs Theorems in complex analysis Theorems about polynomials
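The Proof heading above is empty in this copy; the following LaTeX sketch records the classical argument via logarithmic differentiation. It is standard textbook material, not text recovered from the article, and uses the notation P and a_i for the polynomial and its zeros.

```latex
% Classical proof sketch of the Gauss–Lucas theorem.
% Write P(z) = a \prod_{i=1}^{n} (z - a_i). Logarithmic differentiation gives,
% wherever P(z) \neq 0,
\[
  \frac{P'(z)}{P(z)} \;=\; \sum_{i=1}^{n} \frac{1}{z - a_i}.
\]
% If z is a zero of P' with P(z) \neq 0, this sum vanishes; taking conjugates,
\[
  \sum_{i=1}^{n} \frac{\bar z - \bar a_i}{\lvert z - a_i\rvert^{2}} = 0
  \quad\Longrightarrow\quad
  z \;=\; \frac{\sum_{i} \lvert z - a_i\rvert^{-2}\, a_i}{\sum_{i} \lvert z - a_i\rvert^{-2}},
\]
% a convex combination of the a_i, so z lies in their convex hull.
% If instead P(z) = 0, then z is itself one of the a_i and the claim is trivial.
```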
Gauss–Lucas theorem
[ "Mathematics" ]
567
[ "Theorems in mathematical analysis", "Theorems in algebra", "Theorems in complex analysis", "Theorems about polynomials", "Articles containing proofs" ]
9,400,139
https://en.wikipedia.org/wiki/Tesseractic%20honeycomb
In four-dimensional euclidean geometry, the tesseractic honeycomb is one of the three regular space-filling tessellations (or honeycombs), represented by Schläfli symbol {4,3,3,4}, and consisting of a packing of tesseracts (4-hypercubes). Its vertex figure is a 16-cell. Two tesseracts meet at each cubic cell, four meet at each square face, eight meet on each edge, and sixteen meet at each vertex. It is an analog of the square tiling, {4,4}, of the plane and the cubic honeycomb, {4,3,4}, of 3-space. These are all part of the hypercubic honeycomb family of tessellations of the form {4,3,...,3,4}. Tessellations in this family are self-dual. Coordinates Vertices of this honeycomb can be positioned in 4-space in all integer coordinates (i,j,k,l). Sphere packing Like all regular hypercubic honeycombs, the tesseractic honeycomb corresponds to a sphere packing of edge-length-diameter spheres centered on each vertex, or (dually) inscribed in each cell instead. In the hypercubic honeycomb of 4 dimensions, vertex-centered 3-spheres and cell-inscribed 3-spheres will both fit at once, forming the unique regular body-centered cubic lattice of equal-sized spheres (in any number of dimensions). Since the tesseract is radially equilateral, there is exactly enough space in the hole between the 16 vertex-centered 3-spheres for another edge-length-diameter 3-sphere. (This 4-dimensional body centered cubic lattice is actually the union of two tesseractic honeycombs, in dual positions.) This is the same densest known regular 3-sphere packing, with kissing number 24, that is also seen in the other two regular tessellations of 4-space, the 16-cell honeycomb and the 24-cell-honeycomb. Each tesseract-inscribed 3-sphere kisses a surrounding shell of 24 3-spheres, 16 at the vertices of the tesseract and 8 inscribed in the adjacent tesseracts. These 24 kissing points are the vertices of a 24-cell of radius (and edge length) 1/2. Constructions There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,3,3,4}. Another form has two alternating tesseract facets (like a checkerboard) with Schläfli symbol {4,3,31,1}. The lowest symmetry Wythoff construction has 16 types of facets around each vertex and a prismatic product Schläfli symbol {∞}4. One can be made by stericating another. Related polytopes and tessellations The 24-cell honeycomb is similar, but in addition to the vertices at integers (i,j,k,l), it has vertices at half integers (i+1/2,j+1/2,k+1/2,l+1/2) of odd integers only. It is a half-filled body centered cubic (a checkerboard in which the red 4-cubes have a central vertex but the black 4-cubes do not). The tesseract can make a regular tessellation of the 4-sphere, with three tesseracts per face, with Schläfli symbol {4,3,3,3}, called an order-3 tesseractic honeycomb. It is topologically equivalent to the regular polytope penteract in 5-space. The tesseract can make a regular tessellation of 4-dimensional hyperbolic space, with 5 tesseracts around each face, with Schläfli symbol {4,3,3,5}, called an order-5 tesseractic honeycomb. The Ammann–Beenker tiling is an aperiodic tiling in 2 dimensions obtained by cut-and-project on the tesseractic honeycomb along an eightfold rotational axis of symmetry. Birectified tesseractic honeycomb A birectified tesseractic honeycomb, , contains all rectified 16-cell (24-cell) facets and is the Voronoi tessellation of the D4* lattice. 
Facets can be identically colored from a doubled ×2, [[4,3,3,4]] symmetry, alternately colored from , [4,3,3,4] symmetry, three colors from , [4,3,31,1] symmetry, and 4 colors from , [31,1,1,1] symmetry. See also Regular and uniform honeycombs in 4-space: 16-cell honeycomb 24-cell honeycomb 5-cell honeycomb Truncated 5-cell honeycomb Omnitruncated 5-cell honeycomb References Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) - Model 1 x∞o x∞o x∞o x∞o, x∞x x∞o x∞o x∞o, x∞x x∞x x∞o x∞o, x∞x x∞x x∞x x∞o,x∞x x∞x x∞x x∞x, x∞o x∞o x4o4o, x∞o x∞o o4x4o, x∞x x∞o x4o4o, x∞x x∞o o4x4o, x∞o x∞o x4o4x, x∞x x∞x x4o4o, x∞x x∞x o4x4o, x∞x x∞o x4o4x, x∞x x∞x x4o4x, x4o4x x4o4x, x4o4x o4x4o, x4o4x x4o4o, o4x4o o4x4o, x4o4o o4x4o, x4o4o x4o4o, x∞x o3o3o *d4x, x∞o o3o3o *d4x, x∞x x4o3o4x, x∞o x4o3o4x, x∞x x4o3o4o, x∞o x4o3o4o, o3o3o *b3o4x, x4o3o3o4x, x4o3o3o4o - test - O1 Honeycombs (geometry) 5-polytopes Regular tessellations
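Returning to the sphere-packing section above, the 24 kissing points can be checked numerically. The script below is an illustration under the unit-edge-length coordinates described there; the expected output (nearest-neighbour distance 1/2 and 96 such pairs) matches the edge count of a 24-cell whose edge length equals its circumradius.

```python
# Check the kissing arrangement of the tesseractic honeycomb sphere packing:
# contact points toward the 16 tesseract vertices and the 8 adjacent cell centres
# should give 24 points at distance 1/2 from the origin forming a 24-cell.
from itertools import product, combinations
import math

# Contact points toward the tesseract vertices: all sign patterns of (1/4, 1/4, 1/4, 1/4).
pts = [tuple(s / 4 for s in signs) for signs in product((-1, 1), repeat=4)]
# Contact points toward the 8 neighbouring tesseracts: permutations of (+-1/2, 0, 0, 0).
pts += [tuple(0.5 * (i == k) * s for i in range(4)) for k in range(4) for s in (-1, 1)]

assert len(pts) == 24
assert all(abs(math.dist(p, (0, 0, 0, 0)) - 0.5) < 1e-12 for p in pts)  # all touch the central sphere

edge = min(math.dist(p, q) for p, q in combinations(pts, 2))
edges = sum(abs(math.dist(p, q) - edge) < 1e-12 for p, q in combinations(pts, 2))
print(edge, edges)   # expect 0.5 and 96, the edge length and edge count of a 24-cell
```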
Tesseractic honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
1,577
[ "Regular tessellations", "Honeycombs (geometry)", "Tessellation", "Crystallography", "Symmetry" ]
9,402,045
https://en.wikipedia.org/wiki/Pinwheel%20tiling
In geometry, pinwheel tilings are non-periodic tilings defined by Charles Radin and based on a construction due to John Conway. They are the first known non-periodic tilings to each have the property that their tiles appear in infinitely many orientations. Definition Let T be the right triangle with side lengths 1, 2 and √5. Conway noticed that T can be divided into five isometric copies of its image under the dilation of factor 1/√5. The pinwheel tiling is obtained by repeatedly inflating T by a factor of √5 and then subdividing each tile in this manner. Conversely, the tiles of the pinwheel tiling can be grouped into groups of five that form a larger pinwheel tiling. In this tiling, isometric copies of T appear in infinitely many orientations because the small angle of T, arctan(1/2), is not a rational multiple of π. Radin found a collection of five prototiles, each of which is a marking of T, so that the matching rules on these tiles and their reflections enforce the pinwheel tiling. All of the vertices have rational coordinates, and tile orientations are uniformly distributed around the circle. Generalizations Radin and Conway proposed a three-dimensional analogue which was dubbed the quaquaversal tiling. There are other variants and generalizations of the original idea. One gets a fractal by iteratively dividing T into five isometric copies, following the Conway construction, and discarding the middle triangle (ad infinitum). This "pinwheel fractal" has Hausdorff dimension log 4/log √5 ≈ 1.7227. Use in architecture Federation Square, a building complex in Melbourne, Australia, features the pinwheel tiling. In the project, the tiling pattern is used to create the structural sub-framing for the facades, allowing for the facades to be fabricated off-site, in a factory and later erected to form the facades. The pinwheel tiling system was based on a single triangular element, composed of zinc, perforated zinc, sandstone or glass (known as a tile), which was joined to 4 other similar tiles on an aluminum frame to form a "panel". Five panels were affixed to a galvanized steel frame, forming a "mega-panel", which was then hoisted onto support frames for the facade. The rotational positioning of the tiles gives the facades a more random, uncertain compositional quality, even though the process of their construction is based on pre-fabrication and repetition. The same pinwheel tiling system is used in the development of the structural frame and glazing for the "Atrium" at Federation Square, although in this instance the pinwheel grid has been made "3-dimensional" to form a portal frame structure. References External links Pinwheel at the Tilings Encyclopedia Dynamic Pinwheel made in GeoGebra Discrete geometry Aperiodic tilings
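Two quick numerical sanity checks on the construction above: the area balance of the five scaled copies, and the similarity dimension of the pinwheel fractal obtained by keeping four of the five copies. This is a small sketch assuming the 1 × 2 right triangle, not code from any reference.

```python
# Sanity checks for the pinwheel subdivision and the pinwheel fractal dimension.
import math

leg_a, leg_b = 1.0, 2.0
hyp = math.hypot(leg_a, leg_b)                 # sqrt(5), the hypotenuse of T
area = 0.5 * leg_a * leg_b                     # area of T

scale = 1 / math.sqrt(5)                       # dilation factor of the subdivision
print(math.isclose(5 * (scale ** 2) * area, area))   # True: five scaled copies tile T

dim = math.log(4) / math.log(math.sqrt(5))     # four copies kept, each scaled by 1/sqrt(5)
print(round(dim, 4))                           # ~1.7227
```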
Pinwheel tiling
[ "Physics", "Mathematics" ]
575
[ "Discrete mathematics", "Tessellation", "Discrete geometry", "Aperiodic tilings", "Symmetry" ]
4,148,025
https://en.wikipedia.org/wiki/Radiation-absorbent%20material
In materials science, radiation-absorbent material (RAM) is a material which has been specially designed and shaped to absorb incident RF radiation (also known as non-ionising radiation), as effectively as possible, from as many incident directions as possible. The more effective the RAM, the lower the resulting level of reflected RF radiation. Many measurements in electromagnetic compatibility (EMC) and antenna radiation patterns require that spurious signals arising from the test setup, including reflections, are negligible to avoid the risk of causing measurement errors and ambiguities. Introduction One of the most effective types of RAM comprises arrays of pyramid-shaped pieces, each of which is constructed from a suitably lossy material. To work effectively, all internal surfaces of the anechoic chamber must be entirely covered with RAM. Sections of RAM may be temporarily removed to install equipment but they must be replaced before performing any tests. To be sufficiently lossy, RAM can be neither a good electrical conductor nor a good electrical insulator as neither type actually absorbs any power. Typically pyramidal RAM will comprise a rubberized foam material impregnated with controlled mixtures of carbon and iron. The length from base to tip of the pyramid structure is chosen based on the lowest expected frequency and the amount of absorption required. For low frequency damping, this distance is often , while high-frequency panels are as short as . Panels of RAM are typically installed on the walls of an EMC test chamber with the tips pointing inward to the chamber. Pyramidal RAM attenuates signal by two effects: scattering and absorption. Scattering can occur both coherently, when reflected waves are in-phase but directed away from the receiver, or incoherently where waves are picked up by the receiver but are out of phase and thus have lower signal strength. This incoherent scattering also occurs within the foam structure, with the suspended carbon particles promoting destructive interference. Internal scattering can result in as much as 10 dB of attenuation. Meanwhile, the pyramid shapes are cut at angles that maximize the number of bounces a wave makes within the structure. With each bounce, the wave loses energy to the foam material and thus exits with lower signal strength. An alternative type of RAM comprises flat plates of ferrite material, in the form of flat tiles fixed to all interior surfaces of the chamber. This type has a smaller effective frequency range than the pyramidal RAM and is designed to be fixed to good conductive surfaces. It is generally easier to fit and more durable than the pyramidal type RAM but is less effective at higher frequencies. Its performance might however be quite adequate if tests are limited to lower frequencies (ferrite plates have a damping curve that makes them most effective between 30–1000 MHz). There is also a hybrid type, a ferrite in pyramidal shape. Containing the advantages of both technologies, the frequency range can be maximized while the pyramid remains small, about . For physically-realizable radiation-absorbent materials, there is a trade-off between thickness and bandwidth: optimal thickness to bandwidth ratio of a radiation-absorbent material is given by the Rozanov limit. Use in stealth technology Radar-absorbent materials are used in stealth technology to disguise a vehicle or structure from radar detection. A material's absorbency at a given frequency of radar wave depends upon its composition. 
RAM cannot perfectly absorb radar at any frequency, but any given composition does have greater absorbency at some frequencies than others; no one RAM is suited to absorption of all radar frequencies. A common misunderstanding is that RAM makes an object invisible to radar. A radar-absorbent material can significantly reduce an object's radar cross-section in specific radar frequencies, but it does not result in "invisibility" on any frequency. History The earliest forms of stealth coating were radar absorbing paints developed by Major K. Mano of the Tama Technical Institute, and Dr. Shiba of the Tokyo Engineering College for the IJAAF. Multiple paint mixtures were tested with ferric oxide and liquid rubber, as well as ferric oxide, asphalt and airplane dope having the best results. Despite success in laboratory tests, the paints saw little practical application as they were heavy and would significantly impact the performance of any aircraft they were applied to. Conversely the IJN saw great potential in anti-radar materials and the Second Naval Technical Institute began research on layered materials to absorb radar waves rather than paint. Rubber and plastic with carbon powder with varying ratios were layered to absorb and disperse radar waves. The results were promising against 3 GHz (S band) frequencies, but poor against 3 cm wave length (10 GHz, X band) radar. Work on the program was halted due to allied bombing raids, but research was continued post war by the Americans to mild success. In September of 1944, materials called Sumpf and Schornsteinfeger, coatings used by the German navy during World War II for the snorkels (or periscopes) of submarines, to lower their reflectivity in the 20 cm radar band (1.5 GHz, L band) the Allies used. The material had a layered structure and was based on graphite particles and other semiconductive materials embedded in a rubber matrix. The material's efficiency was partially reduced by the action of sea water. A related use was planned for the Horten Ho 229 aircraft. The adhesive which bonded plywood sheets in its skin was impregnated with graphite particles which were intended to reduce its visibility to Britain's radar. Types of radar-absorbent material (RAM) Iron ball paint absorber One of the most commonly known types of RAM is iron ball paint. It contains tiny spheres coated with carbonyl iron or ferrite. Radar waves induce molecular oscillations from the alternating magnetic field in this paint, which leads to conversion of the radar energy into heat. The heat is then transferred to the aircraft and dissipated. The iron particles in the paint are obtained by decomposition of iron pentacarbonyl and may contain traces of carbon, oxygen, and nitrogen. One technique used in the F-117A Nighthawk and other such stealth aircraft is to use electrically isolated carbonyl iron balls of specific dimensions suspended in a two-part epoxy paint. Each of these microscopic spheres is coated in silicon dioxide as an insulator through a proprietary process. Then, during the panel fabrication process, while the paint is still liquid, a magnetic field is applied with a specific Gauss strength and at a specific distance to create magnetic field patterns in the carbonyl iron balls within the liquid paint ferrofluid. The paint then hardens with the magnetic field holding the particles in their magnetic pattern. 
Some experimentation has been done applying opposing north–south magnetic fields to opposing sides of the painted panels, causing the carbonyl iron particles to align (standing up on end so they are three-dimensionally parallel to the magnetic field). The carbonyl iron ball paint is most effective when the balls are evenly dispersed, electrically isolated, and present a gradient of progressively greater density to the incoming radar waves. A related type of RAM consists of neoprene polymer sheets with ferrite grains or conductive carbon black particles (containing about 0.30% of crystalline graphite by cured weight) embedded in the polymer matrix. The tiles were used on early versions of the F-117A Nighthawk, although more recent models use painted RAM. The painting of the F-117 is done by industrial robots so the paint can be applied consistently in specific layer thicknesses and densities. The plane is covered in tiles "glued" to the fuselage and the remaining gaps are filled with iron ball "glue." The United States Air Force introduced a radar-absorbent paint made from both ferrofluidic and nonmagnetic substances. By reducing the reflection of electromagnetic waves, this material helps to reduce the visibility of RAM-painted aircraft on radar. The Israeli firm Nanoflight has also made a radar-absorbing paint that uses nanoparticles. The Republic of China (Taiwan)'s military has also successfully developed radar-absorbing paint which is currently used on Taiwanese stealth warships and the Taiwanese-built stealth jet fighter which is currently in development in response to the development of stealth technology by their rival, the mainland People's Republic of China which is known to have displayed both stealth warships and planes to the public. Foam absorber Foam absorber is used as lining of anechoic chambers for electromagnetic radiation measurements. This material typically consists of a fireproofed urethane foam loaded with conductive carbon black [carbonyl iron spherical particles, and/or crystalline graphite particles] in mixtures between 0.05% and 0.1% (by weight in finished product), and cut into square pyramids with dimensions set specific to the wavelengths of interest. Further improvements can be made when the conductive particulates are layered in a density gradient, so the tip of the pyramid has the lowest percentage of particles and the base contains the highest density of particles. This presents a "soft" impedance change to incoming radar waves and further reduces reflection (echo). The length from base to tip, and width of the base of the pyramid structure is chosen based on the lowest expected frequency when a wide-band absorber is sought. For low-frequency damping in military applications, this distance is often , while high-frequency panels are as short as . An example of a high-frequency application would be the police radar (speed-measuring radar K and Ka band), the pyramids would have a dimension around long and a base. That pyramid would set on a 5 cm x 5 cm cubical base that is high (total height of pyramid and base of about ). The four edges of the pyramid are softly sweeping arcs giving the pyramid a slightly "bloated" look. This arc provides some additional scatter and prevents any sharp edge from creating a coherent reflection. Panels of RAM are installed with the tips of the pyramids pointing toward the radar source. These pyramids may also be hidden behind an outer nearly radar-transparent shell where aerodynamics are required. 
Pyramidal RAM attenuates signal by scattering and absorption. Scattering can occur either coherently, when reflected waves are in phase but directed away from the receiver, or incoherently, where waves may be reflected back to the receiver but are out of phase and thus have lower signal strength. A good example of coherent reflection is in the faceted shape of the F-117A stealth aircraft, which presents angles to the radar source such that coherent waves are reflected away from the point of origin (usually the detection source). Incoherent scattering also occurs within the foam structure, with the suspended conductive particles promoting destructive interference. Internal scattering can result in as much as 10 dB of attenuation. Meanwhile, the pyramid shapes are cut at angles that maximize the number of bounces a wave makes within the structure. With each bounce, the wave loses energy to the foam material and thus exits with lower signal strength. Other foam absorbers are available in flat sheets, using an increasing gradient of carbon loadings in different layers. Absorption within the foam material occurs when radar energy is converted to heat in the conductive particles. Therefore, in applications where high radar energies are involved, cooling fans are used to exhaust the heat generated. Jaumann absorber A Jaumann absorber or Jaumann layer is a radar-absorbent substance. When first introduced in 1943, the Jaumann layer consisted of two equally spaced reflective surfaces and a conductive ground plane. One can think of it as a generalized, multilayered Salisbury screen, as the principles are similar. Being a resonant absorber (i.e. it uses wave interference to cancel the reflected wave), the Jaumann layer is dependent upon the λ/4 spacing between the first reflective surface and the ground plane and between the two reflective surfaces (a total of λ/4 + λ/4). Because the wave can resonate at two frequencies, the Jaumann layer produces two absorption maxima across a band of wavelengths (if using the two-layer configuration). These absorbers must have all of the layers parallel to each other and to the ground plane that they conceal. More elaborate Jaumann absorbers use a series of dielectric surfaces that separate conductive sheets. The conductivity of those sheets increases with proximity to the ground plane. Split-ring resonator absorber Split-ring resonators (SRRs) in various test configurations have been shown to be extremely effective as radar absorbers. SRR technology can be used in conjunction with the technologies above to provide a cumulative absorption effect. SRR technology is particularly effective when used on faceted shapes that have perfectly flat surfaces that present no direct reflections back to the radar source (such as the F-117A). This technology uses a photographic process to create a resist layer on a thin (about ) copper foil on a dielectric backing (thin circuit board material) etched into tuned resonator arrays, each individual resonator being in a "C" shape (or other shape—such as a square). Each SRR is electrically isolated and all dimensions are carefully specified to optimize absorption at a specific radar wavelength. Not being a closed loop "O", the opening in the "C" presents a gap of specific dimension which acts as a capacitor. At 35 GHz, the diameter of the "C" is near . The resonator can be tuned to specific wavelengths, and multiple SRRs can be stacked with insulating layers of specific thicknesses between them to provide wide-band absorption of radar energy. 
When stacked, the smaller SRRs (high-frequency) in the range face the radar source first (like a stack of donuts that get progressively larger as one moves away from the radar source). Stacks of three have been shown to be effective in providing wide-band attenuation. SRR technology acts very much in the same way that antireflective coatings operate at optical wavelengths. SRR technology provides the most effective radar attenuation of any technologies known previously and is one step closer to reaching complete invisibility (total stealth, "cloaking"). Work is also progressing in visual wavelengths, as well as infrared wavelengths (LIDAR-absorbing materials). Carbon nanotube Radars work in the microwave frequency range, which can be absorbed by multi-wall nanotubes (MWNTs). Applying MWNTs to an aircraft would cause the radar energy to be absorbed, and the aircraft would therefore appear to have a smaller radar cross-section. One such application could be to paint the nanotubes onto the plane. Recently there has been some work done at the University of Michigan regarding carbon nanotubes' usefulness as stealth technology on aircraft. It has been found that in addition to the radar-absorbing properties, the nanotubes neither reflect nor scatter visible light, making the aircraft essentially invisible at night, much like painting current stealth aircraft black except much more effective. Limitations in manufacturing, however, mean that production of nanotube-coated aircraft is not currently possible. One theory to overcome these limitations is to cover small particles with the nanotubes and suspend the nanotube-covered particles in a medium such as paint, which can then be applied to a surface, like a stealth aircraft. See also Lidar Radar cross-section (RCS) Stealth technology Radar jamming and deception References Notes Bibliography The Schornsteinfeger Project, CIOS Report XXVI-24. External links Suppliers of Radar absorbent materials Electromagnetic compatibility Radar Military technology Materials
Radiation-absorbent material
[ "Physics", "Engineering" ]
3,196
[ "Electromagnetic compatibility", "Radio electronics", "Materials", "Electrical engineering", "Matter" ]
4,148,657
https://en.wikipedia.org/wiki/Liquid%20metal%20embrittlement
Liquid metal embrittlement (also known as LME and liquid metal induced embrittlement) is a phenomenon of practical importance, where certain ductile metals experience drastic loss in tensile ductility or undergo brittle fracture when exposed to specific liquid metals. Generally, tensile stress, either externally applied or internally present, is needed to induce embrittlement. Exceptions to this rule have been observed, as in the case of aluminium in the presence of liquid gallium. This phenomenon has been studied since the beginning of the 20th century. Many of its phenomenological characteristics are known and several mechanisms have been proposed to explain it. The practical significance of liquid metal embrittlement is revealed by the observation that several steels experience ductility losses and cracking during hot-dip galvanizing or during subsequent fabrication. Cracking can occur catastrophically and very high crack growth rates have been measured. Similar metal embrittlement effects can be observed even in the solid state, when one of the metals is brought close to its melting point; e.g. cadmium-coated parts operating at high temperature. This phenomenon is known as solid metal embrittlement. Characteristics Mechanical behavior Liquid metal embrittlement is characterized by the reduction in the threshold stress intensity, true fracture stress or strain to fracture when tested in the presence of liquid metals, as compared to the values obtained in tests conducted without the liquid metal. The reduction in fracture strain is generally temperature dependent and a “ductility trough” is observed as the test temperature is decreased. A ductile-to-brittle transition behaviour is also exhibited by many metal couples. The shape of the elastic region of the stress-strain curve is not altered, but the plastic region may be changed during LME. Very high crack propagation rates, varying from a few centimeters per second to several meters per second, are induced in solid metals by the embrittling liquid metals. An incubation period and a slow pre-critical crack propagation stage generally precede the final fracture. Metal chemistry It is believed that there is specificity in the solid-liquid metal combinations experiencing LME. There should be limited mutual solubility for the metal couple to cause embrittlement. Excess solubility makes sharp crack propagation difficult, while a complete lack of solubility prevents wetting of the solid surface by the liquid metal and hence prevents LME. The presence of an oxide layer on the solid metal surface also prevents good contact between the two metals and stops LME. The chemical compositions of the solid and liquid metals affect the severity of embrittlement. The addition of third elements to the liquid metal may increase or decrease the embrittlement and alter the temperature region over which embrittlement is seen. Metal combinations which form intermetallic compounds do not cause LME. There are a wide variety of LME couples. Most technologically important are the LME of aluminum and steel alloys. Metallurgy Alloying of the solid metal alters its LME. Some alloying elements may increase the severity while others may prevent LME. The action of the alloying element is known to be segregation to grain boundaries of the solid metal and alteration of the grain boundary properties. Accordingly, maximum LME is seen in cases where alloy addition elements have saturated the grain boundaries of the solid metal. The hardness and deformation behaviour of the solid metal affects its susceptibility to LME. 
Generally, harder metals are more severely embrittled. Grain size greatly influences LME. Solids with larger grains are more severely embrittled and the fracture stress varies inversely with the square root of grain diameter. Also the brittle to ductile transition temperature is increased by increasing grain size. Physico-chemical properties The interfacial energy between the solid and liquid metals and the grain boundary energy of the solid metal greatly influence LME. These energies depend upon the chemical compositions of the metal couple. Test parameters External parameters like temperature, strain rate, stress and time of exposure to the liquid metal prior to testing affect LME. Temperature produces a ductility trough and a ductile to brittle transition behaviour in the solid metal. The temperature range of the trough as well as the transition temperature are altered by the composition of the liquid and solid metals, the structure of the solid metal and other experimental parameters. The lower limit of the ductility trough generally coincides with the melting point of the liquid metal. The upper limit is strain rate sensitive. Temperature also affects the kinetics of LME. An increase in strain rate increases the upper limit temperature as well as the crack propagation rate. In most metal couples LME does not occur below a threshold stress level. Testing typically involves tensile specimens but more sophisticated testing using fracture mechanics specimens is also performed. Mechanisms Many theories have been proposed for LME. The major ones are listed below; The dissolution-diffusion model of Robertson and Glikman says that absorption of the liquid metal on the solid metal induces dissolution and inward diffusion. Under stress, these processes lead to crack nucleation and propagation. The brittle fracture theory of Stoloff and Johnson, Westwood and Kamdar proposed that the adsorption of the liquid metal atoms at the crack tip weakens inter-atomic bonds and propagates the crack. Gordon postulated a model based on diffusion-penetration of liquid metal atoms to nucleate cracks which, under stress, grow to cause failure. The ductile failure model of Lynch and Popovich predicted that adsorption of the liquid metal leads to the weakening of atomic bonds and nucleation of dislocations, which move under stress, pile up and work harden the solid. Also, dissolution helps in the nucleation of voids which grow under stress and cause ductile failure. All of these models, with the exception of Robertson, utilize the concept of an adsorption-induced surface energy lowering of the solid metal as the central cause of LME. They have succeeded in predicting many of the phenomenological observations. However, quantitative prediction of LME is still elusive. Mercury embrittlement The most common liquid metal to cause embrittlement is mercury, as it is a common contaminant in the processing of hydrocarbons in petroleum reservoirs. The embrittling effects of mercury were first recognized by Pliny the Elder circa 78 AD. Mercury spills present an especially significant danger for airplanes. The aluminium-zinc-magnesium-copper alloy DTD 5050B is especially susceptible. The Al-Cu alloy DTD 5020A is less susceptible. Spilled elemental mercury can be immobilized and made relatively harmless by silver nitrate. On 1 January 2004, the Moomba, South Australia, natural gas processing plant operated by Santos suffered a major fire. 
The gas release that led to the fire was caused by the failure of a heat exchanger (cold box) inlet nozzle in the liquids recovery plant. The failure of the inlet nozzle was due to liquid metal embrittlement of the train B aluminium cold box by elemental mercury. Popular culture Liquid metal embrittlement plays a central role in the novel Killer Instinct by Joseph Finder. In the film Big Hero 6, Honey Lemon, voiced by Genesis Rodriguez, uses liquid metal embrittlement in her lab. See also Embrittlement Hydrogen embrittlement References Building defects Materials degradation Fracture mechanics
Liquid metal embrittlement
[ "Materials_science", "Engineering" ]
1,492
[ "Structural engineering", "Fracture mechanics", "Materials science", "Building defects", "Materials degradation", "Mechanical failure" ]
4,148,957
https://en.wikipedia.org/wiki/Weapons-grade%20nuclear%20material
Weapons-grade nuclear material is any fissionable nuclear material that is pure enough to make a nuclear weapon and has properties that make it particularly suitable for nuclear weapons use. Plutonium and uranium in grades normally used in nuclear weapons are the most common examples. (These nuclear materials have other categorizations based on their purity.) Only fissile isotopes of certain elements have the potential for use in nuclear weapons. For such use, the concentration of fissile isotopes uranium-235 and plutonium-239 in the element used must be sufficiently high. Uranium from natural sources is enriched by isotope separation, and plutonium is produced in a suitable nuclear reactor. Experiments have been conducted with uranium-233 (the fissile material at the heart of the thorium fuel cycle). Neptunium-237 and some isotopes of americium might be usable, but it is not clear that this has ever been implemented. The latter substances are part of the minor actinides in spent nuclear fuel. Critical mass Any weapons-grade nuclear material must have a critical mass that is small enough to justify its use in a weapon. The critical mass for any material is the smallest amount needed for a sustained nuclear chain reaction. Moreover, different isotopes have different critical masses, and the critical mass for many radioactive isotopes is infinite, because the mode of decay of one atom cannot induce similar decay of more than one neighboring atom. For example, the critical mass of uranium-238 is infinite, while the critical masses of uranium-233 and uranium-235 are finite. The critical mass for any isotope is influenced by any impurities and the physical shape of the material. The shape with minimal critical mass and the smallest physical dimensions is a sphere. Bare-sphere critical masses at normal density of some actinides are listed in the accompanying table. Most information on bare sphere masses is classified, but some documents have been declassified. Countries that have produced weapons-grade nuclear material At least ten countries have produced weapons-grade nuclear material: Five recognized "nuclear-weapon states" under the terms of the Nuclear Non-Proliferation Treaty (NPT): the United States (first nuclear weapon tested and two bombs used as weapons in 1945), Russia (first weapon tested in 1949), the United Kingdom (1952), France (1960), and China (1964) Three other declared nuclear states that are not signatories of the NPT: India (not a signatory, weapon tested in 1974), Pakistan (not a signatory, weapon tested in 1998), and North Korea (withdrew from the NPT in 2003, weapon tested in 2006) Israel, which is widely known to have developed nuclear weapons (likely first tested in the 1960s or 1970s) but has not openly declared its capability South Africa, which also had enrichment capabilities and developed nuclear weapons (possibly tested in 1979), but disassembled its arsenal and joined the NPT in 1991 Weapons-grade uranium Natural uranium is made weapons-grade through isotopic enrichment. Initially only about 0.7% of it is fissile U-235, with the rest being almost entirely uranium-238 (U-238). They are separated by their differing masses. Highly enriched uranium is considered weapons-grade when it has been enriched to about 90% U-235. U-233 is produced from thorium-232 by neutron capture. The U-233 produced thus does not require enrichment and can be relatively easily chemically separated from residual Th-232. 
It is therefore regulated as a special nuclear material only by the total amount present. U-233 may be intentionally down-blended with U-238 to remove proliferation concerns. While U-233 would thus seem ideal for weaponization, a significant obstacle to that goal is the co-production of trace amounts of uranium-232 due to side-reactions. U-232 hazards, a result of its highly radioactive decay products such as thallium-208, are significant even at 5 parts per million. Implosion nuclear weapons require U-232 levels below 50 PPM (above which the U-233 is considered "low grade"; cf. "Standard weapon grade plutonium requires a Pu-240 content of no more than 6.5%." which is 65,000 PPM, and the analogous Pu-238 was produced in levels of 0.5% (5000 PPM) or less). Gun-type fission weapons would require low U-232 levels and low levels of light impurities on the order of 1 PPM. Weapons-grade plutonium Pu-239 is produced artificially in nuclear reactors when a neutron is absorbed by U-238, forming U-239, which then decays in a rapid two-step process into Pu-239. It can then be separated from the uranium in a nuclear reprocessing plant. Weapons-grade plutonium is defined as being predominantly Pu-239, typically about 93% Pu-239. Pu-240 is produced when Pu-239 absorbs an additional neutron and fails to fission. Pu-240 and Pu-239 are not separated by reprocessing. Pu-240 has a high rate of spontaneous fission, which can cause a nuclear weapon to pre-detonate. This makes plutonium unsuitable for use in gun-type nuclear weapons. To reduce the concentration of Pu-240 in the plutonium produced, weapons program plutonium production reactors (e.g. B Reactor) irradiate the uranium for a far shorter time than is normal for a nuclear power reactor. More precisely, weapons-grade plutonium is obtained from uranium irradiated to a low burnup. This represents a fundamental difference between these two types of reactor. In a nuclear power station, high burnup is desirable. Power stations such as the obsolete British Magnox and French UNGG reactors, which were designed to produce either electricity or weapons material, were operated at low power levels with frequent fuel changes using online refuelling to produce weapons-grade plutonium. Such operation is not possible with the light water reactors most commonly used to produce electric power. In these the reactor must be shut down and the pressure vessel disassembled to gain access to the irradiated fuel. Plutonium recovered from LWR spent fuel, while not weapons grade, can be used to produce nuclear weapons at all levels of sophistication, though in simple designs it may produce only a fizzle yield. Weapons made with reactor-grade plutonium would require special cooling to keep them in storage and ready for use. A 1962 test at the U.S. Nevada National Security Site (then known as the Nevada Proving Grounds) used non-weapons-grade plutonium produced in a Magnox reactor in the United Kingdom. The plutonium used was provided to the United States under the 1958 US–UK Mutual Defence Agreement. Its isotopic composition has not been disclosed, other than the description reactor grade, and it has not been disclosed which definition was used in describing the material this way. The plutonium was apparently sourced from the Magnox reactors at Calder Hall or Chapelcross. The content of Pu-239 in material used for the 1962 test was not disclosed, but has been inferred to have been at least 85%, much higher than typical spent fuel from currently operating reactors. 
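The isotopic purity figures quoted earlier in this article are given both as percentages and as parts per million; the conversion is simply a factor of 10,000 (1% = 10,000 PPM). The trivial check below is an illustrative addition, not part of the original article.

```python
# Percent-to-PPM bookkeeping for the isotope fractions quoted above.
def percent_to_ppm(percent: float) -> float:
    return percent * 10_000  # 1% of a quantity equals 10,000 parts per million

print(percent_to_ppm(6.5))    # Pu-240 limit for weapons-grade plutonium -> 65000.0
print(percent_to_ppm(0.5))    # quoted Pu-238 level -> 5000.0
print(percent_to_ppm(0.005))  # 0.005% equals the 50 PPM U-232 threshold for implosion weapons
```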
Occasionally, low-burnup spent fuel has been produced by a commercial LWR when an incident such as a fuel cladding failure has required early refuelling. If the period of irradiation has been sufficiently short, this spent fuel could be reprocessed to produce weapons grade plutonium. References External links Reactor-Grade and Weapons-Grade Plutonium in Nuclear Explosives, Canadian Coalition for Nuclear Responsibility Nuclear weapons and power-reactor plutonium , Amory B. Lovins, February 28, 1980, Nature, Vol. 283, No. 5750, pp. 817–823 Nuclear weapons Nuclear materials Plutonium Uranium
Weapons-grade nuclear material
[ "Physics" ]
1,620
[ "Materials", "Nuclear materials", "Matter" ]
4,149,039
https://en.wikipedia.org/wiki/A23187
A23187 is a mobile ion-carrier that forms stable complexes with divalent cations (ions with a charge of +2). A23187 is also known as Calcimycin, Calcium Ionophore, Antibiotic A23187 and Calcium Ionophore A23187. It is produced by fermentation of Streptomyces chartreusensis. Actions and uses A23187 has antibiotic properties against gram-positive bacteria and fungi. It also acts as a divalent cation ionophore, allowing these ions to cross cell membranes, which are usually impermeable to them. A23187 is most selective for Mn2+, somewhat less selective for Ca2+ and Mg2+, much less selective for Sr2+, and even less selective for Ba2+. The ionophore is used in laboratories to increase intracellular Ca2+ levels in intact cells. It also uncouples oxidative phosphorylation, the process cells use to synthesize adenosine triphosphate (ATP), which they use for energy. In addition, A23187 inhibits mitochondrial ATPase activity. A23187 also induces apoptosis in some cells (e.g. the mouse lymphoma cell line S49, and Jurkat cells) and prevents it in others (e.g. cells dependent on interleukin 3 that have had the factor withdrawn). Inex Pharmaceuticals Corporation (Canada) reported an innovative application of A23187: Inex used A23187 as a molecular tool to make artificial liposomes loaded with anti-cancer drugs such as topotecan. In the IVF field, calcium ionophore can be used in cases of a low fertilization rate after an ICSI procedure, particularly with globozoospermia (round-headed sperm syndrome); the calcium ionophore compensates for the absence of the sperm acrosome and plays a role in oocyte activation after ICSI. A reported protocol is 0.5 microgram/ml applied twice for 10 min, interrupted with fresh media and a 30 min incubation, followed by regular culture of the injected eggs for IVF. Biosynthesis The core biosynthetic enzymes are thought to include 3 proteins for the biosynthesis of the α-ketopyrrole moiety, 5 for modular type I polyketide synthases for the spiroketal ring, 4 for the biosynthesis of 3-hydroxyanthranilic acid, an N-methyltransferase tailoring enzyme, and a type II thioesterase. Commercial availability Commercially, A23187 is available as the free acid, the Ca2+ salt, and a 4-brominated analog. References External links A23187 from AG Scientific, another vendor A23187 from BIOMOL, a vendor's product page Calcimycin from Bioaustralis, a vendor's product page Antibiotics Ionophores Benzoxazoles Pyrroles Uncouplers
A23187
[ "Chemistry", "Biology" ]
633
[ "Cellular respiration", "Biotechnology products", "Antibiotics", "Biocides", "Uncouplers" ]
4,149,080
https://en.wikipedia.org/wiki/Aphidicolin
Aphidicolin is a tetracyclic diterpene antibiotic isolated from the fungus Cephalosporium aphidicola with antiviral and antimitotic properties. Aphidicolin is a reversible inhibitor of eukaryotic nuclear DNA replication. It blocks the cell cycle at early S phase. It is a specific inhibitor of DNA polymerases alpha and delta in eukaryotic cells and in some viruses (vaccinia and herpesviruses) and an apoptosis inducer in HeLa cells. Natural aphidicolin is a secondary metabolite of the fungus Nigrospora oryzae. Bibliography References Antibiotics Transferase inhibitors Diterpenes Cyclopentanes DNA polymerase inhibitors
Aphidicolin
[ "Chemistry", "Biology" ]
158
[ "Biotechnology products", "Organic compounds", "Antibiotics", "Biocides", "Organic compound stubs", "Organic chemistry stubs" ]
4,149,526
https://en.wikipedia.org/wiki/Adapter%20%28genetics%29
An adapter or adaptor in genetic engineering is a short, chemically synthesized, double-stranded oligonucleotide that can be ligated to the ends of other DNA or RNA molecules. Double-stranded adapters differ from linkers in that they contain one blunt end and one sticky end. For instance, a double-stranded DNA adapter can be used to link the ends of two other DNA molecules (i.e., ends that do not by themselves have "sticky ends", that is, complementary protruding single strands). It may be used to add sticky ends to cDNA, allowing it to be ligated into a plasmid much more efficiently. Two adapters could base pair to each other to form dimers. Types of Adapters A conversion adapter is used to join a DNA insert cut with one restriction enzyme, say EcoRI, with a vector opened with another enzyme, BamHI. This adapter can be used to convert the cohesive end produced by BamHI to one produced by EcoRI or vice versa. One of its applications is ligating cDNA into a plasmid or other vectors instead of using the terminal deoxynucleotidyl transferase enzyme to add poly(A) tails to the cDNA fragment. NGS adapters are short (~80 bp) fragments that bind to DNA to aid in amplification during library preparation and are also used to bind DNA to the flow cell during sequencing. These adapters are made up of three parts that flank the DNA sequence of interest: the flow cell binding sequence, the primer binding site, and tagged barcode regions to allow pooled sequencing. References Genetic engineering
Adapter (genetics)
[ "Chemistry", "Engineering", "Biology" ]
341
[ "Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Genetic engineering", "Molecular biology" ]
4,149,813
https://en.wikipedia.org/wiki/Clinical%20Information%20Access%20Portal
The Clinical Information Access Portal, commonly referred to as CIAP, is a project of the New South Wales Department of Health that provides online clinical resources for health professionals working within the New South Wales public health system (NSW Health). Major resources available through CIAP include: Australian Medicines Handbook Harrison's Online Journal databases – Medline, EMBASE, PsycINFO MD Consult MIMS Online Therapeutic Guidelines Micromedex BMJ Best Practice Various full text journals and eBooks References External links CIAP website – password restricted to NSW Health employees Healthcare in Australia Health informatics
Clinical Information Access Portal
[ "Biology" ]
117
[ "Health informatics", "Medical technology" ]
4,150,495
https://en.wikipedia.org/wiki/Schofield%20equation
The Schofield Equation is a method of estimating the basal metabolic rate (BMR) of adult men and women published in 1985. This is the equation used by the WHO in their technical report series. The equation that is recommended to estimate BMR by the US Academy of Nutrition and Dietetics is the Mifflin-St. Jeor equation. The equations for estimating BMR in kJ/day (kilojoules per day) from body mass (kg) are: Men: Women: The equations for estimating BMR in kcal/day (kilocalories per day) from body mass (kg) are: Men: Women: Key: W = Body weight in kilograms SEE = Standard error of estimation The raw figure obtained by the equation should be adjusted up or downwards, within the confidence limit suggested by the quoted estimation errors, and according to the following principles: Subjects leaner and more muscular than usual require more energy than the average. Obese subjects require less. Patients at the young end of the age range for a given equation require more energy. Patients at the high end of the age range for a given equation require less energy. Effects of age and body mass may cancel out: an obese 30-year-old or an athletic 60-year-old may need no adjustment from the raw figure. Physical activity levels To find total body energy expenditure (actual energy needed per day), the base metabolism must then be multiplied by a physical activity level factor. These are as follows: The FAO/WHO uses different PALs in their recommendations when recommending how to calculate TEE. See Table 5.3 of their working document. Energy Requirements of Adults, Report of a Joint FAO/WHO/UNU Expert Consultation. These equations were published in 1989 in the dietary guidelines and formed the RDA's for a number of years. The activity factor used by the USDA was 1.6. In the UK, a lower activity factor of 1.4 is used. The equation has now been replaced by the Institute of Medicine Equation in September 2002 in the US, however is still currently used by the FAO/WHO/UNU. See also Harris–Benedict equation Institute of Medicine Equation References Mass Nutrition Obesity Mathematics in medicine
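A minimal sketch of how a Schofield-style estimate is applied in practice is shown below. The coefficients used are the commonly quoted values for the 18–30 year age band in kcal/day; they are an assumption supplied for illustration (this article's text omits the numerical tables), so they should be verified against the published Schofield/WHO tables before use. The result is multiplied by a physical activity level (PAL) factor, as described above, to give total energy expenditure.

```python
# Hedged sketch of a Schofield-style BMR estimate for adults aged 18-30.
# The coefficients below are commonly quoted values (kcal/day) and are
# assumptions for illustration only -- check them against the published
# Schofield/WHO tables, which this article does not reproduce.
SCHOFIELD_18_30 = {
    "male":   (15.057, 692.2),   # BMR ~= 15.057*W + 692.2
    "female": (14.818, 486.6),   # BMR ~= 14.818*W + 486.6
}

def bmr_kcal_per_day(weight_kg: float, sex: str) -> float:
    slope, intercept = SCHOFIELD_18_30[sex]
    return slope * weight_kg + intercept

def total_energy_expenditure(weight_kg: float, sex: str, pal: float = 1.4) -> float:
    # PAL of 1.4 (UK) or 1.6 (USDA) per the text; choose to suit the population.
    return bmr_kcal_per_day(weight_kg, sex) * pal

print(round(total_energy_expenditure(70.0, "male", pal=1.6)))  # ~2794 kcal/day
```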
Schofield equation
[ "Physics", "Mathematics" ]
466
[ "Scalar physical quantities", "Physical quantities", "Applied mathematics", "Quantity", "Mass", "Size", "Wikipedia categories named after physical quantities", "Mathematics in medicine", "Matter" ]
4,151,504
https://en.wikipedia.org/wiki/Polytropic%20process
A polytropic process is a thermodynamic process that obeys the relation pV^n = C, where p is the pressure, V is volume, n is the polytropic index, and C is a constant. The polytropic process equation describes expansion and compression processes which include heat transfer. Particular cases Some specific values of n correspond to particular cases: n = 0 for an isobaric process, n = ±∞ for an isochoric process. In addition, when the ideal gas law applies: n = 1 for an isothermal process, n = γ for an isentropic process. Here γ is the ratio of the heat capacity at constant pressure (cp) to the heat capacity at constant volume (cv). Equivalence between the polytropic coefficient and the ratio of energy transfers For an ideal gas in a closed system undergoing a slow process with negligible changes in kinetic and potential energy, the process is polytropic, such that pV^n = C, where C is a constant and the polytropic coefficient n is determined by the ratio of heat transfer to work done during the process. Relationship to ideal processes For certain values of the polytropic index, the process will be synonymous with other common processes: n = 0 (isobaric), n = 1 (isothermal, for an ideal gas), n = γ (isentropic, for an ideal gas), and n = ±∞ (isochoric). When the index n is between any two of the former values (0, 1, γ, or ∞), it means that the polytropic curve will cut through (be bounded by) the curves of the two bounding indices. For an ideal gas, 1 < γ < 5/3, since by Mayer's relation cp − cv = R, so that γ = cp/cv > 1. Other A solution to the Lane–Emden equation using a polytropic fluid is known as a polytrope. See also Adiabatic process Compressor Internal combustion engine Isentropic process Isobaric process Isochoric process Isothermal process Polytrope Quasistatic equilibrium Thermodynamics Vapor-compression refrigeration References Thermodynamic processes
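As a concrete illustration of the relation pV^n = C (an added sketch, not part of the original article), the end state of a polytropic compression of an ideal gas follows directly from the index n; setting n = 1 reproduces the isothermal case and n = γ the isentropic case.

```python
# Illustrative sketch: polytropic compression of an ideal gas, p * V**n = C.
def polytropic_final_state(p1, v1, v2, n, t1=None):
    """Return (p2, T2) after moving from state (p1, v1) to volume v2 along p*V^n = C."""
    p2 = p1 * (v1 / v2) ** n
    t2 = None
    if t1 is not None:
        # For an ideal gas, T is proportional to p*V, so T2/T1 = (v1/v2)**(n - 1)
        t2 = t1 * (v1 / v2) ** (n - 1)
    return p2, t2

# Compress to half the volume with n = 1 (isothermal) and n = 1.4 (isentropic, diatomic gas).
print(polytropic_final_state(100e3, 1.0, 0.5, n=1.0, t1=300.0))  # (200000.0, 300.0)
print(polytropic_final_state(100e3, 1.0, 0.5, n=1.4, t1=300.0))  # (~263902 Pa, ~395.9 K)
```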
Polytropic process
[ "Physics", "Chemistry" ]
374
[ "Thermodynamic processes", "Thermodynamics" ]
4,152,321
https://en.wikipedia.org/wiki/Reliability%20theory%20of%20aging%20and%20longevity
The reliability theory of aging is an attempt to apply the principles of reliability theory to create a mathematical model of senescence. The theory was published in Russian by Leonid A. Gavrilov and Natalia S. Gavrilova as Biologiia prodolzhitelʹnosti zhizni in 1986, and in English translation as The Biology of Life Span: A Quantitative Approach in 1991. One of the models suggested in the book is based on an analogy with reliability theory. The underlying hypothesis is based on the previously suggested premise that humans are born in a highly defective state. This is then made worse by environmental and mutational damage; exceptionally high redundancy, due to the extremely high number of low-reliability components (e.g., cells), allows the organism to survive for a while. The theory suggests an explanation of two aging phenomena for higher organisms: the Gompertz law of exponential increase in mortality rates with age and the "late-life mortality plateau" (mortality deceleration compared to the Gompertz law at higher ages). The book criticizes a number of hypotheses known at the time, discusses drawbacks of the hypotheses put forth by the authors themselves, and concludes that regardless of the suggested mathematical models, the underlying biological mechanisms remain unknown. See also DNA damage theory of aging References Systems theory Reliability engineering Failure Survival analysis Theories of biological ageing
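As a rough illustration of the two phenomena the theory addresses (an added sketch, not taken from the book), the Gompertz law takes the mortality rate to grow exponentially with age, mu(x) = A·exp(B·x), while the late-life plateau means the observed rate stops following this exponential and levels off at extreme ages. The parameter values below are arbitrary placeholders chosen only to make the shape visible.

```python
# Illustrative sketch: Gompertz hazard with an ad-hoc late-life plateau.
import math

A, B = 1e-4, 0.085        # arbitrary placeholder Gompertz parameters
PLATEAU = 0.5             # arbitrary ceiling standing in for mortality deceleration

def hazard(age):
    gompertz = A * math.exp(B * age)  # exponential rise in mortality rate with age
    return min(gompertz, PLATEAU)     # crude plateau at extreme ages

for age in (30, 60, 90, 105, 120):
    print(age, round(hazard(age), 4))
```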
Reliability theory of aging and longevity
[ "Engineering", "Biology" ]
286
[ "Senescence", "Systems engineering", "Theories of biological ageing", "Reliability engineering" ]
4,152,892
https://en.wikipedia.org/wiki/Domain%20engineering
Domain engineering is the entire process of reusing domain knowledge in the production of new software systems. It is a key concept in systematic software reuse and product line engineering. A key idea in systematic software reuse is the domain. Most organizations work in only a few domains. They repeatedly build similar systems within a given domain with variations to meet different customer needs. Rather than building each new system variant from scratch, significant savings may be achieved by reusing portions of previous systems in the domain to build new ones. The process of identifying domains, bounding them, and discovering commonalities and variabilities among the systems in the domain is called domain analysis. This information is captured in models that are used in the domain implementation phase to create artifacts such as reusable components, a domain-specific language, or application generators that can be used to build new systems in the domain. In product line engineering as defined by ISO26550:2015, domain engineering is complemented by application engineering, which takes care of the life cycle of the individual products derived from the product line. Purpose Domain engineering is designed to improve the quality of developed software products through reuse of software artifacts. It is based on the observation that most developed software systems are not new systems but rather variants of other systems within the same field. As a result, through the use of domain engineering, businesses can maximize profits and reduce time-to-market by using the concepts and implementations from prior software systems and applying them to the target system. The reduction in cost is evident even during the implementation phase. One study showed that the use of domain-specific languages allowed code size, in both number of methods and number of symbols, to be reduced by over 50%, and the total number of lines of code to be reduced by nearly 75%. Domain engineering focuses on capturing knowledge gathered during the software engineering process. By developing reusable artifacts, components can be reused in new software systems at low cost and high quality. Because this applies to all phases of the software development cycle, domain engineering also focuses on the three primary phases: analysis, design, and implementation, paralleling application engineering. This produces not only a set of software implementation components relevant to the domain, but also reusable and configurable requirements and designs. Given the growth of data on the Web and the growth of the Internet of Things, a domain engineering approach is becoming relevant to other disciplines as well. The emergence of deep chains of Web services highlights that the service concept is relative. Web services developed and operated by one organization can be utilized as part of a platform by another organization. As services may be used in different contexts and hence require different configurations, the design of families of services may benefit from a domain engineering approach. Phases Domain engineering, like application engineering, consists of three primary phases: analysis, design, and implementation. However, where software engineering focuses on a single system, domain engineering focuses on a family of systems. 
A good domain model serves as a reference to resolve ambiguities later in the process, a repository of knowledge about the domain characteristics and definition, and a specification to developers of products which are part of the domain. Domain analysis Domain analysis is used to define the domain, collect information about the domain, and produce a domain model. Through the use of feature models (initially conceived as part of the feature-oriented domain analysis method), domain analysis aims to identify the common points in a domain and the varying points in the domain. Through the use of domain analysis, the development of configurable requirements and architectures, rather than static configurations which would be produced by a traditional application engineering approach, is possible. Domain analysis is significantly different from requirements engineering, and as such, traditional approaches to deriving requirements are ineffective for development of configurable requirements as would be present in a domain model. To effectively apply domain engineering, reuse must be considered in the earlier phases of the software development life cycle. Through the use of selection of features from developed feature models, consideration of reuse of technology is performed very early and can be adequately applied throughout the development process. Domain analysis is derived primarily from artifacts produced from past experience in the domain. Existing systems, their artifacts (such as design documents, requirement documents and user manuals), standards, and customers are all potential sources of domain analysis input. However, unlike requirements engineering, domain analysis does not solely consist of collection and formalization of information; a creative component exists as well. During the domain analysis process, engineers aim to extend knowledge of the domain beyond what is already known and to categorize the domain into similarities and differences to enhance reconfigurability. Domain analysis primarily produces a domain model, representing the common and varying properties of systems within the domain. The domain model assists with the creation of architectures and components in a configurable manner by acting as a foundation upon which to design these components. An effective domain model not only includes the varying and consistent features in a domain, but also defines the vocabulary used in the domain and defines concepts, ideas and phenomena, within the system. Feature models decompose concepts into their required and optional features to produce a fully formalized set of configurable requirements. Domain design Domain design takes the domain model produced during the domain analysis phase and aims to produce a generic architecture to which all systems within the domain can conform. In the same way that application engineering uses the functional and non-functional requirements to produce a design, the domain design phase of domain engineering takes the configurable requirements developed during the domain analysis phase and produces a configurable, standardized solution for the family of systems. Domain design aims to produce architectural patterns which solve a problem common across the systems within the domain, despite differing requirement configurations. In addition to the development of patterns during domain design, engineers must also take care to identify the scope of the pattern and the level to which context is relevant to the pattern. 
Limitation of context is crucial: too much context results in the pattern not being applicable to many systems, and too little context results in the pattern being insufficiently powerful to be useful. A useful pattern must be both frequently recurring and of high quality. The objective of domain design is to satisfy as many domain requirements as possible while retaining the flexibility offered by the developed feature model. The architecture should be sufficiently flexible to satisfy all of the systems within the domain while rigid enough to provide a solid framework upon which to base the solution. Domain implementation Domain implementation is the creation of a process and tools for efficiently generating a customized program in the domain. Criticism Domain engineering has been criticized for focusing too much on "engineering-for-reuse" or "engineering-with-reuse" of generic software features rather than concentrating on "engineering-for-use" such that an individual's world-view, language, or context is integrated into the design of software. See also Domain-driven design Product family engineering References Sources Software design Ontology (information science) Software development process Systems engineering Business analysis
Domain engineering
[ "Engineering" ]
1,422
[ "Systems engineering", "Design", "Software design" ]
10,897,878
https://en.wikipedia.org/wiki/Mesoscopic%20physics
Mesoscopic physics is a subdiscipline of condensed matter physics that deals with materials of an intermediate size. These materials range in size between the nanoscale of a quantity of atoms (such as a molecule) and materials measuring micrometres. The lower limit can also be defined as being the size of individual atoms. At the macroscopic scale are bulk materials. Both mesoscopic and macroscopic objects contain many atoms. Whereas average properties derived from constituent materials describe macroscopic objects, as they usually obey the laws of classical mechanics, a mesoscopic object, by contrast, is affected by thermal fluctuations around the average, and its electronic behavior may require modeling at the level of quantum mechanics. A macroscopic electronic device, when scaled down to a meso-size, starts revealing quantum mechanical properties. For example, at the macroscopic level the conductance of a wire increases continuously with its diameter. However, at the mesoscopic level, the wire's conductance is quantized: the increases occur in discrete, or individual, whole steps. During research, mesoscopic devices are constructed, measured and observed experimentally and theoretically in order to advance understanding of the physics of insulators, semiconductors, metals, and superconductors. The applied science of mesoscopic physics deals with the potential of building nanodevices. Mesoscopic physics also addresses fundamental practical problems which occur when a macroscopic object is miniaturized, as with the miniaturization of transistors in semiconductor electronics. The mechanical, chemical, and electronic properties of materials change as their size approaches the nanoscale, where the percentage of atoms at the surface of the material becomes significant. For bulk materials larger than one micrometre, the percentage of atoms at the surface is insignificant in relation to the number of atoms in the entire material. The subdiscipline has dealt primarily with artificial structures of metal or semiconducting material which have been fabricated by the techniques employed for producing microelectronic circuits. There is no rigid definition for mesoscopic physics but the systems studied are normally in the range of 100 nm (the size of a typical virus) to 1 000 nm (the size of a typical bacterium): 100 nanometers is the approximate upper limit for a nanoparticle. Thus, mesoscopic physics has a close connection to the fields of nanofabrication and nanotechnology. Devices used in nanotechnology are examples of mesoscopic systems. Three categories of new electronic phenomena in such systems are interference effects, quantum confinement effects and charging effects. Quantum confinement effects Quantum confinement effects describe electrons in terms of energy levels, potential wells, valence bands, conduction bands, and electron energy band gaps. Electrons in bulk dielectric materials (larger than 10 nm) can be described by energy bands or electron energy levels. Electrons exist at different energy levels or bands. In bulk materials these energy levels are described as continuous because the difference in energy is negligible. As electrons stabilize at various energy levels, most vibrate in valence bands below a forbidden energy level, named the band gap. This region is an energy range in which no electron states exist. A smaller number have energy levels above the forbidden gap, and this is the conduction band. 
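The crossover from effectively continuous bands to discrete levels can be illustrated with the textbook particle-in-a-box estimate (an added sketch, not part of the original article): the spacing between adjacent levels scales as 1/L², so it is negligible for a micrometre-sized object but becomes comparable to or larger than the thermal energy at nanometre sizes.

```python
# Illustrative estimate: energy spacing between the two lowest particle-in-a-box levels.
H = 6.626e-34      # Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
K_B = 1.381e-23    # Boltzmann constant, J/K
EV = 1.602e-19     # joules per electronvolt

def level_spacing_ev(box_size_m):
    # E_n = n^2 h^2 / (8 m L^2); spacing between n=1 and n=2 is 3 h^2 / (8 m L^2)
    return 3 * H**2 / (8 * M_E * box_size_m**2) / EV

for size_nm in (1, 10, 100, 1000):
    dE = level_spacing_ev(size_nm * 1e-9)
    print(f"L = {size_nm:5d} nm  ->  spacing ~ {dE:.2e} eV "
          f"(kT at 300 K ~ {K_B * 300 / EV:.2e} eV)")
```

For a 1 nm box the spacing is of order 1 eV, far above thermal energy at room temperature, whereas for a 1000 nm box it is of order a microelectronvolt and the spectrum is effectively continuous.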
The quantum confinement effect can be observed once the diameter of the particle is of the same magnitude as the wavelength of the electron's wave function. When materials are this small, their electronic and optical properties deviate substantially from those of bulk materials. As the material is miniaturized towards nano-scale the confining dimension naturally decreases. The characteristics are no longer averaged by bulk, and hence continuous, but are at the level of quanta and thus discrete. In other words, the energy spectrum becomes discrete, measured as quanta, rather than continuous as in bulk materials. As a result, the bandgap asserts itself: there is a small and finite separation between energy levels. This situation of discrete energy levels is called quantum confinement. In addition, quantum confinement effects consist of isolated islands of electrons that may be formed at the patterned interface between two different semiconducting materials. The electrons typically are confined to disk-shaped regions termed quantum dots. The confinement of the electrons in these systems changes their interaction with electromagnetic radiation significantly, as noted above. Because the electron energy levels of quantum dots are discrete rather than continuous, the addition or subtraction of just a few atoms to the quantum dot has the effect of altering the boundaries of the bandgap. Changing the geometry of the surface of the quantum dot also changes the bandgap energy, owing again to the small size of the dot, and the effects of quantum confinement. Interference effects In the mesoscopic regime, scattering from defects – such as impurities – induces interference effects which modulate the flow of electrons. The experimental signature of mesoscopic interference effects is the appearance of reproducible fluctuations in physical quantities. For example, the conductance of a given specimen oscillates in an apparently random manner as a function of fluctuations in experimental parameters. However, the same pattern may be retraced if the experimental parameters are cycled back to their original values; in fact, the patterns observed are reproducible over a period of days. These are known as universal conductance fluctuations. Time-resolved mesoscopic dynamics Time-resolved experiments in mesoscopic dynamics: the observation and study, at nanoscales, of condensed phase dynamics such as crack formation in solids, phase separation, and rapid fluctuations in the liquid state or in biologically relevant environments; and the observation and study, at nanoscales, of the ultrafast dynamics of non-crystalline materials. References External links Condensed matter physics Quantum mechanics
Mesoscopic physics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,175
[ "Theoretical physics", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Mesoscopic physics", "Matter" ]
10,901,904
https://en.wikipedia.org/wiki/Thermophotonics
Thermophotonics (often abbreviated as TPX) is a concept for generating usable power from heat which shares some features of thermophotovoltaic (TPV) power generation. Thermophotonics was first publicly proposed by solar photovoltaic researcher Martin Green in 2000. However, no TPX device is known to have been demonstrated to date, apparently because of the stringent requirement on the emitter efficiency. A TPX system consists of a light-emitting diode (LED) (though other types of emitters are conceivable), a photovoltaic (PV) cell, an optical coupling between the two, and an electronic control circuit. The LED is heated to a temperature higher than the PV temperature by an external heat source. If no power is applied to the LED, the system functions much like a very inefficient TPV system, but if a forward bias is applied at some fraction of the bandgap potential, an increased number of electron-hole pairs (EHPs) will be thermally excited to the bandgap energy. These EHPs can then recombine radiatively so that the LED emits light at a rate higher than the thermal radiation rate ("superthermal" emission). This light is then delivered to the cooler PV cell over the optical coupling and converted to electricity. The control circuit presents a load to the PV cell (presumably at the maximum power point) and converts this voltage to a voltage level that can be used to sustain the bias of the emitter. Provided that the conversion efficiencies of electricity to light and light to electricity are sufficiently high, the power harnessed from the PV cell can exceed the power going into the bias circuit, and this small fraction of excess power (originating from the heat difference) can be utilized. It is thus in some sense a photonic heat engine. Possible applications of thermophotonic generators include solar thermal electricity generation and utilization of waste heat. TPX systems may have the potential to generate power with useful levels of output at temperatures where only thermoelectric systems are now practical, but with higher efficiency. A patent application for a thermophotonic generator using a vacuum gap with thickness on the order of a micrometer or less was published by the US Patent Office in 2009 and assigned to MTPV Corporation of Austin, Texas, USA. This proposed variant of the technology allows better thermal insulation because of the gap between the hot emitter and cold receiver, while maintaining relatively good optical coupling between them due to the gap's being small relative to the optical wavelength. References Thermodynamics Photovoltaics
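A very rough power-balance sketch of the TPX loop described above is given below; it is an illustrative addition with made-up efficiency numbers, not a design calculation. The system only delivers net power if the electricity-to-light conversion of the heated LED and the light-to-electricity conversion of the PV cell are good enough that the PV output exceeds the power fed back into the LED bias; the "wall-plug efficiency greater than one" case used here is shorthand for the superthermal emission regime, in which the extra photon energy is drawn from the heat source rather than the bias supply.

```python
# Toy power balance for a thermophotonic (TPX) loop; all numbers are placeholders.
def net_power(p_bias_w, led_wall_plug_eff, optical_coupling_eff, pv_eff):
    # Optical power emitted by the heated, forward-biased LED per watt of bias power.
    p_optical = p_bias_w * led_wall_plug_eff
    # Power recovered by the cooler PV cell after the optical coupling.
    p_pv = p_optical * optical_coupling_eff * pv_eff
    return p_pv - p_bias_w   # > 0 means usable excess power extracted from the heat flow

print(net_power(1.0, led_wall_plug_eff=0.95, optical_coupling_eff=0.95, pv_eff=0.5))  # negative: no gain
print(net_power(1.0, led_wall_plug_eff=2.5,  optical_coupling_eff=0.95, pv_eff=0.5))  # positive: net output
```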
Thermophotonics
[ "Physics", "Chemistry", "Mathematics" ]
555
[ "Thermodynamics", "Dynamical systems" ]
10,902,751
https://en.wikipedia.org/wiki/Mesoionic%20compounds
In chemistry, a mesoionic compound is one in which a heterocyclic structure is dipolar and where both the negative and the positive charges are delocalized. A completely uncharged structure cannot be written, and mesoionic compounds cannot be represented satisfactorily by any one mesomeric structure. Mesoionic compounds are a subclass of betaines. Examples are sydnones and sydnone imines (e.g. the stimulant mesocarb), münchnones, and mesoionic carbenes. The formal positive charge is associated with the ring atoms, and the formal negative charge is associated either with ring atoms or with an exocyclic nitrogen or other atom. These compounds are stable zwitterionic compounds and belong to the nonbenzenoid aromatics. See also Mesomeric betaine References Further reading Heterocyclic compounds Zwitterions
Mesoionic compounds
[ "Physics", "Chemistry" ]
197
[ "Matter", "Organic compounds", "Heterocyclic compounds", "Zwitterions", "Ions" ]
10,906,395
https://en.wikipedia.org/wiki/Neutron%20supermirror
A neutron supermirror is a highly polished, layered material used to reflect neutron beams. Supermirrors are a special case of multi-layer neutron reflectors with varying layer thicknesses. The first neutron supermirror concept was proposed by Ferenc Mezei, inspired by earlier work with X-rays. Supermirrors are produced by depositing alternating layers of strongly contrasting substances, such as nickel and titanium, on a smooth substrate. A single layer of high refractive index material (e.g. nickel) exhibits total external reflection at small grazing angles up to a critical angle θc. For nickel with natural isotopic abundances, θc in degrees is approximately 0.1 λ, where λ is the neutron wavelength in Angstrom units. A mirror with a larger effective critical angle can be made by exploiting diffraction (with non-zero losses) that occurs from stacked multilayers. The critical angle of total reflection, in degrees, becomes approximately 0.1 m λ, where m is the "m-value" relative to natural nickel. m values in the range of 1–3 are common; in specific areas for high divergence (e.g. using focussing optics near the source, choppers, or experimental areas) m = 6 is readily available. Nickel has a positive neutron scattering length, and titanium has a negative scattering length, and in both elements the absorption cross section is small, which makes Ni-Ti the most efficient technology with neutrons. The number of Ni-Ti layers needed increases rapidly as a power of m, with the exponent in the range 2–4, which affects the cost. This has a strong bearing on the economic strategy of neutron instrument design. References Optical materials Hungarian inventions
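Using the approximate rule quoted above (reconstructed here as θc ≈ 0.1° × m × λ, with λ in ångströms), the grazing angle accepted by a supermirror can be estimated directly. The snippet below is an illustrative addition, not part of the original article; the 4 Å wavelength is an assumed example value typical of cold neutrons.

```python
# Illustrative estimate of the supermirror critical angle, theta_c ~ 0.1 deg * m * lambda(Angstrom).
def critical_angle_deg(m_value: float, wavelength_angstrom: float) -> float:
    return 0.1 * m_value * wavelength_angstrom

for m in (1.0, 2.0, 3.0, 6.0):
    print(f"m = {m}: theta_c ~ {critical_angle_deg(m, 4.0):.2f} degrees at 4 Angstrom")
```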
Neutron supermirror
[ "Physics" ]
338
[ "Optical materials", "Materials", "Particle physics", "Particle physics stubs", "Matter" ]
10,908,732
https://en.wikipedia.org/wiki/Younger%20Memnon
The Younger Memnon is an Ancient Egyptian statue, one of two colossal granite statues from the Ramesseum mortuary temple in Thebes, Upper Egypt. It depicts the Nineteenth Dynasty Pharaoh Ramesses II wearing the Nemes head-dress with a cobra diadem on top. The damaged statue has since been separated from its upper torso and head. These sections can now be found in the British Museum. The remainder of the statue remains in Egypt. It is one of a pair that originally flanked the Ramesseum's doorway. The head of the other statue is still found at the temple. Description The Younger Memnon is high × wide (across the shoulders). It weighs 7.25 tons and was cut from a single block of two-coloured granite. There is a slight variation of normal conventions in that the eyes look down slightly more than usual, and to exploit the different colours (broadly speaking, the head is in one colour, and the body another). Acquisition Belzoni Napoleon's men tried but failed to dig and remove it to France during his 1798 expedition there, during which he did acquire but then lost the Rosetta Stone. It was during this attempt that the hole on the right of the torso (just above Ramesses's right nipple) is said to have been made. Following an idea mentioned to him by his friend Johann Ludwig Burckhardt of digging the statue and bringing it to Britain, the British Consul General Henry Salt hired the adventurer Giovanni Belzoni in Cairo in 1815 for this purpose. Using his hydraulics and engineering skills, it was pulled on wooden rollers by ropes to the bank of the Nile opposite Luxor by hundreds of workmen. However, no boat was yet available to take it up to Alexandria and so Belzoni carried out an expedition to Nubia, returning by October. With French collectors also in the area possibly looking to acquire the statue, he then sent workmen to Esna to gain a suitable boat and in the meantime carried out further excavations in Thebes. He finally loaded the products of these digs, plus the Memnon, onto this boat and got it to Cairo by 15 December 1816. There he received and obeyed orders from Salt to unload all but the Memnon, which was then sent on to Alexandria and London without him. Anticipated by Shelley's poem "Ozymandias", the head arrived in 1818 on in Deptford. In London it acquired its name "The Younger Memnon", after the "Memnonianum" (the name in classical times for the Ramesseum – the two statues at the entrance of the mortuary temple of Amenhotep III were associated with Memnon in classical times, and are still known as the Colossi of Memnon. The British Museum sculpture and its pair seem to have either been mistaken for them or suffered a similar misnaming). British Museum It was later acquired from Salt in 1821 by the British Museum and was at first displayed in the old Townley Galleries (now demolished) for several years, then installed (using heavy ropes and lifting equipment and with help from the Royal Engineers) in 1834 in the new Egyptian Sculpture Gallery (now Room 4, where it now resides). The soldiers were commanded by a Waterloo veteran, Major Charles Cornwallis Dansey, lame from a wound sustained there, who therefore sat whilst commanding them. On its arrival there, it could be said to be the first piece of Egyptian sculpture to be recognized as a work of art rather than a curiosity low down in the chain of art (with ancient Greek art at the pinnacle of this chain). It is museum number EA 19. 
In February 2010, the statue was featured as object 20 in A History of the World in 100 Objects, a BBC Radio 4 programme by British Museum director Neil MacGregor. References Sources British Museum Catalogue entry 3D model of the Younger Memnon via photogrammetric survey Encyclopaedic.net – extracts from Belzoni's account Publications G. Belzoni, Narrative of the operations and recent discoveries within the pyramids, temples, tombs, and excavations in Egypt and Nubia I (London, John Murray, 1822), pp. 61–80 S. Quirke and A.J. Spencer, The British Museum book of ancient Egypt (London, The British Museum Press, 1992), pp. 126–7 Albert M. Lythgoe, 'Statues of the Goddess Sekhmet', The Metropolitan Museum of Art Bulletin Vol. 14, No. 10, Part 2 (Oct., 1919), pp. 1+3-23 Stephanie Moser, Wondrous Curiosities: Ancient Egypt at the British Museum (University of Chicago Press, 2006), Sculptures of ancient Egypt Ancient Egyptian sculptures in the British Museum 13th-century BC works Colossal statues Sculptures in the United Kingdom Ramesses II
Younger Memnon
[ "Physics", "Mathematics" ]
1,002
[ "Quantity", "Colossal statues", "Physical quantities", "Size" ]
7,214,278
https://en.wikipedia.org/wiki/Decision%20field%20theory
Decision field theory (DFT) is a dynamic-cognitive approach to human decision making. It is a cognitive model that describes how people actually make decisions rather than a rational or normative theory that prescribes what people should or ought to do. It is also a dynamic model of decision-making rather than a static model, because it describes how a person's preferences evolve across time until a decision is reached rather than assuming a fixed state of preference. The preference evolution process is mathematically represented as a stochastic process called a diffusion process. It is used to predict how humans make decisions under uncertainty, how decisions change under time pressure, and how choice context changes preferences. This model can be used to predict not only the choices that are made but also decision or response times. The paper "Decision Field Theory" was published by Jerome R. Busemeyer and James T. Townsend in 1993. The DFT has been shown to account for many puzzling findings regarding human choice behavior, including violations of stochastic dominance, violations of strong stochastic transitivity, violations of independence between alternatives, serial-position effects on preference, speed–accuracy tradeoff effects, the inverse relation between probability and decision time, changes in decisions under time pressure, as well as preference reversals between choices and prices. The DFT also offers a bridge to neuroscience. Recently, the authors of decision field theory have also begun exploring a new theoretical direction called Quantum Cognition. Introduction The name decision field theory was chosen to reflect the fact that the inspiration for this theory comes from an earlier approach – the avoidance conflict model contained in Kurt Lewin's general psychological theory, which he called field theory. DFT is a member of a general class of sequential sampling models that are commonly used in a variety of fields in cognition. The basic idea underlying the decision process for sequential sampling models is illustrated in Figure 1 below. Suppose the decision maker is initially presented with a choice between three risky prospects, A, B, C, at time t = 0. The horizontal axis on the figure represents deliberation time (in seconds), and the vertical axis represents preference strength. Each trajectory in the figure represents the preference state for one of the risky prospects at each moment in time. Intuitively, at each moment in time, the decision maker thinks about various payoffs of each prospect, which produces an affective reaction, or valence, to each prospect. These valences are integrated across time to produce the preference state at each moment. In this example, during the early stages of processing (between 200 and 300 ms), attention is focused on advantages favoring prospect C, but later (after 600 ms) attention is shifted toward advantages favoring prospect A. The stopping rule for this process is controlled by a threshold (which is set equal to 1.0 in this example): the first prospect to reach the top threshold is accepted, which in this case is prospect A after about two seconds. Choice probability is determined by the first option to win the race and cross the upper threshold, and decision time is equal to the deliberation time required by one of the prospects to reach this threshold. The threshold is an important parameter for controlling speed–accuracy tradeoffs. 
If the threshold is set to a lower value (about .30) in Figure 1, then prospect C would be chosen instead of prospect A (and would be chosen earlier). Thus decisions can reverse under time pressure. High thresholds require a strong preference state to be reached, which allows more information about the prospects to be sampled, prolonging the deliberation process, and increasing accuracy. Low thresholds allow a weak preference state to determine the decision, which cuts off sampling information about the prospects, shortening the deliberation process, and decreasing accuracy. Under high time pressure, decision makers must choose a low threshold; but under low time pressure, a higher threshold can be used to increase accuracy. Very careful and deliberative decision makers tend to use a high threshold, and impulsive and careless decision makers use a low threshold. To provide a slightly more formal description of the theory, assume that the decision maker has a choice among three actions, and also suppose for simplicity that there are only four possible final outcomes. Thus each action is defined by a probability distribution across these four outcomes. The affective values produced by each payoff are represented by the values mj. At any moment in time, the decision maker anticipates the payoff of each action, which produces a momentary evaluation, Ui(t), for action i. This momentary evaluation is an attention-weighted average of the affective evaluation of each payoff: Ui(t) = Σj Wij(t)·mj. The attention weight at time t, Wij(t), for payoff j offered by action i, is assumed to fluctuate according to a stationary stochastic process. This reflects the idea that attention is shifting from moment to moment, causing changes in the anticipated payoff of each action across time. The momentary evaluation of each action is compared with other actions to form a valence for each action at each moment, vi(t) = Ui(t) – U.(t), where U.(t) equals the average of the momentary evaluations across all actions. The valence represents the momentary advantage or disadvantage of each action. The total valence balances out to zero so that all the options cannot become attractive simultaneously. Finally, the valences are the inputs to a dynamic system that integrates the valences over time to generate the output preference states. The output preference state for action i at time t is symbolized as Pi(t). The dynamic system is described by the following linear stochastic difference equation for a small time step h in the deliberation process: Pi(t+h) = Σj sij·Pj(t) + vi(t+h). The positive self-feedback coefficient, sii = s > 0, controls the memory for past input valences for a preference state. Values of sii < 1 suggest decay in the memory or impact of previous valences over time, whereas values of sii > 1 suggest growth in impact over time (primacy effects). The negative lateral feedback coefficients, sij = sji < 0 for i not equal to j, produce competition among actions so that the strong inhibit the weak. In other words, as preference for one action grows stronger, it moderates the preference for other actions. The magnitudes of the lateral inhibitory coefficients are assumed to be an increasing function of the similarity between choice options. These lateral inhibitory coefficients are important for explaining context effects on preference described later. Formally, this is a Markov process; matrix formulas have been mathematically derived for computing the choice probabilities and distribution of choice response times. 
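A minimal Python sketch of the accumulation process described above. The payoff matrix, attention probabilities, feedback coefficients, and threshold are illustrative values rather than parameters from Busemeyer and Townsend, and the attention-weighted average is simplified to reading one attended outcome per time step.

```python
import numpy as np

# Minimal decision field theory sketch (illustrative parameters, not fitted values).
rng = np.random.default_rng(0)

M = np.array([[1.0, 0.2, 0.5, 0.0],    # affective values m_j of 4 outcomes for action A
              [0.0, 0.9, 0.1, 0.6],    # ... for action B
              [0.6, 0.4, 0.3, 0.4]])   # ... for action C
attention_p = np.array([0.25, 0.25, 0.25, 0.25])  # chance of attending to each outcome
s_self, s_lateral = 0.95, -0.02                   # self-feedback and lateral inhibition
S = np.full((3, 3), s_lateral) + np.eye(3) * (s_self - s_lateral)
threshold, max_steps = 1.0, 10_000

P = np.zeros(3)                         # preference states P_i(t)
for t in range(max_steps):
    j = rng.choice(4, p=attention_p)    # attention fluctuates across outcomes
    U = M[:, j]                         # momentary evaluations U_i(t)
    v = U - U.mean()                    # valences sum to zero across actions
    P = S @ P + v                       # linear stochastic difference equation
    if P.max() >= threshold:            # first action to cross the threshold wins
        break

print(f"chosen action: {'ABC'[int(P.argmax())]}, deliberation steps: {t + 1}")
```

Lowering `threshold` shortens deliberation and can change which action wins the race, mirroring the time-pressure effect described above.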
The decision field theory can also be seen as a dynamic and stochastic random walk theory of decision making, presented as a model positioned between lower-level neural activation patterns and more complex notions of decision making found in psychology and economics. Explaining context effects The DFT is capable of explaining context effects that many decision making theories are unable to explain. Many classic probabilistic models of choice satisfy two rational types of choice principles. One principle is called independence of irrelevant alternatives, and according to this principle, if the probability of choosing option X is greater than that of option Y when only X and Y are available, then option X should remain more likely to be chosen over Y even when a new option Z is added to the choice set. In other words, adding an option should not change the preference relation between the original pair of options. A second principle is called regularity, and according to this principle, the probability of choosing option X from a set containing only X and Y should be greater than or equal to the probability of choosing option X from a larger set containing options X, Y, and a new option Z. In other words, adding an option should only decrease the probability of choosing one of the original pair of options. However, consumer researchers studying human choice behavior have found systematic context effects that violate both of these principles. The first context effect is the similarity effect. This effect occurs with the introduction of a third option S that is similar to X but is not dominated by X. For example, suppose X is a BMW, Y is a Ford Focus, and S is an Audi. The Audi is similar to the BMW because both are not very economical but they are both high quality and sporty. The Ford Focus is different from the BMW and Audi because it is more economical but lower quality. Suppose in a binary choice, X is chosen more frequently than Y. Next suppose a new choice set is formed by adding an option S that is similar to X. If X is similar to S, and both are very different from Y, then people tend to view X and S as one group and Y as another option. Thus the probability of Y remains the same whether S is presented as an option or not. However, the probability of X will decrease by approximately half with the introduction of S. This causes the probability of choosing X to drop below Y when S is added to the choice set. This violates the independence of irrelevant alternatives property because in a binary choice, X is chosen more frequently than Y, but when S is added, then Y is chosen more frequently than X. The second context effect is the compromise effect. This effect occurs when an option C is added that is a compromise between X and Y. For example, when choosing between C = Honda and X = BMW, the latter is less economical but higher quality. However, if another option Y = Ford Focus is added to the choice set, then C = Honda becomes a compromise between X = BMW and Y = Ford Focus. Suppose in a binary choice, X (BMW) is chosen more often than C (Honda). But when option Y (Ford Focus) is added to the choice set, then option C (Honda) becomes the compromise between X (BMW) and Y (Ford Focus), and C is then chosen more frequently than X. This is another violation of the independence of irrelevant alternatives property because X is chosen more often than C in a binary choice, but when option Y is added to the choice set, C is chosen more often than X. 
The third effect is called the attraction effect. This effect occurs when the third option D is very similar to X but D is defective compared to X. For example, D may be a new sporty car developed by a new manufacturer that is similar to option X = BMW, but costs more than the BMW. Therefore, there is little or no reason to choose D over X, and in this situation D is rarely, if ever, chosen over X. However, adding D to a choice set boosts the probability of choosing X. In particular, the probability of choosing X from a set containing X, Y, D is larger than the probability of choosing X from a set containing only X and Y. The defective option D makes X shine, and this attraction effect violates the principle of regularity, which says that adding another option cannot increase the popularity of an option over the original subset. DFT accounts for all three effects using the same principles and the same parameters across the three findings. According to DFT, the attention switching mechanism is crucial for producing the similarity effect, but the lateral inhibitory connections are critical for explaining the compromise and attraction effects. If the attention switching process is eliminated, then the similarity effect disappears, and if the lateral connections are all set to zero, then the attraction and compromise effects disappear. This property of the theory entails an interesting prediction about the effects of time pressure on preferences. The contrast effects produced by lateral inhibition require time to build up, which implies that the attraction and compromise effects should become larger under prolonged deliberation. Alternatively, if context effects are produced by switching from a weighted average rule under binary choice to a quick heuristic strategy for the triadic choice, then these effects should get larger under time pressure. Empirical tests show that prolonging the decision process increases the effects and that time pressure decreases them. Neuroscience The Decision Field Theory has demonstrated an ability to account for a wide range of findings from behavioral decision making for which the purely algebraic and deterministic models often used in economics and psychology cannot account. Recent studies that record neural activations in non-human primates during perceptual decision making tasks have revealed that neural firing rates closely mimic the accumulation of preference theorized by behaviorally-derived diffusion models of decision making. The decision processes of sensory-motor decisions are beginning to be fairly well understood both at the behavioral and neural levels. Typical findings indicate that neural activation regarding stimulus movement information is accumulated across time up to a threshold, and a behavioral response is made as soon as the activation in the recorded area exceeds the threshold. A conclusion that one can draw is that the neural areas responsible for planning or carrying out certain actions are also responsible for deciding the action to carry out, a decidedly embodied notion. Mathematically, the spike activation pattern, as well as the choice and response time distributions, can be well described by what are known as diffusion models—especially in two-alternative forced choice tasks. Diffusion models, such as the decision field theory, can be viewed as stochastic recurrent neural network models, except that the dynamics are approximated by linear systems. 
The linear approximation is important for maintaining a mathematically tractable analysis of systems perturbed by noisy inputs. In addition to these neuroscience applications, diffusion models (or their discrete time, random walk, analogues) have been used by cognitive scientists to model performance in a variety of tasks ranging from sensory detection, and perceptual discrimination, to memory recognition, and categorization. Thus, diffusion models provide the potential to form a theoretical bridge between neural models of sensory-motor tasks and behavioral models of complex-cognitive tasks. Notes References Models of computation Decision theory Cognitive science Cognitive modeling Mathematical psychology
Decision field theory
[ "Mathematics" ]
2,859
[ "Applied mathematics", "Mathematical psychology" ]
7,214,369
https://en.wikipedia.org/wiki/Global%20distance%20test
The global distance test (GDT), also written as GDT_TS to represent "total score", is a measure of similarity between two protein structures with known amino acid correspondences (e.g. identical amino acid sequences) but different tertiary structures. It is most commonly used to compare the results of protein structure prediction to the experimentally determined structure as measured by X-ray crystallography, protein NMR, or, increasingly, cryoelectron microscopy. The GDT metric was developed by Adam Zemla at Lawrence Livermore National Laboratory and originally implemented in the Local-Global Alignment (LGA) program. It is intended as a more accurate measurement than the common root-mean-square deviation (RMSD) metric, which is sensitive to outlier regions created, for example, by poor modeling of individual loop regions in a structure that is otherwise reasonably accurate. The conventional GDT_TS score is computed over the alpha carbon atoms and is reported as a percentage, ranging from 0 to 100. In general, the higher the GDT_TS score, the more closely a model approximates a given reference structure. GDT_TS measurements are used as major assessment criteria in the production of results from the Critical Assessment of Structure Prediction (CASP), a large-scale experiment in the structure prediction community dedicated to assessing current modeling techniques. The metric was first introduced as an evaluation standard in the third iteration of the biennial experiment (CASP3) in 1998. Various extensions to the original method have been developed; variations that account for the positions of the side chains are known as global distance calculations (GDC). Calculation The GDT score is calculated as the largest set of amino acid residues' alpha carbon atoms in the model structure falling within a defined distance cutoff of their position in the experimental structure, after iteratively superimposing the two structures. By the original design the GDT algorithm calculates 20 GDT scores, i.e. one for each of 20 consecutive distance cutoffs (0.5 Å, 1.0 Å, 1.5 Å, ... 10.0 Å). For structure similarity assessment, the GDT scores from several cutoff distances are intended to be used together, and scores generally increase with increasing cutoff. A plateau in this increase may indicate an extreme divergence between the experimental and predicted structures, such that no additional atoms are included in any cutoff of a reasonable distance. The conventional GDT_TS total score in CASP is the average result of cutoffs at 1, 2, 4, and 8 Å (a simplified sketch of this averaging follows the article text below). Variations and extensions The original GDT_TS is calculated based on the superimpositions and GDT scores produced by the Local-Global Alignment (LGA) program. A "high accuracy" version called GDT_HA is computed by selection of smaller cutoff distances (half the size of GDT_TS) and thus more heavily penalizes larger deviations from the reference structure. It was used in the high accuracy category of CASP7. CASP8 defined a new "TR score", which is GDT_TS minus a penalty for residues clustered too close, meant to penalize steric clashes in the predicted structure, which are sometimes introduced to game the cutoff measure of GDT. The primary GDT assessment uses only the alpha carbon atoms. To apply superposition-based scoring to the amino acid residue side chains, a GDT-like score called "global distance calculation for sidechains" (GDC_sc) was designed and implemented within the LGA program in 2008. 
Instead of comparing residue positions on the basis of alpha carbons, GDC_sc uses a predefined "characteristic atom" near the end of each residue for the evaluation of inter-residue distance deviations. An "all atoms" variant of the GDC score (GDC_all) is calculated using full-model information, and is one of the standard measures used by CASP's organizers and assessors to evaluate accuracy of predicted structural models. GDT scores are generally computed with respect to a single reference structure. In some cases, structural models with lower GDT scores to a reference structure determined by protein NMR are nevertheless better fits to the underlying experimental data. Methods have been developed to estimate the uncertainty of GDT scores due to protein flexibility and uncertainty in the reference structure. See also Root mean square deviation (bioinformatics) — A different structure comparison measure. TM-score — A different structure comparison measure. References External links CASP14 results - summary tables of the latest CASP experiment run in 2020, including example plots of GDT score as a function of cutoff distance GDT, GDC, LCS and LGA description services and documentation on structure comparison and similarity measures. Bioinformatics Computational chemistry
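A simplified sketch of the GDT_TS averaging described in the Calculation section above. It assumes the model and reference alpha-carbon coordinates are already optimally superimposed and simply counts residues within the 1, 2, 4 and 8 Å cutoffs; the full LGA procedure instead searches over many superpositions to maximize each cutoff's count. The toy coordinates are made-up inputs.

```python
import numpy as np

def gdt_ts(model_ca: np.ndarray, ref_ca: np.ndarray,
           cutoffs=(1.0, 2.0, 4.0, 8.0)) -> float:
    """Simplified GDT_TS (%) for two already-superimposed N x 3 CA coordinate arrays."""
    dists = np.linalg.norm(model_ca - ref_ca, axis=1)    # per-residue deviation in angstroms
    fractions = [(dists <= c).mean() for c in cutoffs]   # fraction within each cutoff
    return 100.0 * float(np.mean(fractions))             # average over the four cutoffs

# Toy example: a 5-residue "model" displaced from its reference by varying amounts.
ref = np.zeros((5, 3))
model = ref + np.array([[0.5, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [6.0, 0, 0], [12.0, 0, 0]])
print(f"GDT_TS = {gdt_ts(model, ref):.1f}")   # halving the cutoffs gives the GDT_HA variant
```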
Global distance test
[ "Chemistry", "Engineering", "Biology" ]
984
[ "Bioinformatics", "Theoretical chemistry", "Computational chemistry", "Biological engineering" ]
7,215,216
https://en.wikipedia.org/wiki/Enriched%20Xenon%20Observatory
The Enriched Xenon Observatory (EXO) is a particle physics experiment searching for neutrinoless double beta decay of xenon-136 at WIPP near Carlsbad, New Mexico, U.S. Neutrinoless double beta decay (0νββ) detection would prove the Majorana nature of neutrinos and impact the neutrino mass values and ordering. These are important open topics in particle physics. EXO currently has a 200-kilogram liquid xenon time projection chamber (EXO-200) with R&D efforts on a ton-scale experiment (nEXO). Xenon double beta decay was detected and limits have been set for 0νββ. Overview EXO measures the rate of neutrinoless decay events above the expected background of similar signals, to find or limit the double beta decay half-life, which relates to the effective neutrino mass through a phase-space factor and nuclear matrix elements (a rough numerical sketch of this relation follows the article text below). A limit on the effective neutrino mass below 0.01 eV would determine the neutrino mass order. The effective neutrino mass depends on the lightest neutrino mass in such a way that such a bound would indicate the normal mass hierarchy. The expected rate of 0νββ events is very low, so background radiation is a significant problem. WIPP has of rock overburden—equivalent to of water—to screen incoming cosmic rays. Lead shielding and a cryostat also protect the setup. The neutrinoless decays would appear as a narrow spike in the energy spectrum around the xenon Q-value (Qββ = 2457.8 keV), which is fairly high and above most gamma decays. EXO-200 History EXO-200 was designed with a goal of less than 40 events per year within two standard deviations of the expected decay energy. This background was achieved by selecting and screening all materials for radiopurity. Originally the vessel was to be made of Teflon, but the final design of the vessel uses thin, ultra-pure copper. EXO-200 was relocated from Stanford to WIPP in the summer of 2007. Assembly and commissioning continued until the end of 2009 with data taking beginning in May 2011. Calibration was done using 228Th, 137Cs, and 60Co gamma sources. Design The prototype EXO-200 uses a copper cylindrical time projection chamber filled with of pure liquid xenon. Xenon is a scintillator, so decay particles produce prompt light which is detected by avalanche photodiodes, providing the event time. A large electric field drives ionization electrons to wires for collection. The time between the light and first collection determines the z coordinate of the event, while a grid of wires determines the radial and angular coordinates. Results The background from earth radioactivity (Th/U) and 137Xe contamination led to ≈2×10⁻³ counts/(keV·kg·yr) in the detector. An energy resolution of 1.53% near Qββ was achieved. In August 2011, EXO-200 was the first experiment to observe double beta decay of 136Xe, with a half-life of 2.11×10²¹ years. This is the slowest directly observed process. An improved half-life of 2.165 ±0.016(stat) ±0.059(sys) × 10²¹ years was published in 2014. EXO set a limit on neutrinoless beta decay of 1.6×10²⁵ years in 2012. A revised analysis of run 2 data with 100 kg·yr exposure, reported in the June issue of Nature, reduced the limits on the half-life to 1.1×10²⁵ yr and on the mass to 450 meV. This was used to confirm the power of the design and validate the proposed expansion. Additional running for two years was taken. EXO-200 has performed two scientific operations, Phase I (2011–2014) and, after upgrades, Phase II (2016–2018), for a total exposure of 234.1 kg·yr. 
No evidence of neutrinoless double beta decay has been found in the combined Phase I and II data, giving the lower bound of years for the half-life and upper mass of 239 meV. Phase II was the final operation of EXO-200. nEXO A ton-scale experiment, nEXO ("next EXO"), must overcome many backgrounds. The EXO collaboration is exploring many possibilities to do so, including barium tagging in liquid xenon. Any double beta decay event will leave behind a daughter barium ion, while backgrounds, such as radioactive impurities or neutrons, will not. Requiring a barium ion at the location of an event eliminates all backgrounds. Tagging of a single ion of barium has been demonstrated and progress has been made on a method for extracting ions out of the liquid xenon. A freezing probe method has been demonstrated, and gaseous tagging is also being developed. The 2014 EXO-200 paper indicated a 5000 kg TPC can improve the background by xenon self-shielding and better electronics. Diameter would be increased to 130 cm and a water tank would be added as shielding and muon veto. This is much larger than the attenuation length for gamma rays. Radiopure copper for nEXO has been completed. It is planned for installation in the SNOLAB "Cryopit". An Oct. 2017 paper details the experiment and discusses the sensitivity and the discovery potential of nEXO for neutrinoless double beta decay. Details on the ionization readout of the TPC have also been published. The pre-Conceptual Design Report (pCDR) for nEXO was published in 2018. The planned location is SNOLAB, Canada. References External links EXO web site nEXO web site EXO experiment record on INSPIRE-HEP Particle experiments Neutrino experiments Radioactivity Xenon
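A rough numerical sketch of how a 0νββ half-life limit translates into an effective-neutrino-mass limit via 1/T½ = G |M|² (m_ββ / m_e)². The phase-space factor and nuclear matrix element below are placeholder values chosen only for illustration; they are not numbers adopted by the EXO collaboration, and published analyses quote a range of matrix elements.

```python
import math

M_E_EV = 0.511e6   # electron mass in eV

def m_bb_upper_limit(t_half_years: float, g_phase: float, m_nme: float) -> float:
    """Effective Majorana mass (eV) implied by a 0vbb half-life limit.
    Uses 1/T = G |M|^2 (m_bb / m_e)^2, with G in 1/yr and |M| dimensionless."""
    return M_E_EV * math.sqrt(1.0 / (g_phase * m_nme**2 * t_half_years))

# Placeholder inputs for Xe-136 (hypothetical, for illustration only):
g_phase = 3.6e-14      # phase-space factor, 1/yr (assumed value)
m_nme = 2.0            # nuclear matrix element (assumed value)
print(f"m_bb < {m_bb_upper_limit(1.1e25, g_phase, m_nme):.3f} eV")
```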
Enriched Xenon Observatory
[ "Physics", "Chemistry" ]
1,230
[ "Radioactivity", "Nuclear physics" ]
7,216,032
https://en.wikipedia.org/wiki/Journal%20of%20High%20Energy%20Physics
The Journal of High Energy Physics is a monthly peer-reviewed open access scientific journal covering the field of high energy physics. It is published by Springer Science+Business Media on behalf of the International School for Advanced Studies. The journal is part of the SCOAP3 initiative. According to the Journal Citation Reports, the journal has a 2020 impact factor of 5.810. References External links Journal page at International School for Advanced Studies website English-language journals Monthly journals Physics journals Academic journals established in 1997 Springer Science+Business Media academic journals Academic journals associated with learned and professional societies Particle physics journals
Journal of High Energy Physics
[ "Physics" ]
120
[ "Particle physics stubs", "Particle physics", "Particle physics journals" ]
7,216,822
https://en.wikipedia.org/wiki/Contact%20order
The contact order of a protein is a measure of the locality of the inter-amino acid contacts in the protein's native state tertiary structure. It is calculated as the average sequence distance between residues that form native contacts in the folded protein divided by the total length of the protein. Higher contact orders indicate longer folding times, and low contact order has been suggested as a predictor of potential downhill folding, or protein folding that occurs without a free energy barrier. This effect is thought to be due to the lower loss of conformational entropy associated with the formation of local as opposed to nonlocal contacts. Relative contact order (CO) is formally defined as CO = (1/(L·N)) Σ ΔSi,j, where the sum runs over the native contacts, N is the total number of contacts, ΔSi,j is the sequence separation, in residues, between contacting residues i and j, and L is the total number of residues in the protein. The value of contact order typically ranges from 5% to 25% for single-domain proteins, with lower contact order belonging to mainly helical proteins, and higher contact order belonging to proteins with a high beta-sheet content. Protein structure prediction methods are more accurate in predicting the structures of proteins with low contact orders. This may be partly because low contact order proteins tend to be small, but is likely to be explained by the smaller number of possible long-range residue-residue interactions to be considered during global optimization procedures that minimize an energy function. Even successful structure prediction methods such as the Rosetta method overproduce low-contact-order structure predictions compared to the distributions observed in experimentally determined protein structures. The percentage of the natively folded contact order can also be used as a measure of the "nativeness" of folding transition states. Phi value analysis in concert with molecular dynamics has produced transition-state models whose contact order is close to that of the folded state in proteins that are small and fast-folding. Further, contact orders in transition states as well as those in native states are highly correlated with overall folding time. In addition to their role in structure prediction, contact orders can themselves be predicted based on a sequence alignment, which can be useful in classifying the fold of a novel sequence with some degree of homology to known sequences. See also Circuit topology: topological arrangement of contacts References Bioinformatics Protein structure
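A minimal sketch of the relative contact order formula above. The contact list and chain length are hypothetical inputs; in practice the native contacts would be extracted from an experimental structure with some heavy-atom distance criterion.

```python
def relative_contact_order(contacts, length):
    """Relative CO = (1 / (L * N)) * sum of |i - j| over the native contacts."""
    n = len(contacts)
    total_separation = sum(abs(i - j) for i, j in contacts)
    return total_separation / (length * n)

# Hypothetical 60-residue protein with a handful of native contacts (residue index pairs).
contacts = [(2, 10), (5, 30), (12, 18), (20, 55), (33, 40)]
print(f"CO = {relative_contact_order(contacts, 60):.1%}")
```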
Contact order
[ "Chemistry", "Engineering", "Biology" ]
463
[ "Bioinformatics", "Biological engineering", "Protein structure", "Structural biology" ]
7,219,559
https://en.wikipedia.org/wiki/Aurophilicity
In chemistry, aurophilicity refers to the tendency of gold complexes to aggregate via formation of weak metallophilic interactions. The main evidence for aurophilicity is from the crystallographic analysis of Au(I) complexes. The aurophilic bond has a length of about 3.0 Å and a strength of about 7–12 kcal/mol, which is comparable to the strength of a hydrogen bond. The effect is greatest for gold as compared with copper or silver—the higher elements in its periodic table group—due to increased relativistic effects. Observations and theory show that, on average, 28% of the binding energy in the aurophilic interaction can be attributed to relativistic expansion of the gold d orbitals. An example of aurophilicity is the propensity of gold centres to aggregate. While both intramolecular and intermolecular aurophilic interactions have been observed, only intramolecular aggregation has been observed at such nucleation sites. Role in self-assembly The similarity in strength between hydrogen bonding and aurophilic interaction has proven to be a convenient tool in the field of polymer chemistry. Much research has been conducted on self-assembling supramolecular structures, both those that aggregate by aurophilicity alone and those that contain both aurophilic and hydrogen-bonding interactions. An important and exploitable property of aurophilic interactions relevant to their supramolecular chemistry is that while both inter- and intramolecular interactions are possible, intermolecular aurophilic linkages are comparatively weak and easily broken by solvation; most complexes that exhibit intramolecular aurophilic interactions retain such moieties in solution. References Gold Chemical bonding
Aurophilicity
[ "Physics", "Chemistry", "Materials_science" ]
373
[ "Chemical bonding", "Condensed matter physics", "nan" ]
7,220,589
https://en.wikipedia.org/wiki/Bundle%20map
In mathematics, a bundle map (or bundle morphism) is a morphism in the category of fiber bundles. There are two distinct, but closely related, notions of bundle map, depending on whether the fiber bundles in question have a common base space. There are also several variations on the basic theme, depending on precisely which category of fiber bundles is under consideration. In the first three sections, we will consider general fiber bundles in the category of topological spaces. Then in the fourth section, some other examples will be given. Bundle maps over a common base Let πE : E → M and πF : F → M be fiber bundles over a space M. Then a bundle map from E to F over M is a continuous map φ : E → F such that πF ∘ φ = πE. That is, the corresponding diagram should commute (both commuting diagrams are sketched after the article text below). Equivalently, for any point x in M, φ maps the fiber of E over x to the fiber of F over x. General morphisms of fiber bundles Let πE:E→ M and πF:F→ N be fiber bundles over spaces M and N respectively. Then a continuous map φ : E → F is called a bundle map from E to F if there is a continuous map f:M→ N such that the diagram commutes, that is, πF ∘ φ = f ∘ πE. In other words, φ is fiber-preserving, and f is the induced map on the space of fibers of E: since πE is surjective, f is uniquely determined by φ. For a given f, such a bundle map φ is said to be a bundle map covering f. Relation between the two notions It follows immediately from the definitions that a bundle map over M (in the first sense) is the same thing as a bundle map covering the identity map of M. Conversely, general bundle maps can be reduced to bundle maps over a fixed base space using the notion of a pullback bundle. If πF:F→ N is a fiber bundle over N and f:M→ N is a continuous map, then the pullback of F by f is a fiber bundle f*F over M whose fiber over x is given by (f*F)x = Ff(x). It then follows that a bundle map from E to F covering f is the same thing as a bundle map from E to f*F over M. Variants and generalizations There are two kinds of variation of the general notion of a bundle map. First, one can consider fiber bundles in a different category of spaces. This leads, for example, to the notion of a smooth bundle map between smooth fiber bundles over a smooth manifold. Second, one can consider fiber bundles with extra structure in their fibers, and restrict attention to bundle maps which preserve this structure. This leads, for example, to the notion of a (vector) bundle homomorphism between vector bundles, in which the fibers are vector spaces, and a bundle map φ is required to be a linear map on each fiber. In this case, such a bundle map φ (covering f) may also be viewed as a section of the vector bundle Hom(E,f*F) over M, whose fiber over x is the vector space Hom(Ex,Ff(x)) (also denoted L(Ex,Ff(x))) of linear maps from Ex to Ff(x). Notes References Fiber bundles Theory of continuous functions
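The two commuting diagrams referred to above (the triangle over a common base and the square covering f) can be drawn explicitly; a LaTeX sketch using the tikz-cd package, with the names φ, π_E, π_F and f taken from the article.

```latex
\documentclass{standalone}
\usepackage{tikz-cd}
\begin{document}
% Left: bundle map over a common base M.  Right: general bundle map covering f : M -> N.
\begin{tikzcd}
E \arrow[rr, "\varphi"] \arrow[dr, "\pi_E"'] & & F \arrow[dl, "\pi_F"] \\
 & M &
\end{tikzcd}
\qquad
\begin{tikzcd}
E \arrow[r, "\varphi"] \arrow[d, "\pi_E"'] & F \arrow[d, "\pi_F"] \\
M \arrow[r, "f"'] & N
\end{tikzcd}
\end{document}
```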
Bundle map
[ "Mathematics" ]
664
[ "Theory of continuous functions", "Topology" ]
7,221,237
https://en.wikipedia.org/wiki/Tannakian%20formalism
In mathematics, a Tannakian category is a particular kind of monoidal category C, equipped with some extra structure relative to a given field K. The role of such categories C is to generalise the category of linear representations of an algebraic group G defined over K. A number of major applications of the theory have been made, or might be made in pursuit of some of the central conjectures of contemporary algebraic geometry and number theory. The name is taken from Tadao Tannaka and Tannaka–Krein duality, a theory about compact groups G and their representation theory. The theory was developed first in the school of Alexander Grothendieck. It was later reconsidered by Pierre Deligne, and some simplifications made. The pattern of the theory is that of Grothendieck's Galois theory, which is a theory about finite permutation representations of groups G which are profinite groups. The gist of the theory is that the fiber functor Φ of the Galois theory is replaced by an exact and faithful tensor functor F from C to the category of finite-dimensional vector spaces over K. The group of natural transformations of Φ to itself, which turns out to be a profinite group in the Galois theory, is replaced by the group G of natural transformations of F into itself, that respect the tensor structure. This is in general not an algebraic group but a more general group scheme that is an inverse limit of algebraic groups (pro-algebraic group), and C is then found to be equivalent to the category of finite-dimensional linear representations of G. More generally, it may be that fiber functors F as above only exist into categories of finite-dimensional vector spaces over non-trivial extension fields L/K. In such cases the group scheme G is replaced by a gerbe on the fpqc site of Spec(K), and C is then equivalent to the category of (finite-dimensional) representations of the gerbe. Formal definition of Tannakian categories Let K be a field and C a K-linear abelian rigid tensor (i.e., a symmetric monoidal) category such that the endomorphism ring of its unit object is K. Then C is a Tannakian category (over K) if there is an extension field L of K such that there exists a K-linear exact and faithful tensor functor (i.e., a strong monoidal functor) F from C to the category of finite dimensional L-vector spaces. A Tannakian category over K is neutral if such an exact faithful tensor functor F exists with L=K. Applications The Tannakian construction is used in relations between Hodge structures and l-adic representations. Morally, the philosophy of motives tells us that the Hodge structure and the Galois representation associated to an algebraic variety are related to each other. The closely related algebraic groups, the Mumford–Tate group and the motivic Galois group, arise from categories of Hodge structures, categories of Galois representations, and motives through Tannakian categories. The Mumford–Tate conjecture proposes that the algebraic groups arising from the Hodge structure and the Galois representation by means of Tannakian categories are isomorphic to one another up to connected components. Those areas of application are closely connected to the theory of motives. Another place in which Tannakian categories have been used is in connection with the Grothendieck–Katz p-curvature conjecture; in other words, in bounding monodromy groups. The Geometric Satake equivalence establishes an equivalence between representations of the Langlands dual group of a reductive group G and certain equivariant perverse sheaves on the affine Grassmannian associated to G. 
This equivalence provides a non-combinatorial construction of the Langlands dual group. It is proved by showing that the mentioned category of perverse sheaves is a Tannakian category and identifying its Tannaka dual group with the Langlands dual group. Extensions has established partial Tannaka duality results in the situation where the category is R-linear, where R is no longer a field (as in classical Tannakian duality), but certain valuation rings. has initiated and developed Tannaka duality in the context of infinity-categories. References Further reading M. Larsen and R. Pink. Determining representations from invariant dimensions. Invent. math., 102:377–389, 1990. Monoidal categories Algebraic groups Duality theories
Tannakian formalism
[ "Mathematics" ]
907
[ "Mathematical structures", "Monoidal categories", "Category theory", "Duality theories", "Geometry" ]
7,221,435
https://en.wikipedia.org/wiki/Wind%20direction
Wind direction is generally reported by the direction from which the wind originates. For example, a north or northerly wind blows from the north to the south; the exceptions are onshore winds (blowing onto the shore from the water) and offshore winds (blowing off the shore to the water). Wind direction is usually reported in cardinal (or compass) direction, or in degrees. Consequently, a wind blowing from the north has a wind direction referred to as 0° (360°); a wind blowing from the east has a wind direction referred to as 90°, etc. Weather forecasts typically give the direction of the wind along with its speed, for example a "northerly wind at 15 km/h" is a wind blowing from the north at a speed of 15 km/h. If wind gusts are present, their speed may also be reported. Measurement techniques A variety of instruments can be used to measure wind direction, such as the anemoscope, windsock, and wind vane. All these instruments work by moving to minimize air resistance. The way a weather vane is pointed by prevailing winds indicates the direction from which the wind is blowing. The larger opening of a windsock faces the direction that the wind is blowing from; its tail, with the smaller opening, points in the same direction as the wind is blowing. Modern instruments used to measure wind speed and direction are called anemoscopes, anemometers and wind vanes. These types of instruments are used by the wind energy industry, both for wind resource assessment and turbine control. When a high measurement frequency is needed (such as in research applications), wind can be measured by the propagation speed of ultrasound signals or by the effect of ventilation on the resistance of a heated wire. Another type of anemometer uses pitot tubes that take advantage of the pressure differential between an inner tube and an outer tube that is exposed to the wind to determine the dynamic pressure, which is then used to compute the wind speed. In situations where modern instruments are not available, an index finger can be used to test the direction of wind. This is accomplished by wetting the finger and pointing it upwards. The side of the finger that feels "cool" is (approximately) the direction from which the wind is blowing. The "cool" sensation is caused by an increased rate of evaporation of the moisture on the finger due to the air flow across the finger, and consequently the "finger technique" of measuring wind direction does not work well in either very humid or very hot conditions. The same principle is used to measure the dew point using a sling psychrometer (a more accurate instrument than the human finger). Another primitive technique for measuring wind direction is to take a pinch of grass and drop it; the direction that the grass falls is the direction the wind is blowing. This last technique is often used by golfers because it allows them to gauge the strength of the wind. See also Air masses Apparent wind Beaufort scale Wind fetch Wind power Wind rose Wind transducer Yamartino method for calculating the standard deviation of wind direction References Meteorological phenomena Meteorological quantities Wind
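The degree convention described above maps naturally onto the 16-point compass rose; a small Python sketch follows. The 16-point names are standard, but the nearest-sector rounding is just one common reporting convention.

```python
COMPASS_16 = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
              "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def degrees_to_compass(direction_deg: float) -> str:
    """Map a 'wind from' direction in degrees (0 = north, 90 = east) to a 16-point name."""
    sector = round(direction_deg % 360 / 22.5) % 16   # each sector spans 22.5 degrees
    return COMPASS_16[sector]

print(degrees_to_compass(0))     # N
print(degrees_to_compass(95))    # E
print(degrees_to_compass(292))   # WNW
```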
Wind direction
[ "Physics", "Mathematics" ]
635
[ "Physical phenomena", "Earth phenomena", "Physical quantities", "Quantity", "Meteorological quantities", "Meteorological phenomena" ]
7,222,821
https://en.wikipedia.org/wiki/Cast%20stone
Cast stone or reconstructed stone is a highly refined building material, a form of precast concrete used as masonry intended to simulate natural-cut stone. It is used for architectural features: trim, or ornament; facing buildings or other structures; statuary; and for garden ornaments. Cast stone can be made from white and/or grey cements, manufactured or natural sands, crushed stone or natural gravels, and colored with mineral coloring pigments. Cast stone may replace such common natural building stones as limestone, brownstone, sandstone, bluestone, granite, slate, coral, and travertine. History The earliest known use of cast stone dates from about 1138 in the Cité de Carcassonne, France. Cast stone was first used extensively in London in the 19th century and gained widespread acceptance in America in 1920. One of the earliest developments in the industry was Coade stone, a fired ceramic form of stoneware. Today most artificial stone consists of fine Portland cement-based concrete placed to set in wooden, rubber-lined fiberglass or iron moulds. It is cheaper and more uniform than natural stone, and widely used. In engineering projects, it allows transporting the bulk materials and casting near the place of use, which is cheaper than transporting and carving very large pieces of stone. According to Rupert Gunnis, a Dutchman named Van Spangen set up an artificial stone manufactory at Bow in London in 1800. Having later gone into partnership with a Mr. Powell, the firm was broken up in 1828, and the moulds were sold to a sculptor, Felix Austin. Another well-known variety was Victoria stone, which is composed of three parts finely crushed Mount Sorrel (Leicestershire) granite to one of Portland cement, carefully mechanically mixed and filled into moulds. After setting, the blocks are placed in a solution of silicate of soda to indurate and harden them. Many manufacturers turned out a very non-porous product able to resist corrosive sea air and industrial and residential air pollution. Manufacturing Cast stone is commonly manufactured by two methods: the dry tamp method and the wet cast process. Both methods produce a simulated natural-cut stone look. Wood, plaster, glue, sand, sheet metal, and gelatin are the molding materials used to produce the drawing work and casting molds, such as section, bed, and face templates. The dry tamp method requires a low-slump mixture that is tamped into the mold. The dry-tamped stone consists of two layers: an inner layer of concrete and an outer decorative layer, also known as the facing layer. In the wet cast method, an integrally colored mixture made plastic with water is used so that the material flows easily into the mold. With dry tamp mixtures, molds can be used many times, but with wet cast mixtures, molds can be used only once. Standards In the US and some other countries, the industry standard today for physical properties and raw materials constituents is ASTM C 1364, the Standard Specification for Architectural Cast Stone. Membership in ASTM International (founded in 1898 as the American Chapter of the International Association for Testing and Materials and most recently known as the American Society for Testing and Materials) exceeds 30,000 technical experts from more than 100 countries who comprise a worldwide standards forum. The ASTM method of developing standards has been based on consensus of both users and producers of all kinds of materials. 
The ASTM process ensures that interested individuals and organizations representing industry, academia, consumers, and governments alike, all have an equal vote in determining a standard's content. In the UK and Europe, it is more normal to use the Standard "BS 1217 Cast stone - Specification" from the BSI Group. The European Commission's "Construction Products Regulations" legislation states that by mid-2013 CE marking becomes mandatory for certain construction products sold in Europe, including some Cast Stone items". See also Geopolymers Anthropic rock Fambrini & Daniels Cast stone manufacturers. References Dictionnaire raisonné de l’architecture française du XIe au XVIe siècle/Béton Concrete Building materials Masonry Building stone Artificial stone
Cast stone
[ "Physics", "Engineering" ]
857
[ "Structural engineering", "Matter", "Building engineering", "Architecture", "Construction", "Materials", "Concrete", "Masonry", "Building materials" ]
16,066,056
https://en.wikipedia.org/wiki/Digital%20modeling%20and%20fabrication
Digital modeling and fabrication is a design and production process that combines 3D modeling or computer-aided design (CAD) with additive and subtractive manufacturing. Additive manufacturing is also known as 3D printing, while subtractive manufacturing may also be referred to as machining, and many other technologies can be exploited to physically produce the designed objects. Modeling Digitally fabricated objects are created with a variety of CAD software packages, using both 2D vector drawing and 3D modeling. Types of 3D models include wireframe, solid, surface and mesh. A design has one or more of these model types. Machines for fabrication Three machines are popular for fabrication: 1. CNC router 2. Laser cutter 3. 3D Printer CNC milling machine CNC stands for "computer numerical control". CNC mills or routers include proprietary software which interprets 2D vector drawings or 3D models and converts this information to G-code, which represents specific CNC functions in an alphanumeric format that the CNC mill can interpret (a minimal example of generated G-code appears after the article text below). The G-codes drive a machine tool, a powered mechanical device typically used to fabricate components. CNC machines are classified according to the number of axes that they possess, with 3, 4 and 5 axis machines all being common, and industrial robots being described as having as many as 9 axes. CNC machines are especially successful in milling materials such as plywood, plastics, foam board, and metal at a fast speed. CNC machine beds are typically large enough to allow 4' × 8' (123 cm x 246 cm) sheets of material, including foam several inches thick, to be cut. Laser cutter The laser cutter is a machine that uses a laser to cut materials such as chip board, matte board, felt, wood, and acrylic up to 3/8 inch (1 cm) thickness. The laser cutter is often bundled with driver software which interprets vector drawings produced by any number of CAD software platforms. The laser cutter is able to modulate the speed of the laser head, as well as the intensity and resolution of the laser beam, and as such is able both to cut and to score material, as well as to approximate raster graphics. Objects cut out of materials can be used in the fabrication of physical models, which will only require the assembly of the flat parts. 3D printers 3D printers use a variety of methods and technology to assemble physical versions of digital objects. Typically desktop 3D printers can make small plastic 3D objects. They use a roll of thin plastic filament, melting the plastic and then depositing it precisely to cool and harden. They normally build 3D objects from bottom to top in a series of many very thin plastic horizontal layers. This process often happens over the course of several hours. Fused deposition modeling Fused deposition modeling, also known as fused filament fabrication, uses a 3-axis robotic system that extrudes material, typically a thermoplastic, one thin layer at a time and progressively builds up a shape. Examples of machines that use this method are the Dimension 768 and the Ultimaker. Stereolithography Stereolithography uses a high intensity light projector, usually using DLP technology, with a photosensitive polymer resin. It will project the profile of an object to build a single layer, curing the resin into a solid shape. Then the printer will move the object out of the way by a small amount and project the profile of the next layer. Examples of devices that use this method are the Form-One printer and Os-RC Illios. 
Selective laser sintering Selective laser sintering uses a laser to trace out the shape of an object in a bed of finely powdered material that can be fused together by the application of heat from the laser. After one layer has been traced by a laser, the bed and partially finished part is moved out of the way, a thin layer of the powdered material is spread, and the process is repeated. Typical materials used are alumide, steel, glass, thermoplastics (especially nylon), and certain ceramics. Example devices include the Formiga P 110 and the Eos EosINT P730. Powder printer Powder printers work in a similar manner to SLS machines, and typically use powders that can be cured, hardened, or otherwise made solid by the application of a liquid binder that is delivered via an inkjet printhead. Common materials are plaster of paris, clay, powdered sugar, wood-filler bonding putty, and flour, which are typically cured with water, alcohol, vinegar, or some combination thereof. The major advantage of powder and SLS machines is their ability to continuously support all parts of their objects throughout the printing process with unprinted powder. This permits the production of geometries not easily otherwise created. However, these printers are often more complex and expensive. Examples of printers using this method are the ZCorp Zprint 400 and 450. See also Direct digital manufacturing Industry 4.0 Rapid Prototyping Responsive computer-aided design Technology education References 3D imaging 3D printing Computer-aided design Building technology Numerical control Laser applications Modelling Geometry processing
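The CNC section above describes controllers consuming G-code; a minimal Python sketch that emits a G-code outline for a rectangle follows. The feed rate, cut depth, and safe height are arbitrary illustrative values, and a real job would need tool, material, and machine-specific settings.

```python
def rectangle_gcode(width, height, depth=-3.0, feed=600, safe_z=5.0):
    """Emit G-code (mm units) tracing a rectangle from the origin at a single cutting depth."""
    corners = [(0, 0), (width, 0), (width, height), (0, height), (0, 0)]
    lines = ["G21 ; millimetre units", "G90 ; absolute positioning",
             f"G0 Z{safe_z}", "G0 X0 Y0",            # rapid to safe height, then start corner
             f"G1 Z{depth} F{feed}"]                  # plunge to cutting depth
    lines += [f"G1 X{x} Y{y} F{feed}" for x, y in corners[1:]]   # cut the outline
    lines += [f"G0 Z{safe_z}", "M2 ; end of program"]
    return "\n".join(lines)

print(rectangle_gcode(100, 50))
```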
Digital modeling and fabrication
[ "Technology", "Engineering" ]
1,042
[ "Computer-aided design", "Design engineering", "Industrial computing", "Digital manufacturing" ]
16,070,103
https://en.wikipedia.org/wiki/Electron-beam%20processing
Electron-beam processing or electron irradiation (EBI) is a process that involves using electrons, usually of high energy, to treat an object for a variety of purposes. This may take place under elevated temperatures and in a nitrogen atmosphere. Possible uses for electron irradiation include sterilization, alteration of gemstone colors, and cross-linking of polymers. Electron energies typically vary from the keV to MeV range, depending on the depth of penetration required. The irradiation dose is usually measured in grays but also in Mrads (1 Mrad is equivalent to 10 kGy). The basic components of a typical electron-beam processing device include: an electron gun (consisting of a cathode, grid, and anode), used to generate and accelerate the primary beam; and a magnetic optical (focusing and deflection) system, used for controlling the way in which the electron beam impinges on the material being processed (the "workpiece"). In operation, the gun cathode is the source of thermally emitted electrons that are both accelerated and shaped into a collimated beam by the electrostatic field geometry established by the gun electrode (grid and anode) configuration used. The electron beam then emerges from the gun assembly through an exit hole in the ground-plane anode with an energy equal to the value of the negative high voltage (gun operating voltage) being applied to the cathode. This use of a direct high voltage to produce a high-energy electron beam allows the conversion of input electrical power to beam power at greater than 95% efficiency, making electron-beam material processing a highly energy-efficient technique. After exiting the gun, the beam passes through an electromagnetic lens and deflection coil system. The lens is used for producing either a focused or defocused beam spot on the workpiece, while the deflection coil is used to either position the beam spot on a stationary location or provide some form of oscillatory motion. In polymers, an electron beam may be used on the material to induce effects such as chain scission (which makes the polymer chain shorter) and cross-linking. The result is a change in the properties of the polymer, which is intended to extend the range of applications for the material. The effects of irradiation may also include changes in crystallinity, as well as microstructure. Usually, the irradiation process degrades the polymer. The irradiated polymers may sometimes be characterized using DSC, XRD, FTIR, or SEM. In poly(vinylidene fluoride-trifluoroethylene) copolymers, high-energy electron irradiation lowers the energy barrier for the ferroelectric-paraelectric phase transition and reduces polarization hysteresis losses in the material. Electron-beam processing involves irradiation (treatment) of products using a high-energy electron-beam accelerator. Electron-beam accelerators utilize an on-off technology, with a common design being similar to that of a cathode ray television. Electron-beam processing is used in industry primarily for three product modifications: Crosslinking of polymer-based products to improve mechanical, thermal, chemical and other properties, Material degradation often used in the recycling of materials, Sterilization of medical and pharmaceutical goods. Nanotechnology is one of the fastest-growing new areas in science and engineering. Radiation was an early tool applied in this area; arrangement of atoms and ions has been performed using ion or electron beams for many years. New applications concern nanocluster and nanocomposite synthesis. 
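Since 1 rad = 0.01 Gy, the Mrad/kilogray conversion mentioned above is simple arithmetic; a small helper follows. The example doses are illustrative figures, not values taken from this article.

```python
def mrad_to_kgy(dose_mrad: float) -> float:
    """Convert an absorbed dose from megarads to kilograys (1 rad = 0.01 Gy, so 1 Mrad = 10 kGy)."""
    return dose_mrad * 10.0

def kgy_to_mrad(dose_kgy: float) -> float:
    """Convert an absorbed dose from kilograys to megarads."""
    return dose_kgy / 10.0

print(mrad_to_kgy(2.5))    # 25.0 kGy, a dose often cited for sterilization (illustrative)
print(kgy_to_mrad(100.0))  # 10.0 Mrad
```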
Crosslinking The cross-linking of polymers through electron-beam processing changes a thermoplastic material into a thermoset. When polymers are crosslinked, the molecular movement is severely impeded, making the polymer stable against heat. This locking together of molecules is the origin of all of the benefits of crosslinking, including the improvement of the following properties: Thermal: resistance to temperature, aging, low-temperature impact, etc. Mechanical: tensile strength, modulus, abrasion resistance, pressure rating, creep resistance, etc. Chemical: stress crack resistance, etc. Other: heat shrink memory properties, positive temperature coefficient, etc. Cross-linking is the interconnection of adjacent long molecules with networks of bonds induced by chemical treatment or electron-beam treatment. Electron-beam processing of thermoplastic material results in an array of enhancements, such as an increase in tensile strength and resistance to abrasions, stress cracking and solvents. Joint replacements such as knees and hips are being manufactured from cross-linked ultra-high-molecular-weight polyethylene because of the excellent wear characteristics developed through extensive research. Polymers commonly crosslinked using the electron-beam irradiation process include polyvinyl chloride (PVC), thermoplastic polyurethanes and elastomers (TPUs), polybutylene terephthalate (PBT), polyamides / nylon (PA66, PA6, PA11, PA12), polyvinylidene fluoride (PVDF), polymethylpentene (PMP), polyethylenes (LLDPE, LDPE, MDPE, HDPE, UHMWPE), and ethylene copolymers such as ethylene-vinyl acetate (EVA) and ethylene tetrafluoroethylene (ETFE). Some of the polymers utilize additives to make the polymer more readily irradiation-crosslinkable. An example of an electron-beam crosslinked part is a connector made from polyamide, designed to withstand the higher temperatures needed for soldering with the lead-free solder required by the RoHS initiative. Cross-linked polyethylene piping called PEX is commonly used as an alternative to copper piping for water lines in newer home construction. PEX piping will outlast copper and has performance characteristics that are superior to copper in many ways. Foam is also produced using electron-beam processing to produce high-quality, fine-celled, aesthetically pleasing products. Long-chain branching The resin pellets used to produce the foam and thermoformed parts can be electron-beam-processed to a dose level lower than that at which crosslinking and gels occur. These resin pellets, such as polypropylene and polyethylene, can be used to create lower-density foams and other parts, as the "melt strength" of the polymer is increased. Chain scissioning Chain scissioning or polymer degradation can also be achieved through electron-beam processing. The electron beam can cause the degradation of polymers, breaking chains and therefore reducing the molecular weight. The chain scissioning effects observed in polytetrafluoroethylene (PTFE) have been used to create fine micropowders from scrap or off-grade materials. Chain scission is the breaking apart of molecular chains to produce required molecular sub-units from the chain. Electron-beam processing provides chain scission without the use of the harsh chemicals usually utilized to initiate it. 
An example of this process is the breaking down of cellulose fibers extracted from wood in order to shorten the molecules, thereby producing a raw material that can then be used to produce biodegradable detergents and diet-food substitutes. "Teflon" (PTFE) is also electron-beam-processed, allowing it to be ground to a fine powder for use in inks and as coatings for the automotive industry. Microbiological sterilization Electron-beam processing has the ability to break the chains of DNA in living organisms, such as bacteria, resulting in microbial death and rendering the space they inhabit sterile. E-beam processing has been used for the sterilization of medical products and aseptic packaging materials for foods, as well as disinfestation, the elimination of live insects from grain, tobacco, and other unprocessed bulk crops. Sterilization with electrons has significant advantages over other methods of sterilization currently in use. The process is quick, reliable, and compatible with most materials, and does not require any quarantine following the processing. For some materials and products that are sensitive to oxidative effects, radiation tolerance levels for electron-beam irradiation may be slightly higher than for gamma exposure. This is due to the higher dose rates and shorter exposure times of e-beam irradiation, which have been shown to reduce the degradative effects of oxygen. Notes Electromagnetism Electron beams in manufacturing Industrial processes Plastics industry
Electron-beam processing
[ "Physics" ]
1,792
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
16,071,200
https://en.wikipedia.org/wiki/Husimi%20Q%20representation
The Husimi Q representation, introduced by Kôdi Husimi in 1940, is a quasiprobability distribution commonly used in quantum mechanics to represent the phase space distribution of a quantum state such as light in the phase space formulation. It is used in the field of quantum optics and particularly for tomographic purposes. It is also applied in the study of quantum effects in superconductors. Definition and properties The Husimi Q distribution (called Q-function in the context of quantum optics) is one of the simplest distributions of quasiprobability in phase space. It is constructed in such a way that observables written in anti-normal order follow the optical equivalence theorem. This means that it is essentially the density matrix put into normal order. This makes it relatively easy to calculate compared to other quasiprobability distributions through the formula which is proportional to a trace of the operator involving the projection to the coherent state . It produces a pictorial representation of the state ρ to illustrate several of its mathematical properties. Its relative ease of calculation is related to its smoothness compared to other quasiprobability distributions. In fact, it can be understood as the Weierstrass transform of the Wigner quasiprobability distribution, i.e. a smoothing by a Gaussian filter, Such Gauss transforms being essentially invertible in the Fourier domain via the convolution theorem, Q provides an equivalent description of quantum mechanics in phase space to that furnished by the Wigner distribution. Alternatively, one can compute the Husimi Q distribution by taking the Segal–Bargmann transform of the wave function and then computing the associated probability density. Q is normalized to unity, and is non-negative definite and bounded: Despite the fact that is non-negative definite and bounded like a standard joint probability distribution, this similarity may be misleading, because different coherent states are not orthogonal. Two different points do not represent disjoint physical contingencies; thus, Q(α) does not represent the probability of mutually exclusive states, as needed in the third axiom of probability theory. may also be obtained by a different Weierstrass transform of the Glauber–Sudarshan P representation, given , and the standard inner product of coherent states. See also Nonclassical light Glauber–Sudarshan P-representation Wehrl entropy References Quantum optics Particle statistics
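As a minimal numerical sketch (not part of the original text), the code below evaluates the Husimi Q function of a Fock state |n⟩ on a phase-space grid, using the common convention Q(α) = ⟨α|ρ|α⟩/π; for a pure Fock state this reduces to Q(α) = exp(−|α|²)|α|^(2n)/(π n!). The grid size and the choice n = 2 are arbitrary.

```python
# Husimi Q function of a Fock state |n>, assuming the convention
# Q(alpha) = <alpha|rho|alpha> / pi. For a pure Fock state this is
# Q(alpha) = exp(-|alpha|^2) * |alpha|^(2n) / (pi * n!).
import numpy as np
from math import factorial, pi


def husimi_q_fock(n: int, alpha: np.ndarray) -> np.ndarray:
    """Husimi Q distribution of the Fock state |n> evaluated at complex alpha."""
    a2 = np.abs(alpha) ** 2
    return np.exp(-a2) * a2 ** n / (pi * factorial(n))


# Evaluate on a phase-space grid and check normalization and boundedness.
x = np.linspace(-5, 5, 401)
X, Y = np.meshgrid(x, x)
Q = husimi_q_fock(n=2, alpha=X + 1j * Y)

dA = (x[1] - x[0]) ** 2
print("integral of Q over phase space ~", float(Q.sum() * dA))  # ~ 1.0
print("max of Q:", float(Q.max()), "<= 1/pi =", 1 / pi)         # non-negative and bounded
```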
Husimi Q representation
[ "Physics" ]
489
[ "Particle statistics", "Statistical mechanics", "Quantum optics", "Quantum mechanics" ]
49,718
https://en.wikipedia.org/wiki/Poynting%20vector
In physics, the Poynting vector (or Umov–Poynting vector) represents the directional energy flux (the energy transfer per unit area, per unit time) or power flow of an electromagnetic field. The SI unit of the Poynting vector is the watt per square metre (W/m2); kg/s3 in base SI units. It is named after its discoverer John Henry Poynting who first derived it in 1884. Nikolay Umov is also credited with formulating the concept. Oliver Heaviside also discovered it independently in the more general form that recognises the freedom of adding the curl of an arbitrary vector field to the definition. The Poynting vector is used throughout electromagnetics in conjunction with Poynting's theorem, the continuity equation expressing conservation of electromagnetic energy, to calculate the power flow in electromagnetic fields. Definition In Poynting's original paper and in most textbooks, the Poynting vector is defined as the cross product where bold letters represent vectors and E is the electric field vector; H is the magnetic field's auxiliary field vector or magnetizing field. This expression is often called the Abraham form and is the most widely used. The Poynting vector is usually denoted by S or N. In simple terms, the Poynting vector S depicts the direction and rate of transfer of energy, that is power, due to electromagnetic fields in a region of space that may or may not be empty. More rigorously, it is the quantity that must be used to make Poynting's theorem valid. Poynting's theorem essentially says that the difference between the electromagnetic energy entering a region and the electromagnetic energy leaving a region must equal the energy converted or dissipated in that region, that is, turned into a different form of energy (often heat). So if one accepts the validity of the Poynting vector description of electromagnetic energy transfer, then Poynting's theorem is simply a statement of the conservation of energy. If electromagnetic energy is not gained from or lost to other forms of energy within some region (e.g., mechanical energy, or heat), then electromagnetic energy is locally conserved within that region, yielding a continuity equation as a special case of Poynting's theorem: where is the energy density of the electromagnetic field. This frequent condition holds in the following simple example in which the Poynting vector is calculated and seen to be consistent with the usual computation of power in an electric circuit. Example: Power flow in a coaxial cable Although problems in electromagnetics with arbitrary geometries are notoriously difficult to solve, we can find a relatively simple solution in the case of power transmission through a section of coaxial cable analyzed in cylindrical coordinates as depicted in the accompanying diagram. We can take advantage of the model's symmetry: no dependence on θ (circular symmetry) nor on Z (position along the cable). The model (and solution) can be considered simply as a DC circuit with no time dependence, but the following solution applies equally well to the transmission of radio frequency power, as long as we are considering an instant of time (during which the voltage and current don't change), and over a sufficiently short segment of cable (much smaller than a wavelength, so that these quantities are not dependent on Z). The coaxial cable is specified as having an inner conductor of radius R1 and an outer conductor whose inner radius is R2 (its thickness beyond R2 doesn't affect the following analysis). 
In between R1 and R2 the cable contains an ideal dielectric material of relative permittivity εr and we assume conductors that are non-magnetic (so μ = μ0) and lossless (perfect conductors), all of which are good approximations to real-world coaxial cable in typical situations. The center conductor is held at voltage V and draws a current I toward the right, so we expect a total power flow of P = V · I according to basic laws of electricity. By evaluating the Poynting vector, however, we are able to identify the profile of power flow in terms of the electric and magnetic fields inside the coaxial cable. The electric fields are of course zero inside of each conductor, but in between the conductors () symmetry dictates that they are strictly in the radial direction and it can be shown (using Gauss's law) that they must obey the following form: W can be evaluated by integrating the electric field from to which must be the negative of the voltage V: so that: The magnetic field, again by symmetry, can only be non-zero in the θ direction, that is, a vector field looping around the center conductor at every radius between R1 and R2. Inside the conductors themselves the magnetic field may or may not be zero, but this is of no concern since the Poynting vector in these regions is zero due to the electric field's being zero. Outside the entire coaxial cable, the magnetic field is identically zero since paths in this region enclose a net current of zero (+I in the center conductor and −I in the outer conductor), and again the electric field is zero there anyway. Using Ampère's law in the region from R1 to R2, which encloses the current +I in the center conductor but with no contribution from the current in the outer conductor, we find at radius r: Now, from an electric field in the radial direction, and a tangential magnetic field, the Poynting vector, given by the cross-product of these, is only non-zero in the Z direction, along the direction of the coaxial cable itself, as we would expect. Again only a function of r, we can evaluate S(r): where W is given above in terms of the center conductor voltage V. The total power flowing down the coaxial cable can be computed by integrating over the entire cross section A of the cable in between the conductors: Substituting the earlier solution for the constant W we find: that is, the power given by integrating the Poynting vector over a cross section of the coaxial cable is exactly equal to the product of voltage and current as one would have computed for the power delivered using basic laws of electricity. Other similar examples in which the P = V · I result can be analytically calculated are: the parallel-plate transmission line, using Cartesian coordinates, and the two-wire transmission line, using bipolar cylindrical coordinates. Other forms In the "microscopic" version of Maxwell's equations, this definition must be replaced by a definition in terms of the electric field E and the magnetic flux density B (described later in the article). It is also possible to combine the electric displacement field D with the magnetic flux B to get the Minkowski form of the Poynting vector, or use D and H to construct yet another version. The choice has been controversial: Pfeifer et al. summarize and to a certain extent resolve the century-long dispute between proponents of the Abraham and Minkowski forms (see Abraham–Minkowski controversy). The Poynting vector represents the particular case of an energy flux vector for electromagnetic energy. 
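As a quick numerical check of the coaxial-cable result above, the sketch below integrates the axial Poynting component S(r) = E(r)H(r) over the annular cross-section and recovers P = V·I. The field profiles E(r) = V/(r ln(R2/R1)) and H(r) = I/(2πr) follow the symmetry arguments given in the text; the particular voltage, current and radii are assumed for illustration.

```python
# Numerical check (illustrative values) that integrating the Poynting vector over
# the coax cross-section reproduces P = V * I.
# Field profiles from the symmetry arguments in the text:
#   E(r) = V / (r * ln(R2/R1))  (radial),   H(r) = I / (2*pi*r)  (azimuthal)
import numpy as np

V, I = 12.0, 2.0          # volts, amperes (assumed)
R1, R2 = 1.0e-3, 3.5e-3   # inner and outer radii in metres (assumed)

r = np.linspace(R1, R2, 200_001)
E = V / (r * np.log(R2 / R1))   # V/m
H = I / (2 * np.pi * r)         # A/m
S = E * H                       # axial Poynting component, W/m^2

# P = integral of S(r) * 2*pi*r dr over R1 <= r <= R2 (simple Riemann sum)
P = np.sum(S * 2 * np.pi * r) * (r[1] - r[0])
print(P, "W  vs  V*I =", V * I, "W")   # both ~ 24 W
```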
However, any type of energy has its direction of movement in space, as well as its density, so energy flux vectors can be defined for other types of energy as well, e.g., for mechanical energy. The Umov–Poynting vector discovered by Nikolay Umov in 1874 describes energy flux in liquid and elastic media in a completely generalized view. Interpretation The Poynting vector appears in Poynting's theorem (see that article for the derivation), an energy-conservation law: where Jf is the current density of free charges and u is the electromagnetic energy density for linear, nondispersive materials, given by where E is the electric field; D is the electric displacement field; B is the magnetic flux density; H is the magnetizing field. The first term in the right-hand side represents the electromagnetic energy flow into a small volume, while the second term subtracts the work done by the field on free electrical currents, which thereby exits from electromagnetic energy as dissipation, heat, etc. In this definition, bound electrical currents are not included in this term and instead contribute to S and u. For light in free space, the linear momentum density is For linear, nondispersive and isotropic (for simplicity) materials, the constitutive relations can be written as where ε is the permittivity of the material; μ is the permeability of the material. Here ε and μ are scalar, real-valued constants independent of position, direction, and frequency. In principle, this limits Poynting's theorem in this form to fields in vacuum and nondispersive linear materials. A generalization to dispersive materials is possible under certain circumstances at the cost of additional terms. One consequence of the Poynting formula is that for the electromagnetic field to do work, both magnetic and electric fields must be present. The magnetic field alone or the electric field alone cannot do any work. Plane waves In a propagating electromagnetic plane wave in an isotropic lossless medium, the instantaneous Poynting vector always points in the direction of propagation while rapidly oscillating in magnitude. This can be simply seen given that in a plane wave, the magnitude of the magnetic field H(r,t) is given by the magnitude of the electric field vector E(r,t) divided by η, the intrinsic impedance of the transmission medium: where |A| represents the vector norm of A. Since E and H are at right angles to each other, the magnitude of their cross product is the product of their magnitudes. Without loss of generality let us take X to be the direction of the electric field and Y to be the direction of the magnetic field. The instantaneous Poynting vector, given by the cross product of E and H will then be in the positive Z direction: Finding the time-averaged power in the plane wave then requires averaging over the wave period (the inverse frequency of the wave): where Erms is the root mean square (RMS) electric field amplitude. In the important case that E(t) is sinusoidally varying at some frequency with peak amplitude Epeak, Erms is , with the average Poynting vector then given by: This is the most common form for the energy flux of a plane wave, since sinusoidal field amplitudes are most often expressed in terms of their peak values, and complicated problems are typically solved considering only one frequency at a time. However, the expression using Erms is totally general, applying, for instance, in the case of noise whose RMS amplitude can be measured but where the "peak" amplitude is meaningless. 
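The time-average relation described above can be checked numerically: for an in-phase sinusoidal plane wave, averaging the instantaneous product E(t)H(t) over one period gives E_peak²/(2η) = E_rms²/η. The peak field, frequency and use of the free-space impedance in the sketch below are illustrative assumptions.

```python
# Time-averaged Poynting magnitude of a sinusoidal plane wave (illustrative values).
# The average of E(t)*H(t) over one period equals Epeak^2 / (2*eta) = Erms^2 / eta.
import numpy as np

eta0 = 376.73     # intrinsic impedance of free space, ohms (approximate)
E_peak = 150.0    # V/m (assumed)
f = 1.0e9         # Hz (assumed; the averaged result does not depend on frequency)

t = np.linspace(0.0, 1.0 / f, 100_001)
E = E_peak * np.cos(2 * np.pi * f * t)
H = E / eta0                      # in phase, |H| = |E| / eta
S_inst = E * H                    # instantaneous Poynting magnitude

print(np.mean(S_inst))                    # numerical time average, ~29.9 W/m^2
print(E_peak ** 2 / (2 * eta0))           # closed form via the peak amplitude
print((E_peak / np.sqrt(2)) ** 2 / eta0)  # same value via the RMS amplitude
```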
In free space the intrinsic impedance η is simply given by the impedance of free space η0 ≈377Ω. In non-magnetic dielectrics (such as all transparent materials at optical frequencies) with a specified dielectric constant εr, or in optics with a material whose refractive index , the intrinsic impedance is found as: In optics, the value of radiated flux crossing a surface, thus the average Poynting vector component in the direction normal to that surface, is technically known as the irradiance, more often simply referred to as the intensity (a somewhat ambiguous term). Formulation in terms of microscopic fields The "microscopic" (differential) version of Maxwell's equations admits only the fundamental fields E and B, without a built-in model of material media. Only the vacuum permittivity and permeability are used, and there is no D or H. When this model is used, the Poynting vector is defined as where μ0 is the vacuum permeability; E is the electric field vector; B is the magnetic flux. This is actually the general expression of the Poynting vector. The corresponding form of Poynting's theorem is where J is the total current density and the energy density u is given by where ε0 is the vacuum permittivity. It can be derived directly from Maxwell's equations in terms of total charge and current and the Lorentz force law only. The two alternative definitions of the Poynting vector are equal in vacuum or in non-magnetic materials, where . In all other cases, they differ in that and the corresponding u are purely radiative, since the dissipation term covers the total current, while the E × H definition has contributions from bound currents which are then excluded from the dissipation term. Since only the microscopic fields E and B occur in the derivation of and the energy density, assumptions about any material present are avoided. The Poynting vector and theorem and expression for energy density are universally valid in vacuum and all materials. Time-averaged Poynting vector The above form for the Poynting vector represents the instantaneous power flow due to instantaneous electric and magnetic fields. More commonly, problems in electromagnetics are solved in terms of sinusoidally varying fields at a specified frequency. The results can then be applied more generally, for instance, by representing incoherent radiation as a superposition of such waves at different frequencies and with fluctuating amplitudes. We would thus not be considering the instantaneous and used above, but rather a complex (vector) amplitude for each which describes a coherent wave's phase (as well as amplitude) using phasor notation. These complex amplitude vectors are not functions of time, as they are understood to refer to oscillations over all time. A phasor such as is understood to signify a sinusoidally varying field whose instantaneous amplitude follows the real part of where is the (radian) frequency of the sinusoidal wave being considered. In the time domain, it will be seen that the instantaneous power flow will be fluctuating at a frequency of 2ω. But what is normally of interest is the average power flow in which those fluctuations are not considered. In the math below, this is accomplished by integrating over a full cycle . The following quantity, still referred to as a "Poynting vector", is expressed directly in terms of the phasors as: where ∗ denotes the complex conjugate. 
The time-averaged power flow (according to the instantaneous Poynting vector averaged over a full cycle, for instance) is then given by the real part of . The imaginary part is usually ignored, however, it signifies "reactive power" such as the interference due to a standing wave or the near field of an antenna. In a single electromagnetic plane wave (rather than a standing wave which can be described as two such waves travelling in opposite directions), and are exactly in phase, so is simply a real number according to the above definition. The equivalence of to the time-average of the instantaneous Poynting vector can be shown as follows. The average of the instantaneous Poynting vector S over time is given by: The second term is the double-frequency component having an average value of zero, so we find: According to some conventions, the factor of 1/2 in the above definition may be left out. Multiplication by 1/2 is required to properly describe the power flow since the magnitudes of and refer to the peak fields of the oscillating quantities. If rather the fields are described in terms of their root mean square (RMS) values (which are each smaller by the factor ), then the correct average power flow is obtained without multiplication by 1/2. Resistive dissipation If a conductor has significant resistance, then, near the surface of that conductor, the Poynting vector would be tilted toward and impinge upon the conductor. Once the Poynting vector enters the conductor, it is bent to a direction that is almost perpendicular to the surface. This is a consequence of Snell's law and the very slow speed of light inside a conductor. The definition and computation of the speed of light in a conductor can be given. Inside the conductor, the Poynting vector represents energy flow from the electromagnetic field into the wire, producing resistive Joule heating in the wire. For a derivation that starts with Snell's law see Reitz page 454. Radiation pressure The density of the linear momentum of the electromagnetic field is S/c2 where S is the magnitude of the Poynting vector and c is the speed of light in free space. The radiation pressure exerted by an electromagnetic wave on the surface of a target is given by Uniqueness of the Poynting vector The Poynting vector occurs in Poynting's theorem only through its divergence , that is, it is only required that the surface integral of the Poynting vector around a closed surface describe the net flow of electromagnetic energy into or out of the enclosed volume. This means that adding a solenoidal vector field (one with zero divergence) to S will result in another field that satisfies this required property of a Poynting vector field according to Poynting's theorem. Since the divergence of any curl is zero, one can add the curl of any vector field to the Poynting vector and the resulting vector field S′ will still satisfy Poynting's theorem. However even though the Poynting vector was originally formulated only for the sake of Poynting's theorem in which only its divergence appears, it turns out that the above choice of its form is unique. The following section gives an example which illustrates why it is not acceptable to add an arbitrary solenoidal field to E × H. Static fields The consideration of the Poynting vector in static fields shows the relativistic nature of the Maxwell equations and allows a better understanding of the magnetic component of the Lorentz force, . 
To illustrate, the accompanying picture is considered, which describes the Poynting vector in a cylindrical capacitor, which is located in an H field (pointing into the page) generated by a permanent magnet. Although there are only static electric and magnetic fields, the calculation of the Poynting vector produces a clockwise circular flow of electromagnetic energy, with no beginning or end. While the circulating energy flow may seem unphysical, its existence is necessary to maintain conservation of angular momentum. The momentum of an electromagnetic wave in free space is equal to its power divided by c, the speed of light. Therefore, the circular flow of electromagnetic energy implies an angular momentum. If one were to connect a wire between the two plates of the charged capacitor, then there would be a Lorentz force on that wire while the capacitor is discharging due to the discharge current and the crossed magnetic field; that force would be tangential to the central axis and thus add angular momentum to the system. That angular momentum would match the "hidden" angular momentum, revealed by the Poynting vector, circulating before the capacitor was discharged. See also Wave vector References Further reading Electromagnetic radiation Optical quantities Vectors (mathematics and physics)
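Following the radiation-pressure discussion above (momentum density S/c²), the sketch below evaluates the standard results p = ⟨S⟩/c for a perfectly absorbing target and 2⟨S⟩/c for a perfect mirror, using an assumed irradiance roughly that of sunlight at Earth; the numbers are illustrative and not taken from the article.

```python
# Radiation-pressure sketch: p = <S>/c for a perfectly absorbing target and
# 2<S>/c for a perfect reflector (standard results; the irradiance is assumed).
c = 299_792_458.0   # speed of light in free space, m/s
S_avg = 1361.0      # W/m^2, roughly the solar irradiance at Earth (assumed value)

p_absorb = S_avg / c
p_reflect = 2 * S_avg / c
print(f"absorbing surface: {p_absorb:.2e} Pa")   # ~4.5e-6 Pa
print(f"perfect mirror:    {p_reflect:.2e} Pa")  # ~9.1e-6 Pa
```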
Poynting vector
[ "Physics", "Mathematics" ]
3,921
[ "Physical phenomena", "Physical quantities", "Electromagnetic radiation", "Quantity", "Radiation", "Optical quantities" ]
49,887
https://en.wikipedia.org/wiki/Standard%20enthalpy%20of%20formation
In chemistry and thermodynamics, the standard enthalpy of formation or standard heat of formation of a compound is the change of enthalpy during the formation of 1 mole of the substance from its constituent elements in their reference state, with all substances in their standard states. The standard pressure value is recommended by IUPAC, although prior to 1982 the value 1.00 atm (101.325 kPa) was used. There is no standard temperature. Its symbol is ΔfH⦵. The superscript Plimsoll on this symbol indicates that the process has occurred under standard conditions at the specified temperature (usually 25 °C or 298.15 K). Standard states are defined for various types of substances. For a gas, it is the hypothetical state the gas would assume if it obeyed the ideal gas equation at a pressure of 1 bar. For a gaseous or solid solute present in a diluted ideal solution, the standard state is the hypothetical state of concentration of the solute of exactly one mole per liter (1 M) at a pressure of 1 bar extrapolated from infinite dilution. For a pure substance or a solvent in a condensed state (a liquid or a solid) the standard state is the pure liquid or solid under a pressure of 1 bar. For elements that have multiple allotropes, the reference state usually is chosen to be the form in which the element is most stable under 1 bar of pressure. One exception is phosphorus, for which the most stable form at 1 bar is black phosphorus, but white phosphorus is chosen as the standard reference state for zero enthalpy of formation. For example, the standard enthalpy of formation of carbon dioxide is the enthalpy of the following reaction under the above conditions: C(s, graphite) + O2(g) -> CO2(g) All elements are written in their standard states, and one mole of product is formed. This is true for all enthalpies of formation. The standard enthalpy of formation is measured in units of energy per amount of substance, usually stated in kilojoule per mole (kJ mol−1), but also in kilocalorie per mole, joule per mole or kilocalorie per gram (any combination of these units conforming to the energy per mass or amount guideline). All elements in their reference states (oxygen gas, solid carbon in the form of graphite, etc.) have a standard enthalpy of formation of zero, as there is no change involved in their formation. The formation reaction is a constant pressure and constant temperature process. Since the pressure of the standard formation reaction is fixed at 1 bar, the standard formation enthalpy or reaction heat is a function of temperature. For tabulation purposes, standard formation enthalpies are all given at a single temperature: 298 K, represented by the symbol . Hess' law For many substances, the formation reaction may be considered as the sum of a number of simpler reactions, either real or fictitious. The enthalpy of reaction can then be analyzed by applying Hess' law, which states that the sum of the enthalpy changes for a number of individual reaction steps equals the enthalpy change of the overall reaction. This is true because enthalpy is a state function, whose value for an overall process depends only on the initial and final states and not on any intermediate states. Examples are given in the following sections. Ionic compounds: Born–Haber cycle For ionic compounds, the standard enthalpy of formation is equivalent to the sum of several terms included in the Born–Haber cycle. 
For example, the formation of lithium fluoride, Li(s) + 1/2 F2(g) -> LiF(s) may be considered as the sum of several steps, each with its own enthalpy (or energy, approximately): , the standard enthalpy of atomization (or sublimation) of solid lithium. , the first ionization energy of gaseous lithium. , the standard enthalpy of atomization (or bond energy) of fluorine gas. , the electron affinity of a fluorine atom. , the lattice energy of lithium fluoride. The sum of these enthalpies gives the standard enthalpy of formation () of lithium fluoride: In practice, the enthalpy of formation of lithium fluoride can be determined experimentally, but the lattice energy cannot be measured directly. The equation is therefore rearranged to evaluate the lattice energy: Organic compounds The formation reactions for most organic compounds are hypothetical. For instance, carbon and hydrogen will not directly react to form methane (CH4), so that the standard enthalpy of formation cannot be measured directly. However, the standard enthalpy of combustion is readily measurable using bomb calorimetry. The standard enthalpy of formation is then determined using Hess's law. The combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O is equivalent to the sum of the hypothetical decomposition into elements followed by the combustion of the elements to form carbon dioxide (CO2) and water (H2O): CH4 -> C + 2H2 C + O2 -> CO2 2H2 + O2 -> 2H2O Applying Hess's law, Solving for the standard enthalpy of formation, The value of is determined to be −74.8 kJ/mol. The negative sign shows that the reaction, if it were to proceed, would be exothermic; that is, methane is enthalpically more stable than hydrogen gas and carbon. It is possible to predict heats of formation for simple unstrained organic compounds with the heat of formation group additivity method. Use in calculation for other reactions The standard enthalpy change of any reaction can be calculated from the standard enthalpies of formation of reactants and products using Hess's law. A given reaction is considered as the decomposition of all reactants into elements in their standard states, followed by the formation of all products. The heat of reaction is then minus the sum of the standard enthalpies of formation of the reactants (each being multiplied by its respective stoichiometric coefficient, ) plus the sum of the standard enthalpies of formation of the products (each also multiplied by its respective stoichiometric coefficient), as shown in the equation below: If the standard enthalpy of the products is less than the standard enthalpy of the reactants, the standard enthalpy of reaction is negative. This implies that the reaction is exothermic. The converse is also true; the standard enthalpy of reaction is positive for an endothermic reaction. This calculation has a tacit assumption of ideal solution between reactants and products where the enthalpy of mixing is zero. For example, for the combustion of methane, CH4 + 2O2 -> CO2 + 2H2O: However, O2 is an element in its standard state, so that , and the heat of reaction is simplified to which is the equation in the previous section for the enthalpy of combustion . Key concepts for enthalpy calculations When a reaction is reversed, the magnitude of ΔH stays the same, but the sign changes. When the balanced equation for a reaction is multiplied by an integer, the corresponding value of ΔH must be multiplied by that integer as well. 
The change in enthalpy for a reaction can be calculated from the enthalpies of formation of the reactants and the products. Elements in their standard states make no contribution to the enthalpy calculations for the reaction, since the enthalpy of an element in its standard state is zero. Allotropes of an element other than the standard state generally have non-zero standard enthalpies of formation. Examples: standard enthalpies of formation at 25 °C Thermochemical properties of selected substances at 298.15 K and 1 atm Inorganic substances Aliphatic hydrocarbons Other organic compounds See also Calorimetry Thermochemistry References External links NIST Chemistry WebBook Enthalpy Thermochemistry
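The products-minus-reactants rule described above lends itself to a short calculation. The sketch below applies it to the combustion of methane using commonly tabulated approximate formation enthalpies (kJ/mol at 298.15 K); the helper function and the specific values are illustrative and are not drawn from this article's tables.

```python
# Products-minus-reactants rule for a standard reaction enthalpy.
# Formation enthalpies are commonly tabulated approximate values in kJ/mol
# at 298.15 K, not numbers quoted from the tables referenced above.
def reaction_enthalpy(reactants, products, dHf):
    """dH_rxn = sum(nu * dHf(products)) - sum(nu * dHf(reactants))."""
    total = lambda side: sum(nu * dHf[species] for species, nu in side.items())
    return total(products) - total(reactants)


dHf = {
    "CH4(g)": -74.8,
    "O2(g)": 0.0,      # element in its reference state
    "CO2(g)": -393.5,
    "H2O(l)": -285.8,
}

# CH4 + 2 O2 -> CO2 + 2 H2O
dH = reaction_enthalpy(
    reactants={"CH4(g)": 1, "O2(g)": 2},
    products={"CO2(g)": 1, "H2O(l)": 2},
    dHf=dHf,
)
print(f"Standard enthalpy of combustion of methane: {dH:.1f} kJ/mol")  # ~ -890 kJ/mol
```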
Standard enthalpy of formation
[ "Physics", "Chemistry", "Mathematics" ]
1,688
[ "Thermodynamic properties", "Thermochemistry", "Physical quantities", "Quantity", "Enthalpy" ]
50,416
https://en.wikipedia.org/wiki/Differential%20calculus
In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve. The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point. Differential calculus and integral calculus are connected by the fundamental theorem of calculus. This states that differentiation is the reverse process to integration. Differentiation has applications in nearly all quantitative disciplines. In physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of the velocity with respect to time is acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body; rearranging this derivative statement leads to the famous equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories. Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra. Derivative The derivative of at the point is the slope of the tangent to . In order to gain an intuition for this, one must first be familiar with finding the slope of a linear equation, written in the form . The slope of an equation is its steepness. It can be found by picking any two points and dividing the change in by the change in , meaning that . For, the graph of has a slope of , as shown in the diagram below: For brevity, is often written as , with being the Greek letter delta, meaning 'change in'. The slope of a linear equation is constant, meaning that the steepness is the same everywhere. However, many graphs such as vary in their steepness. This means that you can no longer pick any two arbitrary points and compute the slope. Instead, the slope of the graph can be computed by considering the tangent line—a line that 'just touches' a particular point. The slope of a curve at a particular point is equal to the slope of the tangent to that point. For example, has a slope of at because the slope of the tangent line to that point is equal to : The derivative of a function is then simply the slope of this tangent line. Even though the tangent line only touches a single point at the point of tangency, it can be approximated by a line that goes through two points. This is known as a secant line. 
If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line, and, as a result, its slope is also very similar: The advantage of using a secant line is that its slope can be calculated directly. Consider the two points on the graph and , where is a small number. As before, the slope of the line passing through these two points can be calculated with the formula . This gives As gets closer and closer to , the slope of the secant line gets closer and closer to the slope of the tangent line. This is formally written as The above expression means 'as gets closer and closer to 0, the slope of the secant line gets closer and closer to a certain value'. The value that is being approached is the derivative of ; this can be written as . If , the derivative can also be written as , with representing an infinitesimal change. For example, represents an infinitesimal change in x. In summary, if , then the derivative of is provided such a limit exists. We have thus succeeded in properly defining the derivative of a function, meaning that the 'slope of the tangent line' now has a precise mathematical meaning. Differentiating a function using the above definition is known as differentiation from first principles. Here is a proof, using differentiation from first principles, that the derivative of is : As approaches , approaches . Therefore, . This proof can be generalised to show that if and are constants. This is known as the power rule. For example, . However, many other functions cannot be differentiated as easily as polynomial functions, meaning that sometimes further techniques are needed to find the derivative of a function. These techniques include the chain rule, product rule, and quotient rule. Other functions cannot be differentiated at all, giving rise to the concept of differentiability. A closely related concept to the derivative of a function is its differential. When and are real variables, the derivative of at is the slope of the tangent line to the graph of at . Because the source and target of are one-dimensional, the derivative of is a real number. If and are vectors, then the best linear approximation to the graph of depends on how changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted . The linearization of in all directions at once is called the total derivative. History of differentiation The concept of a derivative in the sense of a tangent line is a very old one, familiar to ancient Greek mathematicians such as Euclid (c. 300 BC), Archimedes (c. 287–212 BC), and Apollonius of Perga (c. 262–190 BC). Archimedes also made use of indivisibles, although these were primarily used to study areas and volumes rather than derivatives and tangents (see The Method of Mechanical Theorems). The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II (1114–1185); indeed, it has been argued that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem". The mathematician, Sharaf al-Dīn al-Tūsī (1135–1213), in his Treatise on Equations, established conditions for some cubic equations to have solutions, by finding the maxima of appropriate cubic polynomials. 
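The limit definition above can be watched numerically: as h shrinks, the secant slope (f(x+h) − f(x))/h settles on the derivative. The function f(x) = x² and the point x = 3 in the sketch below are arbitrary choices for illustration.

```python
# Differentiation from first principles, numerically: the secant slope
# (f(x + h) - f(x)) / h approaches the derivative as h shrinks.
def f(x: float) -> float:
    return x ** 2


x0 = 3.0
for h in [1.0, 0.1, 0.01, 0.001, 1e-6]:
    secant_slope = (f(x0 + h) - f(x0)) / h
    print(f"h = {h:<8} secant slope = {secant_slope:.6f}")

# The power rule gives f'(x) = 2x, so the slopes approach 2 * 3 = 6.
print("exact derivative:", 2 * x0)
```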
He obtained, for example, that the maximum (for positive ) of the cubic occurs when , and concluded therefrom that the equation has exactly one positive solution when , and two positive solutions whenever . The historian of science, Roshdi Rashed, has argued that al-Tūsī must have used the derivative of the cubic to obtain this result. Rashed's conclusion has been contested by other scholars, however, who argue that he could have obtained the result by other methods which do not require the derivative of the function to be known. The modern development of calculus is usually credited to Isaac Newton (1643–1727) and Gottfried Wilhelm Leibniz (1646–1716), who provided independent and unified approaches to differentiation and derivatives. The key insight that earned them this credit, however, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes. For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Pierre de Fermat (1607–1665), Isaac Barrow (1630–1677), René Descartes (1596–1650), Christiaan Huygens (1629–1695), Blaise Pascal (1623–1662) and John Wallis (1616–1703). Regarding Fermat's influence, Newton once wrote in a letter that "I had the hint of this method [of fluxions] from Fermat's way of drawing tangents, and by applying it to abstract equations, directly and invertedly, I made it general." Isaac Barrow is generally given credit for the early development of the derivative. Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today. Since the 17th century, many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin Louis Cauchy (1789–1857), Bernhard Riemann (1826–1866), and Karl Weierstrass (1815–1897). It was also during this period that differentiation was generalized to Euclidean space and the complex plane. The 20th century brought two major steps towards our present understanding and practice of differentiation: Lebesgue integration, besides extending integral calculus to many more functions, clarified the relation between differentiation and integration with the notion of absolute continuity. Later, the theory of distributions (after Laurent Schwartz) extended differentiation to generalized functions (e.g., the Dirac delta function previously introduced in quantum mechanics) and became fundamental to modern applied analysis, especially through the use of weak solutions to partial differential equations. Applications of derivatives Optimization If is a differentiable function on (or an open interval) and is a local maximum or a local minimum of , then the derivative of at is zero. Points where are called critical points or stationary points (and the value of at is called a critical value). If is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points. If is twice differentiable, then conversely, a critical point of can be analysed by considering the second derivative of at : if it is positive, is a local minimum; if it is negative, is a local maximum; if it is zero, then could be a local minimum, a local maximum, or neither. 
(For example, has a critical point at , but it has neither a maximum nor a minimum there, whereas has a critical point at and a minimum and a maximum, respectively, there.) This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of the on each side of the critical point. Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints. This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points. In higher dimensions, a critical point of a scalar valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "saddle point", and if none of these cases hold (i.e., some of the eigenvalues are zero) then the test is considered to be inconclusive. Calculus of variations One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations. Physics Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the "time derivative" — the rate of change over time — is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics: velocity is the derivative (with respect to time) of an object's displacement (distance from the original position) acceleration is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position. For example, if an object's position on a line is given by then the object's velocity is and the object's acceleration is which is constant. Differential equations A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. 
A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation Here is the temperature of the rod at position and time and is a constant that depends on how fast heat diffuses through the rod. Mean value theorem The mean value theorem gives a relationship between values of the derivative and values of the original function. If is a real-valued function and and are numbers with , then the mean value theorem says that under mild hypotheses, the slope between the two points and is equal to the slope of the tangent line to at some point between and . In other words, In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of must equal the slope of one of the tangent lines of . All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function. Taylor polynomials and Taylor series The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function at the point is a linear polynomial , and it may be possible to get a better approximation by considering a quadratic polynomial . Still better might be a cubic polynomial , and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients , , , and that makes the approximation as good as possible. In the neighbourhood of , for the best possible choice is always , and for the best possible choice is always . For , , and higher-degree coefficients, these coefficients are determined by higher derivatives of . should always be , and should always be . Using these coefficients gives the Taylor polynomial of . The Taylor polynomial of degree is the polynomial of degree which best approximates , and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If is a polynomial of degree less than or equal to , then the Taylor polynomial of degree equals . The limit of the Taylor polynomials is an infinite series called the Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions. 
It is impossible for functions with discontinuities or sharp corners to be analytic; moreover, there exist smooth functions which are also not analytic. Implicit function theorem Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if , then the circle is the set of all pairs such that . This set is called the zero set of , and is not the same as the graph of , which is a paraboloid. The implicit function theorem converts relations such as into functions. It states that if is continuously differentiable, then around most points, the zero set of looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of . The circle, for instance, can be pasted together from the graphs of the two functions . In a neighborhood of every point on the circle except and , one of these two functions has a graph that looks like the circle. (These two functions also happen to meet and , but this is not guaranteed by the implicit function theorem.) The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together. See also Differential (calculus) Numerical differentiation Techniques for differentiation List of calculus topics Notation for differentiation Notes References Citations Works cited Other sources Boman, Eugene, and Robert Rogers. Differential Calculus: From Practice to Theory. 2022, personal.psu.edu/ecb5/DiffCalc.pdf . Calculus
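As a small numerical companion to the Taylor-polynomial discussion above, the sketch below compares partial sums of the Maclaurin series of exp with the true value at x = 1; the choice of function and evaluation point is mine, made only for illustration.

```python
# Taylor (Maclaurin) polynomials of exp(x): higher degree, better approximation.
import math


def taylor_exp(x: float, degree: int) -> float:
    """Taylor polynomial of exp about 0: sum_{k=0}^{degree} x^k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(degree + 1))


x = 1.0
for n in [1, 2, 4, 8]:
    approx = taylor_exp(x, n)
    print(f"degree {n}: {approx:.8f}  (error {abs(approx - math.exp(x)):.2e})")
print("exp(1) =", math.exp(x))
```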
Differential calculus
[ "Mathematics" ]
3,730
[ "Differential calculus", "Calculus" ]
50,425
https://en.wikipedia.org/wiki/Quantum%20Hall%20effect
The quantum Hall effect (or integer quantum Hall effect) is a quantized version of the Hall effect which is observed in two-dimensional electron systems subjected to low temperatures and strong magnetic fields, in which the Hall resistance exhibits steps that take on the quantized values where is the Hall voltage, is the channel current, is the elementary charge and is the Planck constant. The divisor can take on either integer () or fractional () values. Here, is roughly but not exactly equal to the filling factor of Landau levels. The quantum Hall effect is referred to as the integer or fractional quantum Hall effect depending on whether is an integer or fraction, respectively. The striking feature of the integer quantum Hall effect is the persistence of the quantization (i.e. the Hall plateau) as the electron density is varied. Since the electron density remains constant when the Fermi level is in a clean spectral gap, this situation corresponds to one where the Fermi level is an energy with a finite density of states, though these states are localized (see Anderson localization). The fractional quantum Hall effect is more complicated and still considered an open research problem. Its existence relies fundamentally on electron–electron interactions. In 1988, it was proposed that there was a quantum Hall effect without Landau levels. This quantum Hall effect is referred to as the quantum anomalous Hall (QAH) effect. There is also a new concept of the quantum spin Hall effect, which is an analogue of the quantum Hall effect, where spin currents flow instead of charge currents. Applications Electrical resistance standards The quantization of the Hall conductance () has the important property of being exceedingly precise. Actual measurements of the Hall conductance have been found to be integer or fractional multiples of to better than one part in a billion. It has allowed for the definition of a new practical standard for electrical resistance, based on the resistance quantum given by the von Klitzing constant . This is named after Klaus von Klitzing, the discoverer of exact quantization. The quantum Hall effect also provides an extremely precise independent determination of the fine-structure constant, a quantity of fundamental importance in quantum electrodynamics. In 1990, a fixed conventional value was defined for use in resistance calibrations worldwide. On 16 November 2018, the 26th meeting of the General Conference on Weights and Measures decided to fix exact values of (the Planck constant) and (the elementary charge), superseding the 1990 conventional value with an exact permanent value (intrinsic standard) . Research status The fractional quantum Hall effect is considered part of exact quantization. Exact quantization in full generality is not completely understood but it has been explained as a very subtle manifestation of the combination of the principle of gauge invariance together with another symmetry (see Anomalies). The integer quantum Hall effect instead is considered a solved research problem and understood in the scope of the TKNN formula and Chern–Simons Lagrangians. The fractional quantum Hall effect is still considered an open research problem. The fractional quantum Hall effect can also be understood as an integer quantum Hall effect, although not of electrons but of charge–flux composites known as composite fermions. Other models to explain the fractional quantum Hall effect also exist. 
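Since the 2019 redefinition fixed h and e exactly (as noted above), the von Klitzing constant and the quantized Hall resistances h/(νe²) can be written down directly; the short sketch below does this for the first few integer values of ν.

```python
# Quantized Hall resistance R_xy = h / (nu * e^2) from the exact SI values of
# h and e fixed by the 2018/2019 revision of the SI (mentioned above).
h = 6.62607015e-34    # J s, exact by definition
e = 1.602176634e-19   # C, exact by definition

R_K = h / e ** 2      # von Klitzing constant
print(f"R_K = h/e^2 = {R_K:.3f} ohm")   # ~ 25812.807 ohm

for nu in [1, 2, 3, 4]:
    print(f"nu = {nu}: R_xy = {R_K / nu:,.3f} ohm")
```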
Currently it is considered an open research problem because no single, confirmed and agreed list of fractional quantum numbers exists, nor a single agreed model to explain all of them, although there are such claims in the scope of composite fermions and non-Abelian Chern–Simons Lagrangians. History In 1957, Carl Frosch and Lincoln Derick were able to manufacture the first silicon dioxide field effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960. This enabled physicists to study electron behavior in a nearly ideal two-dimensional gas. In a MOSFET, conduction electrons travel in a thin surface layer, and a "gate" voltage controls the number of charge carriers in this layer. This allows researchers to explore quantum effects by operating high-purity MOSFETs at liquid helium temperatures. The integer quantization of the Hall conductance was originally predicted by University of Tokyo researchers Tsuneya Ando, Yukio Matsumoto and Yasutada Uemura in 1975, on the basis of an approximate calculation which they themselves did not believe to be true. In 1978, the Gakushuin University researchers Jun-ichi Wakabayashi and Shinji Kawaji subsequently observed the effect in experiments carried out on the inversion layer of MOSFETs. In 1980, Klaus von Klitzing, working at the high magnetic field laboratory in Grenoble with silicon-based MOSFET samples developed by Michael Pepper and Gerhard Dorda, made the unexpected discovery that the Hall resistance was exactly quantized. For this finding, von Klitzing was awarded the 1985 Nobel Prize in Physics. A link between exact quantization and gauge invariance was subsequently proposed by Robert Laughlin, who connected the quantized conductivity to the quantized charge transport in a Thouless charge pump. Most integer quantum Hall experiments are now performed on gallium arsenide heterostructures, although many other semiconductor materials can be used. In 2007, the integer quantum Hall effect was reported in graphene at temperatures as high as room temperature, and in the magnesium zinc oxide ZnO–MgxZn1−xO. Integer quantum Hall effect Landau levels In two dimensions, when classical electrons are subjected to a magnetic field, they follow circular cyclotron orbits. When the system is treated quantum mechanically, these orbits are quantized. To determine the values of the energy levels, the Schrödinger equation must be solved. Since the system is subjected to a magnetic field, the field has to be introduced as an electromagnetic vector potential in the Schrödinger equation. The system considered is an electron gas that is free to move in the x and y directions, but is tightly confined in the z direction. Then, a magnetic field is applied in the z direction and, according to the Landau gauge, the electromagnetic vector potential is and the scalar potential is . Thus the Schrödinger equation for a particle of charge and effective mass in this system is: where is the canonical momentum, which is replaced by the operator , and is the total energy. To solve this equation it is possible to separate it into two equations, since the magnetic field just affects the movement along the x and y axes. The total energy then becomes the sum of two contributions . The corresponding equation along the z axis is: To simplify things, the solution is considered as an infinite well. Thus the solutions for the z direction are the energies , and the wavefunctions are sinusoidal. 
For the x and y directions, the solution of the Schrödinger equation can be chosen to be the product of a plane wave in the y-direction with some unknown function of x, i.e., ψ(x, y) = e^(iky) u(x). This is because the vector potential does not depend on y and the momentum operator p_y therefore commutes with the Hamiltonian. By substituting this Ansatz into the Schrödinger equation one gets the one-dimensional harmonic oscillator equation centered at x_k = ħk/(eB), where ω_c = eB/m* is defined as the cyclotron frequency and l_B = √(ħ/(eB)) the magnetic length. The energies are ε_n = ħω_c(n + 1/2), with n = 0, 1, 2, ..., and the wavefunctions for the motion in the xy plane are given by the product of a plane wave in y and Hermite polynomials attenuated by the Gaussian function in x, which are the wavefunctions of a harmonic oscillator. From the expression for the Landau levels one notices that the energy depends only on n, not on k. States with the same n but different k are degenerate. Density of states At zero field, the density of states per unit surface for the two-dimensional electron gas, taking into account degeneration due to spin, is the constant m*/(πħ²), independent of the energy. As the field is turned on, the density of states collapses from the constant to a Dirac comb, a series of Dirac functions, corresponding to the Landau levels separated by ħω_c. At finite temperature, however, the Landau levels acquire a width Γ ≈ ħ/τ, with τ being the time between scattering events. Commonly it is assumed that the precise shape of Landau levels is a Gaussian or Lorentzian profile. Another feature is that the wave functions form parallel strips in the y-direction spaced equally along the x-axis, centered on the lines x = x_k. Since there is nothing special about any direction in the xy-plane, if the vector potential was chosen differently one should find circular symmetry. Given a sample of dimensions L_x × L_y and applying periodic boundary conditions in the y-direction, k = 2πj/L_y with j an integer, one gets that each parabolic potential is placed at a value x_k = l_B²k. The number of states for each Landau level can be calculated from the ratio between the total magnetic flux that passes through the sample and the magnetic flux corresponding to a state. Thus the density of states per unit surface is n_B = eB/h. Note the dependency of the density of states on the magnetic field. The larger the magnetic field is, the more states are in each Landau level. As a consequence, there is more confinement in the system since fewer energy levels are occupied. Rewriting the last expression as n_B = (m*/(2πħ²)) ħω_c, it is clear that each Landau level contains as many states as a 2DEG (per spin) does in an energy range ħω_c. Given the fact that electrons are fermions, each state available in the Landau levels corresponds to two electrons, one electron with each value for the spin s = ±1/2. However, if a large magnetic field is applied, the energies split into two levels due to the magnetic moment associated with the alignment of the spin with the magnetic field. The difference in the energies is ΔE = ±(1/2)gμ_B B, with g being a factor which depends on the material (g ≈ 2 for free electrons) and μ_B the Bohr magneton. The sign + is taken when the spin is parallel to the field and − when it is antiparallel. This fact, called spin splitting, implies that the density of states for each level is reduced by a half. Note that ΔE is proportional to the magnetic field, so the larger the magnetic field is, the more relevant is the split. In order to get the number of occupied Landau levels, one defines the so-called filling factor ν as the ratio between the density of electrons in the 2DEG and the density of states in the Landau levels, ν = n_s/n_B = n_s h/(eB). In general the filling factor ν is not an integer.
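The quantities introduced above (cyclotron frequency, magnetic length, Landau-level energies and degeneracy, Zeeman splitting, filling factor) are easy to evaluate numerically. The sketch below uses the standard textbook expressions ω_c = eB/m*, l_B = √(ħ/(eB)), n_B = eB/h and ν = n_s h/(eB); the field, effective mass, g-factor and sheet density are assumed illustrative values (roughly GaAs-like), not numbers taken from the text.

```python
import math
from scipy.constants import e, h, hbar, m_e, physical_constants

mu_B = physical_constants["Bohr magneton"][0]

# Assumed illustrative parameters for a GaAs-like 2DEG
B = 10.0              # magnetic field in tesla
m_eff = 0.067 * m_e   # effective mass
g = 2.0               # g-factor (free-electron value; material dependent)
n_s = 3e15            # sheet electron density in m^-2

omega_c = e * B / m_eff              # cyclotron frequency (rad/s)
l_B = math.sqrt(hbar / (e * B))      # magnetic length (m)
n_B = e * B / h                      # states per unit area in one Landau level
nu = n_s / n_B                       # filling factor

print(f"hbar*omega_c       = {hbar * omega_c / e * 1e3:.2f} meV")
print(f"magnetic length    = {l_B * 1e9:.2f} nm")
for n in range(3):  # Landau level energies eps_n = hbar*omega_c*(n + 1/2)
    print(f"  eps_{n} = {hbar * omega_c * (n + 0.5) / e * 1e3:.2f} meV")
print(f"degeneracy n_B     = {n_B:.3e} states/m^2")
print(f"Zeeman splitting   = {g * mu_B * B / e * 1e3:.3f} meV")
print(f"filling factor nu  = {nu:.2f}")
```

Running this shows the familiar hierarchy at a few tesla: the Landau spacing (tens of meV) dominates, while the Zeeman splitting is roughly an order of magnitude smaller, consistent with the spin-splitting discussion above.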
It happens to be an integer when there is an exact number of filled Landau levels. Instead, it becomes a non-integer when the top level is not fully occupied. In actual experiments, one varies the magnetic field and fixes the electron density (and not the Fermi energy!) or varies the electron density and fixes the magnetic field. Both cases correspond to a continuous variation of the filling factor, and one cannot expect ν to be an integer. Since ν = n_s h/(eB), by increasing the magnetic field, the Landau levels move up in energy and the number of states in each level grows, so fewer electrons occupy the top level until it becomes empty. If the magnetic field keeps increasing, eventually all electrons will be in the lowest Landau level (ν < 1), and this is called the magnetic quantum limit. Longitudinal resistivity It is possible to relate the filling factor to the resistivity and hence to the conductivity of the system. When ν is an integer, the Fermi energy lies in between Landau levels where there are no states available for carriers, so the conductivity becomes zero (it is considered that the magnetic field is big enough so that there is no overlap between Landau levels, otherwise there would be few electrons and the conductivity would be approximately zero). Consequently, the resistivity becomes zero too (at very high magnetic fields it is proven that longitudinal conductivity and resistivity are proportional). Inverting the resistivity tensor, σ_xx = ρ_xx/(ρ_xx² + ρ_xy²), one finds that if the longitudinal resistivity is zero and the transverse resistivity is finite, then σ_xx = 0. Thus both the longitudinal conductivity and resistivity become zero. Instead, when ν is a half-integer, the Fermi energy is located at the peak of the density distribution of some Landau level. This means that the conductivity will have a maximum. This distribution of minima and maxima corresponds to "quantum oscillations" called Shubnikov–de Haas oscillations, which become more relevant as the magnetic field increases. The heights of the peaks are larger as the magnetic field increases, since the density of states increases with the field, so there are more carriers which contribute to the resistivity. It is interesting to notice that if the magnetic field is very small, the longitudinal resistivity is a constant, which means that the classical result is reached. Transverse resistivity From the classical relation for the transverse resistivity, ρ_xy = B/(e n_s), and substituting n_s = ν eB/h, one finds the quantization of the transverse resistivity and conductivity: ρ_xy = h/(ν e²) and σ_xy = ν e²/h. One concludes then that, if the filling factor is an integer, the transverse resistivity is the inverse of the so-called conductance quantum, h/e², divided by that integer (equivalently, the transverse conductivity is an integer multiple of e²/h). In experiments, however, plateaus are observed over whole ranges of the filling factor, which indicates that there are in fact electron states between the Landau levels. These states are localized in, for example, impurities of the material, where they are trapped in orbits so they cannot contribute to the conductivity. That is why the resistivity remains constant in between Landau levels. Again, if the magnetic field decreases, one gets the classical result, in which the resistivity is proportional to the magnetic field. Photonic quantum Hall effect The quantum Hall effect, in addition to being observed in two-dimensional electron systems, can be observed in photons. Photons do not possess inherent electric charge, but through the manipulation of discrete optical resonators and coupling phases or on-site phases, an artificial magnetic field can be created.
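The quantized plateau values discussed above follow directly from fundamental constants. A minimal sketch, using only the standard relations ρ_xy = h/(νe²) and σ_xy = νe²/h for integer ν (no assumptions beyond CODATA values of h and e):

```python
from scipy.constants import h, e

R_K = h / e**2  # von Klitzing constant
print(f"von Klitzing constant h/e^2 = {R_K:.3f} ohm")

# Hall resistivity plateaus and Hall conductivities for integer filling factors
for nu in range(1, 7):
    rho_xy = h / (nu * e**2)     # transverse (Hall) resistivity on the plateau
    sigma_xy = nu * e**2 / h     # transverse conductivity, an integer multiple of e^2/h
    print(f"nu = {nu}: rho_xy = {rho_xy:10.2f} ohm, sigma_xy = {sigma_xy:.3e} S")
```

The nu = 1 line reproduces the value of about 25812.807 ohm that is measured on the first plateau.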
This process can be expressed through a metaphor of photons bouncing between multiple mirrors. By shooting the light across multiple mirrors, the photons are routed and gain additional phase proportional to their angular momentum. This creates an effect as if they were in a magnetic field. Topological classification The integers that appear in the Hall effect are examples of topological quantum numbers. They are known in mathematics as the first Chern numbers and are closely related to Berry's phase. A striking model of much interest in this context is the Azbel–Harper–Hofstadter model, whose quantum phase diagram is the Hofstadter butterfly shown in the figure. The vertical axis is the strength of the magnetic field and the horizontal axis is the chemical potential, which fixes the electron density. The colors represent the integer Hall conductances. Warm colors represent positive integers and cold colors negative integers. Note, however, that the density of states in these regions of quantized Hall conductance is zero; hence, they cannot produce the plateaus observed in the experiments. The phase diagram is fractal and has structure on all scales. In the figure there is an obvious self-similarity. In the presence of disorder, which is the source of the plateaus seen in the experiments, this diagram is very different and the fractal structure is mostly washed away. Also, the experiments control the filling factor and not the Fermi energy. If this diagram is plotted as a function of filling factor, all the features are completely washed away; hence, it has very little to do with the actual Hall physics. Concerning physical mechanisms, impurities and/or particular states (e.g., edge currents) are important for both the 'integer' and 'fractional' effects. In addition, Coulomb interaction is also essential in the fractional quantum Hall effect. The observed strong similarity between integer and fractional quantum Hall effects is explained by the tendency of electrons to form bound states with an even number of magnetic flux quanta, called composite fermions. Bohr atom interpretation of the von Klitzing constant The value of the von Klitzing constant may be obtained already on the level of a single atom within the Bohr model while looking at it as a single-electron Hall effect. While during the cyclotron motion on a circular orbit the centrifugal force is balanced by the Lorentz force responsible for the transverse induced voltage and the Hall effect, one may look at the Coulomb potential difference in the Bohr atom as the induced single-atom Hall voltage, and at the periodic electron motion on a circle as a Hall current. Defining the single-atom Hall current as the rate at which a single electron charge makes Kepler revolutions with angular frequency ω_n, I = eω_n/(2π), and the induced Hall voltage as the difference between the hydrogen nucleus Coulomb potential at the electron orbital point and at infinity, U = e/(4πε₀r_n), one obtains the quantization of the defined Bohr orbit Hall resistance in steps of the von Klitzing constant as R_n = U/I = n·h/e², which for the Bohr atom is linear, rather than inverse, in the integer n. Relativistic analogs Relativistic examples of the integer quantum Hall effect and quantum spin Hall effect arise in the context of lattice gauge theory.
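The Bohr-orbit argument above can be checked numerically. The sketch below is an illustrative verification, not part of the source text: it uses the usual Bohr-model expressions for the orbital radius and angular frequency (stated in the comments) and confirms that the ratio of the nuclear Coulomb potential at the orbit to the orbital current reproduces n·h/e².

```python
import math
from scipy.constants import e, h, hbar, m_e, epsilon_0

k = 1 / (4 * math.pi * epsilon_0)   # Coulomb constant

def bohr_hall_resistance(n):
    """U_n / I_n for the n-th Bohr orbit, using standard Bohr-model expressions."""
    r_n = n**2 * hbar**2 / (k * m_e * e**2)         # orbital radius, r_n = n^2 a_0
    omega_n = k**2 * m_e * e**4 / (hbar**3 * n**3)  # orbital angular frequency
    I_n = e * omega_n / (2 * math.pi)               # current: one charge per revolution period
    U_n = k * e / r_n                               # nuclear Coulomb potential at the orbit
    return U_n / I_n

for n in range(1, 4):
    print(f"n = {n}: U/I = {bohr_hall_resistance(n):.3f} ohm, "
          f"n*h/e^2 = {n * h / e**2:.3f} ohm")
```

The two columns agree, showing the resistance growing linearly with n in steps of the von Klitzing constant, as stated above.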
See also Quantum Hall transitions Fractional quantum Hall effect Quantum anomalous Hall effect Quantum cellular automata Composite fermions Conductance Quantum Hall effect Hall probe Graphene Quantum spin Hall effect Coulomb potential between two current loops embedded in a magnetic field References Further reading 25 years of Quantum Hall Effect, K. von Klitzing, Poincaré Seminar (Paris-2004). Postscript. Pdf. Magnet Lab Press Release Quantum Hall Effect Observed at Room Temperature Zyun F. Ezawa: Quantum Hall Effects - Field Theoretical Approach and Related Topics. World Scientific, Singapore 2008, Sankar D. Sarma, Aron Pinczuk: Perspectives in Quantum Hall Effects. Wiley-VCH, Weinheim 2004, E. I. Rashba and V. B. Timofeev, Quantum Hall Effect, Sov. Phys. – Semiconductors v. 20, pp. 617–647 (1986). Hall effect Condensed matter physics Quantum electronics Spintronics Quantum phases Mesoscopic physics Articles containing video clips 1980 in science
Quantum Hall effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,646
[ "Quantum phases", "Physical phenomena", "Quantum electronics", "Hall effect", "Spintronics", "Phases of matter", "Quantum mechanics", "Electric and magnetic fields in matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Nanotechnology", "Mesoscopic physics", "...
50,482
https://en.wikipedia.org/wiki/Flood
A flood is an overflow of water (or rarely other fluids) that submerges land that is usually dry. In the sense of "flowing water", the word may also be applied to the inflow of the tide. Floods are of significant concern in agriculture, civil engineering and public health. Human changes to the environment often increase the intensity and frequency of flooding. Examples of human changes are land use changes such as deforestation and removal of wetlands, changes in waterway course, or flood controls such as levees. Global environmental issues also influence causes of floods, namely climate change, which causes an intensification of the water cycle and sea level rise. For example, climate change makes extreme weather events more frequent and stronger. This leads to more intense floods and increased flood risk. Natural types of floods include river flooding, groundwater flooding, coastal flooding and urban flooding, sometimes known as flash flooding. Tidal flooding may include elements of both river and coastal flooding processes in estuary areas. There is also the intentional flooding of land that would otherwise remain dry. This may take place for agricultural, military, or river-management purposes. For example, agricultural flooding may occur in preparing paddy fields for the growing of semi-aquatic rice in many countries. Flooding may occur as an overflow of water from water bodies, such as a river, lake, sea or ocean. In these cases, the water overtops or breaks levees, resulting in some of that water escaping its usual boundaries. Flooding may also occur due to an accumulation of rainwater on saturated ground. This is called an areal flood. The size of a lake or other body of water naturally varies with seasonal changes in precipitation and snow melt. Those changes in size are, however, not considered a flood unless they flood property or drown domestic animals. Floods can also occur in rivers when the flow rate exceeds the capacity of the river channel, particularly at bends or meanders in the waterway. Floods often cause damage to homes and businesses if these buildings are in the natural flood plains of rivers. People could avoid riverine flood damage by moving away from rivers. However, people in many countries have traditionally lived and worked by rivers because the land is usually flat and fertile. Also, the rivers provide easy travel and access to commerce and industry. Flooding can damage property and also lead to secondary impacts. These include, in the short term, an increased spread of waterborne diseases and vector-borne diseases, for example those diseases transmitted by mosquitoes. Flooding can also lead to long-term displacement of residents. Floods are an area of study of hydrology and hydraulic engineering. A large amount of the world's population lives in close proximity to major coastlines, while many major cities and agricultural areas are located near floodplains. There is significant risk for increased coastal and fluvial flooding due to changing climatic conditions. Types Areal flooding Floods can happen on flat or low-lying areas when water is supplied by rainfall or snowmelt more rapidly than it can either infiltrate or run off. The excess accumulates in place, sometimes to hazardous depths. Surface soil can become saturated, which effectively stops infiltration, where the water table is shallow, such as a floodplain, or from intense rain from one or a series of storms.
Infiltration also is slow to negligible through frozen ground, rock, concrete, paving, or roofs. Areal flooding begins in flat areas like floodplains and in local depressions not connected to a stream channel, because the velocity of overland flow depends on the surface slope. Endorheic basins may experience areal flooding during periods when precipitation exceeds evaporation. River flooding Floods occur in all types of river and stream channels, from the smallest ephemeral streams in humid zones to normally-dry channels in arid climates to the world's largest rivers. When overland flow occurs on tilled fields, it can result in a muddy flood where sediments are picked up by run off and carried as suspended matter or bed load. Localized flooding may be caused or exacerbated by drainage obstructions such as landslides, ice, debris, or beaver dams. Slow-rising floods most commonly occur in large rivers with large catchment areas. The increase in flow may be the result of sustained rainfall, rapid snow melt, monsoons, or tropical cyclones. However, large rivers may have rapid flooding events in areas with dry climates, since they may have large basins but small river channels, and rainfall can be very intense in smaller areas of those basins. In extremely flat areas, such as the Red River Valley of the North in Minnesota, North Dakota, and Manitoba, a type of hybrid river/areal flooding can occur, known locally as "overland flooding". This is different from "overland flow" defined as "surface runoff". The Red River Valley is a former glacial lakebed, created by Lake Agassiz, and over a length of , the river course drops only , for an average slope of about 5 inches per mile (or 8.2 cm per kilometer). In this very large area, spring snowmelt happens at different rates in different places, and if winter snowfall was heavy, a fast snowmelt can push water out of the banks of a tributary river so that it moves overland, to a point further downstream in the river or completely to another streambed. Overland flooding can be devastating because it is unpredictable, it can occur very suddenly with surprising speed, and in such flat land it can run for miles. It is these qualities that set it apart from simple "overland flow". Rapid flooding events, including flash floods, more often occur on smaller rivers, rivers with steep valleys, rivers that flow for much of their length over impermeable terrain, or normally-dry channels. The cause may be localized convective precipitation (intense thunderstorms) or sudden release from an upstream impoundment created behind a dam, landslide, or glacier. In one instance, a flash flood killed eight people enjoying the water on a Sunday afternoon at a popular waterfall in a narrow canyon. Without any observed rainfall, the flow rate increased from about in just one minute. Two larger floods occurred at the same site within a week, but no one was at the waterfall on those days. The deadly flood resulted from a thunderstorm over part of the drainage basin, where steep, bare rock slopes are common and the thin soil was already saturated. Flash floods are the most common flood type in normally-dry channels in arid zones, known as arroyos in the southwest United States and many other names elsewhere. In that setting, the first flood water to arrive is depleted as it wets the sandy stream bed. The leading edge of the flood thus advances more slowly than later and higher flows. 
As a result, the rising limb of the hydrograph becomes ever quicker as the flood moves downstream, until the flow rate is so great that the depletion by wetting soil becomes insignificant. Coastal flooding Coastal areas may be flooded by storm surges combining with high tides and large wave events at sea, resulting in waves over-topping flood defenses or in severe cases by tsunami or tropical cyclones. A storm surge, from either a tropical cyclone or an extratropical cyclone, falls within this category. A storm surge is "an additional rise of water generated by a storm, over and above the predicted astronomical tides". Due to the effects of climate change (e.g. sea level rise and an increase in extreme weather events) and an increase in the population living in coastal areas, the damage caused by coastal flood events has intensified and more people are being affected. Flooding in estuaries is commonly caused by a combination of storm surges caused by winds and low barometric pressure and large waves meeting high upstream river flows. Urban flooding Intentional floods The intentional flooding of land that would otherwise remain dry may take place for agricultural, military or river-management purposes. This is a form of hydraulic engineering. Agricultural flooding may occur in preparing paddy fields for the growing of semi-aquatic rice in many countries. Flooding for river management may occur in the form of diverting flood waters in a river at flood stage upstream from areas that are considered more valuable than the areas that are sacrificed in this way. This may be done ad hoc, or permanently, as in the so-called overlaten (literally "let-overs"), an intentionally lowered segment in Dutch riparian levees, like the Beerse Overlaat in the left levee of the Meuse between the villages of Gassel and Linden, North Brabant. Military inundation creates an obstacle in the field that is intended to impede the movement of the enemy. This may be done both for offensive and defensive purposes. Furthermore, in so far as the methods used are a form of hydraulic engineering, it may be useful to differentiate between controlled inundations and uncontrolled ones. Examples for controlled inundations include those in the Netherlands under the Dutch Republic and its successor states in that area and exemplified in the two Hollandic Water Lines, the Stelling van Amsterdam, the Frisian Water Line, the IJssel Line, the Peel-Raam Line, and the Grebbe line in that country. To count as controlled, a military inundation has to take the interests of the civilian population into account, by allowing them a timely evacuation, by making the inundation reversible, and by making an attempt to minimize the adverse ecological impact of the inundation. That impact may also be adverse in a hydrogeological sense if the inundation lasts a long time. Examples for uncontrolled inundations are the second Siege of Leiden during the first part of the Eighty Years' War, the flooding of the Yser plain during the First World War, and the Inundation of Walcheren, and the Inundation of the Wieringermeer during the Second World War). Causes Floods are caused by many factors or a combination of any of these generally prolonged heavy rainfall (locally concentrated or throughout a catchment area), highly accelerated snowmelt, severe winds over water, unusual high tides, tsunamis, or failure of dams, levees, retention ponds, or other structures that retained the water. 
Flooding can be exacerbated by increased amounts of impervious surface or by other natural hazards such as wildfires, which reduce the supply of vegetation that can absorb rainfall. During times of rain, some of the water is retained in ponds or soil, some is absorbed by grass and vegetation, some evaporates, and the rest travels over the land as surface runoff. Floods occur when ponds, lakes, riverbeds, soil, and vegetation cannot absorb all the water. This has been exacerbated by human activities such as draining wetlands that naturally store large amounts of water and building paved surfaces that do not absorb any water. Water then runs off the land in quantities that cannot be carried within stream channels or retained in natural ponds, lakes, and human-made reservoirs. About 30 percent of all precipitation becomes runoff and that amount might be increased by water from melting snow. Upslope factors River flooding is often caused by heavy rain, sometimes increased by melting snow. A flood that rises rapidly, with little or no warning, is called a flash flood. Flash floods usually result from intense rainfall over a relatively small area, or if the area was already saturated from previous precipitation. The amount, location, and timing of water reaching a drainage channel from natural precipitation and controlled or uncontrolled reservoir releases determines the flow at downstream locations. Some precipitation evaporates, some slowly percolates through soil, some may be temporarily sequestered as snow or ice, and some may produce rapid runoff from surfaces including rock, pavement, roofs, and saturated or frozen ground. The fraction of incident precipitation promptly reaching a drainage channel has been observed from nil for light rain on dry, level ground to as high as 170 percent for warm rain on accumulated snow. Most precipitation records are based on a measured depth of water received within a fixed time interval. Frequency of a precipitation threshold of interest may be determined from the number of measurements exceeding that threshold value within the total time period for which observations are available. Individual data points are converted to intensity by dividing each measured depth by the period of time between observations. This intensity will be less than the actual peak intensity if the duration of the rainfall event was less than the fixed time interval for which measurements are reported. Convective precipitation events (thunderstorms) tend to produce shorter duration storm events than orographic precipitation. Duration, intensity, and frequency of rainfall events are important to flood prediction. Short duration precipitation is more significant to flooding within small drainage basins. The most important upslope factor in determining flood magnitude is the land area of the watershed upstream of the area of interest. Rainfall intensity is the second most important factor for watersheds of less than approximately . The main channel slope is the second most important factor for larger watersheds. Channel slope and rainfall intensity become the third most important factors for small and large watersheds, respectively. Time of Concentration is the time required for runoff from the most distant point of the upstream drainage area to reach the point of the drainage channel controlling flooding of the area of interest. The time of concentration defines the critical duration of peak rainfall for the area of interest. 
The critical duration of intense rainfall might be only a few minutes for roof and parking lot drainage structures, while cumulative rainfall over several days would be critical for river basins. Downslope factors Water flowing downhill ultimately encounters downstream conditions slowing movement. The final limitation in coastal flooding lands is often the ocean or some coastal flooding bars which form natural lakes. In flooding low lands, elevation changes such as tidal fluctuations are significant determinants of coastal and estuarine flooding. Less predictable events like tsunamis and storm surges may also cause elevation changes in large bodies of water. Elevation of flowing water is controlled by the geometry of the flow channel and, especially, by depth of channel, speed of flow and amount of sediments in it Flow channel restrictions like bridges and canyons tend to control water elevation above the restriction. The actual control point for any given reach of the drainage may change with changing water elevation, so a closer point may control for lower water levels until a more distant point controls at higher water levels. Effective flood channel geometry may be changed by growth of vegetation, accumulation of ice or debris, or construction of bridges, buildings, or levees within the flood channel. Periodic floods occur on many rivers, forming a surrounding region known as the flood plain. Even when rainfall is relatively light, the shorelines of lakes and bays can be flooded by severe winds—such as during hurricanes—that blow water into the shore areas. Climate change Coincidence Extreme flood events often result from coincidence such as unusually intense, warm rainfall melting heavy snow pack, producing channel obstructions from floating ice, and releasing small impoundments like beaver dams. Coincident events may cause extensive flooding to be more frequent than anticipated from simplistic statistical prediction models considering only precipitation runoff flowing within unobstructed drainage channels. Debris modification of channel geometry is common when heavy flows move uprooted woody vegetation and flood-damaged structures and vehicles, including boats and railway equipment. Recent field measurements during the 2010–11 Queensland floods showed that any criterion solely based upon the flow velocity, water depth or specific momentum cannot account for the hazards caused by velocity and water depth fluctuations. These considerations ignore further the risks associated with large debris entrained by the flow motion. Negative impacts Floods can be a huge destructive power. When water flows, it has the ability to demolish all kinds of buildings and objects, such as bridges, structures, houses, trees, and cars. Economical, social and natural environmental damages are common factors that are impacted by flooding events and the impacts that flooding has on these areas can be catastrophic. Impacts on infrastructure and societies There have been numerous flood incidents around the world which have caused devastating damage to infrastructure, the natural environment and human life. Floods can have devastating impacts to human societies. Flooding events worldwide are increasing in frequency and severity, leading to increasing costs to societies. Catastrophic riverine flooding can result from major infrastructure failures, often the collapse of a dam. It can also be caused by drainage channel modification from a landslide, earthquake or volcanic eruption. 
Examples include outburst floods and lahars. Tsunamis can cause catastrophic coastal flooding, most commonly resulting from undersea earthquakes. Economic impacts The primary effects of flooding include loss of life and damage to buildings and other structures, including bridges, sewerage systems, roadways, and canals. The economic impacts caused by flooding can be severe. Every year flooding causes billions of dollars' worth of damage to countries, threatening the livelihoods of individuals. As a result, there are also significant socio-economic threats to vulnerable populations around the world from flooding. For example, in Bangladesh in 2007, a flood was responsible for the destruction of more than one million houses. Every year in the United States, floods cause over $7 billion in damage. Flood waters typically inundate farm land, making the land unworkable and preventing crops from being planted or harvested, which can lead to shortages of food both for humans and farm animals. Entire harvests for a country can be lost in extreme flood circumstances. Some tree species may not survive prolonged flooding of their root systems. Flooding in areas where people live also has significant economic implications for affected neighborhoods. In the United States, industry experts estimate that wet basements can lower property values by 10–25 percent and are cited among the top reasons for not purchasing a home. According to the U.S. Federal Emergency Management Agency (FEMA), almost 40 percent of small businesses never reopen their doors following a flooding disaster. In the United States, insurance is available against flood damage to both homes and businesses. Economic hardship due to a temporary decline in tourism, rebuilding costs, or food shortages leading to price increases is a common after-effect of severe flooding. The impact may cause psychological damage to those affected, in particular where deaths, serious injuries and loss of property occur. Health impacts Fatalities connected directly to floods are usually caused by drowning; the waters in a flood are very deep and have strong currents. Deaths do not occur only from drowning; deaths are also connected with dehydration, heat stroke, heart attack and any other illness that requires medical supplies that cannot be delivered. Injuries can lead to an excessive amount of morbidity when a flood occurs. Injuries are not isolated to just those who were directly in the flood; rescue teams and even people delivering supplies can sustain injuries. Injuries can occur at any time during the flood process: before, during and after. During floods, accidents occur with falling debris or any of the many fast-moving objects in the water. After the flood, rescue attempts are where large numbers of injuries can occur. Communicable diseases increase due to the many pathogens and bacteria that are transported by the water. There are many waterborne diseases such as cholera, hepatitis A, hepatitis E and diarrheal diseases, to mention a few. Gastrointestinal disease and diarrheal diseases are very common due to a lack of clean water during a flood. Most clean water supplies are contaminated when flooding occurs. Hepatitis A and E are common because of the lack of sanitation in the water and in living quarters, depending on where the flood is and how prepared the community is for a flood. When floods hit, people lose nearly all their crops, livestock, and food reserves and face starvation.
Floods also frequently damage power transmission and sometimes power generation, which then has knock-on effects caused by the loss of power. This includes loss of drinking water treatment and water supply, which may result in loss of drinking water or severe water contamination. It may also cause the loss of sewage disposal facilities. Lack of clean water combined with human sewage in the flood waters raises the risk of waterborne diseases, which can include typhoid, giardia, cryptosporidium, cholera and many other diseases depending upon the location of the flood. Damage to roads and transport infrastructure may make it difficult to mobilize aid to those affected or to provide emergency health treatment. Flooding can cause chronically wet houses, leading to the growth of indoor mold and resulting in adverse health effects, particularly respiratory symptoms. Respiratory diseases are common after the disaster has occurred. This depends on the amount of water damage and mold that grows after an incident. Research suggests that there will be an increase of 30–50% in adverse respiratory health outcomes caused by dampness and mold exposure for those living in coastal and wetland areas. Fungal contamination in homes is associated with increased allergic rhinitis and asthma. Vector-borne diseases increase as well due to the increase in still water after the floods have settled. Vector-borne diseases include malaria, dengue, West Nile, and yellow fever. Floods have a huge impact on victims' psychosocial integrity. People suffer from a wide variety of losses and stress. One of the most treated illnesses among long-term health problems is depression caused by the flood and all the tragedy that comes with it. Loss of life Below is a list of the deadliest floods worldwide, showing events with death tolls at or above 100,000 individuals. Positive impacts (benefits) Floods (in particular more frequent or smaller floods) can also bring many benefits, such as recharging groundwater, making soil more fertile and increasing nutrients in some soils. Flood waters provide much-needed water resources in arid and semi-arid regions where precipitation can be very unevenly distributed throughout the year, and they kill pests in the farming land. Freshwater floods particularly play an important role in maintaining ecosystems in river corridors and are a key factor in maintaining floodplain biodiversity. Flooding can spread nutrients to lakes and rivers, which can lead to increased biomass and improved fisheries for a few years. For some fish species, an inundated floodplain may form a highly suitable location for spawning with few predators and enhanced levels of nutrients or food. Fish, such as the weather fish, make use of floods in order to reach new habitats. Bird populations may also profit from the boost in food production caused by flooding. Flooding can bring benefits, such as making the soil more fertile and providing it with more nutrients. For this reason, periodic flooding was essential to the well-being of ancient communities along the Tigris-Euphrates Rivers, the Nile River, the Indus River, the Ganges and the Yellow River among others. The viability of hydropower, a renewable source of energy, is also higher in flood-prone regions. Protections against floods and associated hazards Flood management Flood management examples In many countries around the world, waterways prone to floods are often carefully managed.
Defenses such as detention basins, levees, bunds, reservoirs, and weirs are used to prevent waterways from overflowing their banks. When these defenses fail, emergency measures such as sandbags or portable inflatable tubes are often used to try to stem flooding. Coastal flooding has been addressed in portions of Europe and the Americas with coastal defenses, such as sea walls, beach nourishment, and barrier islands. In the riparian zone near rivers and streams, erosion control measures can be taken to try to slow down or reverse the natural forces that cause many waterways to meander over long periods of time. Flood controls, such as dams, can be built and maintained over time to try to reduce the occurrence and severity of floods as well. In the United States, the U.S. Army Corps of Engineers maintains a network of such flood control dams. In areas prone to urban flooding, one solution is the repair and expansion of human-made sewer systems and stormwater infrastructure. Another strategy is to reduce impervious surfaces in streets, parking lots and buildings through natural drainage channels, porous paving, and wetlands (collectively known as green infrastructure or sustainable urban drainage systems (SUDS)). Areas identified as flood-prone can be converted into parks and playgrounds that can tolerate occasional flooding. Ordinances can be adopted to require developers to retain stormwater on site and require buildings to be elevated, protected by floodwalls and levees, or designed to withstand temporary inundation. Property owners can also invest in solutions themselves, such as re-landscaping their property to take the flow of water away from their building and installing rain barrels, sump pumps, and check valves. Flood safety planning In the United States, the National Weather Service gives out the advice "Turn Around, Don't Drown" for floods; that is, it recommends that people get out of the area of a flood, rather than trying to cross it. At the most basic level, the best defense against floods is to seek higher ground for high-value uses while balancing the foreseeable risks with the benefits of occupying flood hazard zones. Critical community-safety facilities, such as hospitals, emergency-operations centers, and police, fire, and rescue services, should be built in areas least at risk of flooding. Structures, such as bridges, that must unavoidably be in flood hazard areas should be designed to withstand flooding. Areas most at risk for flooding could be put to valuable uses that could be abandoned temporarily as people retreat to safer areas when a flood is imminent. Planning for flood safety involves many aspects of analysis and engineering, including: observation of previous and present flood heights and inundated areas, statistical, hydrologic, and hydraulic model analyses, mapping inundated areas and flood heights for future flood scenarios, long-term land use planning and regulation, engineering design and construction of structures to control or withstand flooding, intermediate-term monitoring, forecasting, and emergency-response planning, and short-term monitoring, warning, and response operations. Each topic presents distinct yet related questions with varying scope and scale in time, space, and the people involved. Attempts to understand and manage the mechanisms at work in floodplains have been made for at least six millennia. 
In the United States, the Association of State Floodplain Managers works to promote education, policies, and activities that mitigate current and future losses, costs, and human suffering caused by flooding and to protect the natural and beneficial functions of floodplains – all without causing adverse impacts. A portfolio of best practice examples for disaster mitigation in the United States is available from the Federal Emergency Management Agency. Flood clean-up safety Clean-up activities following floods often pose hazards to workers and volunteers involved in the effort. Potential dangers include electrical hazards, carbon monoxide exposure, musculoskeletal hazards, heat or cold stress, motor vehicle-related dangers, fire, drowning, and exposure to hazardous materials. Because flooded disaster sites are unstable, clean-up workers might encounter sharp jagged debris, biological hazards in the flood water, exposed electrical lines, blood or other body fluids, and animal and human remains. In planning for and reacting to flood disasters, managers provide workers with hard hats, goggles, heavy work gloves, life jackets, and watertight boots with steel toes and insoles. Flood predictions Mathematical models and computer tools A series of annual maximum flow rates in a stream reach can be analyzed statistically to estimate the 100-year flood and floods of other recurrence intervals there. Similar estimates from many sites in a hydrologically similar region can be related to measurable characteristics of each drainage basin to allow indirect estimation of flood recurrence intervals for stream reaches without sufficient data for direct analysis. Physical process models of channel reaches are generally well understood and will calculate the depth and area of inundation for given channel conditions and a specified flow rate, such as for use in floodplain mapping and flood insurance. Conversely, given the observed inundation area of a recent flood and the channel conditions, a model can calculate the flow rate. Applied to various potential channel configurations and flow rates, a reach model can contribute to selecting an optimum design for a modified channel. Various reach models are available as of 2015, either 1D models (flood levels measured in the channel) or 2D models (variable flood depths measured across the extent of a floodplain). HEC-RAS, the Hydraulic Engineering Center model, is among the most popular software, if only because it is available free of charge. Other models such as TUFLOW combine 1D and 2D components to derive flood depths across both river channels and the entire floodplain. Physical process models of complete drainage basins are even more complex. Although many processes are well understood at a point or for a small area, others are poorly understood at all scales, and process interactions under normal or extreme climatic conditions may be unknown. Basin models typically combine land-surface process components (to estimate how much rainfall or snowmelt reaches a channel) with a series of reach models. For example, a basin model can calculate the runoff hydrograph that might result from a 100-year storm, although the recurrence interval of a storm is rarely equal to that of the associated flood. Basin models are commonly used in flood forecasting and warning, as well as in analysis of the effects of land use change and climate change. In the United States, an integrated approach to real-time hydrologic computer modelling uses observed data from the U.S. 
Geological Survey (USGS), various cooperative observing networks, various automated weather sensors, the NOAA National Operational Hydrologic Remote Sensing Center (NOHRSC), various hydroelectric companies, etc. combined with quantitative precipitation forecasts (QPF) of expected rainfall and/or snow melt to generate daily or as-needed hydrologic forecasts. The NWS also cooperates with Environment Canada on hydrologic forecasts that affect both the US and Canada, like in the area of the Saint Lawrence Seaway. The Global Flood Monitoring System, "GFMS", a computer tool which maps flood conditions worldwide, is available online. Users anywhere in the world can use GFMS to determine when floods may occur in their area. GFMS uses precipitation data from NASA's Earth observing satellites and the Global Precipitation Measurement satellite, "GPM". Rainfall data from GPM is combined with a land surface model that incorporates vegetation cover, soil type, and terrain to determine how much water is soaking into the ground, and how much water is flowing into streamflow. Users can view statistics for rainfall, streamflow, water depth, and flooding every 3 hours, at each 12-kilometer gridpoint on a global map. Forecasts for these parameters are 5 days into the future. Users can zoom in to see inundation maps (areas estimated to be covered with water) in 1-kilometer resolution. Flood forecasts and warnings Anticipating floods before they occur allows for precautions to be taken and people to be warned so that they can be prepared in advance for flooding conditions. For example, farmers can remove animals from low-lying areas and utility services can put in place emergency provisions to re-route services if needed. Emergency services can also make provisions to have enough resources available ahead of time to respond to emergencies as they occur. People can evacuate areas to be flooded. In order to make the most accurate flood forecasts for waterways, it is best to have a long time-series of historical data that relates stream flows to measured past rainfall events. Coupling this historical information with real-time knowledge about volumetric capacity in catchment areas, such as spare capacity in reservoirs, ground-water levels, and the degree of saturation of area aquifers is also needed in order to make the most accurate flood forecasts. Radar estimates of rainfall and general weather forecasting techniques are also important components of good flood forecasting. In areas where good quality data is available, the intensity and height of a flood can be predicted with fairly good accuracy and plenty of lead time. The output of a flood forecast is typically a maximum expected water level and the likely time of its arrival at key locations along a waterway, and it also may allow for the computation of the likely statistical return period of a flood. In many developed countries, urban areas at risk of flooding are protected against a 100-year flood – that is a flood that has a probability of around 63% of occurring in any 100-year period of time. According to the U.S. National Weather Service (NWS) Northeast River Forecast Center (RFC) in Taunton, Massachusetts, a rule of thumb for flood forecasting in urban areas is that it takes at least of rainfall in around an hour's time in order to start significant ponding of water on impermeable surfaces. 
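The statistical side of flood forecasting described above can be illustrated with a small sketch. It fits a Gumbel (extreme-value) distribution to a series of annual maximum flows by the method of moments and evaluates the 100-year flood, and it also reproduces the roughly 63% chance quoted above that a 100-year flood occurs at least once in a 100-year period. The annual-maximum series and the choice of the Gumbel distribution are illustrative assumptions, not data or methods taken from the text.

```python
import numpy as np

# Assumed illustrative annual maximum flows (m^3/s) for a hypothetical stream gauge
annual_max = np.array([310., 420., 280., 510., 390., 450., 620., 330.,
                       480., 370., 540., 300., 410., 460., 350., 590.])

# Method-of-moments fit of a Gumbel distribution to the annual maxima
beta = np.std(annual_max, ddof=1) * np.sqrt(6) / np.pi   # scale parameter
mu = np.mean(annual_max) - 0.5772 * beta                 # location parameter

def flow_for_return_period(T):
    """Flow with annual exceedance probability 1/T under the fitted Gumbel distribution."""
    return mu - beta * np.log(-np.log(1 - 1 / T))

for T in (2, 10, 100):
    print(f"{T:>3}-year flood estimate: {flow_for_return_period(T):.0f} m^3/s")

# Probability that a 100-year flood is equalled or exceeded at least once in 100 years
p = 1 - (1 - 1 / 100) ** 100
print(f"P(at least one 100-year flood in 100 years) = {p:.3f}")   # about 0.63
```

In practice agencies use longer records, regional information and more careful distribution choices, but the structure of the calculation is the same.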
Many NWS RFCs routinely issue Flash Flood Guidance and Headwater Guidance, which indicate the general amount of rainfall that would need to fall in a short period of time in order to cause flash flooding or flooding on larger water basins. Flood risk assessment Flood risks can be defined as the risk that floods pose to individuals, property and the natural landscape based on specific hazards and vulnerability. The extent of flood risks can impact the types of mitigation strategies required and implemented. A large amount of the world's population lives in close proximity to major coastlines, while many major cities and agricultural areas are located near floodplains. There is significant risk for increased coastal and fluvial flooding due to changing climatic conditions. Examples by country or region Worldwide: List of floods Africa: List of floods#Africa Asia: List of floods#Asia Europe: List of floods in Europe North Sea: Storm tides of the North Sea The Netherlands: Floods in the Netherlands, Flood control in the Netherlands Oceania: List of floods#Oceania Australia: Floods in Australia United States: Lists of floods in the United States Society and culture Myths and religion Etymology The word "flood" comes from the Old English , a word common to Germanic languages (compare German , Dutch from the same root as is seen in flow, float; also compare with Latin , ), meaning "a flowing of water, tide, an overflowing of land by water, a deluge, Noah's Flood; mass of water, river, sea, wave". The Old English word comes from the Proto-Germanic floduz (Old Frisian , Old Norse , Middle Dutch , Dutch , German , and Gothic derives from floduz). See also References Water Bodies of water Hydrology Meteorological phenomena Weather hazards Natural disasters
Flood
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
6,967
[ "Physical phenomena", "Earth phenomena", "Hydrology", "Weather hazards", "Weather", "Natural disasters", "Flood", "Meteorological phenomena", "Environmental engineering", "Water" ]
50,555
https://en.wikipedia.org/wiki/Domoic%20acid
Domoic acid (DA) is a kainic acid-type neurotoxin that causes amnesic shellfish poisoning (ASP). It is produced by algae and accumulates in shellfish, sardines, and anchovies. When sea lions, otters, cetaceans, humans, and other predators eat contaminated animals, poisoning may result. Exposure to this compound affects the brain, causing seizures, and possibly death. History There has been little use of domoic acid throughout history except for in Japan, where it has been used as an anthelmintic for centuries. Domoic acid was first isolated in 1959 from a species of red algae, Chondria armata, in Japan, which is commonly referred to as dōmoi (ドウモイ) in the Tokunoshima dialect, or hanayanagi. Poisonings in history have been rare, or undocumented; however, it is thought that the increase in human activities is resulting in an increasing frequency of harmful algal blooms along coastlines in recent years. In 2015, the North American Pacific coast was heavily impacted by an algal bloom, consisting predominantly of the domoic acid-producing pennate diatom, Pseudo-nitzschia. Consequently, elevated levels of domoic acid were measured in stranded marine mammals, prompting the closure of beaches and damaging razor clam, rock crab and Dungeness crab fisheries. In 1961, seabirds attacked the Capitola area in California, and though it was never confirmed, it was later hypothesized that they were under the influence of domoic acid. In 1987, in Prince Edward Island, Canada, there was a shellfish poisoning resulting in 3 deaths. Blue mussels (Mytulis edulis) contaminated with domoic acid were blamed. Domoic acid has been suggested to have been involved in an incident which took place on June 22, 2006, when a California brown pelican flew through the windshield of a car on the Pacific Coast Highway. On Friday, June 14, 2019, a teenager was attacked and injured by a sea lion that was alleged to be under the influence of domoic acid in Pismo Beach on the Central California coast. Chemistry General Domoic acid is a structural analog of kainic acid, proline, and endogenous excitatory neurotransmitter glutamate. Ohfune and Tomita, who wanted to investigate its absolute stereochemistry, were the first and only to synthesize domoic acid in 1982. Biosynthesis In 1999, using 13C- and 14C-labelled precursors, the biosynthesis of domoic acid in the diatom genus Pseudo-nitzschia was examined. After addition of [1,2-13C2]-acetate, NMR spectroscopy showed enrichment of every carbon in domoic acid, indicating incorporation of the carbon isotopes. This enrichment was consistent with two biosynthetic pathways. The labeling pattern determined that domoic acid can be biosynthesized by an isoprenoid intermediate in combination with a tricarboxylic acid (TCA) cycle intermediate. In 2018, using growth conditions known to induce domoic acid production in Pseudo-nitzschia multiseries, transcriptome sequencing successfully identified candidate domoic acid biosynthesis genes responsible for the pyrrolidine core. These domoic acid biosynthesis genes, or ‘Dab’ enzymes were heterologously expressed, characterized, and annotated as dabA (terpene cyclase), dabB (hypothetical protein), dabC (α-ketoglutarate–dependent dioxygenase), and dabD (CYP450).Domoic acid biosynthesis begins with the DabA-catalyzed geranylation of L-glutamic acid (L-Glu) with geranyl pyrophosphate (GPP) to form N-geranyl-L-glutamic acid (L-NGG). 
DabD then performs three successive oxidation reactions at the 7′-methyl of L-NGG to produce 7′-carboxy-L-NGG, which is then cyclized by DabC to generate the naturally occurring isodomoic acid A. Finally, an uncharacterized isomerase could convert isodomoic acid A to domoic acid. Further investigation is needed to resolve the final isomerization reaction to complete the pathway to domoic acid. Synthesis Using intermediates 5 and 6, a Diels–Alder reaction produced a bicyclic compound (7). 7 then underwent ozonolysis to open the six-membered ring, leading to selenide (8). 8 was then deselenated to form 9 (E-9 and Z-9), lastly leading to the formation of (−)-domoic acid. Mechanism of action The effects of domoic acid have been attributed to several mechanisms, but the one of concern is through glutamate receptors. Domoic acid is an excitatory amino acid analogue of glutamate, a neurotransmitter in the brain that activates glutamate receptors. Domoic acid has a very strong affinity for these receptors, which results in excitotoxicity initiated by an integrative action on ionotropic glutamate receptors at both sides of the synapse, coupled with the effect of blocking the channel from rapid desensitization. In addition, there is a synergistic effect with endogenous glutamate and N-methyl-D-aspartate receptor agonists that contributes to the excitotoxicity. In the brain, domoic acid especially damages the hippocampus and amygdaloid nucleus. It damages the neurons by activating AMPA and kainate receptors, causing an influx of calcium. Although calcium flowing into cells is normal, the uncontrolled increase of calcium causes the cells to degenerate. Because the hippocampus may be severely damaged, short-term memory loss occurs. It may also cause kidney damage, even at levels considered safe for human consumption, as a study in mice has revealed. The kidney is affected at concentrations a hundred times lower than those allowed under FDA regulations. Toxicology Domoic acid-producing algal blooms are associated with the phenomenon of amnesic shellfish poisoning (ASP). Domoic acid can bioaccumulate in marine organisms such as shellfish, anchovies, and sardines that feed on the phytoplankton known to produce this toxin. It can accumulate in high concentrations in the tissues of these plankton feeders when the toxic phytoplankton are high in concentration in the surrounding waters. Domoic acid is a neurotoxin that inhibits neurochemical processes, causing short-term memory loss, brain damage, and, in severe cases, death in humans. In marine mammals, domoic acid typically causes seizures and tremors. Studies have shown that there are no symptomatic effects in humans at levels of 0.5 mg/kg of body weight. In the 1987 domoic acid poisoning on Prince Edward Island, concentrations ranging from 0.31 to 1.28 mg/kg of muscle tissue were noted in people who became ill (three of whom died). Dangerous levels of domoic acid have been calculated based on cases such as the one on Prince Edward Island. The exact LD50 for humans is unknown; for mice the LD50 is 3.6 mg/kg. New research has found that domoic acid is a heat-resistant and very stable toxin, which can damage kidneys at concentrations that are 100 times lower than what causes neurological effects. Diagnosis and prevention In order to be diagnosed and treated if poisoned, domoic acid must first be detected. Methods such as ELISA or probe development with polymerase chain reaction (PCR) may be used to detect the toxin or the organism producing this toxin.
There is no known antidote available for domoic acid. Therefore, if poisoning occurs, it is advised to go quickly to a hospital. Cooking or freezing affected fish or shellfish tissue that are contaminated with domoic acid does not lessen the toxicity. As a public health concern, the concentration of domoic acid in shellfish and shellfish parts at point of sale should not exceed the current permissible limit of 20 mg/kg tissue. In addition, during processing shellfish, it is important to pay attention to environmental condition factors. In popular culture On August 18, 1961, in Capitola and Santa Cruz, California there was an invasion of what people described as chaotic seabirds. These birds were believed to be under the influence of domoic acid, and it inspired a scene in Alfred Hitchcock's feature film The Birds. In the Elementary Season 1 Episode 13 "The Red Team", domoic acid was used as a poison to mimic Alzheimer's. See also Canadian Reference Materials Pseudo-nitzschia Quisqualic acid Brevetoxin Ciguatoxin Okadaic acid Saxitoxin Maitotoxin References External links Marine neurotoxins Phycotoxins Secondary amino acids Pyrrolidines AMPA receptor agonists Kainate receptor agonists Chelating agents Tricarboxylic acids Conjugated dienes Toxic amino acids Excitotoxins
Domoic acid
[ "Chemistry" ]
1,945
[ "Chelating agents", "Process chemicals" ]
50,563
https://en.wikipedia.org/wiki/Sucrose
Sucrose, a disaccharide, is a sugar composed of glucose and fructose subunits. It is produced naturally in plants and is the main constituent of white sugar. It has the molecular formula . For human consumption, sucrose is extracted and refined from either sugarcane or sugar beet. Sugar mills – typically located in tropical regions near where sugarcane is grown – crush the cane and produce raw sugar which is shipped to other factories for refining into pure sucrose. Sugar beet factories are located in temperate climates where the beet is grown, and process the beets directly into refined sugar. The sugar-refining process involves washing the raw sugar crystals before dissolving them into a sugar syrup which is filtered and then passed over carbon to remove any residual colour. The sugar syrup is then concentrated by boiling under a vacuum and crystallized as the final purification process to produce crystals of pure sucrose that are clear, odorless, and sweet. Sugar is often an added ingredient in food production and recipes. About 185 million tonnes of sugar were produced worldwide in 2017. Sucrose is particularly dangerous as a risk factor for tooth decay because Streptococcus mutans bacteria convert it into a sticky, extracellular, dextran-based polysaccharide that allows them to cohere, forming plaque. Sucrose is the only sugar that bacteria can use to form this sticky polysaccharide. Etymology The word sucrose was coined in 1857, by the English chemist William Miller from the French ("sugar") and the generic chemical suffix for sugars -ose. The abbreviated term Suc is often used for sucrose in scientific literature. The name saccharose was coined in 1860 by the French chemist Marcellin Berthelot. Saccharose is an obsolete name for sugars in general, especially sucrose. Physical and chemical properties Structural O-α-D-glucopyranosyl-(1→2)-β-D-fructofuranoside In sucrose, the monomers glucose and fructose are linked via an ether bond between C1 on the glucosyl subunit and C2 on the fructosyl unit. The bond is called a glycosidic linkage. Glucose exists predominantly as a mixture of α and β "pyranose" anomers, but sucrose has only the α form. Fructose exists as a mixture of five tautomers but sucrose has only the β-D-fructofuranose form. Unlike most disaccharides, the glycosidic bond in sucrose is formed between the reducing ends of both glucose and fructose, and not between the reducing end of one and the non-reducing end of the other. This linkage inhibits further bonding to other saccharide units, and prevents sucrose from spontaneously reacting with cellular and circulatory macromolecules in the manner that glucose and other reducing sugars do. Since sucrose contains no anomeric hydroxyl groups, it is classified as a non-reducing sugar. Sucrose crystallizes in the monoclinic space group P21 with room-temperature lattice parameters a = 1.08631 nm, b = 0.87044 nm, c = 0.77624 nm, β = 102.938°. The purity of sucrose is measured by polarimetry, through the rotation of plane-polarized light by a sugar solution. The specific rotation at using yellow "sodium-D" light (589 nm) is +66.47°. Commercial samples of sugar are assayed using this parameter. Sucrose does not deteriorate at ambient conditions. Thermal and oxidative degradation Sucrose does not melt at high temperatures. Instead, it decomposes at to form caramel. 
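Two of the numerical statements above lend themselves to a direct calculation: the monoclinic lattice parameters determine the unit-cell volume via V = a·b·c·sin β, and the specific rotation links an observed optical rotation to concentration through the standard polarimetry relation α = [α]·l·c (path length l in dm, concentration c in g/mL). The sketch below uses the lattice parameters and specific rotation quoted in the text; the observed rotation and tube length are assumed illustrative values.

```python
import math

# Monoclinic unit cell of sucrose (lattice parameters quoted above)
a, b, c = 1.08631, 0.87044, 0.77624      # nm
beta_deg = 102.938
V = a * b * c * math.sin(math.radians(beta_deg))
print(f"unit-cell volume = {V:.4f} nm^3")

# Polarimetry: alpha_obs = [alpha] * l * c, with [alpha] = +66.47 deg mL g^-1 dm^-1
specific_rotation = 66.47
l_dm = 1.0            # assumed 1 dm sample tube
alpha_obs = 13.3      # assumed observed rotation in degrees
c_g_per_mL = alpha_obs / (specific_rotation * l_dm)
print(f"sucrose concentration = {c_g_per_mL:.3f} g/mL "
      f"({c_g_per_mL * 100:.1f} g per 100 mL)")
```

This is how commercial sugar samples are assayed in practice: the measured rotation of a solution of known path length gives the sucrose content directly.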
Like other carbohydrates, it combusts to carbon dioxide and water by the simplified equation: Mixing sucrose with the oxidizer potassium nitrate produces the fuel known as rocket candy that is used to propel amateur rocket motors. This reaction is somewhat simplified though. Some of the carbon does get fully oxidized to carbon dioxide, and other reactions, such as the water-gas shift reaction also take place. A more accurate theoretical equation is: Sucrose burns with chloric acid, formed by the reaction of hydrochloric acid and potassium chlorate: Sucrose can be dehydrated with sulfuric acid to form a black, carbon-rich solid, as indicated in the following idealized equation: The formula for sucrose's decomposition can be represented as a two-step reaction: the first simplified reaction is dehydration of sucrose to pure carbon and water, and then carbon is oxidised to by from air. Hydrolysis Hydrolysis breaks the glycosidic bond converting sucrose into glucose and fructose. Hydrolysis is, however, so slow that solutions of sucrose can sit for years with negligible change. If the enzyme sucrase is added, however, the reaction will proceed rapidly. Hydrolysis can also be accelerated with acids, such as cream of tartar or lemon juice, both weak acids. Likewise, gastric acidity converts sucrose to glucose and fructose during digestion, the bond between them being an acetal bond which can be broken by an acid. Given (higher) heats of combustion of 1349.6 kcal/mol for sucrose, 673.0 for glucose, and 675.6 for fructose, hydrolysis releases about per mole of sucrose, or about 3 small calories per gram of product. Synthesis and biosynthesis of sucrose The biosynthesis of sucrose proceeds via the precursors UDP-glucose and fructose 6-phosphate, catalyzed by the enzyme sucrose-6-phosphate synthase. The energy for the reaction is gained by the cleavage of uridine diphosphate (UDP). Sucrose is formed by plants, algae and cyanobacteria but not by other organisms. Sucrose is the end product of photosynthesis and is found naturally in many food plants along with the monosaccharide fructose. In many fruits, such as pineapple and apricot, sucrose is the main sugar. In others, such as grapes and pears, fructose is the main sugar. Chemical synthesis After numerous unsuccessful attempts by others, Raymond Lemieux and George Huber succeeded in synthesizing sucrose from acetylated glucose and fructose in 1953. Sources In nature, sucrose is present in many plants, and in particular their roots, fruits and nectars, because it serves as a way to store energy, primarily from photosynthesis. Many mammals, birds, insects and bacteria accumulate and feed on the sucrose in plants and for some it is their main food source. Although honeybees consume sucrose, the honey they produce consists primarily of fructose and glucose, with only trace amounts of sucrose. As fruits ripen, their sucrose content usually rises sharply, but some fruits contain almost no sucrose at all. This includes grapes, cherries, blueberries, blackberries, figs, pomegranates, tomatoes, avocados, lemons and limes. Sucrose is a naturally occurring sugar, but with the advent of industrialization, it has been increasingly refined and consumed in all kinds of processed foods. Production History of sucrose refinement The production of table sugar has a long history. Some scholars claim Indians discovered how to crystallize sugar during the Gupta dynasty, around CE 350. 
Other scholars point to ancient Chinese manuscripts, dated to the 8th century BCE, which contain one of the earliest historical mentions of sugar cane and note that Chinese knowledge of sugar cane was derived from India. By about 500 BCE, residents of modern-day India began making sugar syrup, cooling it in large flat bowls to produce raw sugar crystals that were easier to store and transport. In the local Indian language, these crystals were given a name that is the source of the word candy. The army of Alexander the Great was halted on the banks of the river Indus by the refusal of his troops to go further east. They saw people in the Indian subcontinent growing sugarcane and making "granulated, salt-like sweet powder", known locally by a name that was rendered into Greek. On their return journey, the Greek soldiers carried back some of the "honey-bearing reeds". Sugarcane remained a limited crop for over a millennium. Sugar was a rare commodity and traders of sugar became wealthy. Venice, at the height of its financial power, was the chief sugar-distributing center of Europe. The Moors started producing it in Sicily and Spain. Only after the Crusades did it begin to rival honey as a sweetener in Europe. The Spanish began cultivating sugarcane in the West Indies in 1506 (Cuba in 1523). The Portuguese first cultivated sugarcane in Brazil in 1532. Sugar remained a luxury in much of the world until the 18th century; only the wealthy could afford it. In the 18th century, the demand for table sugar boomed in Europe, and by the 19th century it had become regarded as a human necessity. The use of sugar grew from tea to cakes, confectionery and chocolates. Suppliers marketed sugar in novel forms, such as solid cones, which required consumers to use a sugar nip, a pliers-like tool, to break off pieces. The demand for cheaper table sugar drove, in part, the colonization of tropical islands and nations where labor-intensive sugarcane plantations and table sugar manufacturing could thrive. Growing sugar cane in hot, humid climates, and producing table sugar in high-temperature sugar mills, was harsh, inhumane work. The demand for cheap labor for this work, in part, first drove the slave trade from Africa (in particular West Africa), followed by the indentured labor trade from South Asia (in particular India). Millions of slaves, followed by millions of indentured laborers, were brought into the Caribbean, Indian Ocean, Pacific Islands, East Africa, Natal, the north and eastern parts of South America, and southeast Asia. The modern ethnic mix of many nations, settled in the last two centuries, has been influenced by table sugar. Beginning in the late 18th century, the production of sugar became increasingly mechanized. The steam engine first powered a sugar mill in Jamaica in 1768, and, soon after, steam replaced direct firing as the source of process heat. During the same century, Europeans began experimenting with sugar production from other crops. Andreas Marggraf identified sucrose in beet root and his student Franz Achard built a sugar beet processing factory in Silesia (Prussia). The beet-sugar industry took off during the Napoleonic Wars, when France and the continent were cut off from Caribbean sugar. In 2009, about 20 percent of the world's sugar was produced from beets. Today, a large beet refinery producing around 1,500 tonnes of sugar a day needs a permanent workforce of about 150 for 24-hour production. Trends Table sugar (sucrose) comes from plant sources.
Two important sugar crops predominate: sugarcane (Saccharum spp.) and sugar beets (Beta vulgaris), in which sugar can account for 12% to 20% of the plant's dry weight. Minor commercial sugar crops include the date palm (Phoenix dactylifera), sorghum (Sorghum vulgare), and the sugar maple (Acer saccharum). Sucrose is obtained by extraction of these crops with hot water; concentration of the extract gives syrups, from which solid sucrose can be crystallized. In 2017, worldwide production of table sugar amounted to 185 million tonnes. Most cane sugar comes from countries with warm climates, because sugarcane does not tolerate frost. Sugar beets, on the other hand, grow only in cooler temperate regions and do not tolerate extreme heat. About 80 percent of sucrose is derived from sugarcane, the rest almost all from sugar beets. In mid-2018, India and Brazil had about the same production of sugar – 34 million tonnes – followed by the European Union, Thailand, and China as the major producers. India, the European Union, and China were the leading domestic consumers of sugar in 2018. Beet sugar comes from regions with cooler climates: northwest and eastern Europe, northern Japan, plus some areas in the United States (including California). In the northern hemisphere, the beet-growing season ends with the start of harvesting around September. Harvesting and processing continues until March in some cases. The availability of processing plant capacity and the weather both influence the duration of harvesting and processing – the industry can store harvested beets until processed, but a frost-damaged beet becomes effectively unprocessable. The United States sets high sugar prices to support its producers, with the effect that many former purchasers of sugar have switched to corn syrup (beverage manufacturers) or moved out of the country (candy manufacturers). The low prices of glucose syrups produced from wheat and corn (maize) threaten the traditional sugar market. Used in combination with artificial sweeteners, they can allow drink manufacturers to produce very low-cost goods. Types Cane Since the 6th century BCE, cane sugar producers have crushed the harvested vegetable material from sugarcane in order to collect and filter the juice. They then treat the liquid, often with lime (calcium oxide), to remove impurities and then neutralize it. Boiling the juice then allows the sediment to settle to the bottom for dredging out, while the scum rises to the surface for skimming off. In cooling, the liquid crystallizes, usually in the process of stirring, to produce sugar crystals. Centrifuges usually remove the uncrystallized syrup. The producers can then either sell the sugar product for use as is, or process it further to produce lighter grades. The later processing may take place in another factory in another country. Sugarcane is a major component of Brazilian agriculture; the country is the world's largest producer of sugarcane and its derivative products, such as crystallized sugar and ethanol (ethanol fuel). Beet Beet sugar producers slice the washed beets, then extract the sugar with hot water in a "diffuser". An alkaline solution ("milk of lime" and carbon dioxide from the lime kiln) then serves to precipitate impurities (see carbonatation). After filtration, evaporation concentrates the juice to a content of about 70% solids, and controlled crystallisation extracts the sugar. A centrifuge removes the sugar crystals from the liquid, which gets recycled in the crystalliser stages. 
When economic constraints prevent the removal of more sugar, the manufacturer discards the remaining liquid, now known as molasses, or sells it on to producers of animal feed. Sieving the resultant white sugar produces different grades for selling. Cane versus beet It is difficult to distinguish between fully refined sugar produced from beet and cane. One way is by isotope analysis of carbon. Cane uses C4 carbon fixation, and beet uses C3 carbon fixation, resulting in a different ratio of 13C and 12C isotopes in the sucrose. Tests are used to detect fraudulent abuse of European Union subsidies or to aid in the detection of adulterated fruit juice. Sugar cane tolerates hot climates better, but the production of sugar cane needs approximately four times as much water as the production of sugar beet. As a result, some countries that traditionally produced cane sugar (such as Egypt) have built new beet sugar factories since about 2008. Some sugar factories process both sugar cane and sugar beets and extend their processing period in that way. The production of sugar leaves residues that differ substantially depending on the raw materials used and on the place of production. While cane molasses is often used in food preparation, humans find molasses from sugar beets unpalatable, and it consequently ends up mostly as industrial fermentation feedstock (for example in alcohol distilleries), or as animal feed. Once dried, either type of molasses can serve as fuel for burning. Pure beet sugar is difficult to find, so labelled, in the marketplace. Although some makers label their product clearly as "pure cane sugar", beet sugar is almost always labeled simply as sugar or pure sugar. Interviews with the five major beet sugar-producing companies revealed that many store brands or "private label" sugar products are pure beet sugar. The lot code can be used to identify the company and the plant from which the sugar came, enabling beet sugar to be identified if the codes are known. Culinary sugars Mill white Mill white, also called plantation white, crystal sugar or superior sugar is produced from raw sugar. It is exposed to sulfur dioxide during the production to reduce the concentration of color compounds and helps prevent further color development during the crystallization process. Although common to sugarcane-growing areas, this product does not store or ship well. After a few weeks, its impurities tend to promote discoloration and clumping; therefore this type of sugar is generally limited to local consumption. Blanco directo Blanco directo, a white sugar common in India and other south Asian countries, is produced by precipitating many impurities out of cane juice using phosphoric acid and calcium hydroxide, similar to the carbonatation technique used in beet sugar refining. Blanco directo is more pure than mill white sugar, but less pure than white refined. White refined White refined is the most common form of sugar in North America and Europe. Refined sugar is made by dissolving and purifying raw sugar using phosphoric acid similar to the method used for blanco directo, a carbonatation process involving calcium hydroxide and carbon dioxide, or by various filtration strategies. It is then further purified by filtration through a bed of activated carbon or bone char. Beet sugar refineries produce refined white sugar directly without an intermediate raw stage. 
White refined sugar is typically sold as granulated sugar, which has been dried to prevent clumping and comes in various crystal sizes for home and industrial use: Coarse-grain, such as sanding sugar (also called "pearl sugar", "decorating sugar", nibbed sugar or sugar nibs) is a coarse grain sugar used to add sparkle and flavor atop baked goods and candies. Its large reflective crystals will not dissolve when subjected to heat. Granulated, familiar as table sugar, with a grain size about 0.5 mm across. "Sugar cubes" are lumps for convenient consumption produced by mixing granulated sugar with sugar syrup. Caster (0.35 mm), a very fine sugar in Britain and other Commonwealth countries, so-named because the grains are small enough to fit through a sugar caster which is a small vessel with a perforated top, from which to sprinkle sugar at table. Commonly used in baking and mixed drinks, it is sold as "superfine" sugar in the United States. Because of its fineness, it dissolves faster than regular white sugar and is especially useful in meringues and cold liquids. Caster sugar can be prepared at home by grinding granulated sugar for a couple of minutes in a mortar or food processor. Powdered, 10X sugar, confectioner's sugar (0.060 mm), or icing sugar (0.024 mm), produced by grinding sugar to a fine powder. The manufacturer may add a small amount of anticaking agent to prevent clumping — either corn starch (1% to 3%) or tri-calcium phosphate. Brown sugar comes either from the late stages of cane sugar refining, when sugar forms fine crystals with significant molasses content, or from coating white refined sugar with a cane molasses syrup (blackstrap molasses). Brown sugar's color and taste become stronger with increasing molasses content, as do its moisture-retaining properties. Brown sugars also tend to harden if exposed to the atmosphere, although proper handling can reverse this. Measurement Dissolved sugar content Scientists and the sugar industry use degrees Brix (symbol °Bx), introduced by Adolf Brix, as units of measurement of the mass ratio of dissolved substance to water in a liquid. A 25 °Bx sucrose solution has 25 grams of sucrose per 100 grams of liquid; or, to put it another way, 25 grams of sucrose sugar and 75 grams of water exist in the 100 grams of solution. The Brix degrees are measured using an infrared sensor. This measurement does not equate to Brix degrees from a density or refractive index measurement, because it will specifically measure dissolved sugar concentration instead of all dissolved solids. When using a refractometer, one should report the result as "refractometric dried substance" (RDS). One might speak of a liquid as having 20 °Bx RDS. This refers to a measure of percent by weight of total dried solids and, although not technically the same as Brix degrees determined through an infrared method, renders an accurate measurement of sucrose content, since sucrose in fact forms the majority of dried solids. The advent of in-line infrared Brix measurement sensors has made measuring the amount of dissolved sugar in products economical using a direct measurement. Consumption Refined sugar was a luxury before the 18th century. It became widely popular in the 18th century, then graduated to becoming a necessary food in the 19th century. This evolution of taste and demand for sugar as an essential food ingredient unleashed major economic and social changes. 
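To make the degrees Brix bookkeeping above concrete, the short sketch below treats °Bx simply as grams of dissolved sucrose per 100 grams of solution, as in the 25 °Bx example; the function names are illustrative, and no attempt is made to model a refractometer or infrared sensor reading.

# Sketch only: degrees Brix as a mass ratio (grams of sucrose per 100 g of solution).
def degrees_brix(sucrose_g, water_g):
    """Brix value of a solution prepared from the given masses of sucrose and water."""
    return 100.0 * sucrose_g / (sucrose_g + water_g)

def sucrose_content(solution_g, brix):
    """Grams of sucrose contained in a solution of known total mass and Brix value."""
    return solution_g * brix / 100.0

print(degrees_brix(25.0, 75.0))      # 25.0, the example quoted above
print(sucrose_content(500.0, 25.0))  # 125.0 g of sucrose in 500 g of a 25 degree Bx syrup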
Eventually, table sugar became sufficiently cheap and common enough to influence standard cuisine and flavored drinks. Sucrose forms a major element in confectionery and desserts. Cooks use it for sweetening. It can also act as a food preservative when used in sufficient concentrations, and thus is an important ingredient in the production of fruit preserves. Sucrose is important to the structure of many foods, including biscuits and cookies, cakes and pies, candy, and ice cream and sorbets. It is a common ingredient in many processed and so-called "junk foods". Nutritional information Fully refined sugar is 99.9% sucrose, thus providing only carbohydrate as dietary nutrient and 390 kilocalories per 100 g serving (table). There are no micronutrients of significance in fully refined sugar (table). Metabolism of sucrose In humans and other mammals, sucrose is broken down into its constituent monosaccharides, glucose and fructose, by sucrase or isomaltase glycoside hydrolases, which are located in the membrane of the microvilli lining the duodenum. The resulting glucose and fructose molecules are then rapidly absorbed into the bloodstream. In bacteria and some animals, sucrose is digested by the enzyme invertase. Sucrose is an easily assimilated macronutrient that provides a quick source of energy, provoking a rapid rise in blood glucose upon ingestion. Sucrose, as a pure carbohydrate, has an energy content of 3.94 kilocalories per gram (or 17 kilojoules per gram). If consumed excessively, sucrose may contribute to the development of metabolic syndrome, including increased risk for type 2 diabetes, insulin resistance, weight gain and obesity in adults and children. Tooth decay Tooth decay (dental caries) has become a pronounced health hazard associated with the consumption of sugars, especially sucrose. Oral bacteria such as Streptococcus mutans live in dental plaque and metabolize any free sugars (not just sucrose, but also glucose, lactose, fructose, and cooked starches) into lactic acid. The resultant lactic acid lowers the pH of the tooth's surface, stripping it of minerals in the process known as tooth decay. All 6-carbon sugars and disaccharides based on 6-carbon sugars can be converted by dental plaque bacteria into acid that demineralizes teeth, but sucrose may be uniquely useful to Streptococcus sanguinis (formerly Streptococcus sanguis) and Streptococcus mutans. Sucrose is the only dietary sugar that can be converted to sticky glucans (dextran-like polysaccharides) by extracellular enzymes. These glucans allow the bacteria to adhere to the tooth surface and to build up thick layers of plaque. The anaerobic conditions deep in the plaque encourage the formation of acids, which leads to carious lesions. Thus, sucrose could enable S. mutans, S. sanguinis and many other species of bacteria to adhere strongly and resist natural removal, e.g. by flow of saliva, although they are easily removed by brushing. The glucans and levans (fructose polysaccharides) produced by the plaque bacteria also act as a reserve food supply for the bacteria. Such a special role of sucrose in the formation of tooth decay is much more significant in light of the almost universal use of sucrose as the most desirable sweetening agent. Widespread replacement of sucrose by high-fructose corn syrup (HFCS) has not diminished the danger from sucrose. 
If smaller amounts of sucrose are present in the diet, they will still be sufficient for the development of thick, anaerobic plaque and plaque bacteria will metabolise other sugars in the diet, such as the glucose and fructose in HFCS. Glycemic index Sucrose is a disaccharide made up of 50% glucose and 50% fructose and has a glycemic index of 65. Sucrose is digested rapidly, but has a relatively low glycemic index due to its content of fructose, which has a minimal effect on blood glucose. As with other sugars, sucrose is digested into its components via the enzyme sucrase to glucose (blood sugar). The glucose component is transported into the blood where it serves immediate metabolic demands, or is converted and reserved in the liver as glycogen. Gout The occurrence of gout is connected with an excess production of uric acid. A diet rich in sucrose may lead to gout as it raises the level of insulin, which prevents excretion of uric acid from the body. As the concentration of uric acid in the body increases, so does the concentration of uric acid in the joint liquid and beyond a critical concentration, the uric acid begins to precipitate into crystals. Researchers have implicated sugary drinks high in fructose in a surge in cases of gout. Sucrose intolerance UN dietary recommendation In 2015, the World Health Organization published a new guideline on sugars intake for adults and children, as a result of an extensive review of the available scientific evidence by a multidisciplinary group of experts. The guideline recommends that both adults and children ensure their intake of free sugars (monosaccharides and disaccharides added to foods and beverages by the manufacturer, cook or consumer, and sugars naturally present in honey, syrups, fruit juices and fruit juice concentrates) is less than 10% of total energy intake. A level below 5% of total energy intake brings additional health benefits, especially with regards to dental caries. Religious concerns The sugar refining industry often uses bone char (calcinated animal bones) for decolorizing. About 25% of sugar produced in the U.S. is processed using bone char as a filter, the remainder being processed with activated carbon. As bone char does not seem to remain in finished sugar, Jewish religious leaders consider sugar filtered through it to be pareve, meaning that it is neither meat nor dairy and may be used with either type of food. However, the bone char must source to a kosher animal (e.g. cow, sheep) for the sugar to be kosher. Trade and economics One of the most widely traded commodities in the world throughout history, sugar accounts for around 2% of the global dry cargo market. International sugar prices show great volatility, ranging from around 3 cents to over 60 cents per pound in the 50 years. About 100 of the world's 180 countries produce sugar from beet or cane, a few more refine raw sugar to produce white sugar, and all countries consume sugar. Consumption of sugar ranges from around per person per annum in Ethiopia to around in Belgium. Consumption per capita rises with income per capita until it reaches a plateau of around per person per year in middle income countries. Many countries subsidize sugar production heavily. The European Union, the United States, Japan, and many developing countries subsidize domestic production and maintain high tariffs on imports. 
Sugar prices in these countries have often been up to triple the prices on the international market; with world market sugar futures prices strong, such domestic prices were typically double world prices. Within international trade bodies, especially in the World Trade Organization (WTO), the "G20" countries led by Brazil have long argued that, because these sugar markets in essence exclude cane sugar imports, the G20 sugar producers receive lower prices than they would under free trade. While both the European Union and United States maintain trade agreements whereby certain developing and least developed countries (LDCs) can sell certain quantities of sugar into their markets, free of the usual import tariffs, countries outside these preferred trade régimes have complained that these arrangements violate the "most favoured nation" principle of international trade. This has led to numerous tariffs and levies in the past. In 2004, the WTO sided with a group of cane sugar exporting nations (led by Brazil and Australia) and ruled illegal the EU sugar-régime and the accompanying ACP-EU Sugar Protocol, under which a group of African, Caribbean, and Pacific countries had been granted preferential access to the European sugar market. In response to this and to other rulings of the WTO, and owing to internal pressures against the EU sugar-régime, the European Commission proposed on 22 June 2005 a radical reform of the EU sugar-régime that cut prices by 39% and eliminated all EU sugar exports. In 2007, it seemed that the U.S. Sugar Program could become the next target for reform. However, some commentators expected heavy lobbying from the U.S. sugar industry, which donated $2.7 million to U.S. House and Senate incumbents in the 2006 U.S. election, more than any other group of U.S. food-growers. Especially prominent among sugar lobbyists were the Fanjul Brothers, so-called "sugar barons" who made the single largest individual contributions of soft money to both the Democratic and Republican parties in the U.S. political system. Small quantities of sugar, especially specialty grades of sugar, reach the market as 'fair trade' commodities; the fair trade system produces and sells these products with the understanding that a larger-than-usual fraction of the revenue will support small farmers in the developing world. However, whilst the Fairtrade Foundation offers a premium of $60.00 per tonne to small farmers for sugar branded as "Fairtrade", government schemes such as the U.S. Sugar Program and the ACP-EU Sugar Protocol offer premiums of around $400.00 per tonne above world market prices. The EU announced on 14 September 2007 that it had offered "to eliminate all duties and quotas on the import of sugar into the EU". References Further reading External links 3D images of sucrose archived from the original CDC – NIOSH Pocket Guide to Chemical Hazards Disaccharides Types of sugar
Sucrose
[ "Chemistry" ]
6,654
[ "Glycobiology", "Carbohydrates", "Glycosides", "Biomolecules" ]
50,604
https://en.wikipedia.org/wiki/Interacting%20boson%20model
The interacting boson model (IBM) is a model in nuclear physics in which nucleons (protons or neutrons) pair up, essentially acting as a single particle with boson properties, with integral spin of either 2 (d-boson) or 0 (s-boson). These correspond to a quintuplet and a singlet, i.e. 6 states in total. It is sometimes known as the interacting boson approximation (IBA). The IBM1/IBM-I model treats both types of nucleons the same and considers only pairs of nucleons coupled to total angular momentum 0 and 2, called, respectively, s and d bosons. The IBM2/IBM-II model treats protons and neutrons separately. Both models are restricted to nuclei with even numbers of protons and neutrons. The model can be used to predict vibrational and rotational modes of non-spherical nuclei. History The model was invented by Akito Arima and Francesco Iachello in 1974 while they were working at the Kernfysisch Versneller Instituut (KVI) in Groningen, Netherlands. KVI is now the property of the Universitair Medisch Centrum Groningen (https://umcgresearch.org/). See also Liquid drop model Nuclear shell model References Further reading Evolution of shapes in even–even nuclei using the standard interacting boson model Nuclear physics
Interacting boson model
[ "Physics" ]
291
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
50,609
https://en.wikipedia.org/wiki/Nuclear%20shell%20model
In nuclear physics, atomic physics, and nuclear chemistry, the nuclear shell model utilizes the Pauli exclusion principle to model the structure of atomic nuclei in terms of energy levels. The first shell model was proposed by Dmitri Ivanenko (together with E. Gapon) in 1932. The model was developed in 1949 following independent work by several physicists, most notably Maria Goeppert Mayer and J. Hans D. Jensen, who received the 1963 Nobel Prize in Physics for their contributions to this model, and Eugene Wigner, who received the Nobel Prize alongside them for his earlier groundlaying work on the atomic nuclei. The nuclear shell model is partly analogous to the atomic shell model, which describes the arrangement of electrons in an atom, in that a filled shell results in better stability. When adding nucleons (protons and neutrons) to a nucleus, there are certain points where the binding energy of the next nucleon is significantly less than the last one. This observation that there are specific magic quantum numbers of nucleons (2, 8, 20, 28, 50, 82, and 126) that are more tightly bound than the following higher number is the origin of the shell model. The shells for protons and neutrons are independent of each other. Therefore, there can exist both "magic nuclei", in which one nucleon type or the other is at a magic number, and "doubly magic quantum nuclei", where both are. Due to variations in orbital filling, the upper magic numbers are 126 and, speculatively, 184 for neutrons, but only 114 for protons, playing a role in the search for the so-called island of stability. Some semi-magic numbers have been found, notably Z = 40, which gives the nuclear shell filling for the various elements; 16 may also be a magic number. To get these numbers, the nuclear shell model starts with an average potential with a shape somewhere between the square well and the harmonic oscillator. To this potential, a spin-orbit term is added. Even so, the total perturbation does not coincide with the experiment, and an empirical spin-orbit coupling must be added with at least two or three different values of its coupling constant, depending on the nuclei being studied. The magic numbers of nuclei, as well as other properties, can be arrived at by approximating the model with a three-dimensional harmonic oscillator plus a spin–orbit interaction. A more realistic but complicated potential is known as the Woods–Saxon potential. Modified harmonic oscillator model Consider a three-dimensional harmonic oscillator. This would give, for example, in the first three levels ("ℓ" is the angular momentum quantum number): Nuclei are built by adding protons and neutrons. These will always fill the lowest available level, with the first two protons filling level zero, the next six protons filling level one, and so on. As with electrons in the periodic table, protons in the outermost shell will be relatively loosely bound to the nucleus if there are only a few protons in that shell because they are farthest from the center of the nucleus. Therefore, nuclei with a full outer proton shell will have a higher nuclear binding energy than other nuclei with a similar total number of protons. The same is true for neutrons. This means that the magic numbers are expected to be those in which all occupied shells are full. In accordance with the experiment, we get 2 (level 0 full) and 8 (levels 0 and 1 full) for the first two numbers. However, the full set of magic numbers does not turn out correctly. 
These can be computed as follows: In a three-dimensional harmonic oscillator the total degeneracy of states at level n is . Due to the spin, the degeneracy is doubled and is . Thus, the magic numbers would befor all integer k. This gives the following magic numbers: 2, 8, 20, 40, 70, 112, ..., which agree with experiment only in the first three entries. These numbers are twice the tetrahedral numbers (1, 4, 10, 20, 35, 56, ...) from the Pascal Triangle. In particular, the first six shells are: level 0: 2 states (ℓ = 0) = 2. level 1: 6 states (ℓ = 1) = 6. level 2: 2 states (ℓ = 0) + 10 states (ℓ = 2) = 12. level 3: 6 states (ℓ = 1) + 14 states (ℓ = 3) = 20. level 4: 2 states (ℓ = 0) + 10 states (ℓ = 2) + 18 states (ℓ = 4) = 30. level 5: 6 states (ℓ = 1) + 14 states (ℓ = 3) + 22 states (ℓ = 5) = 42. where for every ℓ there are 2ℓ+1 different values of ml and 2 values of ms, giving a total of 4ℓ+2 states for every specific level. These numbers are twice the values of triangular numbers from the Pascal Triangle: 1, 3, 6, 10, 15, 21, .... Including a spin-orbit interaction We next include a spin–orbit interaction. First, we have to describe the system by the quantum numbers j, mj and parity instead of ℓ, ml and ms, as in the hydrogen–like atom. Since every even level includes only even values of ℓ, it includes only states of even (positive) parity. Similarly, every odd level includes only states of odd (negative) parity. Thus we can ignore parity in counting states. The first six shells, described by the new quantum numbers, are level 0 (n = 0): 2 states (j = ). Even parity. level 1 (n = 1): 2 states (j = ) + 4 states (j = ) = 6. Odd parity. level 2 (n = 2): 2 states (j = ) + 4 states (j = ) + 6 states (j = ) = 12. Even parity. level 3 (n = 3): 2 states (j = ) + 4 states (j = ) + 6 states (j = ) + 8 states (j = ) = 20. Odd parity. level 4 (n = 4): 2 states (j = ) + 4 states (j = ) + 6 states (j = ) + 8 states (j = ) + 10 states (j = ) = 30. Even parity. level 5 (n = 5): 2 states (j = ) + 4 states (j = ) + 6 states (j = ) + 8 states (j = ) + 10 states (j = ) + 12 states (j = ) = 42. Odd parity. where for every j there are different states from different values of mj. Due to the spin–orbit interaction, the energies of states of the same level but with different j will no longer be identical. This is because in the original quantum numbers, when is parallel to , the interaction energy is positive, and in this case j = ℓ + s = ℓ + . When is anti-parallel to (i.e. aligned oppositely), the interaction energy is negative, and in this case . Furthermore, the strength of the interaction is roughly proportional to ℓ. For example, consider the states at level 4: The 10 states with j = come from ℓ = 4 and s parallel to ℓ. Thus they have a positive spin–orbit interaction energy. The 8 states with j = came from ℓ = 4 and s anti-parallel to ℓ. Thus they have a negative spin–orbit interaction energy. The 6 states with j = came from ℓ = 2 and s parallel to ℓ. Thus they have a positive spin–orbit interaction energy. However, its magnitude is half compared to the states with j = . The 4 states with j = came from ℓ = 2 and s anti-parallel to ℓ. Thus they have a negative spin–orbit interaction energy. However, its magnitude is half compared to the states with j = . The 2 states with j = came from ℓ = 0 and thus have zero spin–orbit interaction energy. Changing the profile of the potential The harmonic oscillator potential grows infinitely as the distance from the center r goes to infinity. 
A more realistic potential, such as the Woods–Saxon potential, would approach a constant at this limit. One main consequence is that the average radius of nucleons' orbits would be larger in a realistic potential. This leads to a reduced term in the Laplace operator of the Hamiltonian operator. Another main difference is that orbits with high average radii, such as those with high n or high ℓ, will have a lower energy than in a harmonic oscillator potential. Both effects lead to a reduction in the energy levels of high ℓ orbits. Predicted magic numbers Together with the spin–orbit interaction, and for appropriate magnitudes of both effects, one is led to the following qualitative picture: at all levels, the highest j states have their energies shifted downwards, especially for high n (where the highest j is high). This is both due to the negative spin–orbit interaction energy and to the reduction in energy resulting from deforming the potential into a more realistic one. The second-to-highest j states, on the contrary, have their energy shifted up by the first effect and down by the second effect, leading to a small overall shift. The shifts in the energy of the highest j states can thus bring the energy of states of one level closer to the energy of states of a lower level. The "shells" of the shell model are then no longer identical to the levels denoted by n, and the magic numbers are changed. We may then suppose that the highest j states for n = 3 have an intermediate energy between the average energies of n = 2 and n = 3, and suppose that the highest j states for larger n (at least up to n = 7) have an energy closer to the average energy of . Then we get the following shells (see the figure) 1st shell: 2 states (n = 0, j = ). 2nd shell: 6 states (n = 1, j = or ). 3rd shell: 12 states (n = 2, j = , or ). 4th shell: 8 states (n = 3, j = ). 5th shell: 22 states (n = 3, j = , or ; n = 4, j = ). 6th shell: 32 states (n = 4, j = , , or ; n = 5, j = ). 7th shell: 44 states (n = 5, j = , , , or ; n = 6, j = ). 8th shell: 58 states (n = 6, j = , , , , or ; n = 7, j = ). and so on. Note that the numbers of states after the 4th shell are doubled triangular numbers . Spin–orbit coupling causes so-called 'intruder levels' to drop down from the next higher shell into the structure of the previous shell. The sizes of the intruders are such that the resulting shell sizes are themselves increased to the next higher doubled triangular numbers from those of the harmonic oscillator. For example, 1f2p has 20 nucleons, and spin–orbit coupling adds 1g9/2 (10 nucleons), leading to a new shell with 30 nucleons. 1g2d3s has 30 nucleons, and adding intruder 1h11/2 (12 nucleons) yields a new shell size of 42, and so on. The magic numbers are then 2 and so on. This gives all the observed magic numbers and also predicts a new one (the so-called island of stability) at the value of 184 (for protons, the magic number 126 has not been observed yet, and more complicated theoretical considerations predict the magic number to be 114 instead). Another way to predict magic (and semi-magic) numbers is by laying out the idealized filling order (with spin–orbit splitting but energy levels not overlapping). For consistency, s is split into and components with 2 and 0 members respectively. Taking the leftmost and rightmost total counts within sequences bounded by / here gives the magic and semi-magic numbers. 
s(2,0)/p(4,2) > 2,2/6,8, so (semi)magic numbers 2,2/6,8 d(6,4):s(2,0)/f(8,6):p(4,2) > 14,18:20,20/28,34:38,40, so 14,20/28,40 g(10,8):d(6,4):s(2,0)/h(12,10):f(8,6):p(4,2) > 50,58,64,68,70,70/82,92,100,106,110,112, so 50,70/82,112 i(14,12):g(10,8):d(6,4):s(2,0)/j(16,14):h(12,10):f(8,6):p(4,2) > 126,138,148,156,162,166,168,168/184,198,210,220,228,234,238,240, so 126,168/184,240 The rightmost predicted magic numbers of each pair within the quartets bisected by / are double tetrahedral numbers from the Pascal Triangle: 2, 8, 20, 40, 70, 112, 168, 240 are 2x 1, 4, 10, 20, 35, 56, 84, 120, ..., and the leftmost members of the pairs differ from the rightmost by double triangular numbers: 2 − 2 = 0, 8 − 6 = 2, 20 − 14 = 6, 40 − 28 = 12, 70 − 50 = 20, 112 − 82 = 30, 168 − 126 = 42, 240 − 184 = 56, where 0, 2, 6, 12, 20, 30, 42, 56, ... are 2 × 0, 1, 3, 6, 10, 15, 21, 28, ... . Other properties of nuclei This model also predicts or explains with some success other properties of nuclei, in particular spin and parity of nuclei ground states, and to some extent their excited nuclear states as well. Take (oxygen-17) as an example: Its nucleus has eight protons filling the first three proton "shells", eight neutrons filling the first three neutron "shells", and one extra neutron. All protons in a complete proton shell have zero total angular momentum, since their angular momenta cancel each other. The same is true for neutrons. All protons in the same level (n) have the same parity (either +1 or −1), and since the parity of a pair of particles is the product of their parities, an even number of protons from the same level (n) will have +1 parity. Thus, the total angular momentum of the eight protons and the first eight neutrons is zero, and their total parity is +1. This means that the spin (i.e. angular momentum) of the nucleus, as well as its parity, are fully determined by that of the ninth neutron. This one is in the first (i.e. lowest energy) state of the 4th shell, which is a d-shell (ℓ = 2), and since p = (−1), this gives the nucleus an overall parity of +1. This 4th d-shell has a j = , thus the nucleus of is expected to have positive parity and total angular momentum , which indeed it has. The rules for the ordering of the nucleus shells are similar to Hund's Rules of the atomic shells, however, unlike its use in atomic physics, the completion of a shell is not signified by reaching the next n, as such the shell model cannot accurately predict the order of excited nuclei states, though it is very successful in predicting the ground states. The order of the first few terms are listed as follows: 1s, 1p, 1p, 1d, 2s, 1d... For further clarification on the notation refer to the article on the RussellSaunders term symbol. For nuclei farther from the magic quantum numbers one must add the assumption that due to the relation between the strong nuclear force and total angular momentum, protons or neutrons with the same n tend to form pairs of opposite angular momentum. Therefore, a nucleus with an even number of protons and an even number of neutrons has 0 spin and positive parity. A nucleus with an even number of protons and an odd number of neutrons (or vice versa) has the parity of the last neutron (or proton), and the spin equal to the total angular momentum of this neutron (or proton). By "last" we mean the properties coming from the highest energy level. 
In the case of a nucleus with an odd number of protons and an odd number of neutrons, one must consider the total angular momentum and parity of both the last neutron and the last proton. The nucleus parity will be a product of theirs, while the nucleus spin will be one of the possible results of the sum of their angular momenta (with other possible results being excited states of the nucleus). The ordering of angular momentum levels within each shell is according to the principles described above – due to spin–orbit interaction, with high angular momentum states having their energies shifted downwards due to the deformation of the potential (i.e. moving from a harmonic oscillator potential to a more realistic one). For nucleon pairs, however, it is often energetically favourable to be at high angular momentum, even if its energy level for a single nucleon would be higher. This is due to the relation between angular momentum and the strong nuclear force. The nuclear magnetic moment of neutrons and protons is partly predicted by this simple version of the shell model. The magnetic moment is calculated through j, ℓ and s of the "last" nucleon, but nuclei are not in states of well-defined ℓ and s. Furthermore, for odd-odd nuclei, one has to consider the two "last" nucleons, as in deuterium. Therefore, one gets several possible answers for the nuclear magnetic moment, one for each possible combined ℓ and s state, and the real state of the nucleus is a superposition of them. Thus the real (measured) nuclear magnetic moment is somewhere in between the possible answers. The electric dipole of a nucleus is always zero, because its ground state has a definite parity. The matter density (ψ, where ψ is the wavefunction) is always invariant under parity. This is usually the situation with the atomic electric dipole. Higher electric and magnetic multipole moments cannot be predicted by this simple version of the shell model for reasons similar to those in the case of deuterium. Including residual interactions For nuclei having two or more valence nucleons (i.e. nucleons outside a closed shell), a residual two-body interaction must be added. This residual term comes from the part of the inter-nucleon interaction not included in the approximative average potential. Through this inclusion, different shell configurations are mixed, and the energy degeneracy of states corresponding to the same configuration is broken. These residual interactions are incorporated through shell model calculations in a truncated model space (or valence space). This space is spanned by a basis of many-particle states where only single-particle states in the model space are active. The Schrödinger equation is solved on this basis, using an effective Hamiltonian specifically suited for the model space. This Hamiltonian is different from the one of free nucleons as, among other things, it has to compensate for excluded configurations. One can do away with the average potential approximation entirely by extending the model space to the previously inert core and treating all single-particle states up to the model space truncation as active. This forms the basis of the no-core shell model, which is an ab initio method. It is necessary to include a three-body interaction in such calculations to achieve agreement with experiments. 
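The simple filling rules described earlier lend themselves to a short calculation. The Python sketch below fills single-particle levels in the conventional schematic ordering (tabulated here only up to the 50-nucleon gap) and reads off the predicted ground-state spin and parity for even-even and odd-A nuclei; the level list and function names are illustrative assumptions, and odd-odd nuclei are left out because, as noted above, they require coupling the last proton and the last neutron.

# Sketch only: naive shell-model ground-state spin/parity from the schematic level ordering.
# Each level is (name, orbital angular momentum l, total angular momentum j); capacity 2j + 1.
LEVELS = [
    ("1s1/2", 0, 0.5), ("1p3/2", 1, 1.5), ("1p1/2", 1, 0.5),
    ("1d5/2", 2, 2.5), ("2s1/2", 0, 0.5), ("1d3/2", 2, 1.5),
    ("1f7/2", 3, 3.5), ("2p3/2", 1, 1.5), ("1f5/2", 3, 2.5),
    ("2p1/2", 1, 0.5), ("1g9/2", 4, 4.5),
]

def level_of_last_nucleon(count):
    """Level occupied by the count-th proton or neutron under strict sequential filling."""
    filled = 0
    for name, l, j in LEVELS:
        filled += int(2 * j + 1)
        if count <= filled:
            return name, l, j
    raise ValueError("nucleon number beyond the levels tabulated in this sketch")

def ground_state(protons, neutrons):
    """Even-even nuclei -> 0+; odd-A nuclei -> spin/parity of the single unpaired nucleon."""
    if protons % 2 == 0 and neutrons % 2 == 0:
        return "0+"
    if protons % 2 == 1 and neutrons % 2 == 1:
        raise ValueError("odd-odd nuclei need the proton-neutron coupling described above")
    odd = protons if protons % 2 == 1 else neutrons
    name, l, j = level_of_last_nucleon(odd)
    parity = "+" if l % 2 == 0 else "-"
    return "%d/2%s (unpaired nucleon in %s)" % (int(2 * j), parity, name)

print(ground_state(8, 9))   # oxygen-17: 5/2+ with the odd neutron in 1d5/2, as in the text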
Collective rotation and the deformed potential In 1953 the first experimental examples were found of rotational bands in nuclei, with their energy levels following the same J(J+1) pattern of energies as in rotating molecules. Quantum mechanically, it is impossible to have a collective rotation of a sphere, so this implied that the shape of these nuclei was non-spherical. In principle, these rotational states could have been described as coherent superpositions of particle-hole excitations in the basis consisting of single-particle states of the spherical potential. But in reality, the description of these states in this manner is intractable, due to a large number of valence particles—and this intractability was even greater in the 1950s when computing power was extremely rudimentary. For these reasons, Aage Bohr, Ben Mottelson, and Sven Gösta Nilsson constructed models in which the potential was deformed into an ellipsoidal shape. The first successful model of this type is now known as the Nilsson model. It is essentially the harmonic oscillator model described in this article, but with anisotropy added, so the oscillator frequencies along the three Cartesian axes are not all the same. Typically the shape is a prolate ellipsoid, with the axis of symmetry taken to be z. Because the potential is not spherically symmetric, the single-particle states are not states of good angular momentum J. However, a Lagrange multiplier , known as a "cranking" term, can be added to the Hamiltonian. Usually the angular frequency vector ω is taken to be perpendicular to the symmetry axis, although tilted-axis cranking can also be considered. Filling the single-particle states up to the Fermi level produces states whose expected angular momentum along the cranking axis is the desired value. Related models Igal Talmi developed a method to obtain the information from experimental data and use it to calculate and predict energies which have not been measured. This method has been successfully used by many nuclear physicists and has led to a deeper understanding of nuclear structure. The theory which gives a good description of these properties was developed. This description turned out to furnish the shell model basis of the elegant and successful interacting boson model. A model derived from the nuclear shell model is the alpha particle model developed by Henry Margenau, Edward Teller, J. K. Pering, T. H. Skyrme, also sometimes called the Skyrme model. Note, however, that the Skyrme model is usually taken to be a model of the nucleon itself, as a "cloud" of mesons (pions), rather than as a model of the nucleus as a "cloud" of alpha particles. See also Nuclear structure Table of nuclides Liquid drop model Isomeric shift Interacting boson model References Further reading External links Nuclear physics German inventions
Nuclear shell model
[ "Physics" ]
4,842
[ "Nuclear physics" ]
50,627
https://en.wikipedia.org/wiki/Conformal%20map
In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths. More formally, let and be open subsets of . A function is called conformal (or angle-preserving) at a point if it preserves angles between directed curves through , as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature. The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix. For mappings in two dimensions, the (orientation-preserving) conformal mappings are precisely the locally invertible complex analytic functions. In three and higher dimensions, Liouville's theorem sharply limits the conformal mappings to a few types. The notion of conformality generalizes in a natural way to maps between Riemannian or semi-Riemannian manifolds. In two dimensions If is an open subset of the complex plane , then a function is conformal if and only if it is holomorphic and its derivative is everywhere non-zero on . If is antiholomorphic (conjugate to a holomorphic function), it preserves angles but reverses their orientation. In the literature, there is another definition of conformal: a mapping which is one-to-one and holomorphic on an open set in the plane. The open mapping theorem forces the inverse function (defined on the image of ) to be holomorphic. Thus, under this definition, a map is conformal if and only if it is biholomorphic. The two definitions for conformal maps are not equivalent. Being one-to-one and holomorphic implies having a non-zero derivative. In fact, we have the following relation, the inverse function theorem: where . However, the exponential function is a holomorphic function with a nonzero derivative, but is not one-to-one since it is periodic. The Riemann mapping theorem, one of the profound results of complex analysis, states that any non-empty open simply connected proper subset of admits a bijective conformal map to the open unit disk in . Informally, this means that any blob can be transformed into a perfect circle by some conformal map. Global conformal maps on the Riemann sphere A map of the Riemann sphere onto itself is conformal if and only if it is a Möbius transformation. The complex conjugate of a Möbius transformation preserves angles, but reverses the orientation. For example, circle inversions. Conformality with respect to three types of angles In plane geometry there are three types of angles that may be preserved in a conformal map. Each is hosted by its own real algebra, ordinary complex numbers, split-complex numbers, and dual numbers. The conformal maps are described by linear fractional transformations in each case. In three or more dimensions Riemannian geometry In Riemannian geometry, two Riemannian metrics and on a smooth manifold are called conformally equivalent if for some positive function on . The function is called the conformal factor. A diffeomorphism between two Riemannian manifolds is called a conformal map if the pulled back metric is conformally equivalent to the original one. 
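A small numerical experiment makes the two-dimensional characterization above tangible: for a holomorphic function with non-vanishing derivative, the angle between two curves through a point survives the mapping. In the sketch below the function f(z) = z^3 + 2z, the base point and the two directions are arbitrary illustrative choices, and the comparison uses finite differences rather than the derivative itself.

import cmath

def angle_between(u, v):
    """Unsigned angle between two non-zero complex numbers regarded as plane vectors."""
    return abs(cmath.phase(v / u))

f = lambda z: z**3 + 2*z        # holomorphic; f'(z) = 3z**2 + 2 is non-zero at the point below
z0 = 1 + 1j
d1, d2 = cmath.exp(0.3j), cmath.exp(1.2j)   # two tangent directions through z0
h = 1e-6                                     # small step for finite-difference image directions

angle_before = angle_between(d1, d2)
angle_after = angle_between(f(z0 + h * d1) - f(z0), f(z0 + h * d2) - f(z0))
print(angle_before, angle_after)   # both are about 0.9 rad; they agree up to the step-size error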
For example, stereographic projection of a sphere onto the plane augmented with a point at infinity is a conformal map. One can also define a conformal structure on a smooth manifold, as a class of conformally equivalent Riemannian metrics. Euclidean space A classical theorem of Joseph Liouville shows that there are far fewer conformal maps in higher dimensions than in two dimensions. Any conformal map from an open subset of Euclidean space into the same Euclidean space of dimension three or greater can be composed from three types of transformations: a homothety, an isometry, and a special conformal transformation. For linear transformations, a conformal map may only be composed of homothety and isometry, and is called a conformal linear transformation. Applications Applications of conformal mapping exist in aerospace engineering, in biomedical sciences (including brain mapping and genetic mapping), in applied math (for geodesics and in geometry), in earth sciences (including geophysics, geography, and cartography), in engineering, and in electronics. Cartography In cartography, several named map projections, including the Mercator projection and the stereographic projection are conformal. The preservation of compass directions makes them useful in marine navigation. Physics and engineering Conformal mappings are invaluable for solving problems in engineering and physics that can be expressed in terms of functions of a complex variable yet exhibit inconvenient geometries. By choosing an appropriate mapping, the analyst can transform the inconvenient geometry into a much more convenient one. For example, one may wish to calculate the electric field, , arising from a point charge located near the corner of two conducting planes separated by a certain angle (where is the complex coordinate of a point in 2-space). This problem per se is quite clumsy to solve in closed form. However, by employing a very simple conformal mapping, the inconvenient angle is mapped to one of precisely radians, meaning that the corner of two planes is transformed to a straight line. In this new domain, the problem (that of calculating the electric field impressed by a point charge located near a conducting wall) is quite easy to solve. The solution is obtained in this domain, , and then mapped back to the original domain by noting that was obtained as a function (viz., the composition of and ) of , whence can be viewed as , which is a function of , the original coordinate basis. Note that this application is not a contradiction to the fact that conformal mappings preserve angles, they do so only for points in the interior of their domain, and not at the boundary. Another example is the application of conformal mapping technique for solving the boundary value problem of liquid sloshing in tanks. If a function is harmonic (that is, it satisfies Laplace's equation ) over a plane domain (which is two-dimensional), and is transformed via a conformal map to another plane domain, the transformation is also harmonic. For this reason, any function which is defined by a potential can be transformed by a conformal map and still remain governed by a potential. Examples in physics of equations defined by a potential include the electromagnetic field, the gravitational field, and, in fluid dynamics, potential flow, which is an approximation to fluid flow assuming constant density, zero viscosity, and irrotational flow. 
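The corner problem sketched above can be made concrete in a few lines. In the sketch below, a wedge of opening angle alpha between the two conducting planes is straightened by the power map w = z**(pi/alpha), and the potential of the charge near the resulting flat wall is written with a single image charge; the chosen angle, charge position and unit system are illustrative assumptions rather than values from the discussion above, and physical prefactors are omitted.

import cmath
import math

def wedge_to_half_plane(z, alpha):
    """Conformal map w = z**(pi/alpha): the wedge 0 <= arg(z) <= alpha goes to Im(w) >= 0."""
    return z ** (math.pi / alpha)

def half_plane_potential(w, w_charge, q=1.0):
    """2-D potential (arbitrary units) of a charge above a grounded wall, via an image charge."""
    return -q * (math.log(abs(w - w_charge)) - math.log(abs(w - w_charge.conjugate())))

alpha = 2 * math.pi / 3                       # a 120-degree corner, chosen for illustration
z_charge = 0.5 * cmath.exp(1j * alpha / 2)    # charge placed on the bisector of the wedge
w_charge = wedge_to_half_plane(z_charge, alpha)

z_test = 0.8 * cmath.exp(1j * alpha / 4)      # any point inside the wedge
phi = half_plane_potential(wedge_to_half_plane(z_test, alpha), w_charge)
print(phi)   # potential at z_test; the value carries back to the wedge because harmonicity is preserved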
One example of a fluid dynamic application of a conformal map is the Joukowsky transform that can be used to examine the field of flow around a Joukowsky airfoil. Conformal maps are also valuable in solving nonlinear partial differential equations in some specific geometries. Such analytic solutions provide a useful check on the accuracy of numerical simulations of the governing equation. For example, in the case of very viscous free-surface flow around a semi-infinite wall, the domain can be mapped to a half-plane in which the solution is one-dimensional and straightforward to calculate. For discrete systems, Noury and Yang presented a way to convert discrete systems root locus into continuous root locus through a well-know conformal mapping in geometry (aka inversion mapping). Maxwell's equations Maxwell's equations are preserved by Lorentz transformations which form a group including circular and hyperbolic rotations. The latter are sometimes called Lorentz boosts to distinguish them from circular rotations. All these transformations are conformal since hyperbolic rotations preserve hyperbolic angle, (called rapidity) and the other rotations preserve circular angle. The introduction of translations in the Poincaré group again preserves angles. A larger group of conformal maps for relating solutions of Maxwell's equations was identified by Ebenezer Cunningham (1908) and Harry Bateman (1910). Their training at Cambridge University had given them facility with the method of image charges and associated methods of images for spheres and inversion. As recounted by Andrew Warwick (2003) Masters of Theory: Each four-dimensional solution could be inverted in a four-dimensional hyper-sphere of pseudo-radius in order to produce a new solution. Warwick highlights this "new theorem of relativity" as a Cambridge response to Einstein, and as founded on exercises using the method of inversion, such as found in James Hopwood Jeans textbook Mathematical Theory of Electricity and Magnetism. General relativity In general relativity, conformal maps are the simplest and thus most common type of causal transformations. Physically, these describe different universes in which all the same events and interactions are still (causally) possible, but a new additional force is necessary to affect this (that is, replication of all the same trajectories would necessitate departures from geodesic motion because the metric tensor is different). It is often used to try to make models amenable to extension beyond curvature singularities, for example to permit description of the universe even before the Big Bang. See also Biholomorphic map Carathéodory's theorem – A conformal map extends continuously to the boundary Penrose diagram Schwarz–Christoffel mapping – a conformal transformation of the upper half-plane onto the interior of a simple polygon Special linear group – transformations that preserve volume (as opposed to angles) and orientation References Further reading Constantin Carathéodory (1932) Conformal Representation, Cambridge Tracts in Mathematics and Physics External links Interactive visualizations of many conformal maps Conformal Maps by Michael Trott, Wolfram Demonstrations Project. Conformal Mapping images of current flow in different geometries without and with magnetic field by Gerhard Brunthaler. Conformal Transformation: from Circle to Square. Online Conformal Map Grapher. Joukowski Transform Interactive WebApp Riemannian geometry Map projections Angle
Conformal map
[ "Physics", "Mathematics" ]
2,089
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Map projections", "Coordinate systems", "Wikipedia categories named after physical quantities", "Angle" ]
50,702
https://en.wikipedia.org/wiki/Environmental%20engineering
Environmental engineering is a professional engineering discipline related to environmental science. It encompasses broad scientific topics like chemistry, biology, ecology, geology, hydraulics, hydrology, microbiology, and mathematics to create solutions that protect the health of living organisms and improve the quality of the environment. Environmental engineering is a sub-discipline of civil engineering and chemical engineering; within civil engineering, it has traditionally been focused mainly on sanitary engineering. Environmental engineering applies scientific and engineering principles to improve and maintain the environment in order to protect human health, protect nature's beneficial ecosystems, and enhance the environment-related quality of human life. Environmental engineers devise solutions for wastewater management, water and air pollution control, recycling, waste disposal, and public health. They design municipal water supply and industrial wastewater treatment systems, and design plans to prevent waterborne diseases and improve sanitation in urban, rural and recreational areas. They evaluate hazardous-waste management systems, assess the severity of such hazards, advise on treatment and containment, and develop regulations to prevent mishaps. They implement environmental engineering law, as in assessing the environmental impact of proposed construction projects. Environmental engineers study the effect of technological advances on the environment, addressing local and worldwide environmental issues such as acid rain, global warming, ozone depletion, water pollution and air pollution from automobile exhausts and industrial sources. Most jurisdictions impose licensing and registration requirements for qualified environmental engineers. Etymology The word environmental has its root in the late 19th-century French word environ (verb), meaning to encircle or to encompass. The word environment was used by Carlyle in 1827 to refer to the aggregate of conditions in which a person or thing lives. The meaning shifted again in 1956 when it was used in the ecological sense, where ecology is the branch of science dealing with the relationship of living things to their environment. The second part of the phrase environmental engineer originates from Latin roots and was used in 14th-century French as engignour, meaning a constructor of military engines such as trebuchets, harquebuses, longbows, cannons, catapults, ballistas, stirrups, armour as well as other deadly or bellicose contraptions. The word engineer was not used to refer to public works until the 16th century, and it likely entered the popular vernacular as meaning a contriver of public works during John Smeaton's time. History Ancient civilizations Environmental engineering is a name for work that has been done since early civilizations, as people learned to modify and control the environmental conditions to meet their needs. As people recognized that their health was related to the quality of their environment, they built systems to improve it. The ancient Indus Valley Civilization (3300 B.C.E. to 1300 B.C.E.) had advanced control over its water resources. The public work structures found at various sites in the area include wells, public baths, water storage tanks, a drinking water system, and a city-wide sewage collection system. They also had an early canal irrigation system enabling large-scale agriculture.
From 4000 to 2000 B.C.E., many civilizations had drainage systems and some had sanitation facilities, including the Mesopotamian Empire, Mohenjo-Daro, Egypt, Crete, and the Orkney Islands in Scotland. The Greeks also had aqueducts and sewer systems that used rain and wastewater to irrigate and fertilize fields. The first aqueduct in Rome was constructed in 312 B.C.E., and the Romans continued to construct aqueducts for irrigation and safe urban water supply during droughts. They also built an underground sewer system as early as the 7th century B.C.E. that fed into the Tiber River, draining marshes to create farmland as well as removing sewage from the city. Modern era Very little change was seen from the decline of the Roman Empire until the 19th century, when improvement efforts increasingly focused on public health. Modern environmental engineering began in London in the mid-19th century when Joseph Bazalgette designed the first major sewerage system following the Great Stink. At the time, the city's sewers discharged raw sewage into the River Thames, which also supplied the majority of the city's drinking water, leading to outbreaks of cholera. The introduction of drinking water treatment and sewage treatment in industrialized countries reduced waterborne diseases from leading causes of death to rarities. The field emerged as a separate academic discipline during the middle of the 20th century in response to widespread public concern about water and air pollution and other forms of environmental degradation. As society and technology grew more complex, they increasingly produced unintended effects on the natural environment. One example is the widespread application of the pesticide DDT to control agricultural pests in the years following World War II. The story of DDT, as vividly told in Rachel Carson's Silent Spring (1962), is considered the birth of the modern environmental movement, which led to the modern field of "environmental engineering." Education Many universities offer environmental engineering programs through either the department of civil engineering or the department of chemical engineering, and some programs also include electronics-oriented projects for environmental monitoring and control. Environmental engineers in a civil engineering program often focus on hydrology, water resources management, bioremediation, and water and wastewater treatment plant design. Environmental engineers in a chemical engineering program tend to focus on environmental chemistry, advanced air and water treatment technologies, and separation processes. Some subdivisions of environmental engineering include natural resources engineering and agricultural engineering. Courses for students fall into a few broad classes: Mechanical engineering courses oriented towards designing machines and mechanical systems for environmental use such as water and wastewater treatment facilities, pumping stations, garbage segregation plants, and other mechanical facilities. Environmental engineering or environmental systems courses oriented towards a civil engineering approach in which structures and the landscape are constructed to blend with or protect the environment. Environmental chemistry, sustainable chemistry or environmental chemical engineering courses oriented towards understanding the effects of chemicals in the environment, including any mining processes, pollutants, and also biochemical processes.
Environmental technology courses oriented towards producing electronic or electrical graduates capable of developing devices and artifacts able to monitor, measure, model and control environmental impact, including monitoring and managing energy generation from renewable sources. Curriculum The following topics make up a typical curriculum in environmental engineering: Mass and Energy transfer Environmental chemistry Inorganic chemistry Organic Chemistry Nuclear Chemistry Growth models Resource consumption Population growth Economic growth Risk assessment Hazard identification Dose-response Assessment Exposure assessment Risk characterization Comparative risk analysis Water pollution Water resources and pollutants Oxygen demand Pollutant transport Water and waste water treatment Air pollution Industry, transportation, commercial and residential emissions Criteria and toxic air pollutants Pollution modelling (e.g. Atmospheric dispersion modeling) Pollution control Air pollution and meteorology Global change Greenhouse effect and global temperature Carbon, nitrogen, and oxygen cycle IPCC emissions scenarios Oceanic changes (ocean acidification, other effects of global warming on oceans) and changes in the stratosphere (see Physical impacts of climate change) Solid waste management and resource recovery Life cycle assessment Source reduction Collection and transfer operations Recycling Waste-to-energy conversion Landfill Applications Water supply and treatment Environmental engineers evaluate the water balance within a watershed and determine the available water supply, the water needed for various needs in that watershed, the seasonal cycles of water movement through the watershed and they develop systems to store, treat, and convey water for various uses. Water is treated to achieve water quality objectives for the end uses. In the case of a potable water supply, water is treated to minimize the risk of infectious disease transmission, the risk of non-infectious illness, and to create a palatable water flavor. Water distribution systems are designed and built to provide adequate water pressure and flow rates to meet various end-user needs such as domestic use, fire suppression, and irrigation. Wastewater treatment There are numerous wastewater treatment technologies. A wastewater treatment train can consist of a primary clarifier system to remove solid and floating materials, a secondary treatment system consisting of an aeration basin followed by flocculation and sedimentation or an activated sludge system and a secondary clarifier, a tertiary biological nitrogen removal system, and a final disinfection process. The aeration basin/activated sludge system removes organic material by growing bacteria (activated sludge). The secondary clarifier removes the activated sludge from the water. The tertiary system, although not always included due to costs, is becoming more prevalent to remove nitrogen and phosphorus and to disinfect the water before discharge to a surface water stream or ocean outfall. Air pollution management Scientists have developed air pollution dispersion models to evaluate the concentration of a pollutant at a receptor or the impact on overall air quality from vehicle exhausts and industrial flue gas stack emissions. To some extent, this field overlaps the desire to decrease carbon dioxide and other greenhouse gas emissions from combustion processes. 
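One widely taught idealization of the dispersion models mentioned above is the Gaussian plume. The sketch below is only an illustration of that idea, not any specific regulatory model: the emission rate, wind speed, stack height, and power-law dispersion coefficients are invented for the example.

```python
import numpy as np

def plume_concentration(x, Q=50.0, u=4.0, H=40.0):
    """Ground-level, centerline concentration downwind of a single stack using the
    textbook Gaussian plume form C = Q / (pi * u * sy * sz) * exp(-H^2 / (2 sz^2)).
    Q [g/s], u [m/s], H [m] and the dispersion coefficients below are assumed values."""
    sigma_y = 0.08 * x ** 0.90      # horizontal spread for an assumed stability class [m]
    sigma_z = 0.06 * x ** 0.85      # vertical spread for an assumed stability class [m]
    return (Q / (np.pi * u * sigma_y * sigma_z)) * np.exp(-H**2 / (2 * sigma_z**2))

for x in (200.0, 500.0, 1000.0, 2000.0):    # downwind distances in metres
    print(x, plume_concentration(x))
```

The concentration first rises with distance (as the plume reaches the ground) and then falls as the plume spreads, the qualitative behavior such receptor-impact models are used to quantify.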
Environmental impact assessment and mitigation Environmental engineers apply scientific and engineering principles to evaluate if there are likely to be any adverse impacts to water quality, air quality, habitat quality, flora and fauna, agricultural capacity, traffic, ecology, and noise. If impacts are expected, they then develop mitigation measures to limit or prevent such impacts. An example of a mitigation measure would be the creation of wetlands in a nearby location to mitigate the filling in of wetlands necessary for a road development if it is not possible to reroute the road. In the United States, the practice of environmental assessment was formally initiated on January 1, 1970, the effective date of the National Environmental Policy Act (NEPA). Since that time, more than 100 developing and developed nations either have planned specific analogous laws or have adopted procedure used elsewhere. NEPA is applicable to all federal agencies in the United States. Regulatory agencies Environmental Protection Agency The U.S. Environmental Protection Agency (EPA) is one of the many agencies that work with environmental engineers to solve critical issues. An essential component of EPA's mission is to protect and improve air, water, and overall environmental quality to avoid or mitigate the consequences of harmful effects. See also Associations References Further reading Davis, M. L. and D. A. Cornwell, (2006) Introduction to environmental engineering (4th ed.) McGraw-Hill Chemical engineering Civil engineering Environmental science Engineering disciplines Environmental terminology
Environmental engineering
[ "Chemistry", "Engineering", "Environmental_science" ]
2,085
[ "Chemical engineering", "Construction", "Civil engineering", "nan", "Environmental engineering" ]
50,705
https://en.wikipedia.org/wiki/Construction%20engineering
Construction engineering, also known as construction operations, is a professional subdiscipline of civil engineering that deals with the designing, planning, construction, and operations management of infrastructure such as roadways, tunnels, bridges, airports, railroads, facilities, buildings, dams, utilities and other projects. Construction engineers learn some of the same design principles as civil engineers, as well as project management. At the educational level, civil engineering students concentrate primarily on design work, which is more analytical, gearing them toward a career as a design professional. This essentially requires them to take a multitude of challenging engineering science and design courses as part of obtaining a 4-year accredited degree. Education for construction engineers is primarily focused on construction procedures, methods, costs, schedules and personnel management. Their primary concern is to deliver a project on time, within budget and of the desired quality. Regarding educational requirements, construction engineering students take basic design courses in civil engineering, as well as construction management courses. Work activities Working in a sub-discipline of civil engineering, construction engineers apply the business, technical and management skills obtained from their undergraduate degree to oversee projects that include bridges, buildings and housing projects. Construction engineers are heavily involved in the design and in the management and allocation of funds in these projects. They are charged with risk analysis, costing and planning. A career in design work does require a professional engineer (PE) license. Individuals who pursue this career path are strongly advised to sit for the Engineer in Training exam (EIT), also referred to as the Fundamentals of Engineering exam (FE), while in college, as it takes about five years (four years in the USA) of post-graduate experience to obtain the PE license. Some states have recently relaxed the prerequisite of four years of post-graduation work experience, so that an EIT may be eligible to take the PE exam as little as six months after passing the FE exam. Entry-level construction engineers typically work as project engineers or assistant project engineers. They are responsible for preparing purchasing requisitions, processing change orders, preparing monthly budgeting reports and handling meeting minutes. The construction management position does not necessarily require a PE license; however, possessing one does make the individual more marketable, as the PE license allows the individual to sign off on temporary structure designs. Abilities Construction engineers are problem solvers. They contribute to the creation of infrastructure that best meets the unique demands of its environment. They must be able to understand infrastructure life cycles. Compared with design engineers, construction engineers bring their own perspectives to solving technical challenges with clarity and imagination. While individuals considering this career path should certainly have a strong understanding of mathematics and science, many other skills are also highly desirable, including critical and analytical thinking, time management, people management and good communication skills.
Educational requirements Individuals looking to obtain a construction engineering degree must first ensure that the program is accredited by the Accreditation Board for Engineering and Technology (ABET). ABET accreditation is assurance that a college or university program meets the quality standards established by the profession for which it prepares its students. In the US there are currently twenty-five programs that exist in the entire country so careful college consideration is advised. A typical construction engineering curriculum is a mixture of engineering mechanics, engineering design, construction management and general science and mathematics. This usually leads to a Bachelor of Science degree. The B.S. degree along with some design or construction experience is sufficient for most entry-level positions. Graduate schools may be an option for those who want to go further in depth of the construction and engineering subjects taught at the undergraduate level. In most cases construction engineering graduates look to either civil engineering, engineering management or business administration as a possible graduate degree. Job prospects Job prospects for construction engineers generally have a strong cyclical variation. For example, starting in 2008 and continuing until at least 2011, job prospects have been poor due to the collapse of housing bubbles in many parts of the world. This sharply reduced demand for construction, forced construction professionals towards infrastructure construction and therefore increased the competition faced by established and new construction engineers. This increased competition and a core reduction in quantity demand is in parallel with a possible shift in the demand for construction engineers due to the automation of many engineering tasks, overall resulting in reduced prospects for construction engineers. In early 2010, the United States construction industry had a 27% unemployment rate, this is nearly three times higher than the 9.7% national average unemployment rate. The construction unemployment rate (including tradesmen) is comparable to the United States 1933 unemployment rate—the lowest point of the Great Depression—of 25%. Remuneration The average salary for a civil engineer in the UK depends on the sector and more specifically the level of experience of the individual. A 2010 survey of the remuneration and benefits of those occupying jobs in construction and the built environment industry showed that the average salary of a civil engineer in the UK is £29,582. In the United States, as of May 2013, the average was $85,640. The average salary varies depending on experience, for example the average annual salary for a civil engineer with between 3 and 6 years' experience is £23,813. For those with between 14 and 20 years' experience the average is £38,214. See also Architectural engineering Building officials Civil engineering Constructability Construction communication Construction estimating software Construction law Construction management Cost engineering Cost overrun Earthquake engineering Engineering, procurement and construction (EPC) Engineering, procurement, construction and installation, (EPCI) Index of construction articles International Building Code List of BIM software Military engineering Quantity surveyor Structural engineering Work breakdown structure References Construction and extraction occupations Engineering disciplines Civil engineering Building engineering Construction management Transportation engineering
Construction engineering
[ "Engineering" ]
1,154
[ "Building engineering", "Industrial engineering", "Construction", "Transportation engineering", "Civil engineering", "nan", "Construction management", "Architecture" ]
50,719
https://en.wikipedia.org/wiki/Quantum%20harmonic%20oscillator
The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary smooth potential can usually be approximated as a harmonic potential at the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact, analytical solution is known. One-dimensional harmonic oscillator Hamiltonian and energy eigenstates The Hamiltonian of the particle is: where is the particle's mass, is the force constant, is the angular frequency of the oscillator, is the position operator (given by in the coordinate basis), and is the momentum operator (given by in the coordinate basis). The first term in the Hamiltonian represents the kinetic energy of the particle, and the second term represents its potential energy, as in Hooke's law. The time-independent Schrödinger equation (TISE) is, where denotes a real number (which needs to be determined) that will specify a time-independent energy level, or eigenvalue, and the solution denotes that level's energy eigenstate. Then solve the differential equation representing this eigenvalue problem in the coordinate basis, for the wave function , using a spectral method. It turns out that there is a family of solutions. In this basis, they amount to Hermite functions, The functions Hn are the physicists' Hermite polynomials, The corresponding energy levels are The expectation values of position and momentum combined with variance of each variable can be derived from the wavefunction to understand the behavior of the energy eigenkets. They are shown to be and owing to the symmetry of the problem, whereas: The variance in both position and momentum are observed to increase for higher energy levels. The lowest energy level has value of which is its minimum value due to uncertainty relation and also corresponds to a gaussian wavefunction. This energy spectrum is noteworthy for three reasons. First, the energies are quantized, meaning that only discrete energy values (integer-plus-half multiples of ) are possible; this is a general feature of quantum-mechanical systems when a particle is confined. Second, these discrete energy levels are equally spaced, unlike in the Bohr model of the atom, or the particle in a box. Third, the lowest achievable energy (the energy of the state, called the ground state) is not equal to the minimum of the potential well, but above it; this is called zero-point energy. Because of the zero-point energy, the position and momentum of the oscillator in the ground state are not fixed (as they would be in a classical oscillator), but have a small range of variance, in accordance with the Heisenberg uncertainty principle. The ground state probability density is concentrated at the origin, which means the particle spends most of its time at the bottom of the potential well, as one would expect for a state with little energy. As the energy increases, the probability density peaks at the classical "turning points", where the state's energy coincides with the potential energy. (See the discussion below of the highly excited states.) This is consistent with the classical harmonic oscillator, in which the particle spends more of its time (and is therefore more likely to be found) near the turning points, where it is moving the slowest. The correspondence principle is thus satisfied. 
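The spectrum described above is easy to cross-check numerically. The sketch below is an assumption-laden illustration (grid size, box length, and the use of NumPy are choices made for the example): it discretizes the Hamiltonian on a grid in natural units ħ = m = ω = 1 and compares the lowest eigenvalues with n + 1/2.

```python
import numpy as np

# Natural units hbar = m = omega = 1, so the exact levels are n + 1/2.
N, L = 2000, 20.0                       # grid points and box length (assumed)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Second-order finite-difference kinetic energy plus the harmonic potential x^2 / 2.
main = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals = np.linalg.eigvalsh(H)[:5]
print(evals)                            # close to [0.5, 1.5, 2.5, 3.5, 4.5]
print(evals - (np.arange(5) + 0.5))     # small discretization errors
```

The equal spacing of the computed levels and the nonzero ground-state value illustrate the quantization and zero-point energy discussed above.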
Moreover, special nondispersive wave packets, with minimum uncertainty, called coherent states oscillate very much like classical objects, as illustrated in the figure; they are not eigenstates of the Hamiltonian. Ladder operator method The "ladder operator" method, developed by Paul Dirac, allows extraction of the energy eigenvalues without directly solving the differential equation. It is generalizable to more complicated problems, notably in quantum field theory. Following this approach, we define the operators and its adjoint , Note these operators classically are exactly the generators of normalized rotation in the phase space of and , i.e they describe the forwards and backwards evolution in time of a classical harmonic oscillator. These operators lead to the following representation of and , The operator is not Hermitian, since itself and its adjoint are not equal. The energy eigenstates , when operated on by these ladder operators, give From the relations above, we can also define a number operator , which has the following property: The following commutators can be easily obtained by substituting the canonical commutation relation, and the Hamilton operator can be expressed as so the eigenstates of are also the eigenstates of energy. To see that, we can apply to a number state : Using the property of the number operator : we get: Thus, since solves the TISE for the Hamiltonian operator , is also one of its eigenstates with the corresponding eigenvalue: QED. The commutation property yields and similarly, This means that acts on to produce, up to a multiplicative constant, , and acts on to produce . For this reason, is called an annihilation operator ("lowering operator"), and a creation operator ("raising operator"). The two operators together are called ladder operators. Given any energy eigenstate, we can act on it with the lowering operator, , to produce another eigenstate with less energy. By repeated application of the lowering operator, it seems that we can produce energy eigenstates down to . However, since the smallest eigenvalue of the number operator is 0, and In this case, subsequent applications of the lowering operator will just produce zero, instead of additional energy eigenstates. Furthermore, we have shown above that Finally, by acting on |0⟩ with the raising operator and multiplying by suitable normalization factors, we can produce an infinite set of energy eigenstates such that which matches the energy spectrum given in the preceding section. Arbitrary eigenstates can be expressed in terms of |0⟩, Analytical questions The preceding analysis is algebraic, using only the commutation relations between the raising and lowering operators. Once the algebraic analysis is complete, one should turn to analytical questions. First, one should find the ground state, that is, the solution of the equation . In the position representation, this is the first-order differential equation whose solution is easily found to be the Gaussian Conceptually, it is important that there is only one solution of this equation; if there were, say, two linearly independent ground states, we would get two independent chains of eigenvectors for the harmonic oscillator. Once the ground state is computed, one can show inductively that the excited states are Hermite polynomials times the Gaussian ground state, using the explicit form of the raising operator in the position representation. 
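The ladder-operator algebra above can also be checked directly with finite matrices in the number basis. This is a sketch under an assumed Fock-space truncation; the canonical commutation relation only holds exactly away from the truncation edge.

```python
import numpy as np

N = 30                                        # truncated Fock-space dimension (assumed)
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)              # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T                             # creation operator
num = adag @ a                                # number operator, diag(0, 1, 2, ...)

comm = a @ adag - adag @ a                    # should equal the identity, except at the edge
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))   # True away from the truncation edge

H = num + 0.5 * np.eye(N)                     # H / (hbar * omega) = N + 1/2
print(np.diag(H)[:5])                         # [0.5 1.5 2.5 3.5 4.5]
```

Acting with the matrix `adag` on the first basis vector reproduces the chain of number states, mirroring the algebraic construction of the eigenstates from |0⟩.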
One can also prove that, as expected from the uniqueness of the ground state, the Hermite functions energy eigenstates constructed by the ladder method form a complete orthonormal set of functions. Explicitly connecting with the previous section, the ground state |0⟩ in the position representation is determined by , hence so that , and so on. Natural length and energy scales The quantum harmonic oscillator possesses natural scales for length and energy, which can be used to simplify the problem. These can be found by nondimensionalization. The result is that, if energy is measured in units of and distance in units of , then the Hamiltonian simplifies to while the energy eigenfunctions and eigenvalues simplify to Hermite functions and integers offset by a half, where are the Hermite polynomials. To avoid confusion, these "natural units" will mostly not be adopted in this article. However, they frequently come in handy when performing calculations, by bypassing clutter. For example, the fundamental solution (propagator) of , the time-dependent Schrödinger operator for this oscillator, simply boils down to the Mehler kernel, where . The most general solution for a given initial configuration then is simply Coherent states The coherent states (also known as Glauber states) of the harmonic oscillator are special nondispersive wave packets, with minimum uncertainty , whose observables' expectation values evolve like a classical system. They are eigenvectors of the annihilation operator, not the Hamiltonian, and form an overcomplete basis which consequentially lacks orthogonality. The coherent states are indexed by and expressed in the basis as Since coherent states are not energy eigenstates, their time evolution is not a simple shift in wavefunction phase. The time-evolved states are, however, also coherent states but with phase-shifting parameter instead: . Because and via the Kermack-McCrae identity, the last form is equivalent to a unitary displacement operator acting on the ground state: . Calculating the expectation values: where is the phase contributed by complex . These equations confirm the oscillating behavior of the particle. The uncertainties calculated using the numeric method are: which gives . Since the only wavefunction that can have lowest position-momentum uncertainty, , is a gaussian wavefunction, and since the coherent state wavefunction has minimum position-momentum uncertainty, we note that the general gaussian wavefunction in quantum mechanics has the form:Substituting the expectation values as a function of time, gives the required time varying wavefunction. The probability of each energy eigenstates can be calculated to find the energy distribution of the wavefunction: which corresponds to Poisson distribution. Highly excited states When is large, the eigenstates are localized into the classical allowed region, that is, the region in which a classical particle with energy can move. The eigenstates are peaked near the turning points: the points at the ends of the classically allowed region where the classical particle changes direction. This phenomenon can be verified through asymptotics of the Hermite polynomials, and also through the WKB approximation. The frequency of oscillation at is proportional to the momentum of a classical particle of energy and position . Furthermore, the square of the amplitude (determining the probability density) is inversely proportional to , reflecting the length of time the classical particle spends near . 
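Returning to the coherent states introduced above, their Poissonian number statistics and the relation ⟨N⟩ = |α|² can be verified in a few lines. The amplitude α and the Fock-space cutoff below are assumptions of the sketch.

```python
import numpy as np
from math import factorial

alpha, N = 2.0, 60                            # coherent-state amplitude and cutoff (assumed)
n = np.arange(N)
coeff = np.exp(-abs(alpha)**2 / 2) * np.array(
    [alpha**int(k) / np.sqrt(factorial(int(k))) for k in n])   # <n|alpha>

P = np.abs(coeff)**2                          # number distribution of the coherent state
poisson = np.exp(-abs(alpha)**2) * np.array(
    [abs(alpha)**(2 * int(k)) / factorial(int(k)) for k in n])

print(np.allclose(P, poisson))                # True: the distribution is Poissonian
print(P.sum(), (n * P).sum())                 # ~1 (normalization) and ~|alpha|^2 = 4
```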
The system behavior in a small neighborhood of the turning point does not have a simple classical explanation, but can be modeled using an Airy function. Using properties of the Airy function, one may estimate the probability of finding the particle outside the classically allowed region, to be approximately This is also given, asymptotically, by the integral Phase space solutions In the phase space formulation of quantum mechanics, eigenstates of the quantum harmonic oscillator in several different representations of the quasiprobability distribution can be written in closed form. The most widely used of these is for the Wigner quasiprobability distribution. The Wigner quasiprobability distribution for the energy eigenstate is, in the natural units described above, where Ln are the Laguerre polynomials. This example illustrates how the Hermite and Laguerre polynomials are linked through the Wigner map. Meanwhile, the Husimi Q function of the harmonic oscillator eigenstates have an even simpler form. If we work in the natural units described above, we have This claim can be verified using the Segal–Bargmann transform. Specifically, since the raising operator in the Segal–Bargmann representation is simply multiplication by and the ground state is the constant function 1, the normalized harmonic oscillator states in this representation are simply . At this point, we can appeal to the formula for the Husimi Q function in terms of the Segal–Bargmann transform. N-dimensional isotropic harmonic oscillator The one-dimensional harmonic oscillator is readily generalizable to dimensions, where . In one dimension, the position of the particle was specified by a single coordinate, . In dimensions, this is replaced by position coordinates, which we label . Corresponding to each position coordinate is a momentum; we label these . The canonical commutation relations between these operators are The Hamiltonian for this system is As the form of this Hamiltonian makes clear, the -dimensional harmonic oscillator is exactly analogous to independent one-dimensional harmonic oscillators with the same mass and spring constant. In this case, the quantities would refer to the positions of each of the particles. This is a convenient property of the potential, which allows the potential energy to be separated into terms depending on one coordinate each. This observation makes the solution straightforward. For a particular set of quantum numbers the energy eigenfunctions for the -dimensional oscillator are expressed in terms of the 1-dimensional eigenfunctions as: In the ladder operator method, we define sets of ladder operators, By an analogous procedure to the one-dimensional case, we can then show that each of the and operators lower and raise the energy by respectively. The Hamiltonian is This Hamiltonian is invariant under the dynamic symmetry group (the unitary group in dimensions), defined by where is an element in the defining matrix representation of . The energy levels of the system are As in the one-dimensional case, the energy is quantized. The ground state energy is times the one-dimensional ground energy, as we would expect using the analogy to independent one-dimensional oscillators. There is one further difference: in the one-dimensional case, each energy level corresponds to a unique quantum state. In -dimensions, except for the ground state, the energy levels are degenerate, meaning there are several states with the same energy. The degeneracy can be calculated relatively easily. 
As an example, consider the 3-dimensional case: Define . All states with the same will have the same energy. For a given , we choose a particular . Then . There are possible pairs . can take on the values to , and for each the value of is fixed. The degree of degeneracy therefore is: Formula for general and [ being the dimension of the symmetric irreducible -th power representation of the unitary group ]: The special case = 3, given above, follows directly from this general equation. This is however, only true for distinguishable particles, or one particle in dimensions (as dimensions are distinguishable). For the case of bosons in a one-dimension harmonic trap, the degeneracy scales as the number of ways to partition an integer using integers less than or equal to . This arises due to the constraint of putting quanta into a state ket where and , which are the same constraints as in integer partition. Example: 3D isotropic harmonic oscillator The Schrödinger equation for a particle in a spherically-symmetric three-dimensional harmonic oscillator can be solved explicitly by separation of variables. This procedure is analogous to the separation performed in the hydrogen-like atom problem, but with a different spherically symmetric potential where is the mass of the particle. Because will be used below for the magnetic quantum number, mass is indicated by , instead of , as earlier in this article. The solution to the equation is: where is a normalization constant; ; are generalized Laguerre polynomials; The order of the polynomial is a non-negative integer; is a spherical harmonic function; is the reduced Planck constant: The energy eigenvalue is The energy is usually described by the single quantum number Because is a non-negative integer, for every even we have and for every odd we have . The magnetic quantum number is an integer satisfying , so for every and ℓ there are 2ℓ + 1 different quantum states, labeled by . Thus, the degeneracy at level is where the sum starts from 0 or 1, according to whether is even or odd. This result is in accordance with the dimension formula above, and amounts to the dimensionality of a symmetric representation of , the relevant degeneracy group. Applications Harmonic oscillators lattice: phonons The notation of a harmonic oscillator can be extended to a one-dimensional lattice of many particles. Consider a one-dimensional quantum mechanical harmonic chain of N identical atoms. This is the simplest quantum mechanical model of a lattice, and we will see how phonons arise from it. The formalism that we will develop for this model is readily generalizable to two and three dimensions. As in the previous section, we denote the positions of the masses by , as measured from their equilibrium positions (i.e. if the particle is at its equilibrium position). In two or more dimensions, the are vector quantities. The Hamiltonian for this system is where is the (assumed uniform) mass of each atom, and and are the position and momentum operators for the i th atom and the sum is made over the nearest neighbors (nn). However, it is customary to rewrite the Hamiltonian in terms of the normal modes of the wavevector rather than in terms of the particle coordinates so that one can work in the more convenient Fourier space. We introduce, then, a set of "normal coordinates" , defined as the discrete Fourier transforms of the s, and "conjugate momenta" defined as the Fourier transforms of the s, The quantity will turn out to be the wave number of the phonon, i.e. 
2π divided by the wavelength. It takes on quantized values, because the number of atoms is finite. This preserves the desired commutation relations in either real space or wave vector space From the general result it is easy to show, through elementary trigonometry, that the potential energy term is where The Hamiltonian may be written in wave vector space as Note that the couplings between the position variables have been transformed away; if the s and s were hermitian (which they are not), the transformed Hamiltonian would describe uncoupled harmonic oscillators. The form of the quantization depends on the choice of boundary conditions; for simplicity, we impose periodic boundary conditions, defining the -th atom as equivalent to the first atom. Physically, this corresponds to joining the chain at its ends. The resulting quantization is The upper bound to comes from the minimum wavelength, which is twice the lattice spacing , as discussed above. The harmonic oscillator eigenvalues or energy levels for the mode are If we ignore the zero-point energy then the levels are evenly spaced at So an exact amount of energy , must be supplied to the harmonic oscillator lattice to push it to the next energy level. In analogy to the photon case when the electromagnetic field is quantised, the quantum of vibrational energy is called a phonon. All quantum systems show wave-like and particle-like properties. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described elsewhere. In the continuum limit, , , while is held fixed. The canonical coordinates devolve to the decoupled momentum modes of a scalar field, , whilst the location index (not the displacement dynamical variable) becomes the parameter argument of the scalar field, . Molecular vibrations The vibrations of a diatomic molecule are an example of a two-body version of the quantum harmonic oscillator. In this case, the angular frequency is given by where is the reduced mass and and are the masses of the two atoms. The Hooke's atom is a simple model of the helium atom using the quantum harmonic oscillator. Modelling phonons, as discussed above. A charge with mass in a uniform magnetic field is an example of a one-dimensional quantum harmonic oscillator: Landau quantization. See also Notes References Bibliography External links Quantum Harmonic Oscillator Rationale for choosing the ladder operators Live 3D intensity plots of quantum harmonic oscillator Driven and damped quantum harmonic oscillator (lecture notes of course "quantum optics in electric circuits") Quantum models Oscillators
Quantum harmonic oscillator
[ "Physics" ]
4,165
[ "Quantum models", "Quantum mechanics" ]
50,903
https://en.wikipedia.org/wiki/Wavelet
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times. Wavelets are termed a "brief oscillation". A taxonomy of wavelets has been established, based on the number and direction of its pulses. Wavelets are imbued with specific properties that make them useful for signal processing. For example, a wavelet could be created to have a frequency of middle C and a short duration of roughly one tenth of a second. If this wavelet were to be convolved with a signal created from the recording of a melody, then the resulting signal would be useful for determining when the middle C note appeared in the song. Mathematically, a wavelet correlates with a signal if a portion of the signal is similar. Correlation is at the core of many practical wavelet applications. As a mathematical tool, wavelets can be used to extract information from many kinds of data, including audio signals and images. Sets of wavelets are needed to analyze data fully. "Complementary" wavelets decompose a signal without gaps or overlaps so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms, where it is desirable to recover the original information with minimal loss. In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square-integrable functions. This is accomplished through coherent states. In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic bending pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. Multiple, closely spaced openings (e.g., a diffraction grating), can result in a complex pattern of varying intensity. Etymology The word wavelet has been used for decades in digital signal processing and exploration geophysics. The equivalent French word ondelette meaning "small wave" was used by Jean Morlet and Alex Grossmann in the early 1980s. Wavelet theory Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. Discrete wavelet transform (continuous in time) of a discrete-time (sampled) signal by using discrete-time filterbanks of dyadic (octave band) configuration is a wavelet approximation to that signal. The coefficients of such a filter bank are called the shift and scaling coefficients in wavelets nomenclature. These filterbanks may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters. The wavelets forming a continuous wavelet transform (CWT) are subject to the uncertainty principle of Fourier analysis respective sampling theory: given a signal with some event in it, one cannot assign simultaneously an exact time and frequency response scale to that event. 
The product of the uncertainties of time and frequency response scale has a lower bound. Thus, in the scaleogram of a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point. Also, discrete wavelet bases may be considered in the context of other forms of the uncertainty principle. Wavelet transforms are broadly divided into three classes: continuous, discrete and multiresolution-based. Continuous wavelet transforms (continuous shift and scale parameters) In continuous wavelet transforms, a given signal of finite energy is projected on a continuous family of frequency bands (or similar subspaces of the Lp function space L2(R) ). For instance the signal may be represented on every frequency band of the form [f, 2f] for all positive frequencies f > 0. Then, the original signal can be reconstructed by a suitable integration over all the resulting frequency components. The frequency bands or subspaces (sub-bands) are scaled versions of a subspace at scale 1. This subspace in turn is in most situations generated by the shifts of one generating function ψ in L2(R), the mother wavelet. For the example of the scale one frequency band [1, 2] this function is with the (normalized) sinc function. That, Meyer's, and two other examples of mother wavelets are: The subspace of scale a or frequency band [1/a, 2/a] is generated by the functions (sometimes called child wavelets) where a is positive and defines the scale and b is any real number and defines the shift. The pair (a, b) defines a point in the right halfplane R+ × R. The projection of a function x onto the subspace of scale a then has the form with wavelet coefficients For the analysis of the signal x, one can assemble the wavelet coefficients into a scaleogram of the signal. See a list of some Continuous wavelets. Discrete wavelet transforms (discrete shift and scale parameters, continuous in time) It is computationally impossible to analyze a signal using all wavelet coefficients, so one may wonder if it is sufficient to pick a discrete subset of the upper halfplane to be able to reconstruct a signal from the corresponding wavelet coefficients. One such system is the affine system for some real parameters a > 1, b > 0. The corresponding discrete subset of the halfplane consists of all the points (am, nb am) with m, n in Z. The corresponding child wavelets are now given as A sufficient condition for the reconstruction of any signal x of finite energy by the formula is that the functions form an orthonormal basis of L2(R). Multiresolution based discrete wavelet transforms (continuous in time) In any discretised wavelet transform, there are only a finite number of wavelet coefficients for each bounded rectangular region in the upper halfplane. Still, each coefficient requires the evaluation of an integral. In special situations this numerical complexity can be avoided if the scaled and shifted wavelets form a multiresolution analysis. This means that there has to exist an auxiliary function, the father wavelet φ in L2(R), and that a is an integer. A typical choice is a = 2 and b = 1. The most famous pair of father and mother wavelets is the Daubechies 4-tap wavelet. Note that not every orthonormal discrete wavelet basis can be associated to a multiresolution analysis; for example, the Journe wavelet admits no multiresolution analysis. 
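Before continuing with the multiresolution construction, the continuous wavelet coefficients defined earlier can be computed by brute force to build a scaleogram. The sketch below is illustrative only: the Mexican-hat (Ricker) mother wavelet, the test signal whose frequency jumps halfway through, the scale grid, and the sampling rate are all assumptions of the example.

```python
import numpy as np

fs = 1000                                          # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
x = np.where(t < 1, np.sin(2 * np.pi * 20 * t),    # 20 Hz, then 80 Hz (assumed test signal)
             np.sin(2 * np.pi * 80 * t))

def ricker(tau):
    """Mexican-hat mother wavelet (second derivative of a Gaussian), up to normalization."""
    return (1 - tau**2) * np.exp(-tau**2 / 2)

scales = np.geomspace(2, 100, 60) / fs             # dilation parameters a (assumed grid)
cwt = np.empty((len(scales), len(t)))
for i, a in enumerate(scales):
    tau = np.arange(-4 * a, 4 * a, 1 / fs)         # effective support of the scaled wavelet
    psi_ab = ricker(tau / a) / np.sqrt(a)          # (1/sqrt(a)) * psi((t - b)/a)
    cwt[i] = np.convolve(x, psi_ab[::-1], mode="same")   # coefficients over all shifts b

print(cwt.shape)   # one row per scale; energy sits at small scales late, larger scales early
```

Each row of `cwt` is the projection of the signal onto one scale, so the full array is exactly the kind of time–scale picture (scaleogram) described above.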
From the mother and father wavelets one constructs the subspaces The father wavelet keeps the time domain properties, while the mother wavelets keeps the frequency domain properties. From these it is required that the sequence forms a multiresolution analysis of L2 and that the subspaces are the orthogonal "differences" of the above sequence, that is, Wm is the orthogonal complement of Vm inside the subspace Vm−1, In analogy to the sampling theorem one may conclude that the space Vm with sampling distance 2m more or less covers the frequency baseband from 0 to 1/2m-1. As orthogonal complement, Wm roughly covers the band [1/2m−1, 1/2m]. From those inclusions and orthogonality relations, especially , follows the existence of sequences and that satisfy the identities so that and so that The second identity of the first pair is a refinement equation for the father wavelet φ. Both pairs of identities form the basis for the algorithm of the fast wavelet transform. From the multiresolution analysis derives the orthogonal decomposition of the space L2 as For any signal or function this gives a representation in basis functions of the corresponding subspaces as where the coefficients are and Time-causal wavelets For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future as well as that minimal temporal latencies can be obtained. Time-causal wavelets representations have been developed by Szu et al and Lindeberg, with the latter method also involving a memory-efficient time-recursive implementation. Mother wavelet For practical applications, and for efficiency reasons, one prefers continuously differentiable functions with compact support as mother (prototype) wavelet (functions). However, to satisfy analytical requirements (in the continuous WT) and in general for theoretical reasons, one chooses the wavelet functions from a subspace of the space This is the space of Lebesgue measurable functions that are both absolutely integrable and square integrable in the sense that and Being in this space ensures that one can formulate the conditions of zero mean and square norm one: is the condition for zero mean, and is the condition for square norm one. For ψ to be a wavelet for the continuous wavelet transform (see there for exact statement), the mother wavelet must satisfy an admissibility criterion (loosely speaking, a kind of half-differentiability) in order to get a stably invertible transform. For the discrete wavelet transform, one needs at least the condition that the wavelet series is a representation of the identity in the space L2(R). Most constructions of discrete WT make use of the multiresolution analysis, which defines the wavelet by a scaling function. This scaling function itself is a solution to a functional equation. In most situations it is useful to restrict ψ to be a continuous function with a higher number M of vanishing moments, i.e. for all integer m < M The mother wavelet is scaled (or dilated) by a factor of a and translated (or shifted) by a factor of b to give (under Morlet's original formulation): For the continuous WT, the pair (a,b) varies over the full half-plane R+ × R; for the discrete WT this pair varies over a discrete subset of it, which is also called affine group. These functions are often incorrectly referred to as the basis functions of the (continuous) transform. In fact, as in the continuous Fourier transform, there is no basis in the continuous wavelet transform. 
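The refinement identities behind the fast wavelet transform can be exercised with the simplest multiresolution pair, the Haar scaling and wavelet filters. The short test signal below is an assumption of the sketch; one analysis step is a filter-and-downsample into approximation and detail parts, and the synthesis step reverses it exactly.

```python
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2)     # Haar scaling (low-pass) filter
g = np.array([1.0, -1.0]) / np.sqrt(2)    # Haar wavelet (high-pass) filter

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])   # assumed test signal

# One analysis step: project onto V_1 (approximation) and W_1 (detail).
approx = x[0::2] * h[0] + x[1::2] * h[1]
detail = x[0::2] * g[0] + x[1::2] * g[1]

# One synthesis step: perfect reconstruction from the two half-length sequences.
rec = np.empty_like(x)
rec[0::2] = approx * h[0] + detail * g[0]
rec[1::2] = approx * h[1] + detail * g[1]

print(approx, detail)
print(np.allclose(rec, x))                # True: the decomposition is exactly invertible
```

Iterating the analysis step on the approximation sequence yields the deeper levels of the decomposition of L2 sketched above.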
Time-frequency interpretation uses a subtly different formulation (after Delprat). Restriction: when and , has a finite time interval Comparisons with Fourier transform (continuous-time) The wavelet transform is often compared with the Fourier transform, in which signals are represented as a sum of sinusoids. In fact, the Fourier transform can be viewed as a special case of the continuous wavelet transform with the choice of the mother wavelet . The main difference in general is that wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency. The short-time Fourier transform (STFT) is similar to the wavelet transform, in that it is also time and frequency localized, but there are issues with the frequency/time resolution trade-off. In particular, assuming a rectangular window region, one may think of the STFT as a transform with a slightly different kernel where can often be written as , where and u respectively denote the length and temporal offset of the windowing function. Using Parseval's theorem, one may define the wavelet's energy as From this, the square of the temporal support of the window offset by time u is given by and the square of the spectral support of the window acting on a frequency Multiplication with a rectangular window in the time domain corresponds to convolution with a function in the frequency domain, resulting in spurious ringing artifacts for short/localized temporal windows. With the continuous-time Fourier transform, and this convolution is with a delta function in Fourier space, resulting in the true Fourier transform of the signal . The window function may be some other apodizing filter, such as a Gaussian. The choice of windowing function will affect the approximation error relative to the true Fourier transform. A given resolution cell's time-bandwidth product may not be exceeded with the STFT. All STFT basis elements maintain a uniform spectral and temporal support for all temporal shifts or offsets, thereby attaining an equal resolution in time for lower and higher frequencies. The resolution is purely determined by the sampling width. In contrast, the wavelet transform's multiresolutional properties enables large temporal supports for lower frequencies while maintaining short temporal widths for higher frequencies by the scaling properties of the wavelet transform. This property extends conventional time-frequency analysis into time-scale analysis. The discrete wavelet transform is less computationally complex, taking O(N) time as compared to O(N log N) for the fast Fourier transform (FFT). This computational advantage is not inherent to the transform, but reflects the choice of a logarithmic division of frequency, in contrast to the equally spaced frequency divisions of the FFT which uses the same basis functions as the discrete Fourier transform (DFT). This complexity only applies when the filter size has no relation to the signal size. A wavelet without compact support such as the Shannon wavelet would require O(N2). (For instance, a logarithmic Fourier Transform also exists with O(N) complexity, but the original signal must be sampled logarithmically in time, which is only useful for certain types of signals.) Definition of a wavelet A wavelet (or a wavelet family) can be defined in various ways: Scaling filter An orthogonal wavelet is entirely defined by the scaling filter – a low-pass finite impulse response (FIR) filter of length 2N and sum 1. 
In biorthogonal wavelets, separate decomposition and reconstruction filters are defined. For analysis with orthogonal wavelets the high pass filter is calculated as the quadrature mirror filter of the low pass, and reconstruction filters are the time reverse of the decomposition filters. Daubechies and Symlet wavelets can be defined by the scaling filter. Scaling function Wavelets are defined by the wavelet function ψ(t) (i.e. the mother wavelet) and scaling function φ(t) (also called father wavelet) in the time domain. The wavelet function is in effect a band-pass filter and scaling that for each level halves its bandwidth. This creates the problem that in order to cover the entire spectrum, an infinite number of levels would be required. The scaling function filters the lowest level of the transform and ensures all the spectrum is covered. See for a detailed explanation. For a wavelet with compact support, φ(t) can be considered finite in length and is equivalent to the scaling filter g. Meyer wavelets can be defined by scaling functions Wavelet function The wavelet only has a time domain representation as the wavelet function ψ(t). For instance, Mexican hat wavelets can be defined by a wavelet function. See a list of a few continuous wavelets. History The development of wavelets can be linked to several separate trains of thought, starting with Alfréd Haar's work in the early 20th century. Later work by Dennis Gabor yielded Gabor atoms (1946), which are constructed similarly to wavelets, and applied to similar purposes. Notable contributions to wavelet theory since then can be attributed to George Zweig’s discovery of the continuous wavelet transform (CWT) in 1975 (originally called the cochlear transform and discovered while studying the reaction of the ear to sound), Pierre Goupillaud, Alex Grossmann and Jean Morlet's formulation of what is now known as the CWT (1982), Jan-Olov Strömberg's early work on discrete wavelets (1983), the Le Gall–Tabatabai (LGT) 5/3-taps non-orthogonal filter bank with linear phase (1988), Ingrid Daubechies' orthogonal wavelets with compact support (1988), Stéphane Mallat's non-orthogonal multiresolution framework (1989), Ali Akansu's binomial QMF (1990), Nathalie Delprat's time-frequency interpretation of the CWT (1991), Newland's harmonic wavelet transform (1993), and set partitioning in hierarchical trees (SPIHT) developed by Amir Said with William A. Pearlman in 1996. The JPEG 2000 standard was developed from 1997 to 2000 by a Joint Photographic Experts Group (JPEG) committee chaired by Touradj Ebrahimi (later the JPEG president). In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. It uses the CDF 9/7 wavelet transform (developed by Ingrid Daubechies in 1992) for its lossy compression algorithm, and the Le Gall–Tabatabai (LGT) 5/3 discrete-time filter bank (developed by Didier Le Gall and Ali J. Tabatabai in 1988) for its lossless compression algorithm. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004. Timeline First wavelet (Haar's wavelet) by Alfréd Haar (1909) Since the 1970s: George Zweig, Jean Morlet, Alex Grossmann Since the 1980s: Yves Meyer, Didier Le Gall, Ali J. Tabatabai, Stéphane Mallat, Ingrid Daubechies, Ronald Coifman, Ali Akansu, Victor Wickerhauser Since the 1990s: Nathalie Delprat, Newland, Amir Said, William A. 
Pearlman, Touradj Ebrahimi, JPEG 2000 Wavelet transforms A wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets are scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet"). Wavelet transforms have advantages over traditional Fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals. Wavelet transforms are classified into discrete wavelet transforms (DWTs) and continuous wavelet transforms (CWTs). Note that both DWT and CWT are continuous-time (analog) transforms. They can be used to represent continuous-time (analog) signals. CWTs operate over every possible scale and translation whereas DWTs use a specific subset of scale and translation values or representation grid. There are a large number of wavelet transforms each suitable for different applications. For a full list see list of wavelet-related transforms but the common ones are listed below: Continuous wavelet transform (CWT) Discrete wavelet transform (DWT) Fast wavelet transform (FWT) Lifting scheme and generalized lifting scheme Wavelet packet decomposition (WPD) Stationary wavelet transform (SWT) Fractional Fourier transform (FRFT) Fractional wavelet transform (FRWT) Generalized transforms There are a number of generalized transforms of which the wavelet transform is a special case. For example, Yosef Joseph introduced scale into the Heisenberg group, giving rise to a continuous transform space that is a function of time, scale, and frequency. The CWT is a two-dimensional slice through the resulting 3d time-scale-frequency volume. Another example of a generalized transform is the chirplet transform in which the CWT is also a two dimensional slice through the chirplet transform. An important application area for generalized transforms involves systems in which high frequency resolution is crucial. For example, darkfield electron optical transforms intermediate between direct and reciprocal space have been widely used in the harmonic analysis of atom clustering, i.e. in the study of crystals and crystal defects. Now that transmission electron microscopes are capable of providing digital images with picometer-scale information on atomic periodicity in nanostructure of all sorts, the range of pattern recognition and strain/metrology applications for intermediate transforms with high frequency resolution (like brushlets and ridgelets) is growing rapidly. Fractional wavelet transform (FRWT) is a generalization of the classical wavelet transform in the fractional Fourier transform domains. This transform is capable of providing the time- and fractional-domain information simultaneously and representing signals in the time-fractional-frequency plane. Applications Generally, an approximation to DWT is used for data compression if a signal is already sampled, and the CWT for signal analysis. Thus, DWT approximation is commonly used in engineering and computer science, and the CWT in scientific research. 
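To make the discrete case concrete, the following is a minimal, self-contained sketch of one level of the Haar DWT, the simplest orthogonal wavelet mentioned above. It is an illustrative implementation rather than the code of any particular library, and the signal values are invented for the example.

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar DWT: split an even-length signal into
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # scaling (father) coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # wavelet (mother) coefficients
    return approx, detail

def haar_idwt_level(approx, detail):
    """Inverse of one Haar level; reconstructs the original samples exactly."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])  # made-up example data
cA, cD = haar_dwt_level(signal)
print("approximation:", cA)
print("detail:       ", cD)
print("reconstruction error:", np.max(np.abs(haar_idwt_level(cA, cD) - signal)))
```

A full multi-level transform simply repeats haar_dwt_level on the approximation coefficients, which is what produces the logarithmic division of frequency mentioned earlier.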
Like some other transforms, wavelet transforms can be used to transform data, then encode the transformed data, resulting in effective compression. For example, JPEG 2000 is an image compression standard that uses biorthogonal wavelets. This means that although the frame is overcomplete, it is a tight frame (see types of frames of a vector space), and the same frame functions (except for conjugation in the case of complex wavelets) are used for both analysis and synthesis, i.e., in both the forward and inverse transform. For details see wavelet compression. A related use is for smoothing/denoising data based on wavelet coefficient thresholding, also called wavelet shrinkage. By adaptively thresholding the wavelet coefficients that correspond to undesired frequency components smoothing and/or denoising operations can be performed. Wavelet transforms are also starting to be used for communication applications. Wavelet OFDM is the basic modulation scheme used in HD-PLC (a power line communications technology developed by Panasonic), and in one of the optional modes included in the IEEE 1901 standard. Wavelet OFDM can achieve deeper notches than traditional FFT OFDM, and wavelet OFDM does not require a guard interval (which usually represents significant overhead in FFT OFDM systems). As a representation of a signal Often, signals can be represented well as a sum of sinusoids. However, consider a non-continuous signal with an abrupt discontinuity; this signal can still be represented as a sum of sinusoids, but requires an infinite number, which is an observation known as Gibbs phenomenon. This, then, requires an infinite number of Fourier coefficients, which is not practical for many applications, such as compression. Wavelets are more useful for describing these signals with discontinuities because of their time-localized behavior (both Fourier and wavelet transforms are frequency-localized, but wavelets have an additional time-localization property). Because of this, many types of signals in practice may be non-sparse in the Fourier domain, but very sparse in the wavelet domain. This is particularly useful in signal reconstruction, especially in the recently popular field of compressed sensing. (Note that the short-time Fourier transform (STFT) is also localized in time and frequency, but there are often problems with the frequency-time resolution trade-off. Wavelets are better signal representations because of multiresolution analysis.) This motivates why wavelet transforms are now being adopted for a vast number of applications, often replacing the conventional Fourier transform. Many areas of physics have seen this paradigm shift, including molecular dynamics, chaos theory, ab initio calculations, astrophysics, gravitational wave transient data analysis, density-matrix localisation, seismology, optics, turbulence and quantum mechanics. This change has also occurred in image processing, EEG, EMG, ECG analyses, brain rhythms, DNA analysis, protein analysis, climatology, human sexual response analysis, general signal processing, speech recognition, acoustics, vibration signals, computer graphics, multifractal analysis, and sparse coding. In computer vision and image processing, the notion of scale space representation and Gaussian derivative operators is regarded as a canonical multi-scale representation. Wavelet denoising Suppose we measure a noisy signal , where represents the signal and represents the noise. 
Assume has a sparse representation in a certain wavelet basis, and Let the wavelet transform of be , where is the wavelet transform of the signal component and is the wavelet transform of the noise component. Most elements in are 0 or close to 0, and Since is orthogonal, the estimation problem amounts to recovery of a signal in iid Gaussian noise. As is sparse, one method is to apply a Gaussian mixture model for . Assume a prior , where is the variance of "significant" coefficients and is the variance of "insignificant" coefficients. Then , is called the shrinkage factor, which depends on the prior variances and . By setting coefficients that fall below a shrinkage threshold to zero, once the inverse transform is applied, an expectedly small amount of signal is lost due to the sparsity assumption. The larger coefficients are expected to primarily represent signal due to sparsity, and statistically very little of the signal, albeit the majority of the noise, is expected to be represented in such lower magnitude coefficients... therefore the zeroing-out operation is expected to remove most of the noise and not much signal. Typically, the above-threshold coefficients are not modified during this process. Some algorithms for wavelet-based denoising may attenuate larger coefficients as well, based on a statistical estimate of the amount of noise expected to be removed by such an attenuation. At last, apply the inverse wavelet transform to obtain Multiscale climate network Agarwal et al. proposed wavelet based advanced linear and nonlinear methods to construct and investigate Climate as complex networks at different timescales. Climate networks constructed using SST datasets at different timescale averred that wavelet based multi-scale analysis of climatic processes holds the promise of better understanding the system dynamics that may be missed when processes are analyzed at one timescale only List of wavelets Discrete wavelets Beylkin (18) Moore Wavelet Morlet wavelet Biorthogonal nearly coiflet (BNC) wavelets Coiflet (6, 12, 18, 24, 30) Cohen-Daubechies-Feauveau wavelet (Sometimes referred to as CDF N/P or Daubechies biorthogonal wavelets) Daubechies wavelet (2, 4, 6, 8, 10, 12, 14, 16, 18, 20, etc.) Binomial QMF (Also referred to as Daubechies wavelet) Haar wavelet Mathieu wavelet Legendre wavelet Villasenor wavelet Symlet Continuous wavelets Real-valued Beta wavelet Hermitian wavelet Meyer wavelet Mexican hat wavelet Poisson wavelet Shannon wavelet Spline wavelet Strömberg wavelet Complex-valued Complex Mexican hat wavelet fbsp wavelet Morlet wavelet Shannon wavelet Modified Morlet wavelet See also Chirplet transform Curvelet Digital cinema Dimension reduction Filter banks Fourier-related transforms Fractal compression Fractional Fourier transform Huygens–Fresnel principle (physical wavelets) JPEG 2000 Least-squares spectral analysis for computing periodicity in any including unevenly spaced data Morlet wavelet Multiresolution analysis Noiselet Non-separable wavelet Scale space Scaled correlation Shearlet Short-time Fourier transform Spectrogram Ultra wideband radio – transmits wavelets Wavelet for multidimensional signals analysis References Further reading External links 1st NJIT Symposium on Wavelets (April 30, 1990) (First Wavelets Conference in USA) Binomial-QMF Daubechies Wavelets Wavelets by Gilbert Strang, American Scientist 82 (1994) 250–255. 
(A very short and excellent introduction) Course on Wavelets given at UC Santa Barbara, 2004 Wavelets for Kids (PDF file) (Introductory (for very smart kids!)) WITS: Where Is The Starlet? A dictionary of tens of wavelets and wavelet-related terms ending in -let, from activelets to x-lets through bandlets, contourlets, curvelets, noiselets, wedgelets. The Fractional Spline Wavelet Transform describes a fractional wavelet transform based on fractional b-Splines. A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity provides a tutorial on two-dimensional oriented wavelets and related geometric multiscale transforms. Concise Introduction to Wavelets by René Puschinger A Really Friendly Guide To Wavelets by Clemens Valens Time–frequency analysis Signal processing
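As a worked illustration of the wavelet-shrinkage denoising discussed in the article above, here is a minimal sketch that soft-thresholds Haar detail coefficients. The signal, the noise level, and the threshold rule (the so-called universal threshold) are illustrative assumptions of this sketch, not prescriptions from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative piecewise-constant signal plus Gaussian noise (values are made up).
clean = np.repeat([0.0, 2.0, -1.0, 3.0], 64)
noisy = clean + 0.4 * rng.standard_normal(clean.size)

def haar_step(x):
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Multi-level Haar analysis.
levels, a, details = 4, noisy.copy(), []
for _ in range(levels):
    a, d = haar_step(a)
    details.append(d)

# Wavelet shrinkage: soft-threshold the detail coefficients. Small coefficients are
# assumed to be mostly noise, large ones mostly signal (the sparsity assumption above).
thr = 0.4 * np.sqrt(2 * np.log(noisy.size))   # universal threshold, assuming a known noise level
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]

# Synthesis: invert the levels in reverse order.
for d in reversed(details):
    a = haar_inv(a, d)

print("RMS error before denoising:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMS error after  denoising:", np.sqrt(np.mean((a - clean) ** 2)))
```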
Wavelet
[ "Physics", "Technology", "Engineering" ]
6,050
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Spectrum (physical sciences)", "Time–frequency analysis", "Frequency-domain analysis" ]
51,038
https://en.wikipedia.org/wiki/Technological%20applications%20of%20superconductivity
Technological applications of superconductivity include: the production of sensitive magnetometers based on SQUIDs (superconducting quantum interference devices) fast digital circuits (including those based on Josephson junctions and rapid single flux quantum technology), powerful superconducting electromagnets used in maglev trains, magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR) machines, magnetic confinement fusion reactors (e.g. tokamaks), and the beam-steering and focusing magnets used in particle accelerators low-loss power cables RF and microwave filters (e.g., for mobile phone base stations, as well as military ultra-sensitive/selective receivers) fast fault current limiters high sensitivity particle detectors, including the transition edge sensor, the superconducting bolometer, the superconducting tunnel junction detector, the kinetic inductance detector, and the superconducting nanowire single-photon detector railgun and coilgun magnets electric motors and generators Low-temperature superconductivity Magnetic resonance imaging and nuclear magnetic resonance The biggest application for superconductivity is in producing the large-volume, stable, and high-intensity magnetic fields required for magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR). This represents a multi-billion-US$ market for companies such as Oxford Instruments and Siemens. The magnets typically use low-temperature superconductors (LTS) because high-temperature superconductors are not yet cheap enough to cost-effectively deliver the high, stable, and large-volume fields required, notwithstanding the need to cool LTS instruments to liquid helium temperatures. Superconductors are also used in high field scientific magnets. Particle accelerators and magnetic fusion devices Particle accelerators such as the Large Hadron Collider can include many high field electromagnets requiring large quantities of LTS. To construct the LHC magnets required more than 28 percent of the world's niobium-titanium wire production for five years, with large quantities of NbTi also used in the magnets for the LHC's huge experiment detectors. Conventional fusion machines (JET, ST-40, NTSX-U and MAST) use blocks of copper. This limits their fields to 1-3 Tesla. Several superconducting fusion machines are planned for the 2024-2026 timeframe. These include ITER, ARC and the next version of ST-40. The addition of High Temperature Superconductors should yield an order of magnitude improvement in fields (10-13 tesla) for a new generation of Tokamaks. High-temperature superconductivity The commercial applications so far for high-temperature superconductors (HTS) have been limited by other properties of the materials discovered thus far. HTS require only liquid nitrogen, not liquid helium, to cool to superconducting temperatures. However, currently known high-temperature superconductors are brittle ceramics that are expensive to manufacture and not easily formed into wires or other useful shapes. Therefore, the applications for HTS have been where it has some other intrinsic advantage, e.g. 
in: low thermal loss current leads for LTS devices (low thermal conductivity), RF and microwave filters (low resistance to RF), and increasingly in specialist scientific magnets, particularly where size and electricity consumption are critical (while HTS wire is much more expensive than LTS in these applications, this can be offset by the relative cost and convenience of cooling); the ability to ramp field is desired (the higher and wider range of HTS's operating temperature means faster changes in field can be managed); or cryogen free operation is desired (LTS generally requires liquid helium, which is becoming more scarce and expensive). HTS-based systems HTS has application in scientific and industrial magnets, including use in NMR and MRI systems. Commercial systems are now available in each category. Also one intrinsic attribute of HTS is that it can withstand much higher magnetic fields than LTS, so HTS at liquid helium temperatures are being explored for very high-field inserts inside LTS magnets. Promising future industrial and commercial HTS applications include Induction heaters, transformers, fault current limiters, power storage, motors and generators, fusion reactors (see ITER) and magnetic levitation devices. Early applications will be where the benefit of smaller size, lower weight or the ability to rapidly switch current (fault current limiters) outweighs the added cost. Longer-term as conductor price falls HTS systems should be competitive in a much wider range of applications on energy efficiency grounds alone. (For a relatively technical and US-centric view of state of play of HTS technology in power systems and the development status of Generation 2 conductor see Superconductivity for Electric Systems 2008 US DOE Annual Peer Review.) Electric power transmission The Holbrook Superconductor Project, also known as the LIPA project, was a project to design and build the world's first production superconducting transmission power cable. The cable was commissioned in late June 2008 by the Long Island Power Authority (LIPA) and was in operation for two years. The suburban Long Island electrical substation is fed by a underground cable system which consists of about of high-temperature superconductor wire manufactured by American Superconductor chilled to with liquid nitrogen, greatly reducing the cost required to deliver additional power. In addition, the installation of the cable bypassed strict regulations for overhead power lines, and offered a solution for the public's concerns on overhead power lines. The Tres Amigas Project was proposed in 2009 as an electrical HVDC interconnector between the Eastern Interconnection, the Western Interconnection and Texas Interconnection. It was proposed to be a multi-mile, triangular pathway of superconducting electric cables, capable of transferring five gigawatts of power between the three U.S. power grids. The project lapsed in 2015 when the Eastern Interconnect withdrew from the project. Construction was never begun. Essen, Germany has the world's longest superconducting power cable in production at 1 kilometer. It is a 10 kV liquid nitrogen cooled cable. The cable is smaller than an equivalent 110 kV regular cable and the lower voltage has the additional benefit of smaller transformers. In 2020, an aluminium plant in Voerde, Germany, announced plans to use superconductors for cables carrying 200 kA, citing lower volume and material demand as advantages. 
Magnesium diboride Magnesium diboride is a much cheaper superconductor than either BSCCO or YBCO in terms of cost per current-carrying capacity per length (cost/(kA·m)), in the same ballpark as LTS, and on this basis many manufactured wires are already cheaper than copper. Furthermore, MgB2 superconducts at temperatures higher than LTS (its critical temperature is 39 K, compared with less than 10 K for NbTi and 18.3 K for Nb3Sn), introducing the possibility of using it at 10–20 K in cryogen-free magnets or perhaps eventually in liquid hydrogen. However, MgB2 is limited in the magnetic field it can tolerate at these higher temperatures, so further research is required to demonstrate its competitiveness in higher-field applications. Trapped field magnets Exposing superconducting materials to a brief magnetic field can trap the field for use in machines such as generators. In some applications they could replace traditional permanent magnets. Notes Superconductivity
Technological applications of superconductivity
[ "Physics", "Materials_science", "Engineering" ]
1,560
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
51,117
https://en.wikipedia.org/wiki/Meissner%20effect
In condensed-matter physics, the Meissner effect (or Meißner–Ochsenfeld effect) is the expulsion of a magnetic field from a superconductor during its transition to the superconducting state when it is cooled below the critical temperature. This expulsion will repel a nearby magnet. The German physicists Walther Meißner (anglicized Meissner) and Robert Ochsenfeld discovered this phenomenon in 1933 by measuring the magnetic field distribution outside superconducting tin and lead samples. The samples, in the presence of an applied magnetic field, were cooled below their superconducting transition temperature, whereupon they cancelled nearly all interior magnetic fields. They detected this effect only indirectly, because the magnetic flux is conserved by a superconductor: when the interior field decreases, the exterior field increases. The experiment demonstrated for the first time that superconductors were more than just perfect conductors and provided a uniquely defining property of the superconducting state. The ability to sustain the expulsion effect is determined by the nature of the equilibrium formed by the neutralization within the unit cell of a superconductor. A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too strong. Superconductors can be divided into two classes according to how this breakdown occurs. In type-I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In type-II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the electric current as long as the current is not too large. Some type-II superconductors exhibit a small but finite resistance in the mixed state due to motion of the flux vortices induced by the Lorentz forces from the current. Because the cores of the vortices contain normal (non-superconducting) electrons, their motion is dissipative. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are type I, while almost all impure and compound superconductors are type II. Explanation The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided ∇²H = H/λ², where H is the magnetic field and λ is the London penetration depth. This equation, known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface. This exclusion of the magnetic field is a manifestation of the superdiamagnetism that emerges during the phase transition from conductor to superconductor, for example by reducing the temperature below the critical temperature.
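To illustrate the exponential decay predicted by the London equation just mentioned, here is a small numeric sketch. The surface field and the penetration depth are illustrative assumptions of this example, not values from the article; real materials have penetration depths ranging from tens to hundreds of nanometres.

```python
import numpy as np

# Illustrative parameters (assumed, not from the article).
B0 = 0.01          # field at the surface, tesla
lam = 100e-9       # assumed London penetration depth, metres

# For a superconducting half-space x >= 0, the one-dimensional London equation
# d^2B/dx^2 = B / lam^2 with B(0) = B0 and B -> 0 deep inside gives B(x) = B0 * exp(-x / lam).
x = np.linspace(0, 5 * lam, 6)          # depths from the surface to five penetration depths
B = B0 * np.exp(-x / lam)

for xi, Bi in zip(x, B):
    print(f"depth = {xi * 1e9:6.1f} nm   B = {Bi:.2e} T")
```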
In a weak applied field (less than the critical field that breaks down the superconducting phase), a superconductor expels nearly all magnetic flux by setting up electric currents near its surface, as the magnetic field H induces magnetization M within the London penetration depth from the surface. These surface currents shield the internal bulk of the superconductor from the external applied field. As the field expulsion, or cancellation, does not change with time, the currents producing this effect (called persistent currents or screening currents) do not decay with time. Near the surface, within the London penetration depth, the magnetic field is not completely canceled. Each superconducting material has its own characteristic penetration depth. Any perfect conductor will prevent any change to magnetic flux passing through its surface due to ordinary electromagnetic induction at zero resistance. However, the Meissner effect is distinct from this: when an ordinary conductor is cooled so that it makes the transition to a superconducting state in the presence of a constant applied magnetic field, the magnetic flux is expelled during the transition. This effect cannot be explained by infinite conductivity, but only by the London equation. The placement and subsequent levitation of a magnet above an already superconducting material does not demonstrate the Meissner effect, while an initially stationary magnet later being repelled by a superconductor as it is cooled below its critical temperature does. The persisting currents that exist in the superconductor to expel the magnetic field is commonly misconceived as a result of Lenz's Law or Faraday's Law. A reason this is not the case is that no change in flux was made to induce the current. Another explanation is that since the superconductor experiences zero resistance, there cannot be an induced emf in the superconductor. The persisting current therefore is not a result of Faraday's Law. Perfect diamagnetism Superconductors in the Meissner state exhibit perfect diamagnetism, or superdiamagnetism, meaning that the total magnetic field is very close to zero deep inside them (many penetration depths from the surface). This means that their volume magnetic susceptibility is = −1. Diamagnetics are defined by the generation of a spontaneous magnetization of a material which directly opposes the direction of an applied field. However, the fundamental origins of diamagnetism in superconductors and normal materials are very different. In normal materials diamagnetism arises as a direct result of the orbital spin of electrons about the nuclei of an atom induced electromagnetically by the application of an applied field. In superconductors the illusion of perfect diamagnetism arises from persistent screening currents which flow to oppose the applied field (the Meissner effect); not solely the orbital spin. Consequences The discovery of the Meissner effect led to the phenomenological theory of superconductivity by Fritz and Heinz London in 1935. This theory explained resistanceless transport and the Meissner effect, and allowed the first theoretical predictions for superconductivity to be made. However, this theory only explained experimental observations—it did not allow the microscopic origins of the superconducting properties to be identified. This was done successfully by the BCS theory in 1957, from which the penetration depth and the Meissner effect result. However, some physicists argue that BCS theory does not explain the Meissner effect. 
Paradigm for the Higgs mechanism The Meissner superconductivity effect serves as an important paradigm for the generation mechanism of a mass M (i.e., a reciprocal range λM = h/(Mc), where h is the Planck constant and c is the speed of light) for a gauge field. In fact, this analogy is an abelian example for the Higgs mechanism, which generates the masses of the electroweak W± and Z gauge particles in high-energy physics. The length λM is identical with the London penetration depth in the theory of superconductivity. See also Flux pinning Silsbee effect Superfluid References Further reading By the man who explained the Meissner effect. pp. 34–37 gives a technical discussion of the Meissner effect for a superconducting sphere. pp. 486–489 gives a simple mathematical discussion of the surface currents responsible for the Meissner effect, in the case of a long magnet levitated above a superconducting plane. A good technical reference. External links The Meissner effect - The Feynman Lectures on Physics Meissner Effect (Science from scratch) Short video from Imperial College London about the Meissner effect and levitating trains of the future. Introduction to superconductivity Video about Type 1 Superconductors: R = 0/Transition temperatures/B is a state variable/Meissner effect/Energy gap (Giaever)/BCS model. Meissner Effect (Hyperphysics) Historical Background of the Meissner Effect Magnetic levitation Quantum magnetism Superconductivity
Meissner effect
[ "Physics", "Materials_science", "Engineering" ]
1,729
[ "Physical quantities", "Superconductivity", "Quantum mechanics", "Materials science", "Quantum magnetism", "Condensed matter physics", "Electrical resistance and conductance" ]
51,399
https://en.wikipedia.org/wiki/Buckingham%20%CF%80%20theorem
In engineering, applied mathematics, and physics, the Buckingham π theorem is a key theorem in dimensional analysis. It is a formalisation of Rayleigh's method of dimensional analysis. Loosely, the theorem states that if there is a physically meaningful equation involving a certain number n of physical variables, then the original equation can be rewritten in terms of a set of p = n − k dimensionless parameters π1, π2, ..., πp constructed from the original variables, where k is the number of physical dimensions involved; it is obtained as the rank of a particular matrix. The theorem provides a method for computing sets of dimensionless parameters from the given variables, or nondimensionalization, even if the form of the equation is still unknown. The Buckingham π theorem indicates that validity of the laws of physics does not depend on a specific unit system. A statement of this theorem is that any physical law can be expressed as an identity involving only dimensionless combinations (ratios or products) of the variables linked by the law (for example, pressure and volume are linked by Boyle's law – they are inversely proportional). If the dimensionless combinations' values changed with the systems of units, then the equation would not be an identity, and the theorem would not hold. History Although named for Edgar Buckingham, the theorem was first proved by the French mathematician Joseph Bertrand in 1878. Bertrand considered only special cases of problems from electrodynamics and heat conduction, but his article contains, in distinct terms, all the basic ideas of the modern proof of the theorem and clearly indicates the theorem's utility for modelling physical phenomena. The technique of using the theorem ("the method of dimensions") became widely known due to the works of Rayleigh. The first application of the theorem in the general case to the dependence of pressure drop in a pipe upon governing parameters probably dates back to 1892, a heuristic proof with the use of series expansions, to 1894. Formal generalization of the theorem for the case of arbitrarily many quantities was given first by A. Vaschy in 1892, then in 1911—apparently independently—by both A. Federman and D. Riabouchinsky, and again in 1914 by Buckingham. It was Buckingham's article that introduced the use of the symbol "π" for the dimensionless variables (or parameters), and this is the source of the theorem's name. Statement More formally, the number p of dimensionless terms that can be formed is equal to the nullity of the dimensional matrix, and k is the rank. For experimental purposes, different systems that share the same description in terms of these dimensionless numbers are equivalent. In mathematical terms, if we have a physically meaningful equation such as f(q1, q2, ..., qn) = 0, where the qi are the n physical variables, and there is a maximal dimensionally independent subset of size k, then the above equation can be restated as F(π1, π2, ..., πp) = 0, where the πi are dimensionless parameters constructed from the qi by p = n − k dimensionless equations — the so-called Pi groups — of the form πi = q1^a1 q2^a2 ⋯ qn^an, where the exponents ai are rational numbers. (They can always be taken to be integers by redefining πi as being raised to a power that clears all denominators.) If there are ℓ fundamental units in play, then p ≥ n − ℓ. Significance The Buckingham π theorem provides a method for computing sets of dimensionless parameters from given variables, even if the form of the equation remains unknown.
However, the choice of dimensionless parameters is not unique; Buckingham's theorem only provides a way of generating sets of dimensionless parameters and does not indicate the most "physically meaningful". Two systems for which these parameters coincide are called similar (as with similar triangles, they differ only in scale); they are equivalent for the purposes of the equation, and the experimentalist who wants to determine the form of the equation can choose the most convenient one. Most importantly, Buckingham's theorem describes the relation between the number of variables and fundamental dimensions. Proof For simplicity, it will be assumed that the space of fundamental and derived physical units forms a vector space over the real numbers, with the fundamental units as basis vectors, and with multiplication of physical units as the "vector addition" operation, and raising to powers as the "scalar multiplication" operation: represent a dimensional variable as the set of exponents needed for the fundamental units (with a power of zero if the particular fundamental unit is not present). For instance, the standard gravity has units of (length over time squared), so it is represented as the vector with respect to the basis of fundamental units (length, time). We could also require that exponents of the fundamental units be rational numbers and modify the proof accordingly, in which case the exponents in the pi groups can always be taken as rational numbers or even integers. Rescaling units Suppose we have quantities , where the units of contain length raised to the power . If we originally measure length in meters but later switch to centimeters, then the numerical value of would be rescaled by a factor of . Any physically meaningful law should be invariant under an arbitrary rescaling of every fundamental unit; this is the fact that the pi theorem hinges on. Formal proof Given a system of dimensional variables in fundamental (basis) dimensions, the dimensional matrix is the matrix whose rows correspond to the fundamental dimensions and whose columns are the dimensions of the variables: the th entry (where and ) is the power of the th fundamental dimension in the th variable. The matrix can be interpreted as taking in a combination of the variable quantities and giving out the dimensions of the combination in terms of the fundamental dimensions. So the (column) vector that results from the multiplication consists of the units of in terms of the fundamental independent (basis) units. If we rescale the th fundamental unit by a factor of , then gets rescaled by , where is the th entry of the dimensional matrix. In order to convert this into a linear algebra problem, we take logarithms (the base is irrelevant), yielding which is an action of on . We define a physical law to be an arbitrary function such that is a permissible set of values for the physical system when . We further require to be invariant under this action. Hence it descends to a function . All that remains is to exhibit an isomorphism between and , the (log) space of pi groups . We construct an matrix whose columns are a basis for . It tells us how to embed into as the kernel of . That is, we have an exact sequence Taking tranposes yields another exact sequence The first isomorphism theorem produces the desired isomorphism, which sends the coset to . This corresponds to rewriting the tuple into the pi groups coming from the columns of . 
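As a concrete illustration of the kernel computation in the proof above, the following sketch uses SymPy (a tooling choice of this example, not of the article) to recover the single dimensionless group of the simple-pendulum problem worked out in the Examples section below.

```python
from sympy import Matrix

# Dimensional matrix for the simple-pendulum example treated below.
# Rows: fundamental dimensions (M, L, T); columns: variables (period, mass, length, g).
# Exponents: period = T^1, mass = M^1, length = L^1, g = L^1 T^-2.
D = Matrix([
    [0, 1, 0,  0],   # M
    [0, 0, 1,  1],   # L
    [1, 0, 0, -2],   # T
])

# The kernel (nullspace) of D gives the exponent vectors of the dimensionless pi groups.
for v in D.nullspace():
    print(v.T)   # expected: a multiple of [2, 0, -1, 1], i.e. pi = period^2 * g / length
```

The zero exponent on the mass column is the statement, derived again in the example below, that the period cannot depend on the mass.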
The International System of Units defines seven base units, which are the ampere, kelvin, second, metre, kilogram, candela and mole. It is sometimes advantageous to introduce additional base units and techniques to refine the technique of dimensional analysis. (See orientational analysis and reference.) Examples Speed This example is elementary but serves to demonstrate the procedure. Suppose a car is driving at 100 km/h; how long does it take to go 200 km? This question considers dimensioned variables: distance time and speed and we are seeking some law of the form Any two of these variables are dimensionally independent, but the three taken together are not. Thus there is dimensionless quantity. The dimensional matrix is in which the rows correspond to the basis dimensions and and the columns to the considered dimensions where the latter stands for the speed dimension. The elements of the matrix correspond to the powers to which the respective dimensions are to be raised. For instance, the third column states that represented by the column vector is expressible in terms of the basis dimensions as since For a dimensionless constant we are looking for vectors such that the matrix-vector product equals the zero vector In linear algebra, the set of vectors with this property is known as the kernel (or nullspace) of the dimensional matrix. In this particular case its kernel is one-dimensional. The dimensional matrix as written above is in reduced row echelon form, so one can read off a non-zero kernel vector to within a multiplicative constant: If the dimensional matrix were not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant, replacing the dimensions by the corresponding dimensioned variables, may be written: Since the kernel is only defined to within a multiplicative constant, the above dimensionless constant raised to any arbitrary power yields another (equivalent) dimensionless constant. Dimensional analysis has thus provided a general equation relating the three physical variables: or, letting denote a zero of function which can be written in the desired form (which recall was ) as The actual relationship between the three variables is simply In other words, in this case has one physically relevant root, and it is unity. The fact that only a single value of will do and that it is equal to 1 is not revealed by the technique of dimensional analysis. The simple pendulum We wish to determine the period of small oscillations in a simple pendulum. It will be assumed that it is a function of the length the mass and the acceleration due to gravity on the surface of the Earth which has dimensions of length divided by time squared. The model is of the form (Note that it is written as a relation, not as a function: is not written here as a function of ) Period, mass, and length are dimensionally independent, but acceleration can be expressed in terms of time and length, which means the four variables taken together are not dimensionally independent. 
Thus we need only dimensionless parameter, denoted by and the model can be re-expressed as where is given by for some values of The dimensions of the dimensional quantities are: The dimensional matrix is: (The rows correspond to the dimensions and and the columns to the dimensional variables For instance, the 4th column, states that the variable has dimensions of ) We are looking for a kernel vector such that the matrix product of on yields the zero vector The dimensional matrix as written above is in reduced row echelon form, so one can read off a kernel vector within a multiplicative constant: Were it not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant may be written: In fundamental terms: which is dimensionless. Since the kernel is only defined to within a multiplicative constant, if the above dimensionless constant is raised to any arbitrary power, it will yield another equivalent dimensionless constant. In this example, three of the four dimensional quantities are fundamental units, so the last (which is ) must be a combination of the previous. Note that if (the coefficient of ) had been non-zero then there would be no way to cancel the value; therefore be zero. Dimensional analysis has allowed us to conclude that the period of the pendulum is not a function of its mass (In the 3D space of powers of mass, time, and distance, we can say that the vector for mass is linearly independent from the vectors for the three other variables. Up to a scaling factor, is the only nontrivial way to construct a vector of a dimensionless parameter.) The model can now be expressed as: Then this implies that for some zero of the function If there is only one zero, call it then It requires more physical insight or an experiment to show that there is indeed only one zero and that the constant is in fact given by For large oscillations of a pendulum, the analysis is complicated by an additional dimensionless parameter, the maximum swing angle. The above analysis is a good approximation as the angle approaches zero. Electric power To demonstrate the application of the theorem, consider the power consumption of a stirrer with a given shape. The power, P, in dimensions [M · L2/T3], is a function of the density, ρ [M/L3], and the viscosity of the fluid to be stirred, μ [M/(L · T)], as well as the size of the stirrer given by its diameter, D [L], and the angular speed of the stirrer, n [1/T]. Therefore, we have a total of n = 5 variables representing our example. Those n = 5 variables are built up from k = 3 independent dimensions, e.g., length: L (SI units: m), time: T (s), and mass: M (kg). According to the -theorem, the n = 5 variables can be reduced by the k = 3 dimensions to form p = n − k = 5 − 3 = 2 independent dimensionless numbers. Usually, these quantities are chosen as , commonly named the Reynolds number which describes the fluid flow regime, and , the power number, which is the dimensionless description of the stirrer. Note that the two dimensionless quantities are not unique and depend on which of the n = 5 variables are chosen as the k = 3 dimensionally independent basis variables, which, in this example, appear in both dimensionless quantities. The Reynolds number and power number fall from the above analysis if , n, and D are chosen to be the basis variables. If, instead, , n, and D are selected, the Reynolds number is recovered while the second dimensionless quantity becomes . 
We note that is the product of the Reynolds number and the power number. Other examples An example of dimensional analysis can be found for the case of the mechanics of a thin, solid and parallel-sided rotating disc. There are five variables involved which reduce to two non-dimensional groups. The relationship between these can be determined by numerical experiment using, for example, the finite element method. The theorem has also been used in fields other than physics, for instance in sports science. See also Blast wave Dimensionless quantity Natural units Similitude (model) Reynolds number References Notes Citations Bibliography Original sources External links Some reviews and original sources on the history of pi theorem and the theory of similarity (in Russian) Articles containing proofs Dimensional analysis Eponymous theorems of physics
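Continuing the stirred-tank example above, the sketch below checks that the two classical groups (the Reynolds number ρnD²/μ and the power number P/(ρn³D⁵); these explicit formulas are supplied here from standard usage, since they do not survive in the text above) lie in the kernel of the dimensional matrix, and that the kernel has dimension p = 5 − 3 = 2. SymPy is an assumed tool of this sketch, not of the article.

```python
from sympy import Matrix

# Dimensional matrix for the stirred-tank example above.
# Rows: M, L, T; columns: P, rho, mu, D, n.
A = Matrix([
    [ 1,  1,  1, 0,  0],   # mass
    [ 2, -3, -1, 1,  0],   # length
    [-3,  0, -1, 0, -1],   # time
])

print("number of pi groups:", A.cols - A.rank())   # expected: 5 - 3 = 2

# Exponent vectors for the two groups named above:
# Re = rho * n * D^2 / mu   and   Np = P / (rho * n^3 * D^5).
Re = Matrix([0, 1, -1, 2, 1])
Np = Matrix([1, -1, 0, -5, -3])
print("Re is dimensionless:", A * Re == Matrix([0, 0, 0]))
print("Np is dimensionless:", A * Np == Matrix([0, 0, 0]))
```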
Buckingham π theorem
[ "Physics", "Mathematics", "Engineering" ]
2,874
[ "Dimensional analysis", "Equations of physics", "Eponymous theorems of physics", "Mechanical engineering", "Articles containing proofs", "Physics theorems" ]
51,414
https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20algebra
The fundamental theorem of algebra, also called d'Alembert's theorem or the d'Alembert–Gauss theorem, states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with its imaginary part equal to zero. Equivalently (by definition), the theorem states that the field of complex numbers is algebraically closed. The theorem is also stated as follows: every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots. The equivalence of the two statements can be proven through the use of successive polynomial division. Despite its name, it is not fundamental for modern algebra; it was named when algebra was synonymous with the theory of equations. History , in his book Arithmetica Philosophica (published in 1608, at Nürnberg, by Johann Lantzenberger), wrote that a polynomial equation of degree n (with real coefficients) may have n solutions. Albert Girard, in his book L'invention nouvelle en l'Algèbre (published in 1629), asserted that a polynomial equation of degree n has n solutions, but he did not state that they had to be real numbers. Furthermore, he added that his assertion holds "unless the equation is incomplete", by which he meant that no coefficient is equal to 0. However, when he explains in detail what he means, it is clear that he actually believes that his assertion is always true; for instance, he shows that the equation although incomplete, has four solutions (counting multiplicities): 1 (twice), and As will be mentioned again below, it follows from the fundamental theorem of algebra that every non-constant polynomial with real coefficients can be written as a product of polynomials with real coefficients whose degrees are either 1 or 2. However, in 1702 Leibniz erroneously said that no polynomial of the type (with real and distinct from 0) can be written in such a way. Later, Nikolaus Bernoulli made the same assertion concerning the polynomial , but he got a letter from Euler in 1742 in which it was shown that this polynomial is equal to with Also, Euler pointed out that A first attempt at proving the theorem was made by d'Alembert in 1746, but his proof was incomplete. Among other problems, it assumed implicitly a theorem (now known as Puiseux's theorem), which would not be proved until more than a century later and using the fundamental theorem of algebra. Other attempts were made by Euler (1749), de Foncenex (1759), Lagrange (1772), and Laplace (1795). These last four attempts assumed implicitly Girard's assertion; to be more precise, the existence of solutions was assumed and all that remained to be proved was that their form was a + bi for some real numbers a and b. In modern terms, Euler, de Foncenex, Lagrange, and Laplace were assuming the existence of a splitting field of the polynomial p(z). At the end of the 18th century, two new proofs were published which did not assume the existence of roots, but neither of which was complete. One of them, due to James Wood and mainly algebraic, was published in 1798 and it was totally ignored. Wood's proof had an algebraic gap. The other one was published by Gauss in 1799 and it was mainly geometric, but it had a topological gap, only filled by Alexander Ostrowski in 1920, as discussed in Smale (1981). 
The first rigorous proof was published by Argand, an amateur mathematician, in 1806 (and revisited in 1813); it was also here that, for the first time, the fundamental theorem of algebra was stated for polynomials with complex coefficients, rather than just real coefficients. Gauss produced two other proofs in 1816 and another incomplete version of his original proof in 1849. The first textbook containing a proof of the theorem was Cauchy's Cours d'analyse de l'École Royale Polytechnique (1821). It contained Argand's proof, although Argand is not credited for it. None of the proofs mentioned so far is constructive. It was Weierstrass who raised for the first time, in the middle of the 19th century, the problem of finding a constructive proof of the fundamental theorem of algebra. He presented his solution, which amounts in modern terms to a combination of the Durand–Kerner method with the homotopy continuation principle, in 1891. Another proof of this kind was obtained by Hellmuth Kneser in 1940 and simplified by his son Martin Kneser in 1981. Without using countable choice, it is not possible to constructively prove the fundamental theorem of algebra for complex numbers based on the Dedekind real numbers (which are not constructively equivalent to the Cauchy real numbers without countable choice). However, Fred Richman proved a reformulated version of the theorem that does work. Equivalent statements There are several equivalent formulations of the theorem: Every univariate polynomial of positive degree with real coefficients has at least one complex root. Every univariate polynomial of positive degree with complex coefficients has at least one complex root. This implies immediately the previous assertion, as real numbers are also complex numbers. The converse results from the fact that one gets a polynomial with real coefficients by taking the product of a polynomial and its complex conjugate (obtained by replacing each coefficient with its complex conjugate). A root of this product is either a root of the given polynomial, or of its conjugate; in the latter case, the conjugate of this root is a root of the given polynomial. Every univariate polynomial of positive degree with complex coefficients can be factorized as where are complex numbers. The complex numbers are the roots of the polynomial. If a root appears in several factors, it is a multiple root, and the number of its occurrences is, by definition, the multiplicity of the root. The proof that this statement results from the previous ones is done by recursion on : when a root has been found, the polynomial division by provides a polynomial of degree whose roots are the other roots of the given polynomial. The next two statements are equivalent to the previous ones, although they do not involve any nonreal complex number. These statements can be proved from previous factorizations by remarking that, if is a non-real root of a polynomial with real coefficients, its complex conjugate is also a root, and is a polynomial of degree two with real coefficients (this is the complex conjugate root theorem). Conversely, if one has a factor of degree two, the quadratic formula gives a root. Every univariate polynomial with real coefficients of degree larger than two has a factor of degree two with real coefficients. Every univariate polynomial with real coefficients of positive degree can be factored as where is a real number and each is a monic polynomial of degree at most two with real coefficients. 
Moreover, one can suppose that the factors of degree two do not have any real root. Proofs All proofs below involve some mathematical analysis, or at least the topological concept of continuity of real or complex functions. Some also use differentiable or even analytic functions. This requirement has led to the remark that the Fundamental Theorem of Algebra is neither fundamental, nor a theorem of algebra. Some proofs of the theorem only prove that any non-constant polynomial with real coefficients has some complex root. This lemma is enough to establish the general case because, given a non-constant polynomial with complex coefficients, the polynomial has only real coefficients, and, if is a root of , then either or its conjugate is a root of . Here, is the polynomial obtained by replacing each coefficient of with its complex conjugate; the roots of are exactly the complex conjugates of the roots of Many non-algebraic proofs of the theorem use the fact (sometimes called the "growth lemma") that a polynomial function p(z) of degree n whose dominant coefficient is 1 behaves like zn when |z| is large enough. More precisely, there is some positive real number R such that when |z| > R. Real-analytic proofs Even without using complex numbers, it is possible to show that a real-valued polynomial p(x): p(0) ≠ 0 of degree n > 2 can always be divided by some quadratic polynomial with real coefficients. In other words, for some real-valued a and b, the coefficients of the linear remainder on dividing p(x) by x2 − ax − b simultaneously become zero. where q(x) is a polynomial of degree n − 2. The coefficients Rp(x)(a, b) and Sp(x)(a, b) are independent of x and completely defined by the coefficients of p(x). In terms of representation, Rp(x)(a, b) and Sp(x)(a, b) are bivariate polynomials in a and b. In the flavor of Gauss's first (incomplete) proof of this theorem from 1799, the key is to show that for any sufficiently large negative value of b, all the roots of both Rp(x)(a, b) and Sp(x)(a, b) in the variable a are real-valued and alternating each other (interlacing property). Utilizing a Sturm-like chain that contain Rp(x)(a, b) and Sp(x)(a, b) as consecutive terms, interlacing in the variable a can be shown for all consecutive pairs in the chain whenever b has sufficiently large negative value. As Sp(a, b = 0) = p(0) has no roots, interlacing of Rp(x)(a, b) and Sp(x)(a, b) in the variable a fails at b = 0. Topological arguments can be applied on the interlacing property to show that the locus of the roots of Rp(x)(a, b) and Sp(x)(a, b) must intersect for some real-valued a and b < 0. Complex-analytic proofs Find a closed disk D of radius r centered at the origin such that |p(z)| > |p(0)| whenever |z| ≥ r. The minimum of |p(z)| on D, which must exist since D is compact, is therefore achieved at some point z0 in the interior of D, but not at any point of its boundary. The maximum modulus principle applied to 1/p(z) implies that p(z0) = 0. In other words, z0 is a zero of p(z). A variation of this proof does not require the maximum modulus principle (in fact, a similar argument also gives a proof of the maximum modulus principle for holomorphic functions). Continuing from before the principle was invoked, if a := p(z0) ≠ 0, then, expanding p(z) in powers of z − z0, we can write Here, the cj are simply the coefficients of the polynomial z → p(z + z0) after expansion, and k is the index of the first non-zero coefficient following the constant term. 
For z sufficiently close to z0 this function has behavior asymptotically similar to the simpler polynomial . More precisely, the function for some positive constant M in some neighborhood of z0. Therefore, if we define and let tracing a circle of radius r > 0 around z, then for any sufficiently small r (so that the bound M holds), we see that When r is sufficiently close to 0 this upper bound for |p(z)| is strictly smaller than |a|, contradicting the definition of z0. Geometrically, we have found an explicit direction θ0 such that if one approaches z0 from that direction one can obtain values p(z) smaller in absolute value than |p(z0)|. Another analytic proof can be obtained along this line of thought observing that, since |p(z)| > |p(0)| outside D, the minimum of |p(z)| on the whole complex plane is achieved at z0. If |p(z0)| > 0, then 1/p is a bounded holomorphic function in the entire complex plane since, for each complex number z, |1/p(z)| ≤ |1/p(z0)|. Applying Liouville's theorem, which states that a bounded entire function must be constant, this would imply that 1/p is constant and therefore that p is constant. This gives a contradiction, and hence p(z0) = 0. Yet another analytic proof uses the argument principle. Let R be a positive real number large enough so that every root of p(z) has absolute value smaller than R; such a number must exist because every non-constant polynomial function of degree n has at most n zeros. For each r > R, consider the number where c(r) is the circle centered at 0 with radius r oriented counterclockwise; then the argument principle says that this number is the number N of zeros of p(z) in the open ball centered at 0 with radius r, which, since r > R, is the total number of zeros of p(z). On the other hand, the integral of n/z along c(r) divided by 2πi is equal to n. But the difference between the two numbers is The numerator of the rational expression being integrated has degree at most n − 1 and the degree of the denominator is n + 1. Therefore, the number above tends to 0 as r → +∞. But the number is also equal to N − n and so N = n. Another complex-analytic proof can be given by combining linear algebra with the Cauchy theorem. To establish that every complex polynomial of degree n > 0 has a zero, it suffices to show that every complex square matrix of size n > 0 has a (complex) eigenvalue. The proof of the latter statement is by contradiction. Let A be a complex square matrix of size n > 0 and let In be the unit matrix of the same size. Assume A has no eigenvalues. Consider the resolvent function which is a meromorphic function on the complex plane with values in the vector space of matrices. The eigenvalues of A are precisely the poles of R(z). Since, by assumption, A has no eigenvalues, the function R(z) is an entire function and Cauchy theorem implies that On the other hand, R(z) expanded as a geometric series gives: This formula is valid outside the closed disc of radius (the operator norm of A). Let Then (in which only the summand k = 0 has a nonzero integral). This is a contradiction, and so A has an eigenvalue. Finally, Rouché's theorem gives perhaps the shortest proof of the theorem. Topological proofs Suppose the minimum of |p(z)| on the whole complex plane is achieved at z0; it was seen at the proof which uses Liouville's theorem that such a number must exist. 
We can write p(z) as a polynomial in z − z0: there is some natural number k and there are some complex numbers ck, ck + 1, ..., cn such that ck ≠ 0 and: If p(z0) is nonzero, it follows that if a is a kth root of −p(z0)/ck and if t is positive and sufficiently small, then |p(z0 + ta)| < |p(z0)|, which is impossible, since |p(z0)| is the minimum of |p| on D. For another topological proof by contradiction, suppose that the polynomial p(z) has no roots, and consequently is never equal to 0. Think of the polynomial as a map from the complex plane into the complex plane. It maps any circle |z| = R into a closed loop, a curve P(R). We will consider what happens to the winding number of P(R) at the extremes when R is very large and when R = 0. When R is a sufficiently large number, then the leading term zn of p(z) dominates all other terms combined; in other words, When z traverses the circle once counter-clockwise then winds n times counter-clockwise around the origin (0,0), and P(R) likewise. At the other extreme, with |z| = 0, the curve P(0) is merely the single point p(0), which must be nonzero because p(z) is never zero. Thus p(0) must be distinct from the origin (0,0), which denotes 0 in the complex plane. The winding number of P(0) around the origin (0,0) is thus 0. Now changing R continuously will deform the loop continuously. At some R the winding number must change. But that can only happen if the curve P(R) includes the origin (0,0) for some R. But then for some z on that circle |z| = R we have p(z) = 0, contradicting our original assumption. Therefore, p(z) has at least one zero. Algebraic proofs These proofs of the Fundamental Theorem of Algebra must make use of the following two facts about real numbers that are not algebraic but require only a small amount of analysis (more precisely, the intermediate value theorem in both cases): every polynomial with an odd degree and real coefficients has some real root; every non-negative real number has a square root. The second fact, together with the quadratic formula, implies the theorem for real quadratic polynomials. In other words, algebraic proofs of the fundamental theorem actually show that if R is any real-closed field, then its extension C = R() is algebraically closed. By induction As mentioned above, it suffices to check the statement "every non-constant polynomial p(z) with real coefficients has a complex root". This statement can be proved by induction on the greatest non-negative integer k such that 2k divides the degree n of p(z). Let a be the coefficient of zn in p(z) and let F be a splitting field of p(z) over C; in other words, the field F contains C and there are elements z1, z2, ..., zn in F such that If k = 0, then n is odd, and therefore p(z) has a real root. Now, suppose that n = 2km (with m odd and k > 0) and that the theorem is already proved when the degree of the polynomial has the form 2k − 1m′ with m′ odd. For a real number t, define: Then the coefficients of qt(z) are symmetric polynomials in the zi with real coefficients. Therefore, they can be expressed as polynomials with real coefficients in the elementary symmetric polynomials, that is, in −a1, a2, ..., (−1)nan. So qt(z) has in fact real coefficients. Furthermore, the degree of qt(z) is n(n − 1)/2 = 2k−1m(n − 1), and m(n − 1) is an odd number. So, using the induction hypothesis, qt has at least one complex root; in other words, zi + zj + tzizj is complex for two distinct elements i and j from {1, ..., n}. 
Since there are more real numbers than pairs (i, j), one can find distinct real numbers t and s such that zi + zj + tzizj and zi + zj + szizj are complex (for the same i and j). So, both zi + zj and zizj are complex numbers. It is easy to check that every complex number has a complex square root, thus every complex polynomial of degree 2 has a complex root by the quadratic formula. It follows that zi and zj are complex numbers, since they are roots of the quadratic polynomial z2 −  (zi + zj)z + zizj. Joseph Shipman showed in 2007 that the assumption that odd degree polynomials have roots is stronger than necessary; any field in which polynomials of prime degree have roots is algebraically closed (so "odd" can be replaced by "odd prime" and this holds for fields of all characteristics). For axiomatization of algebraically closed fields, this is the best possible, as there are counterexamples if a single prime is excluded. However, these counterexamples rely on −1 having a square root. If we take a field where −1 has no square root, and every polynomial of degree n ∈ I has a root, where I is any fixed infinite set of odd numbers, then every polynomial f(x) of odd degree has a root (since has a root, where k is chosen so that ). From Galois theory Another algebraic proof of the fundamental theorem can be given using Galois theory. It suffices to show that C has no proper finite field extension. Let K/C be a finite extension. Since the normal closure of K over R still has a finite degree over C (or R), we may assume without loss of generality that K is a normal extension of R (hence it is a Galois extension, as every algebraic extension of a field of characteristic 0 is separable). Let G be the Galois group of this extension, and let H be a Sylow 2-subgroup of G, so that the order of H is a power of 2, and the index of H in G is odd. By the fundamental theorem of Galois theory, there exists a subextension L of K/R such that Gal(K/L) = H. As [L:R] = [G:H] is odd, and there are no nonlinear irreducible real polynomials of odd degree, we must have L = R, thus [K:R] and [K:C] are powers of 2. Assuming by way of contradiction that [K:C] > 1, we conclude that the 2-group Gal(K/C) contains a subgroup of index 2, so there exists a subextension M of C of degree 2. However, C has no extension of degree 2, because every quadratic complex polynomial has a complex root, as mentioned above. This shows that [K:C] = 1, and therefore K = C, which completes the proof. Geometric proofs There exists still another way to approach the fundamental theorem of algebra, due to J. M. Almira and A. Romero: by Riemannian geometric arguments. The main idea here is to prove that the existence of a non-constant polynomial p(z) without zeros implies the existence of a flat Riemannian metric over the sphere S2. This leads to a contradiction since the sphere is not flat. A Riemannian surface (M, g) is said to be flat if its Gaussian curvature, which we denote by Kg, is identically null. Now, the Gauss–Bonnet theorem, when applied to the sphere S2, claims that which proves that the sphere is not flat. Let us now assume that n > 0 and for each complex number z. Let us define Obviously, p*(z) ≠ 0 for all z in C. Consider the polynomial f(z) = p(z)p*(z). Then f(z) ≠ 0 for each z in C. Furthermore, We can use this functional equation to prove that g, given by for w in C, and for w ∈ S2\{0}, is a well defined Riemannian metric over the sphere S2 (which we identify with the extended complex plane C ∪ {∞}). 
Now, a simple computation shows that since the real part of an analytic function is harmonic. This proves that Kg = 0. Corollaries Since the fundamental theorem of algebra can be seen as the statement that the field of complex numbers is algebraically closed, it follows that any theorem concerning algebraically closed fields applies to the field of complex numbers. Here are a few more consequences of the theorem, which are either about the field of real numbers or the relationship between the field of real numbers and the field of complex numbers: The field of complex numbers is the algebraic closure of the field of real numbers. Every polynomial in one variable z with complex coefficients is the product of a complex constant and polynomials of the form z + a with a complex. Every polynomial in one variable x with real coefficients can be uniquely written as the product of a constant, polynomials of the form x + a with a real, and polynomials of the form x2 + ax + b with a and b real and a2 − 4b < 0 (which is the same thing as saying that the polynomial x2 + ax + b has no real roots). (By the Abel–Ruffini theorem, the real numbers a and b are not necessarily expressible in terms of the coefficients of the polynomial, the basic arithmetic operations and the extraction of n-th roots.) This implies that the number of non-real complex roots is always even and remains even when counted with their multiplicity. Every rational function in one variable x, with real coefficients, can be written as the sum of a polynomial function with rational functions of the form a/(x − b)n (where n is a natural number, and a and b are real numbers), and rational functions of the form (ax + b)/(x2 + cx + d)n (where n is a natural number, and a, b, c, and d are real numbers such that c2 − 4d < 0). A corollary of this is that every rational function in one variable and real coefficients has an elementary primitive. Every algebraic extension of the real field is isomorphic either to the real field or to the complex field. Bounds on the zeros of a polynomial While the fundamental theorem of algebra states a general existence result, it is of some interest, both from the theoretical and from the practical point of view, to have information on the location of the zeros of a given polynomial. The simplest result in this direction is a bound on the modulus: all zeros ζ of a monic polynomial satisfy an inequality |ζ| ≤ R∞, where As stated, this is not yet an existence result but rather an example of what is called an a priori bound: it says that if there are solutions then they lie inside the closed disk of center the origin and radius R∞. However, once coupled with the fundamental theorem of algebra it says that the disk contains in fact at least one solution. More generally, a bound can be given directly in terms of any p-norm of the n-vector of coefficients that is |ζ| ≤ Rp, where Rp is precisely the q-norm of the 2-vector q being the conjugate exponent of p, for any 1 ≤ p ≤ ∞. Thus, the modulus of any solution is also bounded by for 1 < p < ∞, and in particular (where we define an to mean 1, which is reasonable since 1 is indeed the n-th coefficient of our polynomial). The case of a generic polynomial of degree n, is of course reduced to the case of a monic, dividing all coefficients by an ≠ 0. Also, in case that 0 is not a root, i.e. 
a0 ≠ 0, bounds from below on the roots ζ follow immediately as bounds from above on , that is, the roots of Finally, the distance from the roots ζ to any point can be estimated from below and above, seeing as zeros of the polynomial , whose coefficients are the Taylor expansion of P(z) at Let ζ be a root of the polynomial in order to prove the inequality |ζ| ≤ Rp we can assume, of course, |ζ| > 1. Writing the equation as and using the Hölder's inequality we find Now, if p = 1, this is thus In the case 1 < p ≤ ∞, taking into account the summation formula for a geometric progression, we have thus and simplifying, Therefore holds, for all 1 ≤ p ≤ ∞. See also Weierstrass factorization theorem, a generalization of the theorem to other entire functions Eilenberg–Niven theorem, a generalization of the theorem to polynomials with quaternionic coefficients and variables Hilbert's Nullstellensatz, a generalization to several variables of the assertion that complex roots exist Bézout's theorem, a generalization to several variables of the assertion on the number of roots. References Citations Historic sources (tr. Course on Analysis of the Royal Polytechnic Academy, part 1: Algebraic Analysis) . English translation: (tr. New proof of the theorem that every integral rational algebraic function of one variable can be resolved into real factors of the first or second degree). – first proof. – second proof. – third proof. – fourth proof. (The Fundamental Theorem of Algebra and Intuitionism). (tr. An extension of a work of Hellmuth Kneser on the Fundamental Theorem of Algebra). (tr. On the first and fourth Gaussian proofs of the Fundamental Theorem of Algebra). (tr. New proof of the theorem that every integral rational function of one variable can be represented as a product of linear functions of the same variable). Recent literature (tr. On the history of the fundamental theorem of algebra: theory of equations and integral calculus.) (tr. The rational functions §80–88: the fundamental theorem). – English translation of Gauss's second proof. External links Algebra, fundamental theorem of at Encyclopaedia of Mathematics Fundamental Theorem of Algebra — a collection of proofs From the Fundamental Theorem of Algebra to Astrophysics: A "Harmonious" Path Mizar system proof: http://mizar.org/version/current/html/polynom5.html#T74 Articles containing proofs Field (mathematics) Theorems about polynomials Theorems in complex analysis
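Supplementing the section above on bounds for the zeros of a polynomial, here is a small numerical check. Because the exact expression for R∞ did not survive in the text, the sketch assumes the classical Cauchy-style bound 1 + max|ak| over the non-leading coefficients of a monic polynomial; the example polynomial itself is arbitrary.

import numpy as np

# Arbitrary monic example: z^4 - 3z^3 + 2z - 5, coefficients listed from highest degree down.
coeffs = [1.0, -3.0, 0.0, 2.0, -5.0]

# Assumed form of the bound (classical Cauchy bound): R_inf = 1 + max |a_k| over non-leading coefficients.
r_inf = 1.0 + max(abs(c) for c in coeffs[1:])

roots = np.roots(coeffs)
print("R_inf =", r_inf)
print("largest |root| =", max(abs(r) for r in roots))
assert all(abs(r) <= r_inf for r in roots)  # every zero lies in the closed disk |z| <= R_inf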
Fundamental theorem of algebra
[ "Mathematics" ]
6,355
[ "Theorems in mathematical analysis", "Theorems in algebra", "Theorems in complex analysis", "Theorems about polynomials", "Articles containing proofs" ]
51,421
https://en.wikipedia.org/wiki/Integer%20sequence
In mathematics, an integer sequence is a sequence (i.e., an ordered list) of integers. An integer sequence may be specified explicitly by giving a formula for its nth term, or implicitly by giving a relationship between its terms. For example, the sequence 0, 1, 1, 2, 3, 5, 8, 13, ... (the Fibonacci sequence) is formed by starting with 0 and 1 and then adding any two consecutive terms to obtain the next one: an implicit description. The sequence 0, 3, 8, 15, ... is formed according to the formula n2 − 1 for the nth term: an explicit definition. Alternatively, an integer sequence may be defined by a property which members of the sequence possess and other integers do not possess. For example, we can determine whether a given integer is a perfect number, even though we do not have a formula for the nth perfect number. Computable and definable sequences An integer sequence is computable if there exists an algorithm that, given n, calculates an, for all n > 0. The set of computable integer sequences is countable. The set of all integer sequences is uncountable (with cardinality equal to that of the continuum), and so not all integer sequences are computable. Although some integer sequences have definitions, there is no systematic way to define what it means for an integer sequence to be definable in the universe or in any absolute (model independent) sense. Suppose the set M is a transitive model of ZFC set theory. The transitivity of M implies that the integers and integer sequences inside M are actually integers and sequences of integers. An integer sequence is a definable sequence relative to M if there exists some formula P(x) in the language of set theory, with one free variable and no parameters, which is true in M for that integer sequence and false in M for all other integer sequences. In each such M, there are definable integer sequences that are not computable, such as sequences that encode the Turing jumps of computable sets. For some transitive models M of ZFC, every sequence of integers in M is definable relative to M; for others, only some integer sequences are. There is no systematic way to define in M itself the set of sequences definable relative to M and that set may not even exist in some such M. Similarly, the map from the set of formulas that define integer sequences in M to the integer sequences they define is not definable in M and may not exist in M. However, in any model that does possess such a definability map, some integer sequences in the model will not be definable relative to the model. If M contains all integer sequences, then the set of integer sequences definable in M will exist in M and be countable and countable in M. Complete sequences A sequence of positive integers is called a complete sequence if every positive integer can be expressed as a sum of values in the sequence, using each value at most once.
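A minimal Python sketch of the two styles of specification mentioned above — an implicit recurrence (the Fibonacci sequence) and an explicit formula (n2 − 1); the function names are invented for the illustration.

def fibonacci(n_terms):
    # Implicit description: start with 0, 1 and add consecutive terms to get the next one.
    seq = [0, 1]
    while len(seq) < n_terms:
        seq.append(seq[-1] + seq[-2])
    return seq[:n_terms]

def squares_minus_one(n_terms):
    # Explicit description: the nth term is n^2 - 1, with n starting at 1.
    return [n * n - 1 for n in range(1, n_terms + 1)]

print(fibonacci(8))           # [0, 1, 1, 2, 3, 5, 8, 13]
print(squares_minus_one(4))   # [0, 3, 8, 15]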
Examples Integer sequences that have their own name include: Abundant numbers Baum–Sweet sequence Bell numbers Binomial coefficients Carmichael numbers Catalan numbers Composite numbers Deficient numbers Euler numbers Even and odd numbers Factorial numbers Fibonacci numbers Fibonacci word Figurate numbers Golomb sequence Happy numbers Highly composite numbers Highly totient numbers Home primes Hyperperfect numbers Juggler sequence Kolakoski sequence Lucky numbers Lucas numbers Motzkin numbers Natural numbers Padovan numbers Partition numbers Perfect numbers Practical numbers Prime numbers Pseudoprime numbers Recamán's sequence Regular paperfolding sequence Rudin–Shapiro sequence Semiperfect numbers Semiprime numbers Superperfect numbers Triangular numbers Thue–Morse sequence Ulam numbers Weird numbers Wolstenholme number See also Constant-recursive sequence On-Line Encyclopedia of Integer Sequences List of OEIS sequences References . External links Journal of Integer Sequences. Articles are freely available online. Arithmetic functions
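To make the notion of a complete sequence defined above concrete, the brute-force check below (illustrative only; the helper name and the limit are arbitrary choices) verifies that the powers of two form a complete sequence up to a small limit, while the squares do not.

from itertools import combinations

def is_complete_up_to(values, limit):
    # Check that every integer 1..limit is a sum of distinct members of `values`.
    reachable = set()
    for r in range(1, len(values) + 1):
        for combo in combinations(values, r):
            reachable.add(sum(combo))
    return all(n in reachable for n in range(1, limit + 1))

powers_of_two = [2**k for k in range(7)]      # 1, 2, 4, ..., 64
print(is_complete_up_to(powers_of_two, 100))  # True: a binary representation uses each power at most once
print(is_complete_up_to([1, 4, 9, 16], 10))   # False: for example, 3 is not a sum of distinct squares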
Integer sequence
[ "Mathematics" ]
834
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Arithmetic functions", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
51,521
https://en.wikipedia.org/wiki/Low-density%20lipoprotein
Low-density lipoprotein (LDL) is one of the five major groups of lipoprotein that transport all fat molecules around the body in extracellular water. These groups, from least dense to most dense, are chylomicrons (aka ULDL by the overall density naming convention), very low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL) and high-density lipoprotein (HDL). LDL delivers fat molecules to cells. LDL has been associated with the progression of atherosclerosis. Overview Lipoproteins transfer lipids (fats) around the body in the extracellular fluid, making fats available to body cells for receptor-mediated endocytosis. Lipoproteins are complex particles composed of multiple proteins, typically 80–100 proteins per particle (organized by a single apolipoprotein B for LDL and the larger particles). A single LDL particle is about 220–275 angstroms in diameter, typically transporting 3,000 to 6,000 fat molecules per particle, and varying in size according to the number and mix of fat molecules contained within. The lipids carried include all fat molecules with cholesterol, phospholipids, and triglycerides dominant; amounts of each vary considerably. A good clinical interpretation of blood lipid levels is that high LDL, in combination with a high amount of triglycerides, which indicates a high likelihood of the LDL being oxidised, is associated with increased risk of cardiovascular diseases. Biochemistry Structure Each native LDL particle enables emulsification, i.e. surrounding the fatty acids being carried, enabling these fats to move around the body within the water outside cells. Each particle contains a single apolipoprotein B-100 molecule (Apo B-100, a protein that has 4536 amino acid residues and a mass of 514 kDa), along with 80 to 100 additional ancillary proteins. Each LDL has a highly hydrophobic core consisting of polyunsaturated fatty acid known as linoleate and hundreds to thousands (about 1500 commonly cited as an average) of esterified and unesterified cholesterol molecules. This core also carries varying numbers of triglycerides and other fats and is surrounded by a shell of phospholipids and unesterified cholesterol, as well as the single copy of Apo B-100. LDL particles are approximately 22 nm (0.00000087 in.) to 27.5 nm in diameter and have a mass of about 3 million daltons. Since LDL particles contain a variable and changing number of fatty acid molecules, there is a distribution of LDL particle mass and size. Determining the structure of LDL has been difficult for biochemists because of its heterogeneous structure. However, the structure of LDL at human body temperature in native condition, with a resolution of about 16 Angstroms using cryogenic electron microscopy, has been described in 2011. Physiology LDL particles are formed when triglycerides are removed from VLDL by the lipoprotein lipase enzyme (LPL) and they become smaller and denser (i.e. fewer fat molecules with same protein transport shell), containing a higher proportion of cholesterol esters. Transport into the cell When a cell requires additional cholesterol (beyond its current internal HMGCoA production pathway), it synthesizes the necessary LDL receptors as well as PCSK9, a proprotein convertase that marks the LDL receptor for degradation. LDL receptors are inserted into the plasma membrane and diffuse freely until they associate with clathrin-coated pits. 
When LDL receptors bind LDL particles in the bloodstream, the clathrin-coated pits are endocytosed into the cell. Vesicles containing LDL receptors bound to LDL are delivered to the endosome. In the presence of low pH, such as that found in the endosome, LDL receptors undergo a conformation change, releasing LDL. LDL is then shipped to the lysosome, where cholesterol esters in the LDL are hydrolysed. LDL receptors are typically returned to the plasma membrane, where they repeat this cycle. If LDL receptors bind to PCSK9, however, transport of LDL receptors is redirected to the lysosome, where they are degraded. Role in the innate immune system LDL interferes with the quorum sensing system that upregulates genes required for invasive Staphylococcus aureus infection. The mechanism of antagonism entails binding apolipoprotein B to a S. aureus autoinducer pheromone, preventing signaling through its receptor. Mice deficient in apolipoprotein B are more susceptible to invasive bacterial infection. LDL size patterns LDL can be grouped based on its size: large low density LDL particles are described as pattern A, and small high density LDL particles are pattern B. Pattern B has been associated by some with a higher risk for coronary heart disease. This is thought to be because the smaller particles are more easily able to penetrate the endothelium of arterial walls. Pattern I, for intermediate, indicates that most LDL particles are very close in size to the normal gaps in the endothelium (26 nm). According to one study, sizes 19.0–20.5 nm were designated as pattern B and LDL sizes 20.6–22 nm were designated as pattern A. Other studies have shown no such correlation at all. Some evidence suggests the correlation between Pattern B and coronary heart disease is stronger than the correspondence between the LDL number measured in the standard lipid profile test. Tests to measure these LDL subtype patterns have been more expensive and not widely available, so the common lipid profile test is used more often. There has also been noted a correspondence between higher triglyceride levels and higher levels of smaller, denser LDL particles and alternately lower triglyceride levels and higher levels of the larger, less dense ("buoyant") LDL. With continued research, decreasing cost, greater availability and wider acceptance of other lipoprotein subclass analysis assay methods, including NMR spectroscopy, research studies have continued to show a stronger correlation between human clinically obvious cardiovascular events and quantitatively measured particle concentrations. Oxidized LDL Oxidized LDL is a general term for LDL particles with oxidatively modified structural components. As a result, from free radical attack, both lipid and protein parts of LDL can be oxidized in the vascular wall. Besides the oxidative reactions taking place in vascular wall, oxidized lipids in LDL can also be derived from oxidized dietary lipids. Oxidized LDL is known to associate with the development of atherosclerosis, and it is therefore widely studied as a potential risk factor of cardiovascular diseases. Atherogenicity of oxidized LDL has been explained by lack of recognition of oxidation-modified LDL structures by the LDL receptors, preventing the normal metabolism of LDL particles and leading eventually to development of atherosclerotic plaques. Of the lipid material contained in LDL, various lipid oxidation products are known as the ultimate atherogenic species. 
Acting as a transporter of these injurious molecules is another mechanism by which LDL can increase the risk of atherosclerosis. Testing Blood tests commonly report LDL-C: the amount of cholesterol which is estimated to be contained within LDL particles, on average, using a formula, the Friedewald equation. In a clinical context, mathematically calculated estimates of LDL-C are commonly used as an estimate of how much low density lipoproteins are driving progression of atherosclerosis. The problem with this approach is that LDL-C values are commonly discordant with both direct measurements of LDL particles and actual rates of atherosclerosis progression. Direct LDL measurements are also available and better reveal individual issues, but are less often promoted or done due to slightly higher costs and being available from only a couple of laboratories in the United States. In 2008, the ADA and ACC recognized direct LDL particle measurement by NMR as superior for assessing individual risk of cardiovascular events. Estimation of LDL particles via cholesterol content Chemical measures of lipid concentration have long been the most-used clinical measurement, not because they have the best correlation with individual outcome, but because these lab methods are less expensive and more widely available. The lipid profile does not measure LDL particles. It only estimates them using the Friedewald equation, by subtracting the amount of cholesterol associated with other particles, such as HDL and VLDL, assuming a prolonged fasting state: L = C − H − kT, where H is HDL cholesterol, L is LDL cholesterol, C is total cholesterol, T is triglycerides, and k is 0.20 if the quantities are measured in mg/dL and 0.45 if in mmol/L. There are limitations to this method, most notably that samples must be obtained after a 12 to 14 h fast and that LDL-C cannot be calculated if plasma triglyceride is >4.52 mmol/L (400 mg/dL). Even at triglyceride levels of 2.5 to 4.5 mmol/L, this formula is considered inaccurate. If both total cholesterol and triglyceride levels are elevated, a modified formula, with quantities in mg/dL, may be used. This formula provides an approximation with fair accuracy for most people, assuming the blood was drawn after fasting for about 14 hours or longer, but it does not reveal the actual LDL particle concentration, because the percentage of fat molecules within the LDL particles which are cholesterol varies, with as much as 8:1 variation. Several formulas have been published to address this inaccuracy, which stems from the assumption that VLDL cholesterol (VLDL-C) is always one-fifth of the triglyceride concentration; they do so by using an adjustable factor or a regression equation. A few studies have compared the LDL-C values derived from such formulas with values obtained by the direct enzymatic method; the direct enzymatic method has been found to be accurate and should be the test of choice in clinical settings, while in resource-poor settings using a formula may have to be considered. However, the concentration of LDL particles, and to a lesser extent their size, has a stronger and more consistent correlation with individual clinical outcome than the amount of cholesterol within LDL particles, even if the LDL-C estimation is approximately correct. There is increasing evidence and recognition of the value of more targeted and accurate measurements of LDL particles.
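As a concrete illustration of the Friedewald estimate described above (L = C − H − kT, with k = 0.20 for mg/dL and 0.45 for mmol/L, and with the stated triglyceride cutoff beyond which the estimate is not calculated), here is a small Python sketch. The function name and the numbers in the examples are illustrative only and carry no clinical meaning.

def friedewald_ldl_c(total_c, hdl_c, triglycerides, units="mg/dL"):
    # Estimate LDL-C as L = C - H - k*T, using the k value and cutoff given in the text above.
    if units == "mg/dL":
        k, tg_cutoff = 0.20, 400.0
    elif units == "mmol/L":
        k, tg_cutoff = 0.45, 4.52
    else:
        raise ValueError("units must be 'mg/dL' or 'mmol/L'")
    if triglycerides > tg_cutoff:
        return None  # above the cutoff the formula is not considered valid; direct measurement is needed
    return total_c - hdl_c - k * triglycerides

print(friedewald_ldl_c(200, 50, 150))             # 200 - 50 - 0.20*150 = 120.0 (mg/dL)
print(friedewald_ldl_c(5.2, 1.3, 1.5, "mmol/L"))  # 5.2 - 1.3 - 0.45*1.5 = 3.225 (mmol/L)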
Specifically, LDL particle number (concentration), and to a lesser extent size, have shown slightly stronger correlations with atherosclerotic progression and cardiovascular events than obtained using chemical measures of the amount of cholesterol carried by the LDL particles. It is possible that the LDL cholesterol concentration can be low, yet LDL particle number high and cardiovascular events rates are high. Correspondingly, it is possible that LDL cholesterol concentration can be relatively high, yet LDL particle number low and cardiovascular events are also low. Normal ranges In the US, the American Heart Association, NIH, and NCEP provide a set of guidelines for fasting LDL-Cholesterol levels, estimated or measured, and risk for heart disease. As of about 2005, these guidelines were: Over time, with more clinical research, these recommended levels keep being reduced because LDL reduction, including to abnormally low levels, was the most effective strategy for reducing cardiovascular death rates in one large double blind, randomized clinical trial of men with hypercholesterolemia; far more effective than coronary angioplasty/stenting or bypass surgery. For instance, for people with known atherosclerosis diseases, the 2004 updated American Heart Association, NIH and NCEP recommendations are for LDL levels to be lowered to less than 70 mg/dL. This low level of less than 70 mg/dL was recommended for primary prevention of 'very-high risk patients' and in secondary prevention as a 'reasonable further reduction'. This position was disputed. Statin drugs involved in such clinical trials have numerous physiological effects beyond simply the reduction of LDL levels. From longitudinal population studies following progression of atherosclerosis-related behaviors from early childhood into adulthood, the usual LDL in childhood, before the development of fatty streaks, is about 35 mg/dL. However, all the above values refer to chemical measures of lipid/cholesterol concentration within LDL, not measured low-density lipoprotein concentrations, the accurate approach. A study was conducted measuring the effects of guideline changes on LDL cholesterol reporting and control for diabetes visits in the US from 1995 to 2004. It was found that although LDL cholesterol reporting and control for diabetes and coronary heart disease visits improved continuously between 1995 and 2004, neither the 1998 ADA guidelines nor the 2001 ATP III guidelines increased LDL cholesterol control for diabetes relative to coronary heart disease. Direct measurement of LDL particle concentrations There are several competing methods for measurement of lipoprotein particle concentrations and size. The evidence is that the NMR methodology (developed, automated & greatly reduced in costs while improving accuracy as pioneered by Jim Otvos and associates) results in a 22-25% reduction in cardiovascular events within one year, contrary to the longstanding claims by many in the medical industry that the superiority over existing methods was weak, even by statements of some proponents. Since the later 1990s, because of the development of NMR measurements, it has been possible to clinically measure lipoprotein particles at lower cost [under $80 US (including shipping) & is decreasing; versus the previous costs of >$400 to >$5,000] and higher accuracy. There are two other assays for LDL particles, however, like LDL-C, most only estimate LDL particle concentrations. 
Direct LDL particle measurement by NMR was mentioned by the ADA and ACC, in a 28 March 2008 joint consensus statement, as having advantages for predicting individual risk of atherosclerosis disease events, but the statement noted that the test is less widely available, is more expensive [about $13.00 US (2015 without insurance coverage) from some labs which use the Vantera Analyzer]. Debate continues that it is "...unclear whether LDL particle size measurements add value to measurement of LDL-particle concentration", though outcomes have always tracked LDL particle, not LDL-C, concentrations. Using NMR, the total LDL particle concentrations, in nmol/L plasma, are typically subdivided by percentiles referenced to the 5,382 men and women, not on any lipid medications, who are participating in the MESA trial. LDL particle concentration can also be measured by measuring the concentration of the protein ApoB, based on the generally accepted principle that each LDL or VLDL particle carries one ApoB molecule. Optimal ranges The LDL particle concentrations are typically categorized by percentiles, <20%, 20–50%, 50th–80th%, 80th–95% and >95% groups of the people participating and being tracked in the MESA trial, a medical research study sponsored by the United States National Heart, Lung, and Blood Institute. The lowest incidence of atherosclerotic events over time occurs within the <20% group, with increased rates for the higher groups. Multiple other measures, including particle sizes, small LDL particle concentrations, large total and HDL particle concentrations, along with estimations of insulin resistance pattern and standard cholesterol lipid measurements (for comparison of the plasma data with the estimation methods discussed above) are also routinely provided. Lowering LDL-cholesterol The mevalonate pathway serves as the basis for the biosynthesis of many molecules, including cholesterol. The enzyme 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMG CoA reductase) is an essential component and performs the first of 37 steps within the cholesterol production pathway, and is present in every animal cell. LDL-C is not a measurement of actual LDL particles. LDL-C is only an estimate (not measured from the individual's blood sample) of how much cholesterol is being transported by all LDL particles, which is either a smaller concentration of large particles or a high concentration of small particles. LDL particles carry many fat molecules (typically 3,000 to 6,000 fat molecules per LDL particle); this includes cholesterol, triglycerides, phospholipids and others. Thus even if the hundreds to thousands of cholesterol molecules within an average LDL particle were measured, this does not reflect the other fat molecules or even the number of LDL particles. Pharmaceutical PCSK9 inhibitors, in clinical trials, by several companies, are more effective for LDL reduction than the statins, including statins alone at high dose (though not necessarily the combination of statins plus ezetimibe). Statins reduce high levels of LDL particles by inhibiting the enzyme HMG-CoA reductase in cells, the rate-limiting step of cholesterol synthesis. To compensate for the decreased cholesterol availability, synthesis of LDL receptors (including hepatic) is increased, resulting in an increased clearance of LDL particles from the extracellular water, including of the blood. Ezetimibe reduces intestinal absorption of cholesterol, thus can reduce LDL particle concentrations when combined with statins. 
Niacin (B3), lowers LDL by selectively inhibiting hepatic diacylglycerol acyltransferase 2, reducing triglyceride synthesis and VLDL secretion through a receptor HM74 and HM74A or GPR109A. Several CETP inhibitors have been researched to improve HDL concentrations, but so far, despite dramatically increasing HDL-C, have not had a consistent track record in reducing atherosclerosis disease events. Some have increased mortality rates compared with placebo. Clofibrate is effective at lowering cholesterol levels, but has been associated with significantly increased cancer and stroke mortality, despite lowered cholesterol levels. Other developed and tested fibrates, e.g. fenofibric acid have had a better track record and are primarily promoted for lowering VLDL particles (triglycerides), not LDL particles, yet can help some in combination with other strategies. Some tocotrienols, especially delta- and gamma-tocotrienols, are being promoted as statin alternative non-prescription agents to treat high cholesterol, having been shown in vitro to have an effect. In particular, gamma-tocotrienol appears to be another HMG-CoA reductase inhibitor, and can reduce cholesterol production. As with statins, this decrease in intra-hepatic (liver) LDL levels may induce hepatic LDL receptor up-regulation, also decreasing plasma LDL levels. As always, a key issue is how benefits and complications of such agents compare with statins—molecular tools that have been analyzed in large numbers of human research and clinical trials since the mid-1970s. Phytosterols are widely recognized as having a proven LDL cholesterol lowering efficacy' A 2018 review found a dose-response relationship for phytosterols, with intakes of 1.5 to 3 g/day lowering LDL-C by 7.5% to 12%, but reviews as of 2017 had found no data indicating that the consumption of phytosterols may reduce the risk of CVD. Current supplemental guidelines for reducing LDL recommend doses of phytosterols in the 1.6-3.0 grams per day range (Health Canada, EFSA, ATP III, FDA) with a 2009 meta-analysis demonstrating an 8.8% reduction in LDL-cholesterol at a mean dose of 2.15 gram per day. Lifestyle LDL cholesterol can be lowered through dietary intervention by limiting foods with saturated fat and avoiding foods with trans fat. Saturated fats are found in meat products (including poultry), full-fat dairy, eggs, and refined tropical oils like coconut and palm. Added trans fat (in the form of partially hydrogenated oils) has been banned in the US since 2021. However, trans fat can still be found in red meat and dairy products as it is produced in small amounts by ruminants such as sheep and cows. LDL cholesterol can also be lowered by increasing consumption of soluble fiber and plant-based foods. Another lifestyle approach to reduce LDL cholesterol has been minimizing total body fat, in particular fat stored inside the abdominal cavity (visceral body fat). Visceral fat, which is more metabolically active than subcutaneous fat, has been found to produce many enzymatic signals, e.g. resistin, which increase insulin resistance and circulating VLDL particle concentrations, thus both increasing LDL particle concentrations and accelerating the development of diabetes mellitus. Research Gene editing In 2021, scientists demonstrated that CRISPR gene editing can decrease blood levels of LDL cholesterol in Macaca fascicularis monkeys for months by 60% via knockout of PCSK9 in the liver. 
See also Notes and references External links Fat (LDL) Degradation: PMAP The Proteolysis Map-animation Adult Treatment Panel III Full Report ATP III Update 2004 Cardiology Lipid disorders Lipoproteins
Low-density lipoprotein
[ "Chemistry" ]
4,595
[ "Lipid biochemistry", "Lipoproteins" ]
51,596
https://en.wikipedia.org/wiki/Inositol%20trisphosphate
Inositol trisphosphate or inositol 1,4,5-trisphosphate, abbreviated InsP3 or Ins3P or IP3, is an inositol phosphate signaling molecule. It is made by hydrolysis of phosphatidylinositol 4,5-bisphosphate (PIP2), a phospholipid that is located in the plasma membrane, by phospholipase C (PLC). Together with diacylglycerol (DAG), IP3 is a second messenger molecule used in signal transduction in biological cells. While DAG stays inside the membrane, IP3 is soluble and diffuses through the cell, where it binds to its receptor, which is a calcium channel located in the endoplasmic reticulum. When IP3 binds its receptor, calcium is released into the cytosol, thereby activating various calcium-regulated intracellular signals. Properties Chemical formula and molecular weight IP3 is an organic molecule with a molecular mass of 420.10 g/mol. Its empirical formula is C6H15O15P3. It is composed of an inositol ring with three phosphate groups bound at the 1, 4, and 5 carbon positions, and three hydroxyl groups bound at positions 2, 3, and 6. Chemical properties Phosphate groups can exist in three different forms depending on a solution's pH. Phosphorus atoms can bind three oxygen atoms with single bonds and a fourth oxygen atom using a double/dative bond. The pH of the solution, and thus the form of the phosphate group, determines its ability to bind to other molecules. The binding of phosphate groups to the inositol ring is accomplished by phosphoester binding (see phosphoric acids and phosphates). This bond involves combining a hydroxyl group from the inositol ring and a free phosphate group through a dehydration reaction. Considering that the average physiological pH is approximately 7.4, the main form of the phosphate groups bound to the inositol ring in vivo is PO42−. This gives IP3 a net negative charge, which is important in allowing it to dock to its receptor, through binding of the phosphate groups to positively charged residues on the receptor. IP3 has three hydrogen bond donors in the form of its three hydroxyl groups. The hydroxyl group on the 6th carbon atom in the inositol ring is also involved in IP3 docking. Binding to its receptor The docking of IP3 to its receptor, which is called the inositol trisphosphate receptor (InsP3R), was first studied using deletion mutagenesis in the early 1990s. Studies focused on the N-terminus side of the IP3 receptor. In 1997, researchers localized the region of the IP3 receptor involved in binding of IP3 to between amino acid residues 226 and 578. Considering that IP3 is a negatively charged molecule, positively charged amino acids such as arginine and lysine were believed to be involved. Two arginine residues at positions 265 and 511 and one lysine residue at position 508 were found to be key in IP3 docking. Using a modified form of IP3, it was discovered that all three phosphate groups interact with the receptor, but not equally. Phosphates at the 4th and 5th positions interact more extensively than the phosphate at the 1st position and the hydroxyl group at the 6th position of the inositol ring. Discovery The discovery that a hormone can influence phosphoinositide metabolism was made by Mabel R. Hokin (1924–2003) and her husband Lowell E. Hokin in 1953, when they discovered that radioactive 32P phosphate was incorporated into the phosphatidylinositol of pancreas slices when stimulated with acetylcholine. Up until then phospholipids were believed to be inert structures only used by cells as building blocks for construction of the plasma membrane.
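As a quick arithmetic check of the composition given above, the following sketch sums standard atomic weights for the empirical formula C6H15O15P3. The atomic weights are ordinary reference values rather than figures from the article, so a small rounding difference from the quoted 420.10 g/mol is expected.

ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999, "P": 30.974}  # g/mol, standard reference values

IP3_FORMULA = {"C": 6, "H": 15, "O": 15, "P": 3}  # empirical formula C6H15O15P3 as stated above

molar_mass = sum(ATOMIC_WEIGHTS[element] * count for element, count in IP3_FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # prints 420.09 g/mol, consistent with the quoted 420.10 g/mol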
Over the next 20 years, little was discovered about the importance of PIP2 metabolism in terms of cell signaling, until the mid-1970s when Robert H. Michell hypothesized a connection between the catabolism of PIP2 and increases in intracellular calcium (Ca2+) levels. He hypothesized that receptor-activated hydrolysis of PIP2 produced a molecule that caused increases in intracellular calcium mobilization. This idea was researched extensively by Michell and his colleagues, who in 1981 were able to show that PIP2 is hydrolyzed into DAG and IP3 by a then unknown phosphodiesterase. In 1984 it was discovered that IP3 acts as a secondary messenger that is capable of traveling through the cytoplasm to the endoplasmic reticulum (ER), where it stimulates the release of calcium into the cytoplasm. Further research provided valuable information on the IP3 pathway, such as the discovery in 1986 that one of the many roles of the calcium released by IP3 is to work with DAG to activate protein kinase C (PKC). It was discovered in 1989 that phospholipase C (PLC) is the phosphodiesterase responsible for hydrolyzing PIP2 into DAG and IP3. Today the IP3 signaling pathway is well mapped out, and is known to be important in regulating a variety of calcium-dependent cell signaling pathways. Signaling pathway Increases in the intracellular Ca2+ concentrations are often a result of IP3 activation. When a ligand binds to a G protein-coupled receptor (GPCR) that is coupled to a Gq heterotrimeric G protein, the α-subunit of Gq can bind to and induce activity in the PLC isozyme PLC-β, which results in the cleavage of PIP2 into IP3 and DAG. If a receptor tyrosine kinase (RTK) is involved in activating the pathway, the isozyme PLC-γ has tyrosine residues that can become phosphorylated upon activation of an RTK, and this will activate PLC-γ and allow it to cleave PIP2 into DAG and IP3. This occurs in cells that are capable of responding to growth factors such as insulin, because the growth factors are the ligands responsible for activating the RTK. IP3 (also abbreviated Ins(1,4,5)P3 is a soluble molecule and is capable of diffusing through the cytoplasm to the ER, or the sarcoplasmic reticulum (SR) in the case of muscle cells, once it has been produced by the action of PLC. Once at the ER, IP3 is able to bind to the Ins(1,4,5)P3 receptor Ins(1,4,5)P3R which is a ligand-gated Ca2+ channel that is found on the surface of the ER. The binding of IP3 (the ligand in this case) to Ins(1,4,5)P3R triggers the opening of the Ca2+ channel, and thus release of Ca2+ into the cytoplasm. In heart muscle cells this increase in Ca2+ activates the ryanodine receptor-operated channel on the SR, results in further increases in Ca2+ through a process known as calcium-induced calcium release. IP3 may also activate Ca2+ channels on the cell membrane indirectly, by increasing the intracellular Ca2+ concentration. Function Human IP3's main functions are to mobilize Ca2+ from storage organelles and to regulate cell proliferation and other cellular reactions that require free calcium. In smooth muscle cells, for example, an increase in concentration of cytoplasmic Ca2+ results in the contraction of the muscle cell. In the nervous system, IP3 serves as a second messenger, with the cerebellum containing the highest concentration of IP3 receptors. There is evidence that IP3 receptors play an important role in the induction of plasticity in cerebellar Purkinje cells. 
Sea urchin eggs The slow block to polyspermy in the sea urchin is mediated by the PIP2 secondary messenger system. Activation of the binding receptors activates PLC, which cleaves PIP2 in the egg plasma membrane, releasing IP3 into the egg cell cytoplasm. IP3 diffuses to the ER, where it opens Ca2+ channels. Research Huntington's disease Huntington's disease occurs when the cytosolic protein Huntingtin (Htt) has an additional 35 glutamine residues added to its amino terminal region. This modified form of Htt is called Httexp. Httexp makes Type 1 IP3 receptors more sensitive to IP3, which leads to the release of too much Ca2+ from the ER. The release of Ca2+ from the ER causes an increase in the cytosolic and mitochondrial concentrations of Ca2+. This increase in Ca2+ is thought to be the cause of GABAergic MSN degradation. Alzheimer's disease Alzheimer's disease involves the progressive degeneration of the brain, severely impacting mental faculties. Since the Ca2+ hypothesis of Alzheimer's was proposed in 1994, several studies have shown that disruptions in Ca2+ signaling are the primary cause of Alzheimer's disease. Familial Alzheimer's disease has been strongly linked to mutations in the presenilin 1 (PS1), presenilin 2 (PS2), and amyloid precursor protein (APP) genes. All of the mutated forms of these genes observed to date have been found to cause abnormal Ca2+ signaling in the ER. Mutations in PS1 have been shown to increase IP3-mediated Ca2+ release from the ER in several animal models. Calcium channel blockers have been used to treat Alzheimer's disease with some success, and the use of lithium to decrease IP3 turnover has also been suggested as a possible method of treatment. See also Adenophostin Inositol Inositol phosphate myo-Inositol Myo-inositol trispyrophosphate Inositol pentakisphosphate Inositol hexaphosphate Inositol trisphosphate receptor ITPR1 ITPKC References External links Signal transduction Inositol Phosphate esters Second messenger system
Inositol trisphosphate
[ "Chemistry", "Biology" ]
2,162
[ "Inositol", "Second messenger system", "Signal transduction", "Biochemistry", "Neurochemistry" ]