In fluid dynamics, the Hadamard–Rybczynski equation gives the terminal velocity of a slowly moving spherical bubble through an ambient fluid. It is named after Jacques Hadamard and Witold Rybczynski. The Hadamard–Rybczynski equation can be derived from the Navier–Stokes equations by considering only the buoyancy force and drag force acting on the moving bubble; the surface tension force and the inertia force of the bubble are neglected. [1]
https://en.wikipedia.org/wiki/Hadamard–Rybczynski_equation
In particle physics, a hadron is a composite subatomic particle made of two or more quarks held together by the strong nuclear force. Pronounced /ˈhædrɒn/, the name is derived from Ancient Greek ἁδρός (hadrós) 'stout, thick'. Hadrons are analogous to molecules, which are held together by the electric force. Most of the mass of ordinary matter comes from two hadrons: the proton and the neutron, while most of the mass of the protons and neutrons is in turn due to the binding energy of their constituent quarks, arising from the strong force. Hadrons are categorized into two broad families: baryons, made of an odd number of quarks (usually three), and mesons, made of an even number of quarks (usually two: one quark and one antiquark). [1] Protons and neutrons (which make up the majority of the mass of an atom) are examples of baryons; pions are an example of a meson. A tetraquark state (an exotic meson), named the Z(4430)−, was discovered in 2007 by the Belle Collaboration [2] and confirmed as a resonance in 2014 by the LHCb collaboration. [3] Two pentaquark states (exotic baryons), named P+c(4380) and P+c(4450), were discovered in 2015 by the LHCb collaboration. [4] There are several other "exotic" hadron candidates, and other colour-singlet quark combinations may also exist. Almost all "free" hadrons and antihadrons (meaning, in isolation and not bound within an atomic nucleus) are believed to be unstable and eventually decay into other particles. The only known possible exception is free protons, which appear to be stable, or at least take immense amounts of time to decay (on the order of 10^34+ years). By way of comparison, free neutrons are the longest-lived unstable particles, decaying with a half-life of about 611 seconds and a mean lifetime of 879 seconds; [a] [5] see free neutron decay. Hadron physics is studied by colliding hadrons, e.g.
protons, with each other or with the nuclei of dense, heavy elements such as lead (Pb) or gold (Au), and detecting the debris in the produced particle showers. A similar process occurs in the natural environment, in the extreme upper atmosphere, where muons and mesons such as pions are produced by the collisions of cosmic rays with rarefied gas particles in the outer atmosphere. [6] The term "hadron" is a new Greek word introduced by L. B. Okun in a plenary talk at the 1962 International Conference on High Energy Physics at CERN. [7] He opened his talk with the definition of a new category term: Notwithstanding the fact that this report deals with weak interactions, we shall frequently have to speak of strongly interacting particles. These particles pose not only numerous scientific problems, but also a terminological problem. The point is that "strongly interacting particles" is a very clumsy term which does not yield itself to the formation of an adjective. For this reason, to take but one instance, decays into strongly interacting particles are called "non-leptonic". This definition is not exact because "non-leptonic" may also signify photonic. In this report I shall call strongly interacting particles "hadrons", and the corresponding decays "hadronic" (the Greek ἁδρός signifies "large", "massive", in contrast to λεπτός which means "small", "light"). I hope that this terminology will prove to be convenient. — L. B. Okun (1962) [7] According to the quark model, [8] the properties of hadrons are primarily determined by their so-called valence quarks. For example, a proton is composed of two up quarks (each with electric charge +2⁄3, for a total of +4⁄3 together) and one down quark (with electric charge −1⁄3). Adding these together yields the proton charge of +1. Although quarks also carry color charge, hadrons must have zero total color charge because of a phenomenon called color confinement.
That is, hadrons must be "colorless" or "white". The simplest ways for this to occur are with a quark of one color and an antiquark of the corresponding anticolor, or three quarks of different colors. Hadrons with the first arrangement are a type of meson, and those with the second arrangement are a type of baryon. Massless virtual gluons compose the overwhelming majority of particles inside hadrons, as well as the major constituents of their mass (with the exception of the heavy charm and bottom quarks; the top quark vanishes before it has time to bind into a hadron). The strength of the strong-force gluons which bind the quarks together has sufficient energy (E) to have resonances composed of massive (m) quarks (E ≥ mc^2). One outcome is that short-lived pairs of virtual quarks and antiquarks are continually forming and vanishing again inside a hadron. Because the virtual quarks are not stable wave packets (quanta), but an irregular and transient phenomenon, it is not meaningful to ask which quark is real and which virtual; only the small excess is apparent from the outside in the form of a hadron. Therefore, when a hadron or anti-hadron is stated to consist of (typically) two or three quarks, this technically refers to the constant excess of quarks versus antiquarks. Like all subatomic particles, hadrons are assigned quantum numbers corresponding to the representations of the Poincaré group: J^PC(m), where J is the spin quantum number, P the intrinsic parity (or P-parity), C the charge conjugation (or C-parity), and m is the particle's mass. Note that the mass of a hadron has very little to do with the mass of its valence quarks; rather, due to mass–energy equivalence, most of the mass comes from the large amount of energy associated with the strong interaction. Hadrons may also carry flavor quantum numbers such as isospin (G-parity) and strangeness.
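The valence-quark charge arithmetic described above (two up quarks plus one down quark giving the proton's +1) can be checked with exact fractions. A minimal sketch; the quark charges are standard, but the helper and variable names are illustrative, not from the source:

```python
from fractions import Fraction

# Electric charges of the relevant quark flavors, in units of the
# elementary charge e.
UP = Fraction(2, 3)
DOWN = Fraction(-1, 3)

def total_charge(valence_quarks):
    """Sum the electric charges of a hadron's valence (anti)quarks."""
    return sum(valence_quarks, Fraction(0))

proton = [UP, UP, DOWN]       # uud
neutron = [UP, DOWN, DOWN]    # udd
pi_plus = [UP, -DOWN]         # u plus anti-down; an antiquark's charge is negated

print(total_charge(proton))   # 1
print(total_charge(neutron))  # 0
print(total_charge(pi_plus))  # 1
```

Using `Fraction` rather than floats keeps the thirds exact, so the sums come out to whole charges with no rounding.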
All quarks carry an additive, conserved quantum number called a baryon number (B), which is +1⁄3 for quarks and −1⁄3 for antiquarks. This means that baryons (composite particles made of three, five or a larger odd number of quarks) have B = 1, whereas mesons have B = 0. Hadrons have excited states known as resonances. Each ground state hadron may have several excited states; several hundred different resonances have been observed in experiments. Resonances decay extremely quickly (within about 10^−24 seconds) via the strong nuclear force. In other phases of matter the hadrons may disappear. For example, at very high temperature and high pressure, unless there are sufficiently many flavors of quarks, the theory of quantum chromodynamics (QCD) predicts that quarks and gluons will no longer be confined within hadrons, "because the strength of the strong interaction diminishes with energy". This property, which is known as asymptotic freedom, has been experimentally confirmed in the energy range between 1 GeV (gigaelectronvolt) and 1 TeV (teraelectronvolt). [9] All free hadrons except (possibly) the proton and antiproton are unstable. Baryons are hadrons containing an odd number of valence quarks (at least 3). [1] Most well-known baryons such as the proton and neutron have three valence quarks, but pentaquarks with five quarks (three quarks of different colors, plus one extra quark–antiquark pair) have also been proven to exist. Because baryons have an odd number of quarks, they are also all fermions, i.e., they have half-integer spin. As quarks possess baryon number B = 1⁄3, baryons have baryon number B = 1. Pentaquarks also have B = 1, since the extra quark's and antiquark's baryon numbers cancel. Each type of baryon has a corresponding antiparticle (antibaryon) in which quarks are replaced by their corresponding antiquarks.
For example, just as a proton is made of two up quarks and one down quark, its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark. As of August 2015, there are two known pentaquarks, P+c(4380) and P+c(4450), both discovered in 2015 by the LHCb collaboration. [4] Mesons are hadrons containing an even number of valence quarks (at least two). [1] Most well-known mesons are composed of a quark–antiquark pair, but possible tetraquarks (four quarks) and hexaquarks (six quarks, comprising either a dibaryon or three quark–antiquark pairs) may have been discovered and are being investigated to confirm their nature. [10] Several other hypothetical types of exotic meson may exist which do not fall within the quark model of classification. These include glueballs and hybrid mesons (mesons bound by excited gluons). Because mesons have an even number of quarks, they are also all bosons, with integer spin, i.e., 0 or 1. They have baryon number B = 1⁄3 − 1⁄3 = 0. Examples of mesons commonly produced in particle physics experiments include pions and kaons. Pions also play a role in holding atomic nuclei together via the residual strong force.
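The baryon-number bookkeeping above (B = +1⁄3 per quark, −1⁄3 per antiquark) can be sketched the same way; the helper name is illustrative, not from the source:

```python
from fractions import Fraction

def baryon_number(n_quarks, n_antiquarks):
    """B contributes +1/3 per quark and -1/3 per antiquark."""
    return (n_quarks - n_antiquarks) * Fraction(1, 3)

print(baryon_number(3, 0))  # baryon (e.g. proton): 1
print(baryon_number(1, 1))  # meson (quark-antiquark pair): 0
print(baryon_number(4, 1))  # pentaquark: 4/3 - 1/3 = 1
```

The pentaquark case shows why it still counts as a baryon: the extra quark and antiquark cancel, leaving B = 1.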
https://en.wikipedia.org/wiki/Hadron
Hadrosaurids, also commonly referred to as duck-billed dinosaurs or hadrosaurs, were large terrestrial herbivores. Their diet remains a subject of debate among paleontologists, especially regarding whether hadrosaurids were grazers, who fed on vegetation close to the ground, or browsers, who ate higher-growing leaves and twigs. Preserved stomach content findings have indicated they may have been browsers, whereas other studies into jaw movements indicate they may have been grazers. The mouth of a hadrosaur had hundreds of tiny teeth packed into dental batteries. These teeth were continually replaced with new ones. [1] Hadrosaur beaks were used to cut food, either by stripping off leaves [2] [3] or by cropping. [1] It is believed hadrosaurs had cheeks in order to keep food in the mouth. [4] [5] Researchers have long believed their unusual mouth mechanics may have played a role in their evolutionary success. [6] However, because they lack the complex flexible lower jaw joint of today's mammals, it has been difficult for scientists to determine exactly how the hadrosaurs broke down their food and ate. [7] Without this understanding, it had been impossible to form a complete picture of Late Cretaceous ecosystems and how they were affected during the Cretaceous–Paleogene extinction event 66 million years ago. [8] It has also remained unclear exactly what hadrosaurids ate. In particular, it has never been definitively proven whether hadrosaurs were grazers who ate vegetation close to the ground, like modern-day sheep or cows, or whether the dinosaurs were browsers who ate higher-growing leaves and twigs, like today's deer or giraffes. [8] A 2008–2009 study by University of Leicester researchers analyzed hundreds of microscopic scratches on the teeth of a fossilized Edmontosaurus jaw and determined hadrosaurs had a unique way of eating unlike that of any creature living today.
In contrast to the flexible lower jaw joint prevalent in today's mammals, a hadrosaur had a unique hinge between the upper jaws and the rest of its skull. The team found the dinosaur's upper jaws pushed outwards and sideways while chewing, as the lower jaw slid against the upper teeth. Coprolites (fossilized droppings) of some Late Cretaceous hadrosaurs show that the animals sometimes deliberately ate rotting wood. Wood itself is not nutritious, but decomposing wood would have contained fungi, decomposed wood material and detritus-eating invertebrates, all of which would have been nutritious. [9] The first hadrosaur finds did not include much skull material. Hadrosaur teeth have been known since the 1850s (Joseph Leidy's Trachodon), [10] and a few fragments of teeth and jaws were among the bones named Hadrosaurus by Leidy in 1858. [11] [12] (The skeletal mount made for Hadrosaurus by Benjamin Waterhouse Hawkins included a speculative iguana-like skull.) [13] Leidy had enough skeletal material to make other inferences about the paleobiology of hadrosaurs, though. Of particular importance was the unequal length of the forelimbs and hindlimbs. He interpreted his new animal as a kangaroo-like animal that browsed along rivers, using its forelimbs to manipulate branches. [11] [13] His vague inference of amphibious habits would later be expanded upon by Edward Drinker Cope, who contributed the mistaken conclusion that hadrosaur teeth and jaws were weak and suitable only for eating soft water plants. [2] Cope described the next piece of the puzzle in 1874: a more complete jaw fragment he named Cionodon arctatus, [14] which revealed for the first time the complex hadrosaur tooth battery. [13] However, the first essentially complete hadrosaur skull was not described until 1883. It was part of a skeleton (the first essentially complete hadrosaur skeleton as well) collected in 1882 by Dr. J. L. Wortman and R. S. Hill for Cope.
Described as a specimen of Diclonius mirabilis, it is now known as Edmontosaurus. [15] Cope immediately drew attention to the anterior part of the skull, which was drawn out, long, and wide. He compared it to that of a goose in side view, and to a short-billed spoonbill in top view. Additionally, he noted the presence of what he interpreted as the remnants of a dermal structure surrounding the beak. Significantly, Cope regarded his Diclonius as an amphibious animal consuming soft water vegetation. His reasoning was that the teeth of the lower jaw were weakly connected to the bone and liable to break off if used to consume terrestrial food, and he described the beak as weak as well. [16] Unfortunately for Cope, aside from misidentifying several of the bones of the skull, [17] by chance the lower jaws he was studying were missing the walls supporting the teeth from the inside; the teeth were actually well supported. [2] [18] While Cope anticipated publishing a full report with illustrations, he never did so, and instead the first accurate illustrated description of a hadrosaur skull and skeleton was produced by his great rival, Othniel Charles Marsh. [13] [17] [19] While Marsh corrected several anatomical errors, he retained Cope's postulated diet of soft plants. [17] The description of hadrosaurs as amphibious eaters of aquatic plants became so ingrained that when the first possible case of hadrosaur gut contents was described in 1922 and found to be made up of terrestrial plants, the author made a point of noting that the specimen only established that hadrosaurs could eat land plants as well as water plants. [2] [20] The early study of hadrosaurid dietary adaptations and feeding behavior was summarized in a 1942 monograph by Richard Swann Lull and Nelda Wright. Unlike previous authors, they moved away from soft water plants as the major part of the diet, but retained the interpretation of an amphibious lifestyle.
They drew attention to the extensive development of the hadrosaurid dental batteries, and compared their dental equipment to that of horses, noting the advantage the dinosaurs had in continual replacement of teeth. However, they found the purpose of the dental batteries uncertain: hadrosaur jaws were unlike those of any modern reptiles, and there did not appear to be an evolutionary pressure on hadrosaurids comparable to the one grasses exerted on horses. Lull and Wright eliminated soft plants as the primary choice of diet, and eliminated grasses on the grounds that the beak was unlike that of grazing birds like geese, and that the quantity of available grasses appeared insufficient to feed hadrosaurids. Instead, they proposed equisetaleans (horsetails) as the major food source, as these plants existed in the same times and places as hadrosaurids, are known to be rich in starch, and contain abrasive silica which would necessitate teeth that could be replaced. Softer land and water plants were proposed as secondary foods. Lull and Wright found that their proposed feeding ecology was comparable to that of a modern moose, which browses on trees and feeds on water plants in wetlands. They further interpreted the complex anatomy of hadrosaurid snouts and nasal passages as adaptations to feeding underwater, like moose. [21] Lull and Wright added a new element to hadrosaurid feeding by proposing the presence of muscles analogous to mammalian cheek muscles, which would hold in food chopped by the teeth. These muscles would be attached to bony ridges present on the upper and lower jaws. The authors interpreted the action of the jaws as limited to simple up-and-down motions, finding forward–backward motion unlikely based on skull articulation. The vertical motion would cut food into short lengths, and the pieces would be retained by the cheeks. To manipulate the food in the cheeks, the authors inferred the presence of a well-developed tongue.
[22] The general preexisting consensus on hadrosaurid paleobiology was challenged in 1964 by John Ostrom, who found little evidence to support either a diet of aquatic plants or an amphibious lifestyle. Unlike previous depictions, he interpreted hadrosaurids as terrestrial foragers that browsed on land plants, not aquatic plants. Like Lull and Wright, he drew attention to the robust dental batteries, and found that hard, resistant foods were the most likely diet (such as woody, silica-rich, or fibrous materials). Unlike Lull and Wright, he interpreted hadrosaur jaws as using a complex rodent-like forward–backward grinding motion, and did not comment on the possibility of cheeks. Drawing on an older proposal made during study of a hadrosaur specimen with a preserved beak, he noted the possibility that the animals stripped leaves and shoots from branches by closing the beak over branches and pulling back. A terrestrial diet was also supported by the 1922 gut content study, which found conifer needles and twigs, seeds, and fruits inside the specimen. There was also more circumstantial evidence for terrestrial feeding. Ostrom found that hadrosaurid skeletal anatomy indicated that the animals were well adapted to move on land, and were well supported by ossified tendons along the vertebral spines, which would have hindered swimming. He also reported that aquatic plant pollen was rare in the rock units hadrosaurids are known from, which indicates that aquatic plants were uncommon. [2] In 1984, David B. Weishampel proposed a new hypothesis on how hadrosaurids fed. His study of the sutures between bones in fossil skulls concluded that ornithopods, a group of bird-hipped dinosaurs that includes hadrosaurids, had flexible upper jaws and that when the lower jaw clamped shut, pressure would spread outward from both sides of the upper jaw. The upper teeth would grind against the lower teeth like rasps, trapping the plants and grinding them up.
[23] The theory remained largely unproven until the study by Purnell, Williams and Barrett, which Science magazine called "The strongest independent evidence yet for this unique jaw motion". [24] However, in 2008, a group of American and Canadian researchers, led by vertebrate paleobiologist Natalia Rybczynski, replicated Weishampel's proposed chewing motion using a computerized three-dimensional animation model. Rybczynski et al. believe Weishampel's model may not be viable, and plan to test other hypotheses. [25] In 2008, a team led by University of Colorado at Boulder graduate student Justin S. Tweet found a homogeneous accumulation of millimeter-scale leaf fragments in the gut region of a well-preserved, partially grown Brachylophosaurus. [26] [27] As a result of that finding, Tweet concluded in September 2008 that the animal was likely a browser, not a grazer. [27] A study into exactly how a hadrosaur broke down and ate its food was conducted by Vince Williams, a graduate student at the University of Leicester; Paul Barrett, a paleontologist with London's Natural History Museum; and Mark Purnell, a British paleontologist from the geology department of the University of Leicester. [28] [29] The three employed a new approach to analyzing the feeding mechanisms of dinosaurs, and thus to understanding their place in prehistoric ecosystems. [8] Chewing on solid food always leaves tiny scratches on the surfaces of teeth. The trio believed that by looking at the size and orientation of those markings on hadrosaurid teeth, they would be able to learn about the movements of the jaws. [29] Purnell said he believed this form of study could help determine how and what the hadrosaur ate, and noted that no previous studies had ever employed this type of analysis.
[30] Williams, Barrett, and Purnell conducted their study using the jaws of an Edmontosaurus, a hadrosaurid that lived between 68 and 66 million years ago in what is now the United States and Canada. The specific Edmontosaurus jaw used in this study was collected from Late Cretaceous rocks found in the United States. [28] [29] The individual teeth on the jaw contained hundreds of microscopic scratches, which had been preserved intact during fossilization. The researchers carefully cleaned the jaws, molded them and coated them with gold to make a detailed replica of the tooth surface. Then they used a scanning electron microscope to give high-power magnification of the scratches for study, and conducted a three-dimensional statistical analysis of the direction of the scratches. [8] [28] [29] The study found that the hadrosaur chewed using a method completely different from that of any creature living today, [7] and utilized a type of jaw that is now extinct. [8] The study found the Edmontosaurus jaw had four different sets of parallel scratches running in different directions. Purnell concluded each set of scratches related to a specific jaw movement. This revealed that the movement of hadrosaur jaws was complex, employing motion in several different directions: up-and-down, front-to-back and sideways. The trio concluded that in contrast to the flexible lower jaw joint prevalent in modern mammals, the hadrosaur had a hinge between its upper jaws and the rest of its skull. [7] According to the study, the hadrosaur would push its upper jaws outwards and sideways, while the lower teeth slid against the upper teeth. [29] As the tooth surfaces slid sideways across each other, the food would be ground and shredded before consumption. [8] Purnell said the style of eating "was not a scissor-like movement; it seems that these dinosaurs invented their own way of chewing."
[29] Although the upper-jaw teeth hinged outward when the hadrosaur ate, Purnell said it was likely the dinosaur could still chew with its mouth closed. While the outward flexure of the upper jaws might have been visible, Purnell said the chewing was likely concealed by the hadrosaur's cheeks and probably looked "quite subtle". [31] The study also made conclusions about what hadrosaurids ate, although Purnell cautioned the conclusions about the hadrosaur's diet were "a little less secure than the very good evidence we have for the motions of the teeth relative to each other." [8] The scratches found on each individual tooth were so uniform that measuring an area of just one square millimeter was enough to sample the whole jaw. The team concluded the evenness of the scratches suggested the hadrosaur used the same series of jaw motions over and over again. As a result, the study determined that the hadrosaur diet was probably made of leaves and lacked bulkier items such as twigs or stems, which might have required a different chewing method and created different wear patterns. [31] The lack of pit marks on the teeth also upheld these conclusions, and suggested the hadrosaurs likely grazed on low-lying vegetation, rather than browsing on higher-growing vegetation with twigs. [31] The scratches also indicated the hadrosaur's food contained either small particles of grit, which is normal for vegetation cropped close to the ground, or microscopic granules of silica, which are common in grass. [8] Grasses had evolved by the Late Cretaceous period, but were not particularly common, so the study concluded they probably did not form a major component of the hadrosaur's diet. Instead, the researchers believed horsetails, a common plant at the time with the above characteristics, were probably an important food for the dinosaur.
[8] [29] The results of the study were published online on June 30, 2009, in the Proceedings of the National Academy of Sciences, the official journal of the United States National Academy of Sciences. The study was published under the title "Quantitative analysis of dental microwear in hadrosaurid dinosaurs, and the implications for hypotheses of jaw mechanics and feeding". [29] It was the first quantitative analysis of tooth microwear in dinosaurs. [32] Purnell said the technique employed in the study was as important as the findings themselves, and that the study proved analyzing microscopic scratch marks on teeth can provide reliable information about an animal's diet and chewing mechanism. [29] Purnell said this method could be used in other areas of scientific research, including the dietary habits of other long-vanished species such as dinosaurs, extinct groups of fish or very early mammals. [8] Purnell said the findings were further significant not only for the basic understanding of how hadrosaurids ate, but also because the lack of such understanding represented a "big gap in our knowledge" of the ecosystem of the Late Cretaceous. Because hadrosaurs were the dominant terrestrial herbivores of that time, they played a major role in structuring the ecosystem of the Late Cretaceous period. Purnell said, "The more we understand the ecosystems of the past, and how they were affected by global events like climate change, the better we can understand how changes now are going to pan out in the future." [8] Lawrence Witmer, a paleontologist with Ohio University College of Osteopathic Medicine in Athens, called the study "One of the best microwear papers I've seen", although he said he was not yet convinced the hadrosaurid upper jaw could flex.
[24] The hypothesis that hadrosaurs were likely grazers rather than browsers appears to contradict previous findings of preserved stomach contents in the fossilized guts of hadrosaurs in earlier studies. [8] In response to such findings, Purnell said preserved stomach contents are questionable because they do not necessarily represent the usual diet of the animal. [33] Alan Boyle, a journalist and MSNBC science editor who reported on the team's findings, said the apparent contradictions between Williams et al.'s study and previous stomach content findings are subject to debate, but do not necessarily render Williams et al.'s study irrelevant or incorrect. Specifically, Boyle said, "the claims about grazing vs. browsing are certainly not conclusive (but) the researcher's surmise is that they were more likely to graze". [33] Williams et al.'s hypothesis of hadrosaurids as grazers who ate vegetation close to the ground, rather than browsers of higher-growing leaves and twigs, would also contradict the portrayal of hadrosaurs in Jurassic Park, the 1990 science fiction novel by Michael Crichton. [8]
https://en.wikipedia.org/wiki/Hadrosaur_diet
In integral geometry (otherwise called geometric probability theory), Hadwiger's theorem characterises the valuations on convex bodies in R^n. It was proved by Hugo Hadwiger. Let K^n be the collection of all compact convex sets in R^n. A valuation is a function v : K^n → R such that v(∅) = 0 and, for every S, T ∈ K^n satisfying S ∪ T ∈ K^n, v(S) + v(T) = v(S ∩ T) + v(S ∪ T). A valuation is called continuous if it is continuous with respect to the Hausdorff metric. A valuation is called invariant under rigid motions if v(φ(S)) = v(S) whenever S ∈ K^n and φ is either a translation or a rotation of R^n. The quermassintegrals W_j : K^n → R are defined via Steiner's formula Vol_n(K + tB) = Σ_{j=0}^{n} (n choose j) W_j(K) t^j, where B is the Euclidean ball. For example, W_0 is the volume, W_1 is proportional to the surface measure, W_{n−1} is proportional to the mean width, and W_n is the constant Vol_n(B). W_j is a valuation which is homogeneous of degree n − j, that is,
W_j(tK) = t^{n−j} W_j(K) for t ≥ 0. Hadwiger's theorem states that any continuous valuation v on K^n that is invariant under rigid motions can be represented as v(S) = Σ_{j=0}^{n} c_j W_j(S). As a corollary, any continuous valuation v on K^n that is invariant under rigid motions and homogeneous of degree j is a multiple of W_{n−j}. An account and a proof of Hadwiger's theorem may be found in An elementary and self-contained proof was given by Beifang Chen in
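As a concrete instance of Steiner's formula, take n = 2, where W_0 is the area, W_1 is half the perimeter, and W_2 = π = Vol_2(B). The sketch below checks the formula against direct geometry for a rectangle; the function names are illustrative:

```python
import math

def steiner_area(area, perimeter, t):
    # Steiner's formula in the plane (n = 2):
    #   Vol_2(K + tB) = sum_{j=0}^{2} C(2, j) * W_j(K) * t^j
    # with W_0 = area, W_1 = perimeter / 2, W_2 = pi.
    W0, W1, W2 = area, perimeter / 2, math.pi
    return W0 + 2 * W1 * t + W2 * t ** 2

def rect_dilated_area(a, b, t):
    # Direct geometry for an a-by-b rectangle grown by a disk of radius t:
    # the rectangle itself, four edge strips, and four quarter-disk corners.
    return a * b + 2 * (a + b) * t + math.pi * t ** 2

# The two computations agree for a 2-by-3 rectangle and any t >= 0:
for t in (0.0, 0.5, 1.0, 2.5):
    assert math.isclose(steiner_area(6.0, 10.0, t), rect_dilated_area(2.0, 3.0, t))
print("Steiner formula verified for a 2x3 rectangle")
```

Each W_j here is visibly a valuation homogeneous of degree n − j: area scales as t², perimeter as t, and the constant π not at all.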
https://en.wikipedia.org/wiki/Hadwiger's_theorem
In combinatorial geometry, the Hadwiger conjecture states that any convex body in n-dimensional Euclidean space can be covered by 2^n or fewer smaller bodies homothetic with the original body, and that furthermore, the upper bound of 2^n is necessary if and only if the body is a parallelepiped. There also exists an equivalent formulation in terms of the number of floodlights needed to illuminate the body. The Hadwiger conjecture is named after Hugo Hadwiger, who included it on a list of unsolved problems in 1957; it was, however, previously studied by Levi (1955) and, independently, by Gohberg & Markus (1960). Additionally, there is a different Hadwiger conjecture concerning graph coloring, and in some sources the geometric Hadwiger conjecture is also called the Levi–Hadwiger conjecture or the Hadwiger–Levi covering problem. The conjecture remains unsolved even in three dimensions, though the two-dimensional case was resolved by Levi (1955). Formally, the Hadwiger conjecture is: If K is any bounded convex set in the n-dimensional Euclidean space R^n, then there exists a set of 2^n scalars s_i and a set of 2^n translation vectors v_i such that all s_i lie in the range 0 < s_i < 1, and K ⊆ ⋃_{i=1}^{2^n} (s_i K + v_i). Furthermore, the upper bound is necessary if and only if K is a parallelepiped, in which case all 2^n of the scalars may be chosen to be equal to 1/2. As shown by Boltyansky, the problem is equivalent to one of illumination: how many floodlights must be placed outside of an opaque convex body in order to completely illuminate its exterior? For the purposes of this problem, a body is only considered to be illuminated if for each point of the boundary of the body, there is at least one floodlight that is separated from the body by all of the tangent planes intersecting the body at this point; thus, although the faces of a cube may be lit by only two floodlights, the planes tangent to its vertices and edges cause it to need many more lights in order for it to be fully illuminated.
For any convex body, the number of floodlights needed to completely illuminate it turns out to equal the number of smaller copies of the body that are needed to cover it. [ 1 ] As shown in the illustration, a triangle may be covered by three smaller copies of itself, and more generally in any dimension a simplex may be covered by n + 1 copies of itself, scaled by a factor of n /( n + 1). However, covering a square by smaller squares (with sides parallel to the original) requires four smaller squares, as each one can cover only one of the larger square's four corners. In higher dimensions, covering a hypercube or more generally a parallelepiped by smaller homothetic copies of the same shape requires a separate copy for each of the vertices of the original hypercube or parallelepiped; because these shapes have 2^n vertices, 2^n smaller copies are necessary. This number is also sufficient: a cube or parallelepiped may be covered by 2^n copies, scaled by a factor of 1/2. Hadwiger's conjecture is that parallelepipeds are the worst case for this problem, and that any other convex body may be covered by fewer than 2^n smaller copies of itself. [ 1 ] The two-dimensional case was settled by Levi (1955) : every two-dimensional bounded convex set may be covered with four smaller copies of itself, with the fourth copy needed only in the case of parallelograms. However, the conjecture remains open in higher dimensions except for some special cases. The best known asymptotic upper bound on the number of smaller copies needed to cover a given body is 4 n e − c n {\displaystyle 4^{n}e^{-c{\sqrt {n}}}} , [ 2 ] where c {\displaystyle c} is a positive constant. For small n {\displaystyle n} the upper bound of ( n + 1 ) n n − 1 − ( n − 1 ) ( n − 2 ) n − 1 {\displaystyle (n+1)n^{n-1}-(n-1)(n-2)^{n-1}} established by Lassak (1988) is better than the asymptotic one. In three dimensions it is known that 16 copies always suffice, but this is still far from the conjectured bound of 8 copies. 
[ 1 ] The conjecture is known to hold for certain special classes of convex bodies, including, in dimension three, centrally symmetric polyhedra and bodies of constant width . [ 1 ] The number of copies needed to cover any zonotope (other than a parallelepiped) is at most ( 3 / 4 ) 2 n {\displaystyle (3/4)2^{n}} , while for bodies with a smooth surface (that is, having a single tangent plane per boundary point), at most n + 1 {\displaystyle n+1} smaller copies are needed to cover the body, as Levi already proved. [ 1 ]
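The cube case described above is small enough to check numerically. The following sketch (an illustration of my own, not taken from the conjecture's literature) verifies that 2^n half-scale translates cover the unit cube, and that each of those translates, even at a larger scale factor, contains at most one of the cube's vertices:

```python
import itertools
import random

def in_copy(p, s, v):
    # p lies in the homothet s*K + v of the unit cube K = [0,1]^n
    return all(vi <= x <= vi + s for x, vi in zip(p, v))

n = 3
# one half-scale copy per vertex of the cube: 2^n translates
copies = [tuple(0.5 * c for c in corner)
          for corner in itertools.product((0, 1), repeat=n)]

random.seed(0)
points = [tuple(random.random() for _ in range(n)) for _ in range(10_000)]
covered = all(any(in_copy(p, 0.5, v) for v in copies) for p in points)

# necessity: an axis-aligned homothet with scale s < 1 contains at most
# one vertex of the cube, since two vertices differ by 1 in some
# coordinate while the copy has side length s < 1
corners = list(itertools.product((0, 1), repeat=n))
max_corners_in_one_copy = max(
    sum(in_copy(c, 0.9, v) for c in corners) for v in copies)

print(covered, max_corners_in_one_copy)  # → True 1
```

The same coordinate argument works for any parallelepiped after an affine change of coordinates, which is why 2^n copies are needed exactly for that family.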
https://en.wikipedia.org/wiki/Hadwiger_conjecture_(combinatorial_geometry)
In geometric graph theory , the Hadwiger–Nelson problem , named after Hugo Hadwiger and Edward Nelson , asks for the minimum number of colors required to color the plane such that no two points at distance 1 from each other have the same color. The answer is unknown, but has been narrowed down to one of the numbers 5, 6 or 7. The correct value may depend on the choice of axioms for set theory . [ 1 ] The question can be phrased in graph theoretic terms as follows. Let G be the unit distance graph of the plane: an infinite graph with all points of the plane as vertices and with an edge between two vertices if and only if the distance between the two points is 1. The Hadwiger–Nelson problem is to find the chromatic number of G . As a consequence, the problem is often called "finding the chromatic number of the plane". By the de Bruijn–Erdős theorem , a result of de Bruijn & Erdős (1951) , the problem is equivalent (under the assumption of the axiom of choice ) to that of finding the largest possible chromatic number of a finite unit distance graph. According to Jensen & Toft (1995) , the problem was first formulated by Nelson in 1950, and first published by Gardner (1960) . Hadwiger (1945) had earlier published a related result, showing that any cover of the plane by five congruent closed sets contains a unit distance in one of the sets, and he also mentioned the problem in a later paper ( Hadwiger 1961 ). Soifer (2008) discusses the problem and its history extensively. One application of the problem connects it to the Beckman–Quarles theorem , according to which any mapping of the Euclidean plane (or any higher dimensional space) to itself that preserves unit distances must be an isometry , preserving all distances. [ 2 ] Finite colorings of these spaces can be used to construct mappings from them to higher-dimensional spaces that preserve distances but are not isometries. 
For instance, the Euclidean plane can be mapped to a six-dimensional space by coloring it with seven colors so that no two points at distance one have the same color, and then mapping the points by their colors to the seven vertices of a six-dimensional regular simplex with unit-length edges. This maps any two points at unit distance to distinct colors, and from there to distinct vertices of the simplex, at unit distance apart from each other. However, it maps all other distances to zero or one, so it is not an isometry. If the number of colors needed to color the plane could be reduced from seven to a lower number, the same reduction would apply to the dimension of the target space in this construction. [ 3 ] The fact that the chromatic number of the plane must be at least four follows from the existence of a seven-vertex unit distance graph with chromatic number four, named the Moser spindle after its discoverers, the brothers William and Leo Moser , who described it in 1961. This graph consists of two unit equilateral triangles joined at a common vertex, x . Each of these triangles is joined along another edge to another equilateral triangle; the vertices y and z of these joined triangles are at unit distance from each other. If the plane could be three-colored, the coloring within the triangles would force y and z to both have the same color as x , but then, since y and z are at unit distance from each other, we would not have a proper coloring of the unit distance graph of the plane. Therefore, at least four colors are needed to color this graph and the plane containing it. An alternative lower bound in the form of a ten-vertex four-chromatic unit distance graph, the Golomb graph , was discovered at around the same time by Solomon W. Golomb . [ 4 ] The lower bound was raised to five in 2018, when computer scientist and biogerontologist Aubrey de Grey found a 1581-vertex, non-4-colourable unit-distance graph. The proof is computer-assisted. 
[ 5 ] Mathematician Gil Kalai and computer scientist Scott Aaronson posted discussion of de Grey's finding, with Aaronson reporting independent verifications of de Grey's result using SAT solvers . Kalai linked additional posts by Jordan Ellenberg and Noam Elkies , with Elkies and (separately) de Grey proposing a Polymath project to find non-4-colorable unit distance graphs with fewer vertices than the one in de Grey's construction. [ 6 ] As of 2021, the smallest known unit distance graph with chromatic number 5 has 509 vertices. [ 7 ] The page of the Polymath project, Polymath (2018) , contains further research, media citations and verification data. The upper bound of seven on the chromatic number follows from the existence of a tessellation of the plane by regular hexagons, with diameter slightly less than one, that can be assigned seven colors in a repeating pattern to form a 7-coloring of the plane. According to Soifer (2008) , this upper bound was first observed by John R. Isbell . The problem can easily be extended to higher dimensions. Finding the chromatic number of 3-space is a particularly interesting problem. As with the version on the plane, the answer is not known, but has been shown to be at least 6 and at most 15. [ 8 ] In the n -dimensional case of the problem, an easy upper bound on the number of required colors, found from tiling by n -dimensional cubes, is ⌊ 2 + √ n ⌋ n {\displaystyle \lfloor 2+{\sqrt {n}}\rfloor ^{n}} . A lower bound from simplexes is n + 1 {\displaystyle n+1} . For n > 1 {\displaystyle n>1} , a lower bound of n + 2 {\displaystyle n+2} is available using a generalization of the Moser spindle: a pair of the objects (each two simplexes glued together on a facet) which are joined on one side at a point and on the other side along a line. An exponential lower bound was proved by Frankl and Wilson in 1981. 
[ 9 ] One can also consider colorings of the plane in which the sets of points of each color are restricted to sets of some particular type. [ 10 ] Such restrictions may cause the required number of colors to increase, as they prevent certain colorings from being considered acceptable. For instance, if a coloring of the plane consists of regions bounded by Jordan curves , then at least six colors are required. [ 11 ]
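The Moser spindle argument for the lower bound of four is small enough to verify exhaustively. The sketch below (the vertex numbering is my own) enumerates all assignments of k colors to the spindle's seven vertices and confirms the graph is 4-chromatic:

```python
from itertools import product

# Moser spindle: two rhombi (each a pair of unit equilateral triangles)
# sharing vertex 0, with the far tips 3 and 6 at unit distance
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6),
         (3, 6)]

def colorable(k):
    # brute force over all k^7 color assignments to the 7 vertices
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=7))

print(colorable(3), colorable(4))  # → False True
```

Since the spindle can be drawn in the plane with every edge of length one, any proper coloring of the plane must use at least four colors.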
https://en.wikipedia.org/wiki/Hadwiger–Nelson_problem
Blood residue is the wet and dry remnants of blood , as well as the discoloration of surfaces on which blood has been shed. In forensic science , blood residue can help investigators identify weapons, reconstruct a criminal action, and link suspects to the crime. [ 1 ] In archaeology, it can be used to detect the origin of blood stains on buried objects. Blood constitutes about eight percent of a person's weight (normally about five liters), and it circulates near the surface of the skin. Almost all trauma to the body, therefore, results in the shedding of blood. Its red color makes it readily apparent at crime scenes, and its residues are very difficult to remove completely. Blood residue has even been recovered from 100,000-year-old stone tools. [ 1 ] Laboratory testing can reveal whether a substance is indeed blood, whether the blood is of animal or human origin, and the blood group to which it belongs. This allows investigators to include or exclude persons as perpetrators or victims. The antigens that allow blood group testing, however, deteriorate with age or improper storage. [ 2 ] The DNA contained in blood, on the other hand, is less subject to deterioration, and allows near-certain matching of blood residue to individuals with DNA profiling techniques. [ 2 ] Through bloodstain pattern analysis , information about events can also be gained from the spatial distribution of bloodstains. Freshly dried bloodstains are a glossy reddish-brown in color. Under the influence of sunlight, the weather or removal attempts, the color eventually disappears and the stain turns grey. The surface on which it is found may also influence the stain's color. [ 1 ] Crime scenes are normally searched carefully for blood residue. Flashlights held at an angle to the surfaces under examination assist in this, [ 1 ] as do luminol sprays, which can detect even trace amounts of blood. 
Presumptive tests exist with which blood can be distinguished from other reddish stains, such as ketchup or rust, found at the scene. [ 1 ] The search includes areas beyond the immediate crime scene where blood might have been wiped off or bloody fingerprints left, such as towels or doorknobs. At outdoor crime scenes, bloodstains may be recovered from the ground or from plant surfaces. [ 3 ] The standard documentation of blood residue includes photographs and descriptions of the form, color, size and position of each stain found. Overall photographs and sketches are also produced to show the relationship of the blood residue to other elements of the scene and to enable pattern analysis. [ 3 ] Recently, 3D imaging techniques have been tried for documenting and investigating bloodstains. [ 4 ] To collect samples for analysis, wet blood is collected with a syringe and stored in a tube with anticoagulant , or collected with absorbent fabric that is allowed to air-dry. Dried blood is scraped off with a blade, or collected with a moistened cotton-tipped applicator, a gel lifter or fingerprint tape . [ 2 ] Bloodstained clothing and other items are generally wrapped in paper and shipped whole to the laboratory. [ 2 ] To prevent deterioration, blood residue samples are stored under refrigeration and, in the case of stains, air-dried. [ 3 ] Analysis of blood residue is also an important technique in archaeology , [ 5 ] [ 6 ] where this field is sometimes referred to as haemotaphonomy (from the Greek haima for blood , taphos for burial , and nomos for law ). It particularly involves studies of the morphology of blood cells in bloodstains deposited on different media. Haemotaphonomy has also been used to study blood residues on fragments of medieval manuscripts and on the Shroud of Turin. [ 7 ] The term haemotaphonomy was proposed in 1992. It was inspired by the word " taphonomy ", introduced in palaeontology in 1940 by Ivan Yefremov . 
The focus of haemotaphonomy is the morphology of blood cells when blood is in the form of a stain . Therefore, its subjects of study are any specimens stained with blood. The study method of haemotaphonomy is the analysis of images obtained through a scanning electron microscope (SEM). However, confocal microscopy is a practical alternative to an SEM when a very high level of detail of the bloodstain surface is not required.
https://en.wikipedia.org/wiki/Haemotaphonomy
The Hafner–Sarnak–McCurley constant is a mathematical constant representing the probability that the determinants of two randomly chosen square integer matrices will be relatively prime . The probability depends on the matrix size, n , in accordance with the formula D ( n ) = ∏ k = 1 ∞ ( 1 − ( 1 − ∏ j = 1 n ( 1 − p k − j ) ) 2 ) {\displaystyle D(n)=\prod _{k=1}^{\infty }\left(1-\left(1-\prod _{j=1}^{n}\left(1-p_{k}^{-j}\right)\right)^{2}\right)} , where p k is the k th prime number. The constant is the limit of this expression as n approaches infinity. Its value is roughly 0.3532363719... (sequence A085849 in the OEIS ). This number theory -related article is a stub . You can help Wikipedia by expanding it .
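The limiting value can be approximated directly from the product over primes. The sketch below (truncation limits are my own choices, not from the source) lets the inner product over j run until the terms are negligible, which is effectively the n → ∞ limit:

```python
def primes_up_to(n):
    # Sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def hsm_constant(prime_limit=100_000):
    # product over primes p of 1 - (1 - prod_j (1 - p**-j))**2,
    # with the inner product truncated once p**-j is negligible
    result = 1.0
    for p in primes_up_to(prime_limit):
        inner, term, q = 1.0, 1.0 / p, 1.0 / p
        while term > 1e-18:
            inner *= 1.0 - term
            term *= q
        result *= 1.0 - (1.0 - inner) ** 2
    return result

print(hsm_constant())  # ≈ 0.35324
```

Each omitted prime factor is of size 1 − O(1/p²), so truncating the outer product at 10^5 already agrees with the quoted value 0.3532363719... to several decimal places.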
https://en.wikipedia.org/wiki/Hafner–Sarnak–McCurley_constant
In mathematics , the hafnian is a scalar function of a symmetric matrix that generalizes the permanent . The hafnian was named by Eduardo R. Caianiello "to mark the fruitful period of stay in Copenhagen (Hafnia in Latin)." [ 1 ] The hafnian of a 2 n × 2 n {\displaystyle 2n\times 2n} symmetric matrix A {\displaystyle A} is defined as haf ⁡ ( A ) = ∑ ρ ∈ P 2 n 2 ∏ { i , j } ∈ ρ A i , j {\displaystyle \operatorname {haf} (A)=\sum _{\rho \in P_{2n}^{2}}\prod _{\{i,j\}\in \rho }A_{i,j}} , where P 2 n 2 {\displaystyle P_{2n}^{2}} is the set of all partitions of the set { 1 , 2 , … , 2 n } {\displaystyle \{1,2,\dots ,2n\}} into subsets of size 2 {\displaystyle 2} . [ 2 ] [ 3 ] This definition is similar to that of the Pfaffian , but differs in that the signatures of the permutations are not taken into account. Thus the relationship of the hafnian to the Pfaffian is the same as relationship of the permanent to the determinant . [ 4 ] The hafnian may also be defined as haf ⁡ ( A ) = 1 n ! 2 n ∑ σ ∈ S 2 n ∏ i = 1 n A σ ( 2 i − 1 ) , σ ( 2 i ) {\displaystyle \operatorname {haf} (A)={\frac {1}{n!\,2^{n}}}\sum _{\sigma \in S_{2n}}\prod _{i=1}^{n}A_{\sigma (2i-1),\sigma (2i)}} , where S 2 n {\displaystyle S_{2n}} is the symmetric group on { 1 , 2 , . . . , 2 n } {\displaystyle \{1,2,...,2n\}} . [ 5 ] The two definitions are equivalent because if σ ∈ S 2 n {\displaystyle \sigma \in S_{2n}} , then { { σ ( 2 i − 1 ) , σ ( 2 i ) } : i ∈ { 1 , . . . , n } } {\displaystyle \{\{\sigma (2i-1),\sigma (2i)\}:i\in \{1,...,n\}\}} is a partition of { 1 , 2 , … , 2 n } {\displaystyle \{1,2,\dots ,2n\}} into subsets of size 2, and as σ {\displaystyle \sigma } ranges over S 2 n {\displaystyle S_{2n}} , each such partition is counted exactly n ! 2 n {\displaystyle n!2^{n}} times. Note that this argument relies on the symmetry of A {\displaystyle A} , without which the original definition is not well-defined. The hafnian of an adjacency matrix of a graph is the number of perfect matchings (also known as 1-factors) in the graph. This is because a partition of { 1 , 2 , … , 2 n } {\displaystyle \{1,2,\dots ,2n\}} into subsets of size 2 can also be thought of as a perfect matching in the complete graph K 2 n {\displaystyle K_{2n}} . 
The hafnian can also be thought of as a generalization of the permanent , since the permanent can be expressed as per ⁡ ( C ) = haf ⁡ ( 0 C C T 0 ) {\displaystyle \operatorname {per} (C)=\operatorname {haf} {\begin{pmatrix}0&C\\C^{\mathsf {T}}&0\end{pmatrix}}} . Just as the hafnian counts the number of perfect matchings in a graph given its adjacency matrix, the permanent counts the number of matchings in a bipartite graph given its biadjacency matrix . The hafnian is also related to moments of multivariate Gaussian distributions . By Wick's probability theorem , the hafnian of a real 2 n × 2 n {\displaystyle 2n\times 2n} symmetric matrix A {\displaystyle A} may be expressed as haf ⁡ ( A ) = E ⁡ [ X 1 X 2 ⋯ X 2 n ] {\displaystyle \operatorname {haf} (A)=\operatorname {E} \left[X_{1}X_{2}\cdots X_{2n}\right]} , where ( X 1 , … , X 2 n ) {\displaystyle (X_{1},\ldots ,X_{2n})} is a zero-mean Gaussian random vector with covariance matrix A + λ I {\displaystyle A+\lambda I} and λ {\displaystyle \lambda } is any number large enough to make A + λ I {\displaystyle A+\lambda I} positive semi-definite . Note that the hafnian does not depend on the diagonal entries of the matrix , and the expectation on the right-hand side does not depend on λ {\displaystyle \lambda } . Let S = ( A C C T B ) = S T {\displaystyle S={\begin{pmatrix}A&C\\C^{\mathsf {T}}&B\end{pmatrix}}=S^{\mathsf {T}}} be an arbitrary complex symmetric 2 m × 2 m {\displaystyle 2m\times 2m} matrix composed of four m × m {\displaystyle m\times m} blocks A = A T {\displaystyle A=A^{\mathsf {T}}} , B = B T {\displaystyle B=B^{\mathsf {T}}} , C {\displaystyle C} and C T {\displaystyle C^{\mathsf {T}}} . Let z 1 , … , z m {\displaystyle z_{1},\ldots ,z_{m}} be a set of m {\displaystyle m} independent variables, and let Z = ( 0 diag ( z 1 , z 2 , … , z m ) diag ( z 1 , z 2 , … , z m ) 0 ) {\displaystyle Z={\begin{pmatrix}0&{\textrm {diag}}(z_{1},z_{2},\ldots ,z_{m})\\{\textrm {diag}}(z_{1},z_{2},\ldots ,z_{m})&0\end{pmatrix}}} be an antidiagonal block matrix composed of entries z j {\displaystyle z_{j}} (each one is presented twice, one time per nonzero block). Let I {\displaystyle I} denote the identity matrix . 
Then the following identity holds: [ 4 ] 1 det ( I − Z S ) = ∑ { n k } haf ⁡ S ~ ( { n k } ) ∏ k = 1 m z k n k n k ! {\displaystyle {\frac {1}{\sqrt {\det(I-ZS)}}}=\sum _{\{n_{k}\}}\operatorname {haf} {\tilde {S}}(\{n_{k}\})\prod _{k=1}^{m}{\frac {z_{k}^{n_{k}}}{n_{k}!}}} , where the right-hand side involves hafnians of ( 2 ∑ k n k ) × ( 2 ∑ k n k ) {\displaystyle {\Big (}2\sum _{k}n_{k}{\Big )}\times {\Big (}2\sum _{k}n_{k}{\Big )}} matrices S ~ ( { n k } ) = ( A ~ ( { n k } ) C ~ ( { n k } ) C ~ T ( { n k } ) B ~ ( { n k } ) ) {\displaystyle {\tilde {S}}(\{n_{k}\})={\begin{pmatrix}{\tilde {A}}(\{n_{k}\})&{\tilde {C}}(\{n_{k}\})\\{\tilde {C}}^{\mathsf {T}}(\{n_{k}\})&{\tilde {B}}(\{n_{k}\})\\\end{pmatrix}}} , whose blocks A ~ ( { n k } ) {\displaystyle {\tilde {A}}(\{n_{k}\})} , B ~ ( { n k } ) {\displaystyle {\tilde {B}}(\{n_{k}\})} , C ~ ( { n k } ) {\displaystyle {\tilde {C}}(\{n_{k}\})} and C ~ T ( { n k } ) {\displaystyle {\tilde {C}}^{\mathsf {T}}(\{n_{k}\})} are built from the blocks A {\displaystyle A} , B {\displaystyle B} , C {\displaystyle C} and C T {\displaystyle C^{\mathsf {T}}} respectively in the way introduced in MacMahon's Master theorem . In particular, A ~ ( { n k } ) {\displaystyle {\tilde {A}}(\{n_{k}\})} is a matrix built by replacing each entry A k , t {\displaystyle A_{k,t}} in the matrix A {\displaystyle A} with an n k × n t {\displaystyle n_{k}\times n_{t}} block filled with A k , t {\displaystyle A_{k,t}} ; the same scheme is applied to B {\displaystyle B} , C {\displaystyle C} and C T {\displaystyle C^{\mathsf {T}}} . The sum ∑ { n k } {\displaystyle \sum _{\{n_{k}\}}} runs over all m {\displaystyle m} -tuples of non-negative integers , and it is assumed that haf ⁡ S ~ ( { n k = 0 | k = 1 … m } ) = 1 {\displaystyle \operatorname {haf} {\tilde {S}}(\{n_{k}=0|k=1\ldots m\})=1} . The identity can be proved [ 4 ] by means of multivariate Gaussian integrals and Wick's probability theorem . 
The expression in the left-hand side, 1 / det ( I − Z S ) {\displaystyle 1{\Big /}{\sqrt {\det {\big (}I-ZS{\big )}}}{\Big .}} , is in fact a multivariate generating function for a series of hafnians, and the right-hand side constitutes its multivariable Taylor expansion in the vicinity of the point z 1 = … = z m = 0. {\displaystyle z_{1}=\ldots =z_{m}=0.} As a consequence of the given relation, the hafnian of a symmetric 2 m × 2 m {\displaystyle 2m\times 2m} matrix S {\displaystyle S} can be represented as the following mixed derivative of the order m {\displaystyle m} : haf ⁡ S = ∂ m ∂ z 1 ⋯ ∂ z m 1 det ( I − Z S ) | z 1 = ⋯ = z m = 0 {\displaystyle \operatorname {haf} S=\left.{\frac {\partial ^{m}}{\partial z_{1}\cdots \partial z_{m}}}{\frac {1}{\sqrt {\det(I-ZS)}}}\right|_{z_{1}=\cdots =z_{m}=0}} . The hafnian generating function identity written above can be considered as a hafnian generalization of MacMahon's Master theorem , which introduces the generating function for matrix permanents and has the following form in terms of the introduced notation: 1 det ( I − diag ( z 1 , … , z m ) C ) = ∑ { n k } per ⁡ C ~ ( { n k } ) ∏ k = 1 m z k n k n k ! {\displaystyle {\frac {1}{\det(I-\operatorname {diag} (z_{1},\ldots ,z_{m})\,C)}}=\sum _{\{n_{k}\}}\operatorname {per} {\tilde {C}}(\{n_{k}\})\prod _{k=1}^{m}{\frac {z_{k}^{n_{k}}}{n_{k}!}}} . Note that MacMahon's Master theorem comes as a simple corollary from the hafnian generating function identity due to the relation per ⁡ ( C ) = haf ⁡ ( 0 C C T 0 ) {\displaystyle \operatorname {per} (C)=\operatorname {haf} {\begin{pmatrix}0&C\\C^{\mathsf {T}}&0\end{pmatrix}}} . If C {\displaystyle C} is a Hermitian positive semi-definite n × n {\displaystyle n\times n} matrix and B {\displaystyle B} is a complex symmetric n × n {\displaystyle n\times n} matrix, then haf ⁡ ( B C C ¯ B ¯ ) ≥ 0 {\displaystyle \operatorname {haf} {\begin{pmatrix}B&C\\{\overline {C}}&{\overline {B}}\end{pmatrix}}\geq 0} , where C ¯ {\displaystyle {\overline {C}}} denotes the complex conjugate of C {\displaystyle C} . 
[ 6 ] A simple way to see this when ( C B B ¯ C ¯ ) {\displaystyle {\begin{pmatrix}C&B\\{\overline {B}}&{\overline {C}}\\\end{pmatrix}}} is positive semi-definite is to observe that, by Wick's probability theorem , haf ⁡ ( B C C ¯ B ¯ ) = E [ | X 1 … X n | 2 ] {\displaystyle \operatorname {haf} {\begin{pmatrix}B&C\\{\overline {C}}&{\overline {B}}\\\end{pmatrix}}=\mathbb {E} \left[\left|X_{1}\dots X_{n}\right|^{2}\right]} when ( X 1 , … , X n ) {\displaystyle \left(X_{1},\dots ,X_{n}\right)} is a complex normal random vector with mean 0 {\displaystyle 0} , covariance matrix C {\displaystyle C} and relation matrix B {\displaystyle B} . This result is a generalization of the fact that the permanent of a Hermitian positive semi-definite matrix is non-negative. This corresponds to the special case B = 0 {\displaystyle B=0} using the relation per ⁡ ( C ) = haf ⁡ ( 0 C C T 0 ) {\displaystyle \operatorname {per} (C)=\operatorname {haf} {\begin{pmatrix}0&C\\C^{\mathsf {T}}&0\end{pmatrix}}} . The loop hafnian of an m × m {\displaystyle m\times m} symmetric matrix is defined as lhaf ⁡ ( A ) = ∑ M ∈ M ∏ ( i , j ) ∈ M A i , j {\displaystyle \operatorname {lhaf} (A)=\sum _{M\in {\mathcal {M}}}\prod _{(i,j)\in M}A_{i,j}} , where M {\displaystyle {\mathcal {M}}} is the set of all perfect matchings of the complete graph on m {\displaystyle m} vertices with loops , i.e., the set of all ways to partition the set { 1 , 2 , … , m } {\displaystyle \{1,2,\dots ,m\}} into pairs or singletons (treating a singleton ( i ) {\displaystyle (i)} as the pair ( i , i ) {\displaystyle (i,i)} ). [ 7 ] Thus the loop hafnian depends on the diagonal entries of the matrix, unlike the hafnian. [ 7 ] Furthermore, the loop hafnian can be non-zero when m {\displaystyle m} is odd. The loop hafnian can be used to count the total number of matchings in a graph (perfect or non-perfect), also known as its Hosoya index . Specifically, if one takes the adjacency matrix of a graph and sets the diagonal elements to 1, then the loop hafnian of the resulting matrix is equal to the total number of matchings in the graph. 
[ 7 ] The loop hafnian can also be thought of as incorporating a mean into the interpretation of the hafnian as a multivariate Gaussian moment. Specifically, by Wick's probability theorem again, the loop hafnian of a real m × m {\displaystyle m\times m} symmetric matrix A {\displaystyle A} can be expressed as lhaf ⁡ ( A ) = E ⁡ [ X 1 X 2 ⋯ X m ] {\displaystyle \operatorname {lhaf} (A)=\operatorname {E} \left[X_{1}X_{2}\cdots X_{m}\right]} , where ( X 1 , … , X m ) {\displaystyle (X_{1},\ldots ,X_{m})} is a Gaussian random vector with mean ( A 11 , … , A m m ) {\displaystyle (A_{11},\ldots ,A_{mm})} and covariance matrix A + λ I {\displaystyle A+\lambda I} , and λ {\displaystyle \lambda } is any number large enough to make A + λ I {\displaystyle A+\lambda I} positive semi-definite. Computing the hafnian of a (0,1)-matrix is #P-complete , because computing the permanent of a (0,1)-matrix is #P-complete . [ 4 ] [ 7 ] The hafnian of a 2 n × 2 n {\displaystyle 2n\times 2n} matrix can be computed in O ( n 3 2 n ) {\displaystyle O(n^{3}2^{n})} time. [ 7 ] If the entries of a matrix are non-negative, then its hafnian can be approximated to within an exponential factor in polynomial time. [ 8 ]
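The pairing definition translates directly into a short recursion (exponential time; this is an illustrative sketch, not the O(n³2ⁿ) algorithm mentioned above):

```python
def haf(A):
    # hafnian via the partition definition: pair index 0 with each j,
    # then recurse on the matrix with rows/columns 0 and j removed
    m = len(A)
    if m == 0:
        return 1
    if m % 2:
        return 0  # no perfect pairing of an odd number of indices
    total = 0
    for j in range(1, m):
        keep = [k for k in range(m) if k not in (0, j)]
        sub = [[A[r][c] for c in keep] for r in keep]
        total += A[0][j] * haf(sub)
    return total

# hafnian of the adjacency matrix of K4 = number of perfect matchings
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(haf(K4))  # → 3

# permanent via the block relation per(C) = haf([[0, C], [C^T, 0]])
C_block = [[0, 0, 1, 2],
           [0, 0, 3, 4],
           [1, 3, 0, 0],
           [2, 4, 0, 0]]
print(haf(C_block))  # → 10 (= per([[1, 2], [3, 4]]) = 1*4 + 2*3)
```

The K4 result matches the count by hand: the three perfect matchings of four vertices are {01, 23}, {02, 13} and {03, 12}.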
https://en.wikipedia.org/wiki/Hafnian
The hafnium nitrides are the various salts produced from combining hafnium and nitrogen . The two most important are hafnium(III) nitride, HfN, and hafnium(IV) nitride, Hf 3 N 4 . Neither can be prepared from hafnium oxide ; they must instead be prepared from the elemental metal or from a different hafnium nitride salt, since attempted nitridation of the oxide gives an oxynitride instead. [ 1 ] HfN is refractory and generally produced as a thin film coating, [ 2 ] although zone annealing gives the bulk material. [ 3 ] HfN adopts the rock-salt crystal structure . [ 2 ] The surplus hafnium electron delocalizes , so that HfN is a metal , conducting at room temperature and superconducting below 8.8 K (−443.83 °F). Its bright gold color makes it a cheaper alternative to gilding . [ 4 ] The dark red semiconductor Hf 3 N 4 does not form at room temperature, but requires high-pressure, high-temperature synthesis in a diamond anvil cell . At 18 GPa (180,000 atm) and 2,800 K (4,580 °F), it adopts the cubic crystal structure of space group I4̄3d. [ 2 ] At lower pressures, the cubic structure is believed to be metastable , decaying to the orthorhombic structure of zirconium(IV) nitride . [ 4 ] [ 5 ] That structure forms outright at 19 GPa and 2,000 K (3,140 °F), and another metastable tetragonal structure forms at 12 GPa and 1,500 K (2,240 °F). Computational studies suggest that it may catalyze polymerization of nitrogen at very high temperatures, through a catenated anion in HfN 10 . [ 5 ] In systems with limited nitrogen, hafnium also forms Hf 3 N 2 , as well as a solid solution hafnium alloy . [ 6 ]
https://en.wikipedia.org/wiki/Hafnium_nitrides
The Hagedorn temperature , T H , is the temperature in theoretical physics where hadronic matter (i.e. ordinary matter) is no longer stable, and must either "evaporate" or convert into quark matter ; as such, it can be thought of as the " boiling point " of hadronic matter. It was discovered by Rolf Hagedorn . The Hagedorn temperature exists because the amount of energy available is high enough that matter particle ( quark – antiquark ) pairs can be spontaneously pulled from the vacuum. Thus, hypothetically, a system at the Hagedorn temperature can accommodate as much energy as one can put in, because the formed quarks provide new degrees of freedom, and thus the Hagedorn temperature would be an impassable absolute hot. However, if this phase is viewed as quarks instead, it becomes apparent that the matter has transformed into quark matter , which can be further heated. The Hagedorn temperature, T H , is about 150 MeV/ k B or about 1.7 × 10 12 K , [ 1 ] a little above the mass–energy of the lightest hadron, the pion . [ 2 ] Matter at the Hagedorn temperature or above will spew out fireballs of new particles, which can again produce new fireballs, and the ejected particles can then be detected by particle detectors. This quark matter may have been detected in heavy-ion collisions at SPS and LHC in CERN (France and Switzerland) [ 3 ] and at RHIC in Brookhaven National Laboratory (USA). [ 4 ] In string theory , a separate Hagedorn temperature can be defined for strings rather than hadrons. This temperature is extremely high (10 30 K) and thus of mainly theoretical interest. [ 5 ] The Hagedorn temperature was discovered by German physicist Rolf Hagedorn in the 1960s while working at CERN. 
His work on the statistical bootstrap model of hadron production showed that because increases in energy in a system will cause new particles to be produced, an increase of collision energy will increase the entropy of the system rather than the temperature, and "the temperature becomes stuck at a limiting value". [ 6 ] [ 7 ] The Hagedorn temperature is the temperature T H above which the partition sum diverges in a system whose density of states grows exponentially, ρ ( E ) ∼ e E / ( k B T H ) {\displaystyle \rho (E)\sim e^{E/(k_{\text{B}}T_{H})}} : [ 6 ] [ 8 ] Z ( T ) = ∫ ρ ( E ) e − β E d E {\displaystyle Z(T)=\int \rho (E)\,e^{-\beta E}\,\mathrm {d} E} , where β = 1 / k B T {\displaystyle \beta =1/k_{\text{B}}T} , k B {\displaystyle k_{\text{B}}} being the Boltzmann constant . Because of the divergence, people may come to the incorrect conclusion that it is impossible to have temperatures above the Hagedorn temperature, which would make it the absolute hot temperature, because it would require an infinite amount of energy . In equations: T → T H ⟹ E → ∞ . {\displaystyle T\to T_{H}\implies E\to \infty .} This line of reasoning was well known to be false even to Hagedorn. The partition function for creation of hydrogen–antihydrogen pairs diverges even more rapidly, because it gets a finite contribution from energy levels that accumulate at the ionization energy. The states that cause the divergence are spatially big, since the electrons are very far from the protons. The divergence indicates that at a low temperature hydrogen–antihydrogen will not be produced, rather proton/antiproton and electron/antielectron. The Hagedorn temperature is only a maximum temperature in the physically unrealistic case of exponentially many species with energy E and finite size. The concept of exponential growth in the number of states was originally proposed in the context of condensed matter physics . It was incorporated into high-energy physics in the early 1970s by Steven Frautschi and Hagedorn. In hadronic physics, the Hagedorn temperature is the deconfinement temperature. In string theory , it indicates a phase transition: the transition at which very long strings are copiously produced. 
It is controlled by the size of the string tension, which is smaller than the Planck scale by some power of the coupling constant. By adjusting the tension to be small compared to the Planck scale, the Hagedorn transition can occur well below the Planck temperature . Traditional grand unified string models place it at around 10 30 K , two orders of magnitude smaller than the Planck temperature. Such temperatures have not been reached in any experiment and are far beyond the reach of current, or even foreseeable, technology.
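The kelvin figure quoted above for hadronic matter follows from a one-line unit conversion, using the CODATA value of the Boltzmann constant in eV/K:

```python
K_B_EV_PER_K = 8.617333262e-5   # Boltzmann constant, eV/K (CODATA 2018)
T_H_MEV = 150.0                 # approximate Hagedorn temperature, MeV

t_h_kelvin = T_H_MEV * 1e6 / K_B_EV_PER_K
print(f"T_H ≈ {t_h_kelvin:.2e} K")  # → T_H ≈ 1.74e+12 K
```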
https://en.wikipedia.org/wiki/Hagedorn_temperature
In telecommunications , a Hagelbarger code is a convolutional code that enables error bursts to be corrected provided that there are relatively long error-free intervals between the error bursts. In the Hagelbarger code, inserted parity check bits are spread out in time so that an error burst is not likely to affect more than one of the groups in which parity is checked. This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22. This article related to telecommunications is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Hagelbarger_code
Hagemann's ester , ethyl 2-methyl-4-oxo-2-cyclohexenecarboxylate, is an organic compound that was first prepared and described in 1893 by German chemist Carl Hagemann . The compound is used in organic chemistry as a reagent in the synthesis of many natural products including sterols , trisporic acids , and terpenoids . Methylene iodide and two equivalents of ethyl acetoacetate react in the presence of sodium methoxide to form the diethyl ester of 2,4-diacetyl pentane. This precursor is treated with base to induce cyclization . Finally, heat is applied to generate Hagemann's ester. [ 2 ] [ 3 ] Soon after Hagemann, Emil Knoevenagel described a modified procedure to produce the same intermediate diethyl ester of 2,4-diacetyl pentane using formaldehyde and two equivalents of ethyl acetoacetate, which undergo condensation in the presence of a catalytic amount of piperidine . [ 3 ] 2-Methoxy-1,3-butadiene and ethyl 2-butynoate undergo a Diels–Alder reaction to generate a precursor which is hydrolyzed to obtain Hagemann's ester. By varying the substituents on the butynoate starting material, this approach allows for different C2-alkylated Hagemann's ester derivatives to be synthesized. [ 3 ] Methyl vinyl ketone , ethyl acetoacetate, and diethyl-methyl-(3-oxo-butyl)-ammonium iodide react to form a cyclic aldol product. Sodium methoxide is added to generate Hagemann's ester. Methyl vinyl ketone and ethyl acetoacetate undergo aldol cyclization in the presence of catalytic pyrrolidinium acetate, Triton B , or sodium ethoxide to produce Hagemann's ester. [ 3 ] This variant is a type of Robinson annulation . [ 4 ] Hagemann's ester has been used as a key building block in many syntheses . [ 3 ] For example, a key intermediate for the fungal hormone trisporic acid was made by its alkylation [ 5 ] and it has been used to make sterols. 
[ 6 ] Other authors have used it in inverse-electron-demand Diels–Alder reactions leading to sesquiterpene dimers [ 7 ] or in reactions forming simple derivatives. [ 8 ] [ 9 ] [ 10 ]
https://en.wikipedia.org/wiki/Hagemann's_ester
In optics , the Hagen–Rubens relation (or Hagen–Rubens formula ) is a relation between the coefficient of reflection and the conductivity for materials that are good conductors. [ 1 ] The relation states that for solids where the contribution of the dielectric constant to the index of refraction is negligible, the reflection coefficient can be written as (in SI units): [ 2 ] R ( ω ) ≈ 1 − 8 ϵ 0 ω σ {\displaystyle R(\omega )\approx 1-{\sqrt {\frac {8\epsilon _{0}\omega }{\sigma }}}} , where ω {\displaystyle \omega } is the frequency of observation, σ {\displaystyle \sigma } is the conductivity, and ϵ 0 {\displaystyle \epsilon _{0}} is the vacuum permittivity . For metals, this relation holds for frequencies (much) smaller than the Drude relaxation rate , and in this case the otherwise frequency-dependent conductivity σ {\displaystyle \sigma } can be assumed frequency-independent and equal to the dc conductivity. The relation is named after German physicists Ernst Bessel Hagen and Heinrich Rubens who discovered it in 1903. [ 3 ] [ 4 ] This optics -related article is a stub . You can help Wikipedia by expanding it . This scattering –related article is a stub . You can help Wikipedia by expanding it . This spectroscopy -related article is a stub . You can help Wikipedia by expanding it .
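As a worked example (a sketch of my own; the copper conductivity is a standard handbook value, not from this article), the relation predicts near-unity reflectivity for a good conductor at far-infrared frequencies:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
SIGMA_CU = 5.96e7         # dc conductivity of copper, S/m (handbook value)

def hagen_rubens_reflectivity(freq_hz, sigma=SIGMA_CU):
    # R ≈ 1 - sqrt(8 * eps0 * omega / sigma), valid only for omega
    # well below the Drude relaxation rate of the metal
    omega = 2 * math.pi * freq_hz
    return 1.0 - math.sqrt(8.0 * EPS0 * omega / sigma)

print(f"{hagen_rubens_reflectivity(1e12):.4f}")  # → 0.9973
```

At 1 THz the predicted reflectivity of copper is about 99.7%, consistent with metals acting as near-perfect mirrors in the far infrared.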
https://en.wikipedia.org/wiki/Hagen–Rubens_relation
Hager Group is a manufacturer of electrical installations in residential, commercial and industrial buildings, based in Blieskastel , Germany . [ 2 ] The company has been family-run and owned ever since its foundation in 1955. [ 3 ] Hager Group provides products and services ranging from energy distribution and cable management to intelligent building automation and security systems , under the brand Hager. Hager Group also owns the brands Berker, Bocchiotti, Daitem, Diagral, Elcom and E3/DC. [ 4 ] In 2018, Hager Group was the world market leader [ 3 ] in electrical installation systems. [ 5 ] In August 2019, the group was ranked number 128 in the top 500 family-owned businesses in Germany according to the magazine Die Deutsche Wirtschaft. [ 6 ] In 1955, Hager oHG, elektrotechnische Fabrik was founded by brothers Oswald and Hermann Hager, together with their father Peter Hager, in Ensheim in the Saarland region of Germany. [ 7 ] [ 8 ] Since 1945, Saarland had been under the economic control of France and had no access to the German market. However, Hager wanted to gain a foothold in both markets. In 1959, the Hager brothers founded their first foreign subsidiary, Hager Electro S. A., in Obernai , Alsace , in north-eastern France. [ 9 ] In 1966, Hager began the systematic training of its electricians, whose expertise created a culture of customer loyalty that continues to this day. Hager's modular rotary fuse carrier was patented in Germany in 1968 and in France in 1970. At the same time, the first mass-produced distribution board, the Hager-Rapid-System, was launched on the French market. In 1973, Hager achieved sales of 43 million Deutsche Marks in Germany, and in 1974 the company reached a turnover of 22 million francs in France. In 1976, Hager launched the mini Gamma enclosure, [ 10 ] and in 1982 the company started producing the first residual-current circuit breakers (RCCBs) in Germany.
[ 11 ] A new production facility with a high-bay warehouse was opened in Blieskastel . Hager Group began to market itself as a complete service provider for electrical installations in buildings in the 1980s, setting up sales companies in Europe (Switzerland and Great Britain). [ 12 ] In the mid-1990s, Hager set up distribution channels in the United Arab Emirates (Dubai), Singapore, Malaysia, Hong Kong, China, Australia and New Zealand. In 2007, Hager Group became a European Company : Hager SE. [ 13 ] Hager Group has 22 manufacturing sites in 10 countries across the world. [ 14 ] Components for the respective markets are manufactured at the local production facilities in order to accommodate local installation requirements. The biggest production site is in Obernai, France. [ 15 ] Hager Forum was established there in 2015 as a training and meeting place for partners, customers and employees of the company. [ 16 ] [ 17 ] In 1992, the group acquired Lumetal, a manufacturer of distribution boards from Porcia , Italy . [ 18 ] Hager Group acquired the German company Tehalit, a manufacturer of cable management systems and cable ducts, in 1996. [ 19 ] In 1998, the group acquired the French electronic timer manufacturer Flash, whose registered office was in Saverne . [ 20 ] Prior to this, Hager Group had manufactured only mechanical timers. The same year, the company also acquired the British manufacturer Ashley & Rock of Ulverston , whose products were manufactured according to British Standards . In 2002, the Polish company Polo, whose registered office was in Tychy , was integrated into the company. In 2004, Hager Group acquired the Swiss company Weber AG and the French manufacturer Atral. In addition to Hager brand security systems, Atral also manufactures products for the brands Diagral, Daitem and Logisty. In 2006, Hager entered the Brazilian market when it acquired 100% of the shares in Eletromar.
Hager Group opened a plant in Pune , India in 2008, and on 30 September of the same year the foundation stone for a new Eletromar production site in Brazil was laid. On 1 January 2009, Hager acquired Electraplan Solutions GmbH , [ 21 ] and in 2010, Hager acquired Berker, a German manufacturer of switches, whose registered offices were in Schalksmühle and Ottfingen. [ 22 ] In 2012, Hager acquired the German family firm Elcom, a producer of intercoms . [ 23 ] In 2018, it acquired E3/DC GmbH, a German developer of inverters and energy storage systems . [ 24 ] The Hager brand offers services for electrical installations in residential, commercial and industrial buildings. In 2009, the previous brands Tehalit, Weber, Lume, Klik, Flash, Polo, Ashley & Rock and Logisty were combined under the Hager brand. Alarms and security systems are sold under Daitem [ 25 ] and Diagral. [ 26 ] Berker manufactures switches and switch systems as part of Hager Group. [ 22 ] Bocchiotti/Iboco , the Italian market leader in cable management and room distribution systems, is also part of Hager Group, [ 27 ] whilst Elcom produces intercom systems for residential and office buildings. [ 28 ] There are four different areas of application for Hager Group's products and services: Since 2018, Hager Group has been working on electromobility with Audi AG . The aim of the collaboration is to connect the Audi e-tron model with Hager Group's Home Energy Management System (HEMS). [ 29 ] 6% of sales are invested in research and development. In 2019, the company filed around 3,000 patents. [ 30 ] The group employs around 800 people in research and development, [ 14 ] which mainly focuses on electromobility, intelligent building technology (for smart homes) and energy efficiency. [ 31 ] Between October 2010 [ 32 ] and June 2014, [ 33 ] Hager Group sponsored the football club 1. FC Saarbrücken, with a focus on promoting young talent.
In 2017, the group began a three-year sponsorship of the French football club Racing Club Strasbourg Alsace. [ 34 ]
https://en.wikipedia.org/wiki/Hager_Group
In mathematics , the Hahn decomposition theorem , named after the Austrian mathematician Hans Hahn , states that for any measurable space ( X , Σ ) {\displaystyle (X,\Sigma )} and any signed measure μ {\displaystyle \mu } defined on the σ {\displaystyle \sigma } -algebra Σ {\displaystyle \Sigma } , there exist two Σ {\displaystyle \Sigma } -measurable sets, P {\displaystyle P} and N {\displaystyle N} , of X {\displaystyle X} such that: P ∪ N = X {\displaystyle P\cup N=X} and P ∩ N = ∅ {\displaystyle P\cap N=\varnothing } ; μ ( E ) ≥ 0 {\displaystyle \mu (E)\geq 0} for every Σ {\displaystyle \Sigma } -measurable subset E ⊆ P {\displaystyle E\subseteq P} (that is, P {\displaystyle P} is a positive set for μ {\displaystyle \mu } ); and μ ( E ) ≤ 0 {\displaystyle \mu (E)\leq 0} for every Σ {\displaystyle \Sigma } -measurable subset E ⊆ N {\displaystyle E\subseteq N} (that is, N {\displaystyle N} is a negative set). Moreover, this decomposition is essentially unique , meaning that for any other pair ( P ′ , N ′ ) {\displaystyle (P',N')} of Σ {\displaystyle \Sigma } -measurable subsets of X {\displaystyle X} fulfilling the three conditions above, the symmetric differences P △ P ′ {\displaystyle P\triangle P'} and N △ N ′ {\displaystyle N\triangle N'} are μ {\displaystyle \mu } - null sets in the strong sense that every Σ {\displaystyle \Sigma } -measurable subset of them has zero measure. The pair ( P , N ) {\displaystyle (P,N)} is then called a Hahn decomposition of the signed measure μ {\displaystyle \mu } . A consequence of the Hahn decomposition theorem is the Jordan decomposition theorem , which states that every signed measure μ {\displaystyle \mu } defined on Σ {\displaystyle \Sigma } has a unique decomposition into a difference μ = μ + − μ − {\displaystyle \mu =\mu ^{+}-\mu ^{-}} of two positive measures, μ + {\displaystyle \mu ^{+}} and μ − {\displaystyle \mu ^{-}} , at least one of which is finite, such that μ + ( E ) = 0 {\displaystyle {\mu ^{+}}(E)=0} for every Σ {\displaystyle \Sigma } -measurable subset E ⊆ N {\displaystyle E\subseteq N} and μ − ( E ) = 0 {\displaystyle {\mu ^{-}}(E)=0} for every Σ {\displaystyle \Sigma } -measurable subset E ⊆ P {\displaystyle E\subseteq P} , for any Hahn decomposition ( P , N ) {\displaystyle (P,N)} of μ {\displaystyle \mu } . We call μ + {\displaystyle \mu ^{+}} and μ − {\displaystyle \mu ^{-}} the positive and negative part of μ {\displaystyle \mu } , respectively.
The pair ( μ + , μ − ) {\displaystyle (\mu ^{+},\mu ^{-})} is called a Jordan decomposition (or sometimes Hahn–Jordan decomposition ) of μ {\displaystyle \mu } . The two measures can be defined as μ + ( E ) := μ ( E ∩ P ) {\displaystyle \mu ^{+}(E):=\mu (E\cap P)} and μ − ( E ) := − μ ( E ∩ N ) {\displaystyle \mu ^{-}(E):=-\mu (E\cap N)} for every E ∈ Σ {\displaystyle E\in \Sigma } and any Hahn decomposition ( P , N ) {\displaystyle (P,N)} of μ {\displaystyle \mu } . Note that the Jordan decomposition is unique, while the Hahn decomposition is only essentially unique. The Jordan decomposition has the following corollary: Given a Jordan decomposition ( μ + , μ − ) {\displaystyle (\mu ^{+},\mu ^{-})} of a finite signed measure μ {\displaystyle \mu } , one has μ + ( E ) = sup { μ ( B ) : B ∈ Σ , B ⊆ E } {\displaystyle \mu ^{+}(E)=\sup\{\mu (B):B\in \Sigma ,\ B\subseteq E\}} and μ − ( E ) = − inf { μ ( B ) : B ∈ Σ , B ⊆ E } {\displaystyle \mu ^{-}(E)=-\inf\{\mu (B):B\in \Sigma ,\ B\subseteq E\}} for any E {\displaystyle E} in Σ {\displaystyle \Sigma } . Furthermore, if μ = ν + − ν − {\displaystyle \mu =\nu ^{+}-\nu ^{-}} for a pair ( ν + , ν − ) {\displaystyle (\nu ^{+},\nu ^{-})} of finite non-negative measures on X {\displaystyle X} , then μ + ≤ ν + and μ − ≤ ν − . {\displaystyle \mu ^{+}\leq \nu ^{+}{\text{ and }}\mu ^{-}\leq \nu ^{-}.} The last expression means that the Jordan decomposition is the minimal decomposition of μ {\displaystyle \mu } into a difference of non-negative measures. This is the minimality property of the Jordan decomposition. Proof of the Jordan decomposition: For an elementary proof of the existence, uniqueness, and minimality of the Jordan measure decomposition see Fischer (2012) . Preparation: Assume that μ {\displaystyle \mu } does not take the value − ∞ {\displaystyle -\infty } (otherwise decompose according to − μ {\displaystyle -\mu } ). As mentioned above, a negative set is a set A ∈ Σ {\displaystyle A\in \Sigma } such that μ ( B ) ≤ 0 {\displaystyle \mu (B)\leq 0} for every Σ {\displaystyle \Sigma } -measurable subset B ⊆ A {\displaystyle B\subseteq A} . Claim: Suppose that D ∈ Σ {\displaystyle D\in \Sigma } satisfies μ ( D ) ≤ 0 {\displaystyle \mu (D)\leq 0} . Then there is a negative set A ⊆ D {\displaystyle A\subseteq D} such that μ ( A ) ≤ μ ( D ) {\displaystyle \mu (A)\leq \mu (D)} . Proof of the claim: Define A 0 := D {\displaystyle A_{0}:=D} .
Inductively assume for n ∈ N 0 {\displaystyle n\in \mathbb {N} _{0}} that A n ⊆ D {\displaystyle A_{n}\subseteq D} has been constructed. Let t n {\displaystyle t_{n}} denote the supremum of μ ( B ) {\displaystyle \mu (B)} over all the Σ {\displaystyle \Sigma } -measurable subsets B {\displaystyle B} of A n {\displaystyle A_{n}} . This supremum might a priori be infinite. As the empty set ∅ {\displaystyle \varnothing } is a possible candidate for B {\displaystyle B} in the definition of t n {\displaystyle t_{n}} , and as μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} , we have t n ≥ 0 {\displaystyle t_{n}\geq 0} . By the definition of t n {\displaystyle t_{n}} , there then exists a Σ {\displaystyle \Sigma } -measurable subset B n ⊆ A n {\displaystyle B_{n}\subseteq A_{n}} satisfying μ ( B n ) ≥ min ( 1 , t n / 2 ) . {\displaystyle \mu (B_{n})\geq \min(1,t_{n}/2).} Set A n + 1 := A n ∖ B n {\displaystyle A_{n+1}:=A_{n}\setminus B_{n}} to finish the induction step. Finally, define A := D ∖ ⋃ n = 0 ∞ B n . {\displaystyle A:=D\setminus \bigcup _{n=0}^{\infty }B_{n}.} As the sets ( B n ) n = 0 ∞ {\displaystyle (B_{n})_{n=0}^{\infty }} are disjoint subsets of D {\displaystyle D} , it follows from the sigma additivity of the signed measure μ {\displaystyle \mu } that μ ( D ) = μ ( A ) + ∑ n = 0 ∞ μ ( B n ) ≥ μ ( A ) . {\displaystyle \mu (D)=\mu (A)+\sum _{n=0}^{\infty }\mu (B_{n})\geq \mu (A).} This shows that μ ( A ) ≤ μ ( D ) {\displaystyle \mu (A)\leq \mu (D)} . Assume A {\displaystyle A} were not a negative set. This means that there would exist a Σ {\displaystyle \Sigma } -measurable subset B ⊆ A {\displaystyle B\subseteq A} that satisfies μ ( B ) > 0 {\displaystyle \mu (B)>0} . Then t n ≥ μ ( B ) {\displaystyle t_{n}\geq \mu (B)} for every n ∈ N 0 {\displaystyle n\in \mathbb {N} _{0}} , so the series on the right would have to diverge to + ∞ {\displaystyle +\infty } , implying that μ ( D ) = + ∞ {\displaystyle \mu (D)=+\infty } , which is a contradiction, since μ ( D ) ≤ 0 {\displaystyle \mu (D)\leq 0} . Therefore, A {\displaystyle A} must be a negative set. Construction of the decomposition: Set N 0 = ∅ {\displaystyle N_{0}=\varnothing } .
Inductively, given N n {\displaystyle N_{n}} , define s n {\displaystyle s_{n}} as the infimum of μ ( D ) {\displaystyle \mu (D)} over all the Σ {\displaystyle \Sigma } -measurable subsets D {\displaystyle D} of X ∖ N n {\displaystyle X\setminus N_{n}} . This infimum might a priori be − ∞ {\displaystyle -\infty } . As ∅ {\displaystyle \varnothing } is a possible candidate for D {\displaystyle D} in the definition of s n {\displaystyle s_{n}} , and as μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} , we have s n ≤ 0 {\displaystyle s_{n}\leq 0} . Hence, there exists a Σ {\displaystyle \Sigma } -measurable subset D n ⊆ X ∖ N n {\displaystyle D_{n}\subseteq X\setminus N_{n}} such that μ ( D n ) ≤ max ( s n / 2 , − 1 ) . {\displaystyle \mu (D_{n})\leq \max(s_{n}/2,-1).} By the claim above, there is a negative set A n ⊆ D n {\displaystyle A_{n}\subseteq D_{n}} such that μ ( A n ) ≤ μ ( D n ) {\displaystyle \mu (A_{n})\leq \mu (D_{n})} . Set N n + 1 := N n ∪ A n {\displaystyle N_{n+1}:=N_{n}\cup A_{n}} to finish the induction step. Finally, define N := ⋃ n = 0 ∞ A n . {\displaystyle N:=\bigcup _{n=0}^{\infty }A_{n}.} As the sets ( A n ) n = 0 ∞ {\displaystyle (A_{n})_{n=0}^{\infty }} are disjoint, we have for every Σ {\displaystyle \Sigma } -measurable subset B ⊆ N {\displaystyle B\subseteq N} that μ ( B ) = ∑ n = 0 ∞ μ ( B ∩ A n ) ≤ 0 {\displaystyle \mu (B)=\sum _{n=0}^{\infty }\mu (B\cap A_{n})\leq 0} by the sigma additivity of μ {\displaystyle \mu } . In particular, this shows that N {\displaystyle N} is a negative set. Next, define P := X ∖ N {\displaystyle P:=X\setminus N} . If P {\displaystyle P} were not a positive set, there would exist a Σ {\displaystyle \Sigma } -measurable subset D ⊆ P {\displaystyle D\subseteq P} with μ ( D ) < 0 {\displaystyle \mu (D)<0} . Then s n ≤ μ ( D ) {\displaystyle s_{n}\leq \mu (D)} for all n ∈ N 0 {\displaystyle n\in \mathbb {N} _{0}} and μ ( N ) = ∑ n = 0 ∞ μ ( A n ) ≤ ∑ n = 0 ∞ max ( s n / 2 , − 1 ) = − ∞ , {\displaystyle \mu (N)=\sum _{n=0}^{\infty }\mu (A_{n})\leq \sum _{n=0}^{\infty }\max(s_{n}/2,-1)=-\infty ,} which is not allowed for μ {\displaystyle \mu } . Therefore, P {\displaystyle P} is a positive set. Proof of the uniqueness statement: Suppose that ( N ′ , P ′ ) {\displaystyle (N',P')} is another Hahn decomposition of X {\displaystyle X} . Then P ∩ N ′ {\displaystyle P\cap N'} is a positive set and also a negative set.
Therefore, every measurable subset of it has measure zero. The same applies to N ∩ P ′ {\displaystyle N\cap P'} . As P △ P ′ = N △ N ′ = ( P ∩ N ′ ) ∪ ( N ∩ P ′ ) , {\displaystyle P\triangle P'=N\triangle N'=(P\cap N')\cup (N\cap P'),} this completes the proof. Q.E.D.
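In the discrete case the decomposition can be computed directly: for a signed measure given by finitely many point masses, P collects the points of non-negative mass and N the rest, and the Jordan parts are the restrictions to P and N. A minimal sketch (the measure mu below is an arbitrary illustrative example):

```python
def hahn_decomposition(weights):
    """Hahn decomposition of a discrete signed measure on a finite set,
    given as a dict point -> signed mass: P holds the points of
    non-negative mass, N the points of negative mass."""
    P = {x for x, w in weights.items() if w >= 0}
    N = {x for x, w in weights.items() if w < 0}
    return P, N

def jordan_parts(weights):
    """Jordan decomposition mu = mu_plus - mu_minus obtained from (P, N):
    mu_plus(E) = mu(E & P), mu_minus(E) = -mu(E & N)."""
    P, N = hahn_decomposition(weights)
    mu_plus = {x: w for x, w in weights.items() if x in P}
    mu_minus = {x: -w for x, w in weights.items() if x in N}
    return mu_plus, mu_minus

mu = {"a": 2.0, "b": -3.0, "c": 0.5, "d": -0.25}
P, N = hahn_decomposition(mu)
mu_plus, mu_minus = jordan_parts(mu)
# total variation |mu|(X) = mu_plus(X) + mu_minus(X)
total_variation = sum(mu_plus.values()) + sum(mu_minus.values())
```

Here mu_plus vanishes on N and mu_minus vanishes on P, exactly as in the theorem, and mu_plus − mu_minus reproduces mu on every subset.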
https://en.wikipedia.org/wiki/Hahn_decomposition_theorem
In mathematics , Hahn series (sometimes also known as Hahn–Mal'cev–Neumann series ) are a type of formal infinite series . They are a generalization of Puiseux series (themselves a generalization of formal power series ) and were first introduced by Hans Hahn in 1907 [ 1 ] (and then further generalized by Anatoly Maltsev and Bernhard Neumann to a non-commutative setting). They allow for arbitrary exponents of the indeterminate so long as the set supporting them forms a well-ordered subset of the value group (typically Q {\displaystyle \mathbb {Q} } or R {\displaystyle \mathbb {R} } ). Hahn series were first introduced, as groups, in the course of the proof of the Hahn embedding theorem and then studied by him in relation to Hilbert's second problem . The field of Hahn series K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} (in the indeterminate T {\displaystyle T} ) over a field K {\displaystyle K} and with value group Γ {\displaystyle \Gamma } (an ordered group) is the set of formal expressions of the form f = ∑ e ∈ Γ c e T e {\displaystyle f=\sum _{e\in \Gamma }c_{e}T^{e}} with c e ∈ K {\displaystyle c_{e}\in K} such that the support supp ⁡ f := { e ∈ Γ : c e ≠ 0 } {\displaystyle \operatorname {supp} f:=\{e\in \Gamma :c_{e}\neq 0\}} of f is well-ordered . The sum and product of f = ∑ e ∈ Γ c e T e {\displaystyle f=\sum _{e\in \Gamma }c_{e}T^{e}} and g = ∑ e ∈ Γ d e T e {\displaystyle g=\sum _{e\in \Gamma }d_{e}T^{e}} are given by f + g = ∑ e ∈ Γ ( c e + d e ) T e {\displaystyle f+g=\sum _{e\in \Gamma }(c_{e}+d_{e})T^{e}} and f g = ∑ e ∈ Γ ( ∑ e ′ + e ″ = e c e ′ d e ″ ) T e {\displaystyle fg=\sum _{e\in \Gamma }{\Bigl (}\sum _{e'+e''=e}c_{e'}d_{e''}{\Bigr )}T^{e}} (in the latter, the sum ∑ e ′ + e ″ = e {\displaystyle \sum _{e'+e''=e}} over values ( e ′ , e ″ ) {\displaystyle (e',e'')} such that c e ′ ≠ 0 {\displaystyle c_{e'}\neq 0} , d e ″ ≠ 0 {\displaystyle d_{e''}\neq 0} and e ′ + e ″ = e {\displaystyle e'+e''=e} is finite because a well-ordered set cannot contain an infinite decreasing sequence). [ 2 ] For example, T − 1 / p + T − 1 / p 2 + T − 1 / p 3 + ⋯ {\displaystyle T^{-1/p}+T^{-1/p^{2}}+T^{-1/p^{3}}+\cdots } is a Hahn series (over any field) because its set of exponents { − 1 / p , − 1 / p 2 , − 1 / p 3 , … } {\displaystyle \{-1/p,-1/p^{2},-1/p^{3},\ldots \}} is a well-ordered set of rationals; it is not a Puiseux series because the denominators in the exponents are unbounded.
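The sum and product rules can be sketched for series of finite support; this is only an illustration, since a genuine Hahn series may have an infinite well-ordered support that a plain dictionary cannot represent. Exponents are kept as exact rationals:

```python
from fractions import Fraction

def add(f, g):
    """Coefficient-wise sum of two finite-support series,
    each stored as a dict exponent -> coefficient."""
    h = dict(f)
    for e, d in g.items():
        h[e] = h.get(e, 0) + d
    return {e: c for e, c in h.items() if c != 0}

def mul(f, g):
    """Cauchy-style product: the coefficient of T^e is the sum of
    c_{e'} * d_{e''} over all pairs with e' + e'' = e."""
    h = {}
    for e1, c in f.items():
        for e2, d in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c * d
    return {e: c for e, c in h.items() if c != 0}

def valuation(f):
    """Smallest exponent in the support of a non-zero series."""
    return min(f)

# First two terms of a series with rational exponents: T^{-1/2} + T^{-1/4}
f = {Fraction(-1, 2): 1, Fraction(-1, 4): 1}
g = {Fraction(1, 2): 1}
```

Multiplying f by g shifts every exponent by 1/2, and the valuation of a sum is the smallest surviving exponent, matching the definitions in the text.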
(And if the base field K has characteristic p , then this Hahn series satisfies the equation X p − X = T − 1 {\displaystyle X^{p}-X=T^{-1}} so it is algebraic over K ( T ) {\displaystyle K(T)} .) The valuation v ( f ) {\displaystyle v(f)} of a non-zero Hahn series is defined as the smallest e ∈ Γ {\displaystyle e\in \Gamma } such that c e ≠ 0 {\displaystyle c_{e}\neq 0} (in other words, the smallest element of the support of f {\displaystyle f} ): this makes K [ [ T Γ ] ] {\displaystyle K[[T^{\Gamma }]]} into a spherically complete valued field with value group Γ {\displaystyle \Gamma } and residue field K {\displaystyle K} (justifying a posteriori the terminology). In fact, if K {\displaystyle K} has characteristic zero, then ( K [ [ T Γ ] ] , v ) {\displaystyle (K[[T^{\Gamma }]],v)} is up to (non-unique) isomorphism the only spherically complete valued field with residue field K {\displaystyle K} and value group Γ {\displaystyle \Gamma } . [ 3 ] The valuation v {\displaystyle v} defines a topology on K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} . If Γ ⊆ R {\displaystyle \Gamma \subseteq \mathbb {R} } , then v {\displaystyle v} corresponds to an ultrametric absolute value | f | = exp ⁡ ( − v ( f ) ) {\displaystyle |f|=\exp(-v(f))} , with respect to which K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} is a complete metric space . However, unlike in the case of formal Laurent series or Puiseux series, the formal sums used in defining the elements of the field do not converge: in the case of T − 1 / p + T − 1 / p 2 + T − 1 / p 3 + ⋯ {\displaystyle T^{-1/p}+T^{-1/p^{2}}+T^{-1/p^{3}}+\cdots } for example, the absolute values of the terms tend to 1 (because their valuations tend to 0), so the series is not convergent (such series are sometimes known as "pseudo-convergent" [ 4 ] ). 
If K {\displaystyle K} is algebraically closed (but not necessarily of characteristic zero) and Γ {\displaystyle \Gamma } is divisible , then K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} is algebraically closed. [ 5 ] Thus, the algebraic closure of K ( ( T ) ) {\displaystyle K((T))} is contained in K ¯ [ [ T Q ] ] {\displaystyle {\overline {K}}[[T^{\mathbb {Q} }]]} , where K ¯ {\displaystyle {\overline {K}}} is the algebraic closure of K {\displaystyle K} (when K {\displaystyle K} is of characteristic zero, it is exactly the field of Puiseux series ): in fact, it is possible to give a somewhat analogous description of the algebraic closure of K ( ( T ) ) {\displaystyle K((T))} in positive characteristic as a subset of K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} . [ 6 ] If K {\displaystyle K} is an ordered field then K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} is totally ordered by making the indeterminate T {\displaystyle T} infinitesimal (greater than 0 but less than any positive element of K {\displaystyle K} ) or, equivalently, by using the lexicographic order on the coefficients of the series. If K {\displaystyle K} is real-closed and Γ {\displaystyle \Gamma } is divisible then K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} is itself real-closed. [ 7 ] This fact can be used to analyse (or even construct) the field of surreal numbers (which is isomorphic, as an ordered field, to the field of Hahn series with real coefficients and value group the surreal numbers themselves [ 8 ] ). 
If κ is an infinite regular cardinal , one can consider the subset of K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} consisting of series whose support set { e ∈ Γ : c e ≠ 0 } {\displaystyle \{e\in \Gamma :c_{e}\neq 0\}} has cardinality (strictly) less than κ : it turns out that this is also a field, with much the same algebraic closedness properties as the full K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} : e.g., it is algebraically closed or real closed when K {\displaystyle K} is so and Γ {\displaystyle \Gamma } is divisible. [ 9 ] One can define a notion of summable families in K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} . If I {\displaystyle I} is a set and ( f i ) i ∈ I {\displaystyle (f_{i})_{i\in I}} is a family of Hahn series f i ∈ K [ [ T Γ ] ] {\displaystyle f_{i}\in K\left[\left[T^{\Gamma }\right]\right]} , then we say that ( f i ) i ∈ I {\displaystyle (f_{i})_{i\in I}} is summable if the set ⋃ i ∈ I supp ⁡ f i ⊂ Γ {\displaystyle \bigcup \limits _{i\in I}\operatorname {supp} f_{i}\subset \Gamma } is well-ordered, and each set { i ∈ I ∣ e ∈ supp ⁡ f i } {\displaystyle \{i\in I\mid e\in \operatorname {supp} f_{i}\}} for e ∈ Γ {\displaystyle e\in \Gamma } is finite. We may then define the sum ∑ i ∈ I f i {\displaystyle \sum \limits _{i\in I}f_{i}} as the Hahn series ∑ i ∈ I f i := ∑ e ∈ Γ ( ∑ i ∈ I c i , e ) T e , {\displaystyle \sum \limits _{i\in I}f_{i}:=\sum _{e\in \Gamma }{\Bigl (}\sum \limits _{i\in I}c_{i,e}{\Bigr )}T^{e},} where c i , e {\displaystyle c_{i,e}} denotes the coefficient of T e {\displaystyle T^{e}} in f i {\displaystyle f_{i}} . If ( f i ) i ∈ I , ( g i ) i ∈ I {\displaystyle (f_{i})_{i\in I},(g_{i})_{i\in I}} are summable, then so are the families ( f i + g i ) i ∈ I , ( f i g j ) ( i , j ) ∈ I × I {\displaystyle (f_{i}+g_{i})_{i\in I},(f_{i}g_{j})_{(i,j)\in I\times I}} , and we have [ 10 ] ∑ i ∈ I ( f i + g i ) = ∑ i ∈ I f i + ∑ i ∈ I g i {\displaystyle \sum \limits _{i\in I}(f_{i}+g_{i})=\sum \limits _{i\in I}f_{i}+\sum \limits _{i\in I}g_{i}} and ∑ ( i , j ) ∈ I × I f i g j = ( ∑ i ∈ I f i ) ( ∑ j ∈ I g j ) . {\displaystyle \sum \limits _{(i,j)\in I\times I}f_{i}g_{j}={\Bigl (}\sum \limits _{i\in I}f_{i}{\Bigr )}{\Bigl (}\sum \limits _{j\in I}g_{j}{\Bigr )}.} This notion of summable family does not correspond to the notion of convergence in the valuation topology on K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} .
For instance, in Q [ [ T Q ] ] {\displaystyle \mathbb {Q} \left[\left[T^{\mathbb {Q} }\right]\right]} , the family ( T n n + 1 + T n + 1 ) n ∈ N {\displaystyle (T^{\frac {n}{n+1}}+T^{n+1})_{n\in \mathbb {N} }} is summable but the sequence ( ∑ k ≤ n T k k + 1 + T k + 1 ) n ∈ N {\displaystyle {\big (}\sum \limits _{k\leq n}T^{\frac {k}{k+1}}+T^{k+1}{\big )}_{n\in \mathbb {N} }} does not converge. Let a ∈ R {\displaystyle a\in \mathbb {R} } and let A a {\displaystyle {\mathcal {A}}_{a}} denote the ring of real-valued functions which are analytic on a neighborhood of a {\displaystyle a} . If K {\displaystyle K} contains R {\displaystyle \mathbb {R} } , then we can evaluate every element f {\displaystyle f} of A a {\displaystyle {\mathcal {A}}_{a}} at every element of K [ [ T Γ ] ] {\displaystyle K\left[\left[T^{\Gamma }\right]\right]} of the form a + ε {\displaystyle a+\varepsilon } , where the valuation of ε {\displaystyle \varepsilon } is strictly positive. Indeed, the family ( f ( n ) ( a ) n ! ε n ) n ∈ N {\displaystyle {\bigg (}{\frac {f^{(n)}(a)}{n!}}\varepsilon ^{n}{\bigg )}_{\!n\in \mathbb {N} }} is always summable, [ 11 ] so we can define f ( a + ε ) := ∑ n ∈ N f ( n ) ( a ) n ! ε n {\displaystyle f(a+\varepsilon ):=\sum \limits _{n\in \mathbb {N} }{\frac {f^{(n)}(a)}{n!}}\varepsilon ^{n}} . This defines a ring homomorphism A a ⟶ K [ [ T Γ ] ] {\displaystyle {\mathcal {A}}_{a}\longrightarrow K\left[\left[T^{\Gamma }\right]\right]} . 
The construction of Hahn series can be combined with Witt vectors (at least over a perfect field ) to form twisted Hahn series or Hahn–Witt series : [ 12 ] for example, over a finite field K of characteristic p (or their algebraic closure), the field of Hahn–Witt series with value group Γ (containing the integers ) would be the set of formal sums ∑ e ∈ Γ c e p e {\displaystyle \sum _{e\in \Gamma }c_{e}p^{e}} where now c e {\displaystyle c_{e}} are Teichmüller representatives (of the elements of K ) which are multiplied and added in the same way as in the case of ordinary Witt vectors (which is obtained when Γ is the group of integers). When Γ is the group of rationals or reals and K is the algebraic closure of the finite field with p elements, this construction gives a (ultra)metrically complete algebraically closed field containing the p -adics , hence a more or less explicit description of the field C p {\displaystyle \mathbb {C} _{p}} or its spherical completion. [ 13 ]
https://en.wikipedia.org/wiki/Hahn_series
In functional analysis , the Hahn–Banach theorem is a central result that allows the extension of bounded linear functionals defined on a vector subspace of some vector space to the whole space. The theorem also shows that there are sufficient continuous linear functionals defined on every normed vector space in order to study the dual space . Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem , and has numerous uses in convex geometry . The theorem is named for the mathematicians Hans Hahn and Stefan Banach , who proved it independently in the late 1920s. The special case of the theorem for the space C [ a , b ] {\displaystyle C[a,b]} of continuous functions on an interval was proved earlier (in 1912) by Eduard Helly , [ 1 ] and a more general extension theorem, the M. Riesz extension theorem , from which the Hahn–Banach theorem can be derived, was proved in 1923 by Marcel Riesz . [ 2 ] The first Hahn–Banach theorem was proved by Eduard Helly in 1912 who showed that certain linear functionals defined on a subspace of a certain type of normed space ( C N {\displaystyle \mathbb {C} ^{\mathbb {N} }} ) had an extension of the same norm. Helly did this through the technique of first proving that a one-dimensional extension exists (where the linear functional has its domain extended by one dimension) and then using induction . In 1927, Hahn defined general Banach spaces and used Helly's technique to prove a norm-preserving version of Hahn–Banach theorem for Banach spaces (where a bounded linear functional on a subspace has a bounded linear extension of the same norm to the whole space). In 1929, Banach, who was unaware of Hahn's result, generalized it by replacing the norm-preserving version with the dominated extension version that uses sublinear functions . Whereas Helly's proof used mathematical induction, Hahn and Banach both used transfinite induction . 
[ 3 ] The Hahn–Banach theorem arose from attempts to solve infinite systems of linear equations. This is needed to solve problems such as the moment problem , whereby given all the potential moments of a function one must determine if a function having these moments exists, and, if so, find it in terms of those moments. Another such problem is the Fourier cosine series problem, whereby given all the potential Fourier cosine coefficients one must determine if a function having those coefficients exists, and, again, find it if so. Riesz and Helly solved the problem for certain classes of spaces (such as L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} and C ( [ a , b ] ) {\displaystyle C([a,b])} ) where they discovered that the existence of a solution was equivalent to the existence and continuity of certain linear functionals. In effect, they needed to solve the following problem: [ 3 ] If X {\displaystyle X} happens to be a reflexive space then to solve the vector problem, it suffices to solve the following dual problem: [ 3 ] Riesz went on to define L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} space ( 1 < p < ∞ {\displaystyle 1<p<\infty } ) in 1910 and the ℓ p {\displaystyle \ell ^{p}} spaces in 1913. While investigating these spaces he proved a special case of the Hahn–Banach theorem. Helly also proved a special case of the Hahn–Banach theorem in 1912. In 1910, Riesz solved the functional problem for some specific spaces and in 1912, Helly solved it for a more general class of spaces. It wasn't until 1932 that Banach, in one of the first important applications of the Hahn–Banach theorem, solved the general functional problem. The following theorem states the general functional problem and characterizes its solution. 
[ 3 ] Theorem [ 3 ] (The functional problem) — Let ( x i ) i ∈ I {\displaystyle \left(x_{i}\right)_{i\in I}} be vectors in a real or complex normed space X {\displaystyle X} and let ( c i ) i ∈ I {\displaystyle \left(c_{i}\right)_{i\in I}} be scalars also indexed by I ≠ ∅ . {\displaystyle I\neq \varnothing .} There exists a continuous linear functional f {\displaystyle f} on X {\displaystyle X} such that f ( x i ) = c i {\displaystyle f\left(x_{i}\right)=c_{i}} for all i ∈ I {\displaystyle i\in I} if and only if there exists a K > 0 {\displaystyle K>0} such that for any choice of scalars ( s i ) i ∈ I {\displaystyle \left(s_{i}\right)_{i\in I}} where all but finitely many s i {\displaystyle s_{i}} are 0 , {\displaystyle 0,} the following holds: | ∑ i ∈ I s i c i | ≤ K ‖ ∑ i ∈ I s i x i ‖ . {\displaystyle \left|\sum _{i\in I}s_{i}c_{i}\right|\leq K\left\|\sum _{i\in I}s_{i}x_{i}\right\|.} The Hahn–Banach theorem can be deduced from the above theorem. [ 3 ] If X {\displaystyle X} is reflexive then this theorem solves the vector problem. A real-valued function f : M → R {\displaystyle f:M\to \mathbb {R} } defined on a subset M {\displaystyle M} of X {\displaystyle X} is said to be dominated (above) by a function p : X → R {\displaystyle p:X\to \mathbb {R} } if f ( m ) ≤ p ( m ) {\displaystyle f(m)\leq p(m)} for every m ∈ M . {\displaystyle m\in M.} For this reason, the following version of the Hahn–Banach theorem is called the dominated extension theorem . Hahn–Banach dominated extension theorem (for real linear functionals) [ 4 ] [ 5 ] [ 6 ] — If p : X → R {\displaystyle p:X\to \mathbb {R} } is a sublinear function (such as a norm or seminorm for example) defined on a real vector space X {\displaystyle X} then any linear functional defined on a vector subspace of X {\displaystyle X} that is dominated above by p {\displaystyle p} has at least one linear extension to all of X {\displaystyle X} that is also dominated above by p . 
{\displaystyle p.} Explicitly, if p : X → R {\displaystyle p:X\to \mathbb {R} } is a sublinear function , which by definition means that it satisfies p ( x + y ) ≤ p ( x ) + p ( y ) and p ( t x ) = t p ( x ) for all x , y ∈ X and all real t ≥ 0 , {\displaystyle p(x+y)\leq p(x)+p(y)\quad {\text{ and }}\quad p(tx)=tp(x)\qquad {\text{ for all }}\;x,y\in X\;{\text{ and all real }}\;t\geq 0,} and if f : M → R {\displaystyle f:M\to \mathbb {R} } is a linear functional defined on a vector subspace M {\displaystyle M} of X {\displaystyle X} such that f ( m ) ≤ p ( m ) for all m ∈ M {\displaystyle f(m)\leq p(m)\quad {\text{ for all }}m\in M} then there exists a linear functional F : X → R {\displaystyle F:X\to \mathbb {R} } such that F ( m ) = f ( m ) for all m ∈ M , {\displaystyle F(m)=f(m)\quad {\text{ for all }}m\in M,} F ( x ) ≤ p ( x ) for all x ∈ X . {\displaystyle F(x)\leq p(x)\quad ~\;\,{\text{ for all }}x\in X.} Moreover, if p {\displaystyle p} is a seminorm then | F ( x ) | ≤ p ( x ) {\displaystyle |F(x)|\leq p(x)} necessarily holds for all x ∈ X . {\displaystyle x\in X.} The theorem remains true if the requirements on p {\displaystyle p} are relaxed to require only that p {\displaystyle p} be a convex function : [ 7 ] [ 8 ] p ( t x + ( 1 − t ) y ) ≤ t p ( x ) + ( 1 − t ) p ( y ) for all 0 < t < 1 and x , y ∈ X . {\displaystyle p(tx+(1-t)y)\leq tp(x)+(1-t)p(y)\qquad {\text{ for all }}0<t<1{\text{ and }}x,y\in X.} A function p : X → R {\displaystyle p:X\to \mathbb {R} } is convex and satisfies p ( 0 ) ≤ 0 {\displaystyle p(0)\leq 0} if and only if p ( a x + b y ) ≤ a p ( x ) + b p ( y ) {\displaystyle p(ax+by)\leq ap(x)+bp(y)} for all vectors x , y ∈ X {\displaystyle x,y\in X} and all non-negative real a , b ≥ 0 {\displaystyle a,b\geq 0} such that a + b ≤ 1. {\displaystyle a+b\leq 1.} Every sublinear function is a convex function. 
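In two dimensions a dominated extension can be exhibited by hand. The sketch below uses an assumed subspace M = {t·(1, 1)} with f(t·(1, 1)) = t, takes p to be the Euclidean norm, and checks numerically that the hand-picked extension F(x, y) = (x + y)/2 agrees with f on M and remains dominated by p, as some extension must by the theorem:

```python
import math
import random

def p(v):
    """Dominating sublinear function on R^2: the Euclidean norm."""
    return math.hypot(v[0], v[1])

def f(t):
    """Linear functional on the subspace M = {t*(1, 1)}: f(t*(1, 1)) = t.
    It is dominated by p on M, since t <= sqrt(2)*|t| = p((t, t))."""
    return t

def F(v):
    """One extension of f to all of R^2 (an assumed, hand-picked choice,
    not produced by the existence proof): F(x, y) = (x + y)/2."""
    return (v[0] + v[1]) / 2

random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
# F extends f on M ...
agree = all(abs(F((t, t)) - f(t)) < 1e-12 for t in (-2.0, 0.5, 3.0))
# ... and is dominated by p on sampled points of R^2
dominated = all(F(v) <= p(v) for v in pts)
```

The domination holds everywhere, not just on the sample: by the Cauchy–Schwarz inequality, |x + y|/2 is at most the Euclidean norm of (x, y).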
On the other hand, if p : X → R {\displaystyle p:X\to \mathbb {R} } is convex with p ( 0 ) ≥ 0 , {\displaystyle p(0)\geq 0,} then the function defined by p 0 ( x ) = def inf t > 0 p ( t x ) t {\displaystyle p_{0}(x)\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\inf _{t>0}{\frac {p(tx)}{t}}} is positively homogeneous (because for all x {\displaystyle x} and r > 0 {\displaystyle r>0} one has p 0 ( r x ) = inf t > 0 p ( t r x ) t = r inf t > 0 p ( t r x ) t r = r inf τ > 0 p ( τ x ) τ = r p 0 ( x ) {\displaystyle p_{0}(rx)=\inf _{t>0}{\frac {p(trx)}{t}}=r\inf _{t>0}{\frac {p(trx)}{tr}}=r\inf _{\tau >0}{\frac {p(\tau x)}{\tau }}=rp_{0}(x)} ), hence, being convex, it is sublinear . It is also bounded above by p 0 ≤ p , {\displaystyle p_{0}\leq p,} and satisfies F ≤ p 0 {\displaystyle F\leq p_{0}} for every linear functional F ≤ p . {\displaystyle F\leq p.} So the extension of the Hahn–Banach theorem to convex functionals does not have a much larger content than the classical one stated for sublinear functionals. If F : X → R {\displaystyle F:X\to \mathbb {R} } is linear then F ≤ p {\displaystyle F\leq p} if and only if [ 4 ] − p ( − x ) ≤ F ( x ) ≤ p ( x ) for all x ∈ X , {\displaystyle -p(-x)\leq F(x)\leq p(x)\quad {\text{ for all }}x\in X,} which is the (equivalent) conclusion that some authors [ 4 ] write instead of F ≤ p . {\displaystyle F\leq p.} It follows that if p : X → R {\displaystyle p:X\to \mathbb {R} } is also symmetric , meaning that p ( − x ) = p ( x ) {\displaystyle p(-x)=p(x)} holds for all x ∈ X , {\displaystyle x\in X,} then F ≤ p {\displaystyle F\leq p} if and only | F | ≤ p . {\displaystyle |F|\leq p.} Every norm is a seminorm and both are symmetric balanced sublinear functions. A sublinear function is a seminorm if and only if it is a balanced function . On a real vector space (although not on a complex vector space), a sublinear function is a seminorm if and only if it is symmetric. 
The identity function R → R {\displaystyle \mathbb {R} \to \mathbb {R} } on X := R {\displaystyle X:=\mathbb {R} } is an example of a sublinear function that is not a seminorm. The dominated extension theorem for real linear functionals implies the following alternative statement of the Hahn–Banach theorem that can be applied to linear functionals on real or complex vector spaces. Hahn–Banach theorem [ 3 ] [ 9 ] — Suppose p : X → R {\displaystyle p:X\to \mathbb {R} } is a seminorm on a vector space X {\displaystyle X} over the field K , {\displaystyle \mathbf {K} ,} which is either R {\displaystyle \mathbb {R} } or C . {\displaystyle \mathbb {C} .} If f : M → K {\displaystyle f:M\to \mathbf {K} } is a linear functional on a vector subspace M {\displaystyle M} such that | f ( m ) | ≤ p ( m ) for all m ∈ M , {\displaystyle |f(m)|\leq p(m)\quad {\text{ for all }}m\in M,} then there exists a linear functional F : X → K {\displaystyle F:X\to \mathbf {K} } such that F ( m ) = f ( m ) for all m ∈ M , {\displaystyle F(m)=f(m)\quad \;{\text{ for all }}m\in M,} | F ( x ) | ≤ p ( x ) for all x ∈ X . {\displaystyle |F(x)|\leq p(x)\quad \;\,{\text{ for all }}x\in X.} The theorem remains true if the requirements on p {\displaystyle p} are relaxed to require only that for all x , y ∈ X {\displaystyle x,y\in X} and all scalars a {\displaystyle a} and b {\displaystyle b} satisfying | a | + | b | ≤ 1 , {\displaystyle |a|+|b|\leq 1,} [ 8 ] p ( a x + b y ) ≤ | a | p ( x ) + | b | p ( y ) . {\displaystyle p(ax+by)\leq |a|p(x)+|b|p(y).} This condition holds if and only if p {\displaystyle p} is a convex and balanced function satisfying p ( 0 ) ≤ 0 , {\displaystyle p(0)\leq 0,} or equivalently, if and only if it is convex, satisfies p ( 0 ) ≤ 0 , {\displaystyle p(0)\leq 0,} and p ( u x ) ≤ p ( x ) {\displaystyle p(ux)\leq p(x)} for all x ∈ X {\displaystyle x\in X} and all unit length scalars u .
{\displaystyle u.} A complex-valued functional F {\displaystyle F} is said to be dominated by p {\displaystyle p} if | F ( x ) | ≤ p ( x ) {\displaystyle |F(x)|\leq p(x)} for all x {\displaystyle x} in the domain of F . {\displaystyle F.} With this terminology, the above statements of the Hahn–Banach theorem can be restated more succinctly: Proof The following observations allow the Hahn–Banach theorem for real vector spaces to be applied to (complex-valued) linear functionals on complex vector spaces. Every linear functional F : X → C {\displaystyle F:X\to \mathbb {C} } on a complex vector space is completely determined by its real part Re ⁡ F : X → R {\displaystyle \;\operatorname {Re} F:X\to \mathbb {R} \;} through the formula [ 6 ] [ proof 1 ] F ( x ) = Re ⁡ F ( x ) − i Re ⁡ F ( i x ) for all x ∈ X {\displaystyle F(x)\;=\;\operatorname {Re} F(x)-i\operatorname {Re} F(ix)\qquad {\text{ for all }}x\in X} and moreover, if ‖ ⋅ ‖ {\displaystyle \|\cdot \|} is a norm on X {\displaystyle X} then their dual norms are equal: ‖ F ‖ = ‖ Re ⁡ F ‖ . {\displaystyle \|F\|=\|\operatorname {Re} F\|.} [ 10 ] In particular, a linear functional on X {\displaystyle X} extends another one defined on M ⊆ X {\displaystyle M\subseteq X} if and only if their real parts are equal on M {\displaystyle M} (in other words, a linear functional F {\displaystyle F} extends f {\displaystyle f} if and only if Re ⁡ F {\displaystyle \operatorname {Re} F} extends Re ⁡ f {\displaystyle \operatorname {Re} f} ). The real part of a linear functional on X {\displaystyle X} is always a real-linear functional (meaning that it is linear when X {\displaystyle X} is considered as a real vector space) and if R : X → R {\displaystyle R:X\to \mathbb {R} } is a real-linear functional on a complex vector space then x ↦ R ( x ) − i R ( i x ) {\displaystyle x\mapsto R(x)-iR(ix)} defines the unique linear functional on X {\displaystyle X} whose real part is R . 
{\displaystyle R.} If F {\displaystyle F} is a linear functional on a (complex or real) vector space X {\displaystyle X} and if p : X → R {\displaystyle p:X\to \mathbb {R} } is a seminorm then [ 6 ] [ proof 2 ] | F | ≤ p if and only if Re ⁡ F ≤ p . {\displaystyle |F|\,\leq \,p\quad {\text{ if and only if }}\quad \operatorname {Re} F\,\leq \,p.} Stated in simpler language, a linear functional is dominated by a seminorm p {\displaystyle p} if and only if its real part is dominated above by p . {\displaystyle p.} Suppose p : X → R {\displaystyle p:X\to \mathbb {R} } is a seminorm on a complex vector space X {\displaystyle X} and let f : M → C {\displaystyle f:M\to \mathbb {C} } be a linear functional defined on a vector subspace M {\displaystyle M} of X {\displaystyle X} that satisfies | f | ≤ p {\displaystyle |f|\leq p} on M . {\displaystyle M.} Consider X {\displaystyle X} as a real vector space and apply the Hahn–Banach theorem for real vector spaces to the real-linear functional Re ⁡ f : M → R {\displaystyle \;\operatorname {Re} f:M\to \mathbb {R} \;} to obtain a real-linear extension R : X → R {\displaystyle R:X\to \mathbb {R} } that is also dominated above by p , {\displaystyle p,} so that it satisfies R ≤ p {\displaystyle R\leq p} on X {\displaystyle X} and R = Re ⁡ f {\displaystyle R=\operatorname {Re} f} on M . {\displaystyle M.} The map F : X → C {\displaystyle F:X\to \mathbb {C} } defined by F ( x ) = R ( x ) − i R ( i x ) {\displaystyle F(x)\;=\;R(x)-iR(ix)} is a linear functional on X {\displaystyle X} that extends f {\displaystyle f} (because their real parts agree on M {\displaystyle M} ) and satisfies | F | ≤ p {\displaystyle |F|\leq p} on X {\displaystyle X} (because Re ⁡ F ≤ p {\displaystyle \operatorname {Re} F\leq p} and p {\displaystyle p} is a seminorm). 
◼ {\displaystyle \blacksquare } The proof above shows that when p {\displaystyle p} is a seminorm then there is a one-to-one correspondence between dominated linear extensions of f : M → C {\displaystyle f:M\to \mathbb {C} } and dominated real-linear extensions of Re ⁡ f : M → R ; {\displaystyle \operatorname {Re} f:M\to \mathbb {R} ;} the proof even gives a formula for explicitly constructing a linear extension of f {\displaystyle f} from any given real-linear extension of its real part. Continuity A linear functional F {\displaystyle F} on a topological vector space is continuous if and only if this is true of its real part Re ⁡ F ; {\displaystyle \operatorname {Re} F;} if the domain is a normed space then ‖ F ‖ = ‖ Re ⁡ F ‖ {\displaystyle \|F\|=\|\operatorname {Re} F\|} (where one side is infinite if and only if the other side is infinite). [ 10 ] Assume X {\displaystyle X} is a topological vector space and p : X → R {\displaystyle p:X\to \mathbb {R} } is a sublinear function . If p {\displaystyle p} is a continuous sublinear function that dominates a linear functional F {\displaystyle F} then F {\displaystyle F} is necessarily continuous. [ 6 ] Moreover, a linear functional F {\displaystyle F} is continuous if and only if its absolute value | F | {\displaystyle |F|} (which is a seminorm that dominates F {\displaystyle F} ) is continuous. [ 6 ] In particular, a linear functional is continuous if and only if it is dominated by some continuous sublinear function. The Hahn–Banach theorem for real vector spaces ultimately follows from Helly's initial result for the special case where the linear functional is extended from M {\displaystyle M} to a larger vector space in which M {\displaystyle M} has codimension 1.
{\displaystyle 1.} [ 3 ] Lemma [ 6 ] ( One–dimensional dominated extension theorem ) — Let p : X → R {\displaystyle p:X\to \mathbb {R} } be a sublinear function on a real vector space X , {\displaystyle X,} let f : M → R {\displaystyle f:M\to \mathbb {R} } be a linear functional on a proper vector subspace M ⊊ X {\displaystyle M\subsetneq X} such that f ≤ p {\displaystyle f\leq p} on M {\displaystyle M} (meaning f ( m ) ≤ p ( m ) {\displaystyle f(m)\leq p(m)} for all m ∈ M {\displaystyle m\in M} ), and let x ∈ X {\displaystyle x\in X} be a vector not in M {\displaystyle M} (so M ⊕ R x = span ⁡ { M , x } {\displaystyle M\oplus \mathbb {R} x=\operatorname {span} \{M,x\}} ). There exists a linear extension F : M ⊕ R x → R {\displaystyle F:M\oplus \mathbb {R} x\to \mathbb {R} } of f {\displaystyle f} such that F ≤ p {\displaystyle F\leq p} on M ⊕ R x . {\displaystyle M\oplus \mathbb {R} x.} Given any real number b , {\displaystyle b,} the map F b : M ⊕ R x → R {\displaystyle F_{b}:M\oplus \mathbb {R} x\to \mathbb {R} } defined by F b ( m + r x ) = f ( m ) + r b {\displaystyle F_{b}(m+rx)=f(m)+rb} is always a linear extension of f {\displaystyle f} to M ⊕ R x {\displaystyle M\oplus \mathbb {R} x} [ note 1 ] but it might not satisfy F b ≤ p . {\displaystyle F_{b}\leq p.} It will be shown that b {\displaystyle b} can always be chosen so as to guarantee that F b ≤ p , {\displaystyle F_{b}\leq p,} which will complete the proof. If m , n ∈ M {\displaystyle m,n\in M} then f ( m ) − f ( n ) = f ( m − n ) ≤ p ( m − n ) = p ( m + x − x − n ) ≤ p ( m + x ) + p ( − x − n ) {\displaystyle f(m)-f(n)=f(m-n)\leq p(m-n)=p(m+x-x-n)\leq p(m+x)+p(-x-n)} which implies − p ( − n − x ) − f ( n ) ≤ p ( m + x ) − f ( m ) .
{\displaystyle -p(-n-x)-f(n)~\leq ~p(m+x)-f(m).} So define a = sup n ∈ M [ − p ( − n − x ) − f ( n ) ] and c = inf m ∈ M [ p ( m + x ) − f ( m ) ] {\displaystyle a=\sup _{n\in M}[-p(-n-x)-f(n)]\qquad {\text{ and }}\qquad c=\inf _{m\in M}[p(m+x)-f(m)]} where a ≤ c {\displaystyle a\leq c} are real numbers. To guarantee F b ≤ p , {\displaystyle F_{b}\leq p,} it suffices that a ≤ b ≤ c {\displaystyle a\leq b\leq c} (in fact, this is also necessary [ note 2 ] ) because then b {\displaystyle b} satisfies "the decisive inequality" [ 6 ] − p ( − n − x ) − f ( n ) ≤ b ≤ p ( m + x ) − f ( m ) for all m , n ∈ M . {\displaystyle -p(-n-x)-f(n)~\leq ~b~\leq ~p(m+x)-f(m)\qquad {\text{ for all }}\;m,n\in M.} To see that f ( m ) + r b ≤ p ( m + r x ) {\displaystyle f(m)+rb\leq p(m+rx)} follows, [ note 3 ] assume r ≠ 0 {\displaystyle r\neq 0} and substitute 1 r m {\displaystyle {\tfrac {1}{r}}m} in for both m {\displaystyle m} and n {\displaystyle n} to obtain − p ( − 1 r m − x ) − 1 r f ( m ) ≤ b ≤ p ( 1 r m + x ) − 1 r f ( m ) . {\displaystyle -p\left(-{\tfrac {1}{r}}m-x\right)-{\tfrac {1}{r}}f\left(m\right)~\leq ~b~\leq ~p\left({\tfrac {1}{r}}m+x\right)-{\tfrac {1}{r}}f\left(m\right).} If r > 0 {\displaystyle r>0} (respectively, if r < 0 {\displaystyle r<0} ) then the right (respectively, the left) hand side equals 1 r [ p ( m + r x ) − f ( m ) ] {\displaystyle {\tfrac {1}{r}}\left[p(m+rx)-f(m)\right]} so that multiplying by r {\displaystyle r} gives r b ≤ p ( m + r x ) − f ( m ) . {\displaystyle rb\leq p(m+rx)-f(m).} ◼ {\displaystyle \blacksquare } This lemma remains true if p : X → R {\displaystyle p:X\to \mathbb {R} } is merely a convex function instead of a sublinear function. [ 7 ] [ 8 ] Assume that p {\displaystyle p} is convex, which means that p ( t y + ( 1 − t ) z ) ≤ t p ( y ) + ( 1 − t ) p ( z ) {\displaystyle p(ty+(1-t)z)\leq tp(y)+(1-t)p(z)} for all 0 ≤ t ≤ 1 {\displaystyle 0\leq t\leq 1} and y , z ∈ X . 
{\displaystyle y,z\in X.} Let M , {\displaystyle M,} f : M → R , {\displaystyle f:M\to \mathbb {R} ,} and x ∈ X ∖ M {\displaystyle x\in X\setminus M} be as in the lemma's statement . Given any m , n ∈ M {\displaystyle m,n\in M} and any positive real r , s > 0 , {\displaystyle r,s>0,} the positive real numbers t := s r + s {\displaystyle t:={\tfrac {s}{r+s}}} and r r + s = 1 − t {\displaystyle {\tfrac {r}{r+s}}=1-t} sum to 1 {\displaystyle 1} so that the convexity of p {\displaystyle p} on X {\displaystyle X} guarantees p ( s r + s m + r r + s n ) = p ( s r + s ( m − r x ) + r r + s ( n + s x ) ) ≤ s r + s p ( m − r x ) + r r + s p ( n + s x ) {\displaystyle {\begin{alignedat}{9}p\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)~&=~p{\big (}{\tfrac {s}{r+s}}(m-rx)&&+{\tfrac {r}{r+s}}(n+sx){\big )}&&\\&\leq ~{\tfrac {s}{r+s}}\;p(m-rx)&&+{\tfrac {r}{r+s}}\;p(n+sx)&&\\\end{alignedat}}} and hence s f ( m ) + r f ( n ) = ( r + s ) f ( s r + s m + r r + s n ) by linearity of f ≤ ( r + s ) p ( s r + s m + r r + s n ) f ≤ p on M ≤ s p ( m − r x ) + r p ( n + s x ) {\displaystyle {\begin{alignedat}{9}sf(m)+rf(n)~&=~(r+s)\;f\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)&&\qquad {\text{ by linearity of }}f\\&\leq ~(r+s)\;p\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)&&\qquad f\leq p{\text{ on }}M\\&\leq ~sp(m-rx)+rp(n+sx)\\\end{alignedat}}} thus proving that − s p ( m − r x ) + s f ( m ) ≤ r p ( n + s x ) − r f ( n ) , {\displaystyle -sp(m-rx)+sf(m)~\leq ~rp(n+sx)-rf(n),} which after multiplying both sides by 1 r s {\displaystyle {\tfrac {1}{rs}}} becomes 1 r [ − p ( m − r x ) + f ( m ) ] ≤ 1 s [ p ( n + s x ) − f ( n ) ] . 
{\displaystyle {\tfrac {1}{r}}[-p(m-rx)+f(m)]~\leq ~{\tfrac {1}{s}}[p(n+sx)-f(n)].} This implies that the values defined by a = sup r > 0 m ∈ M 1 r [ − p ( m − r x ) + f ( m ) ] and c = inf s > 0 n ∈ M 1 s [ p ( n + s x ) − f ( n ) ] {\displaystyle a=\sup _{\stackrel {m\in M}{r>0}}{\tfrac {1}{r}}[-p(m-rx)+f(m)]\qquad {\text{ and }}\qquad c=\inf _{\stackrel {n\in M}{s>0}}{\tfrac {1}{s}}[p(n+sx)-f(n)]} are real numbers that satisfy a ≤ c . {\displaystyle a\leq c.} As in the proof of the one–dimensional dominated extension theorem above, for any real b ∈ R {\displaystyle b\in \mathbb {R} } define F b : M ⊕ R x → R {\displaystyle F_{b}:M\oplus \mathbb {R} x\to \mathbb {R} } by F b ( m + r x ) = f ( m ) + r b . {\displaystyle F_{b}(m+rx)=f(m)+rb.} It can be verified that if a ≤ b ≤ c {\displaystyle a\leq b\leq c} then F b ≤ p {\displaystyle F_{b}\leq p} where r b ≤ p ( m + r x ) − f ( m ) {\displaystyle rb\leq p(m+rx)-f(m)} follows from b ≤ c {\displaystyle b\leq c} when r > 0 {\displaystyle r>0} (respectively, follows from a ≤ b {\displaystyle a\leq b} when r < 0 {\displaystyle r<0} ). ◼ {\displaystyle \blacksquare } The lemma above is the key step in deducing the dominated extension theorem from Zorn's lemma . The set of all possible dominated linear extensions of f {\displaystyle f} is partially ordered by extension of each other, so there is a maximal extension F . {\displaystyle F.} By the codimension-1 result, if F {\displaystyle F} is not defined on all of X , {\displaystyle X,} then it can be further extended. Thus F {\displaystyle F} must be defined everywhere, as claimed. ◼ {\displaystyle \blacksquare } When M {\displaystyle M} has countable codimension, then using induction and the lemma completes the proof of the Hahn–Banach theorem. The standard proof of the general case uses Zorn's lemma although the strictly weaker ultrafilter lemma [ 11 ] (which is equivalent to the compactness theorem and to the Boolean prime ideal theorem ) may be used instead.
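The choice of b in the one-dimensional lemma can be made concrete. Below is a minimal numeric sketch, not part of the article's proof, in X = R² with p the Euclidean norm, M the first coordinate axis, f(m₁, 0) = m₁/2 (so that f ≤ p on M), and x = (0, 1); all of these are arbitrary illustrative choices, and the sup and inf defining a and c are approximated on a grid:

```python
import math
import random

random.seed(1)

# Setting of the lemma in X = R^2: p is the Euclidean norm, M is the first
# coordinate axis, f(m1, 0) = m1 / 2 (so f <= p on M), and x = (0, 1).
def p(v):
    return math.hypot(v[0], v[1])

def f(m1):
    return 0.5 * m1

grid = [i / 100.0 for i in range(-1000, 1001)]  # m1 ranges over [-10, 10]
a = max(-p((-n1, -1.0)) - f(n1) for n1 in grid)  # sup_n [-p(-n-x) - f(n)]
c = min(p((m1, 1.0)) - f(m1) for m1 in grid)     # inf_m [ p(m+x) - f(m)]
assert a <= c  # the "decisive inequality" admits a solution b

# Any b in [a, c] yields a dominated extension F_b(m + r x) = f(m) + r b.
b = (a + c) / 2
for _ in range(2000):
    m1 = random.uniform(-10, 10)
    r = random.uniform(-10, 10)
    assert f(m1) + r * b <= p((m1, r)) + 1e-9  # F_b <= p on M + R x
print(f"a = {a:.4f} <= b = {b:.4f} <= c = {c:.4f}")
```

In this symmetric example a = −c, and any b with |b| ≤ c (here c is approximately 0.866) gives a dominated extension, matching the lemma's claim that the admissible b form the interval [a, c].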
Hahn–Banach can also be proved using Tychonoff's theorem for compact Hausdorff spaces [ 12 ] (which is also equivalent to the ultrafilter lemma). The Mizar project has completely formalized and automatically checked the proof of the Hahn–Banach theorem in the HAHNBAN file. [ 13 ] The Hahn–Banach theorem can be used to guarantee the existence of continuous linear extensions of continuous linear functionals . Hahn–Banach continuous extension theorem [ 14 ] — Every continuous linear functional f {\displaystyle f} defined on a vector subspace M {\displaystyle M} of a (real or complex) locally convex topological vector space X {\displaystyle X} has a continuous linear extension F {\displaystyle F} to all of X . {\displaystyle X.} If in addition X {\displaystyle X} is a normed space , then this extension can be chosen so that its dual norm is equal to that of f . {\displaystyle f.} In category-theoretic terms, the underlying field of the vector space is an injective object in the category of locally convex vector spaces. On a normed (or seminormed ) space, a linear extension F {\displaystyle F} of a bounded linear functional f {\displaystyle f} is said to be norm-preserving if it has the same dual norm as the original functional: ‖ F ‖ = ‖ f ‖ . {\displaystyle \|F\|=\|f\|.} Because of this terminology, the second part of the above theorem is sometimes referred to as the " norm-preserving " version of the Hahn–Banach theorem. [ 15 ] Explicitly: Norm-preserving Hahn–Banach continuous extension theorem [ 15 ] — Every continuous linear functional f {\displaystyle f} defined on a vector subspace M {\displaystyle M} of a (real or complex) normed space X {\displaystyle X} has a continuous linear extension F {\displaystyle F} to all of X {\displaystyle X} that satisfies ‖ f ‖ = ‖ F ‖ . {\displaystyle \|f\|=\|F\|.} The following observations allow the continuous extension theorem to be deduced from the Hahn–Banach theorem .
[ 16 ] The absolute value of a linear functional is always a seminorm. A linear functional F {\displaystyle F} on a topological vector space X {\displaystyle X} is continuous if and only if its absolute value | F | {\displaystyle |F|} is continuous, which happens if and only if there exists a continuous seminorm p {\displaystyle p} on X {\displaystyle X} such that | F | ≤ p {\displaystyle |F|\leq p} on the domain of F . {\displaystyle F.} [ 17 ] If X {\displaystyle X} is a locally convex space then this statement remains true when the linear functional F {\displaystyle F} is defined on a proper vector subspace of X . {\displaystyle X.} Let f {\displaystyle f} be a continuous linear functional defined on a vector subspace M {\displaystyle M} of a locally convex topological vector space X . {\displaystyle X.} Because X {\displaystyle X} is locally convex, there exists a continuous seminorm p : X → R {\displaystyle p:X\to \mathbb {R} } on X {\displaystyle X} that dominates f {\displaystyle f} (meaning that | f ( m ) | ≤ p ( m ) {\displaystyle |f(m)|\leq p(m)} for all m ∈ M {\displaystyle m\in M} ). By the Hahn–Banach theorem , there exists a linear extension of f {\displaystyle f} to X , {\displaystyle X,} call it F , {\displaystyle F,} that satisfies | F | ≤ p {\displaystyle |F|\leq p} on X . {\displaystyle X.} This linear functional F {\displaystyle F} is continuous since | F | ≤ p {\displaystyle |F|\leq p} and p {\displaystyle p} is a continuous seminorm. Proof for normed spaces A linear functional f {\displaystyle f} on a normed space is continuous if and only if it is bounded , which means that its dual norm ‖ f ‖ = sup { | f ( m ) | : ‖ m ‖ ≤ 1 , m ∈ domain ⁡ f } {\displaystyle \|f\|=\sup\{|f(m)|:\|m\|\leq 1,m\in \operatorname {domain} f\}} is finite, in which case | f ( m ) | ≤ ‖ f ‖ ‖ m ‖ {\displaystyle |f(m)|\leq \|f\|\|m\|} holds for every point m {\displaystyle m} in its domain. 
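The norm-preserving extension described above is easy to verify in finite dimensions; here is a minimal sketch with X = R² under the Euclidean norm, M = span{(1, 1)}, and f(t·(1, 1)) = t, all arbitrary illustrative choices. The functional F(x) = (x₁ + x₂)/2 extends f with the same dual norm, while another extension G(x) = x₁ has strictly larger norm:

```python
import math

# X = R^2 with the Euclidean norm; M = span{(1, 1)}; f(t*(1,1)) = t.
# Dual norm of f on M: sup |t| / ||t*(1,1)|| = 1/sqrt(2).
norm_f = 1 / math.sqrt(2)

def dual_norm(a1, a2):
    """Dual (operator) norm of x -> a1*x1 + a2*x2 w.r.t. the Euclidean norm."""
    return math.hypot(a1, a2)

# F(x) = (x1 + x2)/2 extends f and is norm-preserving ...
F = (0.5, 0.5)
assert abs(F[0] * 1 + F[1] * 1 - 1.0) < 1e-12  # F(1,1) = f(1,1) = 1
assert abs(dual_norm(*F) - norm_f) < 1e-12     # ||F|| = ||f||

# ... while G(x) = x1 also extends f but has strictly larger dual norm.
G = (1.0, 0.0)
assert abs(G[0] * 1 + G[1] * 1 - 1.0) < 1e-12  # G(1,1) = 1 as well
assert dual_norm(*G) > norm_f                  # ||G|| = 1 > 1/sqrt(2)
print("norm-preserving extension verified")
```

This also illustrates the inequality ‖f‖ ≤ ‖F‖ noted below: every extension has dual norm at least ‖f‖, and the norm-preserving version asserts that the minimum is attained.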
Moreover, if c ≥ 0 {\displaystyle c\geq 0} is such that | f ( m ) | ≤ c ‖ m ‖ {\displaystyle |f(m)|\leq c\|m\|} for all m {\displaystyle m} in the functional's domain, then necessarily ‖ f ‖ ≤ c . {\displaystyle \|f\|\leq c.} If F {\displaystyle F} is a linear extension of a linear functional f {\displaystyle f} then their dual norms always satisfy ‖ f ‖ ≤ ‖ F ‖ {\displaystyle \|f\|\leq \|F\|} [ proof 3 ] so that equality ‖ f ‖ = ‖ F ‖ {\displaystyle \|f\|=\|F\|} is equivalent to ‖ F ‖ ≤ ‖ f ‖ , {\displaystyle \|F\|\leq \|f\|,} which holds if and only if | F ( x ) | ≤ ‖ f ‖ ‖ x ‖ {\displaystyle |F(x)|\leq \|f\|\|x\|} for every point x {\displaystyle x} in the extension's domain. This can be restated in terms of the function ‖ f ‖ ‖ ⋅ ‖ : X → R {\displaystyle \|f\|\,\|\cdot \|:X\to \mathbb {R} } defined by x ↦ ‖ f ‖ ‖ x ‖ , {\displaystyle x\mapsto \|f\|\,\|x\|,} which is always a seminorm : [ note 4 ] Applying the Hahn–Banach theorem to f {\displaystyle f} with this seminorm ‖ f ‖ ‖ ⋅ ‖ {\displaystyle \|f\|\,\|\cdot \|} thus produces a dominated linear extension whose norm is (necessarily) equal to that of f , {\displaystyle f,} which proves the theorem: Let f {\displaystyle f} be a continuous linear functional defined on a vector subspace M {\displaystyle M} of a normed space X . {\displaystyle X.} Then the function p : X → R {\displaystyle p:X\to \mathbb {R} } defined by p ( x ) = ‖ f ‖ ‖ x ‖ {\displaystyle p(x)=\|f\|\,\|x\|} is a seminorm on X {\displaystyle X} that dominates f , {\displaystyle f,} meaning that | f ( m ) | ≤ p ( m ) {\displaystyle |f(m)|\leq p(m)} holds for every m ∈ M . {\displaystyle m\in M.} By the Hahn–Banach theorem , there exists a linear functional F {\displaystyle F} on X {\displaystyle X} that extends f {\displaystyle f} (which guarantees ‖ f ‖ ≤ ‖ F ‖ {\displaystyle \|f\|\leq \|F\|} ) and that is also dominated by p , {\displaystyle p,} meaning that | F ( x ) | ≤ p ( x ) {\displaystyle |F(x)|\leq p(x)} for every x ∈ X . 
{\displaystyle x\in X.} The fact that ‖ f ‖ {\displaystyle \|f\|} is a real number such that | F ( x ) | ≤ ‖ f ‖ ‖ x ‖ {\displaystyle |F(x)|\leq \|f\|\|x\|} for every x ∈ X , {\displaystyle x\in X,} guarantees ‖ F ‖ ≤ ‖ f ‖ . {\displaystyle \|F\|\leq \|f\|.} Since ‖ F ‖ = ‖ f ‖ {\displaystyle \|F\|=\|f\|} is finite, the linear functional F {\displaystyle F} is bounded and thus continuous. The continuous extension theorem might fail if the topological vector space (TVS) X {\displaystyle X} is not locally convex . For example, for 0 < p < 1 , {\displaystyle 0<p<1,} the Lebesgue space L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} is a complete metrizable TVS (an F-space ) that is not locally convex (in fact, its only convex open subsets are L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} itself and the empty set) and the only continuous linear functional on L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} is the constant 0 {\displaystyle 0} function ( Rudin 1991 , §1.47). Since L p ( [ 0 , 1 ] ) {\displaystyle L^{p}([0,1])} is Hausdorff, every finite-dimensional vector subspace M ⊆ L p ( [ 0 , 1 ] ) {\displaystyle M\subseteq L^{p}([0,1])} is linearly homeomorphic to Euclidean space R dim ⁡ M {\displaystyle \mathbb {R} ^{\dim M}} or C dim ⁡ M {\displaystyle \mathbb {C} ^{\dim M}} (by F. Riesz's theorem ) and so every non-zero linear functional f {\displaystyle f} on M {\displaystyle M} is continuous but none has a continuous linear extension to all of L p ( [ 0 , 1 ] ) . {\displaystyle L^{p}([0,1]).} However, it is possible for a TVS X {\displaystyle X} to not be locally convex but nevertheless have enough continuous linear functionals that its continuous dual space X ∗ {\displaystyle X^{*}} separates points ; for such a TVS, a continuous linear functional defined on a vector subspace might have a continuous linear extension to the whole space.
If the TVS X {\displaystyle X} is not locally convex then there might not exist any continuous seminorm p : X → R {\displaystyle p:X\to \mathbb {R} } defined on X {\displaystyle X} (not just on M {\displaystyle M} ) that dominates f , {\displaystyle f,} in which case the Hahn–Banach theorem can not be applied as it was in the above proof of the continuous extension theorem. However, the proof's argument can be generalized to give a characterization of when a continuous linear functional has a continuous linear extension: If X {\displaystyle X} is any TVS (not necessarily locally convex), then a continuous linear functional f {\displaystyle f} defined on a vector subspace M {\displaystyle M} has a continuous linear extension F {\displaystyle F} to all of X {\displaystyle X} if and only if there exists some continuous seminorm p {\displaystyle p} on X {\displaystyle X} that dominates f . {\displaystyle f.} Specifically, if given a continuous linear extension F {\displaystyle F} then p := | F | {\displaystyle p:=|F|} is a continuous seminorm on X {\displaystyle X} that dominates f ; {\displaystyle f;} and conversely, if given a continuous seminorm p : X → R {\displaystyle p:X\to \mathbb {R} } on X {\displaystyle X} that dominates f {\displaystyle f} then any dominated linear extension of f {\displaystyle f} to X {\displaystyle X} (the existence of which is guaranteed by the Hahn–Banach theorem) will be a continuous linear extension. The key element of the Hahn–Banach theorem is fundamentally a result about the separation of two convex sets: { − p ( − x − n ) − f ( n ) : n ∈ M } , {\displaystyle \{-p(-x-n)-f(n):n\in M\},} and { p ( m + x ) − f ( m ) : m ∈ M } . {\displaystyle \{p(m+x)-f(m):m\in M\}.} This sort of argument appears widely in convex geometry , [ 18 ] optimization theory , and economics . Lemmas to this end derived from the original Hahn–Banach theorem are known as the Hahn–Banach separation theorems . 
[ 19 ] [ 20 ] They are generalizations of the hyperplane separation theorem , which states that two disjoint nonempty convex subsets of a finite-dimensional space R n {\displaystyle \mathbb {R} ^{n}} can be separated by some affine hyperplane , which is a fiber ( level set ) of the form f − 1 ( s ) = { x : f ( x ) = s } {\displaystyle f^{-1}(s)=\{x:f(x)=s\}} where f ≠ 0 {\displaystyle f\neq 0} is a non-zero linear functional and s {\displaystyle s} is a scalar. Theorem [ 19 ] — Let A {\displaystyle A} and B {\displaystyle B} be non-empty convex subsets of a real locally convex topological vector space X . {\displaystyle X.} If Int ⁡ A ≠ ∅ {\displaystyle \operatorname {Int} A\neq \varnothing } and B ∩ Int ⁡ A = ∅ {\displaystyle B\cap \operatorname {Int} A=\varnothing } then there exists a continuous linear functional f {\displaystyle f} on X {\displaystyle X} such that sup f ( A ) ≤ inf f ( B ) {\displaystyle \sup f(A)\leq \inf f(B)} and f ( a ) < inf f ( B ) {\displaystyle f(a)<\inf f(B)} for all a ∈ Int ⁡ A {\displaystyle a\in \operatorname {Int} A} (such an f {\displaystyle f} is necessarily non-zero). When the convex sets have additional properties, such as being open or compact for example, then the conclusion can be substantially strengthened: Theorem [ 3 ] [ 21 ] — Let A {\displaystyle A} and B {\displaystyle B} be convex non-empty disjoint subsets of a real topological vector space X . {\displaystyle X.} If X {\displaystyle X} is complex (rather than real) then the same claims hold, but for the real part of f . {\displaystyle f.} The following important corollary is known as the Geometric Hahn–Banach theorem or Mazur's theorem (also known as Ascoli–Mazur theorem [ 22 ] ). It follows from the first bullet above and the convexity of M .
{\displaystyle M.} Theorem (Mazur) [ 23 ] — Let M {\displaystyle M} be a vector subspace of the topological vector space X {\displaystyle X} and suppose K {\displaystyle K} is a non-empty convex open subset of X {\displaystyle X} with K ∩ M = ∅ . {\displaystyle K\cap M=\varnothing .} Then there is a closed hyperplane (codimension-1 vector subspace) N ⊆ X {\displaystyle N\subseteq X} that contains M , {\displaystyle M,} but remains disjoint from K . {\displaystyle K.} Mazur's theorem clarifies that vector subspaces (even those that are not closed) can be characterized by linear functionals. Corollary [ 24 ] (Separation of a subspace and an open convex set) — Let M {\displaystyle M} be a vector subspace of a locally convex topological vector space X , {\displaystyle X,} and U {\displaystyle U} be a non-empty open convex subset disjoint from M . {\displaystyle M.} Then there exists a continuous linear functional f {\displaystyle f} on X {\displaystyle X} such that f ( m ) = 0 {\displaystyle f(m)=0} for all m ∈ M {\displaystyle m\in M} and Re ⁡ f > 0 {\displaystyle \operatorname {Re} f>0} on U . {\displaystyle U.} Since points are trivially convex , geometric Hahn–Banach implies that functionals can detect the boundary of a set. In particular, let X {\displaystyle X} be a real topological vector space and A ⊆ X {\displaystyle A\subseteq X} be convex with Int ⁡ A ≠ ∅ . {\displaystyle \operatorname {Int} A\neq \varnothing .} If a 0 ∈ A ∖ Int ⁡ A {\displaystyle a_{0}\in A\setminus \operatorname {Int} A} then there is a functional that is vanishing at a 0 , {\displaystyle a_{0},} but supported on the interior of A . {\displaystyle A.} [ 19 ] Call a normed space X {\displaystyle X} smooth if at each point x {\displaystyle x} in its unit ball there exists a unique closed supporting hyperplane to the unit ball at x . {\displaystyle x.} Köthe showed in 1983 that a normed space is smooth at a point x {\displaystyle x} if and only if the norm is Gateaux differentiable at that point.
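On R² the separation theorems can be checked directly. Below is a minimal numeric sketch, with two disjoint closed Euclidean disks and the separating functional f(x) = x₁; the centers, radii, and functional are arbitrary illustrative choices:

```python
import random

random.seed(2)

# Two disjoint convex sets in R^2: closed unit disks centered at
# (0, 0) and (3, 0) (arbitrary illustrative choices).
def sample_disk(cx, cy, r):
    """Rejection-sample a uniform point from the disk of radius r at (cx, cy)."""
    while True:
        u, v = random.uniform(-r, r), random.uniform(-r, r)
        if u * u + v * v <= r * r:
            return (cx + u, cy + v)

def f(x):
    """A continuous linear functional separating the two disks."""
    return x[0]

A = [sample_disk(0.0, 0.0, 1.0) for _ in range(5000)]
B = [sample_disk(3.0, 0.0, 1.0) for _ in range(5000)]

sup_A = max(f(a) for a in A)  # approaches sup f(A) = 1 from below
inf_B = min(f(b) for b in B)  # approaches inf f(B) = 2 from above
assert sup_A <= 1.0 and inf_B >= 2.0
assert sup_A <= inf_B         # sup f(A) <= inf f(B): separation holds
print(f"sup f(A) ~ {sup_A:.3f} <= inf f(B) ~ {inf_B:.3f}")
```

Any hyperplane f⁻¹(s) with sup f(A) ≤ s ≤ inf f(B), for instance the line x₁ = 1.5, separates the two sets, which is exactly the finite-dimensional statement generalized by the separation theorems above.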
[ 3 ] Let U {\displaystyle U} be a convex balanced neighborhood of the origin in a locally convex topological vector space X {\displaystyle X} and suppose x ∈ X {\displaystyle x\in X} is not an element of U . {\displaystyle U.} Then there exists a continuous linear functional f {\displaystyle f} on X {\displaystyle X} such that [ 3 ] sup | f ( U ) | ≤ | f ( x ) | . {\displaystyle \sup |f(U)|\leq |f(x)|.} The Hahn–Banach theorem is the first sign of an important philosophy in functional analysis : to understand a space, one should understand its continuous functionals . For example, linear subspaces are characterized by functionals: if X is a normed vector space with linear subspace M (not necessarily closed) and if z {\displaystyle z} is an element of X not in the closure of M , then there exists a continuous linear map f : X → K {\displaystyle f:X\to \mathbf {K} } with f ( m ) = 0 {\displaystyle f(m)=0} for all m ∈ M , {\displaystyle m\in M,} f ( z ) = 1 , {\displaystyle f(z)=1,} and ‖ f ‖ = dist ⁡ ( z , M ) − 1 . {\displaystyle \|f\|=\operatorname {dist} (z,M)^{-1}.} (To see this, note that dist ⁡ ( ⋅ , M ) {\displaystyle \operatorname {dist} (\cdot ,M)} is a sublinear function.) Moreover, if z {\displaystyle z} is an element of X , then there exists a continuous linear map f : X → K {\displaystyle f:X\to \mathbf {K} } such that f ( z ) = ‖ z ‖ {\displaystyle f(z)=\|z\|} and ‖ f ‖ ≤ 1. {\displaystyle \|f\|\leq 1.} This implies that the natural injection J {\displaystyle J} from a normed space X into its double dual X ∗ ∗ {\displaystyle X^{**}} is isometric. That last result also suggests that the Hahn–Banach theorem can often be used to locate a "nicer" topology in which to work. For example, many results in functional analysis assume that a space is Hausdorff or locally convex . However, suppose X is a topological vector space, not necessarily Hausdorff or locally convex , but with a nonempty, proper, convex, open set M .
Then geometric Hahn–Banach implies that there is a hyperplane separating M from any other point. In particular, there must exist a nonzero functional on X — that is, the continuous dual space X ∗ {\displaystyle X^{*}} is non-trivial. [ 3 ] [ 25 ] Considering X with the weak topology induced by X ∗ , {\displaystyle X^{*},} X becomes locally convex; by the second bullet of geometric Hahn–Banach, the weak topology on this new space separates points . Thus X with this weak topology becomes Hausdorff . This sometimes allows some results from locally convex topological vector spaces to be applied to non-Hausdorff and non-locally convex spaces. The Hahn–Banach theorem is often useful when one wishes to apply the method of a priori estimates . Suppose that we wish to solve the linear differential equation P u = f {\displaystyle Pu=f} for u , {\displaystyle u,} with f {\displaystyle f} given in some Banach space X . If we have control on the size of u {\displaystyle u} in terms of ‖ f ‖ X {\displaystyle \|f\|_{X}} and we can think of u {\displaystyle u} as a bounded linear functional on some suitable space of test functions g , {\displaystyle g,} then we can view f {\displaystyle f} as a linear functional by adjunction: ( f , g ) = ( u , P ∗ g ) . {\displaystyle (f,g)=(u,P^{*}g).} At first, this functional is only defined on the image of P , {\displaystyle P,} but using the Hahn–Banach theorem, we can try to extend it to the entire codomain X . The resulting functional is often defined to be a weak solution to the equation . Theorem [ 26 ] — A real Banach space is reflexive if and only if every pair of non-empty disjoint closed convex subsets, one of which is bounded, can be strictly separated by a hyperplane. To illustrate an actual application of the Hahn–Banach theorem, we will now prove a result that follows almost entirely from the Hahn–Banach theorem.
Proposition — Suppose X {\displaystyle X} is a Hausdorff locally convex TVS over the field K {\displaystyle \mathbf {K} } and Y {\displaystyle Y} is a vector subspace of X {\displaystyle X} that is TVS–isomorphic to K I {\displaystyle \mathbf {K} ^{I}} for some set I . {\displaystyle I.} Then Y {\displaystyle Y} is a closed and complemented vector subspace of X . {\displaystyle X.} Since K I {\displaystyle \mathbf {K} ^{I}} is a complete TVS so is Y , {\displaystyle Y,} and since any complete subset of a Hausdorff TVS is closed, Y {\displaystyle Y} is a closed subset of X . {\displaystyle X.} Let f = ( f i ) i ∈ I : Y → K I {\displaystyle f=\left(f_{i}\right)_{i\in I}:Y\to \mathbf {K} ^{I}} be a TVS isomorphism, so that each f i : Y → K {\displaystyle f_{i}:Y\to \mathbf {K} } is a continuous surjective linear functional. By the Hahn–Banach theorem, we may extend each f i {\displaystyle f_{i}} to a continuous linear functional F i : X → K {\displaystyle F_{i}:X\to \mathbf {K} } on X . {\displaystyle X.} Let F := ( F i ) i ∈ I : X → K I {\displaystyle F:=\left(F_{i}\right)_{i\in I}:X\to \mathbf {K} ^{I}} so F {\displaystyle F} is a continuous linear surjection such that its restriction to Y {\displaystyle Y} is F | Y = ( F i | Y ) i ∈ I = ( f i ) i ∈ I = f . {\displaystyle F{\big \vert }_{Y}=\left(F_{i}{\big \vert }_{Y}\right)_{i\in I}=\left(f_{i}\right)_{i\in I}=f.} Let P := f − 1 ∘ F : X → Y , {\displaystyle P:=f^{-1}\circ F:X\to Y,} which is a continuous linear map whose restriction to Y {\displaystyle Y} is P | Y = f − 1 ∘ F | Y = f − 1 ∘ f = 1 Y , {\displaystyle P{\big \vert }_{Y}=f^{-1}\circ F{\big \vert }_{Y}=f^{-1}\circ f=\mathbf {1} _{Y},} where 1 Y {\displaystyle \mathbb {1} _{Y}} denotes the identity map on Y . {\displaystyle Y.} This shows that P {\displaystyle P} is a continuous linear projection onto Y {\displaystyle Y} (that is, P ∘ P = P {\displaystyle P\circ P=P} ). 
Thus Y {\displaystyle Y} is complemented in X {\displaystyle X} and X = Y ⊕ ker ⁡ P {\displaystyle X=Y\oplus \ker P} in the category of TVSs. ◼ {\displaystyle \blacksquare } The above result may be used to show that every closed vector subspace of R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} is complemented because any such space is either finite dimensional or else TVS–isomorphic to R N . {\displaystyle \mathbb {R} ^{\mathbb {N} }.}
General template
There are now many other versions of the Hahn–Banach theorem. The general template for the various versions of the Hahn–Banach theorem presented in this article is as follows: Theorem [ 3 ] — If D {\displaystyle D} is an absorbing disk in a real or complex vector space X {\displaystyle X} and if f {\displaystyle f} is a linear functional defined on a vector subspace M {\displaystyle M} of X {\displaystyle X} such that | f | ≤ 1 {\displaystyle |f|\leq 1} on M ∩ D , {\displaystyle M\cap D,} then there exists a linear functional F {\displaystyle F} on X {\displaystyle X} extending f {\displaystyle f} such that | F | ≤ 1 {\displaystyle |F|\leq 1} on D . {\displaystyle D.} Hahn–Banach theorem for seminorms [ 27 ] [ 28 ] — If p : M → R {\displaystyle p:M\to \mathbb {R} } is a seminorm defined on a vector subspace M {\displaystyle M} of X , {\displaystyle X,} and if q : X → R {\displaystyle q:X\to \mathbb {R} } is a seminorm on X {\displaystyle X} such that p ≤ q | M , {\displaystyle p\leq q{\big \vert }_{M},} then there exists a seminorm P : X → R {\displaystyle P:X\to \mathbb {R} } on X {\displaystyle X} such that P | M = p {\displaystyle P{\big \vert }_{M}=p} on M {\displaystyle M} and P ≤ q {\displaystyle P\leq q} on X . {\displaystyle X.} Let S {\displaystyle S} be the convex hull of { m ∈ M : p ( m ) ≤ 1 } ∪ { x ∈ X : q ( x ) ≤ 1 } .
{\displaystyle \{m\in M:p(m)\leq 1\}\cup \{x\in X:q(x)\leq 1\}.} Because S {\displaystyle S} is an absorbing disk in X , {\displaystyle X,} its Minkowski functional P {\displaystyle P} is a seminorm. Then p = P {\displaystyle p=P} on M {\displaystyle M} and P ≤ q {\displaystyle P\leq q} on X . {\displaystyle X.} So for example, suppose that f {\displaystyle f} is a bounded linear functional defined on a vector subspace M {\displaystyle M} of a normed space X , {\displaystyle X,} so its operator norm ‖ f ‖ {\displaystyle \|f\|} is a non-negative real number. Then the linear functional's absolute value p := | f | {\displaystyle p:=|f|} is a seminorm on M {\displaystyle M} and the map q : X → R {\displaystyle q:X\to \mathbb {R} } defined by q ( x ) = ‖ f ‖ ‖ x ‖ {\displaystyle q(x)=\|f\|\,\|x\|} is a seminorm on X {\displaystyle X} that satisfies p ≤ q | M {\displaystyle p\leq q{\big \vert }_{M}} on M . {\displaystyle M.} The Hahn–Banach theorem for seminorms guarantees the existence of a seminorm P : X → R {\displaystyle P:X\to \mathbb {R} } that is equal to | f | {\displaystyle |f|} on M {\displaystyle M} (since P | M = p = | f | {\displaystyle P{\big \vert }_{M}=p=|f|} ) and is bounded above by P ( x ) ≤ ‖ f ‖ ‖ x ‖ {\displaystyle P(x)\leq \|f\|\,\|x\|} everywhere on X {\displaystyle X} (since P ≤ q {\displaystyle P\leq q} ). Hahn–Banach sandwich theorem [ 3 ] — Let p : X → R {\displaystyle p:X\to \mathbb {R} } be a sublinear function on a real vector space X , {\displaystyle X,} let S ⊆ X {\displaystyle S\subseteq X} be any subset of X , {\displaystyle X,} and let f : S → R {\displaystyle f:S\to \mathbb {R} } be any map.
If there exist positive real numbers a {\displaystyle a} and b {\displaystyle b} such that 0 ≥ inf s ∈ S [ p ( s − a x − b y ) − f ( s ) − a f ( x ) − b f ( y ) ] for all x , y ∈ S , {\displaystyle 0\geq \inf _{s\in S}[p(s-ax-by)-f(s)-af(x)-bf(y)]\qquad {\text{ for all }}x,y\in S,} then there exists a linear functional F : X → R {\displaystyle F:X\to \mathbb {R} } on X {\displaystyle X} such that F ≤ p {\displaystyle F\leq p} on X {\displaystyle X} and f ≤ F ≤ p {\displaystyle f\leq F\leq p} on S . {\displaystyle S.} Theorem [ 3 ] (Andenaes, 1970) — Let p : X → R {\displaystyle p:X\to \mathbb {R} } be a sublinear function on a real vector space X , {\displaystyle X,} let f : M → R {\displaystyle f:M\to \mathbb {R} } be a linear functional on a vector subspace M {\displaystyle M} of X {\displaystyle X} such that f ≤ p {\displaystyle f\leq p} on M , {\displaystyle M,} and let S ⊆ X {\displaystyle S\subseteq X} be any subset of X . {\displaystyle X.} Then there exists a linear functional F : X → R {\displaystyle F:X\to \mathbb {R} } on X {\displaystyle X} that extends f , {\displaystyle f,} satisfies F ≤ p {\displaystyle F\leq p} on X , {\displaystyle X,} and is (pointwise) maximal on S {\displaystyle S} in the following sense: if F ^ : X → R {\displaystyle {\widehat {F}}:X\to \mathbb {R} } is a linear functional on X {\displaystyle X} that extends f {\displaystyle f} and satisfies F ^ ≤ p {\displaystyle {\widehat {F}}\leq p} on X , {\displaystyle X,} then F ≤ F ^ {\displaystyle F\leq {\widehat {F}}} on S {\displaystyle S} implies F = F ^ {\displaystyle F={\widehat {F}}} on S . {\displaystyle S.} If S = { s } {\displaystyle S=\{s\}} is a singleton set (where s ∈ X {\displaystyle s\in X} is some vector) and if F : X → R {\displaystyle F:X\to \mathbb {R} } is such a maximal dominated linear extension of f : M → R , {\displaystyle f:M\to \mathbb {R} ,} then F ( s ) = inf m ∈ M [ f ( m ) + p ( s − m ) ] .
{\displaystyle F(s)=\inf _{m\in M}[f(m)+p(s-m)].} [ 3 ] Vector–valued Hahn–Banach theorem [ 3 ] — If X {\displaystyle X} and Y {\displaystyle Y} are vector spaces over the same field and if f : M → Y {\displaystyle f:M\to Y} is a linear map defined on a vector subspace M {\displaystyle M} of X , {\displaystyle X,} then there exists a linear map F : X → Y {\displaystyle F:X\to Y} that extends f . {\displaystyle f.} A set Γ {\displaystyle \Gamma } of maps X → X {\displaystyle X\to X} is commutative (with respect to function composition ∘ {\displaystyle \,\circ \,} ) if F ∘ G = G ∘ F {\displaystyle F\circ G=G\circ F} for all F , G ∈ Γ . {\displaystyle F,G\in \Gamma .} Say that a function f {\displaystyle f} defined on a subset M {\displaystyle M} of X {\displaystyle X} is Γ {\displaystyle \Gamma } -invariant if L ( M ) ⊆ M {\displaystyle L(M)\subseteq M} and f ∘ L = f {\displaystyle f\circ L=f} on M {\displaystyle M} for every L ∈ Γ . {\displaystyle L\in \Gamma .} An invariant Hahn–Banach theorem [ 29 ] — Suppose Γ {\displaystyle \Gamma } is a commutative set of continuous linear maps from a normed space X {\displaystyle X} into itself and let f {\displaystyle f} be a continuous linear functional defined on some vector subspace M {\displaystyle M} of X {\displaystyle X} that is Γ {\displaystyle \Gamma } -invariant , which means that L ( M ) ⊆ M {\displaystyle L(M)\subseteq M} and f ∘ L = f {\displaystyle f\circ L=f} on M {\displaystyle M} for every L ∈ Γ . {\displaystyle L\in \Gamma .} Then f {\displaystyle f} has a continuous linear extension F {\displaystyle F} to all of X {\displaystyle X} that has the same operator norm ‖ f ‖ = ‖ F ‖ {\displaystyle \|f\|=\|F\|} and is also Γ {\displaystyle \Gamma } -invariant, meaning that F ∘ L = F {\displaystyle F\circ L=F} on X {\displaystyle X} for every L ∈ Γ . {\displaystyle L\in \Gamma .} This theorem may be summarized: The following theorem of Mazur–Orlicz (1953) is equivalent to the Hahn–Banach theorem.
Mazur–Orlicz theorem [ 3 ] — Let p : X → R {\displaystyle p:X\to \mathbb {R} } be a sublinear function on a real or complex vector space X , {\displaystyle X,} let T {\displaystyle T} be any set, and let R : T → R {\displaystyle R:T\to \mathbb {R} } and v : T → X {\displaystyle v:T\to X} be any maps. The following statements are equivalent: The following theorem characterizes when any scalar function on X {\displaystyle X} (not necessarily linear) has a continuous linear extension to all of X . {\displaystyle X.} Theorem ( The extension principle [ 30 ] ) — Let f {\displaystyle f} be a scalar-valued function on a subset S {\displaystyle S} of a topological vector space X . {\displaystyle X.} Then there exists a continuous linear functional F {\displaystyle F} on X {\displaystyle X} extending f {\displaystyle f} if and only if there exists a continuous seminorm p {\displaystyle p} on X {\displaystyle X} such that | ∑ i = 1 n a i f ( s i ) | ≤ p ( ∑ i = 1 n a i s i ) {\displaystyle \left|\sum _{i=1}^{n}a_{i}f(s_{i})\right|\leq p\left(\sum _{i=1}^{n}a_{i}s_{i}\right)} for all positive integers n {\displaystyle n} and all finite sequences a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} of scalars and elements s 1 , … , s n {\displaystyle s_{1},\ldots ,s_{n}} of S . {\displaystyle S.} Let X be a topological vector space. A vector subspace M of X has the extension property if any continuous linear functional on M can be extended to a continuous linear functional on X , and we say that X has the Hahn–Banach extension property ( HBEP ) if every vector subspace of X has the extension property. [ 31 ] The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP. For complete metrizable topological vector spaces there is a converse, due to Kalton: every complete metrizable TVS with the Hahn–Banach extension property is locally convex.
[ 31 ] On the other hand, a vector space X of uncountable dimension, endowed with the finest vector topology , is a topological vector space with the Hahn–Banach extension property that is neither locally convex nor metrizable. [ 31 ] A vector subspace M of a TVS X has the separation property if for every element x of X such that x ∉ M , {\displaystyle x\not \in M,} there exists a continuous linear functional f {\displaystyle f} on X such that f ( x ) ≠ 0 {\displaystyle f(x)\neq 0} and f ( m ) = 0 {\displaystyle f(m)=0} for all m ∈ M . {\displaystyle m\in M.} Clearly, the continuous dual space of a TVS X separates points on X if and only if { 0 } {\displaystyle \{0\}} has the separation property. In 1992, Kakol proved that for any infinite-dimensional vector space X , there exist TVS-topologies on X that do not have the HBEP despite having enough continuous linear functionals for the continuous dual space to separate points on X . However, if X is a TVS then every vector subspace of X has the extension property if and only if every vector subspace of X has the separation property. [ 31 ] The proof of the Hahn–Banach theorem for real vector spaces ( HB ) commonly uses Zorn's lemma , which in the axiomatic framework of Zermelo–Fraenkel set theory ( ZF ) is equivalent to the axiom of choice ( AC ). It was discovered by Łoś and Ryll-Nardzewski [ 12 ] and independently by Luxemburg [ 11 ] that HB can be proved using the ultrafilter lemma ( UL ), which is equivalent (under ZF ) to the Boolean prime ideal theorem ( BPI ). BPI is strictly weaker than the axiom of choice and it was later shown that HB is strictly weaker than BPI . [ 32 ] The ultrafilter lemma is equivalent (under ZF ) to the Banach–Alaoglu theorem , [ 33 ] which is another foundational theorem in functional analysis . Although the Banach–Alaoglu theorem implies HB , [ 34 ] it is not equivalent to it (said differently, the Banach–Alaoglu theorem is strictly stronger than HB ).
However, HB is equivalent to a certain weakened version of the Banach–Alaoglu theorem for normed spaces. [ 35 ] The Hahn–Banach theorem is also equivalent to the following statement: [ 36 ] ( BPI is equivalent to the statement that there are always non-constant probability charges which take only the values 0 and 1.) In ZF , the Hahn–Banach theorem suffices to derive the existence of a non-Lebesgue measurable set. [ 37 ] Moreover, the Hahn–Banach theorem implies the Banach–Tarski paradox . [ 38 ] For separable Banach spaces , D. K. Brown and S. G. Simpson proved that the Hahn–Banach theorem follows from WKL 0 , a weak subsystem of second-order arithmetic that takes a form of Kőnig's lemma restricted to binary trees as an axiom. In fact, they prove that under a weak set of assumptions, the two are equivalent, an example of reverse mathematics . [ 39 ] [ 40 ] Proofs
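One elementary special case of the extension problem can be computed directly: in a finite-dimensional Euclidean (Hilbert) space, composing a functional with the orthogonal projection onto its domain gives a norm-preserving extension. All data below (the matrix B and coefficient vector a) are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# M = col(B) is a subspace of X = R^4; the functional f on M is given by
# its action on the basis: f(B @ c) = a @ c.
B = rng.standard_normal((4, 2))
a = rng.standard_normal(2)

P = B @ np.linalg.pinv(B)       # orthogonal projection of R^4 onto M
w = a @ np.linalg.pinv(B)       # F := f ∘ P, represented as F(x) = w @ x

c = rng.standard_normal(2)
assert np.isclose(w @ (B @ c), a @ c)   # F agrees with f on M
assert np.allclose(P @ w, w)            # the representer w lies in M, so ‖F‖ = ‖f‖
```

Because the Riesz representer w of the extension F lies inside M itself, F attains its operator norm on M, which is the Hilbert-space case of the norm-preserving extension guaranteed by Hahn–Banach.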
https://en.wikipedia.org/wiki/Hahn–Banach_theorem
Hai-Lung Dai is a Taiwanese-American physical chemist and university administrator. He currently is the Laura H. Carnell Professor of Chemistry at Temple University in Philadelphia , Pennsylvania , in the United States. Dai was born in Taiwan. He completed a B.S. in chemistry at National Taiwan University in 1974, following military service, and went to the United States in 1976 for graduate studies. He obtained his doctorate in chemistry from the University of California at Berkeley in 1981, and then did postdoctoral research at the Massachusetts Institute of Technology until 1984. That year he began teaching in the chemistry department of the University of Pennsylvania in Philadelphia , where he remained for twenty-two years and became department chair and the Hirschmann-Makineni Professor. He founded the Penn Science Teacher Institute that eventually trained 300 in-service science teachers and was named as a model for training science teachers in a 2005 National Academy of Sciences white paper. In 2007 he became Dean of the College of Science and Technology of Temple University , also in Philadelphia, Pennsylvania, and was Provost [ 1 ] of Temple University during 2012 - 2016. During his time as provost, Temple's USNWR ranking went from #135 to #115 [ 2 ] and Temple became a Carnegie R1 Highest Research Activity University. [ 3 ] In 2017, Dai was appointed vice president for International Affairs at Temple University. [ 4 ] Dai has received several honors and awards, among them:
https://en.wikipedia.org/wiki/Hai-Lung_Dai
Haifuki-ho (灰吹法; literally "ash-blowing method"), also known as Lead-silver separation method ( Korean : 연은분리법; Hanja : 鉛銀分離法), [ 1 ] is a method of silver refining developed in the Joseon dynasty of Korea [ 2 ] in the 16th century and spread to China and Feudal Japan . [ 3 ] The industrial process involved cupellation , and was a contributing factor to the large amount of silver traditionally exported by Japan. [ 3 ] In 1526 Kamiya Jutei, a wealthy merchant from Hakata , founded the Iwami Ginzan Silver Mine in Ōda . [ 4 ] Seeking to increase silver production, in 1533 he introduced to the mine a Korean method of silver refining, which became the Hai-Fuki-Ho method. [ 5 ] The two technicians, Keiju (慶寿; Korean : 경수; Revised Romanization : Gyeongsu) and Sotan (宗丹; Korean : 종단; Revised Romanization : Jongdan), were invited to Japan to teach their skills. Historians have compared the Hai-Fuki-Ho method to the Medieval European seigerprozess [ de ] method of silver smelting. [ 6 ] Under the Hai-Fuki-Ho method, silver-containing copper ore would be cast-smelted with lead, then allowed to cool. The silver in the copper ore would bind to the lead, creating a single mixture. This mixture would then be heated so that the lead melted and separated out of the copper, taking the bonded silver with it. The silver-rich lead would then be treated with an oxidizing airflow to separate the silver. [ 7 ] This was akin to a liquation method. The high-purity silver produced by the Hai-Fuki-Ho method was highly desired by foreign merchants. [ 3 ] In addition, the process allowed for greater amounts of the silver to be produced by Japanese mines, which had more efficient refining processes than their competitors. By the 16th century, Japanese mines were producing up to one third of the world's silver. [ 3 ] The Hai-Fuki-Ho method was eventually replaced by more modern methods of silver refining. [ 6 ]
https://en.wikipedia.org/wiki/Haifukiho
Haig–Simons income or Schanz–Haig–Simons income is an income measure used by public finance economists to analyze economic well-being; it defines income as consumption plus change in net worth . [ 1 ] [ 2 ] It is represented by the mathematical formula: I = C + Δ NW , where C = consumption and Δ NW = change in net worth. Consumption refers to the money spent on goods and services of any kind. From a perfect theory view, consumption does not include capital expenditures , and the full spending would be amortized . [ clarification needed ] The measure of the income tax base equal to the sum of consumption and change in net worth was first advocated by German legal scholar Georg von Schanz . [ 3 ] His concept was further developed by the American economists Robert M. Haig and Henry C. Simons in the 1920s and 1930s. [ 4 ] [ 5 ] Haig defined personal income as "the money value of the net accretion to one's economic power between two points of time," a formulation that was intended to include the taxpayer's consumption. [ 6 ] That was thought by Simons to be interchangeable with his own formulation: In this concept, all inflows and outflows of resources are considered taxable income in a broad sense, including donations and windfall gains . [ 8 ] A cash-flow consumption tax is intended to confine the cash-flow tax burden to an individual's annual consumption and to remove nonconsumption expenses and current savings from the tax base. [ 9 ] The base is calculated by combining the year's gross receipts and savings withdrawals, and then subtracting the year's business and investment expenses and the year's additions to savings. [ 10 ] Progressive rates are applied to the resulting sum. [ 11 ] By contrast, the base for a theoretically correct Schanz–Haig–Simons (SHS) income tax is each individual's annual consumption plus current additions to savings.
Thus current receipts that are otherwise taxable remain in the tax base, even if they are saved, and withdrawals from earlier savings are not currently taxed since they were assessed in a prior year. [ 12 ] Stated differently, the SHS tax base has two components—current consumption and current savings (including current appreciation accruing to earlier investments)—whereas a cash-flow consumption tax has only a single component—current consumption. [ 13 ] In spite of their differences, however, both a cash-flow consumption tax and an SHS tax require that dollars paid out as business or investment expenses be eliminated from the base. This is necessary under a cash-flow consumption tax because business and investment expenses are not consumption [ 14 ] and it is necessary under an SHS tax because these expenditures are neither consumption nor additions to savings. [ 15 ] Since business and investment outlays have no place in the base of either tax, intuition suggests that business and investment interest expenses would be treated identically under a cash-flow consumption tax and an SHS tax. But they are not. The SHS tax and the cash-flow consumption tax take different structural approaches to the treatment of business and investment interest outlays although both systems share the general objective of removing current business and investment costs from the tax base. The Haig–Simons equation is different from the USA's individual income tax base calculations. For example, any employer contributions to employee health insurance are not included in taxable employee income. Under the Haig–Simons definition of income, such contributions would be included in income. Such contributions might not be included in a Haig–Simons income tax base, however, if their exclusion reflected "an appropriate adjustment in measuring ability to pay." [ 16 ] The European Union and most states in the USA supplement a tax on Haig–Simons income with a consumption tax .
In the European Union, a value added tax applies to purchases of goods and services on each level of exchange until it reaches the ultimate consumer. In the US, most states tax purchases of goods with a sales tax . Some argue that the definition is tautological : Others observe that it is "only a surrogate utility measure." [ 18 ] Some fault it for neutrality between savings and consumption. [ 19 ] Some scholars resist these criticisms, to the extent they conceive of Haig–Simons as dependent on utility; Simons rejected utility as the basis of the ability-to-pay standard. [ 20 ] Indeed, Simons rejected both the notion that humans are "equally efficient pleasure machines," [ 21 ] and the idea that taxation can take account of interpersonal utilities. [ 22 ] Simons sought a measurable definition for income, but his solution is open to criticism for reifying troubling dichotomies; for example, the Haig–Simons definition depends on the distinction between market and non-market values. [ 23 ]
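The contrast described above between the Schanz–Haig–Simons base and a cash-flow consumption base can be illustrated with a one-year numerical sketch (all figures are hypothetical):

```python
# All figures are hypothetical, for one taxpayer-year.
gross_receipts      = 90_000   # wages, investment returns, windfalls
savings_withdrawals = 10_000   # drawn down from prior years' savings
business_expenses   = 15_000   # business and investment costs
new_savings         = 20_000   # current additions to savings

# Cash-flow consumption base: receipts plus withdrawals, minus
# business/investment expenses, minus additions to savings.
cash_flow_base = (gross_receipts + savings_withdrawals
                  - business_expenses - new_savings)

# Haig–Simons: income = consumption + change in net worth.
consumption = cash_flow_base                      # what was actually spent
change_in_net_worth = new_savings - savings_withdrawals
shs_base = consumption + change_in_net_worth

print(cash_flow_base, shs_base)   # 65000 75000
```

Note that the SHS base (75,000) equals gross receipts minus business expenses: saved receipts stay in the base, while withdrawals from earlier savings are not taxed again, exactly as described above.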
https://en.wikipedia.org/wiki/Haig–Simons_income
Hainuwele , "The Coconut Girl", is a figure from the Wemale and Alune folklore of the island of Seram in the Maluku Islands , Indonesia . Her story is an origin myth . [ 1 ] The myth of Hainuwele was recorded by German ethnologist Adolf E. Jensen following the Frobenius Institute 's 1937–38 expedition to the Maluku Islands. [ 2 ] The study of this myth during his research on religious sacrifice led Jensen to the introduction of the concept of Dema Deity in ethnology . [ 3 ] Joseph Campbell first narrated the Hainuwele legend to an English-speaking audience in his work The Masks of God . [ 4 ] While hunting one day a man named Ameta found a coconut , something never before seen on Seram, that had been caught in the tusk of a wild boar . Ameta, who was part of one of the original nine families of the West Ceram people who had emerged from bananas , took the coconut home. That night, a figure appeared in a dream and instructed him to plant the coconut. Ameta did so, and in just a few days the coconut grew into a tall tree and bloomed. Ameta climbed the tree to cut the flowers to collect the sap , but in the process slashed his finger and the blood dropped onto a blossom. Nine days later, Ameta found in the place of this blossom a girl whom he named Hainuwele, meaning "Coconut Branch". He wrapped her in a sarong and brought her home. She grew to maturity with astonishing rapidity. Hainuwele had a remarkable talent: when she defecated she excreted valuable items. Thanks to these, Ameta became very rich. Hainuwele attended a dance that was to last for nine nights at a place known as Tamene Siwa . In this dance, it was traditional for girls to distribute areca nuts to the men. Hainuwele did so, but when the men asked her for areca nuts, she gave them instead the valuable things which she was able to excrete. Each day she gave them something bigger and more valuable: golden earrings , coral , porcelain dishes, bush-knives , copper boxes, and gongs . 
The men were happy at first, but gradually they decided that what Hainuwele was doing was uncanny and, driven by jealousy, they decided to kill her on the ninth night. In the successive dances, the men circled around the women at the center of the dance ground, Hainuwele amongst them, who handed out gifts. Before the ninth night, the men dug a pit in the center of the dance ground and, singling out Hainuwele, in the course of the dance they pushed her further and further inward until she was pushed right into the pit. The men quickly heaped earth over the girl, covering her cries with their song. Thus Hainuwele was buried alive, while the men kept dancing on the dirt, stomping it firmly down. Ameta, missing Hainuwele, went in search of her. Through an oracle he found out what had happened; he then exhumed her corpse and cut it into pieces, which he then re-buried around the village. These pieces grew into various new useful plants, including tubers , giving origin to the principal foods the people of Indonesia have enjoyed ever since. Ameta brought Hainuwele's cut arms to mulua Satene , the ruling deity over humans. With them, she built a gate in spiral shape through which all men should pass. Those who would be able to step across the gate would remain human beings, although henceforward mortal, becoming divided into Patalima (Men of the five) and Patasiwa (Men of the nine). Those unable to pass through the threshold became new kinds of animals or ghosts . Satene herself left the Earth and became ruler over the realm of the dead. [ 5 ] Patasiwa is the group to which both the Wemale and the Alune people belong. Hainuwele can be understood as a creation myth in which the natural environment, the daily tasks of men, and the social structures are given meaning. In the myth, spirits and plants are created, and an explanation is provided for the mortality of mankind and the formation of tribal divisions within the Wemale ethnic group.
Jensen identifies the Hainuwele figure with a Dema deity . [ 6 ] According to Jensen, the belief in a Dema deity is typical of cultures based on basic plant cultivation as opposed to cultures of hunter-gatherers , as well as complex agricultural cultures such as those based on the cultivation of grain . Jensen identifies the worship of Dema deities in the context of many different cultures worldwide. He assumes that it dates back to the Neolithic Revolution in the early history of mankind. One of the main characteristics of Dema deities is that they are killed by early immortal men (‘Dema’) and hacked to pieces that are strewn about or buried. [ citation needed ] Jensen found versions of the basic pattern of what could be defined as "Hainuwele Complex," in which a ritual murder and burial originates the tuberous crops on which people lived, spread throughout Southeast Asia and elsewhere. He contrasted these myths of the first era of agriculture, using root crops, with those in Asia and beyond that explained the origin of rice as coming from a theft from heaven, a pattern of myth found among grain-crop agriculturalists. These delineate two different eras and cultures in the history of agriculture itself. [ citation needed ] The earliest one transformed hunting-and-gathering societies' totemistic myths such as we find in Australian Aboriginal cultures, in response to the discovery of food cultivation, and centered on a Dema deity arising from the earth, and the later-developing grain-crop cultures centered on a sky god . Jensen explored the far-reaching culture-historical implications of these and other insights in his later work Myth and Cult among Primitive Peoples , published in 1963. [ 7 ] The worship of a Dema-deity implies that the creation of new life is inevitably attached to the end of life, to death. In light of this fact Jensen indicates that some rituals of the Wemale people, such as the “Maro dance,” include many elements of the Hainuwele myth. 
Therefore, myth and ritual were structured in a unity of meaning. [ 8 ] Recent [ when? ] research, [ citation needed ] , however, disputes the use of the term Dema-deity in the context of the Hainuwele story. It disagrees with the definition of the legend as a creation myth, preferring to define it as an origin myth. From the standpoint of cultural morphology, the idea of the Dema-deity is already problematic. Jensen assumes a connection between highly dissimilar myths of different cultures located in areas that are separated by great distances. Moreover, these purported parallels are not supported by archaeological or empirical data. [ citation needed ] Additionally, among indigenous people in Seram , there are different versions of the origin myth in which the "magic" woman secretly brought forth foodstuffs, sagu and valuable items from her menstrual blood and/or vagina (i.e., menstruated or born from her vagina rather than defecated) [ 9 ] [ 10 ] Some versions suggest the menstrual blood allowed these items to emerge from the earth. When discovered, she turned into the "original" sago starch producing tree (pohon sageru; the sugar palm - Arenga pinnata). This seems partly unusual as Metroxylon sagu is primarily used for starch extraction - one of the most important starch staples of Malukans. Metroxylon also produces palm fronds and leaves for house construction of walls, floorings, and thatch. On the other hand, the Arenga palm is useful to produce sugar, sweet and alcoholic beverages, and sago starch. [ citation needed ] The interpretation of the Hainuwele myth has put greater stress on the anthropological aspects. It underlines the fact that, since she had defecated them, the gifts that the generous girl Hainuwele was giving out had an unclean origin and, although useful, they defiled the persons accepting them. 
The uncanny way in which the material gifts were brought forward points out the reality that all the objects enumerated in the myth were foreign, not produced in Seram, and thus not available on the island before the 16th century. [ citation needed ] Alternatively, archaeological evidence from Seram and Ambon among other Malukan islands indicates 11th-14th century Song-Yuan Dynasty stoneware ceramics were quite prevalent; the green glazed Celadons being particularly important in marriage ritual exchanges and dispute settlements. [ 9 ] [ 10 ] There are Thai and Vietnamese glazed ceramics from the 14th century onwards, increasing in abundance during the various Ming Gaps in Chinese trade policy and practice. Interestingly, the distribution includes not only coastal port sites (such as Hitu and Hitu Lama on Ambon; or Serapi near Hatusua on Seram), but remote hinterland and highland sites. [ 9 ] [ 10 ] This indicates the regional and extra-regional value chains included both coastal, port, and inland (highland, hinterland) areas and peoples. Bronze drums and gongs have been available for over 2000 years. Gorom still has an excellent example of a bronze Dongson drum made in Vietnam. Dongson drums were distributed throughout the archipelago and likely related to the spice trade reaching as far as China since at least the Han Dynasty over 2000 years ago where cloves were mandatory for visitors to the court to freshen their breath. Spices from Maluku also made it westward, eventually to the Mideast and Europe well before the European Colonial Period began in the early 16th century (estimated at least to the first and early second millennia CE). [ citation needed ] Gold, silver, bronze (especially gongs), glass (bangles and beads), and iron items were also available prior to western colonialism - although mostly traded in rather than locally produced. The small Indo-Pacific bead technology originated from Arikamedu, India over 2000 years ago. 
The beads, for example, are culturally valuable items still in circulation today. It remains unknown if similar beads were also produced in Southeast Asia (primarily mainland Southeast Asia) using similar technology or possibly artisans and master craftsmen from India. In either case, it demonstrates the length, extent, and inclusion of nested regional to extra-regional value chains. [ citation needed ] Pre-colonial Chinese glass bangle fragments were also recovered from radiocarbon dated excavation contexts in Seram in the 1990s (dated to at least the Srivijaya period in the mid to late first millennium CE). Spices (clove, nutmeg and mace) were central to the extra-regional trade and demand. These items were endemic to Maluku and seeking control of the source drove much of Western Colonialism (e.g., the Portuguese dispatched ships to Maluku in 1512 after seizing Melaka in Malaysia in 1511). [ citation needed ] Nevertheless, many other items were very important - such as pearls, pearl shell, aromatic woods, birds of paradise feathers, etc. as well as and many foodstuffs such as sago (Metroxolyn sagu), kenari (Canarium spp.), palm sugar (Arenga pinnata), fermented palm drinks (sageru; sometimes distilled to make much stronger sopi), tripang, fish, meat, and others. [ 9 ] [ 10 ] Hainuwele's variety of presents brought about an element of corruption , bringing about inequality, greed, and jealousy into a roughly homogeneous society, represented by the standard present of areca nuts. [ citation needed ] Hence the various gifts of the Coconut girl can be interpreted as " dirty money ", polluting and degrading everyone who accepts it, bringing about a socioeconomic conflict and the deviation from an ideal state. 
[ citation needed ] Thus, the Hainuwele legend as recorded by Jensen was a myth that sought to resolve the inconsistencies with which the Wemale were confronted as change affected their society, by reconciling the more recent socioeconomic clash with the older mythical representations. [ citation needed ] Following the conflict brought about by the material objects obtained through Hainuwele, the introduction of mortality among humans became a sort of compensation meant to restore peace with the world of spirits and deities. Thus the Hainuwele myth signals the end of one era and the beginning of another. [ 11 ] Several of these interpretations are highly debatable. Some local versions display generic consistencies but substantial nuanced variations. Numerous interpretations (positive, neutral, and negative) can be posited and have been offered by local specialists. Additionally, Jensen's initial translations and interpretations (or subsequent interpretations by others) may be partially erroneous and/or questionably representative of the larger populations and diverse social groups they are meant to portray as a single cultural unit of analysis, whether in parts of Seram Island, Seram Island as a whole, or neighboring islands in Central Maluku. [ citation needed ]
https://en.wikipedia.org/wiki/Hainuwele
Hair multiplication , or hair cloning , is a proposed technique to counter hair loss. The technology is in its early stages, but multiple groups have demonstrated pieces of the technology at a small scale, with a few in commercial development. Scientists previously assumed that in the case of complete baldness, follicles are completely absent from the scalp, so they cannot be regenerated. However, the follicles are not entirely absent, as there are stem cells in the bald scalp from which the follicles naturally arise. The behavior of these follicles is suggested to be the result of progenitor cell deficiency in these areas. The basic idea of hair cloning is that healthy follicle cells or dermal papillae can be extracted from the subject from areas that are not bald and are not suffering hair loss. They can be multiplied (cloned) by various culturing methods [ 1 ] and the new cells can be injected back into the bald scalp, where they would produce healthy hair. One of the first companies to begin experimenting with hair cloning was Intercytex. [ 2 ] Intercytex tried to clone new hair follicles from stem cells harvested from the back of the neck. They hoped that if they multiplied the follicles and then implanted them back into the bald areas of the scalp, they would be successful in regrowing the hair itself. [ 3 ] In 2008, Intercytex concluded that it had failed to fully develop the hair cloning therapy and decided to discontinue all research. [ 3 ] The first time scientists were able to grow artificial hair follicles from stem cells was in 2010. Scientists at the Technische Universität Berlin in Germany, with Intercytex and several other research teams, created follicles from animal cells, though the follicles produced were "thinner than normal". They were able to clone one or two follicles from an extracted hair.
Aderans Research Institute, a Japanese company, worked on what they called the "Ji Gami" process, which involved the removal of a small strip of the scalp, which is broken down into individual follicular stem cells. After the extraction, these cells are cultured and injected back into the bald areas of the scalp. [ citation needed ] The trials continued in 2012. Aderans decided to discontinue the funding of its hair multiplication research in July 2013. [ 4 ] [ better source needed ] In 2012, scientists from the University of Pennsylvania School of Medicine published their own findings regarding hair cloning. [ 5 ] During their investigation, they found that non-bald and bald scalps have the same number of stem cells, but the progenitor cell number was significantly depleted in the case of the latter. Based on this, they concluded that it is not the absence of the stem cells that is responsible for hair loss but the unsuccessful activation of said cells. [ 6 ] In 2015, initial trials for human hair were successful in generating new follicles, [ 7 ] but the hairs grew in varying directions. In 2016, scientists in Japan announced they had successfully grown human skin in a lab. [ 8 ] The skin was created using induced pluripotent stem cells , and when implanted in a mouse, the skin grew hairs successfully. The group has formed partnerships with Organ Technologies and Kyocera Corporation to commercially develop the research. [ 9 ] dNovo Bio, a Silicon Valley –based company, was founded in 2018 and has demonstrated growing a patch of human hair on a mouse. [ 10 ] [ better source needed ] In July 2019, a researcher from San Diego–based Stemson Therapeutics, partnered with UCSD , successfully grew his own follicles on a mouse using iPSC -derived epithelial and dermal cell therapy. The hair was aligned properly with a 3D-printed biodegradable shaft. The hairs were permanent and regenerated naturally. [ 11 ] Stemson Therapeutics closed operations in December 2024.
[ 12 ] In October 2022, researchers from the Japan-based Yokohama National University successfully cloned fully-grown mouse hair follicles for the first time in history. [ 13 ]
https://en.wikipedia.org/wiki/Hair_cloning
A hair dryer (the handheld type also referred to as a blow dryer ) is an electromechanical device that blows ambient air in hot or warm settings for styling or drying hair. [ 1 ] [ 2 ] Hair dryers enable better control over the shape and style of hair, by accelerating and controlling the formation of temporary hydrogen bonds within each strand. These bonds are powerful, [ a ] but are temporary and extremely vulnerable to humidity . They disappear with a single washing of the hair. Hairstyles using hair dryers usually have volume and discipline, which can be further improved with styling products, hairbrushes, and combs during drying to add tension, hold and lift. Hair dryers were invented in the late 19th century. The first model was created in 1911 by Gabriel Kazanjian. Handheld, household hair dryers first appeared in 1920. Hair dryers are used in beauty salons by professional stylists, as well as by consumers at home. In 1888 the first hair dryer was invented by French stylist Alexandre Godefroy [ fr ] . [ 3 ] His invention was a large, seated version that consisted of a bonnet that attached to the chimney pipe of a gas stove. Godefroy invented it for use in his hair salon in France, and it was not portable or handheld. It could only be used by having the person sit underneath it. [ 4 ] Armenian American inventor Gabriel Kazanjian was the first to patent a hair dryer in the United States, in 1911. [ 5 ] Around 1920, hair dryers began to go on the market in handheld form. This was due to innovations by National Stamping and Electricworks under the White Cross brand, [ 6 ] and later the U.S. Racine Universal Motor Company and the Hamilton Beach Co., which allowed the dryer to be small enough to be held by hand. Even in the 1920s, the new dryers were often heavy, weighing in at approximately 2 pounds (0.9 kg), and were difficult to use. They also had many instances of overheating and electrocution .
Hair dryers were only capable of using 100 watts, which increased the amount of time needed to dry hair (the average dryer today can use up to 2000 watts of heat). [ 7 ] Since the 1920s, development of the hair dryer has mainly focused on increasing wattage and on superficial changes to the exterior and materials; the basic mechanism of the dryer has not changed significantly since its inception. One of the more important changes was the switch to plastic housings, which made dryers more lightweight. This caught on in the 1960s with the introduction of better electrical motors and the improvement of plastics. Another important change happened in 1954, when GEC redesigned the dryer to move the motor inside the casing. [ 8 ] The bonnet dryer was introduced to consumers in 1951. This type consisted of a dryer, usually in a small portable box, connected by a tube to a bonnet with holes in it that could be placed on top of a person's head, delivering an even amount of heat to the whole head at once. [ 9 ] The 1950s also saw the introduction of the rigid-hood hair dryer, which is the type most frequently seen in salons. It has a hard plastic helmet that wraps around the person's head and works similarly to the bonnet dryer of the 1950s, but at a much higher wattage. [ 8 ] In the 1970s, the U.S. Consumer Product Safety Commission set up guidelines that hair dryers had to meet to be considered safe to manufacture. Since 1991, the CPSC has mandated that all dryers use a ground fault circuit interrupter so that a dryer cannot electrocute a person if it gets wet. [ 10 ] By 2000, deaths caused by blow dryers had dropped to fewer than four people a year, a stark contrast to the hundreds of electrocution accidents during the mid-20th century. Most hair dryers consist of electric heating coils and a fan that blows the air (usually powered by a universal motor ).
The heating element in most dryers is a bare, coiled nichrome wire that is wrapped around mica insulators. Nichrome is used due to its high resistivity , and low tendency to corrode when heated. [ 11 ] A survey of stores in 2007 showed that most hair dryers had ceramic heating elements (like ceramic heaters ) because of their "instant heat" capability. This means that it takes less time for the dryers to heat up and for the hair to dry. [ 12 ] Many of these dryers have "cool shot" buttons that turn off the heater and blow room-temperature air while the button is pressed. [ 13 ] This function helps to maintain the hairstyle by setting it. The colder air reduces frizz and can help to promote shine in the hair. Many feature "ionic" operation, to reduce the build-up of static electricity in the hair, [ 14 ] though the efficacy of ionic technology is of some debate. [ 15 ] Manufacturers claim this makes the hair "smoother". [ citation needed ] Hair dryers are available with attachments, such as diffusers, airflow concentrators, and comb nozzles. Hair dryers have been cited as an effective treatment for head lice . [ 16 ] Today there are two major types of hair dryers: the handheld and the rigid-hood dryer. A hood dryer has a hard plastic dome that fits over a person's head to dry their hair. Hot air is blown out through tiny openings around the inside of the dome so the hair is dried evenly. Hood dryers are mainly found in hair salons. [ 17 ] A hair dryer brush (also called a "hot air brush", "round brush hair dryer", or "hair styler" [ 18 ] ) has the shape of a brush and is also used as a volumizer. [ 19 ] There are two types of round brush hair dryers – rotating and static. Rotating round brush hair dryers have barrels that rotate automatically, while static round brush hair dryers do not.
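The wattage figures mentioned in the history above (roughly 100 W for early dryers versus up to 2000 W today) can be put in perspective with a back-of-the-envelope calculation. The sketch below is illustrative only: the 50 g of retained water is an assumed figure, and the model pretends all heater power goes into evaporating water, ignoring airflow, heat losses, and mechanical drying, so real-world times differ.

```python
# Back-of-the-envelope estimate of how dryer wattage affects drying time.
# Assumption for illustration: all electrical power evaporates water.

LATENT_HEAT_VAPORIZATION = 2.26e6  # J per kg of water (approximate)

def drying_time_seconds(water_mass_kg: float, power_watts: float) -> float:
    """Time to evaporate the given mass of water at the given power."""
    energy_needed = water_mass_kg * LATENT_HEAT_VAPORIZATION  # joules
    return energy_needed / power_watts

water = 0.05  # kg; ~50 g of water retained in towel-dried hair (assumed)
for watts in (100, 2000):
    minutes = drying_time_seconds(water, watts) / 60
    print(f"{watts} W -> about {minutes:.1f} minutes")
```

Even under these crude assumptions, the 20-fold increase in wattage translates directly into a 20-fold reduction in the time needed to supply the same evaporation energy.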
The British historical drama television series Downton Abbey made note of the invention of the portable hair dryer when a character purchased one in Series 6 Episode 9, set in the year 1925.
https://en.wikipedia.org/wiki/Hair_dryer
Hair ice , also known as ice wool or frost beard , is a type of ice that forms on dead wood and takes the shape of fine, silky hair. [ 1 ] It is somewhat uncommon, and has been reported mostly at latitudes between 45 and 55 °N in broadleaf forests . [ 1 ] [ 2 ] The meteorologist (and discoverer of continental drift ) Alfred Wegener described hair ice on wet dead wood in 1918, [ 3 ] assuming some specific fungi as the catalyst, a theory mostly confirmed by Gerhart Wagner and Christian Mätzler in 2005. [ 4 ] [ 5 ] [ 6 ] In 2015, the fungus Exidiopsis effusa was identified as key to the formation of hair ice. [ 1 ] Hair ice forms on moist, rotting wood from broadleaf trees when temperatures are slightly under 0 °C (32 °F) and the air is humid . [ 1 ] The hairs appear to root at the mouth of wood rays (never on the bark), and their thickness is similar to the diameter of the wood ray channels. [ 1 ] A piece of wood that produces hair ice once may continue to produce it over several years. [ 1 ] Each of the smooth, silky hairs has a diameter of about 0.02 mm (0.0008 in) and a length of up to 20 cm (8 in). [ 1 ] The hairs are brittle, but take the shape of curls and waves. [ 1 ] They can maintain their shape for hours and sometimes days. [ 1 ] This long lifetime indicates that something is preventing the small ice crystals from recrystallizing into larger ones, since recrystallization normally occurs very quickly at temperatures near 0 °C (32 °F). [ 1 ] In 2015, German and Swiss scientists identified the fungus Exidiopsis effusa as key to the formation of hair ice. [ 1 ] The fungus was found on every hair ice sample examined by the researchers, and disabling the fungus with fungicide or hot water prevented hair ice formation. [ 1 ] The fungus shapes the ice into fine hairs through an uncertain mechanism and likely stabilizes it by providing a recrystallization inhibitor similar to antifreeze proteins . [ 1 ] [ 2 ]
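The dimensions quoted above (a diameter of about 0.02 mm and a length of up to 20 cm) imply an extreme length-to-diameter ratio, which is why the strands look hair-like. A trivial arithmetic check, using only the figures from the article:

```python
# Aspect ratio of a hair-ice strand from the dimensions given in the article.
diameter_m = 0.02e-3   # 0.02 mm expressed in metres
length_m = 20e-2       # 20 cm expressed in metres

aspect_ratio = length_m / diameter_m  # length divided by diameter
print(f"length-to-diameter ratio: {aspect_ratio:,.0f}")
```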
https://en.wikipedia.org/wiki/Hair_ice
Hair oil is an oil-based cosmetic product intended to improve the condition of hair. Various types of oils may be included in hair oil products. These often purport to aid with hair growth, dryness, or damage. [ 1 ] [ 2 ] Ancient Egyptians paid special attention to hair, and images of hairdressers are depicted in ancient relics found by archaeologists. Archaic texts from this era contained “recipes” used by the Egyptians to tackle baldness. During this time period, people used combs and ointments to groom and style their hair. [ 3 ] Hair oiling is also a traditional ancient Indian technique, often used as a predecessor of the modern shampoo. [ 4 ] Many cosmetic products, including shampoo, heat protectants, hair drops, and hair masks, contain oils. [ citation needed ] Humans produce a natural hair oil called sebum from glands around each follicle . Other mammals produce similar oils such as lanolin . Similar to natural oils, artificial hair oils can decrease scalp dryness by forming hydrophobic films that decrease transepidermal water loss , reducing evaporation of water from the skin. [ 5 ] Oils on the hair can reduce the absorption of water that damages hair strands through repeated hygral stress as the hair swells when wet, then shrinks as it dries. [ 6 ] Oils also protect cuticle cells in the hair follicle and prevent the penetration of substances like surfactants . [ 6 ] Saturated and monounsaturated oils diffuse into hair better than polyunsaturated ones. [ 7 ] Mineral and vegetable oils are used to make a variety of commercial and traditional hair oils. Coconut oil is a common ingredient. Other vegetable sources include almond , argan , babassu , burdock , castor , and tea seed . [ citation needed ] Natural oils, which come from sources rich in nutrients such as vitamins and fatty acids, are the more commonly used cosmetic products on the scalp.
[ 8 ] [ better source needed ] Coconut oil has properties that reduce protein loss in hair when used before and after washing. [ 9 ] Coconut oil contains lauric acid , a type of fatty acid that may penetrate the hair shaft due to its low molecular weight and linear conformation. [ 10 ] Argan oil originates from Morocco and is known for a conditioning effect that leaves hair soft and relieves frizz . [ citation needed ] Avocado oil is rich in nutrients; it has a high concentration of vitamin E , an antioxidant that may decrease hair loss and encourage hair growth. [ 11 ] [ irrelevant citation ] Oils including almond oil, grapeseed oil , jojoba oil , and olive oil may promote hair elasticity and help prevent dryness and hair damage. [ 5 ]
https://en.wikipedia.org/wiki/Hair_oil
Rolling hairpin replication ( RHR ) is a unidirectional, strand displacement form of DNA replication used by parvoviruses, a group of viruses that constitute the family Parvoviridae . Parvoviruses have linear, single-stranded DNA (ssDNA) genomes in which the coding portion of the genome is flanked by telomeres at each end that form hairpin loops . During RHR, these hairpin loops repeatedly unfold and refold to change the direction of DNA replication so that replication progresses in a continuous manner back and forth across the genome. RHR is initiated and terminated by an endonuclease encoded by parvoviruses that is variously called NS1 or Rep, and RHR is similar to rolling circle replication , which is used by ssDNA viruses that have circular genomes. Before RHR begins, a host cell DNA polymerase converts the genome to a duplex form in which the coding portion is double-stranded and connected to the terminal hairpins. From there, messenger RNA (mRNA) that encodes the viral initiator protein is transcribed and translated to synthesize the protein. The initiator protein commences RHR by binding to and nicking the genome in a region adjacent to a hairpin called the origin and establishing a replication fork with its helicase activity. Nicking leads to the hairpin unfolding into a linear, extended form. The telomere is then replicated and both strands of the telomere refold back in on themselves to their original turn-around forms. This repositions the replication fork to switch templates to the other strand and move in the opposite direction. Upon reaching the other end, the same process of unfolding, replication, and refolding occurs. Parvoviruses vary in whether both hairpins are the same or different. Homotelomeric parvoviruses such as adeno-associated viruses (AAV), i.e. those that have identical or similar telomeres, have both ends replicated by terminal resolution, the previously described process. 
Heterotelomeric parvoviruses such as minute virus of mice (MVM), i.e. those that have different telomeres, have one end replicated by terminal resolution and the other by an asymmetric process called junction resolution. During asymmetric junction resolution, the duplex extended form of the telomere reorganizes into a cruciform-shaped junction , and the correct orientation of the telomere is replicated off the lower arm of the cruciform. As a result of RHR, a replicative molecule that contains numerous copies of the genome is synthesized. The initiator protein periodically excises progeny ssDNA genomes from this replicative concatemer. Parvoviruses are a family of DNA viruses that have single-stranded DNA (ssDNA) genomes enclosed in rugged, icosahedral protein capsids 18–26 nanometers (nm) in diameter. [ 1 ] Unlike most other ssDNA viruses, which have circular genomes that form a loop, parvoviruses have linear genomes with short terminal sequences at each end of the genome. These termini are capable of being formed into structures called hairpins or hairpin loops and consist of short, imperfect palindromes. [ 2 ] [ 3 ] Varying from virus to virus, the coding region of the genome is 4–6 kilobases (kb) in length, and the termini are 116–550 nucleotides (nt) in length each. The hairpin sequences provide most of the cis -acting information needed for DNA replication and packaging. [ 1 ] [ 4 ] Parvovirus genomes may be either positive-sense or negative-sense . Some species, such as adeno-associated viruses (AAV) like AAV2, package a roughly equal number of positive-sense and negative-sense strands into virions; others, such as minute virus of mice (MVM), show a preference toward packaging negative-sense strands; and still others have varying proportions.
[ 4 ] Because of this disparity, the 5′-end (usually pronounced "five prime end") of the strand that encodes the non-structural proteins is called the "left end", and the 3′-end (usually pronounced "three prime end") is called the "right end". [ 3 ] In reference to the negative-sense strand, the 3′-end is the left side and the 5′-end is the right side. [ 4 ] [ 5 ] Parvoviruses replicate their genomes through a process called rolling hairpin replication (RHR), which is a unidirectional, strand displacement form of DNA replication. Before replication, the coding portion of the ssDNA genome is converted to a double-strand DNA (dsDNA) form, which is then cleaved by a viral protein to initiate replication. Sequential unfolding and refolding of the hairpin termini acts to reverse the direction of synthesis, which allows replication to go back and forth along the genome to synthesize a continuous duplex replicative form (RF) DNA intermediate. Progeny ssDNA genomes are then excised from the RF intermediate. [ 4 ] [ 6 ] While the general aspects of RHR are conserved across genera and species, the exact details likely vary. [ 7 ] Parvovirus genomes have distinct starting points of replication that contain palindromic DNA sequences. These sequences are able to alternate between inter- and intrastrand basepairing throughout replication, and they serve as self-priming telomeres at each end of the genome. [ 2 ] They also contain two key sites necessary for replication used by the initiator protein: a binding site and a cleavage site. [ 8 ] Telomere sequences have significant complexity and diversity, suggesting that they perform additional functions for many species. [ 1 ] [ 9 ] In MVM, for example, the left-end hairpin contains binding sites for transcription factors that modulate gene expression from an adjacent promoter . For AAV, the hairpins can bind to MRE11/Rad50/NBS1 (MRN) complexes and Ku70/80 heterodimers, which are involved in sensing and repairing DNA. 
[ 5 ] In general, however, they have the same basic structure: imperfect palindromes in which a fully or primarily basepaired region terminates into an axial symmetry. These palindromes can fold into a variety of structures such as a Y-shaped structure and a cruciform-shaped structure. During replication, the termini act as hinges in which the imperfectly basepaired or partial cruciform regions surrounding the axis provide a favorable environment for unfolding and refolding of the hairpin. [ 2 ] [ 3 ] [ 4 ] Some parvoviruses, such as AAV2, are homotelomeric, meaning the two palindromic telomeres are similar or identical and form part of larger (inverted) terminal repeat ((I)TR) sequences. Replication at each terminal ending is therefore similar. Other parvoviruses, such as MVM, are heterotelomeric, meaning they have two physically different telomeres. As a result, heterotelomeric parvoviruses tend to have a more complex replication process since the two telomeres have different replication processes. [ 2 ] [ 3 ] [ 4 ] In general, homotelomeric parvoviruses replicate both ends via a process called terminal resolution, whereas heterotelomeric parvoviruses replicate one end by terminal resolution and the other end by an asymmetric process called junction resolution. [ 4 ] [ 5 ] [ 6 ] [ 10 ] Whether a genus is hetero- or homotelomeric, along with other genomic characteristics, is shown in the following table. [ 4 ] The entire process of rolling hairpin replication, which has distinct, sequential stages, can be summarized as follows: [ 4 ] [ 5 ] [ 7 ] Upon cell entry, a tether about 24 nucleotides in length that attaches the viral protein NS1, essential in replication, to the virion is cleaved off the virion to be reattached later. [ 3 ] After cell entry, virions accumulate in the cell nucleus while the genome is still contained within the capsid. These capsids may be reconfigured to an open or transitioned state during entry. 
The exact mechanism by which the genome leaves the capsid is unclear. [ 9 ] For AAV, it has been suggested that nuclear factors disassemble the capsid, whereas for MVM, it appears as if the genome is ejected in a 3′-to-5′ direction from an opening in the capsid called a portal. [ 5 ] Parvoviruses lack genes capable of inducing resting cells to enter their DNA synthesis phase (S-phase). Additionally, naked ssDNA is likely to be unstable, perceived as foreign by the host cell, or improperly replicated by host DNA repair . For these reasons, the genome must either be converted rapidly to its less obstructive, more stable duplex form or retained within the capsid until it is uncoated during S-phase. Typically, the latter occurs, and the virion remains silent in the nucleus until the host cell enters S-phase by itself. During this waiting period, virions may make use of certain strategies to evade host defense mechanisms to protect their hairpins and DNA to reach S-phase, [ 9 ] though it is unclear how this occurs. [ 4 ] Since the genome is packaged as ssDNA, creation of a complementary strand is necessary before gene expression . [ 5 ] [ 9 ] DNA polymerases are only able to synthesize DNA in a 5′ to 3′ direction, and they require a basepair primer to begin synthesis. Parvoviruses address these limitations by using their termini as primers for complementary strand synthesis. [ 9 ] A 3′ hydroxyl end of the left-hand (3′) terminus pairs with an internal base to prime initial DNA synthesis, resulting in the conversion of the ssDNA genome to its first duplex form. [ 1 ] [ 7 ] This is a monomeric double-stranded DNA molecule in which the two strands are covalently cross-linked to each other at the left end by a single copy of the viral telomere. Synthesis of the duplex form precedes NS1 expression so that when the replication fork during initial complementary strand synthesis reaches the right (5′) end, it does not displace and copy the right-end hairpin.
This allows the 3′-end of the new DNA strand to be covalently ligated to the 5′-end of the right hairpin by a host ligase, thereby creating the duplex molecule. During this step, the tether sequence that was present before viral entry into the cell is resynthesized. [ 6 ] Once an infected cell enters S-phase, parvovirus genomes are converted to their duplex form by host replication machinery, and mRNA that encodes non-structural (NS) proteins is transcribed starting from a viral promoter (P4 for MVM). [ 4 ] [ 5 ] [ 9 ] One of these NS proteins is usually called NS1 but also Rep1 or Rep68/78 for the genus Dependoparvovirus , which AAV belongs to. [ 4 ] NS1 is a site-specific DNA binding protein that acts as the replication initiator protein [ 9 ] via nickase activity. [ 15 ] It also mediates excision of both ends of the genome from duplex RF intermediates via a transesterification reaction that introduces a nick into specific duplex origin sequences. [ 4 ] Key components of NS1 include an HUH endonuclease domain toward the N-terminus of the protein and a superfamily 3 (SF3) helicase toward the C-terminus , [ 16 ] as well as ATPase activity. [ 1 ] It binds to ssDNA, RNA, and site-specifically on duplex DNA at reiterations of the tetranucleotide sequence 5′-ACCA-3′ 1–3 . [ 1 ] [ 9 ] These sequences are present in the viral replication origin sites and repeated at multiple sites throughout the genome in more or less degenerative forms. [ 15 ] NS1 nicks the covalently-closed right-end telomere via a transesterification reaction that liberates a basepaired 3′ nucleotide as a free hydroxyl (-OH). [ 4 ] This reaction is assisted by a host DNA-binding protein from the high mobility group 1/2 (HMG1/2) family and is made in the replication origin, OriR , which was created by sequences in and immediately adjacent to the right hairpin. 
The left-end telomere of MVM, a heterotelomeric parvovirus, contains sequences that can give rise to replication origins in higher-order duplex intermediates, but these sequences are inactive in the hairpin terminus of the monomeric molecule, so NS1 always initiates replication at the right end. [ 6 ] The 3′-OH that is freed by nicking acts as a primer for the DNA polymerase to start complementary strand synthesis [ 8 ] while NS1 remains covalently attached to the 5′-end via a tyrosine residue. [ 1 ] Consequently, a copy of NS1 remains attached to the 5′-end of all RF and progeny DNA throughout replication, packaging, and virion release. [ 4 ] [ 6 ] NS1 is only able to bind to this specific site by assembling into homodimers or higher order multimers, which happens naturally with the addition of adenosine triphosphate (ATP) that is likely mediated by NS1's helicase domain. In vivo studies have shown that NS1 can form into a variety of oligomeric states, but it most likely assembles into hexamers to fulfill the functions of both the endonuclease domain and helicase domain. [ 15 ] Starting from the location at the nick, it is thought that NS1 organizes a replication fork and acts as the replicative 3′-to-5′ helicase. Near its C-terminus, NS1 contains an acidic transcriptional activation domain. This domain acts to upregulate transcription starting from a viral promoter (P38 for MVM) when NS1 is bound to a series of 5′-ACCA-3′ motifs, called the tar sequence, positioned upstream (toward the 5′-end) of the promoter unit, and via interaction with NS1 and various transcription factors. [ 15 ] NS1 also recruits the cellular replication protein A (RPA) complex, which is essential for establishing the new replication fork and for binding and stabilizing displaced single strands. [ 6 ] While NS1 is the only non-structural protein essential for all parvoviruses, some have other individual proteins that are essential for replication. 
For MVM, NS2 appears to reprogram the host cell for efficient DNA amplification, single-strand progeny synthesis, capsid assembly, and virion export, though it seems to lack direct involvement in these processes. NS2 initially accumulates up to three times more quickly than NS1 in the early S-phase but is turned over rapidly by a proteasome-mediated pathway. As the infectious cycle progresses, NS2 becomes less common as P38-driven transcription becomes more prominent. [ 15 ] Another example is the nuclear phosphoprotein NP1 of bocaviruses, which, if not synthesized, results in non-viable progeny genomes. [ 5 ] As viral NS proteins accumulate, they commandeer the host cell's replication apparatus, terminating host cell DNA synthesis and causing viral DNA amplification to begin. Interference with host DNA replication may be due to direct effects on host replication proteins that are not essential for viral replication, by extensive nicking of host DNA, or by the restructuring of the nucleus during viral infection. Early in infection, parvoviruses establish replication foci in the nucleus that are termed autonomous parvovirus-associated replication (APAR) bodies. NS1 co-localizes with replicating viral DNA in these structures with other cellular proteins necessary for viral DNA synthesis, [ 15 ] while other complexes not required for replication are sequestered from APAR bodies. The exact manner by which proteins are included or excluded from APAR bodies is unclear and appears to vary from species to species and between cell types. [ 5 ] As infection progresses, APAR microdomains begin to coalesce with other, formerly distinct, nuclear bodies to form progressively larger nuclear inclusions where viral replication and virion assembly occur. After S-phase begins, the host cell is forced to synthesize viral DNA and cannot leave S-phase. [ 17 ] The right-end hairpin of MVM contains 248 nucleotides [ 10 ] organized into a cruciform shape.
[ 1 ] This region is almost perfectly basepaired, with just three unpaired bases at the axis and a mismatched region positioned 20 nucleotides from the axis. A three nucleotide insertion, AGA or TCT, on one strand separates opposing pairs of NS1 binding sites, creating a 36 basepair-length palindrome that can assume an alternate cruciform configuration. This configuration is expected to destabilize the duplex, which facilitates its ability to function as a hinge. The mismatch of the unpaired bases, rather than the three-nucleotide sequence itself, may help to promote instability of duplex DNA. [ 10 ] Fully-duplex linear forms of the right-end hairpin sequence also function as NS1-dependent origins. For many parvoviral telomeres, however, only an initiator binding site next to the nick site is required for the origin function so that the minimal sequences required for nicking are less than 40 basepairs in length. For MVM, the minimal right-end origin is around 125 basepairs in length and includes most of the hairpin sequence because at least three recognition elements are involved: the nick site 5′-CTWWTCA-3′ (element 1), positioned seven nucleotides upstream from a duplex NS1-binding site (element 2) that is oriented to have the attached NS1 complex extending over the nick site, and a second NS1-binding site (element 3), which is adjacent to the hairpin axis. [ 10 ] The second binding site is over 100 basepairs away from the nick site but is required for NS1-mediated cleavage. [ 10 ] In vivo , there is slight variation in the position of the nick, plus or minus one nucleotide, with one position preferred. During nicking, this site is likely exposed as a single strand and is potentially stabilized as a minimal stem-loop by the tetranucleotide inverted repeats to the sides of the site. Optimal forms of the NS1-binding site contain at least three tandem copies of the 5′-ACCA-3′ sequence. 
Modest alterations to these motifs only have a small effect on affinity, which suggests that each tetranucleotide motif is recognized by different molecules in the NS1 complex. The NS1-binding site that positions NS1 over the nick site in the right-end origin is a high affinity site. [ 18 ] With ATP, NS1 binds asymmetrically over the aforementioned sequence, protecting a region 41 basepairs in length from digestion. This footprint extends just five nucleotides beyond the 3′-end of the ACCA repeat but 22 nucleotides beyond the 5′-end so that the footprint ends 15 nucleotides beyond the nick site, placing NS1 in position to nick the origin. Nicking only occurs if the second, distant NS1-binding site is also present in the origin and the entire complex is activated by addition of HMG1. [ 18 ] In the absence of NS1, HMG1 binds the hairpin sequence independently, causing it to bend, without protecting any region from digestion. HMG1 can also directly bind to NS1 and mediates interactions between NS1 molecules bound to their recognition elements in the origin, so it is essential for formation of the cleavage complex. The ability of the axis region to reconfigure into a cruciform does not appear to be important in this process. Cleavage is dependent on the correct spacing of the elements of the origin, so additions and deletions can be lethal, whereas substitutions can be tolerated. Addition of HMG1 appears to only slightly adjust the sequences protected by NS1, but the conformation of the intervening DNA changes, folding into a double helical loop that extends about 30 basepairs through a guanine -rich element in the hairpin stem. Between this element and the nick site there are five thymidine residues included in the loop, and the site has a region to its side containing many alternating adenine and thymine residues, which likely increases flexibility. 
The creation of the loop likely allows the terminus to assume a specific 3-dimensional structure required to activate the nickase since origins that fail to reconfigure into a double-helical loop once HMG1 is added are not nicked. [ 18 ] Following nicking, a replication fork is established at the newly exposed 3′ nucleotide that proceeds to unfold and copy the right-end hairpin through a series of melting and reannealing reactions. [ 9 ] [ 18 ] This process begins once NS1 nicks the inboard end of the original hairpin. The terminal sequence is then copied in the opposite direction, which produces an inverted copy of the original sequence. [ 9 ] The end result is a duplex extended-form terminus that contains two copies of the terminal sequence. [ 18 ] While NS1 is required for this, it is unclear if unfolding is mediated by its helicase activity in front of the fork or by destabilization of the duplex following DNA binding at one of its 5′-(ACCA) n -3′ recognition sites. [ 6 ] This process is usually called terminal resolution but also hairpin transfer or hairpin resolution. [ 6 ] [ 9 ] Terminal resolution occurs with each round of replication, so progeny genomes contain an equal number of each terminal orientation. The two orientations are termed "flip" and "flop", [ 5 ] [ 6 ] and may be represented as R and r, or B and b, for the flip and flop of the right-end telomere and L and l, or A and a, for the flip and flop of the left-end telomere. [ 7 ] [ 19 ] Since parvoviral terminal palindromes are imperfect, it is easy to identify which orientation is which. [ 1 ] The extended-form duplex telomeres generated during terminal resolution are melted, mediated by NS1 with ATP hydrolysis , causing individual strands to fold back on themselves to create hairpin "rabbit ear" structures that carry the flip and flop forms of the termini. 
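Because hairpin transfer copies the terminal sequence in the opposite direction, the new terminus is the inverted complement of the original, and for an imperfect palindrome the two orientations differ. The Python sketch below illustrates this with an invented miniature sequence standing in for a real ~120-nucleotide telomere.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(COMPLEMENT)[::-1]

# Invented imperfect palindrome standing in for a terminal hairpin ("flip").
flip = "GAACCAACTTGGTC"    # not equal to its own reverse complement
flop = revcomp(flip)       # orientation produced by hairpin transfer

print(flop != flip)              # imperfect palindrome: orientations differ
print(revcomp(flop) == flip)     # a second transfer restores "flip"
```

A perfect palindrome would satisfy `revcomp(flip) == flip`, making flip and flop indistinguishable; the imperfections in real parvoviral telomeres are what make the two orientations identifiable.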
This requires the NS1 helicase activity as well as its site-specific binding activity, the latter of which enables NS1 to bind to symmetrical copies of NS1-binding sites that surround the axis of the extended-form terminus. [ 10 ] [ 20 ] Rabbit ear formation allows the 3′ nucleotide of the newly synthesized DNA strand to pair with an internal base, which repositions the replication fork in a strand-switching maneuver that primes synthesis of additional linear sequences. [ 10 ] Switching from DNA synthesis to rabbit-ear formation at the end of terminal resolution may require different types of NS1 complexes. Alternatively, the NS1 complex may remain intact during this switch, being ready to start strand displacement synthesis following refolding into rabbit ears. [ 20 ] After the replication fork is repositioned, replication continues toward the left end, using the newly synthesized DNA strand as a template. [ 7 ] At the left end of the genome, NS1 is probably required to unfold the hairpin. NS1 appears to be directly involved in melting-out and reconfiguring the resulting extended-form left-end duplexes into rabbit ear structures, though this reaction seems to be less efficient than at the right-end terminus. Dimeric and tetrameric concatemers of the genome are generated successively for MVM. In these concatemers, alternating unit-length genomes are fused through a palindromic junction in left-end to left-end and right-end to right-end orientations. [ 1 ] [ 10 ] In total, RHR results in coding sequences of the genome being copied twice as often as the termini. [ 1 ] [ 7 ] [ 10 ] Both linear and hairpin configurations of the right-end telomere support initiation of RHR, so resolution of duplex right-end to right-end junctions can occur symmetrically on the basepaired duplex sequence or after this complex is melted and reconfigured into two hairpins. It is unclear which of these two reactions is more common since both appear to produce identical results. 
[ 20 ] For AAV, each telomere is 125 bases in length and capable of folding into a T-shaped hairpin. AAV contains a Rep gene that encodes four Rep proteins, two of which, Rep68 and Rep78, act as replication initiator proteins and fulfill the same functions as NS1, such as the nickase and helicase activities. They recognize and bind to a (GAGC) 3 sequence in the stem region of the terminus and nick a site 20 bases away termed trs . AAV undergoes the same process of terminal resolution as MVM, but at both ends of the genome. The other two Rep proteins, Rep52 and Rep40, are not involved in DNA replication but are implicated in synthesis of progeny. AAV replication is dependent on a helper virus, either an adenovirus or a herpesvirus, that coinfects the cell. In the absence of coinfection, the AAV genome is integrated into the host cell's DNA until coinfection occurs. [ 1 ] A general rule is that parvoviruses with identical termini, i.e. homotelomeric parvoviruses such as AAV and B19, replicate both ends by terminal resolution, generating equal numbers of flips and flops of each telomere. [ 1 ] [ 4 ] [ 6 ] Parvoviruses that have different termini, i.e. heterotelomeric parvoviruses like MVM, replicate one end by terminal resolution and the other end by asymmetric junction resolution, which conserves a single-sequence orientation and requires different structural arrangements and cofactors to activate NS1's nickase. [ 4 ] [ 10 ] AAV DNA intermediates containing covalently linked sense and antisense strands yield genomic concatemers under denaturing conditions, indicating that AAV replication also synthesizes duplex concatemers that require some form of junction resolution. [ 10 ] In negative-sense MVM genomes, the left-end hairpin is 121 nucleotides in length and exists in a single flip sequence orientation. 
This telomere is Y-shaped and contains small internal palindromes that fold into the "ears" of the Y, a duplex stem region 43 nucleotides in length that is interrupted by an asymmetric thymidine residue, and a mismatched "bubble" sequence in which the 5′-GAA-3′ sequence on the inboard arm lies opposite 5′-GA-3′ in the outboard strand. [ 1 ] [ 20 ] Sequences in this hairpin are involved in both replication and regulation of transcription. The elements involved in these two functions are divided between the two arms of the hairpin. [ 20 ] The left-end telomere of MVM, and likely of all heterotelomeric parvoviruses, cannot function as a replication origin in its hairpin configuration. Instead, a single origin on the lower strand is created when the hairpin is unfolded, extended, and copied to form a duplex basepaired sequence that spans adjacent genomes in the dimer RF. Within this structure, the sequence from the outboard arm that surrounds a GA/TC [ 1 ] dinucleotide serves as an origin, OriL TC . The equivalent GAA/TTC sequence on the inboard arm that contains the bubble trinucleotide, called OriL GAA , does not serve as an origin. The inboard arm and hairpin configuration of the terminus instead appear to function as upstream control elements for the viral transcriptional promoter P4. Additionally, the ability to segregate one arm from nicking appears essential for replication. [ 20 ] The minimal linear left-end origin is about 50 basepairs long and extends from two 5′-ACGT-3′ motifs, spaced five nucleotides apart at one end, to a position seven basepairs beyond the nick site. The bubble's GA sequence itself is relatively unimportant, but the space that it occupies is necessary for the origin to function. [ 1 ] [ 20 ] Within the origin, there are three recognition sequences: an NS1-binding site that orients the NS1 complex over the nick site 5′-CTWWTCA-3′, which is located 17 nucleotides downstream (toward the 3′-end), and the two ACGT motifs. 
These motifs bind a heterodimeric cellular factor called either parvovirus initiation factor (PIF) or glucocorticoid modulating element-binding protein (GMEB). [ 21 ] PIF is a site-specific DNA-binding heterodimeric complex that contains two subunits, p96 and p79, and functions as a transcription modulator in the host cell. It binds DNA via a KDWK fold and recognizes two ACGT half-sites. The spacing between these sites can vary significantly for PIF, from one to nine nucleotides, with an optimal spacing of six. PIF stabilizes the binding of NS1 on the active form of the left-end origin, OriL TC , but not on the inactive form, OriL GAA , because the two complexes are able to establish contact over the bubble dinucleotide. The left-end hairpins of all other species in the genus Protoparvovirus , [ note 6 ] to which MVM belongs, have bubble asymmetries and PIF-binding sites, though with slight variation in spacing. This suggests that they all share a similar origin segregation mechanism. [ 21 ] Due to the location of the active origin OriL TC in the dimer junction, synthesis of new copies of the left-end hairpin in the correct, i.e. flip, orientation is not straightforward since a replication fork moving from this site through the linear bridge structure should synthesize new DNA in the flop orientation. Instead, the left-hand MVM dimer junction is resolved asymmetrically in a process that creates a cruciform intermediate. This maneuver accomplishes two things: it allows synthesis of the new DNA in the correct sequence orientation, and it creates a structure that can be resolved by NS1. This "heterocruciform" model of synthesis suggests that resolution is driven by the NS1 helicase activity and depends on the inherent instability of the duplex palindrome, a property that allows it to switch between its linear and cruciform configurations. 
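The PIF half-site arrangement described above, two ACGT motifs separated by a variable spacer of one to nine nucleotides, lends itself to a simple pattern search. A hedged Python sketch using an invented toy sequence (real binding also depends on context the regex cannot capture):

```python
import re

# Two ACGT half-sites separated by 1-9 arbitrary nucleotides.
PIF_SITE = re.compile(r"ACGT[ACGT]{1,9}ACGT")

def find_pif_sites(seq):
    """Return (start, matched_text, spacer_length) for each candidate site."""
    return [(m.start(), m.group(), len(m.group()) - 8)
            for m in PIF_SITE.finditer(seq)]

# Invented toy sequence with the optimal six-nucleotide spacer.
seq = "TTACGTCCAATCACGTGG"
print(find_pif_sites(seq))   # → [(2, 'ACGTCCAATCACGT', 6)]
```

Directly abutting half-sites (`ACGTACGT`) are rejected because the spacer must be at least one nucleotide, matching the one-to-nine range stated above.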
[ 21 ] NS1 initially introduces a single-strand nick in OriL TC in the B ("right") arm of the junction and becomes covalently attached to the DNA on the 5′ side of the nick, exposing a basepaired 3′ nucleotide. Two outcomes can then occur, depending on the speed with which a replication fork is assembled. If assembly is rapid, then while the junction is in its linear configuration, "read-through" synthesis copies the upper strand, which regenerates the duplex junction and displaces a positive-sense strand that feeds back into the replicative pool. This promotes MVM DNA amplification but does not lead to synthesis of new terminal sequences in the correct orientation or to junction resolution. [ 22 ] To create a resolvable structure, the initial nicking must be followed by melting and rearrangement of the dimer junction into a cruciform. This is driven by the 3′-to-5′ helicase activity of the 5′-linked NS1 complex. Once this cruciform extends to include sequences beyond the nick site, the exposed primer at the nick site in OriL TC undergoes template switching by annealing with its complement in the lower arm of the cruciform. If a fork assembles after this point, then the subsequent synthesis unfolds and copies the lower cruciform arm. This creates a heterocruciform intermediate that contains the newly synthesized telomere in the flip sequence orientation that is attached to the lower strand of the B arm. [ 22 ] This modified junction is called MJ2. [ 23 ] The lower arm of MJ2 is an extended-form duplex palindrome that is essentially identical to those generated during terminal resolution. Once MJ2 is synthesized, the lower arm becomes susceptible to rabbit-ear formation. This repositions the 3′ nucleotide of the newly synthesized copy of the lower arm so that it pairs with inboard sequences on the junction's B arm to prime strand displacement synthesis. 
If a replication fork is created at this 3′ nucleotide, then the lower strand of the B arm is copied, creating an intermediate junction called MJ1 and progressively displacing the upper strand. This leads to the release of the newly synthesized B turn-around (B-ta) sequence. The residual cruciform, called δJ, is partially single-stranded at the upper part of the B arm and contains the intact upper strand of the junction paired to the lower strand of the A ("left") arm, with an intact copy of the left-end hairpin, ending in a 5′ NS1 complex. Since δJ carries the NS1 helicase, it is presumed to periodically alter configuration. [ 22 ] [ 23 ] The next step is less certain but can be inferred based on what is known about the process thus far. The NS1 helicase is expected to create a dynamic structure in which the nick site in δJ in the normally inactive A side is temporarily but repeatedly exposed in a single-stranded form during duplex-to-hairpin rearrangements, which allows NS1 to engage the nick site in the origin OriL GAA without the help of a cofactor. The nick would leave NS1 covalently attached to the positive-sense "B" strand of δJ and lead to the release of this strand. Nicking also leaves open a basepaired 3′ nucleotide on the "A" strand of δJ to prime DNA synthesis. If a replication fork is established here, then the A strand is unfolded and copied to create its duplex extended form. [ 23 ] When MVM genomes replicate in vivo , the aforementioned nick may not occur because both ends of the dimer replicative form contain an efficient number of right-end hairpin origins. Therefore, replication forks may progress back toward the dimer junction from the genome's right end, copying the top strand of the B arm before the final resolution nick. This bypasses dimer bridge resolution and recycles the top strand into a replicating duplex dimer pool. 
In a closely related virus, LuIII, the single-strand nick releases a positive-sense strand with its left-end hairpin in the flop orientation. Unlike MVM, LuIII packages strands of both senses with equal frequency. In the negative-sense strands, the left-end hairpins are all in the flip orientation, while in the positive-sense strands, there are an equal number of flip and flop orientations. Compared to MVM, LuIII contains a two-base insertion immediately 3′ of the nick site in the right origin, which impairs its efficiency. Because of this, the reduced efficiency of replication fork assembly in the genome's right end may favor single-strand nicking by giving it more time to occur. [ 23 ] Individual progeny genomes are excised from genomic replicative concatemers, beginning with the introduction of breaks at replication origins, usually by the replication initiator protein. This results in the establishment of new replication forks that replicate the telomeres in a combination of terminal resolution and junction resolution and displaces individual ssDNA genomes from the replicative molecule. [ 7 ] [ 20 ] At the end of this process, the telomeres are folded back inwards to form hairpins on excised genomes. The extended-form termini created during excision resemble the extended-form molecules prior to terminal resolution, so they can be melted out and refolded into rabbit ears for additional rounds of replication. [ 1 ] Within an infected cell, numerous replicative concatemers are therefore able to arise. [ 7 ] Displacement of progeny ssDNA genomes occurs predominantly, or perhaps exclusively, during active DNA replication in cells that are assembling viral particles. Displacement of single strands may therefore be associated with packaging viral DNA into capsids. 
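The excision step can be caricatured as cutting a concatemer string at its origin sequences. The Python sketch below is a deliberately crude abstraction: real excision proceeds through nicking, fork establishment, and strand displacement rather than simple cleavage, and the one-letter genome alphabet here is invented.

```python
def excise(concatemer, origin="O"):
    """Toy model: cut a concatemer at every origin marker and
    return the unit-length genome fragments."""
    return [genome for genome in concatemer.split(origin) if genome]

# Invented unit genome "L<coding>R" with L and R standing for the two
# telomeres; a dimer fused right-end to right-end through a junction
# origin "O" yields two units in opposite orientations.
dimer = "LcccR" + "O" + "RcccL"
print(excise(dimer))   # → ['LcccR', 'RcccL']
```

Splitting a tetramer marked with three junction origins likewise yields four unit genomes, mirroring the alternating orientations described for MVM concatemers.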
Earlier research suggested that the preassembled viral particle may sequester the genome in a 5′-to-3′ direction as it is displaced from the fork, but more recent research suggests that packaging is performed in a 3′-to-5′ direction driven by the NS1 helicase using newly synthesized single strands. [ 24 ] It is not clear if these single strands are released into the nucleoplasm so that packaging complexes are physically separate from replication complexes or if the replication intermediates serve as both replication and packaging substrates. In the latter case, newly displaced progeny genomes would be kept in the replication complex via interactions between their 5′-linked NS1 molecules and NS1 or capsid proteins that are physically associated with replicating DNA. [ 24 ] Genomes are inserted into the capsid via an entrance called a portal situated at one of the icosahedral 5-fold axes of the capsid, [ 4 ] which is possibly opposite of the opening from which genomes are expelled early in the replication cycle. [ 5 ] Strand selection for encapsidation likely does not involve specific packaging signals but may be predictable by the Kinetic Hairpin Transfer (KHT) mathematical model, which explains the distribution of the strands and terminal conformations of packaged genomes in terms of the efficiency with which each terminus type can undergo reactions that allow it to be copied and reformed. In other words, the KHT model postulates that the relative efficiency with which two genomic termini are resolved and replicated determines the distribution of amplified replication intermediates created during infection and ultimately the efficiency with which ssDNAs of characteristic polarity and terminal orientations are excised, which will then be packaged with equal efficiency. [ 4 ] [ 24 ] Preferential excision of particular genomes is only apparent during packaging. Therefore, among parvoviruses that package strands of one sense, replication appears to be biphasic. 
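As a loose numerical illustration of the KHT idea that relative terminal efficiencies shape the polarity distribution, the sketch below uses an invented proportional rule, not the published model: the fraction of displaced strands of each sense is simply taken to mirror the relative firing efficiency of the two origins.

```python
def strand_fractions(eff_left, eff_right):
    """Invented proportional rule (illustration only): negative-sense
    strands are displaced from the right-end origin, so their fraction
    tracks that origin's relative efficiency."""
    total = eff_left + eff_right
    negative = eff_right / total
    return {"negative": negative, "positive": 1.0 - negative}

# Homotelomeric case (AAV-like): equal efficiencies -> ~equal polarities.
print(strand_fractions(1.0, 1.0))   # → {'negative': 0.5, 'positive': 0.5}

# Heterotelomeric case (MVM-like): the right-end origin fires far more
# often, so displaced strands are predominantly negative-sense.
print(strand_fractions(0.1, 1.0))
```

The two calls reproduce only the qualitative claims in the text, roughly equal packaging of both senses for homotelomeric viruses versus a strong negative-sense bias for MVM-like viruses; the numeric efficiencies are placeholders.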
At early times, both sense strands are excised. This is followed by a switch in the replication mode that allows for exclusive synthesis of a single sense for packaging. A modified form of the KHT model, called the preferential strand displacement model, proposes that the aforementioned switch in replication is caused by the onset of packaging because the substrate for packaging is probably a newly displaced DNA molecule. [ 24 ] For heterotelomeric parvoviruses, imbalance of origin firing leads to preferential displacement of negative sense strands from the right-end origin. The relative frequency of sense strands in packaged virions can therefore be used to infer the type of resolution mechanism used during excision. [ 5 ] Shortly after the start of S-phase, translation of viral mRNA leads to the accumulation of capsid proteins in the nucleus. These proteins form into oligomers that are assembled into intact empty capsids. After encapsidation, complete virions may be exported from the nucleus to the exterior of the cell before disintegration of the nucleus. Disruption of the host cell environment may also occur later on in infection. This results in cell lysis via necrosis or apoptosis , which releases virions to the outside of the cell. [ 4 ] [ 17 ] Many small replicons that have circular genomes such as circular ssDNA viruses and circular plasmids replicate via rolling circle replication (RCR), which is a unidirectional, strand displacement form of DNA replication similar to RHR. In RCR, successive rounds of replication, which proceeds in a loop around the genome, are initiated and terminated by site-specific single-strand nicks made by a replicon-encoded endonuclease, variously called the nickase, relaxase, mobilization protein (mob), transesterase, or replication protein (Rep). The replication initiator protein of parvoviruses is genetically related to these other endonucleases. 
[ 17 ] RCR initiator proteins contain three motifs considered to be important for replication. Two of these are retained within parvovirus initiator proteins: an HUHUUU cluster, which is presumed to bind to a Mg 2+ ion required for nicking, and a YxxxK motif that contains the active-site tyrosine residue that attacks the phosphodiester bond of target DNA. In contrast to RCR initiator proteins, which can join together DNA strands, RHR initiator proteins have only vestigial traces of being able to perform ligation. [ 17 ] RCR begins when the initiator protein nicks a DNA strand at a specific sequence in the replication origin region. This is done through a transesterification reaction that forms a 5′-phosphate bond that connects the DNA to the active-site tyrosine and frees the 3′-end hydroxyl (3′-OH) adjacent to the nick site. The 3′-end is then used as a primer for the host DNA polymerase to begin replication while the initiator protein remains attached to the 5′-end of the "original" strand. After one loop of replication around the circular genome, the initiator protein returns to the nick site, i.e. the original initiator complex, while still attached to the parent strand and attacks the regenerated duplex nick site, or a nearby second site in some cases, by means of a topoisomerase -like nicking-joining reaction. [ 17 ] During the aforementioned reaction, the initiator protein cleaves a new nick site and is transferred across the analogous phosphodiester bond. It thereby becomes attached to the new 5′-end while ligating the 5′-end of the first strand to which it was originally attached to the 3′-end of the same strand. This second mechanism varies depending on the replicon. Some replicons such as the virus ΦX174 contain a second active tyrosine residue in the initiator protein. Others use the analogous active-site tyrosine in a second initiator protein that is present as part of a multimeric nickase complex. 
[ 17 ] This second nicking reaction may occur after a single loop, or successive loops may occur that create a concatemer containing multiple copies of the genome. The result of this nick is that displaced genomes become detached from the replicative molecule. These copies of the genome are ligated and may either be encapsidated into progeny capsids, provided they are monomeric, or converted to a covalently-closed double-stranded form by a host DNA polymerase for further replication. While RHR generally involves replication of both sense strands in a continuous process, in RCR, complementary strand synthesis and genomic strand synthesis occur separately. [ 7 ] The strategies used in RHR to engage the nick site are also present in RCR. Most RCR origins are in the form of duplex DNA that has to be melted before nicking. RCR initiators accomplish this by binding to specific DNA-binding sequences in the origin next to the initiation site. [ 17 ] The latter site is then melted in a process that consumes ATP and which is assisted by the ability of the separated strands to reconfigure into stem-loop structures. In these structures, the nick site is presented on an exposed loop. Like RHR initiator proteins, many RCR initiator proteins contain helicase activity, which allows them to melt the DNA prior to nicking and serve as the 3′-to-5′ helicase in the replication fork. [ 19 ]
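The geometric difference between RCR and RHR can be seen in a toy string model: one pass of rolling circle replication displaces the circular genome linearized at the nick site, and successive passes without a second nick yield a head-to-tail concatemer. A minimal sketch with an invented sequence:

```python
def rolling_circle(circular_seq, nick, loops=1):
    """Toy model of RCR: the displaced strand is the circular genome read
    once around starting at the nick site; without a second nick, each
    additional loop appends another head-to-tail copy (a concatemer)."""
    unit = circular_seq[nick:] + circular_seq[:nick]
    return unit * loops

genome = "ATGCCGTA"                    # invented circular genome
print(rolling_circle(genome, 3))       # → CCGTAATG
print(rolling_circle(genome, 3, 2))    # two loops -> dimeric concatemer
```

This contrasts with RHR's concatemers, whose unit genomes alternate orientation through palindromic junctions rather than repeating head-to-tail.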
https://en.wikipedia.org/wiki/Hairpin_resolution
Rolling hairpin replication ( RHR ) is a unidirectional, strand displacement form of DNA replication used by parvoviruses, a group of viruses that constitute the family Parvoviridae . Parvoviruses have linear, single-stranded DNA (ssDNA) genomes in which the coding portion of the genome is flanked by telomeres at each end that form hairpin loops . During RHR, these hairpin loops repeatedly unfold and refold to change the direction of DNA replication so that replication progresses in a continuous manner back and forth across the genome. RHR is initiated and terminated by an endonuclease encoded by parvoviruses that is variously called NS1 or Rep, and RHR is similar to rolling circle replication , which is used by ssDNA viruses that have circular genomes. Before RHR begins, a host cell DNA polymerase converts the genome to a duplex form in which the coding portion is double-stranded and connected to the terminal hairpins. From there, messenger RNA (mRNA) that encodes the viral initiator protein is transcribed and translated to synthesize the protein. The initiator protein commences RHR by binding to and nicking the genome in a region adjacent to a hairpin called the origin and establishing a replication fork with its helicase activity. Nicking leads to the hairpin unfolding into a linear, extended form. The telomere is then replicated and both strands of the telomere refold back in on themselves to their original turn-around forms. This repositions the replication fork to switch templates to the other strand and move in the opposite direction. Upon reaching the other end, the same process of unfolding, replication, and refolding occurs. Parvoviruses vary in whether both hairpins are the same or different. Homotelomeric parvoviruses such as adeno-associated viruses (AAV), i.e. those that have identical or similar telomeres, have both ends replicated by terminal resolution, the previously described process. 
Heterotelomeric parvoviruses such as minute virus of mice (MVM), i.e. those that have different telomeres, have one end replicated by terminal resolution and the other by an asymmetric process called junction resolution. During asymmetric junction resolution, the duplex extended form of the telomere reorganizes into a cruciform-shaped junction , and the correct orientation of the telomere is replicated off the lower arm of the cruciform. As a result of RHR, a replicative molecule that contains numerous copies of the genome is synthesized. The initiator protein periodically excises progeny ssDNA genomes from this replicative concatemer. Parvoviruses are a family of DNA viruses that have single-stranded DNA (ssDNA) genomes enclosed in rugged, icosahedral protein capsids 18–26 nanometers (nm) in diameter. [ 1 ] Unlike most other ssDNA viruses, which have circular genomes that form a loop, parvoviruses have linear genomes with short terminal sequences at each end of the genome. These termini are capable of being formed into structures called hairpins or hairpin loops and consist of short, imperfect palindromes. [ 2 ] [ 3 ] Varying from virus to virus, the coding region of the genome is 4–6 kilobases (kb) in length, and the termini are 116–550 nucleotides (nt) in length each. The hairpin sequences provide most of the cis -acting information needed for DNA replication and packaging. [ 1 ] [ 4 ] Parvovirus genomes may be either positive-sense or negative-sense . Some species, such as adeno-associated viruses (AAV) like AAV2, package a roughly equal number of positive-sense and negative-sense strands into virions; others, such as minute virus of mice (MVM), show preference toward packaging negative-sense strands; and others have varying proportions. 
[ 4 ] Because of this disparity, the 5′-end (usually pronounced "five prime end") of the strand that encodes the non-structural proteins is called the "left end", and the 3′-end (usually pronounced "three prime end") is called the "right end". [ 3 ] In reference to the negative-sense strand, the 3′-end is the left side and the 5′-end is the right side. [ 4 ] [ 5 ] Parvoviruses replicate their genomes through a process called rolling hairpin replication (RHR), which is a unidirectional, strand displacement form of DNA replication. Before replication, the coding portion of the ssDNA genome is converted to a double-strand DNA (dsDNA) form, which is then cleaved by a viral protein to initiate replication. Sequential unfolding and refolding of the hairpin termini acts to reverse the direction of synthesis, which allows replication to go back and forth along the genome to synthesize a continuous duplex replicative form (RF) DNA intermediate. Progeny ssDNA genomes are then excised from the RF intermediate. [ 4 ] [ 6 ] While the general aspects of RHR are conserved across genera and species, the exact details likely vary. [ 7 ] Parvovirus genomes have distinct starting points of replication that contain palindromic DNA sequences. These sequences are able to alternate between inter- and intrastrand basepairing throughout replication, and they serve as self-priming telomeres at each end of the genome. [ 2 ] They also contain two key sites necessary for replication used by the initiator protein: a binding site and a cleavage site. [ 8 ] Telomere sequences have significant complexity and diversity, suggesting that they perform additional functions for many species. [ 1 ] [ 9 ] In MVM, for example, the left-end hairpin contains binding sites for transcription factors that modulate gene expression from an adjacent promoter . For AAV, the hairpins can bind to MRE11/Rad50/NBS1 (MRN) complexes and Ku70/80 heterodimers, which are involved in sensing and repairing DNA. 
[ 5 ] In general, however, they have the same basic structure: imperfect palindromes in which a fully or primarily basepaired region terminates in an axis of symmetry. These palindromes can fold into a variety of structures such as a Y-shaped structure and a cruciform-shaped structure. During replication, the termini act as hinges in which the imperfectly basepaired or partial cruciform regions surrounding the axis provide a favorable environment for unfolding and refolding of the hairpin. [ 2 ] [ 3 ] [ 4 ] Some parvoviruses, such as AAV2, are homotelomeric, meaning the two palindromic telomeres are similar or identical and form part of larger (inverted) terminal repeat ((I)TR) sequences. Replication at each terminus is therefore similar. Other parvoviruses, such as MVM, are heterotelomeric, meaning they have two physically different telomeres. As a result, heterotelomeric parvoviruses tend to have a more complex replication process since the two telomeres are replicated by different processes. [ 2 ] [ 3 ] [ 4 ] In general, homotelomeric parvoviruses replicate both ends via a process called terminal resolution, whereas heterotelomeric parvoviruses replicate one end by terminal resolution and the other end by an asymmetric process called junction resolution. [ 4 ] [ 5 ] [ 6 ] [ 10 ] Whether a genus is hetero- or homotelomeric, along with other genomic characteristics, is shown in the following table. [ 4 ] The entire process of rolling hairpin replication, which has distinct, sequential stages, can be summarized as follows: [ 4 ] [ 5 ] [ 7 ] Upon cell entry, a tether about 24 nucleotides in length, which attaches the replication-essential viral protein NS1 to the virion, is cleaved off the virion to be reattached later. [ 3 ] After cell entry, virions accumulate in the cell nucleus while the genome is still contained within the capsid. These capsids may be reconfigured to an open or transitioned state during entry. 
The exact mechanism by which the genome leaves the capsid is unclear. [ 9 ] For AAV, it has been suggested that nuclear factors disassemble the capsid, whereas for MVM, it appears as if the genome is ejected in a 3′-to-5′ direction from an opening in the capsid called a portal. [ 5 ] Parvoviruses lack genes capable of inducing resting cells to enter their DNA synthesis phase (S-phase). Additionally, naked ssDNA is likely to be unstable, perceived as foreign by the host cell, or improperly replicated by host DNA repair . For these reasons, the genome must either be converted rapidly to its less obstructive, more stable duplex form or retained within the capsid until it is uncoated during S-phase. Typically, the latter occurs, and the virion remains silent in the nucleus until the host cell enters S-phase by itself. During this waiting period, virions may make use of certain strategies to evade host defense mechanisms to protect their hairpins and DNA to reach S-phase, [ 9 ] though it is unclear how this occurs. [ 4 ] Since the genome is packaged as ssDNA, creation of a complementary strand is necessary before gene expression . [ 5 ] [ 9 ] DNA polymerases are only able to synthesize DNA in a 5′ to 3′ direction, and they require a basepaired primer to begin synthesis. Parvoviruses address these limitations by using their termini as primers for complementary strand synthesis. [ 9 ] A 3′ hydroxyl end of the left-hand (3′) terminus pairs with an internal base to prime initial DNA synthesis, resulting in the conversion of the ssDNA genome to its first duplex form. [ 1 ] [ 7 ] This is a monomeric double-stranded DNA molecule in which the two strands are covalently cross-linked to each other at the left end by a single copy of the viral telomere. Synthesis of the duplex form precedes NS1 expression so that when the replication fork reaches the right (5′) end during initial complementary strand synthesis, it does not displace and copy the right-end hairpin. 
This allows the 3′-end of the new DNA strand to be covalently ligated to the 5′-end of the right hairpin by a host ligase, thereby creating the duplex molecule. During this step, the tether sequence that was present before viral entry into the cell is resynthesized. [ 6 ] Once an infected cell enters S-phase, parvovirus genomes are converted to their duplex form by host replication machinery, and mRNA that encodes non-structural (NS) proteins is transcribed starting from a viral promoter (P4 for MVM). [ 4 ] [ 5 ] [ 9 ] One of these NS proteins is usually called NS1, though also Rep1 or Rep68/78 in the genus Dependoparvovirus , to which AAV belongs. [ 4 ] NS1 is a site-specific DNA binding protein that acts as the replication initiator protein [ 9 ] via nickase activity. [ 15 ] It also mediates excision of both ends of the genome from duplex RF intermediates via a transesterification reaction that introduces a nick into specific duplex origin sequences. [ 4 ] Key components of NS1 include an HUH endonuclease domain toward the N-terminus of the protein and a superfamily 3 (SF3) helicase toward the C-terminus , [ 16 ] as well as ATPase activity. [ 1 ] It binds to ssDNA, RNA, and site-specifically on duplex DNA at reiterations of the tetranucleotide sequence 5′-ACCA-3′ 1–3 . [ 1 ] [ 9 ] These sequences are present in the viral replication origin sites and repeated at multiple sites throughout the genome in more or less degenerate forms. [ 15 ] NS1 nicks the covalently-closed right-end telomere via a transesterification reaction that liberates a basepaired 3′ nucleotide as a free hydroxyl (-OH). [ 4 ] This reaction is assisted by a host DNA-binding protein from the high mobility group 1/2 (HMG1/2) family and occurs at the replication origin, OriR , which is created by sequences in and immediately adjacent to the right hairpin.
The left-end telomere of MVM, a heterotelomeric parvovirus, contains sequences that can give rise to replication origins in higher-order duplex intermediates, but these sequences are inactive in the hairpin terminus of the monomeric molecule, so NS1 always initiates replication at the right end. [ 6 ] The 3′-OH that is freed by nicking acts as a primer for the DNA polymerase to start complementary strand synthesis [ 8 ] while NS1 remains covalently attached to the 5′-end via a tyrosine residue. [ 1 ] Consequently, a copy of NS1 remains attached to the 5′-end of all RF and progeny DNA throughout replication, packaging, and virion release. [ 4 ] [ 6 ] NS1 is only able to bind to this specific site by assembling into homodimers or higher-order multimers, which occurs upon addition of adenosine triphosphate (ATP) and is likely mediated by NS1's helicase domain. In vivo studies have shown that NS1 can form a variety of oligomeric states, but it most likely assembles into hexamers to fulfill the functions of both the endonuclease domain and helicase domain. [ 15 ] Starting from the location of the nick, it is thought that NS1 organizes a replication fork and acts as the replicative 3′-to-5′ helicase. Near its C-terminus, NS1 contains an acidic transcriptional activation domain. This domain acts to upregulate transcription from a viral promoter (P38 for MVM) when NS1 is bound to a series of 5′-ACCA-3′ motifs, called the tar sequence, positioned upstream (toward the 5′-end) of the promoter unit, and via interactions between NS1 and various transcription factors. [ 15 ] NS1 also recruits the cellular replication protein A (RPA) complex, which is essential for establishing the new replication fork and for binding and stabilizing displaced single strands. [ 6 ] While NS1 is the only non-structural protein essential for all parvoviruses, some have other individual proteins that are essential for replication.
For MVM, NS2 appears to reprogram the host cell for efficient DNA amplification, single-strand progeny synthesis, capsid assembly, and virion export, though it seems to lack direct involvement in these processes. NS2 initially accumulates up to three times more quickly than NS1 in early S-phase but is turned over rapidly by a proteasome-mediated pathway. As the infectious cycle progresses, NS2 becomes less common as P38-driven transcription becomes more prominent. [ 15 ] Another example is the nuclear phosphoprotein NP1 of bocaviruses, which, if not synthesized, results in non-viable progeny genomes. [ 5 ] As viral NS proteins accumulate, they commandeer the host cell's replication apparatus, terminating host cell DNA synthesis and causing viral DNA amplification to begin. Interference with host DNA replication may be due to direct effects on host replication proteins that are not essential for viral replication, to extensive nicking of host DNA, or to restructuring of the nucleus during viral infection. Early in infection, parvoviruses establish replication foci in the nucleus that are termed autonomous parvovirus-associated replication (APAR) bodies. NS1 co-localizes with replicating viral DNA in these structures along with other cellular proteins necessary for viral DNA synthesis, [ 15 ] while other complexes not required for replication are sequestered from APAR bodies. The exact manner by which proteins are included or excluded from APAR bodies is unclear and appears to vary from species to species and between cell types. [ 5 ] As infection progresses, APAR microdomains begin to coalesce with other, formerly distinct, nuclear bodies to form progressively larger nuclear inclusions where viral replication and virion assembly occur. After S-phase begins, the host cell is forced to synthesize viral DNA and cannot leave S-phase. [ 17 ] The right-end hairpin of MVM contains 248 nucleotides [ 10 ] organized into a cruciform shape.
[ 1 ] This region is almost perfectly basepaired, with just three unpaired bases at the axis and a mismatched region positioned 20 nucleotides from the axis. A three nucleotide insertion, AGA or TCT, on one strand separates opposing pairs of NS1 binding sites, creating a 36 basepair-length palindrome that can assume an alternate cruciform configuration. This configuration is expected to destabilize the duplex, which facilitates its ability to function as a hinge. The mismatch of the unpaired bases, rather than the three-nucleotide sequence itself, may help to promote instability of duplex DNA. [ 10 ] Fully-duplex linear forms of the right-end hairpin sequence also function as NS1-dependent origins. For many parvoviral telomeres, however, only an initiator binding site next to the nick site is required for the origin function so that the minimal sequences required for nicking are less than 40 basepairs in length. For MVM, the minimal right-end origin is around 125 basepairs in length and includes most of the hairpin sequence because at least three recognition elements are involved: the nick site 5′-CTWWTCA-3′ (element 1), positioned seven nucleotides upstream from a duplex NS1-binding site (element 2) that is oriented to have the attached NS1 complex extending over the nick site, and a second NS1-binding site (element 3), which is adjacent to the hairpin axis. [ 10 ] The second binding site is over 100 basepairs away from the nick site but is required for NS1-mediated cleavage. [ 10 ] In vivo , there is slight variation in the position of the nick, plus or minus one nucleotide, with one position preferred. During nicking, this site is likely exposed as a single strand and is potentially stabilized as a minimal stem-loop by the tetranucleotide inverted repeats to the sides of the site. Optimal forms of the NS1-binding site contain at least three tandem copies of the 5′-ACCA-3′ sequence. 
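Both recognition elements just described are short degenerate motifs (IUPAC code W stands for A or T), so locating candidate sites in a sequence reduces to ordinary pattern matching. A small sketch on a made-up sequence; the scan is generic string matching, not a model of NS1 binding:

```python
import re

# IUPAC W matches A or T, so the nick-site consensus
# 5'-CTWWTCA-3' becomes the pattern below.
NICK_SITE = re.compile(r"CT[AT][AT]TCA")
# NS1-binding sites are tandem repeats of the ACCA tetranucleotide.
NS1_MOTIF = re.compile(r"(?:ACCA){2,}")

def find_nick_sites(seq):
    """Return (position, matched_text) for each nick-site consensus hit."""
    return [(m.start(), m.group()) for m in NICK_SITE.finditer(seq)]

# Hypothetical sequence with one tandem ACCA repeat and one nick-site hit;
# not a real viral origin sequence.
seq = "GGACCAACCAACCATTTCTATTCAGG"
print(find_nick_sites(seq))            # -> [(17, 'CTATTCA')]
print(NS1_MOTIF.search(seq).group())   # -> ACCAACCAACCA
```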
Modest alterations to these motifs only have a small effect on affinity, which suggests that each tetranucleotide motif is recognized by different molecules in the NS1 complex. The NS1-binding site that positions NS1 over the nick site in the right-end origin is a high affinity site. [ 18 ] With ATP, NS1 binds asymmetrically over the aforementioned sequence, protecting a region 41 basepairs in length from digestion. This footprint extends just five nucleotides beyond the 3′-end of the ACCA repeat but 22 nucleotides beyond the 5′-end so that the footprint ends 15 nucleotides beyond the nick site, placing NS1 in position to nick the origin. Nicking only occurs if the second, distant NS1-binding site is also present in the origin and the entire complex is activated by addition of HMG1. [ 18 ] In the absence of NS1, HMG1 binds the hairpin sequence independently, causing it to bend, without protecting any region from digestion. HMG1 can also directly bind to NS1 and mediates interactions between NS1 molecules bound to their recognition elements in the origin, so it is essential for formation of the cleavage complex. The ability of the axis region to reconfigure into a cruciform does not appear to be important in this process. Cleavage is dependent on the correct spacing of the elements of the origin, so additions and deletions can be lethal, whereas substitutions can be tolerated. Addition of HMG1 appears to only slightly adjust the sequences protected by NS1, but the conformation of the intervening DNA changes, folding into a double helical loop that extends about 30 basepairs through a guanine -rich element in the hairpin stem. Between this element and the nick site there are five thymidine residues included in the loop, and the site has a region to its side containing many alternating adenine and thymine residues, which likely increases flexibility. 
The creation of the loop likely allows the terminus to assume a specific 3-dimensional structure required to activate the nickase since origins that fail to reconfigure into a double-helical loop once HMG1 is added are not nicked. [ 18 ] Following nicking, a replication fork is established at the newly exposed 3′ nucleotide that proceeds to unfold and copy the right-end hairpin through a series of melting and reannealing reactions. [ 9 ] [ 18 ] This process begins once NS1 nicks the inboard end of the original hairpin. The terminal sequence is then copied in the opposite direction, which produces an inverted copy of the original sequence. [ 9 ] The end result is a duplex extended-form terminus that contains two copies of the terminal sequence. [ 18 ] While NS1 is required for this, it is unclear if unfolding is mediated by its helicase activity in front of the fork or by destabilization of the duplex following DNA binding at one of its 5′-(ACCA) n -3′ recognition sites. [ 6 ] This process is usually called terminal resolution but also hairpin transfer or hairpin resolution. [ 6 ] [ 9 ] Terminal resolution occurs with each round of replication, so progeny genomes contain an equal number of each terminal orientation. The two orientations are termed "flip" and "flop", [ 5 ] [ 6 ] and may be represented as R and r, or B and b, for the flip and flop of the right-end telomere and L and l, or A and a, for the flip and flop of the left-end telomere. [ 7 ] [ 19 ] Since parvoviral terminal palindromes are imperfect, it is easy to identify which orientation is which. [ 1 ] The extended-form duplex telomeres generated during terminal resolution are melted, mediated by NS1 with ATP hydrolysis , causing individual strands to fold back on themselves to create hairpin "rabbit ear" structures that have the flip and flop of the termini. 
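The flip and flop orientations produced by terminal resolution are inverted complements of one another, which is why the imperfection of the palindrome matters: a perfect palindrome equals its own inverted complement, so its two orientations would be indistinguishable. A toy illustration with hypothetical six-base sequences:

```python
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def inverted_complement(seq):
    """The opposite terminal orientation (the 'flop' of a 'flip' sequence)."""
    return "".join(COMP[b] for b in reversed(seq))

perfect = "GAATTC"     # a perfect palindrome
imperfect = "GAATAC"   # an imperfect palindrome

print(inverted_complement(perfect) == perfect)       # True: flip and flop identical
print(inverted_complement(imperfect) == imperfect)   # False: flip and flop distinguishable
```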
This melting and refolding requires the NS1 helicase activity as well as NS1's site-specific binding activity, the latter of which enables NS1 to bind to symmetrical copies of NS1-binding sites that surround the axis of the extended-form terminus. [ 10 ] [ 20 ] Rabbit ear formation allows the 3′ nucleotide of the newly synthesized DNA strand to pair with an internal base, which repositions the replication fork in a strand-switching maneuver that primes synthesis of additional linear sequences. [ 10 ] Switching from DNA synthesis to rabbit-ear formation at the end of terminal resolution may require different types of NS1 complexes. Alternatively, the NS1 complex may remain intact during this switch, ready to start strand displacement synthesis following refolding into rabbit ears. [ 20 ] After the replication fork is repositioned, replication continues toward the left end, using the newly synthesized DNA strand as a template. [ 7 ] At the left end of the genome, NS1 is probably required to unfold the hairpin. NS1 appears to be directly involved in melting-out and reconfiguring the resulting extended-form left-end duplexes into rabbit ear structures, though this reaction seems to be less efficient than at the right-end terminus. Dimeric and tetrameric concatemers of the genome are generated successively for MVM. In these concatemers, alternating unit-length genomes are fused through a palindromic junction in left-end to left-end and right-end to right-end orientations. [ 1 ] [ 10 ] In total, RHR results in the coding sequences of the genome being copied twice as often as the termini. [ 1 ] [ 7 ] [ 10 ] Both linear and hairpin configurations of the right-end telomere support initiation of RHR, so resolution of duplex right-end to right-end junctions can occur symmetrically on the basepaired duplex sequence or after this complex is melted and reconfigured into two hairpins. It is unclear which of these two reactions is more common since both appear to produce identical results.
[ 20 ] For AAV, each telomere is 125 bases in length and capable of folding into a T-shaped hairpin. AAV contains a Rep gene that encodes four Rep proteins, two of which, Rep68 and Rep78, act as replication initiator proteins and fulfill the same functions as NS1, such as the nickase and helicase activities. They recognize and bind to a (GAGC) 3 sequence in the stem region of the terminus and nick a site 20 bases away termed trs . AAV undergoes the same process of terminal resolution as MVM, but at both ends. The other two Rep proteins, Rep52 and Rep40, are not involved in DNA replication but are implicated in synthesis of progeny. AAV replication is dependent on a helper virus, either an adenovirus or a herpesvirus, that coinfects the cell. In the absence of coinfection, the AAV genome is integrated into the host cell's DNA until coinfection occurs. [ 1 ] A general rule is that parvoviruses with identical termini, i.e. homotelomeric parvoviruses such as AAV and B19, replicate both ends by terminal resolution, generating equal numbers of flips and flops of each telomere. [ 1 ] [ 4 ] [ 6 ] Parvoviruses that have different termini, i.e. heterotelomeric parvoviruses like MVM, replicate one end by terminal resolution and the other end by asymmetric junction resolution, which conserves a single-sequence orientation and requires different structural arrangements and cofactors to activate NS1's nickase. [ 4 ] [ 10 ] AAV DNA intermediates containing covalently linked sense and antisense strands yield genomic concatemers under denaturing conditions, indicating that AAV replication also synthesizes duplex concatemers that require some form of junction resolution. [ 10 ] In negative-sense MVM genomes, the left-end hairpin is 121 nucleotides in length and exists in a single flip sequence orientation.
This telomere is Y-shaped and contains small internal palindromes that fold into the "ears" of the Y, a duplex stem region 43 nucleotides in length that is interrupted by an asymmetric thymidine residue, and a mismatched "bubble" sequence in which the 5′-GAA-3′ sequence on the inboard arm lies opposite 5′-GA-3′ in the outboard strand. [ 1 ] [ 20 ] Sequences in this hairpin are involved in both replication and regulation of transcription. The elements involved in these two functions are segregated between the two arms of the hairpin. [ 20 ] The left-end telomere of MVM, and likely of all heterotelomeric parvoviruses, cannot function as a replication origin in its hairpin configuration. Instead, a single origin on the lower strand is created when the hairpin is unfolded, extended, and copied to form a duplex basepaired sequence that spans adjacent genomes in the dimer RF. Within this structure, the sequence from the outboard arm that surrounds a GA/TC [ 1 ] dinucleotide serves as an origin, OriL TC . The equivalent GAA/TTC sequence on the inboard arm that contains the bubble trinucleotide, called OriL GAA , does not serve as an origin. The inboard arm and hairpin configuration of the terminus instead appear to function as upstream control elements for the viral transcriptional promoter P4. Additionally, the ability to segregate one arm from nicking appears essential for replication. [ 20 ] The minimal linear left-end origin is about 50 basepairs long and extends from two 5′-ACGT-3′ motifs, spaced five nucleotides apart at one end, to a position seven basepairs beyond the nick site. The bubble's GA sequence itself is relatively unimportant, but the space that it occupies is necessary for the origin to function. [ 1 ] [ 20 ] Within the origin, there are three recognition sequences: an NS1-binding site that orients the NS1 complex over the nick site 5′-CTWWTCA-3′, which is located 17 nucleotides downstream (toward the 3′-end), and the two ACGT motifs.
These motifs bind a heterodimeric cellular factor called either parvovirus initiation factor (PIF) or glucocorticoid modulating element-binding protein (GMEB). [ 21 ] PIF is a site-specific DNA-binding heterodimeric complex that contains two subunits, p96 and p79, and functions as a transcription modulator in the host cell. It binds DNA via a KDWK fold and recognizes two ACGT half-sites. The spacing between these sites can vary significantly for PIF, from one to nine nucleotides, with an optimal spacing of six. PIF stabilizes the binding of NS1 on the active form of the left-end origin, OriL TC , but not on the inactive form, OriL GAA , because the two complexes are able to establish contact over the bubble dinucleotide. The left-end hairpins of all other species in the genus Protoparvovirus , [ note 6 ] to which MVM belongs, have bubble asymmetries and PIF-binding sites, though with slight variation in spacing. This suggests that they all share a similar origin segregation mechanism. [ 21 ] Due to the location of the active origin OriL TC in the dimer junction, synthesis of new copies of the left-end hairpin in the correct, i.e. flip, orientation is not straightforward, since a replication fork moving from this site through the linear bridge structure should synthesize new DNA in the flop orientation. Instead, the left-hand MVM dimer junction is resolved asymmetrically in a process that creates a cruciform intermediate. This maneuver accomplishes two things: it allows synthesis of the new DNA in the correct sequence orientation, and it creates a structure that can be resolved by NS1. This "heterocruciform" model of synthesis suggests that resolution is driven by the NS1 helicase activity and depends on the inherent instability of the duplex palindrome, a property that allows it to switch between its linear and cruciform configurations.
[ 21 ] NS1 initially introduces a single-strand nick in OriL TC in the B ("right") arm of the junction and becomes covalently attached to the DNA on the 5′ side of the nick, exposing a basepaired 3′ nucleotide. Two outcomes can then occur, depending on the speed with which a replication fork is assembled. If assembly is rapid, then while the junction is in its linear configuration, "read-through" synthesis copies the upper strand, which regenerates the duplex junction and displaces a positive-sense strand that feeds back into the replicative pool. This promotes MVM DNA amplification but does not lead to synthesis of new terminal sequences in the correct orientation or to junction resolution. [ 22 ] To create a resolvable structure, the initial nicking must be followed by melting and rearrangement of the dimer junction into a cruciform. This is driven by the 3′-to-5′ helicase activity of the 5′-linked NS1 complex. Once this cruciform extends to include sequences beyond the nick site, the exposed primer at the nick site in OriL TC undergoes template switching by annealing with its complement in the lower arm of the cruciform. If a fork assembles after this point, then the subsequent synthesis unfolds and copies the lower cruciform arm. This creates a heterocruciform intermediate that contains the newly synthesized telomere in the flip sequence orientation that is attached to the lower strand of the B arm. [ 22 ] This modified junction is called MJ2. [ 23 ] The lower arm of MJ2 is an extended-form duplex palindrome that is essentially identical to those generated during terminal resolution. Once MJ2 is synthesized, the lower arm becomes susceptible to rabbit-ear formation. This repositions the 3′ nucleotide of the newly synthesized copy of the lower arm so that it pairs with inboard sequences on the junction's B arm to prime strand displacement synthesis. 
If a replication fork is created at this 3′ nucleotide, then the lower strand of the B arm is copied, creating an intermediate junction called MJ1 and progressively displacing the upper strand. This leads to the release of the newly synthesized B turn-around (B-ta) sequence. The residual cruciform, called δJ, is partially single-stranded at the upper part of the B arm and contains the intact upper strand of the junction paired to the lower strand of the A ("left") arm, with an intact copy of the left-end hairpin, ending in a 5′ NS1 complex. Since δJ carries the NS1 helicase, it is presumed to periodically alter configuration. [ 22 ] [ 23 ] The next step is less certain but can be inferred based on what is known about the process thus far. The NS1 helicase is expected to create a dynamic structure in which the nick site in δJ in the normally inactive A side is temporarily but repeatedly exposed in a single-stranded form during duplex-to-hairpin rearrangements, which allows NS1 to engage the nick site in the origin OriL GAA without the help of a cofactor. The nick would leave NS1 covalently attached to the positive-sense "B" strand of δJ and lead to the release of this strand. Nicking also leaves open a basepaired 3′ nucleotide on the "A" strand of δJ to prime DNA synthesis. If a replication fork is established here, then the A strand is unfolded and copied to create its duplex extended form. [ 23 ] When MVM genomes replicate in vivo , the aforementioned nick may not occur because both ends of the dimer replicative form contain an efficient number of right-end hairpin origins. Therefore, replication forks may progress back toward the dimer junction from the genome's right end, copying the top strand of the B arm before the final resolution nick. This bypasses dimer bridge resolution and recycles the top strand into a replicating duplex dimer pool. 
In a closely related virus, LuIII, the single-strand nick releases a positive-sense strand with its left-end hairpin in the flop orientation. Unlike MVM, LuIII packages strands of both senses with equal frequency. In the negative-sense strands, the left-end hairpins are all in the flip orientation, while in the positive-sense strands, there are equal numbers of flip and flop orientations. Compared to MVM, LuIII contains a two-base insertion immediately 3′ of the nick site in the right origin, which impairs its efficiency. Because of this, the reduced efficiency of replication fork assembly at the genome's right end may favor single-strand nicking by giving it more time to occur. [ 23 ] Individual progeny genomes are excised from genomic replicative concatemers, beginning with breaks introduced into replication origins, usually by the replication initiator protein. This results in the establishment of new replication forks that replicate the telomeres in a combination of terminal resolution and junction resolution and displace individual ssDNA genomes from the replicative molecule. [ 7 ] [ 20 ] At the end of this process, the telomeres are folded back inwards to form hairpins on excised genomes. The extended-form termini created during excision resemble the extended-form molecules prior to terminal resolution, so they can be melted out and refolded into rabbit ears for additional rounds of replication. [ 1 ] Within an infected cell, numerous replicative concatemers are therefore able to arise. [ 7 ] Displacement of progeny ssDNA genomes occurs either predominantly or exclusively during active DNA replication, or when cells are assembling viral particles. Displacement of single strands may therefore be associated with packaging viral DNA into capsids.
Earlier research suggested that the preassembled viral particle may sequester the genome in a 5′-to-3′ direction as it is displaced from the fork, but more recent research suggests that packaging is performed in a 3′-to-5′ direction, driven by the NS1 helicase and using newly synthesized single strands. [ 24 ] It is not clear whether these single strands are released into the nucleoplasm, so that packaging complexes are physically separate from replication complexes, or whether the replication intermediates serve as both replication and packaging substrates. In the latter case, newly displaced progeny genomes would be kept in the replication complex via interactions between their 5′-linked NS1 molecules and NS1 or capsid proteins that are physically associated with replicating DNA. [ 24 ] Genomes are inserted into the capsid via an entrance called a portal, situated at one of the icosahedral 5-fold axes of the capsid, [ 4 ] which is possibly opposite the opening from which genomes are expelled early in the replication cycle. [ 5 ] Strand selection for encapsidation likely does not involve specific packaging signals but may be predicted by the Kinetic Hairpin Transfer (KHT) mathematical model, which explains the distribution of the strands and terminal conformations of packaged genomes in terms of the efficiency with which each terminus type can undergo the reactions that allow it to be copied and reformed. In other words, the KHT model postulates that the relative efficiency with which the two genomic termini are resolved and replicated determines the distribution of amplified replication intermediates created during infection and, ultimately, the efficiency with which ssDNAs of characteristic polarity and terminal orientations are excised, which are then packaged with equal efficiency. [ 4 ] [ 24 ] Preferential excision of particular genomes is only apparent during packaging. Therefore, among parvoviruses that package strands of one sense, replication appears to be biphasic.
At early times, both sense strands are excised. This is followed by a switch in the replication mode that allows for exclusive synthesis of a single sense for packaging. A modified form of the KHT model, called the preferential strand displacement model, proposes that the aforementioned switch in replication is caused by the onset of packaging because the substrate for packaging is probably a newly displaced DNA molecule. [ 24 ] For heterotelomeric parvoviruses, imbalance of origin firing leads to preferential displacement of negative sense strands from the right-end origin. The relative frequency of sense strands in packaged virions can therefore be used to infer the type of resolution mechanism used during excision. [ 5 ] Shortly after the start of S-phase, translation of viral mRNA leads to the accumulation of capsid proteins in the nucleus. These proteins form into oligomers that are assembled into intact empty capsids. After encapsidation, complete virions may be exported from the nucleus to the exterior of the cell before disintegration of the nucleus. Disruption of the host cell environment may also occur later on in infection. This results in cell lysis via necrosis or apoptosis , which releases virions to the outside of the cell. [ 4 ] [ 17 ] Many small replicons that have circular genomes such as circular ssDNA viruses and circular plasmids replicate via rolling circle replication (RCR), which is a unidirectional, strand displacement form of DNA replication similar to RHR. In RCR, successive rounds of replication, which proceeds in a loop around the genome, are initiated and terminated by site-specific single-strand nicks made by a replicon-encoded endonuclease, variously called the nickase, relaxase, mobilization protein (mob), transesterase, or replication protein (Rep). The replication initiator protein of parvoviruses is genetically related to these other endonucleases. 
[ 17 ] RCR initiator proteins contain three motifs considered to be important for replication. Two of these are retained within parvovirus initiator proteins: an HUHUUU cluster, which is presumed to bind to a Mg 2+ ion required for nicking, and a YxxxK motif that contains the active-site tyrosine residue that attacks the phosphodiester bond of target DNA. In contrast to RCR initiator proteins, which can join together DNA strands, RHR initiator proteins have only vestigial traces of being able to perform ligation. [ 17 ] RCR begins when the initiator protein nicks a DNA strand at a specific sequence in the replication origin region. This is done through a transesterification reaction that forms a 5′-phosphate bond that connects the DNA to the active-site tyrosine and frees the 3′-end hydroxyl (3′-OH) adjacent to the nick site. The 3′-end is then used as a primer for the host DNA polymerase to begin replication while the initiator protein remains attached to the 5′-end of the "original" strand. After one loop of replication around the circular genome, the initiator protein returns to the nick site, i.e. the original initiator complex, while still attached to the parent strand and attacks the regenerated duplex nick site, or a nearby second site in some cases, by means of a topoisomerase -like nicking-joining reaction. [ 17 ] During the aforementioned reaction, the initiator protein cleaves a new nick site and is transferred across the analogous phosphodiester bond. It thereby becomes attached to the new 5′-end while ligating the 5′-end of the first strand to which it was originally attached to the 3′-end of the same strand. This second mechanism varies depending on the replicon. Some replicons such as the virus ΦX174 contain a second active tyrosine residue in the initiator protein. Others use the analogous active-site tyrosine in a second initiator protein that is present as part of a multimeric nickase complex. 
[ 17 ] This second nicking reaction may occur after a single loop, or successive loops may occur, creating a concatemer containing multiple copies of the genome. The result of this nick is that displaced genomes become detached from the replicative molecule. These copies of the genome are ligated and may either be encapsidated into progeny capsids, provided they are monomeric, or converted to a covalently-closed double-stranded form by a host DNA polymerase for further replication. While RHR generally involves replication of both sense strands in a continuous process, in RCR complementary strand synthesis and genomic strand synthesis occur separately. [ 7 ] The strategies used in RHR to engage the nick site are also present in RCR. Most RCR origins are in the form of duplex DNA that has to be melted before nicking. RCR initiators accomplish this by binding to specific DNA-binding sequences in the origin next to the initiation site. [ 17 ] The latter site is then melted in a process that consumes ATP and is assisted by the ability of the separated strands to reconfigure into stem-loop structures. In these structures, the nick site is presented on an exposed loop. Like RHR initiator proteins, many RCR initiator proteins contain helicase activity, which allows them to melt the DNA prior to nicking and to serve as the 3′-to-5′ helicase in the replication fork. [ 19 ]
https://en.wikipedia.org/wiki/Hairpin_transfer
A hairpin turn (also hairpin bend or hairpin corner ) is a bend in a road with a very acute inner angle, making it necessary for an oncoming vehicle to turn about 180° to continue on the road. It is named for its resemblance to a bent metal hairpin . Such turns in ramps and trails may be called switchbacks in American English , by analogy with switchback railways . Hairpin turns are often built when a route climbs up or down a steep slope, so that it can travel mostly across the slope with only moderate steepness, and are often arrayed in a zigzag pattern. Highways with repeating hairpin turns allow easier, safer ascents and descents of mountainous terrain than a direct, steep climb and descent, at the price of greater distances of travel and usually lower speed limits , due to the sharpness of the turn. Highways of this style are also generally less costly to build and maintain than highways with tunnels . On occasion, the road may loop completely, using a tunnel or bridge to cross itself at a different elevation (for example, on Reunion Island at 21°10′52″S 55°27′17″E, and near Ashland, Oregon at 42°05′31″N 122°35′21″W). When this routing geometry is used for a rail line, it is called a spiral , or spiral loop. In trail building, an alternative to switchbacks is the stairway . If a railway curves back on itself like a hairpin turn, it is called a horseshoe curve . The Pennsylvania Railroad built one in Blair County, Pennsylvania , which ascends the Eastern Continental Divide from the east. However, the radius of curvature is much larger than that of a typical road hairpin. See the example at Zlatoust [ 1 ] or Hillclimbing for other railway ascent methods. Sections known as hairpins are also found in the slalom discipline of alpine skiing . A hairpin consists of two consecutive vertical or "closed gates", which must be negotiated very quickly.
Three or more consecutive closed gates are known as a flush . [ 2 ]
https://en.wikipedia.org/wiki/Hairpin_turn
Haitz's law is an observation and forecast about the steady improvement, over many years, of light-emitting diodes (LEDs). It claims that every decade, the cost per lumen (unit of useful light emitted) falls by a factor of 10, and the amount of light generated per LED package increases by a factor of 20, for a given wavelength (color) of light. It is considered the LED counterpart to Moore's law , which states that the number of transistors in a given integrated circuit doubles every 18 to 24 months. [ 1 ] Both laws rely on the process optimization of the production of semiconductor devices . Haitz's law is named after Roland Haitz (1935–2015), [ 2 ] a scientist at Agilent Technologies , among other companies. It was first presented to the larger public at Strategies in Light 2000, the first of a series of annual conferences organized by Strategies Unlimited. [ 3 ] Besides the forecast of exponential development of cost per lumen and amount of light per package, the publication also forecast that the luminous efficacy of LED-based lighting could reach 200 lm/W ( lumen per watt) in 2020, crossing 100 lm/W in 2010. This would be the case if enough industrial and government resources were spent on research into LED lighting. Reaching 200 lm/W would save more than 50% of the electricity consumed for lighting (20% of total electricity consumption). This prospect and other stepping-stone applications of LEDs (e.g. mobile phone flash and LCD backlighting) led to massive investment in LED research, so that LED efficacy did indeed cross 100 lm/W in 2010. If this trend continues, LEDs will become the most efficient light source by 2020. The theoretical maximum for a truncated blackbody white-light source (at 5800 K colour temperature with wavelengths restricted to the visible band between 400 nm and 700 nm) is 251 lm/W. [ 4 ] However, some "white" LEDs have achieved efficacies of over 300 lm/W. [ 5 ] [ 6 ] In 2010, Cree Inc.
developed and marketed the XM-L LED, claimed to deliver 1000 lumens at 100 lm/W efficacy, with 160 lm/W at 350 mA and 150 lm/W at 700 mA. [ 7 ] They also claimed to have broken the 200 lm/W barrier in R&D with a prototype producing 208 lm at 350 mA. [ 8 ] In May 2011, Cree announced another prototype with 231 lm/W efficacy at 350 mA. [ 9 ] In March 2014, Cree announced another prototype with a record-breaking 303 lm/W efficacy at 350 mA. [ 5 ] In 2017, Philips Lighting started offering consumer LED lights with 200 lm/W efficacy in Dubai [ 10 ] using LED filament technology, three years earlier than Haitz's law predicted.
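The two exponential trends can be sketched numerically. The snippet below is an illustrative calculation, not taken from the original publication; the starting values and the function name are assumptions. Per decade, cost per lumen decays by a factor of 10 and light output per package grows by a factor of 20:

```python
# Illustrative sketch of Haitz's law: per decade, cost per lumen falls 10x
# and light output per LED package rises 20x. Starting values are arbitrary
# normalized units, not historical data.
def haitz_projection(cost_per_lm, lm_per_package, years):
    decades = years / 10.0
    return (cost_per_lm * 10.0 ** -decades,
            lm_per_package * 20.0 ** decades)

cost, flux = haitz_projection(cost_per_lm=1.0, lm_per_package=1.0, years=20)
# After two decades: cost per lumen is 100x lower, output per package 400x higher.
```

Compounding over two decades multiplies each factor twice, which is why the 10x and 20x per-decade rates yield 100x and 400x overall.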
https://en.wikipedia.org/wiki/Haitz's_law
Haiyan Zhang is a designer and engineer. She is Director of Innovation at Microsoft Research and Technical Advisor to Lab Director, Christopher Bishop . She appeared on the BBC show "Big Life Fix". Zhang was born in China, and migrated with her parents to Australia at the age of eight. [ 1 ] She attended Monash University , where she earned first-class honours for a bachelor's degree in computer science in 1998. Zhang worked for Space-Time Research until 2000, when she left Australia to further her studies in design. Zhang moved to Canada for a Post Graduate Diploma in Interactive Multimedia at Sheridan College , which she completed in 2003. [ 2 ] Whilst there she helped develop a tool for visualising electroencephalography for intra-operative monitoring. [ 3 ] She moved to Italy to study at the Interaction Design Institute Ivrea , where she completed a Masters in Interaction Design. [ 4 ] Zhang worked as an Interaction Designer for the British Design Council and Stanford University . [ 1 ] Zhang joined IDEO as Principal Interaction Designer in 2006. [ 5 ] Here she "created new technology experiences for community building, entertainment, financial services" with Mattel , Electronic Arts , HBO , France Telecom , Alcatel , Cisco , and AT&T . [ 6 ] [ 7 ] [ 8 ] She was a founder of the innovation platform OpenIDEO , which brings together makers around the world to solve challenges for social good. The site has over 150,000 users worldwide. [ 9 ] She joined Microsoft in 2013, working in the Lift London studio on games and wearables. [ 1 ] In 2015 she became Innovation Director at Microsoft Research . [ 1 ] She is interested in technology for "Connected Play" and wellness. [ 10 ] In 2011, Zhang developed an interactive map of radiation levels around Japan, widely used in the aftermath of the Fukushima Daiichi nuclear disaster .
She was a mapping consultant for Safecast , a volunteer organisation developing Geiger counters that can be built simply by non-experts, which are distributed around Fukushima . [ 11 ] The BBC Two show "Big Life Fix" paired technology experts with members of the public who were facing real life challenges. Zhang was one of seven makers. [ 12 ] Zhang developed two products, the Emma Watch and Fizzyo . Fizzyo is a toy to encourage children with cystic fibrosis to do their breathing exercises to clear their lungs. [ 13 ] It comprises a wireless electronic sensor in the mouthpiece, which sends an electronic signal to control a computer game on a tablet. [ 13 ] The Emma Watch is a device for sufferers of Parkinson's disease , which aims to reduce limb tremors by disrupting the "feedback loop between the brain and hand". [ 1 ] [ 14 ] [ 15 ] Microsoft unveiled the device at their annual developer's conference. [ 16 ] It was named after Emma Lawton, a graphic designer whom Zhang met on the show. [ 17 ] [ 18 ] The invention received significant media coverage. [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] Zhang regularly delivers public talks and appears on podcasts, where she discusses design and engineering. [ 26 ] [ 27 ] [ 28 ] [ 29 ] She is an advocate for increasing the representation of women in technology. [ 30 ] She is a Fellow of the Royal Society of the Arts (FRSA) and the British Academy of Film and Television Arts (BAFTA). [ 31 ] [ 32 ] [ 33 ]
https://en.wikipedia.org/wiki/Haiyan_Zhang
Hajime Tanabe ( 田辺 元 , Tanabe Hajime , February 3, 1885 – April 29, 1962) was a Japanese philosopher of science , particularly of mathematics and physics . His work brought together elements of Buddhism , scientific thought, Western philosophy , Christianity , and Marxism . [ 1 ] In the postwar years, Tanabe coined the concept of metanoetics , proposing that the limits of speculative philosophy and reason must be surpassed by metanoia . Tanabe was a key member of what has become known in the West as the Kyoto School , alongside philosophers Kitaro Nishida (also Tanabe's teacher) [ 2 ] and Keiji Nishitani . [ 3 ] [ 4 ] He taught at Tōhoku Imperial University beginning in 1913 and later at Kyōto Imperial University, and studied at the universities of Berlin , Leipzig , and Freiburg in the 1920s under figures such as Edmund Husserl and Martin Heidegger . [ 1 ] In 1947 he became a member of the Japan Academy , and in 1950 he received the Order of Cultural Merit . Tanabe was born on February 3, 1885, in Tokyo to a household devoted to education. His father, the principal of Kaisei Academy , was a scholar of Confucius , whose teachings may have influenced Tanabe's philosophical and religious thought. [ 5 ] Tanabe enrolled at Tokyo Imperial University , first as a mathematics student before moving to literature and philosophy. [ 6 ] After graduation, he worked as a lecturer at Tohoku University and taught English at Kaisei Academy. [ 7 ] In 1916, Tanabe translated Henri Poincaré ’s La Valeur de la science . [ 8 ] In 1918, he received his doctorate from Kyoto Imperial University with a dissertation entitled ‘Investigations into the Philosophy of Mathematics’ (predecessor to the 1925 book with the same title). In 1919, at Nishida ’s invitation, Tanabe accepted the position of associate professor at Kyoto Imperial University. 
From 1922 to 23, he studied in Germany — first, under Alois Riehl at the University of Berlin and then under Edmund Husserl at the University of Freiburg . At Freiburg, he befriended the young Martin Heidegger and Oskar Becker . [ 9 ] One can recognise the influence of these philosophers in Tanabe. In September 1923, soon after the Great Kantō Earthquake , the Home Ministry ordered his return, so Tanabe used the little time he had left — about a couple of months — to visit London and Paris, before boarding his return ship at Marseille. He arrived back in Japan in 1924. [ 10 ] In 1928, Tanabe translated Max Planck ’s 1908 lecture, ‘Die Einheit des physikalischen Weltbildes’ for the Philosophical Essays [哲学論叢] translation series, which he co-edited, for his publisher Iwanami Shoten . [ 11 ] The same series published translations of essays by Bruno Bauch , Adolf Reinach , Wilhelm Windelband , Siegfried Marck , Max Planck , Franz Brentano , Paul Natorp , Nicolai Hartmann , Kazimierz Twardowski , Ernst Cassirer , Hermann Cohen , Emil Lask , Victor Brochard , Ernst Troeltsch , Theodor Lipps , Konrad Fiedler , Wincenty Lutosławski , Sergei Rubinstein , Hermann Bonitz , Max Weber , Émile Durkheim , Martin Grabmann , Heinrich Rickert , Alexius Meinong , Karl von Prantl and Wilhelm Dilthey (the series ended before the planned translations of Christoph von Sigwart , Carl Stumpf , Edmund Husserl , Clemens Baeumker , Josiah Royce and Hermann Ebbinghaus were published). After Nishida's retirement from teaching in 1928, Tanabe succeeded him. Though they began as friends, and shared several philosophical concepts such as the absolute nothing [絶対無], Tanabe became increasingly critical of Nishida's philosophy. Many of Tanabe's writings after Nishida left the university obliquely attacked the latter's philosophy. In 1935, Tanabe published his essay "The Logic of Species and the World Schema" wherein he formulated his own ‘logic of species’ for which he became known. 
During the Japanese expansion and war effort, Tanabe worked with Nishida and others to maintain the right for free academic expression. Though he criticized the Nazi -inspired letter of Heidegger, [ clarification needed ] Tanabe himself was caught up in the Japanese war effort, and his letters to students going off to war exhibit many of the same terms and ideology used by the reigning military powers. Even more damning are his essays written in defense of Japanese racial and state superiority, exploiting his theory of the Logic of Species to herald and abet the militaristic ideology. [ 12 ] This proposed dialectic argued that every contradictory opposition is to be mediated by a third term in the same manner a species mediates a genus and an individual. [ 13 ] During the war years, however, Tanabe wrote and published little, perhaps reflecting the moral turmoil that he attests to in his monumental post-war work, Philosophy as Metanoetics . The work is framed as a confession of repentance (metanoia) for his support of the war effort. It purports to show a philosophical way to overcome philosophy itself, which suggests [ citation needed ] that traditional Western thought contained seeds of the ideological framework that led to World War II. His activities, and the actions of Japan as a whole, haunted Tanabe for the rest of his life. In 1951, he writes: But as the tensions of World War II grew ever more fierce and with it the regulation of thinking, weak-willed as I was, I found myself unable to resist and could not but yield to some degree to the prevalent mood, which is a shame deeper than I can bear. The already blind militarism had led so many of our graduates precipitously to the battlefields; among the fallen were more than ten from philosophy, for which I feel the height of responsibility and remorse. I can only lower my head and earnestly lament my sin. [ 14 ] He lived for another eleven years after writing these words, dying in 1962 in Kita-Karuizawa, Japan. 
As James Heisig and others note, Tanabe and other members of the Kyoto School accepted the Western philosophical tradition stemming from the Greeks. This tradition attempts to explain the meaning of human experience in rational terms. This sets them apart from other Eastern writers who, though thinking about what life means and how best to live a good life, spoke in religious terms. Although the Kyoto School used Western philosophical terminology and rational exploration, they made these items serve the purpose of presenting a unique vision of reality from within their cultural heritage. Specifically, they could enrich a discussion of the ultimate nature of reality using the experience and thought of various forms of Buddhism like Zen and Pure Land , but embedded in an analysis that calls upon conceptual tools forged and honed in western philosophy by thinkers ranging from Plato to Descartes to Heidegger. [ 15 ] Tanabe's own contribution to this dialog between Eastern and Western philosophy ultimately sets him apart from the other members of the Kyoto School. His radical critique of philosophical reason and method, stemming from Immanuel Kant and Søren Kierkegaard and emerging in his work Philosophy as Metanoetics , establishes him as a major thinker with a unique position on perennial philosophical questions. Some commentators, for example, suggest that Tanabe's work in metanoetics is a forerunner of deconstruction . [ 16 ] Tanabe engaged with philosophers of Continental philosophy, especially Existentialism . His work is often a dialogue with philosophers like Kierkegaard, Friedrich Nietzsche , and Heidegger. Because of his engagement with these thinkers, especially the first two, Tanabe's thought has been characterized as Existentialist, though Makoto Ozaki writes that Tanabe preferred the terms "existentialist philosophy of history", "historical existentialism", or "existential metaphysics of history".
[ 17 ] In his masterpiece, Philosophy as Metanoetics , Tanabe characterized his work as "philosophy that is not a philosophy", foreshadowing various approaches to thinking by deconstructionists. Like other Existentialists, Tanabe emphasizes the importance of philosophy as a search for meaning; that is, what humans think about and desire is finding a meaning to life and death. In company with the other members of the Kyoto School, Tanabe believed that the foremost problem facing humans in the modern world is the lack of meaning and its consequent Nihilism . Jean-Paul Sartre , following Kierkegaard in his Concept of Anxiety , was keen to characterize this as Nothingness . Heidegger, as well, appropriated the notion of Nothingness in his later writings. The Kyoto School philosophers believed that their contribution to this discussion of Nihilism centered on the Buddhist-inspired concept of nothingness, aligned with its correlate Sunyata . Tanabe and Nishida attempted to distinguish their philosophical use of this concept, however, by calling it Absolute Nothingness. This term differentiates it from the Buddhist religious concept of nothingness, as well as underlining the historical aspects of human existence that they believed Buddhism does not capture. Tanabe disagreed with Nishida and Nishitani on the meaning of Absolute Nothingness, emphasizing the practical, historical aspect over what he termed the latter's intuitionism . By this, Tanabe hoped to emphasize the working of Nothingness in time, as opposed to an eternal Now. He also wished to center the human experience in action rather than contemplation, since he thought that action embodies a concern for ethics whereas contemplation ultimately disregards this, resulting in a form of Monism , after the mold of Plotinus and Georg Wilhelm Friedrich Hegel .
[ 18 ] That is, echoing Kierkegaard's undermining in Philosophical Fragments of systematic philosophy from Plato to Baruch Spinoza to Hegel, [ 19 ] Tanabe questions whether there is an aboriginal condition of preexisting awareness that can or must be regained to attain enlightenment. [ 20 ] Tanabe's insistence on this point is not simply philosophical and instead points again to his insistence that the proper mode of human being is action, especially ethics. However, he is critical of the notion of a pre-existing condition of enlightenment because he accepts the Kantian notion of radical evil , wherein humans exhibit an ineluctable propensity to act against their own desires for the good and instead perpetrate evil. [ 21 ] [ 22 ] Tanabe's "Demonstration of Christianity" presents religion as a cultural entity in tension with the existential meaning that religion plays in individual lives. Tanabe uses the terms genus to represent the universality of form that all entities strive for, contrasting them with the stable, though ossified form they can become as species as social systems. Tanabe contraposes Christianity and Christ , represented here as the opposition between Paul and Jesus . Jesus, in Tanabe's terms, is a historical being who manifests the action of Absolute Nothingness, or God understood in non-theistic terms. God is beyond all conceptuality and human thinking, which can only occur in terms of self-identity, or Being . God becomes, as manifested in human actions, though God can never be reduced to being, or self-identity. For Tanabe, humans have the potential to realize compassionate divinity, Nothingness, through continual death and resurrection, by way of seeing their nothingness. 
Tanabe believes that the Christian Incarnation narrative is important for explaining the nature of reality, since he believed Absolute Nothingness becoming human exemplifies the true nature of the divine, as well as serving as an exemplar for the realization of human being in relation to divinity. Jesus signifies this process in a most pure form, thereby setting an example for others to follow. Ultimately, Tanabe chooses philosophy over religion, since the latter tends toward socialization and domestication of the original impulse of the religious action. Philosophy, understood as metanoetics , always remains open to questions and to the possibility of self-delusion in the form of radical evil . Therefore, Tanabe's position is a philosophy of religion. His writings are grouped into four periods: Early Works (1910–1919), Middle Works (1920–1930), Logic of Species (1931–1945), and Later Works (1946–1962).
https://en.wikipedia.org/wiki/Hajime_Tanabe
The Hajos–Parrish–Eder–Sauer–Wiechert and Barbas-List [ 1 ] reactions in organic chemistry are a family of proline -catalysed asymmetric aldol reactions . In the 1970s, two research groups almost simultaneously discovered and published two related intramolecular reactions : Zoltan Hajos and David Parrish at Hoffmann-La Roche [ 2 ] [ 3 ] and Rudolf Wiechert et al. at Schering AG . [ 4 ] The original Hajos-Parrish procedure begins with an achiral triketone in dimethylformamide and 3% (molar) catalytic (S)‑(−)‑proline. The product is a chiral ketol with 93% enantiomeric excess : In the Eder-Sauer-Wiechert modification, the product shown above loses water to give the conjugated alkene . Three decades later, Carlos Barbas and Benjamin List demonstrated that larger catalyst concentrations could enable a similar intermolecular reaction. The reaction has seen extensive use in many enantiomerically -pure molecular syntheses . [ 5 ] Indeed, it presaged the modern field of asymmetric organocatalysis . Research on asymmetric enamine catalysis applied to important intermediates in steroid synthesis arose from increased interest in efficient and convenient steroid total syntheses in the 1960s. In particular, two industrial groups in the early 1970s reported proline-catalyzed intramolecular aldol reactions. In 1971, Escher headed a research group at Schering AG examining reactions under non-biological conditions: (S)-Proline (47 mol%) and 1N perchloric acid in acetonitrile at 80 °C. They observed condensation to the conjugated alkene, [ 4 ] but discarded the result as not particularly useful. [ 6 ] Their work would not become common knowledge for another 37 years, when a new group at Schering analyzed extensions to the reaction, by then associated with Hajos and Parrish. [ 7 ] Meanwhile, Hajos and Parrish examined similar reactions at Hoffmann-La Roche under quasi-biological conditions.
Their reaction sequence produced bicyclic ketol intermediates in good yield , which, to their surprise, exhibited circular dichroism corresponding to a large enantiomeric excess . [ 2 ] [ 3 ] A single-crystal X-ray diffraction study confirmed this hypothesis, [ 2 ] [ 3 ] showing an axial methyl and equatorial hydroxyl group, as in digitoxigenin 's CD-ring: [ 8 ] [ improper synthesis? ] Hajos and Parrish published and patented their results in 1974, [ 2 ] [ 3 ] and then the field lay dormant. In 2000, Barbas ' group at Scripps began investigating antibodies for a series of aldolase enzymes , known to operate through an enamine intermediate, [ 9 ] and discovered that one of their antibodies catalyzed an intermolecular Hajos-Parrish-Eder-Sauer-Wiechert reaction. [ 10 ] Searching the literature, they noticed that Hajos et al. had already identified a similar reaction, and began investigating whether simple enamines could substitute for their antibodies. [ 11 ] Indeed, proline did, albeit at higher concentrations than in the original 1970s reports: [ 12 ] The flurry of research sparked by this publication clarified multiple long-standing questions. The mechanism of the reaction had remained in question, but Barbas' group showed that it occurred through combined iminium-enamine catalysis. [ 13 ] Barbas' collaborator List also extended the reaction to asymmetric prochiral ketones: List and Notz also revealed that proline and 5,5-dimethyl thiazolidinium-4-carboxylate appeared to be optimal catalysts within a large group of screened amines. [ 14 ] In 2002 the Macmillan group demonstrated a proline-catalyzed aldol reaction between aldehydes . [ 15 ] This reaction is unusual because in general aldehydes will self-condense. (S)-1-(2-pyrrolidinylmethyl)-pyrrolidine salts formed the basis for the development of diamine organocatalysts that have proven effective in a wide variety of organocatalytic reactions.
[ 16 ] Several reaction mechanisms for the triketone reaction have been proposed over the years. Hajos and Parrish proposed the enamine mechanism in their paper [2] . However, their experiment with a stoichiometric amount of labeled water (H 2 18 O) supported a carbinolamine mechanism. Therefore, Hajos put forward (1974) a hemiaminal intermediate. [2] The Agami mechanism (1984) has an enamine intermediate with two proline units involved in the transition state (based on experimental reaction kinetics ) [ 17 ] and according to a mechanism by Houk (2001) [ 18 ] [ 19 ] a single proline unit suffices with a cyclic transition state and with the proline carboxyl group involved in hydrogen bonding . The hemiaminal (carbinolamine) put forward by Hajos in 1974 can change to a tautomeric iminium hydroxide intermediate. Enolization of the side-chain methyl ketone caused by the iminium hydroxide ion would be followed by ring closure to the optically active bicyclic ketol product shown above (see Figure 1) under the influence of the catalytic amount of (S)-(−)-proline. Pengxin Zhou, Long Zhang, Sanzhong Luo, and Jin-Pei Cheng obtained excellent results using the simple chiral primary amine t-Bu-CH(NH 2 )-CH 2 -NEt 2 .TfOH for the synthesis of both the Wieland-Miescher ketone and the Hajos-Parrish ketone as well as their analogues. [ 20 ] This supports the iminium mechanism, because it is textbook chemistry that primary amines form imines rather than enamines with carbonyl compounds. The Hajos 1974 carbinolamine mechanism has had unwitting support in a more recent paper by Michael Limbach. [ 21 ] The triketone starting material 2- methyl-2-(3-oxobutyl)-1,3-cyclopentanedione gave the expected optically active bicyclic ketol (+)-(3aS,7aS)-3a,4,7,7a-tetrahydro-3a-hydroxy-7a-methyl-1,5(6H)-indanedione with (S)-(−)-proline catalyst.
On the other hand, the stereochemical outcome is reversed with ee selectivities of up to 83% by using the homologue amino acid catalysts, such as (S)-β-homoproline, [(pyrrolidine-(2S)-yl) acetic acid]. The apparent anomaly can be explained by a top-side approach of the bulkier beta amino acids to the above triketone starting material of reflective symmetry. The top-side approach results in the formation of an enantiotopic carbinolamine to give the (−)-(3aR,7aR)-3a,4,7,7a-tetrahydro-3a-hydroxy-7a-methyl-1,5(6H)-indanedione bicyclic ketol enantiomer identical to the one obtained with unnatural (R)-(+)-proline. List in 2010 [ 22 ] on the other hand is perplexed and surprised that Hajos rejected the enamine mechanism, certainly in light of earlier work by Spencer in 1965 on amine-catalysed aldol reactions. [ 23 ] It is interesting and surprising that Eder, Sauer and Wiechert did not attempt to explain the reaction mechanism. [3] The reaction mechanism as proposed by the Barbas group in 2000 for the intermolecular reactions [ 12 ] is also based on enamine formation, with the observed stereoselectivity explained by the Zimmerman-Traxler model favoring Re -face approach. This is the same mechanism proposed by Barbas for aldolase antibodies reported by the group in 1995: This enamine mechanism also drives the original Hajos-Parrish triketone reaction, but the involvement of two proline molecules in it as proposed by Agami [ 17 ] is disputed by Barbas based on the lack of nonlinear effects [ 16 ] and supported by later studies of List based on reaction kinetics . [ 24 ] The general mechanism is further supported by List's finding that in a reaction carried out in labeled water (H 2 18 O), the oxygen isotope finds its way into the reaction product. [ 25 ] The Hajos and Parrish experiment with a stoichiometric amount of labeled water (H 2 18 O) supported the carbinolamine mechanism.
[2] In the same study [20] the reaction of proline with acetone to the oxazolidinone (in DMSO ) was examined: The equilibrium constant for this reaction is only 0.12, leading List to conclude that the involvement of oxazolidinone is only parasitic. Blackmond in 2004 also found oxazolidinones as intermediates (NMR) in a related proline-catalysed α-aminooxylation of propanal with nitrosobenzene : [ 26 ] Chiong Teck Wong of the Institute of High Performance Computing Singapore studied the similar oxyamination reaction of nitrosobenzene with butanal using a chiral prolinol silyl ether catalyst. [ 27 ] His studies strongly suggest that the catalyst generates the enol , and forms an enol-catalyst complex. Nitrosobenzene subsequently reacts with the enol-catalyst complex to afford the (S)-N-nitroso aldol product in agreement with Pauling’s chart of electronegativity . Sodium borohydride reduction of the primarily formed aldol products gave the corresponding alcohols in good yield and excellent enantioselectivity in the ratio of P N /P O =>99:1 as shown in the Scheme below. Wong suggests that the reaction mechanism of the (S)-Cat catalyzed N-nitroso aldol reaction between nitrosobenzene and butanal proceeds via an enol intermediate and not via an enamine intermediate. The view of oxazolidinones as a parasitic species is contested by Seebach and Eschenmoser , who in 2007 published an article [ 28 ] in which they argue that oxazolidinones in fact play a pivotal role in proline catalysis. One of the things they did was to react an oxazolidinone with the activated aldehyde chloral in an aldol addition: In 2008, Barbas in an essay addressed the question why it took until the year 2000 before interest regenerated in this seemingly simple reaction, 30 years after the pioneering work by Hajos and Parrish, and why the proline catalysis mechanism appeared to be an enigma for so long.
[ 29 ] One explanation has to do with different scientific cultures: a proline mechanism in the context of aldolase catalysis already postulated in 1964 by a biochemist [ 30 ] was ignored by organic chemists. Another part of the explanation was the presumed complexity of aldolase catalysis that dominated chemical thinking for a long time. Finally, research did not expand in this area at Hoffmann-La Roche after the resignation of ZGH in November, 1970. The name for this reaction took some time to develop. In 1985 Professor Agami and associates were the first to name the proline catalyzed Robinson annulation the Hajos-Parrish reaction. [ 31 ] In 1986 Professor Henri B. Kagan and Professor Agami [ 32 ] still called it the Hajos-Parrish reaction in the Abstract of this paper. In 2001 Kagan published a paper entitled "Nonlinear Effects in Asymmetric Catalysis: A Personal Account" in Synlett . [ 33 ] In this paper he introduced the new title the Hajos-Parrish-Wiechert reaction. In 2002 Benjamin List added two more names and introduced the term Hajos–Parrish–Eder–Sauer–Wiechert reaction. [ 34 ] Scientific papers published as late as 2008 in the field of organocatalysis use either the 1985, 2001 or 2002 names of the reaction. A June, 2014 search limited to the years 2009–2014 by Google Scholar returns 44 hits for Hajos-Parrish reaction, 3 for Hajos-Parrish-Wiechert reaction and 184 for Hajos–Parrish–Eder–Sauer–Wiechert reaction. The term 'Hajos-Parrish ketone' (and similar) remains common, however.
https://en.wikipedia.org/wiki/Hajos–Parrish–Eder–Sauer–Wiechert_reaction
In group theory , Hajós's theorem states that if a finite abelian group is expressed as the Cartesian product of simplexes , that is, sets of the form {e, a, a^2, …, a^(s−1)} where e is the identity element, then at least one of the factors is a subgroup . The theorem was proved by the Hungarian mathematician György Hajós in 1941 using group rings . Rédei later proved the statement when the factors are only required to contain the identity element and be of prime cardinality. Rédei's proof of Hajós's theorem was simplified by Tibor Szele . An equivalent statement on homogeneous linear forms was originally conjectured by Hermann Minkowski . A consequence is Minkowski's conjecture on lattice tilings , which says that in any lattice tiling of space by cubes, there are two cubes that meet face to face. Keller's conjecture is the same conjecture for non-lattice tilings, which turns out to be false in high dimensions.
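The statement can be checked by brute force for small groups. The sketch below is illustrative (all function names are my own): it writes the cyclic group Z_4 additively, factors it as {0, 1} + {0, 2}, verifies that every element arises from exactly one choice of summands, and confirms that one factor, {0, 2}, is indeed a subgroup:

```python
from itertools import product

def simplex(a, s, n):
    # The simplex {e, a, a^2, ..., a^(s-1)} in additive notation:
    # {0, a, 2a, ..., (s-1)a} taken mod n.
    return [(k * a) % n for k in range(s)]

def is_factorization(factors, n):
    # The factors tile Z_n iff every element of Z_n arises exactly once
    # as a sum of one element from each factor.
    sums = [sum(t) % n for t in product(*factors)]
    return sorted(sums) == list(range(n))

def is_subgroup(S, n):
    # A finite subset of Z_n is a subgroup iff it contains 0 and is
    # closed under addition mod n.
    return 0 in S and all((a + b) % n in S for a in S for b in S)

# Z_4 = {0, 1} + {0, 2}: a factorization into two simplexes.
A = simplex(1, 2, 4)   # [0, 1]
B = simplex(2, 2, 4)   # [0, 2]
assert is_factorization([A, B], 4)
# As the theorem requires, at least one factor ({0, 2}) is a subgroup.
assert any(is_subgroup(set(F), 4) for F in (A, B))
```

Here {0, 1} is not a subgroup (1 + 1 = 2 falls outside it), so the subgroup guaranteed by the theorem is the other factor.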
https://en.wikipedia.org/wiki/Hajós's_theorem
In chemistry , the Halcon process refers to technology for the production of propylene oxide by oxidation of propylene with tert-butyl hydroperoxide . The reaction requires metal catalysts , which typically contain molybdenum : [ 1 ] The byproduct tert-butanol is recycled or converted to other useful compounds. The process once operated at the scale of >2 billion kg/y. The lighter analogue of propylene oxide, ethylene oxide , is produced by silver-catalyzed reaction of ethylene with oxygen . Attempts to apply this relatively simple technology to the conversion of propylene to propylene oxide fail; instead, combustion predominates. The problems are attributed to the sensitivity of allylic C-H bonds. The oxidation is thought to proceed by formation of Mo(η 2 -O 2 -tert-Bu) complexes. The peroxy O center is rendered highly electrophilic, leading to attack on the alkene. [ 2 ] The Halcon process was developed by Halcon International . [ 3 ]
https://en.wikipedia.org/wiki/Halcon_process
Haldane's dilemma , also known as the waiting time problem , [ 1 ] is a limit on the speed of beneficial evolution , calculated by J. B. S. Haldane in 1957. Before the invention of DNA sequencing technologies, it was not known how much polymorphism DNA harbored, although alloenzymes (variant forms of an enzyme which differ structurally but not functionally from other alloenzymes coded for by different alleles at the same locus) were beginning to make it clear that substantial polymorphism existed. This was puzzling because the amount of polymorphism known to exist seemed to exceed the theoretical limits that Haldane calculated, that is, the limits imposed if polymorphisms present in the population generally influence an organism's fitness. Motoo Kimura 's landmark paper on neutral theory in 1968 [ 2 ] built on Haldane's work to suggest that most molecular evolution is neutral, resolving the dilemma. Although neutral evolution remains the consensus theory among modern biologists, [ 3 ] and thus Kimura's resolution of Haldane's dilemma is widely regarded as correct, some biologists argue that adaptive evolution explains a large fraction of substitutions in protein coding sequence, [ 4 ] and they propose alternative solutions to Haldane's dilemma. In the introduction to The Cost of Natural Selection Haldane writes that it is difficult for breeders to simultaneously select all the desired qualities, partly because the required genes may not be found together in the stock; but, he writes, [ 5 ] especially in slowly breeding animals such as cattle, one cannot cull even half the females, even though only one in a hundred of them combines the various qualities desired. [ 5 ] That is, the problem for the cattle breeder is that keeping only the specimens with the desired qualities will lower the reproductive capability too much to keep a useful breeding stock. Haldane states that this same problem arises with respect to natural selection. 
Characters that are positively correlated at one time may be negatively correlated at a later time, so simultaneous optimization of more than one character is a problem also in nature. And, as Haldane writes, [ 5 ] [i]n this paper I shall try to make quantitative the fairly obvious statement that natural selection cannot occur with great intensity for a number of characters at once unless they happen to be controlled by the same genes. [ 5 ] In faster breeding species there is less of a problem. Haldane mentions the peppered moth , Biston betularia , whose variation in pigmentation is determined by several alleles at a single gene. [ 5 ] [ 6 ] One of these alleles, "C", is dominant to all the others, and any CC or Cx moths are dark (where "x" is any other allele). Another allele, "c", is recessive to all the others, and cc moths are light. Against the originally pale lichens the darker moths were easier for birds to pick out, but in areas where pollution had darkened the lichens, the cc moths became rare. Haldane mentions that in a single day the frequency of cc moths might be halved. Another potential problem is that if "ten other independently inherited characters had been subject to selection of the same intensity as that for colour, only (1/2)^10, or one in 1024, of the original genotype would have survived." The species would most likely have become extinct; but it might well survive ten such selective periods of comparable intensity if they happened in different centuries. [ 5 ] Haldane proceeds to define the intensity of selection regarding "juvenile survival" (that is, survival to reproductive age) as I = ln(s_0/S), where s_0 is the proportion of those with the optimal genotype (or genotypes) that survive to reproduce, and S is the proportion of the entire population that similarly survives.
The proportion of the entire population that dies without reproducing is thus 1 − S, and this would have been 1 − s_0 if all genotypes had survived as well as the optimal. Hence s_0 − S is the proportion of "genetic" deaths due to selection. As Haldane mentions, if s_0 ≈ S, then I ≈ s_0 − S. [ 7 ] Haldane writes: I shall investigate the following case mathematically. A population is in equilibrium under selection and mutation. One or more genes are rare because their appearance by mutation is balanced by natural selection. A sudden change occurs in the environment, for example, pollution by smoke, a change of climate, the introduction of a new food source, predator, or pathogen, and above all migration to a new habitat. It will be shown later that the general conclusions are not affected if the change is slow. The species is less adapted to the new environment, and its reproductive capacity is lowered. It is gradually improved as a result of natural selection. But meanwhile, a number of deaths, or their equivalents in lowered fertility, have occurred. If selection at the i-th selected locus is responsible for d_i of these deaths in any generation, the reproductive capacity of the species will be ∏(1 − d_i) of that of the optimal genotype, or exp(−∑ d_i) nearly, if every d_i is small. Thus the intensity of selection approximates to ∑ d_i. [ 5 ] Comparing to the above, we have that d_i = s_0i − S, if we say that s_0i is the quotient of deaths for the i-th selected locus and S is again the quotient of deaths for the entire population.
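The relation between Haldane's intensity of selection and the two survival proportions can be checked numerically. A minimal sketch in Python (the values s0 = 0.95 and S = 0.90 are illustrative, not taken from Haldane's paper):

```python
import math

# Illustrative values: 95% of optimal genotypes survive to reproduce,
# versus 90% of the population as a whole.
s0, S = 0.95, 0.90

I = math.log(s0 / S)        # Haldane's intensity of selection I = ln(s0/S)
print(I)                    # ≈ 0.054

# When s0 and S are close (and both near 1), I is approximated by s0 - S:
print(s0 - S)               # ≈ 0.05
```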
The problem statement is therefore that the alleles in question are not particularly beneficial under the previous circumstances; but a change in environment favors these genes by natural selection. The individuals without the genes are therefore disfavored, and the favorable genes spread in the population through the death (or lowered fertility) of the individuals without them. Note that Haldane's model as stated here allows for more than one gene to move towards fixation at a time; but each such gene adds to the cost of substitution. The total cost of substitution of the i-th gene is the sum D_i of all values of d_i over all generations of selection; that is, until fixation of the gene. Haldane states that he will show that D_i depends mainly on p_0, the small frequency of the gene in question as selection begins – that is, at the time that the environmental change occurs (or begins to occur). [ 5 ] Let A and a be two alleles with frequencies p_n and q_n in the n-th generation. Their relative fitness is given by [ 5 ] where 0 ≤ K ≤ 1 and 0 ≤ λ ≤ 1. If λ = 0, then Aa has the same fitness as AA , e.g. if Aa is phenotypically equivalent with AA ( A dominant), and if λ = 1, then Aa has the same fitness as aa , e.g. if Aa is phenotypically equivalent with aa ( A recessive). In general, λ indicates how close in fitness Aa is to aa . The fraction of selective deaths in the n-th generation then follows from these fitnesses, and the total number of deaths is the population size multiplied by the sum of these fractions over all generations. Haldane approximates this sum by taking its continuum limit.
[ 5 ] This is done by multiplying and dividing by dq so that the sum is expressed in integral form. Substituting q = 1 − p, the cost (given by the total number of deaths, D, required to make a substitution) is obtained. Assuming λ < 1, this gives an expression whose last approximation assumes p_0 to be small; the case λ = 1 gives a different expression. In his discussion Haldane writes that the substitution cost, if it is paid by juvenile deaths, "usually involves a number of deaths equal to about 10 or 20 times the number in a generation" – the minimum being the population size (= "the number in a generation") and rarely being 100 times that number. Haldane assumes 30 to be the mean value. [ 5 ] Assuming substitution of genes to take place slowly, one gene at a time over n generations, the fitness of the species will fall below the optimum (achieved when the substitution is complete) by a factor of about 30/n, so long as this is small – small enough to prevent extinction. Haldane doubts that high intensities – such as in the case of the peppered moth – have occurred frequently and estimates that n = 300 is a probable number of generations. This gives a selection intensity of I = 30/300 = 0.1. Haldane then continues: [ 5 ] The number of loci in a vertebrate species has been estimated at about 40,000. 'Good' species, even when closely related, may differ at several thousand loci, even if the differences at most of them are very slight. But it takes as many deaths, or their equivalents, to replace a gene by one producing a barely distinguishable phenotype as by one producing a very different one. If two species differ at 1000 loci, and the mean rate of gene substitution, as has been suggested, is one per 300 generations, it will take 300,000 generations to generate an interspecific difference.
It may take a good deal more, for if an allele a1 is replaced by a10, the population may pass through stages where the commonest genotype is a1a1, a2a2, a3a3, and so on, successively, the various alleles in turn giving maximal fitness in the existing environment and the residual environment. [ 5 ] The number 300 of generations is a conservative estimate for a slowly evolving species not at the brink of extinction by Haldane's calculation. For a difference of at least 1,000 genes, 300,000 generations might be needed – maybe more, if some gene runs through more than one optimisation. Apparently the first use of the term "Haldane's dilemma" was by paleontologist Leigh Van Valen in his 1963 paper "Haldane's Dilemma, Evolutionary Rates, and Heterosis". Van Valen writes: [ 8 ] Haldane (1957 [= The Cost of Natural Selection ]) drew attention to the fact that in the process of the evolutionary substitution of one allele for another, at any intensity of selection and no matter how slight the importance of the locus, a substantial number of individuals would usually be lost because they did not already possess the new allele. Kimura (1960, 1961) has referred to this loss as the substitutional (or evolutional) load, but because it necessarily involves either a completely new mutation or (more usually) previous change in the environment or the genome, I like to think of it as a dilemma for the population: for most organisms, rapid turnover in a few genes precludes rapid turnover in the others. A corollary of this is that, if an environmental change occurs that necessitates the rather rapid replacement of several genes if a population is to survive, the population becomes extinct. [ 8 ] That is, since a high number of deaths are required to fix one gene rapidly, and dead organisms do not reproduce, fixation of more than one gene simultaneously would conflict. 
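Haldane's cost can also be illustrated by directly summing the per-generation fractions of selective deaths until the favoured allele is fixed. The sketch below assumes the usual reading of Haldane's fitness scheme (genotype fitnesses AA : 1, Aa : 1 − λK, aa : 1 − K, consistent with the description of λ above); the parameter values are illustrative:

```python
def substitution_cost(p0, K=0.01, lam=0.0, q_stop=1e-3):
    """Sum the per-generation fractions of selective deaths until the
    favoured allele A is (nearly) fixed.

    Assumes genotype fitnesses AA: 1, Aa: 1 - lam*K, aa: 1 - K, the
    usual reading of Haldane's model (consistent with the text above).
    """
    p, total = p0, 0.0
    while 1.0 - p > q_stop:
        q = 1.0 - p
        w_bar = 1.0 - K * q * (2.0 * lam * p + q)   # mean fitness
        total += 1.0 - w_bar                        # selective deaths this generation
        # allele-frequency change under selection
        p = (p * p + p * q * (1.0 - lam * K)) / w_bar
    return total

# For a dominant favoured allele (lam = 0) under weak selection, the total
# cost depends mainly on p0, growing roughly as ln(1/p0), and is nearly
# independent of K.
for p0 in (1e-2, 1e-4):
    print(p0, substitution_cost(p0))    # ≈ 4.6 and ≈ 9.2
```

This illustrates Haldane's point that D depends mainly on p_0: halving the selection coefficient roughly doubles the number of generations needed, but leaves the total cost almost unchanged.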
Note that Haldane's model assumes independence of genes at different loci; if the selection intensity is 0.1 for each gene moving towards fixation, and there are N such genes, then the reproductive capacity of the species will be lowered to 0.9^N times the original capacity. Therefore, if it is necessary for the population to fix more than one gene, it may not have the reproductive capacity to counter the deaths. Various models allow evolution at rates above Haldane's limit. J. A. Sved [ 9 ] showed that a threshold model of selection, where individuals with a phenotype below the threshold die and individuals with a phenotype above the threshold are all equally fit, allows for a greater substitution rate than Haldane's model (no obvious upper limit was found, though tentative ways of calculating one, e.g. via the death rate, were examined). John Maynard Smith [ 10 ] and Peter O'Donald [ 11 ] followed on the same track. Additionally, the effects of density-dependent processes, epistasis, and soft selective sweeps on the maximum rate of substitution have been examined. [ 12 ] By looking at the polymorphisms within species and the divergence between species, an estimate can be obtained for the fraction of substitutions that occur due to selection. This parameter is generally called alpha (hence DFE-alpha), and appears to be large in some species, although almost all approaches suggest that the human–chimp divergence was primarily neutral. However, if divergence between Drosophila species were as adaptive as the alpha parameter suggests, then it would exceed Haldane's limit.
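The compounding of independent per-locus costs can be made concrete with a small sketch, using the 0.1 intensity from the text:

```python
import math

def reproductive_capacity(intensity, n_genes):
    """Relative reproductive capacity when n_genes independent loci are
    each under selection of the given intensity."""
    return (1.0 - intensity) ** n_genes

print(reproductive_capacity(0.1, 1))    # 0.9
print(reproductive_capacity(0.1, 10))   # ≈ 0.35
print(reproductive_capacity(0.1, 50))   # ≈ 0.005

# Haldane's exp(-sum d_i) approximation for small d_i:
print(math.exp(-0.1 * 10))              # ≈ 0.37, close to 0.9**10
```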
https://en.wikipedia.org/wiki/Haldane's_dilemma
Haldane's rule is an observation about the early stage of speciation , formulated in 1922 by the British evolutionary biologist J. B. S. Haldane , that states that if — in a species hybrid — only one sex is inviable or sterile , that sex is more likely to be the heterogametic sex . The heterogametic sex is the one with two different sex chromosomes ; in therian mammals , [ a ] for example, this is the male. [ 2 ] Haldane himself described the rule as: When in the F1 offspring of two different animal races one sex is absent, rare, or sterile, that sex is the heterozygous sex (heterogametic sex). [ 3 ] Haldane's rule applies to the vast majority of species that have heterogametic chromosomal sex determination (e.g. XX females vs. XY males, or ZW females vs. ZZ males). The rule includes both male heterogametic ( XY or XO-type sex determination , such as found in mammals and Drosophila fruit flies) and female heterogametic ( ZW or Z0-type sex determination , as found in birds and butterflies ), and some dioecious plants such as campions . [ 4 ] Hybrid dysfunction (sterility and inviability) is a major form of post- zygotic reproductive isolation , which occurs in early stages of speciation. Evolution can produce a similar pattern of isolation in a vast array of different organisms. However, the actual mechanisms leading to Haldane's rule in different taxa remain largely undefined. Many different hypotheses have been advanced to address the evolutionary mechanisms to produce Haldane's rule. Currently, the most popular explanation for Haldane's rule is the composite hypothesis, which divides Haldane's rule into multiple subdivisions, including sterility, inviability, male heterogamety, and female heterogamety. The composite hypothesis states that Haldane's rule in different subdivisions has different causes. Individual genetic mechanisms may not be mutually exclusive, and these mechanisms may act together to cause Haldane's rule in any given subdivision. 
[ 5 ] [ 6 ] In contrast to these views that emphasize genetic mechanisms, another view hypothesizes that population dynamics during population divergence may cause Haldane's rule. [ 7 ] The main genetic hypotheses are: Data from multiple phylogenetic groups support a combination of dominance and faster X-chromosome theories. [ 9 ] However, it has recently been argued that dominance theory can not explain Haldane's rule in marsupials since both sexes experience the same incompatibilities due to paternal X-inactivation in females. [ 10 ] The dominance hypothesis is the core of the composite theory, and X-linked recessive/dominance effects have been demonstrated in many cases to cause hybrid incompatibilities. There is also supporting evidence for the faster male and meiotic drive hypotheses. For example, a significant reduction of male-driven gene flow is observed in Asian elephants , suggesting faster evolution of male traits. [ 11 ] Although the rule was initially stated in context of diploid organisms with chromosomal sex determination , it has recently been argued that it can be extended to certain species lacking chromosomal sex determination, such as haplodiploids [ 12 ] and hermaphrodites . [ 9 ] In some instances, the homogametic sex turns out to be inviable while the heterogametic sex is viable and fertile. This is seen in some Drosophila fruit flies. [ 13 ] [ 14 ]
https://en.wikipedia.org/wiki/Haldane's_rule
Haldane's sieve is a concept in population genetics named after the British geneticist J. B. S. Haldane . It refers to the fact that dominant advantageous alleles are more likely to fix in a population than recessive ones. [ 1 ] Haldane's sieve is particularly relevant in situations where the effects of natural selection are strong and the beneficial mutations have a significant impact on an organism's fitness. According to Haldane's sieve, when a new advantageous mutation arises in a population, it initially occurs as a single copy (a de novo mutation ), borne by a heterozygous individual. Genetic dominance is therefore important for estimating the fate of new mutations, that is, whether they will fix or go extinct. Dominant alleles are exposed to directional selection from the moment they appear, while still rare, and thus they are more likely to fix as a result of a " hard sweep ". The term "sieve" in Haldane's sieve metaphorically represents this filtering effect of natural selection. When adaptation stems from the species' pool of standing genetic variation, a " soft sweep ", the rationale does not apply, because the allele is no longer rare at the beginning of the sweep. In fact, recessive alleles are more likely to sweep than dominant alleles when the alleles were previously maintained in the population. [ 2 ] Limited dispersal and population structure can reduce the effects of Haldane's sieve. In subdivided populations, limited dispersal increases inbreeding and homozygosity, allowing recessive alleles to express their beneficial effects more frequently and thus accelerating their fixation. This effect is most pronounced when dispersal is strongly limited (e.g., F_ST > 0.2). [ 3 ] [ 4 ] Haldane's sieve has important implications for understanding the dynamics of adaptation and evolution in diploid populations. It highlights the role of natural selection in driving genetic changes in the presence of genetic dominance.
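The sieve can be illustrated with Kimura's diffusion approximation for the fixation probability of a new mutation. This formula is standard population genetics rather than something cited in the article, and the values of N, s, and the dominance coefficient h below are illustrative:

```python
import math

def fixation_probability(N, s, h, p, steps=200_000):
    """Kimura's diffusion approximation for the probability that an
    allele at frequency p fixes, with genotype fitnesses
    1 : 1 + h*s : 1 + s (h is the dominance coefficient).

    u(p) = int_0^p psi(x) dx / int_0^1 psi(x) dx, with
    psi(x) = exp(-4*N*s*(h*x + (1 - 2*h) * x**2 / 2)).
    """
    def psi(x):
        return math.exp(-4.0 * N * s * (h * x + (1.0 - 2.0 * h) * x * x / 2.0))

    dx = 1.0 / steps
    num = den = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx          # midpoint rule
        w = psi(x) * dx
        den += w
        if x < p:
            num += w
    return num / den

N, s = 500, 0.02
p0 = 1.0 / (2 * N)                  # a single new mutant copy
u_dominant = fixation_probability(N, s, h=1.0, p=p0)
u_recessive = fixation_probability(N, s, h=0.0, p=p0)
print(u_dominant, u_recessive)      # the dominant allele fixes far more often
```

With these illustrative parameters, the fully dominant mutation fixes several times more often than the fully recessive one, which is the filtering effect the sieve describes.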
https://en.wikipedia.org/wiki/Haldane's_sieve
In mathematics , the Hales–Jewett theorem [ 1 ] is a fundamental combinatorial result of Ramsey theory named after Alfred W. Hales and Robert I. Jewett , concerning the degree to which high-dimensional objects must necessarily exhibit some combinatorial structure. An informal geometric statement of the theorem is that for any positive integers n and c there is a number H such that if the cells of an H -dimensional n × n × n ×...× n cube are colored with c colors, there must be one row, column, or certain diagonal (more details below) of length n all of whose cells are the same color. In other words, assuming n and c are fixed, the higher-dimensional, multi-player, n -in-a-row generalization of a game of tic-tac-toe with c players cannot end in a draw, no matter how large n is, no matter how many people c are playing, and no matter which player plays each turn, provided only that it is played on a board of sufficiently high dimension H . By a standard strategy-stealing argument , one can thus conclude that if two players alternate, then the first player has a winning strategy when H is sufficiently large, though no practical algorithm for obtaining this strategy is known. Let W_n^H be the set of words of length H over an alphabet with n letters; that is, the set of sequences over {1, 2, ..., n } of length H . This set forms the hypercube that is the subject of the theorem. A variable word w ( x ) over W_n^H still has length H but includes the special element x in place of at least one of the letters. The words w (1), w (2), ..., w ( n ) obtained by replacing all instances of the special element x with 1, 2, ..., n , form a combinatorial line in the space W_n^H ; combinatorial lines correspond to rows, columns, and (some of the) diagonals of the hypercube .
The Hales–Jewett theorem then states that for given positive integers n and c , there exists a positive integer H , depending on n and c , such that for any partition of W_n^H into c parts, there is at least one part that contains an entire combinatorial line. For example, take n = 3, H = 2, and c = 2. The hypercube W_n^H in this case is just the standard tic-tac-toe board, with nine positions: A typical combinatorial line would be the word 2x, which corresponds to the line 21, 22, 23; another combinatorial line is xx, which is the line 11, 22, 33. (Note that the line 13, 22, 31, while a valid line for the game tic-tac-toe , is not considered a combinatorial line.) In this particular case, the Hales–Jewett theorem does not apply; it is possible to divide the tic-tac-toe board into two sets, e.g. {11, 22, 23, 31} and {12, 13, 21, 32, 33}, neither of which contains a combinatorial line (and this would correspond to a draw in the game of tic-tac-toe ). On the other hand, if we increase H to, say, 8 (so that the board is now eight-dimensional, with 3^8 = 6561 positions), and partition this board into two sets (the "noughts" and "crosses"), then one of the two sets must contain a combinatorial line (i.e. no draw is possible in this variant of tic-tac-toe ). For a proof, see below. We now prove the Hales–Jewett theorem in the special case n = 3, c = 2, H = 8 discussed above. The idea is to reduce this task to that of proving simpler versions of the Hales–Jewett theorem (in this particular case, to the cases n = 2, c = 2, H = 2 and n = 2, c = 6, H = 6). One can prove the general case of the Hales–Jewett theorem by similar methods, using mathematical induction . Each element of the hypercube W_3^8 is a string of eight numbers from 1 to 3, e.g. 13211321 is an element of the hypercube. We are assuming that this hypercube is completely filled with "noughts" and "crosses".
We shall use a proof by contradiction and assume that neither the set of noughts nor the set of crosses contains a combinatorial line. If we fix the first six elements of such a string and let the last two vary, we obtain an ordinary tic-tac-toe board; for instance "132113??" gives such a board. For each such board "abcdef??", we consider the positions "abcdef11", "abcdef12", "abcdef22". Each of these must be filled with either a nought or a cross, so by the pigeonhole principle two of them must be filled with the same symbol. Since any two of these positions are part of a combinatorial line, the third element of that line must be occupied by the opposite symbol (since we are assuming that no combinatorial line has all three elements filled with the same symbol). In other words, for each choice of "abcdef" (which can be thought of as an element of the six-dimensional hypercube W_3^6 ), there are six (overlapping) possibilities – one for each choice of which pair of the three positions shares a symbol, and of whether that shared symbol is a nought or a cross. Thus we can partition the six-dimensional hypercube W_3^6 into six classes, corresponding to each of the above six possibilities. (If an element abcdef obeys multiple possibilities, we can choose one arbitrarily, e.g. by choosing the highest one on the above list.) Now consider the seven elements 111111, 111112, 111122, 111222, 112222, 122222, 222222 in W_3^6 . By the pigeonhole principle , two of these elements must fall into the same class. Suppose for instance 111112 and 112222 fall into class (5), thus 11111211, 11111222, 11222211, 11222222 are crosses and 11111233, 11222233 are noughts. But now consider the position 11333233, which must be filled with either a cross or a nought. If it is filled with a cross, then the combinatorial line 11xxx2xx is filled entirely with crosses, contradicting our hypothesis. If instead it is filled with a nought, then the combinatorial line 11xxx233 is filled entirely with noughts, again contradicting our hypothesis. The same argument applies if any other two of the above seven elements of W_3^6 fall into the same class.
Since we have a contradiction in all cases, the original hypothesis must be false; thus there must exist at least one combinatorial line consisting entirely of noughts or entirely of crosses. The above argument was somewhat wasteful; in fact the same theorem holds for H = 4. [ 2 ] If one extends the above argument to general values of n and c , then H will grow very fast; even when c = 2 (which corresponds to two-player tic-tac-toe ) the H given by the above argument grows as fast as the Ackermann function . The first primitive recursive bound is due to Saharon Shelah , [ 3 ] and is still the best known bound in general for the Hales–Jewett number H = H ( n , c ). Observe that the above argument also gives the following corollary: if we let A be the set of all eight-digit numbers whose digits are all either 1, 2, or 3 (thus A contains numbers such as 11333233), and we color A with two colors, then A contains at least one arithmetic progression of length three, all of whose elements are the same color. This is simply because all of the combinatorial lines appearing in the above proof of the Hales–Jewett theorem also form arithmetic progressions in decimal notation . A more general formulation of this argument can be used to show that the Hales–Jewett theorem generalizes van der Waerden's theorem . Indeed, the Hales–Jewett theorem is a substantially stronger theorem. Just as van der Waerden's theorem has a stronger density version in Szemerédi's theorem , the Hales–Jewett theorem also has a density version. In this strengthened version of the Hales–Jewett theorem, instead of coloring the entire hypercube W_n^H with c colors, one is given an arbitrary subset A of the hypercube W_n^H with some given density 0 < δ < 1. The theorem states that if H is sufficiently large depending on n and δ, then the set A must necessarily contain an entire combinatorial line. The density Hales–Jewett theorem was originally proved by Furstenberg and Katznelson using ergodic theory .
[ 4 ] In 2009, the Polymath Project developed a new proof [ 5 ] [ 6 ] of the density Hales–Jewett theorem based on ideas from the proof of the corners theorem . [ 7 ] Dodos, Kanellopoulos, and Tyros gave a simplified version of the Polymath proof. [ 8 ] The Hales–Jewett theorem is generalized by the Graham–Rothschild theorem , on higher-dimensional combinatorial cubes .
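The n = 3, H = 2 example above is small enough to verify by brute force. A sketch in Python that enumerates every combinatorial line of the tic-tac-toe board and checks the two-set partition stated in the text:

```python
from itertools import product

def combinatorial_lines(n, H):
    """All combinatorial lines of W_n^H: for every variable word over
    {1..n, 'x'} containing at least one 'x', substitute 1..n for every
    'x' simultaneously."""
    lines = []
    for w in product(list(range(1, n + 1)) + ["x"], repeat=H):
        if "x" not in w:
            continue
        lines.append(tuple(
            tuple(a if c == "x" else c for c in w) for a in range(1, n + 1)
        ))
    return lines

lines = combinatorial_lines(3, 2)
print(len(lines))       # 7: the variable words x1, x2, x3, 1x, 2x, 3x, xx

# The two-set partition from the text, written as (row, column) cells:
noughts = {(1, 1), (2, 2), (2, 3), (3, 1)}
crosses = {(1, 2), (1, 3), (2, 1), (3, 2), (3, 3)}

mono = [line for line in lines
        if set(line) <= noughts or set(line) <= crosses]
print(mono)             # []: neither set contains a combinatorial line
```

The enumeration also confirms that the tic-tac-toe anti-diagonal 13, 22, 31 never arises as a combinatorial line.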
https://en.wikipedia.org/wiki/Hales–Jewett_theorem
A half-carry flag (also known as an auxiliary flag ) is a condition flag bit in the status register of many CPU families, such as the Intel 8080 , Zilog Z80 , the x86 , [ 1 ] and the Atmel AVR series, among others. It indicates when a carry or borrow has been generated out of the least significant four bits of the accumulator register following the execution of an arithmetic instruction. It is primarily used in decimal ( BCD ) arithmetic instructions. Normally, a processor that uses binary arithmetic (which includes almost all modern CPUs) will add two 8-bit byte values according to the rules of simple binary addition. For example, adding 25₁₆ and 48₁₆ produces 6D₁₆. However, for binary-coded decimal (BCD) values, where each 4-bit nibble represents a decimal digit, addition is more complicated. For example, when adding the decimal values 25 and 48, which are encoded as the BCD values 25₁₆ and 48₁₆, the binary addition of the two values produces 6D₁₆. Since the lower nibble of this value is a non-decimal digit (D), it must be adjusted by adding 06₁₆ to produce the correct BCD result of 73₁₆, which represents the decimal value 73. Likewise, adding the BCD values 39₁₆ and 48₁₆ produces 81₁₆. This result does not have a non-decimal low nibble, but it does cause a carry out of the least significant digit (lower four bits) into the most significant digit (upper four bits). This is indicated by the CPU setting the half-carry flag. This value must also be corrected, by adding 06₁₆ to 81₁₆ to produce a corrected BCD result of 87₁₆. Finally, if an addition results in a non-decimal high digit, then 60₁₆ must be added to the value to produce the correct BCD result. For example, adding 72₁₆ and 73₁₆ produces E5₁₆. Since the most significant digit of this sum is non-decimal (E), adding 60₁₆ to it produces a corrected BCD result of 145₁₆. (Note that the leading 1 digit is actually a carry bit .)
Summarizing: if the result of a binary addition contains a non-decimal low digit or causes the half-carry flag to be set, the result must be corrected by adding 06₁₆ to it; if the result then contains a non-decimal high digit, it must be further corrected by adding 60₁₆ to produce the correct final BCD value. The Auxiliary Carry Flag (AF) is a CPU flag in the FLAGS register of all x86 -compatible CPUs , [ 2 ] and the preceding 8080 family . It has occasionally been called the Adjust Flag by Intel. [ 3 ] The flag bit is located at position 4 in the CPU flag register. It indicates when an arithmetic carry or borrow has been generated out of the four least significant bits, or lower nibble. It is primarily used to support binary-coded decimal (BCD) arithmetic. The Auxiliary Carry flag is set (to 1) if, during an addition, there is a carry from the low nibble (lowest four bits) to the high nibble (upper four bits) of the low-order 8-bit portion, or if, during a subtraction, there is a borrow from the high nibble to the low nibble. Otherwise, if no such carry or borrow occurs, the flag is cleared or "reset" (set to 0). [ 4 ]
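The correction procedure summarized above, essentially what a decimal-adjust instruction such as the 8080/Z80 DAA performs after an addition, can be modeled in a few lines of Python (an illustrative sketch, not a cycle-accurate emulation):

```python
def bcd_add(a, b):
    """Add two packed-BCD bytes, returning (result_byte, carry).

    Mirrors the adjustment described in the text: +06h when the low
    nibble leaves decimal range or a half-carry occurred, +60h when the
    high nibble leaves decimal range or a carry occurred.
    """
    s = a + b
    # Half-carry: carry out of bit 3, i.e. out of the low nibble.
    half_carry = ((a & 0x0F) + (b & 0x0F)) > 0x0F
    if half_carry or (s & 0x0F) > 0x09:
        s += 0x06
    carry = s > 0xFF
    if carry or (s & 0xF0) > 0x90:
        s += 0x60
        carry = True
    return s & 0xFF, carry

# The three examples from the text:
print(hex(bcd_add(0x25, 0x48)[0]))   # 0x73 (25 + 48 = 73)
print(hex(bcd_add(0x39, 0x48)[0]))   # 0x87 (39 + 48 = 87)
print(bcd_add(0x72, 0x73))           # result 0x45 with carry set: 72 + 73 = 145
```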
https://en.wikipedia.org/wiki/Half-carry_flag
In electrochemistry , a half-cell is a structure that contains a conductive electrode and a surrounding conductive electrolyte separated by a naturally occurring Helmholtz double layer . Chemical reactions within this layer momentarily pump electric charges between the electrode and the electrolyte, resulting in a potential difference between the electrode and the electrolyte. The typical anode reaction involves a metal atom in the electrode being dissolved and transported as a positive ion across the double layer, causing the electrolyte to acquire a net positive charge while the electrode acquires a net negative charge. The growing potential difference creates an intense electric field within the double layer, and the potential rises in value until the field halts the net charge-pumping reactions. This self-limiting action occurs almost instantly in an isolated half-cell; in applications two dissimilar half-cells are appropriately connected to constitute a Galvanic cell . A standard half-cell consists of a metal electrode in an aqueous solution where the concentration of the metal ions is 1 molar (1 mol/L) at 298 kelvins (25 °C). [ 1 ] In the case of the standard hydrogen electrode (SHE) , a platinum electrode is used and is immersed in an acidic solution where the concentration of hydrogen ions is 1 M, with hydrogen gas at 1 atm being bubbled through the solution. [ 2 ] The electrochemical series , which consists of standard electrode potentials and is closely related to the reactivity series , was generated by measuring the difference in potential between a metal half-cell in a circuit with a standard hydrogen half-cell, connected by a salt bridge . The standard hydrogen half-cell reaction is 2H⁺(aq) + 2e⁻ → H₂(g). The half-cells of a Daniell cell are Zn(s) → Zn²⁺(aq) + 2e⁻ at the anode and Cu²⁺(aq) + 2e⁻ → Cu(s) at the cathode.
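The standard use of half-cell potentials, combining two half-cells into a full cell, can be sketched numerically. The reduction potentials below are rounded textbook values measured against the SHE:

```python
# Standard reduction potentials in volts versus the SHE
# (rounded textbook values).
STANDARD_POTENTIALS = {
    "Cu2+/Cu": +0.34,
    "Zn2+/Zn": -0.76,
    "2H+/H2": 0.00,    # the hydrogen reference electrode itself
}

def cell_potential(cathode, anode):
    """Standard cell potential E_cell = E_cathode - E_anode,
    with both values taken as reduction potentials."""
    return STANDARD_POTENTIALS[cathode] - STANDARD_POTENTIALS[anode]

# Daniell cell: copper half-cell as cathode, zinc half-cell as anode.
print(round(cell_potential("Cu2+/Cu", "Zn2+/Zn"), 2))   # 1.1 (volts)
```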
https://en.wikipedia.org/wiki/Half-cell
Half-life (symbol t ½ ) is the time required for a quantity (of substance) to reduce to half of its initial value. The term is commonly used in nuclear physics to describe how quickly unstable atoms undergo radioactive decay or how long stable atoms survive. The term is also used more generally to characterize any type of exponential (or, rarely, non-exponential ) decay. For example, the medical sciences refer to the biological half-life of drugs and other chemicals in the human body. The converse of half-life (in exponential growth) is doubling time . The original term, half-life period , dating to Ernest Rutherford 's discovery of the principle in 1907, was shortened to half-life in the early 1950s. [ 1 ] Rutherford applied the principle of a radioactive element's half-life in studies of age determination of rocks by measuring the decay period of radium to lead-206 . Half-life is constant over the lifetime of an exponentially decaying quantity, and it is a characteristic unit for the exponential decay equation. The accompanying table shows the reduction of a quantity as a function of the number of half-lives elapsed. A half-life often describes the decay of discrete entities, such as radioactive atoms. In that case, it does not work to use the definition that states "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom, and its half-life is one second, there will not be "half of an atom" left after one second. Instead, the half-life is defined in terms of probability : "Half-life is the time required for exactly half of the entities to decay on average ". In other words, the probability of a radioactive atom decaying within its half-life is 50%. [ 2 ] For example, the accompanying image is a simulation of many identical atoms undergoing radioactive decay. 
Note that after one half-life there are not exactly one-half of the atoms remaining, only approximately , because of the random variation in the process. Nevertheless, when there are many identical atoms decaying (right boxes), the law of large numbers suggests that it is a very good approximation to say that half of the atoms remain after one half-life. Various simple exercises can demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program . [ 3 ] [ 4 ] [ 5 ] An exponential decay can be described by any of the following four equivalent formulas: [ 6 ] : 109–112 N(t) = N₀(1/2)^(t/t½) = N₀ 2^(−t/t½) = N₀ e^(−t/τ) = N₀ e^(−λt), where N₀ is the initial quantity, N(t) is the quantity remaining after time t, t½ is the half-life, τ is the mean lifetime, and λ is the decay constant. The three parameters t½, τ, and λ are directly related in the following way: t½ = ln(2)/λ = τ ln(2), where ln(2) is the natural logarithm of 2 (approximately 0.693). [ 6 ] : 112 In chemical kinetics , the value of the half-life depends on the reaction order . The rate of a zero order reaction does not depend on the substrate concentration , [A]. Thus the concentration decreases linearly: [A] = [A]₀ − kt. In order to find the half-life, we replace the concentration by half the initial concentration, [A]₀/2 = [A]₀ − k t½, and isolate the time: t½ = [A]₀/(2k). This t½ formula indicates that the half-life for a zero order reaction depends on the initial concentration and the rate constant.
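The four equivalent decay formulas and the zero-order half-life can be checked numerically; a small sketch (the numerical values are arbitrary):

```python
import math

t_half = 5730.0                 # an arbitrary half-life (e.g. years)
lam = math.log(2) / t_half      # decay constant
tau = t_half / math.log(2)      # mean lifetime

def remaining(N0, t):
    """The four equivalent forms of the exponential-decay law."""
    return (
        N0 * (1 / 2) ** (t / t_half),
        N0 * 2 ** (-t / t_half),
        N0 * math.exp(-t / tau),
        N0 * math.exp(-lam * t),
    )

values = remaining(1000.0, 10000.0)
print(values)                   # all four forms agree

# Zero-order kinetics: t_half = [A]0 / (2k), so the half-life
# shrinks along with the initial concentration.
k = 0.01
for A0 in (1.0, 0.5):
    print(A0 / (2 * k))         # 50.0, then 25.0
```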
In first-order reactions, the rate of reaction is proportional to the concentration of the reactant, so the concentration decreases exponentially toward zero, {\displaystyle [{\ce {A}}]=[{\ce {A}}]_{0}\exp(-kt)} as time progresses, and the half-life is constant, independent of concentration. The time t ½ for [A] to decrease from [A] 0 to ⁠ 1 / 2 ⁠ [A] 0 in a first-order reaction is given by the following equation: {\displaystyle [{\ce {A}}]_{0}/2=[{\ce {A}}]_{0}\exp(-kt_{1/2})} which can be solved for k t ½ : {\displaystyle kt_{1/2}=-\ln \left({\frac {[{\ce {A}}]_{0}/2}{[{\ce {A}}]_{0}}}\right)=-\ln {\frac {1}{2}}=\ln 2} For a first-order reaction, the half-life of a reactant is independent of its initial concentration. Therefore, if the concentration of A at some arbitrary stage of the reaction is [A] , then it will have fallen to ⁠ 1 / 2 ⁠ [A] after a further interval of ⁠ {\displaystyle {\tfrac {\ln 2}{k}}} ⁠. Hence, the half-life of a first-order reaction is given as the following: {\displaystyle t_{1/2}={\frac {\ln 2}{k}}} The half-life of a first-order reaction is independent of its initial concentration and depends solely on the reaction rate constant, k . In second-order reactions, the rate of reaction is proportional to the square of the concentration.
By integrating this rate, it can be shown that the concentration [A] of the reactant decreases following this formula: {\displaystyle {\frac {1}{[{\ce {A}}]}}=kt+{\frac {1}{[{\ce {A}}]_{0}}}} We replace [A] with ⁠ 1 / 2 ⁠ [A] 0 in order to calculate the half-life of the reactant A: {\displaystyle {\frac {1}{[{\ce {A}}]_{0}/2}}=kt_{1/2}+{\frac {1}{[{\ce {A}}]_{0}}}} and isolate the half-life time ( t ½ ): {\displaystyle t_{1/2}={\frac {1}{[{\ce {A}}]_{0}k}}} This shows that the half-life of a second-order reaction depends on both the initial concentration and the rate constant . Some quantities decay by two exponential-decay processes simultaneously. In this case, the actual half-life T ½ can be related to the half-lives t 1 and t 2 that the quantity would have if each of the decay processes acted in isolation: {\displaystyle {\frac {1}{T_{1/2}}}={\frac {1}{t_{1}}}+{\frac {1}{t_{2}}}} For three or more processes, the analogous formula is: {\displaystyle {\frac {1}{T_{1/2}}}={\frac {1}{t_{1}}}+{\frac {1}{t_{2}}}+{\frac {1}{t_{3}}}+\cdots } For a proof of these formulas, see Exponential decay § Decay by two or more processes . There is a half-life describing any exponential-decay process. The term "half-life" is almost exclusively used for decay processes that are exponential (such as radioactive decay or the other examples above), or approximately exponential (such as biological half-life discussed below). In a decay process that is not even close to exponential, the half-life will change dramatically while the decay is happening.
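The combined-half-life formula 1/T ½ = 1/t 1 + 1/t 2 + ⋯ lends itself to a one-line computation. The following Python sketch (function name illustrative) shows that two processes with half-lives of 6 and 3 time units acting together give an effective half-life of 2, faster than either process alone:

```python
def combined_half_life(*half_lives):
    """Effective half-life when several exponential decay
    processes act simultaneously: 1/T = sum of 1/t_i."""
    return 1.0 / sum(1.0 / t for t in half_lives)

# Two simultaneous processes with half-lives 6 and 3:
print(combined_half_life(6.0, 3.0))   # 2.0
```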
In this situation it is generally uncommon to talk about half-life in the first place, but sometimes people will describe the decay in terms of its "first half-life", "second half-life", etc., where the first half-life is defined as the time required for decay from the initial value to 50%, the second half-life is from 50% to 25%, and so on. [ 7 ] A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration of a substance in blood plasma to reach one-half of its steady-state value (the "plasma half-life"). The relationship between the biological and plasma half-lives of a substance can be complex, due to factors including accumulation in tissues , active metabolites , and receptor interactions. [ 8 ] While a radioactive isotope decays almost perfectly according to first order kinetics, where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics. For example, the biological half-life of water in a human being is about 9 to 10 days, [ 9 ] though this can be altered by behavior and other conditions. The biological half-life of caesium in human beings is between one and four months. The concept of a half-life has also been utilized for pesticides in plants , [ 10 ] and certain authors maintain that pesticide risk and impact assessment models rely on and are sensitive to information describing dissipation from plants. [ 11 ] In epidemiology , the concept of half-life can refer to the length of time for the number of incident cases in a disease outbreak to drop by half, particularly if the dynamics of the outbreak can be modeled exponentially . [ 12 ] [ 13 ]
https://en.wikipedia.org/wiki/Half-life
In geometry , a straight line , usually abbreviated line , is an infinitely long object with no width, depth, or curvature , an idealization of such physical objects as a straightedge , a taut string, or a ray of light . Lines are spaces of dimension one, which may be embedded in spaces of dimension two, three, or higher. The word line may also refer, in everyday life, to a line segment , which is a part of a line delimited by two points (its endpoints ). Euclid's Elements defines a straight line as a "breadthless length" that "lies evenly with respect to the points on itself", and introduced several postulates as basic unprovable properties on which the rest of geometry was established. Euclidean line and Euclidean geometry are terms introduced to avoid confusion with generalizations introduced since the end of the 19th century, such as non-Euclidean , projective , and affine geometry . In the Greek deductive geometry of Euclid's Elements , a general line (now called a curve ) is defined as a "breadthless length", and a straight line (now called a line segment ) was defined as a line "which lies evenly with the points on itself". [ 1 ] : 291 These definitions appeal to readers' physical experience, relying on terms that are not themselves defined, and the definitions are never explicitly referenced in the remainder of the text. In modern geometry, a line is usually either taken as a primitive notion with properties given by axioms , [ 1 ] : 95 or else defined as a set of points obeying a linear relationship, for instance when real numbers are taken to be primitive and geometry is established analytically in terms of numerical coordinates . In an axiomatic formulation of Euclidean geometry, such as that of Hilbert (modern mathematicians added to Euclid's original axioms to fill perceived logical gaps), [ 1 ] : 108 a line is stated to have certain properties that relate it to other lines and points . 
For example, for any two distinct points, there is a unique line containing them, and any two distinct lines intersect at most at one point. [ 1 ] : 300 In two dimensions (i.e., the Euclidean plane ), two lines that do not intersect are called parallel . In higher dimensions, two lines that do not intersect are parallel if they are contained in a plane , or skew if they are not. On a Euclidean plane , a line can be represented as a boundary between two regions. [ 2 ] : 104 Any collection of finitely many lines partitions the plane into convex polygons (possibly unbounded); this partition is known as an arrangement of lines . In three-dimensional space , a first degree equation in the variables x , y , and z defines a plane, so two such equations, provided the planes they give rise to are not parallel, define a line which is the intersection of the planes. More generally, in n -dimensional space n −1 first-degree equations in the n coordinate variables define a line under suitable conditions. In more general Euclidean space , R n (and analogously in every other affine space ), the line L passing through two different points a and b is the subset L = { ( 1 − t ) a + t b ∣ t ∈ R } . {\displaystyle L=\left\{(1-t)\,a+tb\mid t\in \mathbb {R} \right\}.} The direction of the line is from a reference point a ( t = 0) to another point b ( t = 1), or in other words, in the direction of the vector b − a . Different choices of a and b can yield the same line. Three or more points are said to be collinear if they lie on the same line. If three points are not collinear, there is exactly one plane that contains them. 
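The parametric set L = { (1 − t) a + t b | t ∈ R } described above can be sampled directly in any dimension. A minimal Python sketch (names illustrative):

```python
def line_point(a, b, t):
    """Point (1 - t)*a + t*b on the line through a and b in R^n."""
    return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))

a, b = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
print(line_point(a, b, 0.0))   # a itself (t = 0)
print(line_point(a, b, 1.0))   # b itself (t = 1)
print(line_point(a, b, 0.5))   # midpoint (2.5, 4.0, 3.0)
```

Values of t outside [0, 1] give points on the line beyond the segment from a to b.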
In affine coordinates , in n -dimensional space the points X = ( x 1 , x 2 , ..., x n ), Y = ( y 1 , y 2 , ..., y n ), and Z = ( z 1 , z 2 , ..., z n ) are collinear if the matrix {\displaystyle {\begin{bmatrix}1&x_{1}&x_{2}&\cdots &x_{n}\\1&y_{1}&y_{2}&\cdots &y_{n}\\1&z_{1}&z_{2}&\cdots &z_{n}\end{bmatrix}}} has rank less than 3. In particular, for three points in the plane ( n = 2), the above matrix is square and the points are collinear if and only if its determinant is zero. Equivalently for three points in a plane, the points are collinear if and only if the slope between one pair of points equals the slope between any other pair of points (in which case the slope between the remaining pair of points will equal the other slopes). By extension, k points in a plane are collinear if and only if any ( k –1) pairs of points have the same pairwise slopes. In Euclidean geometry , the Euclidean distance d ( a , b ) between two points a and b may be used to express the collinearity between three points by: [ 3 ] [ 4 ] However, there are other notions of distance (such as the Manhattan distance ) for which this property is not true. In the geometries where the concept of a line is a primitive notion , as may be the case in some synthetic geometries , other methods of determining collinearity are needed. In Euclidean geometry, all lines are congruent , meaning that every line can be obtained by moving a specific line. However, lines may play special roles with respect to other geometric objects and can be classified according to that relationship. For instance, with respect to a conic (a circle , ellipse , parabola , or hyperbola ), lines can be: In the context of determining parallelism in Euclidean geometry, a transversal is a line that intersects two other lines that may or may not be parallel to each other.
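For three points in the plane ( n = 2), the determinant test described above reduces to a single expression, (x 2 − x 1 )(y 3 − y 1 ) − (x 3 − x 1 )(y 2 − y 1 ) = 0. A small Python sketch (function name ours, with a tolerance for floating-point input):

```python
def collinear(p, q, r, eps=1e-12):
    """Three points in the plane are collinear iff the determinant
    of [[1, x1, y1], [1, x2, y2], [1, x3, y3]] vanishes."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return abs(det) < eps

print(collinear((0, 0), (1, 1), (2, 2)))   # True
print(collinear((0, 0), (1, 1), (2, 3)))   # False
```

Unlike the slope comparison, the determinant form needs no special case for vertical lines.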
For more general algebraic curves , lines could also be: With respect to triangles we have: For a convex quadrilateral with at most two parallel sides, the Newton line is the line that connects the midpoints of the two diagonals . [ 7 ] For a hexagon with vertices lying on a conic we have the Pascal line and, in the special case where the conic is a pair of lines, we have the Pappus line . Parallel lines are lines in the same plane that never cross. Intersecting lines share a single point in common. Coincidental lines coincide with each other—every point that is on either one of them is also on the other. Perpendicular lines are lines that intersect at right angles . [ 8 ] In three-dimensional space , skew lines are lines that are not in the same plane and thus do not intersect each other. In synthetic geometry , the concept of a line is often considered as a primitive notion , [ 1 ] : 95 meaning it is not defined by using other concepts, but by the properties, called axioms , that it must satisfy. [ 9 ] However, the axiomatic definition of a line does not explain the relevance of the concept and is often too abstract for beginners. So, the definition is often replaced or completed by a mental image or intuitive description that allows understanding what a line is. Such descriptions are sometimes referred to as definitions, but are not true definitions since they cannot be used in mathematical proofs . The "definition" of line in Euclid's Elements falls into this category; [ 1 ] : 95 and is never used in proofs of theorems. Lines in a Cartesian plane or, more generally, in affine coordinates , are characterized by linear equations.
More precisely, every line {\displaystyle L} (including vertical lines) is the set of all points whose coordinates ( x , y ) satisfy a linear equation; that is, {\displaystyle L=\{(x,y)\mid ax+by=c\},} where a , b and c are fixed real numbers (called coefficients ) such that a and b are not both zero. Using this form, vertical lines correspond to equations with b = 0. One can further suppose either c = 1 or c = 0 , by dividing everything by c if it is not zero. There are many variant ways to write the equation of a line, which can all be converted from one to another by algebraic manipulation. The above form is sometimes called the standard form . If the constant term is put on the left, the equation becomes {\displaystyle ax+by-c=0,} and this is sometimes called the general form of the equation. However, this terminology is not universally accepted, and many authors do not distinguish these two forms. These forms are generally named by the type of information (data) about the line that is needed to write down the form. Some of the important data of a line are its slope, x-intercept , known points on the line, and y-intercept. The equation of the line passing through two different points {\displaystyle P_{0}(x_{0},y_{0})} and {\displaystyle P_{1}(x_{1},y_{1})} may be written as {\displaystyle (y-y_{0})(x_{1}-x_{0})=(y_{1}-y_{0})(x-x_{0}).} If x 0 ≠ x 1 , this equation may be rewritten as {\displaystyle y=(x-x_{0})\,{\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}+y_{0}} or {\displaystyle y=x\,{\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}+{\frac {x_{1}y_{0}-x_{0}y_{1}}{x_{1}-x_{0}}}\,.} In two dimensions , the equation for non-vertical lines is often given in the slope–intercept form : {\displaystyle y=mx+b} where m is the slope and b is the y-intercept. The slope of the line through points {\displaystyle A(x_{a},y_{a})} and {\displaystyle B(x_{b},y_{b})} , when {\displaystyle x_{a}\neq x_{b}} , is given by {\displaystyle m=(y_{b}-y_{a})/(x_{b}-x_{a})} and the equation of this line can be written {\displaystyle y=m(x-x_{a})+y_{a}} . As a note, lines in three dimensions may also be described as the simultaneous solutions of two linear equations {\displaystyle a_{1}x+b_{1}y+c_{1}z-d_{1}=0} {\displaystyle a_{2}x+b_{2}y+c_{2}z-d_{2}=0} such that {\displaystyle (a_{1},b_{1},c_{1})} and {\displaystyle (a_{2},b_{2},c_{2})} are not proportional (the relations {\displaystyle a_{1}=ta_{2},b_{1}=tb_{2},c_{1}=tc_{2}} imply {\displaystyle t=0} ). This follows since in three dimensions a single linear equation typically describes a plane , and a line is what is common to two distinct intersecting planes. Parametric equations are also used to specify lines, particularly in three dimensions or more, because in more than two dimensions lines cannot be described by a single linear equation. In three dimensions lines are frequently described by parametric equations: {\displaystyle {\begin{aligned}x&=x_{0}+at\\y&=y_{0}+bt\\z&=z_{0}+ct\end{aligned}}} where ( x 0 , y 0 , z 0 ) is a point on the line and ( a , b , c ) is a direction vector of the line. Parametric equations for lines in higher dimensions are similar in that they are based on the specification of one point on the line and a direction vector.
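The parametric description of a 3-D line (a point plus t times a direction vector) can be written as a short function. A Python sketch with illustrative names:

```python
def parametric_line(p0, d):
    """Return f(t) = p0 + t*d for a 3-D line through
    point p0 with direction vector d."""
    x0, y0, z0 = p0
    a, b, c = d
    return lambda t: (x0 + a * t, y0 + b * t, z0 + c * t)

f = parametric_line((1.0, 0.0, 2.0), (2.0, -1.0, 3.0))
print(f(0))   # the base point (1.0, 0.0, 2.0)
print(f(2))   # (5.0, -2.0, 8.0)
```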
The normal form (also called the Hesse normal form , [ 10 ] after the German mathematician Ludwig Otto Hesse ), is based on the normal segment for a given line, which is defined to be the line segment drawn from the origin perpendicular to the line. This segment joins the origin with the closest point on the line to the origin. The normal form of the equation of a straight line on the plane is given by: x cos ⁡ φ + y sin ⁡ φ − p = 0 , {\displaystyle x\cos \varphi +y\sin \varphi -p=0,} where φ {\displaystyle \varphi } is the angle of inclination of the normal segment (the oriented angle from the unit vector of the x -axis to this segment), and p is the (positive) length of the normal segment. The normal form can be derived from the standard form a x + b y = c {\displaystyle ax+by=c} by dividing all of the coefficients by a 2 + b 2 . {\displaystyle {\sqrt {a^{2}+b^{2}}}.} and also multiplying through by − 1 {\displaystyle -1} if c < 0. {\displaystyle c<0.} Unlike the slope-intercept and intercept forms, this form can represent any line but also requires only two finite parameters, φ {\displaystyle \varphi } and p , to be specified. If p > 0 , then φ {\displaystyle \varphi } is uniquely defined modulo 2 π . On the other hand, if the line is through the origin ( c = p = 0 ), one drops the c /| c | term to compute sin ⁡ φ {\displaystyle \sin \varphi } and cos ⁡ φ {\displaystyle \cos \varphi } , and it follows that φ {\displaystyle \varphi } is only defined modulo π . The vector equation of the line through points A and B is given by r = O A + λ A B {\displaystyle \mathbf {r} =\mathbf {OA} +\lambda \,\mathbf {AB} } (where λ is a scalar ). If a is vector OA and b is vector OB , then the equation of the line can be written: r = a + λ ( b − a ) {\displaystyle \mathbf {r} =\mathbf {a} +\lambda (\mathbf {b} -\mathbf {a} )} . A ray starting at point A is described by limiting λ. One ray is obtained if λ ≥ 0, and the opposite ray comes from λ ≤ 0. 
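The derivation of the normal form, dividing a x + b y = c through by the square root of a² + b² and flipping signs when c < 0, can be shown concretely. A Python sketch with an illustrative function name:

```python
import math

def hesse_normal_form(a, b, c):
    """Convert ax + by = c to x*cos(phi) + y*sin(phi) - p = 0
    with p >= 0 (assumes a and b are not both zero)."""
    if c < 0:                      # flip all signs so p is non-negative
        a, b, c = -a, -b, -c
    norm = math.hypot(a, b)        # sqrt(a^2 + b^2)
    return a / norm, b / norm, c / norm

# 3x + 4y = 10  ->  cos(phi) = 0.6, sin(phi) = 0.8, p = 2
print(hesse_normal_form(3.0, 4.0, 10.0))   # (0.6, 0.8, 2.0)
```

Here p = 2 is the distance from the origin to the line, as the normal-segment interpretation predicts.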
In a Cartesian plane , polar coordinates ( r , θ ) are related to Cartesian coordinates by the parametric equations: [ 11 ] x = r cos ⁡ θ , y = r sin ⁡ θ . {\displaystyle x=r\cos \theta ,\quad y=r\sin \theta .} In polar coordinates, the equation of a line not passing through the origin —the point with coordinates (0, 0) —can be written r = p cos ⁡ ( θ − φ ) , {\displaystyle r={\frac {p}{\cos(\theta -\varphi )}},} with r > 0 and φ − π / 2 < θ < φ + π / 2. {\displaystyle \varphi -\pi /2<\theta <\varphi +\pi /2.} Here, p is the (positive) length of the line segment perpendicular to the line and delimited by the origin and the line, and φ {\displaystyle \varphi } is the (oriented) angle from the x -axis to this segment. It may be useful to express the equation in terms of the angle α = φ + π / 2 {\displaystyle \alpha =\varphi +\pi /2} between the x -axis and the line. In this case, the equation becomes r = p sin ⁡ ( θ − α ) , {\displaystyle r={\frac {p}{\sin(\theta -\alpha )}},} with r > 0 and 0 < θ < α + π . {\displaystyle 0<\theta <\alpha +\pi .} These equations can be derived from the normal form of the line equation by setting x = r cos ⁡ θ , {\displaystyle x=r\cos \theta ,} and y = r sin ⁡ θ , {\displaystyle y=r\sin \theta ,} and then applying the angle difference identity for sine or cosine. These equations can also be proven geometrically by applying right triangle definitions of sine and cosine to the right triangle that has a point of the line and the origin as vertices, and the line and its perpendicular through the origin as sides. The previous forms do not apply for a line passing through the origin, but a simpler formula can be written: the polar coordinates ( r , θ ) {\displaystyle (r,\theta )} of the points of a line passing through the origin and making an angle of α {\displaystyle \alpha } with the x -axis, are the pairs ( r , θ ) {\displaystyle (r,\theta )} such that r ≥ 0 , and θ = α or θ = α + π . 
{\displaystyle r\geq 0,\qquad {\text{and}}\quad \theta =\alpha \quad {\text{or}}\quad \theta =\alpha +\pi .} In modern mathematics, given the multitude of geometries, the concept of a line is closely tied to the way the geometry is described. For instance, in analytic geometry , a line in the plane is often defined as the set of points whose coordinates satisfy a given linear equation , but in a more abstract setting, such as incidence geometry , a line may be an independent object, distinct from the set of points which lie on it. When a geometry is described by a set of axioms , the notion of a line is usually left undefined (a so-called primitive object). The properties of lines are then determined by the axioms which refer to them. One advantage to this approach is the flexibility it gives to users of the geometry. Thus in differential geometry , a line may be interpreted as a geodesic (shortest path between points), while in some projective geometries , a line is a 2-dimensional vector space (all linear combinations of two independent vectors). This flexibility also extends beyond mathematics and, for example, permits physicists to think of the path of a light ray as being a line. In many models of projective geometry , the representation of a line rarely conforms to the notion of the "straight curve" as it is visualised in Euclidean geometry. In elliptic geometry we see a typical example of this. [ 1 ] : 108 In the spherical representation of elliptic geometry, lines are represented by great circles of a sphere with diametrically opposite points identified. In a different model of elliptic geometry, lines are represented by Euclidean planes passing through the origin. Even though these representations are visually distinct, they satisfy all the properties (such as, two points determining a unique line) that make them suitable representations for lines in this geometry. 
The "shortness" and "straightness" of a line, interpreted as the property that the distance along the line between any two of its points is minimized (see triangle inequality ), can be generalized and leads to the concept of geodesics in metric spaces . Given a line and any point A on it, we may consider A as decomposing this line into two parts. Each such part is called a ray and the point A is called its initial point . It is also known as half-line (sometimes, a half-axis if it plays a distinct role, e.g., as part of a coordinate axis ). It is a one-dimensional half-space . The point A is considered to be a member of the ray. [ a ] Intuitively, a ray consists of those points on a line passing through A and proceeding indefinitely, starting at A , in one direction only along the line. However, in order to use this concept of a ray in proofs a more precise definition is required. Given distinct points A and B , they determine a unique ray with initial point A . As two points define a unique line, this ray consists of all the points between A and B (including A and B ) and all the points C on the line through A and B such that B is between A and C . [ 12 ] This is, at times, also expressed as the set of all points C on the line determined by A and B such that A is not between B and C . [ 13 ] A point D , on the line determined by A and B but not in the ray with initial point A determined by B , will determine another ray with initial point A . With respect to the AB ray, the AD ray is called the opposite ray . Thus, we would say that two different points, A and B , define a line and a decomposition of this line into the disjoint union of an open segment ( A , B ) and two rays, BC and AD (the point D is not drawn in the diagram, but is to the left of A on the line AB ). These are not opposite rays since they have different initial points. In Euclidean geometry two rays with a common endpoint form an angle . 
[ 14 ] The definition of a ray depends upon the notion of betweenness for points on a line. It follows that rays exist only for geometries for which this notion exists, typically Euclidean geometry or affine geometry over an ordered field . On the other hand, rays do not exist in projective geometry nor in a geometry over a non-ordered field, like the complex numbers or any finite field . A line segment is a part of a line that is bounded by two distinct end points and contains every point on the line between its end points. Depending on how the line segment is defined, either of the two end points may or may not be part of the line segment. Two or more line segments may have some of the same relationships as lines, such as being parallel, intersecting, or skew, but unlike lines they may be none of these, if they are coplanar and either do not intersect or are collinear . A point on a number line corresponds to a real number and vice versa. [ 15 ] Usually, integers are evenly spaced on the line, with positive numbers on the right and negative numbers on the left. As an extension of the concept, an imaginary line representing imaginary numbers can be drawn perpendicular to the number line at zero. [ 16 ] The two lines form the complex plane , a geometrical representation of the set of complex numbers .
https://en.wikipedia.org/wiki/Half-line_(geometry)
A half-metal is any substance that acts as a conductor to electrons of one spin orientation, but as an insulator or semiconductor to those of the opposite orientation. Although all half-metals are ferromagnetic (or ferrimagnetic ), most ferromagnets are not half-metals. Many of the known examples of half-metals are oxides , sulfides , or Heusler alloys . [ 1 ] Types of half-metallic compounds theoretically predicted so far include some Heusler alloys, such as Co 2 FeSi , NiMnSb, and PtMnSb; some Si-containing half–Heusler alloys with Curie temperatures over 600 K, such as NiCrSi and PdCrSi; some transition-metal oxides, including rutile structured CrO 2 ; some perovskites, such as LaMnO 3 and SeMnO 3 ; and a few more simply structured zincblende (ZB) compounds, including CrAs and superlattices. NiMnSb and CrO 2 have been experimentally determined to be half-metals at very low temperatures. In half-metals, the valence band for one spin orientation is partially filled while there is a gap in the density of states for the other spin orientation. This results in conducting behavior for only electrons in the first spin orientation. In some half-metals, the majority spin channel is the conducting one while in others the minority channel is. [ 2 ] Half-metals were first described in 1983, as an explanation for the electrical properties of manganese -based Heusler alloys . [ 3 ] Some notable half-metals are chromium(IV) oxide , magnetite , and lanthanum strontium manganite (LSMO), [ 1 ] as well as chromium arsenide . Half-metals have attracted some interest for their potential use in spintronics . This condensed matter physics -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Half-metal
The half-month is a calendar subdivision used in astronomy . Each calendar month is separated into two parts: Newly identified small Solar System bodies , such as comets and asteroids , are given systematic designations that contain the half-month encoded as a letter of the English alphabet . [ 1 ] [ 2 ] For example, an object discovered in the second half of January would be identified with the letter B; if found in the first half of February, the letter would be C. The letter I is not used, to prevent confusion with the number 1. Instead, the letters proceed directly from H (April 16–30) to J (May 1–15). The letter appears in the provisional designation , then when the object is confirmed the letter is incorporated into the comet designation (for comets) or minor planet designation (for asteroids and other minor planets ). This time -related article is a stub . You can help Wikipedia by expanding it .
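The letter scheme can be sketched as follows; the function name and implementation are illustrative only, not any official designation API:

```python
def half_month_letter(month, day):
    """Letter for the half-month of a discovery date.
    The first half of a month is days 1-15; I is skipped."""
    letters = "ABCDEFGHJKLMNOPQRSTUVWXY"   # 24 letters, no I (Z unused)
    index = 2 * (month - 1) + (0 if day <= 15 else 1)
    return letters[index]

print(half_month_letter(1, 20))   # 'B'  (second half of January)
print(half_month_letter(2, 3))    # 'C'  (first half of February)
print(half_month_letter(4, 28))   # 'H'  (April 16-30)
print(half_month_letter(5, 2))    # 'J'  (May 1-15, I skipped)
```

Note how the progression goes directly from H to J, matching the skipped letter I described above.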
https://en.wikipedia.org/wiki/Half-month
In mathematics , the half-period ratio τ of an elliptic function is the ratio of the two half-periods {\displaystyle {\frac {\omega _{1}}{2}}} and {\displaystyle {\frac {\omega _{2}}{2}}} of the elliptic function, where the elliptic function is defined in such a way that τ is in the upper half-plane . [ 1 ] Quite often in the literature, ω 1 and ω 2 are defined to be the periods of an elliptic function rather than its half-periods. Regardless of the choice of notation, the ratio ω 2 /ω 1 of periods is identical to the ratio (ω 2 /2)/(ω 1 /2) of half-periods. Hence, the period ratio is the same as the "half-period ratio". Note that the half-period ratio can be thought of as a simple number, namely, one of the parameters to elliptic functions, or it can be thought of as a function itself, because the half-periods can be given in terms of the elliptic modulus or in terms of the nome . See the pages on quarter period and elliptic integrals for additional definitions and relations on the arguments and parameters to elliptic functions. This number theory -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Half-period_ratio
In chemistry , a half reaction (or half-cell reaction ) is either the oxidation or reduction reaction component of a redox reaction. A half reaction is obtained by considering the change in oxidation states of individual substances involved in the redox reaction. Often, the concept of half reactions is used to describe what occurs in an electrochemical cell , such as a Galvanic cell battery. Half reactions can be written to describe both the metal undergoing oxidation (known as the anode ) and the metal undergoing reduction (known as the cathode ). Half reactions are often used as a method of balancing redox reactions. For oxidation-reduction reactions in acidic conditions, after balancing the atoms and oxidation numbers, one will need to add H + ions to balance the hydrogen ions in the half reaction. For oxidation-reduction reactions in basic conditions, after balancing the atoms and oxidation numbers, first treat it as an acidic solution and then add OH − ions to balance the H + ions in the half reactions (which would give H 2 O ). Consider the Galvanic cell shown in the adjacent image: it is constructed with a piece of zinc (Zn) submerged in a solution of zinc sulfate ( ZnSO 4 ) and a piece of copper (Cu) submerged in a solution of copper(II) sulfate ( CuSO 4 ). The overall reaction is: At the Zn anode, oxidation takes place (the metal loses electrons). This is represented in the following oxidation half reaction (note that the electrons are on the products side): At the Cu cathode, reduction takes place (electrons are accepted). This is represented in the following reduction half reaction (note that the electrons are on the reactants side): Consider the example burning of magnesium ribbon (Mg). When magnesium burns, it combines with oxygen ( O 2 ) from the air to form magnesium oxide (MgO) according to the following equation: Magnesium oxide is an ionic compound containing Mg 2+ and O 2− ions whereas Mg (s) and O 2(g) are elements with no charges. 
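Each half reaction must conserve total charge once the transferred electrons are counted. This can be illustrated with the Zn/Cu Galvanic cell above; the helper below is a Python sketch with an illustrative name, representing each species as a (count, charge) pair:

```python
def charge_balanced(reactants, products):
    """Check that total charge (counting each electron as -1)
    is the same on both sides of a half reaction.
    Species are given as (count, charge) pairs."""
    total = lambda side: sum(n * q for n, q in side)
    return total(reactants) == total(products)

# Oxidation at the Zn anode:  Zn -> Zn^2+ + 2 e^-
print(charge_balanced([(1, 0)], [(1, +2), (2, -1)]))   # True

# Reduction at the Cu cathode:  Cu^2+ + 2 e^- -> Cu
print(charge_balanced([(1, +2), (2, -1)], [(1, 0)]))   # True
```

Because both half reactions transfer the same two electrons, summing them cancels the electrons and recovers the overall cell reaction.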
The Mg (s) with zero charge gains a +2 charge going from the reactant side to the product side, and the O 2(g) with zero charge gains a −2 charge. This is because when Mg (s) becomes Mg 2+ , it loses 2 electrons. Since there are 2 Mg on the left side, a total of 4 electrons are lost according to the following oxidation half reaction: On the other hand, O 2 was reduced: its oxidation state goes from 0 to −2. Thus, a reduction half reaction can be written for the O 2 as it gains 4 electrons: The overall reaction is the sum of both half reactions: When a chemical reaction, especially a redox reaction, takes place, we do not see the electrons as they appear and disappear during the course of the reaction. What we see are the reactants (starting material) and end products. Because of this, electrons appearing on both sides of the equation are canceled. After canceling, the equation is re-written as Two ions, positive ( Mg 2+ ) and negative ( O 2− ) exist on the product side and they combine immediately to form the compound magnesium oxide (MgO) due to their opposite charges (electrostatic attraction). In any given oxidation-reduction reaction, there are two half reactions—the oxidation half reaction and the reduction half reaction. The sum of these two half reactions is the oxidation–reduction reaction. Consider the reaction below: The two elements involved, iron and chlorine , each change oxidation state; iron from +2 to +3, chlorine from 0 to −1. There are then effectively two half reactions occurring. These changes can be represented in formulas by inserting appropriate electrons into each half reaction: Given two half reactions it is possible, with knowledge of the appropriate electrode potentials, to arrive at the complete (original) reaction the same way. The decomposition of a reaction into half reactions is key to understanding a variety of chemical processes. For example, in the above reaction, it can be shown that this is a redox reaction in which Fe is oxidised and Cl is reduced.
Note the transfer of electrons from Fe to Cl. Decomposition is also a way to simplify the balancing of a chemical equation . A chemist can atom balance and charge balance one piece of an equation at a time. For example: It is also possible and sometimes necessary to consider a half reaction in either basic or acidic conditions, as there may be an acidic or basic electrolyte in the redox reaction . Due to this electrolyte, it may be more difficult to satisfy the balance of both the atoms and the charges. This is done by adding H 2 O, OH − , e − , and/or H + to either side of the reaction until both atoms and charges are balanced. Consider the half reaction below: OH − , H 2 O , and e − can be used to balance the charges and atoms in basic conditions, as long as it is assumed that the reaction is in water. Again consider the half reaction below: H + , H 2 O , and e − can be used to balance the charges and atoms in acidic conditions, as long as it is assumed that the reaction is in water. Notice that both sides are charge balanced as well as atom balanced. Often there will be both H + and OH − present in acidic and basic conditions, but the resulting reaction of the two ions yields water, H 2 O (shown below): H + + OH − → H 2 O
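The bookkeeping described above (a half reaction must balance in both atoms and charge once the electrons are counted) can be sketched in a few lines of Python. The permanganate reduction used here is a standard textbook example, not one taken from this article:

```python
# Illustrative sketch (assumed example, not from the article): verify that a
# half reaction is balanced in both atoms and charge once electrons are
# counted, using the classic permanganate reduction in acidic solution:
#   MnO4^- + 8 H^+ + 5 e^-  ->  Mn^2+ + 4 H2O
from collections import Counter

def totals(side):
    """Sum atom counts and total charge over (coefficient, atoms, charge) terms."""
    atoms, charge = Counter(), 0
    for coeff, composition, q in side:
        for element, count in composition.items():
            atoms[element] += coeff * count
        charge += coeff * q
    return atoms, charge

def is_balanced(reactants, products):
    return totals(reactants) == totals(products)

# electrons are modeled as a "species" with no atoms and charge -1
reactants = [(1, {"Mn": 1, "O": 4}, -1),   # MnO4^-
             (8, {"H": 1}, +1),            # H^+
             (5, {}, -1)]                  # e^-
products  = [(1, {"Mn": 1}, +2),           # Mn^2+
             (4, {"H": 2, "O": 1}, 0)]     # H2O

print(is_balanced(reactants, products))    # True
```

Dropping any term from either side (for example, omitting the water) makes `is_balanced` return `False`, which is exactly the check one performs by hand when adding H + , OH − , or e − .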
https://en.wikipedia.org/wiki/Half-reaction
A half-sider budgerigar is an unusual congenital condition that causes a budgerigar to display one color on one side of its body and a different color on the other. This is not a simple genetic mutation of the kind seen in other color and pattern variations in this species . It is a rare example of a tetragametic chimera , which originates when two fertilized embryos merge during a very early stage of development — between the 2-cell and the 64-cell stage. Each half has different DNA, with genetically distinct cells, and the resultant bird is in effect two budgerigars fused together to form a single autonomous individual. [ 1 ] The half-sider's coloring is usually divided bilaterally down the center, although it can differ depending on the stage at which the twin embryos merged during development. Twin embryos that merged later in development will result in a budgerigar with a splotchier distribution of the different cell populations. [ 1 ] In the case of the half-sider budgerigar, the two embryos must possess different genetic phenotypes (one yellow-based and one white-based) [ 2 ] in order for a visible half-sider to be produced. If both "halves" had the same base, the bird would still be a tetragametic chimera, but not a half-sider. It is also possible for a half-sider to be male on one side and female on the other (evidenced by a half blue, half brown cere ) – an example of a bilateral gynandromorph . [ 1 ] Breeding a half-sider is unlikely to produce more half-siders, even when breeding two half-siders together, as it is the genetic makeup of the half that contributed the cells of the reproductive system that is perpetuated, assuming the bird is fertile in the first place. The chance of producing another half-sider is the same as for any other budgerigar pairing. [ 3 ]
https://en.wikipedia.org/wiki/Half-sider_budgerigar
In the mathematical field of graph theory , a half-transitive graph is a graph that is both vertex-transitive and edge-transitive , but not symmetric . [ 1 ] In other words, a graph is half-transitive if its automorphism group acts transitively upon both its vertices and its edges, but not on ordered pairs of linked vertices. Every connected symmetric graph must be vertex-transitive and edge-transitive , and the converse is true for graphs of odd degree, [ 2 ] so that half-transitive graphs of odd degree do not exist. However, there do exist half-transitive graphs of even degree. [ 3 ] The smallest half-transitive graph is the Holt graph , with degree 4 and 27 vertices. [ 4 ] [ 5 ]
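The definitions above can be checked by brute force for very small graphs. (Enumerating all vertex permutations is hopeless for the 27-vertex Holt graph; this sketch only illustrates what vertex-, edge- and arc-transitivity mean.) The 4-cycle used here is arc-transitive, hence symmetric, and therefore not half-transitive:

```python
# Brute-force illustration of the transitivity definitions on the 4-cycle C4.
from itertools import permutations

n = 4
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

def image(p, e):
    u, v = tuple(e)
    return frozenset((p[u], p[v]))

# automorphisms: vertex permutations that map the edge set onto itself
autos = [p for p in permutations(range(n))
         if {image(p, e) for e in edges} == edges]

# vertex-transitive: some automorphism sends vertex 0 to every vertex
vertex_t = all(any(p[0] == v for p in autos) for v in range(n))
# edge-transitive: some automorphism sends edge {0,1} to every edge
edge_t = all(any(image(p, frozenset((0, 1))) == e for p in autos) for e in edges)
# arc-transitive (ordered adjacent pairs): the extra condition that
# "symmetric" adds on top of vertex- and edge-transitivity
arcs = [a for e in edges for a in (tuple(e), tuple(e)[::-1])]
arc_t = all(any((p[0], p[1]) == a for p in autos) for a in arcs)

half_transitive = vertex_t and edge_t and not arc_t
print(len(autos), vertex_t, edge_t, arc_t, half_transitive)
# 8 True True True False
```

C4 has the dihedral automorphism group of order 8 acting transitively on its 8 arcs, so the half-transitivity test fails, consistent with the theorem that no half-transitive graph has fewer vertices than the Holt graph.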
https://en.wikipedia.org/wiki/Half-transitive_graph
Half sandwich compounds , also known as piano stool complexes , are organometallic complexes that feature a cyclic polyhapto ligand bound to an ML n center, where L is a unidentate ligand. Thousands of such complexes are known. [ 1 ] [ page needed ] Well-known examples include cyclobutadieneiron tricarbonyl and (C 5 H 5 )TiCl 3 . Commercially useful examples include (C 5 H 5 )Co(CO) 2 , which is used in the synthesis of substituted pyridines , and methylcyclopentadienyl manganese tricarbonyl , an antiknock agent in petrol . Half sandwich complexes containing cyclopentadienyl ligands are common. Well-studied examples include ( η 5 -C 5 H 5 )V(CO) 4 , ( η 5 -C 5 H 5 )Cr(CO) 3 H, ( η 5 -CH 3 C 5 H 4 )Mn(CO) 3 , [( η 5 -C 5 H 5 )Fe(CO) 3 ] + , ( η 5 -C 5 H 5 )V(CO) 4 I, and ( η 5 -C 5 H 5 )Ru(NCMe) + 3 . ( η 5 -C 5 H 5 )Co(CO) 2 is a two-legged piano stool complex. Bulky cyclopentadienyl ligands such as 1,2,4-C 5 H 2 ( tert -Bu) 3 − form unusual half-sandwich complexes. [ 3 ] In organometallic chemistry , ( η 6 -C 6 H 6 ) piano stool compounds are half-sandwich compounds with ( η 6 -C 6 H 6 )ML 3 structure (M = Cr, Mo, W, Mn(I), Re(I) and L = typically CO). ( η 6 -C 6 H 6 ) piano stool complexes are stable 18-electron coordination compounds with a variety of chemical and material applications. Early studies on ( η 6 -C 6 H 6 )Cr(CO) 3 were carried out by Natta, Ercoli and Calderazzo, [ 4 ] and Fischer and Ofele, [ 5 ] [ 6 ] and the crystal structure was determined by Corradini and Allegra in 1959. [ 7 ] The X-ray data indicate that the plane of the benzene ring is nearly parallel to the plane defined by the oxygen atoms of the carbonyl ligands, and so the structure resembles a benzene seat mounted on three carbonyl legs tethered by the metal atom. Piano stool complexes of the type ( η 6 -C 6 H 6 )M(CO) 3 are typically synthesized by heating the appropriate metal carbonyl compound with benzene . 
Alternatively, the same compounds can be obtained by carbonylation of the bis(arene) sandwich compounds, i.e. by treating a ( η 6 -C 6 H 6 ) 2 M compound with the metal carbonyl compound. This second approach may be more appropriate for arene ligands containing thermally fragile substituents. [ 8 ] The benzene ligand in ( η 6 -C 6 H 6 )Cr(CO) 3 is prone to deprotonation. [ 9 ] For example, organolithium compounds form adducts featuring cyclohexadienyl ligands. Subsequent oxidation of the complex results in the release of a substituted benzene. [ 10 ] [ 11 ] Oxidation of the chromium atom by I 2 and other iodine reagents has been shown to promote exchange of arene ligands, but the intermediate chromium iodide species has not been characterized. [ 12 ] ( η 6 -C 6 H 6 )Cr(CO) 3 complexes exhibit " cine " and " tele " nucleophilic aromatic addition. [ 13 ] Processes of this type involve reaction of ( η 6 -C 6 H 6 )Cr(CO) 3 with an alkyl lithium reagent. Subsequent treatment with an acid results in the addition of a nucleophile to the benzene ring at a site ortho (" cine "), meta or para (" tele ") to the ipso carbon (see Arene substitution patterns ). Reflecting its increased acidity, the benzene ligand can be lithiated with n -butyllithium . The resulting organolithium compound serves as a nucleophile in various reactions, for example, with trimethylsilyl chloride : [ citation needed ] ( η 6 -C 6 H 6 )Cr(CO) 3 is a useful catalyst for the hydrogenation of 1,3- dienes . The product alkene results from 1,4-addition of hydrogen . The complex does not hydrogenate isolated double bonds . [ citation needed ] A variety of arene ligands besides benzene have been installed. [ 14 ] Weakly coordinating ligands may be employed to improve ligand exchange and thus the turnover rates for ( η 6 -C 6 H 6 )M(CO) 3 complexes. [ 8 ] : 248 ( η 6 -C 6 H 6 )M(CO) 3 complexes have been incorporated into high surface area porous materials. 
[ 15 ] ( η 6 -C 6 H 6 )M(CO) 3 complexes serve as models for the interaction of metal carbonyls with graphene and carbon nanotubes . [ 16 ] The presence of M(CO) 3 on extended π-network materials has been shown to improve electrical conductivity across the material. [ 17 ] Typical arene tricarbonyl piano stool complexes of Mn(I) and Re(I) are cationic and thus exhibit enhanced reactivity toward nucleophiles. Subsequent to nucleophilic addition, the modified arene can be recovered from the metal. [ 18 ] [ 19 ] Half-sandwich compounds employing Ru(II) , such as (cymene)ruthenium dichloride dimer , have been mainly investigated as catalysts for transfer hydrogenation . [ 20 ] These complexes feature three coordination sites that are susceptible to substitution, while the arene ligand is tightly bonded and protects the metal against oxidation to Ru(III). They are prepared by reaction of RuCl 3 · x (H 2 O) with 1,3-cyclohexadienes . [ 21 ] Work is also conducted on their potential as anticancer drugs. [ 22 ] ( η 6 -C 6 H 6 )RuCl 2 readily undergoes ligand exchange via cleavage of the chloride bridges, making this complex a versatile precursor to Ru(II) piano stool derivatives. [ 23 ]
https://en.wikipedia.org/wiki/Half_sandwich_compound
In chemistry , a halide (rarely halogenide [ 1 ] ) is a binary chemical compound , of which one part is a halogen atom and the other part is an element or radical that is less electronegative (or more electropositive) than the halogen, to make a fluoride , chloride , bromide , iodide , astatide , or theoretically tennesside compound. The alkali metals combine directly with halogens under appropriate conditions forming halides of the general formula, MX (X = F, Cl, Br or I). Many salts are halides; the hal- syllable in halide and halite reflects this correlation . [ 2 ] A halide ion is a halogen atom bearing a negative charge. The common halide anions are fluoride ( F − ), chloride ( Cl − ), bromide ( Br − ), and iodide ( I − ). Such ions are present in many ionic halide salts. Halide minerals contain halides. All these halide anions are colorless. Halides also form covalent bonds, examples being colorless TiF 4 , colorless TiCl 4 , orange TiBr 4 , and brown TiI 4 . The heavier members TiCl 4 , TiBr 4 , TiI 4 can be distilled readily because they are molecular. The outlier is TiF 4 , m.p. 284 °C , because it has a polymeric structure. Fluorides often differ from the heavier halides. [ 3 ] Halides cannot be reduced under the usual laboratory conditions, but they all can be oxidized to the parent halogens, which are diatomic . Especially for iodide and less so for the lighter halides, intermediates can be observed and isolated. Best characterized is triiodide . Many related species are known, including a host of polyiodides . Halides are conjugate bases of hydrogen halides , which are all gases. When the protonation is conducted in aqueous solution, hydrohalic acids are produced. Halide salts such as KCl , KBr and KI are highly soluble in water to give colorless solutions. The solutions react readily with a solution of silver nitrate AgNO 3 . 
These three halides form solid precipitates : [ 4 ] Similar but slower reactions occur with alkyl halides in place of alkali metal halides, as described in the Beilstein test . Metal halides are used in high-intensity discharge lamps called metal halide lamps , such as those used in modern street lights . These are more energy-efficient than mercury-vapor lamps , and have much better colour rendition than orange high-pressure sodium lamps . Metal halide lamps are also commonly used in greenhouses or in rainy climates to supplement natural sunlight . Silver halides are used in photographic films and papers . When the film is developed , the silver halides which have been exposed to light are reduced to metallic silver, forming an image. Halides are also used in solder paste , commonly as a Cl or Br equivalent. [ 5 ] Synthetic organic chemistry often incorporates halogens into organohalide compounds.
https://en.wikipedia.org/wiki/Halide
Haliovirgaceae is a family of bacteria in the order Fusobacteriales . [ 1 ] The family contains one genus: Haliovirga . [ 2 ] Bacteria in this family are gram-negative , mesophilic , anaerobic , and sulfur-reducing . [ 3 ]
https://en.wikipedia.org/wiki/Haliovirgaceae
In mathematics , Hall's marriage theorem , proved by Philip Hall ( 1935 ), is a theorem with two equivalent formulations. In each case, the theorem gives a necessary and sufficient condition for an object to exist: Let F {\displaystyle {\mathcal {F}}} be a finite family of sets (note that although F {\displaystyle {\mathcal {F}}} is not itself allowed to be infinite, the sets in it may be so, and F {\displaystyle {\mathcal {F}}} may contain the same set multiple times ). [ 1 ] Let X {\displaystyle X} be the union of all the sets in F {\displaystyle {\mathcal {F}}} , the set of elements that belong to at least one of its sets. A transversal for F {\displaystyle {\mathcal {F}}} is a subset of X {\displaystyle X} that can be obtained by choosing a distinct element from each set in F {\displaystyle {\mathcal {F}}} . This concept can be formalized by defining a transversal to be the image of an injective function f : F → X {\displaystyle f:{\mathcal {F}}\to X} such that f ( S ) ∈ S {\displaystyle f(S)\in S} for each S ∈ F {\displaystyle S\in {\mathcal {F}}} . An alternative term for transversal is system of distinct representatives . The collection F {\displaystyle {\mathcal {F}}} satisfies the marriage condition when each subfamily of F {\displaystyle {\mathcal {F}}} contains at least as many distinct members as its number of sets. That is, for all G ⊆ F {\displaystyle {\mathcal {G}}\subseteq {\mathcal {F}}} , | G | ≤ | ⋃ S ∈ G S | . {\displaystyle |{\mathcal {G}}|\leq {\Bigl |}\bigcup _{S\in {\mathcal {G}}}S{\Bigr |}.} If a transversal exists then the marriage condition must be true: the function f {\displaystyle f} used to define the transversal maps G {\displaystyle {\mathcal {G}}} to a subset of its union, of size equal to | G | {\displaystyle |{\mathcal {G}}|} , so the whole union must be at least as large. 
Hall's theorem states that the converse is also true: Hall's Marriage Theorem — A family F {\displaystyle {\mathcal {F}}} of finite sets has a transversal if and only if F {\displaystyle {\mathcal {F}}} satisfies the marriage condition. The name "marriage theorem" came from ( Halmos & Vaughan 1950 ): Suppose that each of a (possibly infinite) set of boys is acquainted with a finite set of girls. Under what conditions is it possible for each boy to marry one of his acquaintances? It is clearly necessary that every finite set of k boys be, collectively, acquainted with at least k girls... this condition is also sufficient. A lower bound on the number of different transversals that a given finite family F {\displaystyle {\mathcal {F}}} of size n {\displaystyle n} may have is obtained as follows: If each of the sets in F {\displaystyle {\mathcal {F}}} has cardinality ≥ r {\displaystyle \geq r} , then the number of different transversals for F {\displaystyle {\mathcal {F}}} is at least r ! {\displaystyle r!} if r ≤ n {\displaystyle r\leq n} , and at least r ( r − 1 ) ⋯ ( r − n + 1 ) {\displaystyle r(r-1)\cdots (r-n+1)} if r > n {\displaystyle r>n} . [ 2 ] For counting purposes, a transversal for a family F {\displaystyle {\mathcal {F}}} is here regarded as an ordered sequence, so two different transversals can have exactly the same elements. For instance, the collection A 1 = { 1 , 2 , 3 } {\displaystyle A_{1}=\{1,2,3\}} , A 2 = { 1 , 2 , 5 } {\displaystyle A_{2}=\{1,2,5\}} has ( 1 , 2 ) {\displaystyle (1,2)} and ( 2 , 1 ) {\displaystyle (2,1)} as distinct transversals. Let G = ( X , Y , E ) {\displaystyle G=(X,Y,E)} be a finite bipartite graph with bipartite sets X {\displaystyle X} and Y {\displaystyle Y} and edge set E {\displaystyle E} . An X {\displaystyle X} -perfect matching (also called an X {\displaystyle X} -saturating matching ) is a matching , a set of disjoint edges, which covers every vertex in X {\displaystyle X} . 
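The two-set example and the lower bound can be verified directly by brute force; this is an illustrative sketch, not part of the theorem's statement:

```python
# Count the ordered transversals of A1 = {1,2,3}, A2 = {1,2,5}. Each set has
# cardinality r = 3 and there are n = 2 sets, so with r > n the lower bound
# from the text is r*(r-1) = 6.
from itertools import product

def transversals(family):
    """All ordered systems of distinct representatives (one element per set)."""
    return [choice for choice in product(*family)
            if len(set(choice)) == len(family)]

ts = transversals([{1, 2, 3}, {1, 2, 5}])
print(len(ts))                            # 7 (>= the bound of 6)
print((1, 2) in ts and (2, 1) in ts)      # True: same elements, distinct transversals
```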
For a subset W {\displaystyle W} of X {\displaystyle X} , let N G ( W ) {\displaystyle N_{G}(W)} denote the neighborhood of W {\displaystyle W} in G {\displaystyle G} , the set of all vertices in Y {\displaystyle Y} that are adjacent to at least one element of W {\displaystyle W} . The marriage theorem in this formulation states that there is an X {\displaystyle X} -perfect matching if and only if for every subset W {\displaystyle W} of X {\displaystyle X} : | W | ≤ | N G ( W ) | . {\displaystyle |W|\leq |N_{G}(W)|.} In other words, every subset W {\displaystyle W} of X {\displaystyle X} must have sufficiently many neighbors in Y {\displaystyle Y} . In an X {\displaystyle X} -perfect matching M {\displaystyle M} , every edge incident to W {\displaystyle W} connects to a distinct neighbor of W {\displaystyle W} in Y {\displaystyle Y} , so the number of these matched neighbors is at least | W | {\displaystyle |W|} . The number of all neighbors of W {\displaystyle W} is at least as large. Consider the contrapositive : if there is no X {\displaystyle X} -perfect matching then Hall's condition must be violated for at least one W ⊆ X {\displaystyle W\subseteq X} . Let M {\displaystyle M} be a maximum matching, and let u {\displaystyle u} be any unmatched vertex in X {\displaystyle X} . Consider all alternating paths (paths in G {\displaystyle G} that alternately use edges outside and inside M {\displaystyle M} ) starting from u {\displaystyle u} . Let W {\displaystyle W} be the set of vertices in these paths that belong to X {\displaystyle X} (including u {\displaystyle u} itself) and let Z {\displaystyle Z} be the set of vertices in these paths that belong to Y {\displaystyle Y} . Then every vertex in Z {\displaystyle Z} is matched by M {\displaystyle M} to a vertex in W {\displaystyle W} , because an alternating path to an unmatched vertex could be used to increase the size of the matching by toggling whether each of its edges belongs to M {\displaystyle M} or not. 
Therefore, the size of W {\displaystyle W} is at least the number | Z | {\displaystyle |Z|} of these matched neighbors of Z {\displaystyle Z} , plus one for the unmatched vertex u {\displaystyle u} . That is, | W | ≥ | Z | + 1 {\displaystyle |W|\geq |Z|+1} . However, for every vertex v ∈ W {\displaystyle v\in W} , every neighbor w {\displaystyle w} of v {\displaystyle v} belongs to Z {\displaystyle Z} : an alternating path to w {\displaystyle w} can be found either by removing the matched edge v w {\displaystyle vw} from the alternating path to v {\displaystyle v} , or by adding the unmatched edge v w {\displaystyle vw} to the alternating path to v {\displaystyle v} . Therefore, Z = N G ( W ) {\displaystyle Z=N_{G}(W)} and | W | ≥ | N G ( W ) | + 1 {\displaystyle |W|\geq |N_{G}(W)|+1} , showing that Hall's condition is violated. A problem in the combinatorial formulation, defined by a finite family of finite sets F {\displaystyle {\mathcal {F}}} with union X {\displaystyle X} can be translated into a bipartite graph G = ( F , X , E ) {\displaystyle G=({\mathcal {F}},X,E)} where each edge connects a set in F {\displaystyle {\mathcal {F}}} to an element of that set. An F {\displaystyle {\mathcal {F}}} -perfect matching in this graph defines a system of unique representatives for F {\displaystyle {\mathcal {F}}} . In the other direction, from any bipartite graph G = ( X , Y , E ) {\displaystyle G=(X,Y,E)} one can define a finite family of sets, the family of neighborhoods of the vertices in X {\displaystyle X} , such that any system of unique representatives for this family corresponds to an X {\displaystyle X} -perfect matching in G {\displaystyle G} . In this way, the combinatorial formulation for finite families of finite sets and the graph-theoretic formulation for finite graphs are equivalent. The same equivalence extends to infinite families of finite sets and to certain infinite graphs. 
In this case, the condition that each set be finite corresponds to a condition that in the bipartite graph G = ( X , Y , E ) {\displaystyle G=(X,Y,E)} , every vertex in X {\displaystyle X} should have finite degree . The degrees of the vertices in Y {\displaystyle Y} are not constrained. Hall's theorem can be proved (non-constructively) based on Sperner's lemma . [ 3 ] : Thm.4.1, 4.2 The theorem has many applications. For example, for a standard deck of cards , dealt into 13 piles of 4 cards each, the marriage theorem implies that it is possible to select one card from each pile so that the selected cards contain exactly one card of each rank (Ace, 2, 3, ..., Queen, King). This can be done by constructing a bipartite graph with one partition containing the 13 piles and the other partition containing the 13 ranks. The remaining proof follows from the marriage condition. More generally, any regular bipartite graph has a perfect matching. [ 4 ] : 2 More abstractly, let G {\displaystyle G} be a group , and H {\displaystyle H} be a finite index subgroup of G {\displaystyle G} . Then the marriage theorem can be used to show that there is a set T {\displaystyle T} such that T {\displaystyle T} is a transversal for both the set of left cosets and right cosets of H {\displaystyle H} in G {\displaystyle G} . [ 5 ] The marriage theorem is used in the usual proofs of the fact that an r × n {\displaystyle r\times n} Latin rectangle can always be extended to an ( r + 1 ) × n {\displaystyle (r+1)\times n} Latin rectangle when r < n {\displaystyle r<n} , and so, ultimately to a Latin square . [ 6 ] This theorem is part of a collection of remarkably powerful theorems in combinatorics, all of which are related to each other in an informal sense in that it is more straightforward to prove one of these theorems from another of them than from first principles. 
These include: In particular, [ 8 ] [ 9 ] there are simple proofs of the implications Dilworth's theorem ⇔ Hall's theorem ⇔ König–Egerváry theorem ⇔ König's theorem. By examining Philip Hall 's original proof carefully, Marshall Hall Jr. (no relation to Philip Hall) was able to tweak the result in a way that permitted the proof to work for infinite F {\displaystyle {\mathcal {F}}} . [ 10 ] This variant extends Philip Hall's Marriage theorem. Suppose that F = { A i } i ∈ I {\displaystyle {\mathcal {F}}=\{A_{i}\}_{i\in I}} is a (possibly infinite) family of finite sets that need not be distinct. Then F {\displaystyle {\mathcal {F}}} has a transversal if and only if F {\displaystyle {\mathcal {F}}} satisfies the marriage condition. The following example, due to Marshall Hall Jr., shows that the marriage condition will not guarantee the existence of a transversal in an infinite family in which infinite sets are allowed. Let F {\displaystyle {\mathcal {F}}} be the family A 0 = N {\displaystyle A_{0}=\mathbb {N} } , A i = { i − 1 } {\displaystyle A_{i}=\{i-1\}} for i ≥ 1 {\displaystyle i\geq 1} . The marriage condition holds for this infinite family, but no transversal can be constructed. [ 11 ] The graph theoretic formulation of Marshall Hall's extension of the marriage theorem can be stated as follows: Given a bipartite graph with sides A and B , we say that a subset C of B is smaller than or equal in size to a subset D of A in the graph if there exists an injection in the graph (namely, using only edges of the graph) from C to D , and that it is strictly smaller in the graph if in addition there is no injection in the graph in the other direction. Note that omitting "in the graph" yields the ordinary notion of comparing cardinalities. The infinite marriage theorem states that there exists an injection from A to B in the graph, if and only if there is no subset C of A such that N ( C ) is strictly smaller than C in the graph. 
[ 12 ] The more general problem of selecting a (not necessarily distinct) element from each of a collection of non-empty sets (without restriction as to the number of sets or the size of the sets) is permitted in general only if the axiom of choice is accepted. A fractional matching in a graph is an assignment of non-negative weights to each edge, such that the sum of weights adjacent to each vertex is at most 1. A fractional matching is X -perfect if the sum of weights adjacent to each vertex is exactly 1. The following are equivalent for a bipartite graph G = ( X+Y, E ): [ 13 ] When Hall's condition does not hold, the original theorem tells us only that a perfect matching does not exist; it does not tell us the size of the largest matching that does exist. To obtain this information, we need the notion of the deficiency of a graph . Given a bipartite graph G = ( X + Y , E ), the deficiency of G w.r.t. X is the maximum, over all subsets W of X , of the difference | W | − | N G ( W )|. The larger the deficiency, the farther the graph is from satisfying Hall's condition. Using Hall's marriage theorem, it can be proved that, if the deficiency of a bipartite graph G is d , then G admits a matching of size at least | X | − d . This article incorporates material from proof of Hall's marriage theorem on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
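Both formulations can be exercised on small finite instances: a brute-force test of the marriage condition, and the standard augmenting-path construction of a maximum matching. This is a minimal sketch for finite families only; the set names and example families are invented for illustration:

```python
# Hall's condition checked by brute force over subsets of X, plus the
# classical augmenting-path algorithm for a maximum bipartite matching.
from itertools import combinations

def hall_condition(neighbors):          # neighbors: X-vertex -> set of Y-vertices
    xs = list(neighbors)
    for k in range(1, len(xs) + 1):
        for subset in combinations(xs, k):
            union = set().union(*(neighbors[x] for x in subset))
            if len(union) < k:          # fewer neighbors than sets: violated
                return False
    return True

def max_matching(neighbors):
    match = {}                          # Y-vertex -> matched X-vertex
    def augment(x, seen):
        for y in neighbors[x]:
            if y not in seen:
                seen.add(y)
                if y not in match or augment(match[y], seen):
                    match[y] = x
                    return True
        return False
    return sum(augment(x, set()) for x in neighbors)

family = {"A": {1, 2, 3}, "B": {1, 2, 5}, "C": {2, 5}}
print(hall_condition(family))                  # True
print(max_matching(family) == len(family))     # True: an X-perfect matching exists

bad = {"A": {1}, "B": {1}}                     # two sets sharing one element
print(hall_condition(bad), max_matching(bad))  # False 1
```

The `bad` family also illustrates the deficiency: the subset {"A", "B"} has one neighbor for two sets, so the deficiency is 1 and the largest matching has size |X| − 1 = 1, as the algorithm reports.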
https://en.wikipedia.org/wiki/Hall's_marriage_theorem
In mathematics , the Hall algebra is an associative algebra with a basis corresponding to isomorphism classes of finite abelian p -groups . It was first discussed by Steinitz (1901) but forgotten until it was rediscovered by Philip Hall ( 1959 ), both of whom published no more than brief summaries of their work. The Hall polynomials are the structure constants of the Hall algebra . The Hall algebra plays an important role in the theory of Masaki Kashiwara and George Lusztig regarding canonical bases in quantum groups . Ringel (1990) generalized Hall algebras to more general categories , such as the category of representations of a quiver . A finite abelian p -group M is a direct sum of cyclic p -power components C p λ i , {\displaystyle C_{p^{\lambda _{i}}},} where λ = ( λ 1 , λ 2 , … ) {\displaystyle \lambda =(\lambda _{1},\lambda _{2},\ldots )} is a partition of n {\displaystyle n} called the type of M . Let g μ , ν λ ( p ) {\displaystyle g_{\mu ,\nu }^{\lambda }(p)} be the number of subgroups N of M such that N has type ν {\displaystyle \nu } and the quotient M/N has type μ {\displaystyle \mu } . Hall proved that the functions g are polynomial functions of p with integer coefficients. Thus we may replace p with an indeterminate q , which results in the Hall polynomials g μ , ν λ ( q ) . {\displaystyle g_{\mu ,\nu }^{\lambda }(q).} Hall next constructs an associative ring H {\displaystyle H} over Z [ q ] {\displaystyle \mathbb {Z} [q]} , now called the Hall algebra . This ring has a basis consisting of the symbols u λ {\displaystyle u_{\lambda }} and the structure constants of the multiplication in this basis are given by the Hall polynomials: u μ u ν = ∑ λ g μ , ν λ ( q ) u λ . {\displaystyle u_{\mu }u_{\nu }=\sum _{\lambda }g_{\mu ,\nu }^{\lambda }(q)\,u_{\lambda }.} It turns out that H is a commutative ring, freely generated by the elements u 1 n {\displaystyle u_{\mathbf {1} ^{n}}} corresponding to the elementary p -groups . 
The linear map from H to the algebra of symmetric functions defined on the generators by the formula (where e n is the n th elementary symmetric function ) uniquely extends to a ring homomorphism and the images of the basis elements u λ {\displaystyle u_{\lambda }} may be interpreted via the Hall–Littlewood symmetric functions . Specializing q to 0, these symmetric functions become Schur functions , which are thus closely connected with the theory of Hall polynomials.
https://en.wikipedia.org/wiki/Hall_algebra
Hall circles (also known as M-circles and N-circles ) are a graphical tool in control theory used to obtain values of a closed-loop transfer function from the Nyquist plot (or the Nichols plot ) of the associated open-loop transfer function. Hall circles were introduced into control theory by Albert C. Hall in his thesis. [ 1 ] Consider a closed-loop linear control system with open-loop transfer function G ( s ) {\displaystyle G(s)} and with a unit gain in the feedback loop. The closed-loop transfer function is given by T ( s ) = G ( s ) 1 + G ( s ) {\textstyle T(s)={\frac {G(s)}{1+G(s)}}} . To check the stability of T ( s ), it is possible to use the Nyquist stability criterion with the Nyquist plot of the open-loop transfer function G ( s ). Note, however, that the Nyquist plot of G ( s ) alone does not give the actual values of T ( s ). To get this information from the G ( s )-plane, Hall proposed to construct the locus of points in the G ( s )-plane such that T ( s ) has constant magnitude, and also the locus of points in the G ( s )-plane such that T ( s ) has constant phase angle. Given a positive real value M representing a fixed magnitude, and denoting G ( s ) by z , the points satisfying M = | T ( s ) | = | G ( s ) | | 1 + G ( s ) | = | z | | 1 + z | {\displaystyle M=|T(s)|={\frac {|G(s)|}{|1+G(s)|}}={\frac {|z|}{|1+z|}}} are the points z in the G ( s )-plane such that the ratio of the distance between z and 0 to the distance between z and −1 is equal to M . The points z satisfying this locus condition lie on circles of Apollonius , and this locus is known in the context of control systems as the M-circles . 
Given a positive real value N representing a phase angle, the points satisfying N = arg ⁡ [ G ( s ) 1 + G ( s ) ] = arg ⁡ [ G ( s ) ] − arg ⁡ [ 1 + G ( s ) ] = arg ⁡ [ z ] − arg ⁡ [ 1 + z ] {\displaystyle N=\arg \left[{\frac {G(s)}{1+G(s)}}\right]=\arg[G(s)]-\arg[1+G(s)]=\arg[z]-\arg[1+z]} are the points z in the G ( s )-plane for which the difference between the argument of z and the argument of 1 + z is constant. In other words, the angle subtending the line segment between −1 and 0, as seen from z , must be constant. This implies that the points z satisfying this locus condition lie on arcs of circles, [ 2 ] and this locus is known in the context of control systems as the N-circles . To use the Hall circles, a plot of the M and N circles is overlaid on the Nyquist plot of the open-loop transfer function. The points of intersection between these graphs give the corresponding values of the closed-loop transfer function. Hall circles are also used with the Nichols plot , and in this setting they are also known as the Nichols chart. Rather than directly overlaying the Hall circles on the Nichols plot, the points of the circles are transferred to a new coordinate system where the ordinate is given by 20 log 10 ⁡ ( | G ( s ) | ) {\displaystyle 20\log _{10}(|G(s)|)} and the abscissa is given by arg ⁡ ( G ( s ) ) {\displaystyle \arg(G(s))} . The advantage of using the Nichols chart is that adjusting the gain of the open-loop transfer function translates the Nichols plot directly up and down in the chart.
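For M ≠ 1, squaring the locus condition |z| = M |1 + z| and completing the square shows that the M-circle has center −M²/(M² − 1) on the real axis and radius M/|M² − 1|. A short sketch verifying this against the locus condition (the specific M values are arbitrary):

```python
# Sample points on the M-circle with center -M**2/(M**2 - 1) and radius
# M/|M**2 - 1| (valid for M != 1; M = 1 degenerates to the vertical line
# Re z = -1/2) and confirm they satisfy |z| / |1 + z| = M.
import cmath
import math

def m_circle(M):
    center = complex(-M**2 / (M**2 - 1), 0.0)
    radius = M / abs(M**2 - 1)
    return center, radius

for M in (0.5, 2.0, 3.0):                 # arbitrary magnitudes
    center, radius = m_circle(M)
    for k in range(8):
        z = center + radius * cmath.exp(1j * 2 * math.pi * k / 8)
        assert math.isclose(abs(z) / abs(1 + z), M, rel_tol=1e-9)

print(m_circle(2.0))                      # center -4/3, radius 2/3
```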
https://en.wikipedia.org/wiki/Hall_circles
The Hall effect is the production of a potential difference (the Hall voltage ) across an electrical conductor that is transverse to an electric current in the conductor and to an applied magnetic field perpendicular to the current. It was discovered by Edwin Hall in 1879. [ 1 ] [ 2 ] The Hall coefficient is defined as the ratio of the induced electric field to the product of the current density and the applied magnetic field. It is a characteristic of the material from which the conductor is made, since its value depends on the type, number, and properties of the charge carriers that constitute the current. Wires carrying current in a magnetic field experience a mechanical force perpendicular to both the current and magnetic field. In the 1820s, André-Marie Ampère observed this underlying mechanism that led to the discovery of the Hall effect. [ 3 ] However it was not until a solid mathematical basis for electromagnetism was systematized by James Clerk Maxwell 's " On Physical Lines of Force " (published in 1861–1862) that details of the interaction between magnets and electric current could be understood. Edwin Hall then explored the question of whether magnetic fields interacted with the conductors or the electric current, and reasoned that if the force was specifically acting on the current, it should crowd current to one side of the wire, producing a small measurable voltage. [ 3 ] In 1879, he discovered this Hall effect while he was working on his doctoral degree at Johns Hopkins University in Baltimore , Maryland . [ 4 ] Eighteen years before the electron was discovered, his measurements of the tiny effect produced in the apparatus he used were an experimental tour de force , published under the name "On a New Action of the Magnet on Electric Currents". [ 5 ] [ 6 ] [ 7 ] The Hall effect is due to the nature of the current in a conductor. 
Current consists of the movement of many small charge carriers , typically electrons , holes , ions (see Electromigration ) or all three. When a magnetic field is present, these charges experience a force, called the Lorentz force . [ 8 ] When such a magnetic field is absent, the charges follow approximately straight paths between collisions with impurities, phonons , etc. However, when a magnetic field with a perpendicular component is applied, their paths between collisions are curved; thus, moving charges accumulate on one face of the material. This leaves equal and opposite charges exposed on the other face, where there is a scarcity of mobile charges. The result is an asymmetric distribution of charge density across the Hall element, arising from a force that is perpendicular to both the straight path and the applied magnetic field. The separation of charge establishes an electric field that opposes the migration of further charge, so a steady electric potential is established for as long as the charge is flowing. [ 9 ] In classical electromagnetism, electrons move in the opposite direction of the current I (by convention "current" describes a theoretical "hole flow"). In some metals and semiconductors it appears that "holes" are actually flowing, because the direction of the voltage is opposite to the derivation below. For a simple metal where there is only one type of charge carrier (electrons), the Hall voltage V H can be derived by using the Lorentz force and seeing that, in the steady-state condition, charges are not moving in the y -axis direction. Thus, the magnetic force on each electron in the y -axis direction is cancelled by a y -axis electrical force due to the buildup of charges. The v x term is the drift velocity of the charge carriers, which at this point are assumed, by convention, to be holes. The v x B z term is negative in the y -axis direction by the right-hand rule.
F = q ( E + v × B ) {\displaystyle \mathbf {F} =q{\bigl (}\mathbf {E} +\mathbf {v} \times \mathbf {B} {\bigr )}} In steady state, F = 0 , so 0 = E y − v x B z , where E y is assigned in the direction of the y -axis (and not with the arrow of the induced electric field ξ y as in the image, which points in the − y direction and indicates the direction of the field caused by the electrons). In wires, electrons instead of holes are flowing, so v x → − v x and q → − q . Also E y = − ⁠ V H / w ⁠ . Substituting these changes gives V H = v x B z w {\displaystyle V_{\mathrm {H} }=v_{x}B_{z}w} The conventional "hole" current is in the negative direction of the electron current and the negative of the electrical charge, which gives I x = ntw (− v x )(− e ) where n is charge carrier density , tw is the cross-sectional area, and − e is the charge of each electron. Solving for v x {\displaystyle v_{x}} and plugging into the above gives the Hall voltage: V H = I x B z n t e {\displaystyle V_{\mathrm {H} }={\frac {I_{x}B_{z}}{nte}}} If the charge build-up had been positive (as it appears in some metals and semiconductors), then the V H assigned in the image would have been negative (positive charge would have built up on the left side). The Hall coefficient is defined as R H = E y j x B z {\displaystyle R_{\mathrm {H} }={\frac {E_{y}}{j_{x}B_{z}}}} or E = − R H ( J c × B ) {\displaystyle \mathbf {E} =-R_{\mathrm {H} }(\mathbf {J} _{c}\times \mathbf {B} )} where j is the current density of the carrier electrons, and E y is the induced electric field. In SI units, this becomes R H = E y j x B = V H t I B = 1 n e . {\displaystyle R_{\mathrm {H} }={\frac {E_{y}}{j_{x}B}}={\frac {V_{\mathrm {H} }t}{IB}}={\frac {1}{ne}}.} (The units of R H are usually expressed as m 3 /C, or Ω·cm/ G , or other variants.) As a result, the Hall effect is very useful as a means to measure either the carrier density or the magnetic field.
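To get a feel for the magnitude of V H = I x B z /( nte ), a short numerical sketch helps. The carrier density used for copper below is a standard textbook figure assumed for illustration, not a value taken from this article:

```python
# Numeric illustration of V_H = I_x * B_z / (n * t * e) for a copper strip.
E_CHARGE = 1.602176634e-19  # elementary charge, C
N_COPPER = 8.5e28           # free-electron density of copper, m^-3 (assumed textbook value)

def hall_voltage(current_a, b_field_t, n_per_m3, thickness_m):
    """Hall voltage (V) for a single-carrier conductor of thickness t."""
    return current_a * b_field_t / (n_per_m3 * thickness_m * E_CHARGE)

# 1 A through a 1 mm thick copper strip in a 1 T field:
v_h = hall_voltage(1.0, 1.0, N_COPPER, 1e-3)
print(f"V_H = {v_h:.2e} V")  # tens of nanovolts: the effect is tiny in good metals
```

The smallness of this number for metals is exactly why the Hall effect is so much easier to exploit in semiconductors, whose carrier densities are many orders of magnitude lower.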
One very important feature of the Hall effect is that it differentiates between positive charges moving in one direction and negative charges moving in the opposite direction. In the diagram above, the Hall effect is presented with a negative charge carrier (the electron). But consider the case where the same magnetic field and current are applied, with the current instead carried inside the Hall effect device by a positive particle. The particle would of course have to be moving in the opposite direction to the electron for the current to be the same—down in the diagram, not up like the electron. And thus, mnemonically speaking, the thumb in the Lorentz force law , representing (conventional) current, would point in the same direction as before, because the current is the same—an electron moving up is the same current as a positive charge moving down. And with the fingers (magnetic field) also the same, the charge carrier is deflected to the left in the diagram regardless of whether it is positive or negative. But if positive carriers are deflected to the left, they build up a relatively positive voltage on the left, whereas if negative carriers (namely electrons) are deflected, they build up a negative voltage on the left, as shown in the diagram. Thus, for the same current and magnetic field, the electric polarity of the Hall voltage depends on the internal nature of the conductor, and is useful for elucidating its inner workings. This property of the Hall effect offered the first real proof that electric currents in most metals are carried by moving electrons, not by protons. It also showed that in some substances (especially p-type semiconductors ), it is contrarily more appropriate to think of the current as positive " holes " moving rather than negative electrons.
A common source of confusion with the Hall effect in such materials is that holes moving one way are really electrons moving the opposite way, so one expects the Hall voltage polarity to be the same as if electrons were the charge carriers as in most metals and n-type semiconductors . Yet we observe the opposite polarity of Hall voltage, indicating positive charge carriers. However, of course there are no actual positrons or other positive elementary particles carrying the charge in p-type semiconductors , hence the name "holes". In the same way as the oversimplistic picture of light in glass as photons being absorbed and re-emitted to explain refraction breaks down upon closer scrutiny, this apparent contradiction too can only be resolved by the modern quantum mechanical theory of quasiparticles wherein the collective quantized motion of multiple particles can, in a real physical sense, be considered to be a particle in its own right (albeit not an elementary one). [ 11 ] Unrelatedly, inhomogeneity in the conductive sample can result in a spurious sign of the Hall effect, even in ideal van der Pauw configuration of electrodes. For example, a Hall effect consistent with positive carriers was observed in evidently n-type semiconductors. [ 12 ] Another source of artefact, in uniform materials, occurs when the sample's aspect ratio is not long enough: the full Hall voltage only develops far away from the current-introducing contacts, since at the contacts the transverse voltage is shorted out to zero. When a current-carrying semiconductor is kept in a magnetic field, the charge carriers of the semiconductor experience a force in a direction perpendicular to both the magnetic field and the current. At equilibrium, a voltage appears at the semiconductor edges. The simple formula for the Hall coefficient given above is usually a good explanation when conduction is dominated by a single charge carrier . 
However, in semiconductors and many metals the theory is more complex, because in these materials conduction can involve significant, simultaneous contributions from both electrons and holes , which may be present in different concentrations and have different mobilities . For moderate magnetic fields the Hall coefficient is [ 13 ] [ 14 ] R H = p μ h 2 − n μ e 2 e ( p μ h + n μ e ) 2 {\displaystyle R_{\mathrm {H} }={\frac {p\mu _{\mathrm {h} }^{2}-n\mu _{\mathrm {e} }^{2}}{e(p\mu _{\mathrm {h} }+n\mu _{\mathrm {e} })^{2}}}} or equivalently R H = p − n b 2 e ( p + n b ) 2 {\displaystyle R_{\mathrm {H} }={\frac {p-nb^{2}}{e(p+nb)^{2}}}} with b = μ e μ h . {\displaystyle b={\frac {\mu _{\mathrm {e} }}{\mu _{\mathrm {h} }}}.} Here n is the electron concentration, p the hole concentration, μ e the electron mobility, μ h the hole mobility and e the elementary charge. For large applied fields the simpler expression analogous to that for a single carrier type holds. Although it is well known that magnetic fields play an important role in star formation, research models [ 15 ] [ 16 ] [ 17 ] indicate that Hall diffusion critically influences the dynamics of gravitational collapse that forms protostars. For a two-dimensional electron system which can be produced in a MOSFET , in the presence of large magnetic field strength and low temperature , one can observe the quantum Hall effect, in which the Hall conductance σ undergoes quantum Hall transitions to take on quantized values. The spin Hall effect consists in the spin accumulation on the lateral boundaries of a current-carrying sample. No magnetic field is needed. It was predicted by Mikhail Dyakonov and V. I. Perel in 1971 and observed experimentally more than 30 years later, both in semiconductors and in metals, at cryogenic as well as at room temperatures. 
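The equivalence of the two expressions given above for the two-carrier Hall coefficient can be checked numerically. The carrier concentrations and mobilities below are illustrative values chosen for the sketch, not data from the article:

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def hall_coeff(n, p, mu_e, mu_h):
    """Two-carrier Hall coefficient (m^3/C), moderate-field form."""
    return (p * mu_h**2 - n * mu_e**2) / (E_CHARGE * (p * mu_h + n * mu_e) ** 2)

def hall_coeff_b(n, p, mu_e, mu_h):
    """Equivalent form written with the mobility ratio b = mu_e / mu_h."""
    b = mu_e / mu_h
    return (p - n * b**2) / (E_CHARGE * (p + n * b) ** 2)

# Illustrative mixed-conduction case (SI units): ten times more holes than
# electrons, but the electrons are about 3.3 times more mobile.
n, p = 1e21, 1e22         # carrier densities, m^-3
mu_e, mu_h = 0.15, 0.045  # mobilities, m^2/(V*s)

r_h = hall_coeff(n, p, mu_e, mu_h)
assert abs(r_h - hall_coeff_b(n, p, mu_e, mu_h)) < 1e-15  # both forms agree
assert r_h < 0  # electron-like sign despite the hole majority
```

Because the mobilities enter the numerator squared, a minority of high-mobility electrons can set the sign of R H even when holes greatly outnumber them, which is one reason Hall measurements on mixed conductors must be interpreted with care.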
The quantity describing the strength of the spin Hall effect is known as the spin Hall angle, defined as: θ S H = 2 e ℏ | j s | | j e | {\displaystyle \theta _{SH}={\frac {2e}{\hbar }}{\frac {|j_{s}|}{|j_{e}|}}} where j s {\displaystyle j_{s}} is the spin current generated by the applied current density j e {\displaystyle j_{e}} . [ 18 ] For mercury telluride two-dimensional quantum wells with strong spin-orbit coupling, in zero magnetic field, at low temperature, the quantum spin Hall effect was observed in 2007. [ 19 ] In ferromagnetic materials (and paramagnetic materials in a magnetic field ), the Hall resistivity includes an additional contribution, known as the anomalous Hall effect (or the extraordinary Hall effect ), which depends directly on the magnetization of the material, and is often much larger than the ordinary Hall effect. (Note that this effect is not due to the contribution of the magnetization to the total magnetic field .) For example, in nickel, the anomalous Hall coefficient is about 100 times larger than the ordinary Hall coefficient near the Curie temperature, but the two are similar at very low temperatures. [ 20 ] Although the anomalous Hall effect is a well-recognized phenomenon, there is still debate about its origins in the various materials. The anomalous Hall effect can be either an extrinsic (disorder-related) effect due to spin -dependent scattering of the charge carriers , or an intrinsic effect which can be described in terms of the Berry phase effect in the crystal momentum space ( k -space). [ 21 ] The Hall effect in an ionized gas ( plasma ) is significantly different from the Hall effect in solids (where the Hall parameter is always much less than unity). In a plasma, the Hall parameter can take any value.
The Hall parameter, β , in a plasma is the ratio between the electron gyrofrequency , Ω e , and the electron-heavy particle collision frequency, ν : β = Ω e ν = e B m e ν {\displaystyle \beta ={\frac {\Omega _{\mathrm {e} }}{\nu }}={\frac {eB}{m_{\mathrm {e} }\nu }}} where The Hall parameter value increases with the magnetic field strength. Physically, the trajectories of electrons are curved by the Lorentz force . Nevertheless, when the Hall parameter is low, their motion between two encounters with heavy particles ( neutral or ion ) is almost linear. But if the Hall parameter is high, the electron movements are highly curved. The current density vector, J , is no longer collinear with the electric field vector, E . The two vectors J and E make the Hall angle , θ , which also gives the Hall parameter: β = tan ⁡ ( θ ) . {\displaystyle \beta =\tan(\theta ).} The family of Hall effects has expanded to encompass other quasiparticles in semiconductor nanostructures. Specifically, a set of Hall effects has emerged based on excitons [ 22 ] [ 23 ] and exciton-polaritons [ 24 ] in 2D materials and quantum wells. Hall sensors amplify and use the Hall effect for a variety of sensing applications. The Corbino effect, named after its discoverer Orso Mario Corbino , is a phenomenon involving the Hall effect, but a disc-shaped metal sample is used in place of a rectangular one. Because of its shape, the Corbino disc allows the observation of Hall effect–based magnetoresistance without the associated Hall voltage. A radial current through a circular disc, subjected to a magnetic field perpendicular to the plane of the disc, produces a "circular" current through the disc. The absence of free transverse boundaries renders the interpretation of the Corbino effect simpler than that of the Hall effect. [ 25 ]
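The Hall parameter and the Hall angle it determines can be computed directly from the definitions above. The field strength and collision frequency used below are illustrative values for a weakly collisional plasma, not figures from the article:

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def hall_parameter(b_field_t, collision_freq_hz):
    """beta = e*B / (m_e * nu): electron gyrofrequency over collision frequency."""
    return E_CHARGE * b_field_t / (M_ELECTRON * collision_freq_hz)

def hall_angle(beta):
    """Hall angle (radians) between current density J and electric field E."""
    return math.atan(beta)

# Illustrative weakly collisional plasma: B = 0.1 T, nu = 1e9 collisions/s.
beta = hall_parameter(0.1, 1e9)
theta = hall_angle(beta)
# Here beta >> 1: electron trajectories are strongly curved between
# collisions, and J is far from collinear with E.
```

Note how strongly the regime depends on the collision frequency: in a solid, where ν is enormously larger, the same field gives β ≪ 1, matching the article's contrast between plasmas and solids.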
https://en.wikipedia.org/wiki/Hall_effect
Benzophenone is a naturally occurring organic compound with the formula (C 6 H 5 ) 2 CO, generally abbreviated Ph 2 CO. Benzophenone has been found in some fungi, fruits and plants, including grapes. [ 4 ] It is a white solid with a low melting point and rose-like odor [ 5 ] that is soluble in organic solvents. Benzophenone is the simplest diaromatic ketone . It is a widely used building block in organic chemistry, being the parent diarylketone. [ citation needed ] Carl Graebe of the University of Königsberg , in an early literature report from 1874, described working with benzophenone. [ 5 ] Benzophenone can be used as a photo initiator in ultraviolet (UV)-curing applications [ 6 ] such as inks, imaging, and clear coatings in the printing industry. Benzophenone prevents UV light from damaging scents and colors in products such as perfumes and soaps. Benzophenone can also be added to plastic packaging as a UV blocker to prevent photo-degradation of the packaging polymers or its contents. Its use allows manufacturers to package the product in clear glass or plastic (such as a PETE water bottle). [ 7 ] Without it, opaque or dark packaging would be required. In biological applications, benzophenones have been used extensively as photophysical probes to identify and map peptide–protein interactions. [ 8 ] Benzophenone is used as an additive in flavorings or perfumes for "sweet-woody-geranium-like notes". [ 9 ] Benzophenone is produced by the copper-catalyzed oxidation of diphenylmethane with air. [ 10 ] A laboratory route involves the reaction of benzene with carbon tetrachloride followed by hydrolysis of the resulting diphenyldichloromethane . [ 11 ] It can also be prepared by Friedel–Crafts acylation of benzene with benzoyl chloride in the presence of a Lewis acid (e.g. aluminium chloride ) catalyst: since benzoyl chloride can itself be produced by the reaction of benzene with phosgene the first synthesis proceeded directly from those materials. 
[ 12 ] Another route of synthesis is through a palladium(II)/oxometalate catalyst. This converts an alcohol to a ketone with two groups on each side. [ 13 ] Another, less well-known reaction to produce benzophenone is the pyrolysis of anhydrous calcium benzoate. [ 14 ] Benzophenone is a common photosensitizer in photochemistry . It crosses from the S 1 state into the triplet state with nearly 100% yield. The resulting diradical will abstract a hydrogen atom from a suitable hydrogen donor to form a ketyl radical . Alkali metals reduce benzophenone to the deeply blue colored radical anion , diphenylketyl: [ 15 ] Generally sodium is used as the alkali metal. Sodium-benzophenone ketyl is used in the purification of organic solvents, particularly ethers, because it reacts with water and oxygen to give non-volatile products. [ 16 ] [ 17 ] Adsorbents such as alumina, silica gel, and especially molecular sieves are superior and far safer. [ 18 ] The sodium-benzophenone method is common since it gives a visual indication that water, oxygen, and peroxides are absent from the solvent. Large scale purification may be more economical using devices which utilize adsorbents such as the aforementioned alumina or molecular sieves. [ 19 ] The ketyl is soluble in the organic solvent being dried, which leads to faster purification. In comparison, sodium is insoluble, and its heterogeneous reaction is much slower. When excess alkali metal is present a second reduction may occur, resulting in a color transformation from deep blue to purple: [ 15 ] There are over 300 natural benzophenones, with great structural diversity and biological activities. They are being investigated as potential sources of new drugs. [ 20 ] Substituted benzophenones such as oxybenzone and dioxybenzone are used in many sunscreens . The use of benzophenone-derivatives which structurally resemble a strong photosensitizer has been criticized (see sunscreen controversy ). 
Michler's ketone has dimethylamino substituents at each para position . The high-strength polymer PEEK is prepared from derivatives of benzophenone. 2-Amino-5-chlorobenzophenone is used in the synthesis of benzodiazepines . [ 21 ] It is considered "essentially nontoxic". [ 10 ] Benzophenone is, however, banned as a food additive by the US Food and Drug Administration , despite the FDA's continuing stance that this chemical does not pose a risk to public health under the conditions of its intended use. [ 22 ] [ 23 ] The European Union permits it as a flavouring substance, [ 24 ] having established a Total Dietary Intake of 0.3 mg/kg of body weight per day. [ 25 ] Benzophenone derivatives are known to be pharmacologically active. From a molecular chemistry point of view, the interaction of benzophenone with B-DNA has been demonstrated experimentally. [ 26 ] The interaction with DNA and the subsequent photo-induced energy transfer underlie benzophenone's activity as a DNA photosensitizer and may explain part of its therapeutic potential. In 2014, benzophenones were named Contact Allergen of the Year by the American Contact Dermatitis Society. [ 27 ] Benzophenone is an endocrine disruptor capable of binding to the pregnane X receptor . [ 28 ]
https://en.wikipedia.org/wiki/Haller-Bauer_reaction
The Haller index , created in 1987 by J. Alex Haller , S. S. Kramer, and S. A. Lietman, [ 1 ] is a mathematical relationship observed in a cross-section of the human chest, typically on a CT scan . It is defined as the ratio of the transverse diameter (the horizontal distance of the inside of the ribcage ) to the anteroposterior diameter (the shortest distance between the vertebrae and sternum ). [ 2 ] where: More recent studies show that simple chest x-rays are just as effective as CT scans for calculating the Haller index, and recommend replacing CT scans with CXR to reduce radiation exposure in all but gross deformities. [ 3 ] [ 4 ] [ 5 ] A normal Haller index should be about 2.5. Chest wall deformities such as pectus excavatum can cause the sternum to invert, thus increasing the index. [ 6 ] [ 7 ] In severe asymmetric cases, where the sternum dips below the level of the vertebra, the index can be a negative value. [ 8 ]
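Since the index is simply the ratio of two measured distances, its computation is a one-line function. The diameters below are illustrative numbers chosen for the sketch, not clinical data:

```python
def haller_index(transverse_cm, anteroposterior_cm):
    """Haller index: inner transverse chest diameter divided by the
    sternum-to-vertebra (anteroposterior) distance."""
    return transverse_cm / anteroposterior_cm

# A typical chest gives an index near 2.5; in pectus excavatum the sternum
# sinks toward the spine, shrinking the denominator and raising the index.
normal = haller_index(25.0, 10.0)  # 2.5
severe = haller_index(25.0, 6.0)   # ~4.17
```

Because only the anteroposterior distance changes appreciably in pectus excavatum, the index grows quickly as the sternum approaches the spine, which is what makes this simple ratio a useful severity measure.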
https://en.wikipedia.org/wiki/Haller_index
Aging is characterized by a progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death. The hallmarks of aging are the types of biochemical changes that occur in all organisms that experience biological aging and lead to a progressive loss of physiological integrity, impaired function and, eventually, death . They were first listed in a landmark paper in 2013 [ 1 ] to conceptualize the essence of biological aging and its underlying mechanisms. The following three premises for the interconnected hallmarks have been proposed: [ 2 ] Over time, almost all living organisms experience a gradual and irreversible increase in senescence and an associated loss of proper function of the bodily systems. As aging is the primary risk factor for major human diseases, including cancer , diabetes , cardiovascular disorders , and neurodegenerative diseases , it is important to describe and classify the types of changes that it entails. [ citation needed ] After a decade, the authors of the heavily cited original paper updated the set of proposed hallmarks in January 2023. [ 3 ] [ 2 ] In the new review, three new hallmarks were added: disabled macroautophagy , chronic inflammation , and dysbiosis , bringing the total to 12 proposed hallmarks. [ 2 ] The nine hallmarks of aging of the original paper are grouped into three categories as below: [ 1 ] Primary hallmarks (causes of damage) Antagonistic hallmarks (responses to damage) Integrative hallmarks (culprits of the phenotype) Primary hallmarks are the primary causes of cellular damage. Antagonistic hallmarks are antagonistic or compensatory responses to the manifestation of the primary hallmarks. Integrative hallmarks are the functional result of the previous two groups of hallmarks that lead to further operational deterioration associated with aging. [ 1 ] There are also proposed further hallmarks or underlying mechanisms that drive multiple of these hallmarks.
[ citation needed ] Each hallmark was chosen to try to fulfill the following criteria: [ 1 ] These conditions are met to different extents by each of these hallmarks. The last criterion is not present in many of the hallmarks, as science has not yet found feasible ways to amend these problems in living organisms. [ citation needed ] Proper functioning of the genome is one of the most important prerequisites for the smooth functioning of a cell and the organism as a whole. Alterations in the genetic code have long been considered one of the main causal factors in aging. [ 4 ] [ 5 ] In multicellular organisms genome instability is central to carcinogenesis , [ 6 ] and in humans it is also a factor in some neurodegenerative diseases such as amyotrophic lateral sclerosis or the neuromuscular disease myotonic dystrophy . Abnormal chemical structures in the DNA are formed mainly through oxidative stress and environmental factors. [ 7 ] A number of molecular processes work continuously to repair this damage . [ 8 ] Unfortunately, the results are not perfect, and thus damage accumulates over time. [ 4 ] Several review articles have shown that deficient DNA repair , allowing greater accumulation of DNA damages , causes premature aging ; and that increased DNA repair facilitates greater longevity. [ 9 ] Telomeres are regions of repetitive nucleotide sequences associated with specialized proteins at the ends of linear chromosomes . They protect the terminal regions of chromosomal DNA from progressive degradation and ensure the integrity of linear chromosomes by preventing DNA repair systems from mistaking the ends of the DNA strand for a double strand break . [ citation needed ] Telomere shortening is associated with aging, mortality and aging-related diseases . Normal aging is associated with telomere shortening in both humans and mice, and studies on genetically modified animal models suggest causal links between telomere erosion and aging. 
[ 10 ] Leonard Hayflick demonstrated that a normal human fetal cell population will divide between 40 and 60 times in cell culture before entering a senescence phase. Each time a cell undergoes mitosis , the telomeres on the ends of each chromosome shorten slightly. Cell division will cease once telomeres shorten to a critical length. [ 11 ] This is useful when uncontrolled cell proliferation (like in cancer) needs to be stopped, but detrimental when normally functioning cells are unable to divide when necessary. [ citation needed ] An enzyme called telomerase elongates telomeres in gametes and stem cells . [ 12 ] Telomerase deficiency in humans has been linked to several aging-related diseases related to loss of regenerative capacity of tissues. [ 13 ] It has also been shown that premature aging in telomerase-deficient mice is reverted when telomerase is reactivated. [ 14 ] The shelterin protein complex regulates telomerase activity in addition to protecting telomeres from DNA repair in eukaryotes . [ citation needed ] Out of all the genes that make up a genome, only a subset are expressed at any given time. The functioning of a genome depends both on the specific order of its nucleotides (genomic factors), and also on which sections of the DNA chain are spooled on histones and thus rendered inaccessible, and which ones are unspooled and available for transcription ( epi genomic factors). Depending on the needs of the specific tissue type and environment that a given cell is in, histones can be modified to turn specific genes on or off as needed. [ 15 ] The profile of where, when and to what extent these modifications occur (the epigenetic profile) changes with aging, turning useful genes off and unnecessary ones on, disrupting the normal functioning of the cell. [ 16 ] As an example, sirtuins are a type of protein deacetylases that promote the binding of DNA onto histones and thus turn unnecessary genes off. [ 17 ] These enzymes use NAD as a cofactor . 
With aging, the level of NAD in cells decreases and so does the ability of sirtuins to turn off unneeded genes at the right time. Decreasing the activity of sirtuins has been associated with accelerated aging and increasing their activity has been shown to stave off several age-related diseases. [ 18 ] [ 19 ] Proteostasis is the homeostatic process of maintaining all the proteins necessary for the functioning of the cell in their proper shape, structure and abundance. [ 20 ] Protein misfolding, oxidation, abnormal cleavage or undesired post-translational modification can create dysfunctional or even toxic proteins or protein aggregates that hinder the normal functioning of the cell. [ 21 ] Though these proteins are continually removed and recycled, formation of damaged or aggregated proteins increases with age, leading to a gradual loss of proteostasis. [ 22 ] This can be slowed or suppressed by caloric restriction [ 23 ] or by administration of rapamycin , both through inhibiting the mTOR pathway . [ 24 ] Nutrient sensing is a cell's ability to recognize, and respond to, changes in the concentration of macronutrients such as glucose , fatty acids and amino acids . In times of abundance, anabolism is induced through various pathways , the most well-studied among them the mTOR pathway. [ 25 ] When energy and nutrients are scarce, the AMPK receptor senses this and switches off mTOR to conserve resources. [ 26 ] In a growing organism, growth and cell proliferation are important and thus mTOR is upregulated . In a fully grown organism, mTOR-activating signals naturally decline during aging. [ 27 ] It has been found that forcibly overactivating these pathways in grown mice leads to accelerated aging and increased incidence of cancer. [ 28 ] mTOR inhibition methods like dietary restriction or administering rapamycin have been shown to be one of the most robust methods of increasing lifespan in worms, flies and mice. 
[ 29 ] [ 30 ] The mitochondrion is the powerhouse of the cell. Different human cells contain from several up to 2500 mitochondria, [ 31 ] each one converting carbon (in the form of acetyl-CoA ) and oxygen into energy (in the form of ATP ) and carbon dioxide . During aging, the efficiency of mitochondria tends to decrease. The reasons for this are still quite unclear, but several mechanisms are suspected: reduced biogenesis , [ 32 ] accumulation of damage and mutations in mitochondrial DNA , oxidation of mitochondrial proteins, and defective quality control by mitophagy . [ 33 ] Dysfunctional mitochondria contribute to aging through interfering with intracellular signaling [ 34 ] [ 35 ] and triggering inflammatory reactions. [ 36 ] Under certain conditions, a cell will exit the cell cycle without dying, instead becoming dormant and ceasing its normal function. This is called cellular senescence. Senescence can be induced by several factors, including telomere shortening, [ 37 ] DNA damage [ 38 ] and stress. Since the immune system is programmed to seek out and eliminate senescent cells, [ 39 ] it might be that senescence is one way for the body to rid itself of cells damaged beyond repair. The links between cell senescence and aging are several: Stem cells are undifferentiated or partially differentiated cells with the unique ability to self-renew and differentiate into specialized cell types. For the first few days after fertilization, the embryo consists almost entirely of stem cells. As the fetus grows, the cells multiply, differentiate and assume their appropriate function within the organism. In adults, stem cells are mostly located in areas that undergo gradual wear ( intestine , lung , mucosa , skin ) or need continuous replenishment ( red blood cells , immune cells , sperm cells , hair follicles ). [ citation needed ] Loss of regenerative ability is one of the most obvious consequences of aging. 
This is largely because the proportion of stem cells and the speed of their division gradually lowers over time. [ 43 ] It has been found that stem cell rejuvenation can reverse some of the effects of aging at the organismal level. [ 44 ] Different tissues and the cells they consist of need to orchestrate their work in a tightly controlled manner so that the organism as a whole can function. One of the main ways this is achieved is through excreting signal molecules into the blood where they make their way to other tissues, affecting their behavior. [ 45 ] [ 46 ] The profile of these molecules changes as we age. One of the most prominent changes in cell signaling biomarkers is " inflammaging ", the development of a chronic low-grade inflammation throughout the body with advanced age. [ 47 ] The normal role of inflammation is to recruit the body's immune system and repair mechanisms to a specific damaged area for as long as the damage and threat are present. The constant presence of inflammation markers throughout the body wears out the immune system and damages healthy tissue. [ 48 ] It's also been found that senescent cells excrete a specific set of molecules called the SASP (Senescence-Associated Secretory Phenotype) which induce senescence in neighboring cells. [ 49 ] Conversely, lifespan-extending manipulations targeting one tissue can slow the aging process in other tissues as well. [ 50 ] These may constitute further hallmarks or underlying mechanisms that drive multiple of these hallmarks. In 2014, other scientists have defined a slightly different conceptual model for aging, called 'The Seven Pillars of Aging', in which just three of the 'hallmarks of aging' are included (stem cells and regeneration, proteostasis, epigenetics). [ 53 ] The seven pillars model highlights the interconnectedness between all of the seven pillars which is not highlighted in the nine hallmarks of aging model. 
[ 54 ] Authors of the original paper merged or linked various hallmarks of cancer with those of aging. [ 55 ] The authors also concluded that the hallmarks are not only interconnected among each other but also "to the recently proposed hallmarks of health , which include organizational features of spatial compartmentalization, maintenance of homeostasis , and adequate responses to stress". [ 2 ] [ 56 ]
https://en.wikipedia.org/wiki/Hallmarks_of_aging
For over two millennia, texts in Chinese herbology and traditional Chinese medicine have recorded medicinal plants that are also hallucinogens and psychedelics . Some are familiar psychoactive plants in Western herbal medicine (e.g., Chinese : 莨菪 ; pinyin : làngdàng , i.e. Hyoscyamus niger ), but several Chinese plants have not been noted as hallucinogens in modern works (e.g., Chinese : 雲實 ; pinyin : yúnshí ; lit. 'cloud seed', i.e. Caesalpinia decapetala ). Chinese herbals are an important resource for the history of botany , for instance, Zhang Hua 's c. 290 Bowuzhi is the earliest record of the psilocybin mushroom xiàojùn 笑菌 (lit. "laughing mushroom", i.e. Gymnopilus junonius ). There is a lexical gap between Chinese names and descriptions of hallucinogenic plants and English pharmacological terminology for hallucinogens, which are commonly divided into psychedelics, dissociatives, and deliriants. The English lexicon has a complex semantic field for psychoactive drugs , and most terms are neologisms . [ a ] Hallucination (from Latin alucinor "to wander in mind") is defined as: "The apparent, often strong subjective perception of an external object or event when no such stimulus or situation is present; may be visual, auditory, olfactory, gustatory, or tactile." Hallucinogen (coined in 1952 from Latin alucinor and -gen "producing"): "A mind-altering chemical, drug, or agent, specifically a chemical the most prominent pharmacologic action of which is on the central nervous system (mescaline); in normal people, it elicits optic or auditory hallucinations, depersonalization, perceptual disturbances, and disturbances of thought processes." Pharmacology divides hallucinogens into three classes. 
Psychedelic (first used in 1956 from Greek psyche- "mind; soul" and delein "to manifest"): "Pertaining to a rather imprecise category of drugs with mainly central nervous system action, and with effects said to be the expansion or heightening of consciousness, LSD, hashish, mescaline, psilocybin." Dissociative is a class of hallucinogen that produces feelings of dissociation (Latin dissociatus "to disjoin, separate" from socius "partner, ally") meaning "(3) An unconscious separation of a group of mental processes from the rest, resulting in an independent functioning of these processes and a loss of the usual associations, a separation of affect from cognition." Dissociative disorders is defined as "a group of mental disorders characterized by disturbances in the functions of identity, memory, consciousness, or perception of the environment; this diagnostic group includes dissociative (older term, psychogenic) amnesia, dissociative fugue, dissociative identity (older term, multiple personality) disorder, and depersonalization disorder." Deliriant is a technical term introduced to distinguish hallucinogens that primarily cause delirium (1982, from Latin deliro "to be crazy" and delira "go out of the furrow"): "An altered state of consciousness, consisting of confusion, distractibility, disorientation, disordered thinking and memory, defective perception (illusions and hallucinations), prominent hyperactivity, agitation, and autonomic nervous system overactivity; caused by illness, medication, or toxic, structural, and metabolic disorders." The equivalent semantic field in the Chinese lexicon comprises contemporary loanwords . [ b ] Huànjué ( 幻覺 "hallucination; delusion; illusion") compounds huàn ( 幻 "unreal; imaginary; illusory") and jué ( 覺 "feeling; sensation; perception"). Zhìhuànjì ( 致幻劑 "psychedelic; hallucinogen") compounds zhì ( 致 "incur; cause"), huàn "unreal; imaginary; illusory", and jì ( 劑 "medicinal preparation; dose"). 
Zhìhuànyào ( 致幻藥 "hallucinogenic drug") with yào ( 藥 "medicine; drug") is a less common synonym. Míhuànyàowù ( 迷幻藥物 "psychedelic") combines míhuàn ( 迷幻 "phantasmagoric; surreal; mysterious; psychedelic") and yàowù ( 藥物 "medicine; pharmaceutical; medicament"). The Chinese technical names for the last two classes of hallucinogens are rare: Yóulíyàopǐn ( 游离藥品 "dissociative") compounds yóulí ( 游离 "dissociated; drifting") and yàopǐn ( 藥品 " medicine; chemical reagent; drug"); and Zhìzhānwàngyào ( 致谵妄藥 "deliriant") combines zhì "incur; cause", zhānwàng ( 譫妄 "(medical) delirium"), and yào "medicine; drug". Chinese pharmaceutical literature mainly comprises texts called bencao ( Chinese : 本草 ; pinyin : běncǎo ; Wade–Giles : pen-ts'ao ), translatable as English herbal , pharmacopoeia , or materia medica . This word compounds ben "(plant) root/stem; basis, origin; foundation; book" and cao "grass; herb; straw". Although bencao is sometimes misinterpreted as "roots and herbs", the approximate meaning is "[pharmaceutics whose] basis [ ben ] [is] herbs [ cao ]". [ 1 ] [ failed verification ] These works deal with drugs of all origins, mainly vegetable but also mineral, animal, and even the human body . The Chinese botanist , academic, and researcher Hui-lin Li (1911-2002) wrote seminal articles about the history and use of hallucinogenic plants in China. Li cites a story in Li Shizhen 's 1596 magnum opus Bencao gangmu as the first discussion about the general use of psychoactive plants. In 1561, after horrific murders in Changli , the Ming dynasty Jiajing Emperor proclaimed a nationwide edict warning about the dangers of hallucinogens. Lang-tang ( Hyoscyamus niger ), Yün-shih ( Caesalpinia Sepiaria ), Fang-k'uei ( Peucedanum japonica ) and Red Shanglu ( Phytolacca acinosa ) all can cause hallucination in peoples. In the past, this significance has not been fully divulged. 
Plants of this kind are all toxic, which can obscure the mind, alter one's consciousness, and confuse one's perception of sight and sound. In the T'ang times, An Lu-shan [a foreign warlord in the Chinese army service] once enticed the Kitan [tribesmen surrendered to his command] to drink Lang-tang wine and buried them alive while they were unconscious. Again in the second month of the 43rd year of the Chia-ch'in period (1561 A.D.), a wandering monk, Wu Ju-hsiang of Shensi province, who possessed wizardry, arrived at Ch'ang-li and stopped over at the house of a resident, Chang Shu. Upon finding the latter's wife being very beautiful, he asked that the entire family sit together at the table with him when he was being offered a meal. He put some reddish potion in the rice and after a while the whole family became unconscious and submitted to his assault. He then blew a magic spell into the ears of Chang Shu and the latter turned crazy and violent. Chang visualized his entire family as all devils and thereby killed them all, sixteen altogether, without any blood shed. The local authorities captured Chang Shu and kept him in prison. After ten days, he spat out nearly two spittoonsful of phlegm, became conscious, and found out himself that those he killed were his parents, brothers, sisters-in-law, his wife, sons, sisters, nephews. Both Chang and Wu were committed to the death sentence. The Emperor, Shih-tsung, proclaimed throughout the country about the case. The particular magic potion must be of the kind of Lang-tang or similar drugs. When the man was under the spell, he saw everyone else as a devil. It is thus very important to find out the remedy that counteracts such a thing. [ 2 ] The following eight examples of confirmed and possible hallucinogens recorded in Chinese herbals are primarily based on the ten in Li Hui-Lin's 1977 article. 
[ 3 ] Two edible plants, with only one Chinese source and no Western ones mentioning psychoactive properties, are omitted as unlikely: fangfeng ( 防风 ; 防風 ; fángfēng ; fang-feng " Saposhnikovia divaricata ; Chinese parsnip ") and longli ( 龙荔 ; 龍荔 ; lónglì ; lung-li " Nephelium topengii ; a type of lychee "). [ 4 ] The làngdàng ( 莨菪 ; làngdàng ; lang-tang " Hyoscyamus niger ; black henbane") is one of the most famous hallucinogenic drugs in Chinese herbals. The seeds, which contain psychoactive tropane alkaloids , are called làngdàngzi ( 莨菪子 , with -zi "child; seed") or tiānxiānzi ( 天仙子 "heavenly transcendent seeds"). For use in medicine, the seeds are supposedly treated by soaking in vinegar and milk to reduce their toxicity. The Shennong Bencaojing says, "[The seeds] when taken [when properly prepared] for a prolonged period enable one to walk for long distances, benefiting to the mind and adding to the strength ... and to communicate with spirits and seeing devils. When taken in excess, it causes one to stagger madly." [ 5 ] Lei Xiao's 470 Leigong paozhilun ( 雷公炮炙論 "Master Lei's Treatise on the Decoction and Preparation of Drugs") states that the seed "is extremely poisonous, and when accidentally taken, it causes delirium and seeing sparks and flashes", and Zhen Chuan's c. 620 Bencao yaoxing ( 本草藥性 "Nature of Drugs in Materia Medica") says the seeds "should not be taken raw as it hurts people, causing them to see devils, acting madly like picking needles". [ 6 ] The yunshi ( 云实 ; 雲實 ; yúnshí ; yun-shih " Caesalpinia decapetala ; cat's claw") was a versatile drug plant in the Chinese pharmacopeia, and the root, flowers, and seeds were all used in medicine. The Shennong Bencao says, "[The flowers] could enable one to see spirits, and when taken in excess, cause one to stagger madly. If taken over a prolonged period, they produce somatic levitation and effect communication with spirits." 
Tao Hongjing , who edited the official Shangqing Daoist canon, also compiled the c. 510 Mingyi bielu ( 名醫別錄 "Supplementary Records of Famous Physicians") that says "[The flowers] will drive away evil spirits. When put in water and burned, spirits can be summoned" and "The seeds are like langdang ( Hyoscyamus niger ), if burned, spirits can be summoned; but this [sorcery] method has not been observed." [ 7 ] Li Hui-Lin notes this plant "has not been noted as a hallucinogenic plant in modern works. In fact, as far as I am aware, it has not been investigated medicinally or chemically". [ 8 ] The fangkui ( 防葵 ; fángkuí ; fang-k'ui " Peucedanum japonicum ") root is used in Chinese medicine, and like the previous cat's claw, has not been noted as a hallucinogen in modern works. Tao Hongjing's c. 510 Mingyi bielu states, "Feverish people should not take it, because it causes one to be delirious and see spirits"; and Chen Yanzhi's ( 陳延之 ) c. 454-473 Xiaoping fang ( 小品方 "Minor Prescriptions") says that fangkui , "if taken in excess, makes one become delirious and act somewhat like mad". [ 9 ] P. japonicum is also used quite extensively in Korean cuisine, not only as a culinary herb, but also as a leaf vegetable, raising the question as to what constitutes consumption 'to excess'. It may be the case that the strain of plant grown in Korea is less toxic/medicinal than that found in China, or that very substantial quantities of the plant must be eaten before any psychoactive effects are manifested. Alternatively, the psychoactive components of the plant may be deactivated by the cooking processes employed in the preparation of the plant in Korea. [ 10 ] [ 11 ] The shanglu ( 商陆 ; 商陸 ; shānglù ; shang-lu " Phytolacca acinosa ; India pokeweed") has edible leaves and poisonous roots. China's oldest extant dictionary, the c. 3rd-century BCE Erya (13: 110) gives two names for pokeweed: chùtāng ( 蓫薚 ) and mǎwěi ( 馬尾 "horsetail"). 
Chinese herbals distinguish two kinds of shanglu , white with white flowers and white root, and red with red flowers and purple root. The white root is edible when cooked but the red root is extremely poisonous. Tao Hongjing's Mingyi bielu records how Daoists used the red variety, "By boiling or brewing and then taken, it can be used for abdominal parasitic worms and for seeing spirits"; Su Song 's 1061 Bencao tujing ( 本草圖經 "Illustrated Pharmacopeia") says, "It was much used by sorcerers in ancient times". [ 12 ] Su Gong's 659 Tang bencao (唐本草 " Tang dynasty pharmacopeia") says "The red kind can be used to summon spirits; it is very poisonous. It can be only used as external application for inflammation. When ingested, it is extremely harmful, causing unceasing bloody stool. It may be fatal. It causes one to see spirits." [ 13 ] The 1406 Jiuhuang Bencao "Famine Relief Herbal" lists pokeweed as a famine food . It gives instructions for removing the poisonous phytolaccatoxin from the white roots and mentions Daoist xian using the flowers: "Cut them up into slices, scald, then soak and wash repeatedly (throwing away the extract) until the material is clean; then just eat it with garlic. … Plants with white flowers can (it is said) confer longevity; the immortals collected them to make savouries to take with their wine." [ 14 ] Dama ( 大麻 ; dàmá ; ta-ma " Cannabis sativa ; hemp; marijuana") has been grown in China since Neolithic times. At a very early period the Chinese recognized the Cannabis plant as dioecious : the male plants produce better fibers, while the female plants produce more cannabinoids . In modern usage, the names are xǐ ( 枲 "male cannabis") and jū ( 苴 "female cannabis"). [ 15 ] Reflecting the importance of cannabis in ancient China, the ca. 3rd century BCE Erya dictionary (13) has four definitions: fén ( 黂 ; 蕡 ) and xǐshí ( 枲實 ) mean "cannabis flower"; xǐ ( 枲 ) and má ( 麻 ) mean "cannabis" generally and not "male cannabis"; fú ( 莩 , lit. 
"reed membrane") and mámǔ ( 麻母 , "cannabis mother") mean "female cannabis"; and bò ( 薜 ) and shānmá ( 山麻 "mountain cannabis") mean "wild cannabis", possibly C. ruderalis . The Shennong bencao calls "cannabis flowers/buds" mafen ( 麻蕡 ) or mabo ( 麻勃 ) and says: "To take much makes people see demons and throw themselves about like maniacs [ 多食令人見鬼狂走 ]. But if one takes it over a long period of time one can communicate with the spirits, and one's body becomes light [ 久服通神明輕身 ]". [ 16 ] The Mingyi bielu records that in the 6th century, mabo were, "very little used in medicine, but the magician-technicians [ shujia 術家 ] say that if one consumes them with ginseng it will give one preternatural knowledge of events in the future." [ 17 ] Meng Shen's c. 670 Shiliao bencao ( 食療本草 "Nutritional Therapy Pharmacopeia") says that people combine equal parts of raw cannabis flowers, Japanese sweet flag , and wild mandrake , "pound them into pills of the size of marbles and take one facing the sun every day. After one hundred days, one can see spirits." [ 18 ] Tang Shengwei's 1108 Zhenglei bencao ( 證類本草 "Reorganized Pharmacopeia") gives a more complete account of the pharmaceutical uses of cannabis: "Ma-fen has a spicy taste; it is toxic; it is used for waste diseases and injuries; it clears blood and cools temperature; it relieves fluxes; it undoes rheumatism; it discharges pus. If taken in excess, it produces hallucinations and a staggering gait. If taken over a long term, it causes one to communicate with spirits and lightens one's body." [ 19 ] According to the sinologists and historians Joseph Needham and Lu Gwei-djen , some early Daoists adapted censers for the religious and spiritual use of cannabis . The c. 570 Daoist encyclopedia Wushang Biyao ( 無上秘要 "Supreme Secret Essentials") recorded adding cannabis into ritual censers, and they suggest Yang Xi (330-c. 
386), who wrote the Shangqing scriptures during alleged visitations by Daoist xian , was "aided almost certainly by cannabis". [ 20 ] The mantuoluo ( 曼陀罗 ; 曼陀羅 ; màntuóluó ; man-t'o-lo " Datura stramonium ; jimsonweed" or "(Buddhism) mandala ") contains highly toxic tropane alkaloids . Several Datura species were introduced into China from India, and Li Shizhen's 1596 Bencao gangmu was the first herbal to record the medicinal use of flowers and seeds. The drug is used in combination with Cannabis sativa and taken with wine as an anesthetic for small operations and cauterizations. Li Shizhen personally experimented with jimsonweed and recorded his experience as follows: "According to traditions, it is alleged that when the flowers are picked for use with wine while one is laughing, the wine will cause one to produce laughing movements; and when the flowers are picked while one is dancing, the wine will cause one to produce dancing movements. [I have found out] that such movements will be produced when one becomes half-drunk with the wine and someone else laughs or dances to induce these actions." [ 21 ] The maogen ( 毛茛 ; máogèn ; mao-ken " Ranunculus japonicus ; buttercup") is a poisonous plant with bright yellow flowers. The Daoist alchemist Ge Hong 's c. 340 Zhouhou jiuzu fang ( 肘後救卒方 "Remedies for Emergencies") [ 22 ] says, "Among the herbs there is the Shui Lang (water Lang, a kind of Mao-ken) a plant with rounded leaves which grows along water courses and is eaten by crabs. It is poisonous to man and when eaten by mistake, it produces a maniacal delirium, appearing like a stroke and sometimes with blood-spitting. The remedy is to use licorice." Later herbals, which do not mention maogen as a deliriant, say the whole plant is considered poisonous and should only be used externally as a medicine for irritation and inflammation. 
The xiaojun ( 笑菌 ; xiàojùn ; hsiao-chün "laughing mushroom") was known to Chinese herbalists for centuries before modern botanists identified it as a type of psilocybin mushroom , most likely either Gymnopilus junonius (Laughing Gym) or Panaeolus papilionaceus (Petticoat Mottlegill). The earliest record of a mushroom that causes uncontrollable laughter appears in Zhang Hua 's c. 290 Bowuzhi compendium of natural wonders, in a context describing two unusual kinds of jùn ( 菌 "mushroom; fungus") that grow on tree bark. In all the mountain commanderies to the South of the Yangzi, there is a fungus which grows [ 生菌 ] throughout the spring and summer on the large trees that have fallen down; it is known as the Zhen [ 椹 "chopping block (for execution)"]. If one eats it, it is tasty, but suddenly the poison takes effect and kills the eater. … If one eats Sweet gum tree growths [ 生者 ], they will induce uncontrollable laughter. If one drinks "earth sauce" [ tǔjiāng 土漿 ] one will recover. [ 23 ] The Bencao gangmu records Tao Hongjing's recipe for preparing "earth sauce": "Dig out a pit three chi deep in a place where there is yellow earth. Take freshly-drawn water and pour it into the pit, stirring the water so as to make it turbid. After a short while, draw off the clear water and use this. It is called either 'soil sauce' or 'earth sauce'." [ 24 ] Hui-lin Li quotes a Chinese-language study of "laughing mushrooms" stating that this "soil infusion" is the clear liquid after soil is mixed with water and allowed to settle, and an effective antidote for poisons. [ 25 ] Subsequent Chinese authors give many similar records. Chen Renyu's ( 陳仁玉 ) 1245 Jùnpǔ ( 菌譜 "Mushroom Guidebook") says this fungus is named tǔxùn ( 土蕈 "earth mushroom") or dùxùn ( 杜蕈 "pear mushroom") and "grows in the ground. People believe it to be formed by the air from poisonous vermin, and kills people if taken.... Those poisoned by it will laugh. 
As an antidote, use strong tea, mixed with alum and fresh clear water. Upon swallowing this, it will cure immediately". [ 25 ] The c. 304 Nanfang Caomu Zhuang mentions sweetgum tree growths in a quite different context: the shamans in the southern state of Yue use a magical fēngrén ( 楓人 "sweetgum person") that is a kind of liúyǐng ( 瘤癭 " gall ") found growing on sweetgum trees. "When aged they develop tumors. Sometimes in a violent thunder storm, the tree tumors grow suddenly three to five feet in one night, and these are called Feng-jen. The witches of Yueh collect these for witchcraft, saying that they have proof of their supernatural quality." [ 26 ] Later sources gave two explanations of the sweetgum tree growths, either as galls that resemble humans and have magical powers or as parasitic plants with rain-giving powers. [ 27 ] In Japan, both medieval and modern sources record laughing mushrooms. An 11th-century story in the Konjaku Monogatarishū describes a group of Buddhist nuns who ate maitake ( 舞茸 "dancing mushrooms") and began to laugh and dance uncontrollably. It is also known as the waraitake ( 笑茸 "laughing mushroom"), which scholars have identified as the Panaeolus papilionaceus or Petticoat Mottlegill; the related Panaeolus cinctulus or Banded Mottlegill; and the psilocybin mushroom Gymnopilus junonius or Laughing Cap, also called ōwaraitake ( 大笑茸 "Big Laughing Mushroom"). [ 28 ] In a study on early Daoist practitioners searching for the elixir of immortality , Needham and Lu mention the possible use of hallucinogenic plants, such as Amanita muscaria "fly agaric" and xiaojun "laughing mushrooms". Based on Tang dynasty and Song dynasty references, they tentatively identify it as a Panaeolus or Pholiota and suggest that the properties of at least some psychoactive mushrooms were widely known. They predict the further exploration of hallucinogenic fungi and other plants in Daoism and in Chinese culture in general "will be an exciting task". [ 29 ]
https://en.wikipedia.org/wiki/Hallucinogenic_plants_in_Chinese_herbals
In mathematical group theory , the Hall–Higman theorem , due to Philip Hall and Graham Higman ( 1956 , Theorem B), describes the possibilities for the minimal polynomial of an element of prime power order for a representation of a p -solvable group . Suppose that G is a p -solvable group with no nontrivial normal p -subgroups, acting faithfully on a vector space over a field of characteristic p . If x is an element of order p^n of G then its minimal polynomial is of the form ( X − 1)^r for some r ≤ p^n . The Hall–Higman theorem states that one of the following 3 possibilities holds: (1) r = p^n ; (2) p is a Fermat prime , the Sylow 2-subgroups of G are non-abelian, and r ≥ p^n − p^(n−1) ; or (3) p = 2, the Sylow q -subgroups of G are non-abelian for some Mersenne prime q = 2^m − 1 less than 2^n , and r ≥ 2^n − 2^(n−m) . The group SL 2 ( F 3 ) is 3-solvable (in fact solvable ) and has an obvious 2-dimensional representation over a field of characteristic p = 3, in which the elements of order 3 have minimal polynomial ( X − 1)^2 with r = 3 − 1, illustrating the second possibility: 3 is a Fermat prime, and the Sylow 2-subgroups of SL 2 ( F 3 ), being quaternion of order 8, are non-abelian.
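The closing example can be checked with a few lines of code. The sketch below (an illustration, not part of the theorem) verifies that the unipotent matrix [[1, 1], [0, 1]], an order-3 element of SL2(F3), has minimal polynomial (X − 1)^2 over F3:

```python
import numpy as np

p = 3
M = np.array([[1, 1], [0, 1]])   # order-3 element of SL2(F3)
I = np.eye(2, dtype=int)

# Order of M modulo 3: M != I but M^3 == I.
M2 = M @ M % p
M3 = M2 @ M % p
assert not np.array_equal(M % p, I) and np.array_equal(M3, I)

# Minimal polynomial over F3: (M - I) is nonzero but (M - I)^2 vanishes,
# so the minimal polynomial is (X - 1)^2, i.e. r = 2 = 3 - 1 < 3^1.
N = (M - I) % p
assert N.any()                 # (X - 1) alone does not annihilate M
assert not (N @ N % p).any()   # (X - 1)^2 does
print("minimal polynomial of M over F_3 is (X - 1)^2")
```

Here r = 2 falls strictly below p^n = 3, consistent with case (2) of the theorem.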
https://en.wikipedia.org/wiki/Hall–Higman_theorem
The Hall–Héroult process is the major industrial process for smelting aluminium . It involves dissolving aluminium oxide (alumina) (obtained most often from bauxite , aluminium 's chief ore, through the Bayer process ) in molten cryolite and electrolyzing the molten salt bath, typically in a purpose-built cell. The process, conducted at an industrial scale, happens at 940–980 °C (1,700–1,800 °F) and produces aluminium with a purity of 99.5–99.8%. Recycled aluminium , by contrast, does not require electrolysis and is thus not treated using this method. [ 1 ] The Hall–Héroult process consumes substantial electrical energy, and its electrolysis stage can produce significant amounts of carbon dioxide if the electricity is generated from high-emission sources. Furthermore, the process generates fluorocarbon compounds as byproducts , contributing to both air pollution and climate change . [ 2 ] [ 3 ] Elemental aluminium cannot be produced by the electrolysis of an aqueous aluminium salt , because hydronium ions readily oxidize elemental aluminium. Although a molten aluminium salt could be used instead, aluminium oxide has a melting point of 2072 °C (3,762 °F) [ 4 ] so electrolysing it is impractical. In the Hall–Héroult process, alumina, Al2O3, is dissolved in molten synthetic cryolite , Na3AlF6, to lower its melting point for easier electrolysis. [ 1 ] The carbon source is generally a coke (fossil fuel) . [ 2 ] In the Hall–Héroult process the following simplified reactions take place at the carbon electrodes:

Cathode: Al3+ + 3 e− → Al
Anode: O2− + C → CO + 2 e−
Overall: Al2O3 + 3 C → 2 Al + 3 CO

In reality, much more CO2 is formed at the anode than CO:

2 Al2O3 + 3 C → 4 Al + 3 CO2

Pure cryolite has a melting point of 1009 ± 1 °C (1,848 °F). With a small percentage of alumina dissolved in it, its melting point drops to about 1000 °C (1,832 °F). 
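Taking the CO2-dominant overall reaction 2 Al2O3 + 3 C → 4 Al + 3 CO2 at face value, the direct process emissions and anode carbon consumption per kilogram of aluminium follow from the molar masses. The sketch below is an illustrative back-of-the-envelope estimate, not production data:

```python
# Direct CO2 from the anode reaction 2 Al2O3 + 3 C -> 4 Al + 3 CO2.
# Approximate molar masses in g/mol.
M_AL, M_CO2, M_C = 26.98, 44.01, 12.01

# Per 4 mol Al produced, 3 mol CO2 is released and 3 mol C consumed.
co2_per_kg_al = (3 * M_CO2) / (4 * M_AL)   # kg CO2 per kg Al
c_per_kg_al = (3 * M_C) / (4 * M_AL)       # kg anode carbon per kg Al

print(f"CO2 from the reaction itself: {co2_per_kg_al:.2f} kg per kg Al")
print(f"Anode carbon consumed:        {c_per_kg_al:.2f} kg per kg Al")
```

This reaction-only figure (roughly 1.2 kg CO2 per kg Al) is far below total life-cycle estimates for the process, which are dominated by emissions from electricity generation.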
Besides having a relatively low melting point, cryolite is used as an electrolyte because, among other things, it also dissolves alumina well, conducts electricity, dissociates electrolytically at higher voltage than alumina, and is less dense than aluminum at the temperatures required by the electrolysis. [ 1 ] Aluminium fluoride (AlF 3 ) is usually added to the electrolyte. The ratio NaF/AlF 3 is called the cryolite ratio and it is 3 in pure cryolite. In industrial production, AlF 3 is added so that the cryolite ratio is 2–3 to further reduce the melting point, so that the electrolysis can happen at temperatures between 940 and 980 °C (1,700 to 1,800 °F). The density of liquid aluminum is 2.3 g/ml at temperatures between 950 and 1000 °C (1,750 to 1,830 °F). The density of the electrolyte should be less than 2.1 g/ml, so that the molten aluminum separates from the electrolyte and settles properly to the bottom of the electrolysis cell. In addition to AlF 3 , other additives like lithium fluoride may be added to alter different properties (melting point, density, conductivity etc.) of the electrolyte. [ 1 ] The mixture is electrolysed by passing a low voltage (under 5 V) direct current at 100–300 kA through it. This causes liquid aluminium to be deposited at the cathode , while the oxygen from the alumina combines with carbon from the anode to produce mostly carbon dioxide. [ 1 ] The theoretical minimum energy requirement for this process is 6.23 kWh/(kg of Al), but in practice it commonly requires about 15.37 kWh/kg. [ 5 ] Cells in factories are operated 24 hours per day so that the molten material in them will not solidify. Temperature within the cell is maintained via electrical resistance. Oxidation of the carbon anode increases the electrical efficiency at a cost of consuming the carbon electrodes and producing carbon dioxide. 
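The gap between the theoretical minimum and the practical energy figure can be explored with Faraday's law of electrolysis. In the sketch below, the cell voltage (4.2 V) and current efficiency (95%) are assumed illustrative values, not figures from the text:

```python
F = 96485.0    # Faraday constant, C/mol
M_AL = 26.98   # molar mass of Al, g/mol
Z = 3          # electrons per Al3+ ion reduced at the cathode

# Charge theoretically needed to deposit 1 kg of aluminium.
charge_c = Z * F * (1000.0 / M_AL)   # coulombs
charge_ah = charge_c / 3600.0        # ampere-hours (~2980 Ah)

# Illustrative cell parameters (assumptions, not source data).
cell_voltage = 4.2   # volts
current_eff = 0.95   # fraction of current that actually deposits Al

energy_kwh = charge_c * cell_voltage / current_eff / 3.6e6
print(f"charge per kg Al: {charge_ah:.0f} Ah")
print(f"energy at 4.2 V, 95% efficiency: {energy_kwh:.1f} kWh/kg")
```

With these assumed parameters the estimate lands in the low teens of kWh/kg, the same order as the practical figure quoted above; the remaining difference reflects real cell voltages and losses.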
[ 1 ] While solid cryolite is denser than solid aluminium at room temperature, liquid aluminium is denser than molten cryolite at temperatures around 1,000 °C (1,830 °F). The aluminium sinks to the bottom of the electrolytic cell, where it is periodically collected. The liquid aluminium is siphoned every 1 to 3 days to avoid having to use extremely high temperature valves and pumps. Alumina is added to the cells as the aluminum is removed. Collected aluminium from different cells in a factory is finally melted together to ensure uniform product and made into metal sheets. The electrolytic mixture is sprinkled with coke to prevent the anode's oxidation by the oxygen involved. [ 1 ] The cell produces gases at the anode, primarily CO 2 produced from anode consumption and hydrogen fluoride (HF) from the cryolite and flux (AlF 3 ). In modern facilities, fluorides are almost completely recycled to the cells and therefore used again in the electrolysis. Escaped HF can be neutralized to its sodium salt, sodium fluoride . Particulates are captured using electrostatic or bag filters. The CO 2 is usually vented into the atmosphere. [ 1 ] Agitation of the molten material in the cell increases its production rate at the expense of an increase in cryolite impurities in the product. Properly designed cells can leverage magnetohydrodynamic forces induced by the electrolysing current to agitate the electrolyte. In non-agitating static pool cells, the impurities either rise to the top of the metallic aluminium, or sink to the bottom, leaving high-purity aluminium in the middle area. [ 1 ] Electrodes in cells are mostly coke which has been purified at high temperatures. Pitch resin or tar is used as a binder. The materials most often used in anodes, coke and pitch resin, are mainly residues from the petroleum industry and need to be of high enough purity so no impurities end up in the molten aluminum or the electrolyte. 
[ 1 ] There are two primary anode technologies using the Hall–Héroult process: Söderberg technology and prebaked technology. In cells using Söderberg or self-baking anodes, there is a single anode per electrolysis cell. The anode is contained within a frame and, as the bottom of the anode turns mainly into CO 2 during the electrolysis, the anode loses mass and, being amorphous , it slowly sinks within its frame. More material is continuously added to the top of the anode in the form of briquettes made from coke and pitch. The lost heat from the smelting operation is used to bake the briquettes into the carbon form required for the reaction with alumina. The baking process in Söderberg anodes during electrolysis releases more carcinogenic PAHs and other pollutants than electrolysis with prebaked anodes and, partially for this reason, cells using prebaked anodes have become more common in the aluminium industry. More alumina is added to the electrolyte from the sides of the Söderberg anode after the crust on top of the electrolyte mixture is broken. [ 1 ] Prebaked anodes are baked in very large gas-fired ovens at high temperature before being lowered by various heavy industrial lifting systems into the electrolytic solution. There are usually 24 prebaked anodes in two rows per cell. Each anode is lowered vertically and individually by a computer, as the bottom surfaces of the anodes are eaten away during the electrolysis. Compared to Söderberg anodes, computer-controlled prebaked anodes can be brought closer to the molten aluminium layer at the bottom of the cell without any of them touching the layer and interfering with the electrolysis. This smaller distance decreases the resistance caused by the electrolyte mixture and increases the efficiency of prebaked anodes over Söderberg anodes. 
Prebake technology also has much lower risk of the anode effect (see below), but cells using it are more expensive to build and labor-intensive to use, as each prebaked anode in a cell needs to be removed and replaced once it has been used. Alumina is added to the electrolyte from between the anodes in prebake cells. [ 1 ] Prebaked anodes contain a smaller percentage of pitch, as they need to be more solid than Söderberg anodes. The remains of prebaked anodes are used to make more new prebaked anodes. Prebaked anodes are either made in the same factory where electrolysis happens, or are brought there from elsewhere. [ 1 ] The inside of the cell's bath is lined with a cathode made from coke and pitch. Cathodes also degrade during electrolysis, but much more slowly than anodes do, so their purity and maintenance requirements are lower than those of the anodes. Cathodes are typically replaced every 2–6 years. This requires the whole cell to be shut down. [ 1 ] The anode effect is a situation where too many gas bubbles form at the bottom of the anode and join, forming a layer. This increases the resistance of the cell, because smaller areas of the electrolyte touch the anode. These areas of the electrolyte and anode heat up as the cell's electric current is focused through them alone. This heats up the gas layer and causes it to expand, thus further reducing the surface area where electrolyte and anode are in contact with each other. The anode effect decreases the energy efficiency and the aluminium production of the cell. It also induces the formation of tetrafluoromethane (CF 4 ) in significant quantities, increases formation of CO and, to a lesser extent, also causes the formation of hexafluoroethane (C 2 F 6 ). CF 4 and C 2 F 6 are not CFCs , and, although not detrimental to the ozone layer , are still potent greenhouse gases . The anode effect is mainly a problem in Söderberg technology cells, not in prebaked cells. 
[ 1 ] Aluminium is the most abundant metallic element in the Earth's crust, but it is rarely found in its elemental state . It occurs in many minerals, but its primary commercial source is bauxite , a mixture of hydrated aluminium oxides and compounds of other elements such as iron. Prior to the Hall–Héroult process, elemental aluminium was made by heating ore along with elemental sodium or potassium in a vacuum . [ citation needed ] The method was complicated and consumed materials that were in themselves expensive at that time. This meant that the cost to produce the small amount of aluminium made in the early 19th century was very high, higher than for gold or platinum . [ 6 ] Bars of aluminium were exhibited alongside the French crown jewels at the Exposition Universelle of 1855 , and Emperor Napoleon III of France was said to have reserved his few sets of aluminium dinner plates and eating utensils for his most honored guests. [ 7 ] [ 8 ] [ 9 ] Production costs using older methods did come down, but when aluminium was selected as the material for the cap/lightning rod to sit atop the Washington Monument in Washington, D.C. upon its completion in 1884, it was still more expensive than silver . [ 10 ] The Hall–Héroult process was invented independently and almost simultaneously in 1886 by the American chemist Charles Martin Hall [ 11 ] and by the Frenchman Paul Héroult [ 12 ] —both 22 years old. Some authors claim Hall was assisted by his sister Julia Brainerd Hall ; [ 13 ] however, the extent to which she was involved has been disputed. [ 14 ] [ 15 ] In 1888, Hall opened the first large-scale aluminium production plant in Pittsburgh . It later became the Alcoa corporation. In 1997, the Hall–Héroult process was designated a National Historic Chemical Landmark by the American Chemical Society in recognition of the importance of the process in the commercialization of aluminum. 
[ 16 ] Aluminium produced via the Hall–Héroult process, in combination with cheaper electric power , helped make aluminium (and incidentally magnesium ) an inexpensive commodity rather than a precious metal. This, in turn, helped make it possible for pioneers like Hugo Junkers to utilize aluminium and aluminium-magnesium alloys to make items like metal airplanes by the thousands, or Howard Lund to make aluminium fishing boats. [ 17 ] In 2012 it was estimated that 12.7 tons of CO 2 emissions are generated per ton of aluminium produced. [ 18 ] In the 20th and 21st centuries, the aluminum industry, due to its large-scale requirements for cheap electricity, has often been sited in locations where such electricity is available. For example, Iceland, a country with no notable bauxite reserves and a population of less than half a million, is the world's twelfth largest aluminum producer due to the availability of cheap and plentiful electricity, particularly hydropower . Similarly, Aluminerie Alouette in Sept-Îles, Quebec is dependent for its electricity needs on the 5,428 MW Churchill Falls Generating Station operated by Churchill Falls (Labrador) Corporation Limited . The company town of Kitimat in British Columbia was built by Alcan to meet the growing demand for aluminum in the postwar era. It makes use of the Kenney Dam built to power the smelters. The Tiwai Point Aluminium Smelter on the South Island of New Zealand consumes some 570 MW of electricity, most of which is supplied by nearby Manapōuri Power Station . This amounts to around a third of the electricity demand of South Island and 13% of that of New Zealand as a whole. Borssele Nuclear Power Station was built primarily to supply electricity to an aluminum smelter operated by French Pechiney at the time.
https://en.wikipedia.org/wiki/Hall–Héroult_process
In mathematics , the Hall–Littlewood polynomials are symmetric functions depending on a parameter t and a partition λ. They are Schur functions when t is 0 and monomial symmetric functions when t is 1, and are special cases of Macdonald polynomials . They were first defined indirectly by Philip Hall using the Hall algebra , and later defined directly by Dudley E. Littlewood (1961). The Hall–Littlewood polynomial P is defined by

P_λ(x_1, …, x_n; t) = ( ∏_{i ≥ 0} ∏_{j=1}^{m(i)} (1 − t)/(1 − t^j) ) ∑_{w ∈ S_n} w( x_1^{λ_1} ⋯ x_n^{λ_n} ∏_{i < j} (x_i − t x_j)/(x_i − x_j) ),

where λ is a partition of length at most n with parts λ_i , m(i) is the number of parts of λ equal to i (including parts equal to 0), and S_n is the symmetric group of order n !. As an example, P_{(2)}(x_1, x_2; t) = x_1^2 + x_2^2 + (1 − t) x_1 x_2. We have that P_λ(x; 1) = m_λ(x) and P_λ(x; 0) = s_λ(x), while P_λ(x; −1) gives the Schur P polynomials. Expanding the Schur polynomials in terms of the Hall–Littlewood polynomials, one has

s_λ(x) = ∑_μ K_{λμ}(t) P_μ(x; t),

where K_{λμ}(t) are the Kostka–Foulkes polynomials . Note that at t = 1, these reduce to the ordinary Kostka coefficients. A combinatorial description of the Kostka–Foulkes polynomials was given by Lascoux and Schützenberger:

K_{λμ}(t) = ∑_{T ∈ SSYT(λ, μ)} t^{charge(T)},

where "charge" is a certain combinatorial statistic on semistandard Young tableaux, and the sum is taken over the set SSYT(λ, μ) of all semistandard Young tableaux T with shape λ and type μ.
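The symmetrized-sum definition above can be evaluated numerically; the following is a minimal sketch in exact rational arithmetic (the function name is our own, and the variables must be distinct with t ≠ 1 so that no denominator vanishes):

```python
# Sketch: evaluate the Hall-Littlewood polynomial P_lambda(x; t) at rational
# points, directly from the symmetrized-sum definition.
from fractions import Fraction
from itertools import permutations
from collections import Counter

def hall_littlewood_eval(lam, xs, t):
    """Evaluate P_lambda(x_1..x_n; t) at distinct rational points xs, t != 1."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))  # pad the partition with zero parts
    # Normalizing factor: over each part-multiplicity m, prod_{j=1}^m (1-t^j)/(1-t);
    # the symmetrized sum divided by this product gives P_lambda.
    v = Fraction(1)
    for m in Counter(lam).values():
        for j in range(1, m + 1):
            v *= (1 - t ** j) / (1 - t)
    total = Fraction(0)
    for perm in permutations(range(n)):
        y = [xs[i] for i in perm]            # w in S_n acts by permuting variables
        term = Fraction(1)
        for i in range(n):
            term *= y[i] ** lam[i]           # x_1^{lambda_1} ... x_n^{lambda_n}
        for i in range(n):
            for j in range(i + 1, n):
                term *= (y[i] - t * y[j]) / (y[i] - y[j])
        total += term
    return total / v

# P_(2)(x1,x2;t) = x1^2 + x2^2 + (1-t) x1 x2; at t = 0 it equals the Schur s_(2).
x = [Fraction(2), Fraction(3)]
print(hall_littlewood_eval((2,), x, Fraction(0)))       # 4 + 9 + 6 = 19
print(hall_littlewood_eval((1, 1), x, Fraction(1, 5)))  # x1*x2 = 6, independent of t
```

At t = 0 the value agrees with the Schur polynomial, and P_{(1,1)} reduces to the elementary symmetric polynomial x_1 x_2, matching the specializations stated above.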
https://en.wikipedia.org/wiki/Hall–Littlewood_polynomials
In nuclear physics , an atomic nucleus is called a halo nucleus or is said to have a nuclear halo when it has a core nucleus surrounded by a "halo" of orbiting protons or neutrons, which makes the radius of the nucleus appreciably larger than that predicted by the liquid drop model . Halo nuclei form at the extreme edges of the table of nuclides — the neutron drip line and proton drip line — and have short half-lives, measured in milliseconds. These nuclei are studied shortly after their formation in an ion beam . Typically, an atomic nucleus is a tightly bound group of protons and neutrons. However, in some nuclides there is an overabundance of one species of nucleon. In some of these cases, a nuclear core and a halo will form. Often, this property may be detected in scattering experiments, which show the nucleus to be much larger than the otherwise expected value. Normally, the radius of the nucleus (as probed by its scattering cross-section) is proportional to the cube root of its mass number, as would be the case for a sphere of constant density. Specifically, for a nucleus of mass number A , the radius r is approximately r = r_0 A^{1/3}, where r_0 is 1.2 fm . One example of a halo nucleus is 11 Li , which has a half-life of 8.6 ms. It contains a core of 3 protons and 6 neutrons, and a halo of two independent and loosely bound neutrons. It decays into 11 Be by the emission of an antineutrino and an electron. [ 1 ] Its mass radius of 3.16 fm is close to that of 32 S or, even more impressively, of 208 Pb , both much heavier nuclei. [ 2 ] Experimental confirmation of nuclear halos is recent and ongoing. Additional candidates are suspected. Several nuclides including 9 B, 13 N, and 15 N are calculated to have a halo in the excited state but not in the ground state . [ 3 ] Nuclei that have a neutron halo include 11 Be [ 5 ] and 19 C . A two-neutron halo is exhibited by 6 He , 11 Li , 17 B , 19 B and 22 C .
Two-neutron halo nuclei break into three fragments and are called Borromean because of this behavior, analogously to how all three of the Borromean rings are linked together but no two share a link. For example, the two-neutron halo nucleus 6 He (which can be taken as a three-body system consisting of an alpha particle and two neutrons) is bound, but neither 5 He nor the dineutron is. 8 He and 14 Be both exhibit a four-neutron halo. Nuclei that have a proton halo include 8 B and 26 P . A two-proton halo is exhibited by 17 Ne and 27 S . Proton halos are expected to be rarer and more unstable than neutron halos because of the repulsive forces of the excess proton(s).
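The radius relation quoted earlier, with r₀ = 1.2 fm, can be checked numerically; a minimal sketch (the function name is our own) shows why 11 Li counts as a halo nucleus — the constant-density formula predicts a much smaller radius than its measured 3.16 fm mass radius:

```python
# Empirical nuclear radius r = r0 * A**(1/3) for a constant-density sphere,
# with r0 = 1.2 fm as quoted in the text.
def nuclear_radius(mass_number, r0=1.2):
    """Radius (in fm) of a nucleus of the given mass number."""
    return r0 * mass_number ** (1 / 3)

print(round(nuclear_radius(11), 2))   # compact-nucleus prediction for A = 11
print(round(nuclear_radius(208), 2))  # for comparison: A = 208 (lead-208)
```

The prediction for A = 11 is about 2.67 fm, well below the measured 3.16 fm, which is the enlargement the halo neutrons produce.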
https://en.wikipedia.org/wiki/Halo_nucleus
The haloalkanes (also known as halogenoalkanes or alkyl halides ) are alkanes in which one or more hydrogen atoms have been replaced by halogen substituents. [ 1 ] They are a subset of the general class of halocarbons , although the distinction is not often made. Haloalkanes are widely used commercially. They are used as flame retardants , fire extinguishants , refrigerants , propellants , solvents , and pharmaceuticals . Following their widespread use in commerce, many halocarbons have also been shown to be serious pollutants and toxins. For example, the chlorofluorocarbons have been shown to lead to ozone depletion . Methyl bromide is a controversial fumigant. Only haloalkanes that contain chlorine, bromine, or iodine are a threat to the ozone layer , but fluorinated volatile haloalkanes may in theory have activity as greenhouse gases . Methyl iodide , a naturally occurring substance, however, does not have ozone-depleting properties, and the United States Environmental Protection Agency has designated the compound a non-ozone layer depleter. For more information, see Halomethane . Haloalkanes or alkyl halides are compounds with the general formula "RX", where R is an alkyl or substituted alkyl group and X is a halogen (F, Cl, Br, I). Haloalkanes have been known for centuries. Chloroethane was produced in the 15th century. The systematic synthesis of such compounds developed in the 19th century in step with the development of organic chemistry and the understanding of the structure of alkanes. Methods were developed for the selective formation of C-halogen bonds. Especially versatile methods included the addition of halogens to alkenes, hydrohalogenation of alkenes, and the conversion of alcohols to alkyl halides. These methods are so reliable and so easily implemented that haloalkanes became cheaply available for use in industrial chemistry, because the halide could be further replaced by other functional groups.
While many haloalkanes are human-produced, substantial amounts are biogenic. From the structural perspective, haloalkanes can be classified according to the connectivity of the carbon atom to which the halogen is attached. In primary (1°) haloalkanes, the carbon that carries the halogen atom is only attached to one other alkyl group. An example is chloroethane ( CH 3 CH 2 Cl ). In secondary (2°) haloalkanes, the carbon that carries the halogen atom has two C–C bonds. In tertiary (3°) haloalkanes, the carbon that carries the halogen atom has three C–C bonds. [ citation needed ] Haloalkanes can also be classified according to the type of group 17 halogen present. Haloalkanes containing carbon bonded to fluorine , chlorine , bromine , and iodine result in organofluorine , organochlorine , organobromine and organoiodine compounds, respectively. Compounds containing more than one kind of halogen are also possible. Several classes of widely used haloalkanes are classified in this way: chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs). These abbreviations are particularly common in discussions of the environmental impact of haloalkanes. [ citation needed ] Haloalkanes generally resemble the parent alkanes in being colorless, relatively odorless, and hydrophobic. The melting and boiling points of chloro-, bromo-, and iodoalkanes are higher than those of the analogous alkanes, scaling with the atomic weight and number of halides. This effect is due to the increased strength of the intermolecular forces —from London dispersion to dipole-dipole interaction—because of the increased polarizability. Thus tetraiodomethane ( CI 4 ) is a solid whereas tetrachloromethane ( CCl 4 ) is a liquid. Many fluoroalkanes, however, go against this trend and have lower melting and boiling points than their nonfluorinated analogues, due to the decreased polarizability of fluorine.
For example, methane ( CH 4 ) has a melting point of −182.5 °C whereas tetrafluoromethane ( CF 4 ) has a melting point of −183.6 °C. [ citation needed ] As they contain fewer C–H bonds, haloalkanes are less flammable than alkanes, and some are used in fire extinguishers. Haloalkanes are better solvents than the corresponding alkanes because of their increased polarity. Haloalkanes containing halogens other than fluorine are more reactive than the parent alkanes—it is this reactivity that is the basis of most controversies. Many are alkylating agents , with primary haloalkanes and those containing heavier halogens being the most active (fluoroalkanes do not act as alkylating agents under normal conditions). The ozone-depleting abilities of the CFCs arise from the photolability of the C–Cl bond. [ citation needed ] An estimated 4,100,000,000 kg of chloromethane is produced annually by natural sources. [ 2 ] The oceans are estimated to release 1 to 2 million tons of bromomethane annually. [ 3 ] The formal naming of haloalkanes should follow IUPAC nomenclature , which puts the halogen as a prefix to the alkane. For example, ethane with bromine becomes bromoethane , and methane with four chlorine groups becomes tetrachloromethane . However, many of these compounds already have an established trivial name, which is endorsed by the IUPAC nomenclature, for example chloroform (trichloromethane) and methylene chloride ( dichloromethane ); nowadays, though, systematic IUPAC nomenclature is preferred. To reduce confusion, this article follows the systematic naming scheme throughout. Haloalkanes can be produced from virtually all organic precursors. From the perspective of industry, the most important ones are alkanes and alkenes. Alkanes react with halogens by free radical halogenation . In this reaction a hydrogen atom is removed from the alkane, then replaced by a halogen atom by reaction with a diatomic halogen molecule.
Free radical halogenation typically produces a mixture of compounds mono- or multihalogenated at various positions. [ citation needed ] In hydrohalogenation , an alkene reacts with a dry hydrogen halide (HX) electrophile like hydrogen chloride ( HCl ) or hydrogen bromide ( HBr ) to form a mono-haloalkane. The double bond of the alkene is replaced by two new bonds, one with the halogen and one with the hydrogen atom of the hydrohalic acid. Markovnikov's rule states that under normal conditions, hydrogen is attached to the unsaturated carbon with the most hydrogen substituents. The rule is violated when neighboring functional groups polarize the multiple bond, or in certain additions of hydrogen bromide (addition in the presence of peroxides and the Wohl-Ziegler reaction ) which occur by a free-radical mechanism. [ citation needed ] Alkenes also react with halogens (X 2 ) to form haloalkanes with two neighboring halogen atoms in a halogen addition reaction . Alkynes react similarly, forming the tetrahalo compounds. This is sometimes known as "decolorizing" the halogen, since the reagent X 2 is colored and the product is usually colorless and odorless. [ citation needed ] Alcohols can be converted to haloalkanes. Direct reaction with a hydrohalic acid rarely gives a pure product, instead generating ethers . However, some exceptions are known: ionic liquids suppress the formation or promote the cleavage of ethers, [ 4 ] hydrochloric acid converts tertiary alcohols to chloroalkanes, and primary and secondary alcohols convert similarly in the presence of a Lewis acid activator, such as zinc chloride . The latter is exploited in the Lucas test . [ citation needed ] In the laboratory, more active deoxygenating and halogenating agents combine with base to effect the conversion. In the " Darzens halogenation ", thionyl chloride ( SOCl 2 ) with pyridine converts less reactive alcohols to chlorides.
Both phosphorus pentachloride ( PCl 5 ) and phosphorus trichloride ( PCl 3 ) function similarly, and alcohols convert to bromoalkanes under hydrobromic acid or phosphorus tribromide (PBr 3 ). The heavier halogens do not require preformed reagents: a catalytic amount of PBr 3 may be used for the transformation using phosphorus and bromine; PBr 3 is formed in situ . [ 5 ] Iodoalkanes may similarly be prepared using red phosphorus and iodine (equivalent to phosphorus triiodide ). [ citation needed ] One family of named reactions relies on the deoxygenating effect of triphenylphosphine . In the Appel reaction , the reagents are a tetrahalomethane and triphenylphosphine; the co-products are haloform and triphenylphosphine oxide . In the Mitsunobu reaction , the reagents are any nucleophile , triphenylphosphine, and an azodicarboxylate ; the co-products are triphenylphosphine oxide and a hydrazodiamide. [ citation needed ] Two methods for the synthesis of haloalkanes from carboxylic acids are the Hunsdiecker reaction and the Kochi reaction . [ citation needed ] Many chloro- and bromoalkanes are formed naturally. The principal pathways involve the enzymes chloroperoxidase and bromoperoxidase . [ citation needed ] Primary aromatic amines yield diazonium ions in a solution of sodium nitrite . Upon heating this solution with copper(I) chloride, the diazonium group is replaced by -Cl. This is a comparatively easy method to make aryl halides, as the gaseous product can be separated easily from the aryl halide. [ citation needed ] When an iodide is to be made, copper chloride is not needed. Addition of potassium iodide with gentle shaking produces the haloalkane. [ citation needed ] Haloalkanes are reactive towards nucleophiles . They are polar molecules: the carbon to which the halogen is attached is slightly electropositive whereas the halogen is slightly electronegative . This results in an electron-deficient (electrophilic) carbon which, inevitably, attracts nucleophiles .
[ citation needed ] Substitution reactions involve the replacement of the halogen with another group, giving the substituted product together with a halide ion. Haloalkanes behave as the R + synthon , and readily react with nucleophiles. [ citation needed ] Hydrolysis , a reaction in which water breaks a bond, is a good example of the nucleophilic nature of haloalkanes. The polar bond attracts a hydroxide ion, OH − (NaOH (aq) being a common source of this ion). This OH − is a nucleophile with a clearly negative charge; as it has excess electrons, it donates them to the carbon, which results in a covalent bond between the two. Thus C–X is broken by heterolytic fission, resulting in a halide ion, X − . As can be seen, the OH is now attached to the alkyl group, creating an alcohol . (Hydrolysis of bromoethane, for example, yields ethanol ). Reactions with ammonia give primary amines. [ citation needed ] Chloro- and bromoalkanes are readily substituted by iodide in the Finkelstein reaction . The iodoalkanes produced easily undergo further reaction. Sodium iodide is thus used as a catalyst . [ citation needed ] Haloalkanes react with ionic nucleophiles (e.g. cyanide , thiocyanate , azide ); the halogen is replaced by the respective group. This is of great synthetic utility: chloroalkanes are often inexpensively available. For example, after undergoing substitution reactions, cyanoalkanes may be hydrolyzed to carboxylic acids, or reduced to primary amines using lithium aluminium hydride . Azidoalkanes may be reduced to primary amines by the Staudinger reduction or lithium aluminium hydride . Amines may also be prepared from alkyl halides in amine alkylation , the Gabriel synthesis and the Delepine reaction , by undergoing nucleophilic substitution with potassium phthalimide or hexamine respectively, followed by hydrolysis.
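The substitutions described above can be written out as equations; these are standard textbook formulations consistent with the text, not reproduced from the original article:

```latex
% Hydrolysis of bromoethane to ethanol (nucleophilic substitution):
\mathrm{CH_3CH_2Br + OH^- \longrightarrow CH_3CH_2OH + Br^-}
% Finkelstein reaction (halide exchange with sodium iodide):
\mathrm{R{-}Cl + NaI \longrightarrow R{-}I + NaCl}
```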
[ citation needed ] In the presence of a base, haloalkanes alkylate alcohols, amines, and thiols to obtain ethers , N -substituted amines, and thioethers respectively. They are substituted by Grignard reagents to give magnesium salts and an extended alkyl compound. [ citation needed ] In dehydrohalogenation reactions, the halogen and an adjacent proton are removed from halocarbons, thus forming an alkene . For example, with bromoethane and sodium hydroxide (NaOH) in ethanol , the hydroxide ion HO − abstracts a hydrogen atom. A bromide ion is then lost, resulting in ethene , H 2 O and NaBr. Thus, haloalkanes can be converted to alkenes. Similarly, dihaloalkanes can be converted to alkynes . [ citation needed ] In related reactions, 1,2-dibromocompounds are debrominated by zinc dust to give alkenes, and geminal dihalides can react with strong bases to give carbenes . [ citation needed ] Haloalkanes undergo free-radical reactions with elemental magnesium to give alkylmagnesium compounds: Grignard reagents . Haloalkanes also react with lithium metal to give organolithium compounds . Both Grignard reagents and organolithium compounds behave as the R − synthon. Alkali metals such as sodium and lithium are able to cause haloalkanes to couple in the Wurtz reaction , giving symmetrical alkanes. Haloalkanes, especially iodoalkanes, also undergo oxidative addition reactions to give organometallic compounds . [ citation needed ] Chlorinated or fluorinated alkenes undergo polymerization. Important halogenated polymers include polyvinyl chloride (PVC) and polytetrafluoroethene (PTFE, or Teflon). [ citation needed ] Nature produces massive amounts of chloromethane and bromomethane. Most concern focuses on anthropogenic sources, which are potential toxins, even carcinogens. Similarly, great interest has been shown in remediation of man-made halocarbons, such as those produced on a large scale as dry cleaning fluids.
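The dehydrohalogenation and Grignard chemistry described above can be sketched as follows (standard equations supplied for illustration, not taken from the article):

```latex
% Dehydrohalogenation of bromoethane to ethene with sodium hydroxide:
\mathrm{CH_3CH_2Br + NaOH \longrightarrow CH_2{=}CH_2 + NaBr + H_2O}
% Grignard reagent formation from a haloalkane and magnesium:
\mathrm{R{-}X + Mg \longrightarrow R{-}Mg{-}X}
```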
Volatile halocarbons degrade photochemically because the carbon-halogen bond can be labile. Some microorganisms dehalogenate halocarbons. While this behavior is intriguing, the rates of remediation are generally very slow. [ 8 ] As alkylating agents , haloalkanes are potential carcinogens. The more reactive members of this large class of compounds generally pose greater risk, e.g. carbon tetrachloride . [ 9 ]
https://en.wikipedia.org/wiki/Haloalkane
A halochromic material is a material which changes colour when pH changes occur. [ 1 ] The term ' chromic ' is applied to materials that can change colour reversibly in the presence of an external factor; in this case, the factor is pH. One class of compounds with this property is the pH indicators . Halochromic substances are suited for use in environments where pH changes occur frequently, or places where changes in pH are extreme. Halochromic substances detect alterations in the acidity of substances, such as the detection of corrosion in metals. Halochromic substances may be used as indicators to determine the pH of solutions of unknown pH. [ 1 ] The colour obtained is compared with the colour obtained when the indicator is mixed with solutions of known pH. The pH of the unknown solution can then be estimated. Obvious disadvantages of this method include its dependency on the colour sensitivity of the human eye, and that unknown solutions that are already coloured cannot be used. The colour change of halochromic substances occurs when the chemical binds to existing hydrogen and hydroxide ions in solution. [ 1 ] Such bonds result in changes in the conjugated systems of the molecule, or the range of electron flow. This alters the wavelength of light absorbed, which in turn results in a visible change of colour. Halochromic substances do not display a full range of colour for a full range of pH because, beyond certain acidities, the conjugated system will not change. The various shades result from different concentrations of halochromic molecules with different conjugated systems.
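The pH-dependent colour state described above can be quantified with the standard acid-base equilibrium treatment; the sketch below is an assumption-laden illustration (the Henderson-Hasselbalch relation is standard chemistry, but the pKa value and function name are purely hypothetical, not from the article):

```python
# Sketch: fraction of a halochromic indicator in its deprotonated (base-colour)
# form, from the Henderson-Hasselbalch relation f = 1 / (1 + 10**(pKa - pH)).
# The value pKa = 9.2 is illustrative only.
def base_form_fraction(pH, pKa):
    """Fraction of indicator molecules in the deprotonated form at a given pH."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (7.0, 9.2, 11.0):
    print(pH, round(base_form_fraction(pH, pKa=9.2), 3))
```

At pH equal to the pKa, half the molecules are in each form, which is where the perceived colour transition is sharpest.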
https://en.wikipedia.org/wiki/Halochromism
Halocins are bacteriocins produced by halophilic Archaea [ 1 ] [ 2 ] and a type of archaeocin . [ 3 ] Since their discovery in 1982, [ 4 ] halocins have been demonstrated to be diverse in ways similar to the other bacteriocins. [ 5 ] [ 6 ] [ 7 ] Some are large proteins, some small polypeptides (microhalocins). This diversity is surprising for a number of reasons, including the original presumption that Archaea, particularly extremophiles, live at relatively low densities under conditions that may not require antagonistic behavior. The genetics, [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] mechanism of production [ 13 ] [ 14 ] and mechanism of action [ 15 ] [ 16 ] [ 17 ] [ 18 ] of the halocins have been studied, but not exhaustively. The ecology of the halocins [ 19 ] [ 20 ] [ 21 ] has been investigated as well. One interesting observation is that the halocins are active across the major divisions of the Archaea, [ 22 ] thus violating the dogma that they should be most effective against the most closely related strains. Halocins are particularly interesting because of the way the pore-forming bacteriocins have been used to probe cell membrane structure and the production and maintenance of energetic ion gradients across the membrane. The halophiles live at such extreme ion concentrations that they represent a set of unusual solutions and adaptations with regard to their energetic gradients. The ability to use native halocins to study these gradients provides a motivation for their characterization. They may have a role in human medicine. [ 23 ] [ 24 ] They are also found in many of the type species that are used to learn about halophiles in general. [ 25 ] Like other bacteriocins, the halocins are under investigation as antimicrobials for use in controlling spoilage during industrial processes; in this case, leather production. [ 26 ] Because the literature about halocins is relatively circumscribed, it can be exhaustively cited.
Several times they have been addressed in book chapters. [ 3 ] [ 27 ] [ 28 ] [ 29 ] The BACTIBASE [ 30 ] [ 31 ] database is an open-access database for bacteriocins, including halocins.
https://en.wikipedia.org/wiki/Halocin
In chemistry , a halogen bond ( XB or HaB [ 1 ] ) occurs when there is evidence of a net attractive interaction between an electrophilic region associated with a halogen atom in a molecular entity and a nucleophilic region in another, or the same, molecular entity. [ 2 ] Like a hydrogen bond , the result is not a formal chemical bond , but rather a strong electrostatic attraction. [ 3 ] [ 4 ] Mathematically, the interaction can be decomposed into two terms: one describing an electrostatic, orbital-mixing charge transfer and another describing electron-cloud dispersion. Halogen bonds find application in supramolecular chemistry ; [ 3 ] [ 4 ] [ 5 ] drug design and biochemistry ; [ 6 ] [ 7 ] crystal engineering [ 7 ] and liquid crystals ; [ 3 ] and organic catalysis . [ 7 ] Halogen bonds occur when a halogen atom is electrostatically attracted to a partial negative charge . Necessarily, the halogen atom must be covalently bonded through a σ-bond ; the electron concentration associated with that bond leaves a positively charged "hole" on the antipodal side. [ 8 ] Although all halogens can theoretically participate in halogen bonds, the σ-hole shrinks if the electron cloud in question polarizes poorly or the halogen is so electronegative as to polarize the associated σ-bond. [ 3 ] [ 9 ] Consequently, halogen-bond propensity follows the trend [ 10 ] [ Note 1 ] F < Cl < Br < I. There is no clear distinction between halogen bonds and expanded octet partial bonds ; what is superficially a halogen bond may well turn out to be a full bond in an unexpectedly relevant resonance structure . [ 11 ] [ 12 ] [ 13 ] [ 14 ] A halogen bond is almost collinear with the halogen atom's other, conventional bond, but the geometry of the electron-charge donor may be much more complex. Anions are usually better halogen-bond acceptors than neutral species: the more dissociated an ion pair is, the stronger the halogen bond formed with the anion.
[ 17 ] A parallel can easily be drawn between halogen bonding and hydrogen bonding . Both interactions revolve around an electron donor / electron acceptor relationship, between a halogen-like atom and an electron-dense one. But halogen bonding is both much stronger and more sensitive to direction than hydrogen bonding. A typical hydrogen bond has a formation energy of 20 kJ/mol ; known halogen bond energies range from 10–200 kJ/mol. [ 16 ] The σ-hole concept readily extends to pnictogen, chalcogen and aerogen bonds, corresponding to atoms of Groups 15 , 16 and 18 (respectively). [ 18 ] In 1814, Jean-Jacques Colin discovered (to his surprise) that a mixture of dry gaseous ammonia and iodine formed a shiny, metallic-appearing liquid. Frederick Guthrie established the precise composition of the resulting I 2 ···NH 3 complex fifty years later, but the physical processes underlying the molecular interaction remained mysterious until the development of Robert S. Mulliken 's theory of inner-sphere and outer-sphere interactions. [ 19 ] In Mulliken's categorization, the intermolecular interactions associated with small partial charges affect only the "inner sphere" of an atom's electron distribution; the electron redistribution associated with Lewis adducts affects the "outer sphere" instead. [ 20 ] Then, in 1954, Odd Hassel fruitfully applied the distinction to rationalize the X-ray diffraction patterns associated with a mixture of 1,4-dioxane and bromine. [ 21 ] The patterns suggested that only 2.71 Å separated the dioxane oxygen atoms and bromine atoms, much closer than the sum (3.35 Å) of the atoms' van der Waals radii; and that the angle between the O−Br and Br−Br bonds was about 180°. From these facts, Hassel concluded that halogen atoms are directly linked to electron-pair donors, along a bond direction that coincides with the axes of the lone-pair orbitals of the electron-pair donor molecule.
[ 8 ] For this work, Hassel was awarded the 1969 Nobel Prize in Chemistry . [ 22 ] Dumas and coworkers first coined the term "halogen bond" in 1978, during their investigations into complexes of CCl 4 , CBr 4 , SiCl 4 , and SiBr 4 with tetrahydrofuran , tetrahydropyran , pyridine , anisole , and di-n-butyl ether in organic solvents. [ 23 ] However, it was not until the mid-1990s that the nature and applications of the halogen bond began to be intensively studied. Through systematic and extensive microwave spectroscopy of gas-phase halogen bond adducts, Legon and coworkers drew attention to the similarities between halogen-bonding and better-known hydrogen-bonding interactions. [ 24 ] In 2007, computational calculations by Politzer and Murray showed that an anisotropic electron density distribution around the halogen nucleus — the "σ-hole" [ 9 ] — underlay the high directionality of the halogen bond. [ 25 ] This hole was then experimentally observed using Kelvin probe force microscopy . [ 26 ] [ 27 ] In 2020, Kellett et al. showed that halogen bonds also have a π-covalent character similar to metal coordination bonds . [ 28 ] In August 2023, the "π-hole" was also observed experimentally. [ 29 ] [ 30 ] The strength and directionality of halogen bonds are a key tool in the discipline of crystal engineering , which attempts to shape crystal structures through close control of intermolecular interactions. [ 32 ] Halogen bonds can stabilize copolymers [ 33 ] [ 34 ] or induce mesomorphism in otherwise isotropic liquids . [ 35 ] Indeed, halogen bond-induced liquid crystalline phases are known in both alkoxystilbazoles [ 35 ] and silsesquioxanes (pictured). [ 31 ] Alternatively, the steric sensitivity of halogen bonds can cause bulky molecules to crystallize into porous structures ; in one notable case, halogen bonds between iodine and aromatic π-orbitals caused molecules to crystallize into a pattern that was nearly 40% void .
[ 36 ] Conjugated polymers offer the tantalizing possibility of organic molecules with a manipulable electronic band structure , but current methods for production have an uncontrolled topology . Sun, Lauher, and Goroff discovered that certain amides ensure a linear polymerization of poly(diiododiacetylene) . The underlying mechanism is a self-organization of the amides via hydrogen bonds that then transfers to the diiododiacetylene monomers via halogen bonds. Although pure diiododiacetylene crystals do not polymerize spontaneously, the halogen-bond induced organization is sufficiently strong that the cocrystals do spontaneously polymerize. [ 37 ] Most biological macromolecules contain few or no halogen atoms. But when molecules do contain halogens, halogen bonds are often essential to understanding molecular conformation . Computational studies suggest that known halogenated nucleobases form halogen bonds with oxygen , nitrogen , or sulfur in vitro . Interestingly, oxygen atoms typically do not attract halogens with their lone pairs , but rather the π electrons in the carbonyl or amide group . [ 6 ] Halogen bonding can be significant in drug design as well. For example, inhibitor IDD 594 binds to human aldose reductase through a bromine halogen bond, as shown in the figure. The molecules fail to bind to each other if similar aldehyde reductase replaces the enzyme, or chlorine replaces the drug halogen, because the variant geometries inhibit the halogen bond. [ 38 ]
https://en.wikipedia.org/wiki/Halogen_bond
In chemistry , halogenation is a chemical reaction which introduces one or more halogens into a chemical compound . Halide -containing compounds are pervasive, making this type of transformation important, e.g. in the production of polymers and drugs. [ 1 ] This kind of conversion is in fact so common that a comprehensive overview is challenging. This article mainly deals with halogenation using elemental halogens ( F 2 , Cl 2 , Br 2 , I 2 ). Halides are also commonly introduced using salts of the halides and halogen acids. [ clarification needed ] Many specialized reagents exist for introducing halogens into diverse substrates , e.g. thionyl chloride . Several pathways exist for the halogenation of organic compounds, including free radical halogenation , ketone halogenation , electrophilic halogenation , and halogen addition reaction . The nature of the substrate determines the pathway. The facility of halogenation is influenced by the halogen. Fluorine and chlorine are more electrophilic and are more aggressive halogenating agents. Bromine is a weaker halogenating agent than both fluorine and chlorine, while iodine is the least reactive of them all. The facility of dehydrohalogenation follows the reverse trend: iodine is most easily removed from organic compounds, and organofluorine compounds are highly stable. Halogenation of saturated hydrocarbons is a substitution reaction . The reaction typically involves free radical pathways. The regiochemistry of the halogenation of alkanes is largely determined by the relative weakness of the C–H bonds . This trend is reflected by the faster reaction at tertiary and secondary positions. Free radical chlorination is used for the industrial production of some solvents. [ 2 ] Naturally-occurring organobromine compounds are usually produced by a free radical pathway catalyzed by the enzyme bromoperoxidase . The reaction requires bromide in combination with oxygen as an oxidant .
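The free-radical substitution described above proceeds by a chain mechanism; a standard sketch for the chlorination of methane (methane chosen here as the simplest substrate, for illustration only) is:

```latex
% Initiation: homolysis of chlorine under light or heat
\mathrm{Cl_2 \xrightarrow{h\nu} 2\,Cl^{\bullet}}
% Propagation: hydrogen abstraction, then halogen transfer
\mathrm{Cl^{\bullet} + CH_4 \longrightarrow HCl + CH_3^{\bullet}}
\mathrm{CH_3^{\bullet} + Cl_2 \longrightarrow CH_3Cl + Cl^{\bullet}}
```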
The oceans are estimated to release 1–2 million tons of bromoform and 56,000 tons [ which? ] of bromomethane annually. [ 3 ] The iodoform reaction , which involves degradation of methyl ketones , proceeds by free radical iodination. Because of its extreme reactivity, fluorine ( F 2 ) represents a special category with respect to halogenation. Most organic compounds, saturated or otherwise, burn upon contact with F 2 , ultimately yielding carbon tetrafluoride . By contrast, the heavier halogens are far less reactive toward saturated hydrocarbons. Highly specialised conditions and apparatus are required for fluorinations with elemental fluorine . Commonly, fluorination reagents are employed instead of F 2 . Such reagents include cobalt trifluoride , chlorine trifluoride , and iodine pentafluoride . [ 4 ] The method of electrochemical fluorination is used commercially for the production of perfluorinated compounds . It generates small amounts of elemental fluorine in situ from hydrogen fluoride . The method avoids the hazards of handling fluorine gas. Many commercially important organic compounds are fluorinated using this technology. Unsaturated compounds , especially alkenes and alkynes , add halogens: RCH=CH 2 + X 2 → RCHX–CH 2 X. In oxychlorination , the combination of hydrogen chloride and oxygen serves as the equivalent of chlorine , as illustrated by this route to 1,2-dichloroethane : CH 2 =CH 2 + 2 HCl + 1/2 O 2 → ClCH 2 CH 2 Cl + H 2 O. The addition of halogens to alkenes proceeds via intermediate halonium ions . In special cases, such intermediates have been isolated. [ 5 ] Bromination is more selective than chlorination because the reaction is less exothermic . Illustrative is the synthesis of the anesthetic halothane from trichloroethylene , in which hydrogen fluoride addition is followed by bromination: CF 3 CH 2 Cl + Br 2 → CF 3 CHBrCl + HBr. [ 6 ] Iodination and bromination can be effected by the addition of iodine and bromine to alkenes. The reaction, which conveniently proceeds with the discharge of the color of I 2 and Br 2 , is the basis of the analytical method .
The iodine number and bromine number are measures of the degree of unsaturation for fats and other organic compounds. Aromatic compounds are subject to electrophilic halogenation : This kind of reaction typically works well for chlorine and bromine . Often a Lewis acidic catalyst is used, such as ferric chloride . [ 7 ] Many detailed procedures are available. [ 8 ] [ 9 ] Because fluorine is so reactive , other methods, such as the Balz–Schiemann reaction , are used to prepare fluorinated aromatic compounds. In the Hunsdiecker reaction , carboxylic acids are converted to organic halides whose carbon chain is shortened by one carbon atom relative to that of the parent carboxylic acid. The carboxylic acid is first converted to its silver salt, which is then oxidized with halogen : Many organometallic compounds react with halogens to give the organic halide: All elements aside from argon , neon , and helium form fluorides by direct reaction with fluorine . Chlorine is slightly more selective, but still reacts with most metals and heavier nonmetals . Following the usual trend, bromine is less reactive and iodine least of all. Of the many reactions possible, illustrative is the formation of gold(III) chloride by the chlorination of gold . The chlorination of metals is usually not very important industrially since the chlorides are more easily made from the oxides and hydrogen chloride . Chlorination of inorganic compounds is practiced on a relatively large scale chiefly for the production of phosphorus trichloride and disulfur dichloride . [ 10 ]
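The iodine number mentioned above is conventionally determined by titration. The following sketch is illustrative only: the `iodine_value` helper and the titration figures are hypothetical, but the factor 12.69 is the standard conversion (equivalent mass of iodine, 126.9 g/mol, divided by 10) used to express the result in grams of I 2 absorbed per 100 g of sample.

```python
# Iodine value of a fat, computed from a classical Wijs-type titration:
# a blank and a sample are titrated with sodium thiosulfate, and the
# difference measures how much iodine the unsaturated sample absorbed.

def iodine_value(blank_ml: float, sample_ml: float,
                 thiosulfate_normality: float, sample_mass_g: float) -> float:
    """Iodine value = (B - S) * N * 12.69 / W, in g I2 per 100 g of sample."""
    return (blank_ml - sample_ml) * thiosulfate_normality * 12.69 / sample_mass_g

# Example with made-up titration figures:
iv = iodine_value(blank_ml=25.0, sample_ml=5.0,
                  thiosulfate_normality=0.1, sample_mass_g=0.25)
print(round(iv, 1))  # 101.5 -- a value typical of an unsaturated oil
```

A higher iodine value indicates a greater degree of unsaturation, since each C=C double bond consumes one equivalent of halogen.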
https://en.wikipedia.org/wiki/Halogenation
In organic chemistry , a halohydrin (also a haloalcohol or β-halo alcohol ) is a functional group in which a halogen and a hydroxyl are bonded to adjacent carbon atoms, which otherwise bear only hydrogen or hydrocarbyl groups (e.g. 2-chloroethanol , 3-chloropropane-1,2-diol ). [ 1 ] The term applies only to saturated motifs, so compounds such as 2-chlorophenol would not normally be considered halohydrins. Megatons of some chlorohydrins, e.g. propylene chlorohydrin , are produced annually as precursors to polymers. Halohydrins may be categorized as chlorohydrins, bromohydrins, fluorohydrins or iodohydrins depending on the halogen present. Halohydrins are usually prepared by treatment of an alkene with a halogen, in the presence of water. The reaction is a form of electrophilic addition , with the halogen acting as electrophile. [ 2 ] In that regard, it resembles the halogen addition reaction and proceeds with anti addition , leaving the newly added X and OH groups in a trans configuration . The chemical equation for the conversion of ethylene to ethylene chlorohydrin is: When bromination is desired, N -bromosuccinimide (NBS) can be preferable to bromine because fewer side-products are produced. Halohydrins may also be prepared from the reaction of an epoxide with a hydrohalic acid , [ 3 ] or a metal halide. [ 4 ] This reaction is performed on an industrial scale for the production of chlorohydrin precursors to two important epoxides, epichlorohydrin and propylene oxide . At one time, 2-chloroethanol was produced on a large scale as a precursor to ethylene oxide , but the latter is now prepared by the direct oxidation of ethylene. [ 5 ] 2-Chlorocarboxylic acids can be reduced with lithium aluminium hydride to the 2-chloroalcohols. The required 2-chlorocarboxylic acids are obtained in a variety of ways, including the Hell–Volhard–Zelinsky halogenation .
2-Chloropropionic acid is produced by chlorination of propionyl chloride followed by hydrolysis of the 2-chloropropionyl chloride. Enantiomerically pure ( S )-2-chloropropionic acid and several related compounds can be prepared from amino acids via diazotization . [ 6 ] In the presence of a base , halohydrins undergo an internal S N 2 reaction to form epoxides . Industrially, the base is calcium hydroxide , whereas in the laboratory, potassium hydroxide is often used. This reaction is the reverse of the formation reaction from an epoxide and can be considered a variant of the Williamson ether synthesis . Most of the world's supply of propylene oxide arises via this route. [ 7 ] Such reactions can form the basis of more complicated processes, for example epoxide formation is one of the key steps in the Darzens reaction . Compounds such as 2,2,2-trichloroethanol , which contain multiple geminal halogens adjacent to a hydroxyl group, may be considered halohydrins (although, strictly speaking, they fail the IUPAC definition) as they possess similar chemistry. In particular, they also undergo intramolecular cyclisation to form dihaloepoxy groups. These species are both highly reactive and synthetically useful, forming the basis of the Jocic–Reeve reaction , Bargellini reaction and Corey–Link reaction . [ 8 ] As with any functional group, the hazards of halohydrins are difficult to generalize as they may form part of an almost limitless series of compounds, with each structure having different pharmacology. In general, simpler low molecular weight compounds are often toxic and carcinogenic (e.g. 2-chloroethanol , 3-MCPD ) by virtue of being alkylating agents . This reactivity can be put to good use, for instance in the anti-cancer drug mitobronitol . A number of synthetic corticosteroids exist bearing a fluorohydrin motif ( triamcinolone , dexamethasone ).
Despite their rather suggestive names, epichlorohydrin and sulfuric chlorohydrin are not halohydrins, although the former is most commonly produced using a chlorohydrin intermediate.
https://en.wikipedia.org/wiki/Halohydrin
Halomethane compounds are derivatives of methane ( CH 4 ) with one or more of the hydrogen atoms replaced with halogen atoms ( F , Cl , Br , or I ). Halomethanes are both naturally occurring, especially in marine environments, and human-made, most notably as refrigerants, solvents, propellants, and fumigants. Many, including the chlorofluorocarbons , have attracted wide attention because, when exposed to the ultraviolet light found at high altitudes, they become active and destroy the Earth's protective ozone layer . Like methane itself, halomethanes are tetrahedral molecules. The halogen atoms differ greatly in size and charge from hydrogen and from each other. Consequently, most halomethanes deviate from the perfect tetrahedral symmetry of methane. [ 1 ] The physical properties of halomethanes depend on the number and identity of the halogen atoms in the compound. In general, halomethanes are volatile but less so than methane because of the polarizability of the halides. The polarizability of the halides and the polarity of the molecules make them useful as solvents. The halomethanes are far less flammable than methane. Broadly speaking, reactivity of the compounds is greatest for the iodides and lowest for the fluorides. The halomethanes are produced on an industrial scale from abundant precursors such as natural gas or methanol , and from halogens or halides . They are usually prepared by one of three methods. [ 2 ] This method is useful for the production of CH 4− n Cl n ( n = 1, 2, 3, or 4). The main problems with this method are that it cogenerates HCl and it produces mixtures of different products. Using CH 4 in large excess generates primarily CH 3 Cl and using Cl 2 in large excess generates primarily CCl 4 , but mixtures of other products will still be present. Traces of halomethanes in the atmosphere arise through the introduction of other non-natural, industrial materials.
Many marine organisms biosynthesize halomethanes, especially bromine-containing compounds. [ 3 ] Small amounts of chloromethanes arise from the interaction of chlorine sources with various carbon compounds. The biosyntheses of these halomethanes are catalyzed by the bromoperoxidase and chloroperoxidase enzymes, respectively. An idealized equation is: Halons are usually defined as hydrocarbons where the hydrogen atoms have been replaced by bromine, along with other halogens. [ 4 ] They are referred to by a system of code numbers similar to (but simpler than) the system used for freons. The first digit specifies the number of carbon atoms in the molecule, the second is the number of fluorine atoms, the third is the number of chlorine atoms, and the fourth is the number of bromine atoms. If the number includes a fifth digit, it indicates the number of iodine atoms (though iodine in halon is rare). Any bonds not taken up by halogen atoms are then allocated to hydrogen atoms. For example, consider Halon 1211. This halon has number 1211 in its name, which indicates that it has 1 carbon atom, 2 fluorine atoms, 1 chlorine atom, and 1 bromine atom. A single carbon only has four bonds, all of which are taken by the halogen atoms, so there is no hydrogen. Thus its formula is CF 2 ClBr , hence its IUPAC name is bromochlorodifluoromethane. The refrigerant naming system is mainly used for fluorinated and chlorinated short alkanes used as refrigerants. In the United States, the standard is specified in ANSI/ASHRAE Standard 34–1992, with additional annual supplements. [ 5 ] The specified ANSI/ASHRAE prefixes were FC (fluorocarbon) or R (refrigerant), but today most are prefixed by a more specific classification: The decoding system for CFC-01234a is: Other coding systems are in use as well. Hydrofluorocarbons (HFCs) contain no chlorine. They are composed entirely of carbon, hydrogen, and fluorine. They have no known effects on the ozone layer; fluorine itself is not ozone-toxic.
[ 6 ] [ 7 ] However, HFCs and perfluorocarbons (PFCs) are greenhouse gases , which cause global warming . Two groups of haloalkanes, hydrofluorocarbons (HFCs) and perfluorocarbons , are targets of the Kyoto Protocol . [ 8 ] Allan Thornton, President of the Environmental Investigation Agency , a non-governmental, environmental watchdog, says that HFCs are up to 12,500 times as potent as carbon dioxide in global warming. [ 9 ] The higher global warming potential has two causes: HFCs remain in the atmosphere for long periods of time, and they have more chemical bonds than CO 2 , which means that they are able to absorb more infrared radiation per molecule than carbon dioxide. Wealthy countries are clamping down on these gases. Thornton says that many countries are needlessly producing these chemicals just to get the carbon credits. Thus, as a result of carbon trading rules under the Kyoto Protocol, nearly half the credits from developing countries are from HFCs, with China scoring billions of dollars from catching and destroying HFCs that would be in the atmosphere as industrial byproducts. [ 10 ] Most permutations of hydrogen, fluorine, chlorine, bromine, and iodine on one carbon atom have been evaluated experimentally. ( Freon is a trade name for a group of chlorofluorocarbons used primarily as a refrigerant . The main chemical used under this trademark is dichlorodifluoromethane. The word Freon is a registered trademark belonging to DuPont .) Because they have many applications and are easily prepared, halomethanes have been of intense commercial interest. Dichloromethane is the most important halomethane-based solvent. Its volatility, low flammability, and ability to dissolve a wide range of organic compounds makes this colorless liquid a useful solvent. [ 2 ] It is widely used as a paint stripper and a degreaser . In the food industry , it was previously used to decaffeinate coffee and tea as well as to prepare extracts of hops and other flavorings .
[ 11 ] Its volatility has led to its use as an aerosol spray propellant and as a blowing agent for polyurethane foams . One major use of CFCs has been as propellants of aerosols , including metered-dose inhalers for drugs used to treat asthma . The conversion of these devices and treatments from CFC to propellants that do not deplete the ozone layer is almost complete. Production and import are now banned in the United States. At high temperatures, halons decompose to release halogen atoms that combine readily with active hydrogen atoms, quenching flame propagation reactions even when adequate fuel, oxygen, and heat remain. The chemical reaction in a flame proceeds as a free radical chain reaction ; by sequestering the radicals which propagate the reaction, halons are able to halt the fire at much lower concentrations than are required by fire suppressants using the more traditional methods of cooling, oxygen deprivation, or fuel dilution. As of 2023 , due to ozone depletion problems, halon fire extinguishers are largely banned in some countries and alternatives are being deployed by the US military. [ 12 ] Halon 1301 total flooding systems are typically used at concentrations no higher than 7% by volume in air, and can suppress many fires at 2.9% v/v. By contrast, carbon dioxide fire suppression flood systems operate from 34% concentration by volume (surface-only combustion of liquid fuels) up to 75% (dust traps). Carbon dioxide can cause severe distress at concentrations of 3–6%, and has caused death by respiratory paralysis in a few minutes at 10% concentration. Halon 1301 causes only slight giddiness at its effective concentration of 5%, and even at 15% those exposed remain conscious but impaired and suffer no long-term effects. (Experimental animals have also been exposed to 2% concentrations of Halon 1301 for 30 hours per week for 4 months, with no discernible health effects.
) Halon 1211 also has low toxicity, although it is more toxic than Halon 1301, and is thus considered unsuitable for flooding systems. However, Halon 1301 fire suppression is not completely non-toxic; very high temperature flame, or contact with red-hot metal, can cause decomposition of Halon 1301 to toxic byproducts. The presence of such byproducts is readily detected because they include hydrobromic acid and hydrofluoric acid , which are intensely irritating. Halons are very effective on Class A (organic solids), B (flammable liquids and gases), and C (electrical) fires, but they are unsuitable for Class D (metal) fires, as they will not only produce toxic gas and fail to halt the fire, but in some cases pose a risk of explosion. Halons can be used on Class K (kitchen oils and greases) fires, but offer no advantages over specialised foams. Halon 1301 is common in total flooding systems. In these systems, banks of halon cylinders are kept pressurised to about 4 MPa (600 psi ) with compressed nitrogen , and a fixed piping network leads to the protected enclosure. On triggering, the entire measured contents of one or more cylinders are discharged into the enclosure in a few seconds, through nozzles designed to ensure uniform mixing throughout the room. The quantity dumped is pre-calculated to achieve the desired concentration, typically 3–7% v/v. This level is maintained for some time, typically with a minimum of ten minutes and sometimes up to a twenty-minute "soak" time, to ensure all items have cooled so reignition is unlikely to occur, then the air in the enclosure is purged, generally via a fixed purge system that is activated by the proper authorities. During this time the enclosure may be entered by persons wearing SCBA . (There exists a common myth that this is because halon is highly toxic; in fact, it is because it can cause giddiness and mildly impaired perception, and due to the risk of combustion byproducts.)
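The pre-calculation of the quantity dumped can be sketched as follows. The sizing relation W = (V/s) · C/(100 − C) is the standard total-flooding form, but the specific-volume constants used here are approximate assumptions for illustration, not design values from this article; real installations are sized from published design tables.

```python
# Rough total-flooding sizing for a Halon 1301 system:
#   W = (V / s) * (C / (100 - C))
# W: agent mass (kg), V: enclosure volume (m^3),
# C: design concentration (% v/v),
# s: specific volume of the agent vapour (m^3/kg) at ambient temperature.

def halon1301_mass(volume_m3: float, conc_pct: float, temp_c: float = 20.0) -> float:
    # Approximate linear fit for the specific volume of Halon 1301 vapour
    # (assumed constants, for illustration only).
    s = 0.14781 + 0.000567 * temp_c
    return (volume_m3 / s) * conc_pct / (100.0 - conc_pct)

# A 100 m^3 enclosure flooded to the common 5% design concentration
# needs on the order of 33 kg of agent:
print(round(halon1301_mass(100.0, 5.0), 1))
```

The C/(100 − C) term, rather than simply C/100, reflects that the discharged vapour adds to the enclosure's gas volume rather than replacing air one-for-one.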
Flooding systems may be manually operated or automatically triggered by a VESDA or other automatic detection system. In the latter case, a warning siren and strobe lamp will first be activated for a few seconds to warn personnel to evacuate the area. The rapid discharge of halon and consequent rapid cooling fills the air with fog , and is accompanied by a loud, disorienting noise. Halon 1301 is also used in the inerting system of the F-16 fighter to prevent the fuel vapors in the fuel tanks from becoming explosive; when the aircraft enters an area with the possibility of attack, Halon 1301 is injected into the fuel tanks for one-time use. Due to ozone depletion, trifluoroiodomethane ( CF 3 I ) is being considered as an alternative. [ 13 ] Halon 1211 is typically used in hand-held extinguishers, in which a stream of liquid halon is directed at a smaller fire by a user. The stream evaporates under reduced pressure, producing strong local cooling, as well as a high concentration of halon in the immediate vicinity of the fire. In this mode, fire is extinguished by cooling and oxygen deprivation at the core of the fire, as well as radical quenching over a larger area. After fire suppression, the halon diffuses, leaving no residue. Chloromethane and bromomethane are used to introduce methyl groups in organic synthesis . Chlorodifluoromethane is the main precursor of tetrafluoroethylene , which is the monomeric precursor to Teflon . [ 1 ] Haloalkanes are diverse in their properties, making generalizations difficult. Few are acutely toxic, but many pose risks from prolonged exposure. Some problematic aspects include carcinogenicity and liver damage (e.g., carbon tetrachloride). Under certain combustion conditions, chloromethanes convert to phosgene , which is highly toxic.
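The halon code-number scheme described earlier (digits giving the counts of carbon, fluorine, chlorine, and bromine atoms, optionally iodine, with any leftover bonds allocated to hydrogen) lends itself to mechanical decoding. A minimal sketch, with a hypothetical `decode_halon` helper:

```python
# Decode a halon code number into a molecular formula, following the
# digit convention: carbon, fluorine, chlorine, bromine [, iodine].

def decode_halon(code: str) -> str:
    digits = [int(d) for d in code]
    c, f, cl, br = digits[:4]
    i = digits[4] if len(digits) > 4 else 0
    # Remaining valences of the saturated C skeleton go to hydrogen:
    h = (2 * c + 2) - (f + cl + br + i)
    parts = [("C", c), ("H", h), ("F", f), ("Cl", cl), ("Br", br), ("I", i)]
    return "".join(sym + (str(n) if n > 1 else "") for sym, n in parts if n > 0)

print(decode_halon("1211"))  # CF2ClBr (bromochlorodifluoromethane)
print(decode_halon("1301"))  # CF3Br   (bromotrifluoromethane)
```

For Halon 1211 the four halogen atoms exhaust the single carbon's four bonds, so no hydrogen remains, reproducing the CF 2 ClBr formula worked out in the text.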
https://en.wikipedia.org/wiki/Halomethane
Organohalide respiration (OHR) (previously named halorespiration or dehalorespiration ) is the use of halogenated compounds as terminal electron acceptors in anaerobic respiration . [ 1 ] [ 2 ] [ 3 ] Organohalide respiration can play a part in microbial biodegradation . The most common substrates are chlorinated aliphatics ( PCE , TCE , chloroform ) and chlorinated phenols. Organohalide-respiring bacteria are highly diverse. This trait is found in some Campylobacterota , Thermodesulfobacteriota , Chloroflexota (green nonsulfur bacteria), low G+C gram positive Clostridia , [ 4 ] and ultramicrobacteria. [ 5 ] The process of organohalide respiration uses reductive dehalogenation to produce energy that can be used by the respiring microorganism to carry out its growth and metabolism. [ 6 ] Halogenated organic compounds are used as the terminal electron acceptor , which results in their dehalogenation. [ 6 ] Reductive dehalogenation is the process by which this occurs. [ 6 ] It involves the reduction of halogenated compounds by removing the halogen substituents, while simultaneously adding electrons to the compound. [ 7 ] Hydrogenolysis and vicinal reduction are the two processes of this mechanism that have been identified. [ 7 ] In both processes, the removed halogen substituents are released as anions. [ 7 ] Reductive dehalogenation is catalyzed by reductive dehalogenases , which are membrane-associated enzymes. [ 6 ] [ 8 ] [ 3 ] A number of hydrogenases, not only membrane-associated but also cytoplasmic, in some cases as parts of protein complexes, are predicted to play roles in the organohalide respiration process. [ 9 ] Most of these enzymes contain iron-sulfur (Fe-S) clusters, and a corrinoid cofactor at their active sites. [ 6 ] Although the exact mechanism is unknown, research suggests that these two components of the enzyme may be involved in the reduction.
[ 6 ] Common substrates that are used as terminal electron acceptors in organohalide respiration are organochloride pesticides, aryl halides and alkyl solvents. [ 7 ] Many of these are persistent pollutants that can only be degraded anaerobically by organohalide respiration, either partially or completely. [ 6 ] [ 7 ] Trichloroethylene (TCE) and tetrachloroethylene (PCE) are two examples of such pollutants, and their degradation has been a focus of research. [ 6 ] [ 7 ] [ 10 ] PCE is a chlorinated solvent that is widely used in dry cleaning, degreasing machinery and other applications. [ 6 ] [ 7 ] It remains a common contaminant of groundwater. [ 6 ] [ 7 ] Bacteria that are capable of completely degrading PCE to ethene , a gaseous chemical, have been isolated. [ 10 ] They have been found to belong to the genus Dehalococcoides and to use H 2 as their electron donor . [ 10 ] The process of organohalide respiration has been applied to in situ bioremediation of PCE and TCE in the past. [ 6 ] [ 8 ] For example, enhanced reductive dechlorination has been used to treat contaminated groundwater by introducing electron donors and dehalorespiring bacteria into the contaminated site, to create conditions that stimulate bacterial growth and organohalide respiration. [ 8 ] In enhanced reductive dechlorination, the pollutants act as the electron acceptors and are completely reduced to ultimately produce ethene in a series of reactions. [ 8 ] An ecologically significant aspect of bacterial organohalide respiration is the reduction of the two anthropogenic pollutants tetrachloroethylene (PCE) and trichloroethylene (TCE). [ 11 ] Their presence as environmental pollutants arose from their common industrial use as metal-degreasing agents from the 1920s to the 1970s.
[ 12 ] These xenobiotic compounds tend to form partially insoluble layers called dense non-aqueous phase liquids (DNAPLs) at the bottom of groundwater aquifers , which solubilize in a slow, reservoir-like manner, making TCE and PCE among the most common groundwater pollutants. [ 13 ] A commonly used strategy for the removal of TCE and PCE from groundwater is the use of bioremediation via enhanced reductive dechlorination (ERD). [ 14 ] ERD involves in-situ injections of dehalorespiring bacteria, along with fermentable organic substrates serving as electron donors , while the two pollutants, TCE and PCE, act as the electron acceptors . [ 14 ] This facilitates the sequential dechlorination of PCE and TCE into the noxious intermediates cis -1,2-dichloroethylene (DCE) and vinyl chloride (VC), which then serve as electron acceptors for the full dechlorination into ethylene . [ 14 ] A wide array of bacteria across different genera have the capacity to partially dechlorinate PCE and TCE into cis -DCE and VC. [ 14 ] One such example of this is the Magnetospirillum bacterium, strain MS-1, which can reduce PCE into cis -DCE under aerobic conditions. [ 15 ] However, these daughter substrates have higher toxicity profiles than their parent compounds. [ 14 ] As such, effective dechlorination of cis -DCE and VC into innocuous ethene is crucial for bioremediation of PCE and TCE-contaminated aquifers. [ 14 ] Currently, bacteria of the Dehalococcoides genus are the only known organisms that can fully dechlorinate PCE into ethylene. This is due to their specific transmembrane reductive dehalogenases (RDases) that metabolize the chlorine atoms on the xenobiotic pollutants for cellular energy. [ 16 ] In particular, Dehalococcoides isolates VS and BAV1 encode vinyl chloride RDases, which metabolize VC into innocuous ethene, making them required species in ERD systems used in bioremediation of PCE and TCE. [ 16 ]
https://en.wikipedia.org/wiki/Halorespiration
Halothane , sold under the brand name Fluothane among others, is a general anaesthetic . [ 5 ] It can be used to induce or maintain anaesthesia . [ 5 ] One of its benefits is that it does not increase the production of saliva , which can be particularly useful in those who are difficult to intubate . [ 5 ] It is given by inhalation . [ 5 ] Side effects include an irregular heartbeat , respiratory depression , and hepatotoxicity . [ 5 ] Like all volatile anesthetics, it should not be used in people with a personal or family history of malignant hyperthermia . [ 5 ] It appears to be safe in porphyria . [ 6 ] It is unclear whether its usage during pregnancy is harmful to the fetus, and its use during a C-section is generally discouraged. [ 7 ] Halothane is a chiral molecule that is used as a racemic mixture . [ 8 ] Halothane was discovered in 1951. [ 9 ] It was approved for medical use in the United States in 1958. [ 3 ] It is on the World Health Organization's List of Essential Medicines . [ 10 ] Its use in developed countries has been mostly replaced by newer anesthetic agents such as sevoflurane . [ 11 ] It is no longer commercially available in the United States. [ 7 ] Halothane also contributes to ozone depletion . [ 12 ] [ 13 ] It is a potent anesthetic with a minimum alveolar concentration (MAC) of 0.74%. [ 14 ] Its blood/gas partition coefficient of 2.4 makes it an agent with moderate induction and recovery time. [ 15 ] It is not a good analgesic and its muscle relaxation effect is moderate. [ 16 ] Halothane is colour-coded red on anaesthetic vaporisers . [ 17 ] Side effects include irregular heartbeat , respiratory depression , and hepatotoxicity . [ 5 ] It appears to be safe in porphyria . [ 6 ] It is unclear whether use during pregnancy is harmful to the baby, and it is not generally recommended for use during a C-section . [ 7 ] In rare cases, repeated exposure to halothane in adults was noted to result in severe liver injury. 
This occurred in about one in 10,000 exposures. The resulting syndrome, referred to as halothane hepatitis , is immunoallergic in origin [ 18 ] and is thought to result from the metabolism of halothane to trifluoroacetic acid via oxidative reactions in the liver. About 20% of inhaled halothane is metabolized by the liver and these products are excreted in the urine. The hepatitis syndrome had a mortality rate of 30% to 70%. [ 19 ] Concern for hepatitis resulted in a dramatic reduction in the use of halothane for adults and it was replaced in the 1980s by enflurane and isoflurane . [ 20 ] [ 21 ] By 2005, the most common volatile anesthetics used were isoflurane , sevoflurane , and desflurane . Since the risk of halothane hepatitis in children was substantially lower than in adults, halothane continued to be used in pediatrics in the 1990s as it was especially useful for inhalation induction of anesthesia. [ 22 ] [ 23 ] However, by 2000, sevoflurane, excellent for inhalation induction, had largely replaced the use of halothane in children. [ 24 ] Halothane sensitises the heart to catecholamines, so it is liable to cause cardiac arrhythmia, occasionally fatal, particularly if hypercapnia has been allowed to develop. This seems to be especially problematic in dental anesthesia. [ 25 ] Like all the potent inhalational anaesthetic agents, it is a potent trigger for malignant hyperthermia . [ 5 ] Similarly, in common with the other potent inhalational agents, it relaxes uterine smooth muscle and this may increase blood loss during delivery or termination of pregnancy. [ 26 ] People can be exposed to halothane in the workplace by breathing it in as waste anaesthetic gas, skin contact, eye contact, or swallowing it. [ 27 ] The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 2 ppm (16.2 mg/m 3 ) over 60 minutes. [ 28 ] The exact mechanism of the action of general anaesthetics has not been delineated.
[ 29 ] Halothane activates GABA A and glycine receptors . [ 30 ] [ 31 ] It also acts as an NMDA receptor antagonist , [ 31 ] inhibits nACh and voltage-gated sodium channels , [ 30 ] [ 32 ] and activates 5-HT 3 and twin-pore K + channels . [ 30 ] [ 33 ] It does not affect the AMPA or kainate receptors . [ 31 ] Halothane (2-bromo-2-chloro-1,1,1-trifluoroethane) is a dense, highly volatile, clear, colourless, nonflammable liquid with a chloroform-like sweet odour. It is very slightly soluble in water and miscible with various organic solvents. Halothane can decompose to hydrogen fluoride , hydrogen chloride and hydrogen bromide in the presence of light and heat. [ 34 ] Chemically, halothane is an alkyl halide (not an ether like many other anesthetics). [ 4 ] The structure has one stereocenter, so ( R )- and ( S )- optical isomers occur. [ citation needed ] The commercial synthesis of halothane starts from trichloroethylene , which is reacted with hydrogen fluoride in the presence of antimony trichloride at 130 °C to form 2-chloro-1,1,1-trifluoroethane . This is then reacted with bromine at 450 °C to produce halothane. [ 35 ] Attempts to find anesthetics with less metabolism led to halogenated ethers such as enflurane and isoflurane . The incidence of hepatic reactions with these agents is lower. The exact degree of hepatotoxic potential of enflurane is debated, although it is minimally metabolized. Isoflurane is essentially not metabolized and reports of associated liver injury are quite rare. [ 36 ] Small amounts of trifluoroacetic acid can be formed from both halothane and isoflurane metabolism and possibly accounts for cross sensitization of patients between these agents. [ 37 ] [ 38 ] The main advantage of the more modern agents is lower blood solubility, resulting in faster induction of and recovery from anaesthesia. [ 39 ] Halothane was first synthesized by C. W. 
Suckling of Imperial Chemical Industries in 1951 at the ICI Widnes Laboratory and was first used clinically by M. Johnstone in Manchester in 1956. Initially, many pharmacologists and anaesthesiologists had doubts about the safety and efficacy of the new drug. But halothane, which required specialist knowledge and technologies for safe administration, also afforded British anaesthesiologists the opportunity to remake their speciality as a profession during a period when the newly established National Health Service needed more specialist consultants. [ 40 ] In this context, halothane eventually became popular as a nonflammable general anesthetic replacing other volatile anesthetics such as trichloroethylene , diethyl ether and cyclopropane . In many parts of the world it has been largely replaced by newer agents since the 1980s but is still widely used in developing countries because of its lower cost. [ 41 ] Halothane was given to many millions of people worldwide from its introduction in 1956 through the 1980s. [ 42 ] Its properties include cardiac depression at high levels, cardiac sensitization to catecholamines such as norepinephrine , and potent bronchial relaxation. Its lack of airway irritation made it a common inhalation induction agent in pediatric anesthesia. [ 43 ] [ 44 ] Its use in developed countries has been mostly replaced by newer anesthetic agents such as sevoflurane . [ 45 ] It is not commercially available in the United States. [ 7 ] It is on the World Health Organization's List of Essential Medicines . [ 10 ] It is available as a volatile liquid, at 30, 50, 200, and 250 ml per container, but in many developed nations it is not available, having been displaced by newer agents. [ 46 ] It is the only inhalational anesthetic containing bromine , which makes it radiopaque . [ 47 ] It is colorless and pleasant-smelling, but unstable in light. It is packaged in dark-colored bottles and contains 0.01% thymol as a stabilizing agent.
[ 20 ] Owing to the presence of covalently bonded fluorine, halothane absorbs in the atmospheric window and is therefore a greenhouse gas . However, it is much less potent than most other chlorofluorocarbons and bromofluorocarbons due to its short atmospheric lifetime, estimated at only one year vis-à-vis over 100 years for many perfluorocarbons . [ 48 ] Despite its short lifespan, halothane still has a global warming potential 50 times that of carbon dioxide, although this is over 100 times smaller than the most abundant fluorinated gases, and about 800 times smaller than the GWP of sulfur hexafluoride over 500 years. [ 49 ] Halothane is believed to make a negligible contribution to global warming . [ 48 ] Halothane is an ozone depleting substance with an ODP of 1.56 and it is calculated to be responsible for 1% of total stratospheric ozone layer depletion. [ 12 ] [ 13 ] Unlike most ozone depleting substances, it is not governed under the Montreal Protocol . [ 50 ]
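The NIOSH limit quoted earlier (2 ppm, 16.2 mg/m 3 ) can be cross-checked with the standard ppm-to-mg/m 3 conversion for gases and vapours. This is a sketch assuming 25 °C and 1 atm; the function name and the atomic masses are supplied here for illustration, not taken from the article.

```python
# Converting a gas-phase exposure limit from ppm (v/v) to mg/m^3:
#   mg/m^3 = ppm * MW / Vm
# where Vm ~= 24.45 L/mol is the molar volume of an ideal gas at 25 C, 1 atm.

# Molar mass of halothane, C2HBrClF3, from standard atomic masses (g/mol):
HALOTHANE_MW = 2 * 12.011 + 1.008 + 79.904 + 35.453 + 3 * 18.998  # ~197.4

def ppm_to_mg_per_m3(ppm: float, molar_mass: float, molar_volume_l: float = 24.45) -> float:
    return ppm * molar_mass / molar_volume_l

# The NIOSH REL of 2 ppm works out to roughly 16 mg/m^3,
# consistent with the quoted 16.2 mg/m^3:
print(round(ppm_to_mg_per_m3(2.0, HALOTHANE_MW), 1))
```

The small difference from the quoted 16.2 mg/m 3 comes down to rounding of the molar mass and the choice of reference temperature for the molar volume.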
https://en.wikipedia.org/wiki/Halothane
Halotolerance is the adaptation of living organisms to conditions of high salinity . [ 1 ] Halotolerant species tend to live in areas such as hypersaline lakes , coastal dunes , saline deserts , salt marshes , and inland salt seas and springs . Halophiles , by contrast, are organisms that not only live in highly saline environments but also require the salinity to survive. Halotolerant organisms, on the other hand (belonging to different domains of life), can grow under saline conditions, but do not require elevated concentrations of salt for growth. Halophytes are salt-tolerant higher plants. Halotolerant microorganisms are of considerable biotechnological interest. [ 2 ] Fields of scientific research relevant to halotolerance include biochemistry , molecular biology , cell biology , physiology , ecology , and genetics . An understanding of halotolerance can be applicable to areas such as arid-zone agriculture , xeriscaping , aquaculture (of fish or algae), bioproduction of desirable compounds (such as phycobiliproteins or carotenoids ) using seawater to support growth, or remediation of salt-affected soils. In addition, many environmental stressors involve or induce osmotic changes, so knowledge gained about halotolerance can also be relevant to understanding tolerance to extremes in moisture or temperature. Goals of studying halotolerance include increasing the agricultural productivity of lands affected by soil salination or where only saline water is available. Conventional agricultural species could be made more halotolerant by gene transfer from naturally halotolerant species (by conventional breeding or genetic engineering ) or by applying treatments developed from an understanding of the mechanisms of halotolerance. In addition, naturally halotolerant plants or microorganisms could be developed into useful agricultural crops or fermentation organisms. Tolerance of high salt conditions can be obtained through several routes.
High levels of salt entering the plant can trigger ionic imbalances which cause complications in respiration and photosynthesis, leading to reduced rates of growth, injury, and death in severe cases. To be considered tolerant of saline conditions, the protoplast must show methods of balancing the toxic and osmotic effects of the increased salt concentrations. Halophytic vascular plants can survive on soils with salt concentrations around 6%, or up to 20% in extreme cases ( ocean salinity is around 3.5%). Tolerance of such conditions is reached through the use of stress proteins and compatible cytoplasmic osmotic solutes. [ 3 ] To exist in such conditions, halophytes tend to take up high levels of salt into their cells, which is often required to maintain an osmotic potential lower than that of the soil and so ensure water uptake. High salt concentrations within the cell can be damaging to sensitive organelles such as the chloroplast, so the salt is sequestered: it is stored within the vacuole to protect such delicate areas. If high salt concentrations are held within the vacuole, a steep concentration gradient is established between the vacuole and the cytoplasm, requiring a high investment of energy to maintain. The accumulation of compatible cytoplasmic osmotic solutes prevents this situation from occurring. Amino acids such as proline accumulate in halophytic Brassica species; quaternary ammonium bases such as glycine betaine and sugars have been shown to act in this role within halophytic members of Chenopodiaceae ; and members of Asteraceae show the buildup of cyclitols and soluble sugars. The buildup of these compounds allows the osmotic effect to be balanced while preventing the establishment of toxic concentrations of salt or the need to maintain steep concentration gradients. 
[ citation needed ] The extent of halotolerance varies widely amongst different species of bacteria. [ 4 ] A number of cyanobacteria are halotolerant; an example location of occurrence for such cyanobacteria is in the Makgadikgadi Pans , a large hypersaline lake in Botswana . [ 5 ] Fungi from habitats with high concentration of salt are mostly halotolerant (i.e. they do not require salt for growth) and not halophilic. Halophilic fungi are a rare exception. [ 6 ] Halotolerant fungi constitute a relatively large and constant part of hypersaline environment communities, such as those in the solar salterns . [ 7 ] Well studied examples include the yeast Debaryomyces hansenii and black yeasts Aureobasidium pullulans and Hortaea werneckii . [ 8 ] The latter can grow in media without salt, as well as in almost saturated NaCl solutions. To emphasize this unusually wide adaptability , some authors describe H. werneckii as "extremely halotolerant". [ 9 ]
https://en.wikipedia.org/wiki/Halotolerance
In mathematics , the Halpern–Läuchli theorem is a partition result about finite products of infinite trees . Its original purpose was to give a model for set theory in which the Boolean prime ideal theorem is true but the axiom of choice is false. It is often called the Halpern–Läuchli theorem, but the proper attribution for the theorem as it is formulated below is to Halpern–Läuchli–Laver–Pincus or HLLP (named after James D. Halpern, Hans Läuchli, Richard Laver , and David Pincus), following Milliken (1979) . Let d , r < ω, ⟨ T i : i ∈ d ⟩ {\displaystyle \langle T_{i}:i\in d\rangle } be a sequence of finitely splitting trees of height ω. Let then there exists a sequence of subtrees ⟨ S i : i ∈ d ⟩ {\displaystyle \langle S_{i}:i\in d\rangle } strongly embedded in ⟨ T i : i ∈ d ⟩ {\displaystyle \langle T_{i}:i\in d\rangle } such that Alternatively, let and The HLLP theorem says that not only is the collection S d {\displaystyle \mathbb {S} ^{d}} partition regular for each d < ω , but that the homogeneous subtree guaranteed by the theorem is strongly embedded in
https://en.wikipedia.org/wiki/Halpern–Läuchli_theorem
The Halpin–Tsai model is a mathematical model for predicting the elasticity of a composite material based on the geometry and orientation of the filler and the elastic properties of the filler and matrix. The model is based on the self-consistent field method, although it is often considered to be empirical. This article about materials science is a stub . You can help Wikipedia by expanding it .
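The commonly quoted textbook form of the Halpin–Tsai relation (standard notation, not reproduced in the article above) can be sketched as follows; the symbols E_f, E_m, V_f and the geometry parameter ζ are the conventional ones:

```python
def halpin_tsai(E_f, E_m, V_f, zeta):
    """Halpin-Tsai estimate of a composite modulus.

    E_f, E_m : filler and matrix moduli (same units)
    V_f      : filler volume fraction (0..1)
    zeta     : geometry/shape parameter, e.g. 2*(aspect ratio)
               for the longitudinal modulus of aligned short fibres
    """
    ratio = E_f / E_m
    eta = (ratio - 1.0) / (ratio + zeta)
    return E_m * (1.0 + zeta * eta * V_f) / (1.0 - eta * V_f)
```

For example, aligned short glass fibres (E_f ≈ 72 GPa) in epoxy (E_m ≈ 3 GPa) at V_f = 0.3 with ζ = 20 give a longitudinal modulus of roughly 15 GPa, which lies between the series (inverse rule-of-mixtures) and parallel (rule-of-mixtures) bounds; as ζ → ∞ the estimate approaches the parallel bound, and as ζ → 0 the series bound.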
https://en.wikipedia.org/wiki/Halpin–Tsai_model
Halteromyces is a genus of fungi belonging to the family Cunninghamellaceae . [ 1 ] Species: [ 1 ] This Zygomycota -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Halteromyces
In mathematical measure theory , for every positive integer n the ham sandwich theorem states that given n measurable "objects" in n - dimensional Euclidean space , it is possible to divide each one of them in half (with respect to their measure , e.g. volume) with a single ( n − 1) -dimensional hyperplane . This is possible even if the objects overlap. It was proposed by Hugo Steinhaus and proved by Stefan Banach (explicitly in dimension 3, without stating the theorem in the n -dimensional case), and also years later called the Stone–Tukey theorem after Arthur H. Stone and John Tukey . The ham sandwich theorem takes its name from the case when n = 3 and the three objects to be bisected are the ingredients of a ham sandwich . Sources differ on whether these three ingredients are two slices of bread and a piece of ham ( Peters 1981 ), bread and cheese and ham ( Cairns 1963 ), or bread and butter and ham ( Dubins & Spanier 1961 ). In two dimensions, the theorem is known as the pancake theorem to refer to the flat nature of the two objects to be bisected by a line ( Cairns 1963 ). According to Beyer & Zardecki (2004) , the earliest known paper about the ham sandwich theorem, specifically the n = 3 case of bisecting three solids with a plane, is a 1938 note in a Polish mathematics journal ( Editors 1938 ). Beyer and Zardecki's paper includes a translation of this note, which attributes the posing of the problem to Hugo Steinhaus , and credits Stefan Banach as the first to solve the problem, by a reduction to the Borsuk–Ulam theorem . The note poses the problem in two ways: first, formally, as "Is it always possible to bisect three solids, arbitrarily located, with the aid of an appropriate plane?" and second, informally, as "Can we place a piece of ham under a meat cutter so that meat, bone, and fat are cut in halves?" The note then offers a proof of the theorem. A more modern reference is Stone & Tukey (1942) , which is the basis of the name "Stone–Tukey theorem". 
This paper proves the n -dimensional version of the theorem in a more general setting involving measures. The paper attributes the n = 3 case to Stanislaw Ulam , based on information from a referee; but Beyer & Zardecki (2004) claim that this is incorrect, given the note mentioned above, although "Ulam did make a fundamental contribution in proposing" the Borsuk–Ulam theorem . The two-dimensional variant of the theorem (also known as the pancake theorem ) can be proved by an argument which appears in the fair cake-cutting literature (see e.g. Robertson–Webb rotating-knife procedure ). For each angle α ∈ [ 0 , 180 ∘ ] {\displaystyle \alpha \in [0,180^{\circ }]} , a straight line ("knife") of angle α {\displaystyle \alpha } can bisect pancake #1. To see this, translate along its normal a straight line of angle α {\displaystyle \alpha } from − ∞ {\displaystyle -\infty } to ∞ {\displaystyle \infty } ; the fraction of pancake #1 covered by the line changes continuously from 0 to 1, so by the intermediate value theorem it must be equal to 1/2 somewhere along the way. It is possible that an entire range of translations of our line yield a fraction of 1/2; in this case, it is a canonical choice to pick the middle one of all such translations. When the knife is at angle 0, it also cuts pancake #2, but the pieces are probably unequal (if we are lucky and the pieces are equal, we are done). Define the 'positive' side of the knife as the side in which the fraction of pancake #2 is larger. We now turn the knife, and translate it as described above. When the angle is α {\displaystyle \alpha } , define p ( α ) {\displaystyle p(\alpha )} as the fraction of pancake #2 at the positive side of the knife. Initially p ( 0 ) > 1 / 2 {\displaystyle p(0)>1/2} . The function p {\displaystyle p} is continuous, since small changes in the angle lead to small changes in the position of the knife. 
When the knife is at angle 180, the knife is upside-down, so p ( 180 ) < 1 / 2 {\displaystyle p(180)<1/2} . By the intermediate value theorem , there must be an angle in which p ( α ) = 1 / 2 {\displaystyle p(\alpha )=1/2} . Cutting at that angle bisects both pancakes simultaneously. The ham sandwich theorem can be proved as follows using the Borsuk–Ulam theorem . This proof follows the one described by Steinhaus and others (1938), attributed there to Stefan Banach , for the n = 3 case. In the field of Equivariant topology , this proof would fall under the configuration-space/tests-map paradigm. Let A 1 , A 2 , ..., A n denote the n compact (or more generally bounded and Lebesgue-measurable ) subsets of R n {\displaystyle \mathbb {R} ^{n}} that we wish to simultaneously bisect. Let S = { v = ( v 1 , … , v n ) ∈ R n : v 1 2 + … + v n 2 = 1 } {\displaystyle S=\{v=(v_{1},\ldots ,v_{n})\in \mathbb {R} ^{n}\colon v_{1}^{2}+\ldots +v_{n}^{2}=1\}} be the unit ( n − 1) -sphere in R n {\displaystyle \mathbb {R} ^{n}} . For each point v on S , we can define a continuum ( E v , c ) c ∈ R {\displaystyle (E_{v,c})_{c\in \mathbb {R} }} of affine hyperplanes with normal vector v : E v , c = { x ∈ R n : x 1 v 1 + … + x n v n = c } {\displaystyle E_{v,c}=\{x\in \mathbb {R} ^{n}\colon x_{1}v_{1}+\ldots +x_{n}v_{n}=c\}} for c ∈ R {\displaystyle c\in \mathbb {R} } . For each c ∈ R {\displaystyle c\in \mathbb {R} } , we call the space E v , c + = { x ∈ R n : x 1 v 1 + … + x n v n > c } {\displaystyle E_{v,c}^{+}=\{x\in \mathbb {R} ^{n}\colon x_{1}v_{1}+\ldots +x_{n}v_{n}>c\}} the "positive side" of E v , c {\displaystyle E_{v,c}} , which is the side pointed to by the vector v . 
By the intermediate value theorem , every family of such hyperplanes contains at least one hyperplane that bisects the bounded set A n : at one extreme translation, no volume of A n is on the positive side, and at the other extreme translation, all of A n 's volume is on the positive side, so in between there must be a closed interval I v of possible values of c ∈ R {\displaystyle c\in \mathbb {R} } , for which E v , c {\displaystyle E_{v,c}} bisects the volume of A n . If A n has volume zero, we pick c = 0 {\displaystyle c=0} for all v ∈ S {\displaystyle v\in S} . Otherwise, the interval I v is compact and we can canonically pick c = 1 2 ( inf I v + sup I v ) {\displaystyle c={\frac {1}{2}}(\inf I_{v}+\sup I_{v})} as its midpoint for each v ∈ S {\displaystyle v\in S} . Thus we obtain a continuous function α : S → R {\displaystyle \alpha \colon S\to \mathbb {R} } such that for each point v on the sphere S the hyperplane E v , α ( v ) {\displaystyle E_{v,\alpha (v)}} bisects A n . Note further that we have I − v = − I v {\displaystyle I_{-v}=-I_{v}} and thus α ( − v ) = − α ( v ) {\displaystyle \alpha (-v)=-\alpha (v)} for all v ∈ S {\displaystyle v\in S} . Now we define a function f : S → R n − 1 {\displaystyle f\colon S\to \mathbb {R} ^{n-1}} as follows: This function f is continuous (which can be proven with the dominated convergence theorem ). By the Borsuk–Ulam theorem , there are antipodal points v {\displaystyle v} and − v {\displaystyle -v} on the sphere S such that f ( v ) = f ( − v ) {\displaystyle f(v)=f(-v)} . Antipodal points correspond to hyperplanes E v , α ( v ) {\displaystyle E_{v,\alpha (v)}} and E − v , α ( − v ) = E − v , − α ( v ) {\displaystyle E_{-v,\alpha (-v)}=E_{-v,-\alpha (v)}} that are equal except that they have opposite positive sides. 
Thus, f ( v ) = f ( − v ) {\displaystyle f(v)=f(-v)} means that the volume of A i is the same on the positive and negative side of E v , α ( v ) {\displaystyle E_{v,\alpha (v)}} , for i = 1 , … , n {\displaystyle i=1,\ldots ,n} . Thus, E v , α ( v ) {\displaystyle E_{v,\alpha (v)}} is the desired ham sandwich cut that simultaneously bisects the volumes of A 1 , ..., A n . In measure theory , Stone & Tukey (1942) proved two more general forms of the ham sandwich theorem. Both versions concern the bisection of n subsets X 1 , X 2 , ..., X n of a common set X , where X has a Carathéodory outer measure and each X i has finite outer measure. Their first general formulation is as follows: for any continuous real function f : S n × X → R {\displaystyle f\colon S^{n}\times X\to \mathbb {R} } , there is a point p of the n - sphere S n and a real number s 0 such that the surface f ( p , x ) = s 0 divides X into f ( p , x ) < s 0 and f ( p , x ) > s 0 of equal measure and simultaneously bisects the outer measure of X 1 , X 2 , ..., X n . The proof is again a reduction to the Borsuk-Ulam theorem. This theorem generalizes the standard ham sandwich theorem by letting f ( s , x ) = s 1 x 1 + ... + s n x n . Their second formulation is as follows: for any n + 1 measurable functions f 0 , f 1 , ..., f n over X that are linearly independent over any subset of X of positive measure, there is a linear combination f = a 0 f 0 + a 1 f 1 + ... + a n f n such that the surface f ( x ) = 0 , dividing X into f ( x ) < 0 and f ( x ) > 0 , simultaneously bisects the outer measure of X 1 , X 2 , ..., X n . This theorem generalizes the standard ham sandwich theorem by letting f 0 ( x ) = 1 and letting f i ( x ) , for i > 0 , be the i -th coordinate of x . In discrete geometry , the ham sandwich theorem usually refers to the special case in which each of the sets being divided is a finite set of points . 
Here the relevant measure is the counting measure , which simply counts the number of points on either side of the hyperplane. In two dimensions, the theorem can be stated as follows: There is an exceptional case when points lie on the line. In this situation, we count each of these points as either being on one side, on the other, or on neither side of the line (possibly depending on the point), i.e. "bisecting" in fact means that each side contains less than half of the total number of points. This exceptional case is actually required for the theorem to hold, of course when the number of red points or the number of blue is odd, but also in specific configurations with even numbers of points, for instance when all the points lie on the same line and the two colors are separated from each other (i.e. colors don't alternate along the line). A situation where the numbers of points on each side cannot match each other is provided by adding an extra point out of the line in the previous configuration. In computational geometry , this ham sandwich theorem leads to a computational problem, the ham sandwich problem . In two dimensions, the problem is this: given a finite set of n points in the plane, each colored "red" or "blue", find a ham sandwich cut for them. First, Megiddo (1985) described an algorithm for the special, separated case. Here all red points are on one side of some line and all blue points are on the other side, a situation where there is a unique ham sandwich cut, which Megiddo could find in linear time. Later, Edelsbrunner & Waupotitsch (1986) gave an algorithm for the general two-dimensional case; the running time of their algorithm is O ( n log n ) , where the symbol O indicates the use of Big O notation . Finally, Lo & Steiger (1990) found an optimal O ( n ) -time algorithm . This algorithm was extended to higher dimensions by Lo, Matoušek & Steiger (1994) where the running time is o ( n d − 1 ) {\displaystyle o(n^{d-1})} . 
Given d sets of points in general position in d -dimensional space, the algorithm computes a ( d −1) -dimensional hyperplane that has an equal number of points of each of the sets in both of its half-spaces, i.e., a ham-sandwich cut for the given points. If d is a part of the input, then no polynomial time algorithm is expected to exist, as if the points are on a moment curve , the problem becomes equivalent to necklace splitting , which is PPA-complete . A linear-time algorithm that area-bisects two disjoint convex polygons is described by Stojmenovíc (1991) . The original theorem works for at most n collections, where n is the number of dimensions. To bisect a larger number of collections without going to higher dimensions, one can use, instead of a hyperplane, an algebraic surface of degree k , i.e., an ( n −1 )–dimensional surface defined by a polynomial function of degree k : Given ( k + n n ) − 1 {\displaystyle {\binom {k+n}{n}}-1} measures in an n –dimensional space, there exists an algebraic surface of degree k which bisects them all. ( Smith & Wormald (1998) ). This generalization is proved by mapping the n –dimensional plane into a ( k + n n ) − 1 {\displaystyle {\binom {k+n}{n}}-1} dimensional plane, and then applying the original theorem. For example, for n = 2 and k = 2 , the 2–dimensional plane is mapped to a 5–dimensional plane via:
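For the discrete two-dimensional case described above, the fact that a ham-sandwich line for odd-sized point sets in general position must pass through one point of each colour permits a simple brute-force search. The following sketch is illustrative only, with cubic running time, nothing like the linear-time algorithm of Lo & Steiger:

```python
import itertools

def ham_sandwich_cut(red, blue):
    """Brute-force 2-D discrete ham-sandwich cut.

    Assumes odd-sized point sets in general position. Returns a pair
    (r, b) of one red and one blue point; the line through them leaves
    fewer than half of each colour strictly on each side."""
    def side(p, q, x):
        # > 0 if x lies left of the directed line p -> q, < 0 if right
        return (q[0] - p[0]) * (x[1] - p[1]) - (q[1] - p[1]) * (x[0] - p[0])

    for r, b in itertools.product(red, blue):
        ok = True
        for pts in (red, blue):
            left = sum(1 for x in pts if side(r, b, x) > 0)
            right = sum(1 for x in pts if side(r, b, x) < 0)
            if left > len(pts) // 2 or right > len(pts) // 2:
                ok = False
                break
        if ok:
            return r, b
    return None  # cannot happen for odd sets in general position
```

The existence guarantee of the theorem (with counting measure) is what makes the exhaustive search over red–blue pairs sufficient here.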
https://en.wikipedia.org/wiki/Ham_sandwich_theorem
In molecular physics , the Hamaker constant (denoted A ; named for H. C. Hamaker ) is a physical constant that can be defined for a van der Waals (vdW) body–body interaction as A = π²Cρ₁ρ₂, where ρ₁ and ρ₂ are the number densities of the two interacting kinds of particles, and C is the London coefficient in the particle–particle pair interaction. [ 1 ] [ 2 ] The magnitude of this constant reflects the strength of the vdW force between two particles, or between a particle and a substrate . [ 1 ] The Hamaker constant provides the means to determine the interaction parameter C from the vdW pair potential w(r) = −C/r⁶. Hamaker's method and the associated Hamaker constant ignore the influence of an intervening medium between the two particles of interaction. In 1956 Lifshitz developed a description of the vdW energy that takes into account the dielectric properties of this intervening medium (often a continuous phase). [ 3 ] The van der Waals forces are effective only up to several hundred angstroms . When the interacting bodies are farther apart, the dispersion potential decays faster than 1 / r 6 ; {\displaystyle 1/r^{6};} this is called the retarded regime, and the result is a Casimir–Polder force . This molecular physics –related article is a stub . You can help Wikipedia by expanding it .
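In conventional notation the constant is A = π²Cρ₁ρ₂ (the standard textbook definition). A minimal numerical sketch, using order-of-magnitude values typical of a hydrocarbon; the particular numbers are illustrative, not taken from the article:

```python
import math

def hamaker_constant(C, rho1, rho2):
    """Hamaker constant A = pi^2 * C * rho1 * rho2, with C the London
    coefficient of the pair potential w(r) = -C/r^6 and rho1, rho2 the
    molecular number densities of the two interacting bodies."""
    return math.pi ** 2 * C * rho1 * rho2

# Illustrative hydrocarbon-like values: C ~ 5e-78 J m^6, rho ~ 3.3e28 m^-3
A = hamaker_constant(5e-78, 3.3e28, 3.3e28)
```

This yields a few times 10⁻²⁰ J, the typical order of magnitude of Hamaker constants for condensed media interacting across vacuum.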
https://en.wikipedia.org/wiki/Hamaker_constant
After the explanation of van der Waals forces by Fritz London , several scientists soon realised that his definition could be extended from the interaction of two molecules with induced dipoles to macro-scale objects by summing all of the forces between the molecules in each of the bodies involved. The theory is named after H. C. Hamaker , who derived the interaction between two spheres and between a sphere and a wall, and presented a general discussion in a heavily cited 1937 paper. [ 1 ] The interaction of two bodies is then treated as the pairwise interaction of a set of N molecules at positions R_i ( i = 1, 2, ..., N ). The distance between molecules i and j is then r_ij = | R_i − R_j |. The interaction energy of the system is taken to be the sum of the pair terms V i n t i j {\displaystyle V_{\mathrm {int} }^{ij}} , where V i n t i j {\displaystyle V_{\mathrm {int} }^{ij}} is the interaction of molecules i and j in the absence of the influence of other molecules. The theory is, however, only an approximation, since it assumes that the interactions can be treated independently; the theory must also be adjusted to take into account quantum perturbation theory . This physical chemistry -related article is a stub . You can help Wikipedia by expanding it .
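The pairwise-summation idea can be sketched directly: approximate each body by a set of molecular positions and sum the independent London pair terms −C/r⁶ over all inter-body pairs. This is a toy illustration in arbitrary units, not Hamaker's analytic sphere–sphere integration:

```python
import math
import itertools

def pairwise_vdw_energy(body1, body2, C):
    """Hamaker-style pairwise summation: the body-body energy is taken
    to be the sum of independent London pair terms -C / r_ij^6 over all
    molecules i in body1 and j in body2."""
    total = 0.0
    for Ri, Rj in itertools.product(body1, body2):
        r_ij = math.dist(Ri, Rj)   # |R_i - R_j|
        total += -C / r_ij ** 6
    return total

# Two small cubic clusters of 8 "molecules" each, in arbitrary units
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
near = [(x + 3, y, z) for (x, y, z) in cube]
far = [(x + 6, y, z) for (x, y, z) in cube]
```

The summed attraction is negative and falls off rapidly with separation: the energy between `cube` and `near` has a larger magnitude than that between `cube` and `far`.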
https://en.wikipedia.org/wiki/Hamaker_theory
Hamburg Aviation , formerly the "Luftfahrtcluster Metropolregion Hamburg e.V." (Aviation Cluster Hamburg Metropolitan Region), is an association of aviation organizations in Hamburg , Germany. Its goal is to promote the aviation industry in the Hamburg Metropolitan Region . [ 1 ] Companies based in the Hamburg Metropolitan Region include the aircraft manufacturer Airbus and Lufthansa Technik . [ 2 ] [ 3 ] Hamburg Airport , which first opened in 1912, is one of the world's oldest operational airports still based at its original location. [ 2 ] [ 4 ] There are over 300 specialist suppliers, including branches of Diehl Aerospace . [ 5 ] As of 2012, the cluster had over 40,000 employees, [ 6 ] making it one of the largest sites for civil aviation in the world. [ 7 ] Also based in Hamburg are the German Aerospace Center ’s Institute of Aerospace Medicine and Institute of Air Transportation Systems. [ 8 ] Hamburg is the host city of the annual Aircraft Interiors Expo, [ 9 ] a trade show for the aircraft cabin industry. The Crystal Cabin Award was launched in 2007 to honour innovation in the field of cabin design. [ 10 ] [ 11 ] The prize is funded by sponsors from the aviation industry. In 2001, companies, universities and government bodies began collaborating to promote the local aviation industry; this developed into the "Luftfahrtcluster Metropolregion Hamburg e.V." association, officially established in 2011 with 15 founding members. Its mission statement is to promote the aviation industry in the Hamburg business cluster . [ 12 ]
https://en.wikipedia.org/wiki/Hamburg_Aviation
In mathematics , the Hamburger moment problem , named after Hans Ludwig Hamburger , is formulated as follows: given a sequence ( m₀ , m₁ , m₂ , ... ), does there exist a positive Borel measure μ (for instance, the measure determined by the cumulative distribution function of a random variable ) on the real line such that m_n = ∫ x^n dμ(x) for all n ≥ 0? In other words, an affirmative answer to the problem means that ( m₀ , m₁ , m₂ , ... ) is the sequence of moments of some positive Borel measure μ . The Stieltjes moment problem , Vorobyev moment problem , and the Hausdorff moment problem are similar but replace the real line by [ 0 , + ∞ ) {\displaystyle [0,+\infty )} (Stieltjes and Vorobyev; but Vorobyev formulates the problem in terms of matrix theory) or by a bounded interval (Hausdorff). The Hamburger moment problem is solvable (that is, ( m_n ) is a sequence of moments ) if and only if the corresponding Hankel kernel on the nonnegative integers, A( m , n ) = m_{m+n}, is positive definite , i.e., ∑_{j,k ≥ 0} m_{j+k} c_j c̄_k ≥ 0 for every sequence ( c_j )_{j ≥ 0} of complex numbers that is finitary (i.e., c_j = 0 except for finitely many values of j ). For the "only if" part of the claim simply note that ∑_{j,k ≥ 0} m_{j+k} c_j c̄_k = ∫ | ∑_j c_j x^j |² dμ(x), which is non-negative if μ is non-negative. We sketch an argument for the converse. Let Z₊ be the nonnegative integers and F₀(Z₊) denote the family of complex-valued sequences with finitary support. The positive Hankel kernel A induces a (possibly degenerate) sesquilinear product on the family of complex-valued sequences with finite support. This in turn gives a Hilbert space H {\displaystyle {\mathcal {H}}} whose typical element is an equivalence class denoted by [ f ]. Let e_n be the element in F₀(Z₊) defined by e_n ( m ) = δ_{nm}. One notices that ⟨ [ e_m ] , [ e_n ] ⟩ = m_{m+n}. Therefore, the shift operator T on H {\displaystyle {\mathcal {H}}} , with T [ e_n ] = [ e_{n+1} ], is symmetric . On the other hand, the desired expression m_n = ∫ x^n dμ(x) suggests that μ is the spectral measure of a self-adjoint operator . 
(More precisely stated, μ is the spectral measure for an operator T ¯ {\displaystyle {\overline {T}}} defined below and the vector [1] ( Reed & Simon 1975 , p. 145)). If we can find a "function model" such that the symmetric operator T is multiplication by x , then the spectral resolution of a self-adjoint extension of T proves the claim. A function model is given by the natural isomorphism from F₀(Z₊) to the family of polynomials in one single real variable with complex coefficients: for n ≥ 0, identify e_n with x^n . In the model, the operator T is multiplication by x , a densely defined symmetric operator. It can be shown that T always has self-adjoint extensions. Let T ¯ {\displaystyle {\overline {T}}} be one of them and μ be its spectral measure. So ⟨ T̄^n [1], [1] ⟩ = ∫ x^n dμ(x). On the other hand, ⟨ T̄^n [1], [1] ⟩ = ⟨ [ e_n ] , [ e₀ ] ⟩ = m_n . For an alternative proof of the existence that only uses Stieltjes integrals , see also, [ 1 ] in particular theorem 3.2. The solutions form a convex set, so the problem has either infinitely many solutions or a unique solution. Consider the ( n + 1) × ( n + 1) Hankel matrix Δ_n = [ m_{j+k} ]_{j,k = 0,...,n}. Positivity of A means that, for each n , det(Δ_n) ≥ 0. If det(Δ_n) = 0 for some n , then the Hilbert space H {\displaystyle {\mathcal {H}}} is finite-dimensional and T is self-adjoint. So in this case the solution to the Hamburger moment problem is unique and μ , being the spectral measure of T , has finite support. More generally, the solution is unique if there are constants C and D such that, for all n , | m_n | ≤ CD^n n ! ( Reed & Simon 1975 , p. 205). This follows from the more general Carleman's condition . There are examples where the solution is not unique; see e.g. [ 2 ] The Hamburger moment problem is intimately related to orthogonal polynomials on the real line. That is, assume { m n } n ∈ N 0 {\displaystyle \{m_{n}\}_{n\in \mathbb {N} _{0}}} is the moment sequence of some positive measure μ {\displaystyle \mu } on R {\displaystyle \mathbb {R} } . 
Then for any polynomial p ( x ) = ∑ j = 0 n a j x j ∈ R , {\displaystyle p(x)=\sum _{j=0}^{n}a_{j}x^{j}\in \mathbb {R} ,} it holds that ∫ p ( x ) 2 d μ = ∫ ( ∑ j , k = 0 n a j a k x j + k ) d μ = ∑ j , k = 0 n a j a k m j + k ≥ 0 , ∀ n ∈ N 0 , {\displaystyle \int p(x)^{2}d\mu =\int \left(\sum _{j,k=0}^{n}a_{j}a_{k}x^{j+k}\right)d\mu =\sum _{j,k=0}^{n}a_{j}a_{k}m_{j+k}\geq 0,\quad \forall n\in \mathbb {N} _{0},} such that the Hankel matrix is positive semidefinite. This is a necessary condition for a sequence to be a moment sequence and a sufficient condition for the existence of a positive measure. [ 3 ] The Gram–Schmidt procedure gives a basis of orthogonal polynomials in which the operator: T ¯ {\displaystyle {\overline {T}}} has a tridiagonal Jacobi matrix representation . This in turn leads to a tridiagonal model of positive Hankel kernels. An explicit calculation of the Cayley transform of T shows the connection with what is called the Nevanlinna class of analytic functions on the left half plane. Passing to the non-commutative setting, this motivates Krein's formula which parametrizes the extensions of partial isometries. The cumulative distribution function and the probability density function can often be found by applying the inverse Laplace transform to the moment generating function provided that this function converges.
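The positive-semidefiniteness criterion is easy to check numerically. As an illustration, the following sketch uses the well-known moments of the standard normal distribution (m_{2k} = (2k−1)!!, odd moments zero; these values are standard facts, not given in the article):

```python
import numpy as np

def hankel(moments, n):
    """(n+1) x (n+1) Hankel matrix H[j, k] = m_{j+k} built from the
    moment sequence m_0, ..., m_{2n}."""
    return np.array([[moments[j + k] for k in range(n + 1)]
                     for j in range(n + 1)], dtype=float)

# Moments of the standard normal: odd moments vanish, m_{2k} = (2k-1)!!
normal_moments = [1, 0, 1, 0, 3, 0, 15, 0, 105]
H = hankel(normal_moments, 4)
eigs = np.linalg.eigvalsh(H)   # all eigenvalues should be non-negative
```

Because the normal distribution has infinite support, every Hankel determinant here is strictly positive; these moments also satisfy the growth bound | m_n | ≤ CDⁿ n!, so the corresponding moment problem is solvable and, by the uniqueness discussion above, determinate.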
https://en.wikipedia.org/wiki/Hamburger_moment_problem
Hamid Jafarkhani ( Persian : حمید جعفرخانی ) (born 1966, in Tehran ) is an Iranian-born American electrical engineer and professor. He serves as the Chancellor's Professor in electrical engineering and computer science in the Henry Samueli School of Engineering at the University of California, Irvine (UC Irvine). His research focuses on communications theory, particularly coding and wireless communications and networks . Prior to studying at the University of Tehran , he was ranked first in the nationwide entrance examination of Iranian universities in 1984. After receiving his B.S. degree in 1989, he studied at the University of Maryland College Park and obtained his M.S. degree in 1994 followed by his Ph.D. in 1997. After graduating, Jafarkhani joined AT&T Laboratories-Research in August 1997 before moving to Broadcom in July 2000 and to the University of California, Irvine in September 2001. Within the wireless communications field, Jafarkhani is best known as the primary/main inventor of space-time codes (jointly with Siavash Alamouti and Nambirajan Seshadri) [ 1 ] and for his two seminal papers [ 2 ] [ 3 ] which established the field of space–time block coding , published whilst working for AT&T. The first of these, "Space–time block codes from orthogonal designs", established the theoretical basis for space–time block codes, and the second, "Space–time block coding for wireless communications: performance results", provided numerical analysis of the performance of the first such codes. Space–time codes rely on the use of multiple antennas at the transmit side of a wireless link. Multiple copies of the same data are transmitted from these multiple antennas in such a way that the receiver has a much better chance of correctly detecting the signal in the presence of corruption and noise than if just one copy is sent. The performance of space–time coded systems, in terms of the reliability of the transmission is significantly better than non-coded systems. 
Space–time block codes in particular are known to be simple to implement and effective, and Jafarkhani's ideas in these two papers triggered the massive international research effort into them that continues today. Later, in 2001, Jafarkhani introduced quasi-orthogonal space–time block codes [ 4 ] which overcome some of the difficulties inherent in earlier codes, at a cost of transmitting less data. These too are now widely studied. Then, in 2003 he introduced a more powerful version of his original codes, the super-orthogonal space–time trellis codes [ 5 ] which combine the effects of both block codes and space–time trellis codes . Again, this work has led to significant research efforts around the world. Jafarkhani received a National Science Foundation CAREER award in January 2003 [ 6 ] which "recognizes outstanding scientists and engineers who, early in their careers, show exceptional potential for leadership at the frontiers of knowledge". [ 7 ] He is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for contributions to space-time coding, [ 8 ] an editor of IEEE Transactions on Wireless Communications and an associate editor of IEEE Communications Letters . Jafarkhani is the author of Space-Time Coding: Theory and Practice , [ 9 ] published in September 2005. Jafarkhani is a co-recipient of the 2013 IEEE Eric E. Sumner Award for outstanding contributions to communications technology. He is a recipient of the IEEE Communications Society Award for Advances in Communication. [ 10 ]
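The transmit-diversity idea described above can be illustrated with the simplest orthogonal space–time block code, the two-antenna Alamouti scheme that the later codes generalize. This is a noise-free sketch using the standard combining equations, not code quoted from the papers:

```python
import numpy as np

def alamouti_transmit(s1, s2):
    """Two time slots, two TX antennas: [[s1, s2], [-s2*, s1*]]."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at the single receive antenna; yields
    (|h1|^2 + |h2|^2) * s_i in the absence of noise."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# Noise-free sanity check with a random complex channel
rng = np.random.default_rng(0)
h1 = rng.normal() + 1j * rng.normal()
h2 = rng.normal() + 1j * rng.normal()
s1, s2 = 1 + 1j, -1 + 1j           # two QPSK symbols
X = alamouti_transmit(s1, s2)
r = X @ np.array([h1, h2])         # received samples in the two slots
s1_hat, s2_hat = alamouti_combine(r[0], r[1], h1, h2)
gain = abs(h1) ** 2 + abs(h2) ** 2
```

Each combined symbol is scaled by |h₁|² + |h₂|², so every symbol benefits from both channel paths, which is precisely the diversity gain that makes coded transmission more reliable than sending a single copy.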
https://en.wikipedia.org/wiki/Hamid_Jafarkhani
Hamilton's optico-mechanical analogy is a conceptual parallel between trajectories in classical mechanics and wavefronts in optics , introduced by William Rowan Hamilton around 1831. [ 1 ] It may be viewed as linking Huygens' principle of optics with Maupertuis' principle of mechanics. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] While Hamilton discovered the analogy in 1831, it was not applied practically until Hans Busch used it to explain electron beam focusing in 1925. [ 7 ] According to Cornelius Lanczos , the analogy has been important in the development of ideas in quantum physics. [ 3 ] Erwin Schrödinger cites the analogy in the very first sentence of his paper introducing his wave mechanics . [ 8 ] Later in the body of his paper he says: Unfortunately this powerful and momentous conception of Hamilton is deprived, in most modern reproductions, of its beautiful raiment as a superfluous accessory, in favour of a more colourless representation of the analytical correspondence. [ 9 ] Quantitative and formal analysis based on the analogy uses the Hamilton–Jacobi equation ; conversely, the analogy provides an alternative and more accessible path for introducing the Hamilton–Jacobi approach to mechanics. The orthogonality of the mechanical trajectories (the analogue of the rays of geometrical optics) to the optical wavefronts (characteristic of a full wave equation), resulting from the variational principle, leads to the corresponding differential equations. [ 10 ] The propagation of light can be considered in terms of rays and wavefronts in ordinary physical three-dimensional space. The wavefronts are two-dimensional curved surfaces; the rays are one-dimensional curved lines. [ 11 ] Hamilton's analogy amounts to two interpretations of a figure like the one shown here. In the optical interpretation, the green wavefronts are lines of constant phase and the orthogonal red lines are the rays of geometrical optics . 
In the mechanical interpretation, the green lines denote constant values of action derived by applying Hamilton's principle to mechanical motion and the red lines are the orthogonal object trajectories . [ 11 ] The orthogonality of the wavefronts to rays (or equal-action surfaces to trajectories) means we can compute one set from the other set. [ 10 ] This explains how Kirchhoff's diffraction formula predicts a wave phenomenon – diffraction – using only geometrical ray tracing. [ 7 ] : 745 Rays traced from the source to an aperture give a wavefront that becomes a source of rays reaching the diffraction pattern, where they are summed using complex phases from the orthogonal wavefronts. The wavefronts and rays or the equal-action surfaces and trajectories are dual objects linked by orthogonality. [ 10 ] On one hand, a ray can be regarded as the orbit of a particle of light. It successively punctures the wave surfaces. The successive punctures can be regarded as defining the trajectory of the particle. On the other hand, a wave-front can be regarded as a level surface of displacement of some quantity, such as electric field intensity, hydrostatic pressure, particle number density, oscillatory phase, or probability amplitude. Then the physical meaning of the rays is less evident. [ 12 ] The Hamilton optico-mechanical analogy is closely related to Fermat's principle and thus to the Huygens–Fresnel principle . [ 10 ] Fermat's principle states that the rays between wavefronts will take the path of least time; the concept of successive wavefronts derives from Huygens' principle. Going beyond ordinary three-dimensional physical space, one can imagine a higher-dimensional abstract configuration "space", with a dimension a multiple of 3. In this space, one can again imagine rays as one-dimensional curved lines. Now the wavefronts are hypersurfaces of dimension one less than the dimension of the space. 
[ 6 ] Such a multi-dimensional space can serve as a configuration space for a multi-particle system. Albert Messiah considers a classical limit of the Schrödinger equation. He finds there an optical analogy. The trajectories of his particles are orthogonal to the surfaces of equal phase. He writes "In the language of optics, the latter are the wave fronts, and the trajectories of the particles are the rays. Hence the classical approximation is equivalent to the geometric optics approximation: we find once again, as a consequence of the Schrödinger equation, the basic postulate of the theory of matter waves." [ 13 ] Hamilton's optico-mechanical analogy played a critical part [ 14 ] [ 11 ] in the thinking of Schrödinger , one of the originators of quantum mechanics. Section 1 of his paper published in December 1926 is titled "The Hamiltonian analogy between mechanics and optics". [ 15 ] Section 1 of the first of his four lectures on wave mechanics delivered in 1928 is titled "Derivation of the fundamental idea of wave mechanics from Hamilton's analogy between ordinary mechanics and geometrical optics". [ 16 ] In a brief paper in 1923, de Broglie wrote : "Dynamics must undergo the same evolution that optics has undergone when undulations took the place of purely geometrical optics." [ 17 ] In his 1924 thesis, though Louis de Broglie did not name the optico-mechanical analogy, he wrote in his introduction, [ 18 ] ... a single principle, that of Maupertuis, and later in another form as Hamilton's Principle of least action ... Fermat's ... principle ..., which nowadays is usually called the principle of least action. ... Huygens propounded an undulatory theory of light, while Newton, calling on an analogy with the material point dynamics that he created, developed a corpuscular theory, the so-called "emission theory", which enabled him even to explain, albeit with a contrived hypothesis, effects nowadays considered wave effects, (i.e., Newton's rings). 
In the opinion of Léon Rosenfeld , a close colleague of Niels Bohr , "... Schrödinger [was] inspired by Hamilton's beautiful comparison of classical mechanics and geometrical optics ..." [ 19 ] The first textbook in English on wave mechanics [ 20 ] devotes the second of its two chapters to "Wave mechanics in relation to ordinary mechanics". It opines "... de Broglie and Schrödinger have turned this false analogy into a true one by using the natural Unit or Measure of Action, h , .... ... We must now go into Hamilton's theory in more detail, for when once its true meaning is grasped the step to wave mechanics is but a short one—indeed now, after the event, almost seems to suggest itself." [ 21 ] According to one textbook, "The first part of our problem, namely, the establishment of a system of first-order equations satisfying the spacetime symmetry condition, can be solved in a very simple way, with the help of the analogy between mechanics and optics, which was the starting point for the development of wave mechanics and which can still be used—with reservations—as a source of inspiration." [ 22 ] Recently the concept has been extended to the wavelength-dependent regime. [ 23 ]
https://en.wikipedia.org/wiki/Hamilton's_optico-mechanical_analogy
Hamilton Othanel Smith (born August 23, 1931 in New York) [ 1 ] is an American microbiologist and Nobel laureate. [ 2 ] [ 3 ] Smith graduated from University Laboratory High School of Urbana, Illinois . He attended the University of Illinois at Urbana-Champaign , but in 1950 transferred to the University of California, Berkeley , where he earned his B.A. in Mathematics in 1952. [ 1 ] He received his medical degree from Johns Hopkins School of Medicine in 1956. Between 1956 and 1957 Smith worked for the Washington University in St. Louis Medical Service. In 1975, he was awarded a Guggenheim Fellowship, which he spent at the University of Zurich . In 1970, Smith and Kent W. Wilcox discovered the first type II restriction enzyme , [ 4 ] which is now known as HindII. [ 2 ] Smith went on to discover DNA methylases that constitute the other half of the bacterial host restriction and modification systems, as hypothesized by Werner Arber of Switzerland. [ 2 ] He was awarded the Nobel Prize in Physiology or Medicine in 1978 for discovering type II restriction enzymes, with Werner Arber and Daniel Nathans as co-recipients. He later became a leading figure in the nascent field of genomics , when in 1995 he and a team at The Institute for Genomic Research sequenced the first bacterial genome , that of Haemophilus influenzae . [ 5 ] H. influenzae was the same organism in which Smith had discovered restriction enzymes in the late 1960s. He subsequently played a key role in the sequencing of many of the early genomes at The Institute for Genomic Research, and in the assembly of the human genome at Celera Genomics , which he joined when it was founded in 1998. More recently, he has directed a team at the J. Craig Venter Institute that works towards creating a partially synthetic bacterium, Mycoplasma laboratorium . In 2003 the same group synthetically assembled the genome of a virus, Phi X 174 bacteriophage . 
Smith is scientific director of privately held Synthetic Genomics , which was founded in 2005 by Craig Venter to continue this work. Synthetic Genomics is working to produce biofuels on an industrial scale using recombinant algae and other microorganisms. [ 6 ] This article incorporates CC-BY-2.5 text from the reference [ 2 ]
https://en.wikipedia.org/wiki/Hamilton_O._Smith
In quantum mechanics , the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy . Its spectrum , the system's energy spectrum or its set of energy eigenvalues , is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory . The Hamiltonian is named after William Rowan Hamilton , who developed a revolutionary reformulation of Newtonian mechanics , known as Hamiltonian mechanics , which was historically important to the development of quantum physics. Similar to vector notation , it is typically denoted by H ^ {\displaystyle {\hat {H}}} , where the hat indicates that it is an operator. It can also be written as H {\displaystyle H} or H ˇ {\displaystyle {\check {H}}} . The Hamiltonian of a system represents the total energy of the system; that is, the sum of the kinetic and potential energies of all particles associated with the system. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as whether the system contains a single particle or several, whether the particles interact, the kind of potential energy, and whether the potential is time-varying or time-independent. 
By analogy with classical mechanics , the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form H ^ = T ^ + V ^ , {\displaystyle {\hat {H}}={\hat {T}}+{\hat {V}},} where V ^ = V = V ( r , t ) , {\displaystyle {\hat {V}}=V=V(\mathbf {r} ,t),} is the potential energy operator and T ^ = p ^ ⋅ p ^ 2 m = p ^ 2 2 m = − ℏ 2 2 m ∇ 2 , {\displaystyle {\hat {T}}={\frac {\mathbf {\hat {p}} \cdot \mathbf {\hat {p}} }{2m}}={\frac {{\hat {p}}^{2}}{2m}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2},} is the kinetic energy operator in which m {\displaystyle m} is the mass of the particle, the dot denotes the dot product of vectors, and p ^ = − i ℏ ∇ , {\displaystyle {\hat {p}}=-i\hbar \nabla ,} is the momentum operator where a ∇ {\displaystyle \nabla } is the del operator . The dot product of ∇ {\displaystyle \nabla } with itself is the Laplacian ∇ 2 {\displaystyle \nabla ^{2}} . In three dimensions using Cartesian coordinates the Laplace operator is ∇ 2 = ∂ 2 ∂ x 2 + ∂ 2 ∂ y 2 + ∂ 2 ∂ z 2 {\displaystyle \nabla ^{2}={\frac {\partial ^{2}}{{\partial x}^{2}}}+{\frac {\partial ^{2}}{{\partial y}^{2}}}+{\frac {\partial ^{2}}{{\partial z}^{2}}}} Although this is not the technical definition of the Hamiltonian in classical mechanics , it is the form it most commonly takes. Combining these yields the form used in the Schrödinger equation : H ^ = T ^ + V ^ = p ^ ⋅ p ^ 2 m + V ( r , t ) = − ℏ 2 2 m ∇ 2 + V ( r , t ) {\displaystyle {\begin{aligned}{\hat {H}}&={\hat {T}}+{\hat {V}}\\[6pt]&={\frac {\mathbf {\hat {p}} \cdot \mathbf {\hat {p}} }{2m}}+V(\mathbf {r} ,t)\\[6pt]&=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {r} ,t)\end{aligned}}} which allows one to apply the Hamiltonian to systems described by a wave function Ψ ( r , t ) {\displaystyle \Psi (\mathbf {r} ,t)} . This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics. 
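As a concrete illustration of this operator form, the Hamiltonian can be represented as a finite matrix by discretizing the Laplacian on a grid. The following sketch is not part of the article's sources; units with ℏ = m = 1 and a harmonic test potential are assumptions made for simplicity.

```python
import numpy as np

# Sketch (hbar = m = 1 assumed): represent H = T + V on a uniform grid,
# with the Laplacian in T replaced by the three-point finite difference.
hbar = m = 1.0
N, L = 500, 16.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy operator T = -(hbar^2 / 2m) d^2/dx^2 as a tridiagonal matrix
lap = (np.diag(-2.0 * np.ones(N))
       + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2
T = -(hbar**2) / (2.0 * m) * lap

# Potential energy operator V is diagonal in the position basis; a harmonic
# well V(x) = x^2 / 2 is used here as a test case with a known spectrum.
V = np.diag(0.5 * x**2)
H = T + V

# Lowest eigenvalues approach the known spectrum E_n = (n + 1/2) * hbar * w
E = np.linalg.eigvalsh(H)[:3]
assert np.allclose(E, [0.5, 1.5, 2.5], atol=1e-2)
```

Diagonalizing the resulting Hermitian matrix then yields approximate energy eigenvalues and eigenfunctions on the grid.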
One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields. It can be shown that the expectation value of the Hamiltonian, which is the expectation value of the energy, is always greater than or equal to the minimum of the potential of the system. Consider computing the expectation value of the kinetic energy: T = − ℏ 2 2 m ∫ − ∞ + ∞ ψ ∗ d 2 ψ d x 2 d x = − ℏ 2 2 m ( [ ψ ′ ( x ) ψ ∗ ( x ) ] − ∞ + ∞ − ∫ − ∞ + ∞ d ψ d x d ψ ∗ d x d x ) = ℏ 2 2 m ∫ − ∞ + ∞ | d ψ d x | 2 d x ≥ 0 {\displaystyle {\begin{aligned}T&=-{\frac {\hbar ^{2}}{2m}}\int _{-\infty }^{+\infty }\psi ^{*}{\frac {d^{2}\psi }{dx^{2}}}\,dx\\[1ex]&=-{\frac {\hbar ^{2}}{2m}}\left({\left[\psi '(x)\psi ^{*}(x)\right]}_{-\infty }^{+\infty }-\int _{-\infty }^{+\infty }{\frac {d\psi }{dx}}{\frac {d\psi ^{*}}{dx}}\,dx\right)\\[1ex]&={\frac {\hbar ^{2}}{2m}}\int _{-\infty }^{+\infty }\left|{\frac {d\psi }{dx}}\right|^{2}\,dx\geq 0\end{aligned}}} Hence the expectation value of the kinetic energy is always non-negative. This result can be used to calculate the expectation value of the total energy, which for a normalized wavefunction is given as: E = T + ⟨ V ( x ) ⟩ = T + ∫ − ∞ + ∞ V ( x ) | ψ ( x ) | 2 d x ≥ V min ( x ) ∫ − ∞ + ∞ | ψ ( x ) | 2 d x ≥ V min ( x ) {\displaystyle E=T+\langle V(x)\rangle =T+\int _{-\infty }^{+\infty }V(x)|\psi (x)|^{2}\,dx\geq V_{\text{min}}(x)\int _{-\infty }^{+\infty }|\psi (x)|^{2}\,dx\geq V_{\text{min}}(x)} which completes the proof. Similarly, the condition can be generalized to higher dimensions using the divergence theorem . 
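The inequality above can be checked numerically for an arbitrary trial wavefunction. The sketch below is an illustration only; units with ℏ = m = 1 and a harmonic sample potential are assumptions.

```python
import numpy as np

# Sketch (hbar = m = 1 assumed): for a normalized trial state,
# <T> = (hbar^2 / 2m) * integral of |dpsi/dx|^2 is non-negative, and
# E = <T> + <V> is bounded below by the minimum of the potential.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

psi = (1.0 + 0.3 * x) * np.exp(-x**2 / 2.0)   # arbitrary real trial state
psi /= np.sqrt(np.sum(psi**2) * dx)           # normalize

dpsi = np.gradient(psi, dx)
T = 0.5 * np.sum(dpsi**2) * dx                # kinetic expectation value

V = 0.5 * x**2                                # sample potential, V_min = 0
E = T + np.sum(V * psi**2) * dx               # total energy expectation

assert T >= 0.0
assert E >= V.min()
```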
The formalism can be extended to N {\displaystyle N} particles: H ^ = ∑ n = 1 N T ^ n + V ^ {\displaystyle {\hat {H}}=\sum _{n=1}^{N}{\hat {T}}_{n}+{\hat {V}}} where V ^ = V ( r 1 , r 2 , … , r N , t ) , {\displaystyle {\hat {V}}=V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N},t),} is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration) and T ^ n = p ^ n ⋅ p ^ n 2 m n = − ℏ 2 2 m n ∇ n 2 {\displaystyle {\hat {T}}_{n}={\frac {\mathbf {\hat {p}} _{n}\cdot \mathbf {\hat {p}} _{n}}{2m_{n}}}=-{\frac {\hbar ^{2}}{2m_{n}}}\nabla _{n}^{2}} is the kinetic energy operator of particle n {\displaystyle n} , ∇ n {\displaystyle \nabla _{n}} is the gradient for particle n {\displaystyle n} , and ∇ n 2 {\displaystyle \nabla _{n}^{2}} is the Laplacian for particle n : ∇ n 2 = ∂ 2 ∂ x n 2 + ∂ 2 ∂ y n 2 + ∂ 2 ∂ z n 2 , {\displaystyle \nabla _{n}^{2}={\frac {\partial ^{2}}{\partial x_{n}^{2}}}+{\frac {\partial ^{2}}{\partial y_{n}^{2}}}+{\frac {\partial ^{2}}{\partial z_{n}^{2}}},} Combining these yields the Schrödinger Hamiltonian for the N {\displaystyle N} -particle case: H ^ = ∑ n = 1 N T ^ n + V ^ = ∑ n = 1 N p ^ n ⋅ p ^ n 2 m n + V ( r 1 , r 2 , … , r N , t ) = − ℏ 2 2 ∑ n = 1 N 1 m n ∇ n 2 + V ( r 1 , r 2 , … , r N , t ) {\displaystyle {\begin{aligned}{\hat {H}}&=\sum _{n=1}^{N}{\hat {T}}_{n}+{\hat {V}}\\[6pt]&=\sum _{n=1}^{N}{\frac {\mathbf {\hat {p}} _{n}\cdot \mathbf {\hat {p}} _{n}}{2m_{n}}}+V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N},t)\\[6pt]&=-{\frac {\hbar ^{2}}{2}}\sum _{n=1}^{N}{\frac {1}{m_{n}}}\nabla _{n}^{2}+V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N},t)\end{aligned}}} However, complications can arise in the many-body problem . 
Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy. The motion due to any one particle will vary due to the motion of all the other particles in the system. For this reason cross terms for kinetic energy may appear in the Hamiltonian; a mix of the gradients for two particles: − ℏ 2 2 M ∇ i ⋅ ∇ j {\displaystyle -{\frac {\hbar ^{2}}{2M}}\nabla _{i}\cdot \nabla _{j}} where M {\displaystyle M} denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as mass polarization terms , and appear in the Hamiltonian of many-electron atoms (see below). For N {\displaystyle N} interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function V {\displaystyle V} is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle. For non-interacting particles, i.e. 
particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energy for each particle, [ 1 ] that is V = ∑ i = 1 N V ( r i , t ) = V ( r 1 , t ) + V ( r 2 , t ) + ⋯ + V ( r N , t ) {\displaystyle V=\sum _{i=1}^{N}V(\mathbf {r} _{i},t)=V(\mathbf {r} _{1},t)+V(\mathbf {r} _{2},t)+\cdots +V(\mathbf {r} _{N},t)} The general form of the Hamiltonian in this case is: H ^ = − ℏ 2 2 ∑ i = 1 N 1 m i ∇ i 2 + ∑ i = 1 N V i = ∑ i = 1 N ( − ℏ 2 2 m i ∇ i 2 + V i ) = ∑ i = 1 N H ^ i {\displaystyle {\begin{aligned}{\hat {H}}&=-{\frac {\hbar ^{2}}{2}}\sum _{i=1}^{N}{\frac {1}{m_{i}}}\nabla _{i}^{2}+\sum _{i=1}^{N}V_{i}\\[6pt]&=\sum _{i=1}^{N}\left(-{\frac {\hbar ^{2}}{2m_{i}}}\nabla _{i}^{2}+V_{i}\right)\\[6pt]&=\sum _{i=1}^{N}{\hat {H}}_{i}\end{aligned}}} where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation—in practice the particles are almost always influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below. The Hamiltonian generates the time evolution of quantum states. If | ψ ( t ) ⟩ {\displaystyle \left|\psi (t)\right\rangle } is the state of the system at time t {\displaystyle t} , then H | ψ ( t ) ⟩ = i ℏ d d t | ψ ( t ) ⟩ . {\displaystyle H\left|\psi (t)\right\rangle =i\hbar {d \over \ dt}\left|\psi (t)\right\rangle .} This equation is the Schrödinger equation . It takes the same form as the Hamilton–Jacobi equation , which is one of the reasons H {\displaystyle H} is also called the Hamiltonian. 
Given the state at some initial time ( t = 0 {\displaystyle t=0} ), we can solve it to obtain the state at any subsequent time. In particular, if H {\displaystyle H} is independent of time, then | ψ ( t ) ⟩ = e − i H t / ℏ | ψ ( 0 ) ⟩ . {\displaystyle \left|\psi (t)\right\rangle =e^{-iHt/\hbar }\left|\psi (0)\right\rangle .} The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in H {\displaystyle H} . One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous , or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient. By the *- homomorphism property of the functional calculus, the operator U = e − i H t / ℏ {\displaystyle U=e^{-iHt/\hbar }} is a unitary operator . It is the time evolution operator or propagator of a closed quantum system. If the Hamiltonian is time-independent, { U ( t ) } {\displaystyle \{U(t)\}} form a one parameter unitary group (more than a semigroup ); this gives rise to the physical principle of detailed balance . However, in the more general formalism of Dirac , the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way: The eigenkets of H {\displaystyle H} , denoted | a ⟩ {\displaystyle \left|a\right\rangle } , provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted { E a } {\displaystyle \{E_{a}\}} , solving the equation: H | a ⟩ = E a | a ⟩ . {\displaystyle H\left|a\right\rangle =E_{a}\left|a\right\rangle .} Since H {\displaystyle H} is a Hermitian operator , the energy is always a real number . 
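Both properties, the unitarity of the propagator and the reality of the energy eigenvalues, can be verified numerically for a small Hermitian matrix. The 2×2 Hamiltonian below is an arbitrary illustration, with ℏ = 1 as an assumption; the exponential is built from the eigendecomposition rather than a power series.

```python
import numpy as np

# Sketch: for a time-independent Hermitian H (arbitrary 2x2 example,
# hbar = 1 assumed), U(t) = exp(-i H t / hbar) is unitary and preserves
# the norm of the state it evolves.
hbar = 1.0
H = np.array([[1.0, 0.4 - 0.3j],
              [0.4 + 0.3j, -0.5]])

E, P = np.linalg.eigh(H)          # eigenvalues are real since H is Hermitian
t = 0.7
U = P @ np.diag(np.exp(-1j * E * t / hbar)) @ P.conj().T

assert np.allclose(U @ U.conj().T, np.eye(2))     # unitarity
psi0 = np.array([1.0, 0.0], dtype=complex)
psit = U @ psi0                                    # |psi(t)> = U |psi(0)>
assert np.isclose(np.vdot(psit, psit).real, 1.0)   # norm conserved
```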
From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator ). However, all routine quantum mechanical calculations can be done using the physical formulation. [ clarification needed ] Following are expressions for the Hamiltonian in a number of situations. [ 2 ] Typical ways to classify the expressions are the number of particles, number of dimensions, and the nature of the potential energy function—importantly space and time dependence. Masses are denoted by m {\displaystyle m} , and charges by q {\displaystyle q} . The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension: H ^ = − ℏ 2 2 m ∂ 2 ∂ x 2 {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}} and in higher dimensions: H ^ = − ℏ 2 2 m ∇ 2 {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}} For a particle in a region of constant potential V = V 0 {\displaystyle V=V_{0}} (no dependence on space or time), in one dimension, the Hamiltonian is: H ^ = − ℏ 2 2 m ∂ 2 ∂ x 2 + V 0 {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V_{0}} in three dimensions H ^ = − ℏ 2 2 m ∇ 2 + V 0 {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V_{0}} This applies to the elementary " particle in a box " problem, and step potentials . 
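A quick numerical sketch (a finite-difference Laplacian on a grid, with ℏ = m = 1 as assumptions) confirms that a constant potential V₀ merely shifts every energy eigenvalue by V₀ without changing the eigenstates:

```python
import numpy as np

# Sketch (hbar = m = 1 assumed): H = T + V0 has the spectrum of T shifted
# by the constant V0, as for the particle-in-a-box and step potentials.
N, L = 300, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2

V0 = 3.0
E_free = np.linalg.eigvalsh(-0.5 * lap)              # constant V = 0
E_shift = np.linalg.eigvalsh(-0.5 * lap + V0 * np.eye(N))

assert np.allclose(E_shift, E_free + V0)             # uniform shift by V0
```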
For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to: V = k 2 x 2 = m ω 2 2 x 2 {\displaystyle V={\frac {k}{2}}x^{2}={\frac {m\omega ^{2}}{2}}x^{2}} where the angular frequency ω {\displaystyle \omega } , effective spring constant k {\displaystyle k} , and mass m {\displaystyle m} of the oscillator satisfy: ω 2 = k m {\displaystyle \omega ^{2}={\frac {k}{m}}} so the Hamiltonian is: H ^ = − ℏ 2 2 m ∂ 2 ∂ x 2 + m ω 2 2 x 2 {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {m\omega ^{2}}{2}}x^{2}} For three dimensions, this becomes H ^ = − ℏ 2 2 m ∇ 2 + m ω 2 2 r 2 {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+{\frac {m\omega ^{2}}{2}}r^{2}} where the three-dimensional position vector r {\displaystyle \mathbf {r} } using Cartesian coordinates is ( x , y , z ) {\displaystyle (x,y,z)} , its magnitude is r 2 = r ⋅ r = | r | 2 = x 2 + y 2 + z 2 {\displaystyle r^{2}=\mathbf {r} \cdot \mathbf {r} =|\mathbf {r} |^{2}=x^{2}+y^{2}+z^{2}} Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction: H ^ = − ℏ 2 2 m ( ∂ 2 ∂ x 2 + ∂ 2 ∂ y 2 + ∂ 2 ∂ z 2 ) + m ω 2 2 ( x 2 + y 2 + z 2 ) = ( − ℏ 2 2 m ∂ 2 ∂ x 2 + m ω 2 2 x 2 ) + ( − ℏ 2 2 m ∂ 2 ∂ y 2 + m ω 2 2 y 2 ) + ( − ℏ 2 2 m ∂ 2 ∂ z 2 + m ω 2 2 z 2 ) {\displaystyle {\begin{aligned}{\hat {H}}&=-{\frac {\hbar ^{2}}{2m}}\left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {\partial ^{2}}{\partial z^{2}}}\right)+{\frac {m\omega ^{2}}{2}}\left(x^{2}+y^{2}+z^{2}\right)\\[6pt]&=\left(-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {m\omega ^{2}}{2}}x^{2}\right)+\left(-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {m\omega ^{2}}{2}}y^{2}\right)+\left(-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial z^{2}}}+{\frac {m\omega ^{2}}{2}}z^{2}\right)\end{aligned}}} For 
a rigid rotor —i.e., system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom , say due to double or triple chemical bonds ), the Hamiltonian is: H ^ = − ℏ 2 2 I x x J ^ x 2 − ℏ 2 2 I y y J ^ y 2 − ℏ 2 2 I z z J ^ z 2 {\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2I_{xx}}}{\hat {J}}_{x}^{2}-{\frac {\hbar ^{2}}{2I_{yy}}}{\hat {J}}_{y}^{2}-{\frac {\hbar ^{2}}{2I_{zz}}}{\hat {J}}_{z}^{2}} where I x x {\displaystyle I_{xx}} , I y y {\displaystyle I_{yy}} , and I z z {\displaystyle I_{zz}} are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor ), and J ^ x {\displaystyle {\hat {J}}_{x}} , J ^ y {\displaystyle {\hat {J}}_{y}} , and J ^ z {\displaystyle {\hat {J}}_{z}} are the total angular momentum operators (components), about the x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} axes respectively. The Coulomb potential energy for two point charges q 1 {\displaystyle q_{1}} and q 2 {\displaystyle q_{2}} (i.e., those that have no spatial extent independently), in three dimensions, is (in SI units —rather than Gaussian units which are frequently used in electromagnetism ): V = q 1 q 2 4 π ε 0 | r | {\displaystyle V={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}|\mathbf {r} |}}} However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself). 
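In the many-charge formula below, the ½ in the 1/(8πε₀) prefactor compensates for counting each ordered pair (i, j) twice. A small numerical sketch makes this explicit; units with 1/(4πε₀) = 1 and the specific charges and positions are assumptions for illustration.

```python
import numpy as np

# Sketch (units with 1/(4*pi*eps0) = 1 assumed): the double sum over
# ordered pairs i != j with a factor 1/2 equals the sum over unordered pairs.
q = np.array([1.0, -1.0, 2.0])            # arbitrary point charges
r = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])           # arbitrary positions
n = len(q)

V_double = 0.5 * sum(q[i] * q[j] / np.linalg.norm(r[i] - r[j])
                     for i in range(n) for j in range(n) if i != j)

V_pairs = sum(q[i] * q[j] / np.linalg.norm(r[i] - r[j])
              for i in range(n) for j in range(i + 1, n))

assert np.isclose(V_double, V_pairs)      # each pair is counted once
```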
For N {\displaystyle N} charges, the potential energy of charge q j {\displaystyle q_{j}} due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges ): [ 3 ] V j = 1 2 ∑ i ≠ j q i ϕ ( r i ) = 1 8 π ε 0 ∑ i ≠ j q i q j | r i − r j | {\displaystyle V_{j}={\frac {1}{2}}\sum _{i\neq j}q_{i}\phi (\mathbf {r} _{i})={\frac {1}{8\pi \varepsilon _{0}}}\sum _{i\neq j}{\frac {q_{i}q_{j}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}} where ϕ ( r i ) {\displaystyle \phi (\mathbf {r} _{i})} is the electrostatic potential of charge q j {\displaystyle q_{j}} at r i {\displaystyle \mathbf {r} _{i}} . The total potential of the system is then the sum over j {\displaystyle j} : V = 1 8 π ε 0 ∑ j = 1 N ∑ i ≠ j q i q j | r i − r j | {\displaystyle V={\frac {1}{8\pi \varepsilon _{0}}}\sum _{j=1}^{N}\sum _{i\neq j}{\frac {q_{i}q_{j}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}} so the Hamiltonian is: H ^ = − ℏ 2 2 ∑ j = 1 N 1 m j ∇ j 2 + 1 8 π ε 0 ∑ j = 1 N ∑ i ≠ j q i q j | r i − r j | = ∑ j = 1 N ( − ℏ 2 2 m j ∇ j 2 + 1 8 π ε 0 ∑ i ≠ j q i q j | r i − r j | ) {\displaystyle {\begin{aligned}{\hat {H}}&=-{\frac {\hbar ^{2}}{2}}\sum _{j=1}^{N}{\frac {1}{m_{j}}}\nabla _{j}^{2}+{\frac {1}{8\pi \varepsilon _{0}}}\sum _{j=1}^{N}\sum _{i\neq j}{\frac {q_{i}q_{j}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}\\&=\sum _{j=1}^{N}\left(-{\frac {\hbar ^{2}}{2m_{j}}}\nabla _{j}^{2}+{\frac {1}{8\pi \varepsilon _{0}}}\sum _{i\neq j}{\frac {q_{i}q_{j}}{|\mathbf {r} _{i}-\mathbf {r} _{j}|}}\right)\\\end{aligned}}} For an electric dipole moment d {\displaystyle \mathbf {d} } constituting charges of magnitude q {\displaystyle q} , in a uniform, electrostatic field (time-independent) E {\displaystyle \mathbf {E} } , positioned in one place, the potential is: V = − d ^ ⋅ E {\displaystyle V=-\mathbf {\hat {d}} \cdot \mathbf {E} } the dipole moment itself is the operator d ^ = q r ^ {\displaystyle \mathbf {\hat {d}} =q\mathbf {\hat {r}} } Since the particle is 
stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy: H ^ = − d ^ ⋅ E = − q r ^ ⋅ E {\displaystyle {\hat {H}}=-\mathbf {\hat {d}} \cdot \mathbf {E} =-q\mathbf {\hat {r}} \cdot \mathbf {E} } For a magnetic dipole moment μ {\displaystyle {\boldsymbol {\mu }}} in a uniform, magnetostatic field (time-independent) B {\displaystyle \mathbf {B} } , positioned in one place, the potential is: V = − μ ⋅ B {\displaystyle V=-{\boldsymbol {\mu }}\cdot \mathbf {B} } Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy: H ^ = − μ ⋅ B {\displaystyle {\hat {H}}=-{\boldsymbol {\mu }}\cdot \mathbf {B} } For a spin- 1 ⁄ 2 particle, the corresponding spin magnetic moment is: [ 4 ] μ S = g s e 2 m S {\displaystyle {\boldsymbol {\mu }}_{S}={\frac {g_{s}e}{2m}}\mathbf {S} } where g s {\displaystyle g_{s}} is the "spin g-factor " (not to be confused with the gyromagnetic ratio ), e {\displaystyle e} is the electron charge, S {\displaystyle \mathbf {S} } is the spin operator vector, whose components are the Pauli matrices , hence H ^ = g s e 2 m S ⋅ B {\displaystyle {\hat {H}}={\frac {g_{s}e}{2m}}\mathbf {S} \cdot \mathbf {B} } For a particle with mass m {\displaystyle m} and charge q {\displaystyle q} in an electromagnetic field, described by the scalar potential ϕ {\displaystyle \phi } and vector potential A {\displaystyle \mathbf {A} } , there are two parts to the Hamiltonian to substitute for. [ 1 ] The canonical momentum operator p ^ {\displaystyle \mathbf {\hat {p}} } , which includes a contribution from the A {\displaystyle \mathbf {A} } field and fulfils the canonical commutation relation , must be quantized; p ^ = m r ˙ + q A , {\displaystyle \mathbf {\hat {p}} =m{\dot {\mathbf {r} }}+q\mathbf {A} ,} where m r ˙ {\displaystyle m{\dot {\mathbf {r} }}} is the kinetic momentum . 
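The structure of the spin-½ Hamiltonian H = (g_s e / 2m) S·B given above can be sketched with explicit Pauli matrices. Setting ℏ = 1 and the prefactor g_s e / 2m to 1 are assumptions made so that only the structure and the two-level splitting are illustrated.

```python
import numpy as np

# Sketch: spin-1/2 Hamiltonian built from the Pauli matrices, with
# S = (hbar / 2) * sigma and the overall prefactor set to 1 (assumption).
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
S = 0.5 * sigma                      # spin operator components (hbar = 1)

B = np.array([0.0, 0.0, 2.0])        # uniform field along z
H = np.tensordot(B, S, axes=1)       # H = S . B (prefactor 1 assumed)

E = np.linalg.eigvalsh(H)
assert np.allclose(E, [-1.0, 1.0])   # two levels split by |B|
```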
The quantization prescription reads p ^ = − i ℏ ∇ , {\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla ,} so the corresponding kinetic energy operator is T ^ = 1 2 m r ˙ ⋅ r ˙ = 1 2 m ( p ^ − q A ) 2 {\displaystyle {\hat {T}}={\frac {1}{2}}m{\dot {\mathbf {r} }}\cdot {\dot {\mathbf {r} }}={\frac {1}{2m}}\left(\mathbf {\hat {p}} -q\mathbf {A} \right)^{2}} and the potential energy, which is due to the ϕ {\displaystyle \phi } field, is given by V ^ = q ϕ . {\displaystyle {\hat {V}}=q\phi .} Casting all of these into the Hamiltonian gives H ^ = 1 2 m ( − i ℏ ∇ − q A ) 2 + q ϕ . {\displaystyle {\hat {H}}={\frac {1}{2m}}\left(-i\hbar \nabla -q\mathbf {A} \right)^{2}+q\phi .} In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength . A wave propagating in the x {\displaystyle x} direction is a different state from one propagating in the y {\displaystyle y} direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate . It turns out that degeneracy occurs whenever a nontrivial unitary operator U {\displaystyle U} commutes with the Hamiltonian. To see this, suppose that | a ⟩ {\displaystyle |a\rangle } is an energy eigenket. Then U | a ⟩ {\displaystyle U|a\rangle } is an energy eigenket with the same eigenvalue, since U H | a ⟩ = U E a | a ⟩ = E a ( U | a ⟩ ) = H ( U | a ⟩ ) . {\displaystyle UH|a\rangle =UE_{a}|a\rangle =E_{a}(U|a\rangle )=H\;(U|a\rangle ).} Since U {\displaystyle U} is nontrivial, at least one pair of | a ⟩ {\displaystyle |a\rangle } and U | a ⟩ {\displaystyle U|a\rangle } must represent distinct states. Therefore, H {\displaystyle H} has at least one pair of degenerate energy eigenkets. 
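The degeneracy argument can be illustrated with a small matrix example; the three-level toy system below is an assumption, not from the article.

```python
import numpy as np

# Toy sketch of the argument above: a rotation U mixing two degenerate
# basis states commutes with H, and U|a> is an eigenket with the same
# eigenvalue yet a genuinely distinct state.
H = np.diag([1.0, 1.0, 2.0])             # two states share the energy E = 1
th = 0.3
U = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])          # unitary rotation in that subspace

assert np.allclose(U @ H, H @ U)         # [U, H] = 0

a = np.array([1.0, 0.0, 0.0])            # eigenket with H|a> = 1|a>
Ua = U @ a
assert np.allclose(H @ Ua, 1.0 * Ua)     # same eigenvalue
assert not np.allclose(Ua, a)            # distinct state, hence degeneracy
```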
In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator , which rotates the wavefunctions by some angle while otherwise preserving their shape. The existence of a symmetry operator implies the existence of a conserved observable. Let G {\displaystyle G} be the Hermitian generator of U {\displaystyle U} : U = I − i ε G + O ( ε 2 ) {\displaystyle U=I-i\varepsilon G+O(\varepsilon ^{2})} It is straightforward to show that if U {\displaystyle U} commutes with H {\displaystyle H} , then so does G {\displaystyle G} : [ H , G ] = 0 {\displaystyle [H,G]=0} Therefore, ∂ ∂ t ⟨ ψ ( t ) | G | ψ ( t ) ⟩ = 1 i ℏ ⟨ ψ ( t ) | [ G , H ] | ψ ( t ) ⟩ = 0. {\displaystyle {\frac {\partial }{\partial t}}\langle \psi (t)|G|\psi (t)\rangle ={\frac {1}{i\hbar }}\langle \psi (t)|[G,H]|\psi (t)\rangle =0.} In obtaining this result, we have used the Schrödinger equation, as well as its dual , ⟨ ψ ( t ) | H = − i ℏ d d t ⟨ ψ ( t ) | . {\displaystyle \langle \psi (t)|H=-i\hbar {d \over \ dt}\langle \psi (t)|.} Thus, the expected value of the observable G {\displaystyle G} is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum . Hamilton 's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states { | n ⟩ } {\displaystyle \left\{\left|n\right\rangle \right\}} , which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e., ⟨ n ′ | n ⟩ = δ n n ′ {\displaystyle \langle n'|n\rangle =\delta _{nn'}} Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time. 
The instantaneous state of the system at time t {\displaystyle t} , | ψ ( t ) ⟩ {\displaystyle \left|\psi \left(t\right)\right\rangle } , can be expanded in terms of these basis states: | ψ ( t ) ⟩ = ∑ n a n ( t ) | n ⟩ {\displaystyle |\psi (t)\rangle =\sum _{n}a_{n}(t)|n\rangle } where a n ( t ) = ⟨ n | ψ ( t ) ⟩ . {\displaystyle a_{n}(t)=\langle n|\psi (t)\rangle .} The coefficients a n ( t ) {\displaystyle a_{n}(t)} are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole. The expectation value of the Hamiltonian of this state, which is also the mean energy, is ⟨ H ( t ) ⟩ = d e f ⟨ ψ ( t ) | H | ψ ( t ) ⟩ = ∑ n n ′ a n ′ ∗ a n ⟨ n ′ | H | n ⟩ {\displaystyle \langle H(t)\rangle \mathrel {\stackrel {\mathrm {def} }{=}} \langle \psi (t)|H|\psi (t)\rangle =\sum _{nn'}a_{n'}^{*}a_{n}\langle n'|H|n\rangle } where the last step was obtained by expanding | ψ ( t ) ⟩ {\displaystyle \left|\psi \left(t\right)\right\rangle } in terms of the basis states. Each a n ( t ) {\displaystyle a_{n}(t)} actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use a n ( t ) {\displaystyle a_{n}(t)} and its complex conjugate a n ∗ ( t ) {\displaystyle a_{n}^{*}(t)} . 
With this choice of independent variables, we can calculate the partial derivative ∂ ⟨ H ⟩ ∂ a n ′ ∗ = ∑ n a n ⟨ n ′ | H | n ⟩ = ⟨ n ′ | H | ψ ⟩ {\displaystyle {\frac {\partial \langle H\rangle }{\partial a_{n'}^{*}}}=\sum _{n}a_{n}\langle n'|H|n\rangle =\langle n'|H|\psi \rangle } By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to ∂ ⟨ H ⟩ ∂ a n ′ ∗ = i ℏ ∂ a n ′ ∂ t {\displaystyle {\frac {\partial \langle H\rangle }{\partial a_{n'}^{*}}}=i\hbar {\frac {\partial a_{n'}}{\partial t}}} Similarly, one can show that ∂ ⟨ H ⟩ ∂ a n = − i ℏ ∂ a n ∗ ∂ t {\displaystyle {\frac {\partial \langle H\rangle }{\partial a_{n}}}=-i\hbar {\frac {\partial a_{n}^{*}}{\partial t}}} If we define "conjugate momentum" variables π n {\displaystyle \pi _{n}} by π n ( t ) = i ℏ a n ∗ ( t ) {\displaystyle \pi _{n}(t)=i\hbar a_{n}^{*}(t)} then the above equations become ∂ ⟨ H ⟩ ∂ π n = ∂ a n ∂ t , ∂ ⟨ H ⟩ ∂ a n = − ∂ π n ∂ t {\displaystyle {\frac {\partial \langle H\rangle }{\partial \pi _{n}}}={\frac {\partial a_{n}}{\partial t}},\quad {\frac {\partial \langle H\rangle }{\partial a_{n}}}=-{\frac {\partial \pi _{n}}{\partial t}}} which is precisely the form of Hamilton's equations, with the a n {\displaystyle a_{n}} s as the generalized coordinates, the π n {\displaystyle \pi _{n}} s as the conjugate momenta, and ⟨ H ⟩ {\displaystyle \langle H\rangle } taking the place of the classical Hamiltonian.
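The claim that the Schrödinger equation takes the form of Hamilton's equations for the amplitudes can be verified numerically. In this sketch (a randomly generated 3-level Hamiltonian matrix, invented for the demo, with ℏ = 1), the time derivative of the amplitudes is obtained by finite-differencing the exact propagator and compared with the analytic gradient ∂⟨H⟩/∂π_n = (Ha)_n/(iℏ).

```python
import numpy as np

rng = np.random.default_rng(1)
hbar = 1.0
# Invented 3-state matrix <n'|H|n> and initial amplitudes a_n(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2
a0 = rng.normal(size=3) + 1j * rng.normal(size=3)
a0 /= np.linalg.norm(a0)

# a_n(t) from the exact propagator; da/dt by central finite difference
E, V = np.linalg.eigh(H)
def a(t):
    return V @ (np.exp(-1j * E * t / hbar) * (V.conj().T @ a0))
dt = 1e-6
a_dot = (a(dt) - a(-dt)) / (2 * dt)

# Hamilton's equation: da_n/dt = d<H>/d(pi_n) with pi_n = i*hbar*a_n^*;
# since d<H>/d(a_n'^*) = (H a)_n', the right-hand side is (H a)_n / (i hbar)
rhs = (H @ a0) / (1j * hbar)
err = np.max(np.abs(a_dot - rhs))
```

The two sides agree to the accuracy of the finite-difference step, confirming that the amplitudes and the momenta π_n = iℏa_n* evolve canonically.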
https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)
In theoretical physics , Hamiltonian field theory is the field-theoretic analogue to classical Hamiltonian mechanics . It is a formalism in classical field theory alongside Lagrangian field theory . It also has applications in quantum field theory . The Hamiltonian for a system of discrete particles is a function of their generalized coordinates and conjugate momenta, and possibly, time. For continua and fields, Hamiltonian mechanics is unsuitable but can be extended by considering a large number of point masses, and taking the continuous limit, that is, infinitely many particles forming a continuum or field. Since each point mass has one or more degrees of freedom , the field formulation has infinitely many degrees of freedom. The Hamiltonian density is the continuous analogue for fields; it is a function of the fields, the conjugate "momentum" fields, and possibly the space and time coordinates themselves. For one scalar field φ ( x , t ) , the Hamiltonian density is defined from the Lagrangian density by [ nb 1 ] H ( φ , π , x , t ) = π φ ˙ − L ( φ , ∇ φ , φ ˙ , x , t ) {\displaystyle {\mathcal {H}}(\varphi ,\pi ,\mathbf {x} ,t)=\pi {\dot {\varphi }}-{\mathcal {L}}(\varphi ,\nabla \varphi ,{\dot {\varphi }},\mathbf {x} ,t)} where ∇ is the "del" or "nabla" operator , x is the position vector of some point in space, and t is time . The Lagrangian density is a function of the fields in the system, their space and time derivatives, and possibly the space and time coordinates themselves. It is the field analogue to the Lagrangian function for a system of discrete particles described by generalized coordinates. As in Hamiltonian mechanics where every generalized coordinate has a corresponding generalized momentum, the field φ ( x , t ) has a conjugate momentum field π ( x , t ) , defined as the partial derivative of the Lagrangian density with respect to the time derivative of the field, π = ∂ L ∂ φ ˙ {\displaystyle \pi ={\frac {\partial {\mathcal {L}}}{\partial {\dot {\varphi }}}}} , in which the overdot [ nb 2 ] denotes a partial time derivative ∂/∂ t , not a total time derivative d / dt .
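These definitions can be checked symbolically for a simple case. The sketch below assumes a free scalar field with Lagrangian density ℒ = ½φ̇² − ½(∇φ)² − ½m²φ² (an illustrative choice, not prescribed by the text), with plain symbols standing in for the field and its derivatives at a point, and computes π = ∂ℒ/∂φ̇ and ℋ = πφ̇ − ℒ.

```python
import sympy as sp

# Free scalar field assumed for the demo; phi_t and phi_x stand in for the
# time and space derivatives of the field at a point.
phi, phi_t, phi_x, m = sp.symbols('phi phi_t phi_x m', real=True)

L = sp.Rational(1, 2) * (phi_t**2 - phi_x**2 - m**2 * phi**2)  # Lagrangian density
pi = sp.diff(L, phi_t)              # conjugate momentum field: pi = phi_t
Hdens = sp.expand(pi * phi_t - L)   # Hamiltonian density: pi*phi_dot - L
```

The result ℋ = ½φ̇² + ½(∇φ)² + ½m²φ² is the sum of kinetic, gradient, and potential energy densities, as expected.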
For many fields φ i ( x , t ) and their conjugates π i ( x , t ) the Hamiltonian density is a function of them all, where each conjugate field is defined with respect to its field, π i ( x , t ) = ∂ L ∂ φ ˙ i {\displaystyle \pi _{i}(\mathbf {x} ,t)={\frac {\partial {\mathcal {L}}}{\partial {\dot {\varphi }}_{i}}}} . In general, for any number of fields, the volume integral of the Hamiltonian density gives the Hamiltonian, in three spatial dimensions: H = ∫ H d 3 x . {\displaystyle H=\int {\mathcal {H}}\,\mathrm {d} ^{3}x.} The Hamiltonian density is the Hamiltonian per unit spatial volume. The corresponding dimension is [energy][length] −3 , in SI units joules per metre cubed, J m −3 . The above equations and definitions can be extended to vector fields and more generally tensor fields and spinor fields . In physics, tensor fields describe bosons and spinor fields describe fermions . The equations of motion for the fields are similar to the Hamiltonian equations for discrete particles. For any number of fields: ϕ ˙ i = + δ H δ π i , π ˙ i = − δ H δ ϕ i , {\displaystyle {\dot {\phi }}_{i}=+{\frac {\delta {H}}{\delta \pi _{i}}}\,,\quad {\dot {\pi }}_{i}=-{\frac {\delta {H}}{\delta \phi _{i}}}\,,} where again the overdots are partial time derivatives, and the variational derivative with respect to the fields, δ δ ϕ i = ∂ ∂ ϕ i − ∇ ⋅ ∂ ∂ ( ∇ ϕ i ) {\displaystyle {\frac {\delta }{\delta \phi _{i}}}={\frac {\partial }{\partial \phi _{i}}}-\nabla \cdot {\frac {\partial }{\partial (\nabla \phi _{i})}}} with · the dot product , must be used instead of simply partial derivatives . The fields φ i and conjugates π i form an infinite dimensional phase space , because fields have an infinite number of degrees of freedom. For two functionals A and B which depend on the fields φ i and π i , their spatial derivatives, and the space and time coordinates, and provided the fields are zero on the boundary of the volume the integrals are taken over, the field theoretic Poisson bracket is defined as (not to be confused with the anticommutator from quantum mechanics) { A , B } = ∫ ∑ i ( δ A δ ϕ i δ B δ π i − δ A δ π i δ B δ ϕ i ) d 3 x . {\displaystyle \{A,B\}=\int \sum _{i}\left({\frac {\delta A}{\delta \phi _{i}}}{\frac {\delta B}{\delta \pi _{i}}}-{\frac {\delta A}{\delta \pi _{i}}}{\frac {\delta B}{\delta \phi _{i}}}\right)\mathrm {d} ^{3}x.}
[ 1 ] where δ F / δ f {\displaystyle \delta {\mathcal {F}}/\delta f} is the variational derivative. Under the same conditions of vanishing fields on the surface, the following result holds for the time evolution of A (similarly for B ): d A d t = { A , H } + ∂ A ∂ t , {\displaystyle {\frac {\mathrm {d} A}{\mathrm {d} t}}=\{A,H\}+{\frac {\partial A}{\partial t}},} which can be found from the total time derivative of A , integration by parts , and using the above Poisson bracket. The following results are true if the Lagrangian and Hamiltonian densities are explicitly time-independent (they can still have implicit time-dependence via the fields and their derivatives). The Hamiltonian density is the total energy density, the sum of the kinetic energy density ( T {\displaystyle {\mathcal {T}}} ) and the potential energy density ( V {\displaystyle {\mathcal {V}}} ): H = T + V . {\displaystyle {\mathcal {H}}={\mathcal {T}}+{\mathcal {V}}.} Taking the partial time derivative of the definition of the Hamiltonian density above, and using the chain rule for implicit differentiation and the definition of the conjugate momentum field, gives the continuity equation : ∂ H ∂ t + ∇ ⋅ S = 0 , {\displaystyle {\frac {\partial {\mathcal {H}}}{\partial t}}+\nabla \cdot \mathbf {S} =0,} in which the Hamiltonian density can be interpreted as the energy density, and S {\displaystyle \mathbf {S} } the energy flux, or flow of energy per unit time per unit surface area. Covariant Hamiltonian field theory is the relativistic formulation of Hamiltonian field theory. Hamiltonian field theory usually means the symplectic Hamiltonian formalism when applied to classical field theory , which takes the form of the instantaneous Hamiltonian formalism on an infinite-dimensional phase space , and where canonical coordinates are field functions at some instant of time. [ 2 ] This Hamiltonian formalism is applied to quantization of fields , e.g., in quantum gauge theory . In covariant Hamiltonian field theory, canonical momenta p μ i correspond to derivatives of fields with respect to all world coordinates x μ . [ 3 ] Covariant Hamilton equations are equivalent to the Euler–Lagrange equations in the case of hyperregular Lagrangians .
Covariant Hamiltonian field theory is developed in the Hamilton–De Donder, [ 4 ] polysymplectic, [ 5 ] multisymplectic [ 6 ] and k -symplectic [ 7 ] variants. A phase space of covariant Hamiltonian field theory is a finite-dimensional polysymplectic or multisymplectic manifold. Hamiltonian non-autonomous mechanics is formulated as covariant Hamiltonian field theory on fiber bundles over the time axis, i.e. the real line R {\displaystyle \mathbb {R} } .
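The instantaneous Hamiltonian formalism can be made concrete by discretizing space. This sketch (a free scalar field on a periodic 1D lattice; all parameters invented for the demo) integrates the field equations φ̇ = δH/δπ and π̇ = −δH/δφ with a leapfrog scheme and checks that the total energy, the volume integral of the Hamiltonian density, is conserved.

```python
import numpy as np

# Free scalar field on a periodic 1D lattice (all parameters invented).
# Hamilton's equations for the discretized field: phi_dot = pi,
# pi_dot = laplacian(phi) - m^2 phi, integrated with a leapfrog scheme.
N, dx, m, dt = 64, 0.5, 1.0, 0.01
x = dx * np.arange(N)
phi = np.exp(-(x - x.mean())**2)    # initial field: a Gaussian bump
pi = np.zeros(N)                    # conjugate momentum field, initially zero

def laplacian(f):
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def energy(phi, pi):
    grad = (np.roll(phi, -1) - phi) / dx
    return dx * np.sum(0.5 * pi**2 + 0.5 * grad**2 + 0.5 * m**2 * phi**2)

E0 = energy(phi, pi)
for _ in range(1000):
    pi = pi + 0.5 * dt * (laplacian(phi) - m**2 * phi)
    phi = phi + dt * pi
    pi = pi + 0.5 * dt * (laplacian(phi) - m**2 * phi)
rel_drift = abs(energy(phi, pi) - E0) / E0
```

The leapfrog scheme is symplectic, so the discretized energy stays within a small bounded band of its initial value rather than drifting.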
https://en.wikipedia.org/wiki/Hamiltonian_field_theory
Hamiltonian fluid mechanics is the application of Hamiltonian methods to fluid mechanics . Note that this formalism only applies to non- dissipative fluids. Take the simple example of a barotropic , inviscid vorticity-free fluid. Then, the conjugate fields are the mass density field ρ and the velocity potential φ . The Poisson bracket is given by { F , G } = ∫ ( δ F δ ρ δ G δ φ − δ F δ φ δ G δ ρ ) d 3 x {\displaystyle \{F,G\}=\int \left({\frac {\delta F}{\delta \rho }}{\frac {\delta G}{\delta \varphi }}-{\frac {\delta F}{\delta \varphi }}{\frac {\delta G}{\delta \rho }}\right)\mathrm {d} ^{3}x} and the Hamiltonian by: H = ∫ ( 1 2 ρ | ∇ φ | 2 + e ( ρ ) ) d 3 x , {\displaystyle {\mathcal {H}}=\int \left({\frac {1}{2}}\rho |\nabla \varphi |^{2}+e(\rho )\right)\mathrm {d} ^{3}x,} where e is the internal energy density, as a function of ρ . For this barotropic flow, the internal energy is related to the pressure p by: p = ρ e ′ − e , {\displaystyle p=\rho e'-e,} where a prime (′) denotes differentiation with respect to ρ . This Hamiltonian structure gives rise to the following two equations of motion : ∂ ρ ∂ t = + δ H δ φ = − ∇ ⋅ ( ρ u → ) , ∂ φ ∂ t = − δ H δ ρ = − 1 2 u → ⋅ u → − e ′ ( ρ ) , {\displaystyle {\frac {\partial \rho }{\partial t}}=+{\frac {\delta {\mathcal {H}}}{\delta \varphi }}=-\nabla \cdot (\rho {\vec {u}}),\quad {\frac {\partial \varphi }{\partial t}}=-{\frac {\delta {\mathcal {H}}}{\delta \rho }}=-{\frac {1}{2}}{\vec {u}}\cdot {\vec {u}}-e'(\rho ),} where u → = d e f ∇ φ {\displaystyle {\vec {u}}\ {\stackrel {\mathrm {def} }{=}}\ \nabla \varphi } is the velocity and is vorticity-free . The second equation leads to the Euler equations : ∂ u → ∂ t + ( u → ⋅ ∇ ) u → = − 1 ρ ∇ p , {\displaystyle {\frac {\partial {\vec {u}}}{\partial t}}+({\vec {u}}\cdot \nabla ){\vec {u}}=-{\frac {1}{\rho }}\nabla p,} after exploiting the fact that the vorticity is zero: ∇ × u → = 0 → . {\displaystyle \nabla \times {\vec {u}}={\vec {0}}.} As fluid dynamics is described by non-canonical dynamics, which possess an infinite number of Casimir invariants, an alternative Hamiltonian formulation of fluid dynamics can be introduced through the use of Nambu mechanics . [ 1 ] [ 2 ]
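A minimal numerical sketch of this Hamiltonian structure, under invented assumptions: a quadratic barotropic internal energy e(ρ) = c²(ρ − ρ₀)²/(2ρ₀) on a periodic 1D grid, with the pair (ρ, φ) advanced by the continuity and Bernoulli equations. Total mass and the Hamiltonian (energy) should be conserved along the flow.

```python
import numpy as np

# 1D periodic barotropic flow (all parameters invented); internal energy
# e(rho) = c^2 (rho - rho0)^2 / (2 rho0), so e'(rho) = c^2 (rho - rho0) / rho0.
N, dx, dt = 64, 0.5, 0.005
c, rho0 = 1.0, 1.0
x = dx * np.arange(N)
rho = rho0 + 0.01 * np.exp(-(x - x.mean())**2)   # small density bump
varphi = np.zeros(N)                             # velocity potential

def ddx(f):
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def rhs(state):
    rho, varphi = state
    u = ddx(varphi)                                       # u = grad(varphi)
    drho = -ddx(rho * u)                                  # continuity equation
    dvarphi = -(0.5 * u**2 + c**2 * (rho - rho0) / rho0)  # Bernoulli equation
    return np.array([drho, dvarphi])

def energy(rho, varphi):
    u = ddx(varphi)
    return dx * np.sum(0.5 * rho * u**2 + c**2 * (rho - rho0)**2 / (2 * rho0))

state = np.array([rho, varphi])
mass0, E0 = dx * np.sum(rho), energy(rho, varphi)
for _ in range(500):                             # classical RK4 time stepping
    k1 = rhs(state); k2 = rhs(state + 0.5*dt*k1)
    k3 = rhs(state + 0.5*dt*k2); k4 = rhs(state + dt*k3)
    state = state + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)
mass_drift = abs(dx * np.sum(state[0]) - mass0)
energy_drift = abs(energy(state[0], state[1]) - E0) / E0
```

Mass is conserved to machine precision (the flux form sums to zero on a periodic grid), and the energy drift is limited to the small truncation error of the time stepper.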
https://en.wikipedia.org/wiki/Hamiltonian_fluid_mechanics
In physics , Hamiltonian mechanics is a reformulation of Lagrangian mechanics that emerged in 1833. Introduced by Sir William Rowan Hamilton , [ 1 ] Hamiltonian mechanics replaces (generalized) velocities q ˙ i {\displaystyle {\dot {q}}^{i}} used in Lagrangian mechanics with (generalized) momenta . Both theories provide interpretations of classical mechanics and describe the same physical phenomena. Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures ) and serves as a link between classical and quantum mechanics . Let ( M , L ) {\displaystyle (M,{\mathcal {L}})} be a mechanical system with configuration space M {\displaystyle M} and smooth Lagrangian L . {\displaystyle {\mathcal {L}}.} Select a standard coordinate system ( q , q ˙ ) {\displaystyle ({\boldsymbol {q}},{\boldsymbol {\dot {q}}})} on M . {\displaystyle M.} The quantities p i ( q , q ˙ , t ) = def ∂ L / ∂ q ˙ i {\displaystyle \textstyle p_{i}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)~{\stackrel {\text{def}}{=}}~{\partial {\mathcal {L}}}/{\partial {\dot {q}}^{i}}} are called momenta . (Also generalized momenta , conjugate momenta , and canonical momenta ). For a time instant t , {\displaystyle t,} the Legendre transformation of L {\displaystyle {\mathcal {L}}} is defined as the map ( q , q ˙ ) → ( p , q ) {\displaystyle ({\boldsymbol {q}},{\boldsymbol {\dot {q}}})\to \left({\boldsymbol {p}},{\boldsymbol {q}}\right)} which is assumed to have a smooth inverse ( p , q ) → ( q , q ˙ ) . {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})\to ({\boldsymbol {q}},{\boldsymbol {\dot {q}}}).} For a system with n {\displaystyle n} degrees of freedom, the Lagrangian mechanics defines the energy function E L ( q , q ˙ , t ) = def ∑ i = 1 n q ˙ i ∂ L ∂ q ˙ i − L . 
{\displaystyle E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)\,{\stackrel {\text{def}}{=}}\,\sum _{i=1}^{n}{\dot {q}}^{i}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}-{\mathcal {L}}.} The Legendre transform of L {\displaystyle {\mathcal {L}}} turns E L {\displaystyle E_{\mathcal {L}}} into a function H ( p , q , t ) {\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)} known as the Hamiltonian . The Hamiltonian satisfies H ( ∂ L ∂ q ˙ , q , t ) = E L ( q , q ˙ , t ) {\displaystyle {\mathcal {H}}\left({\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {\dot {q}}}}},{\boldsymbol {q}},t\right)=E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)} which implies that H ( p , q , t ) = ∑ i = 1 n p i q ˙ i − L ( q , q ˙ , t ) , {\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)=\sum _{i=1}^{n}p_{i}{\dot {q}}^{i}-{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t),} where the velocities q ˙ = ( q ˙ 1 , … , q ˙ n ) {\displaystyle {\boldsymbol {\dot {q}}}=({\dot {q}}^{1},\ldots ,{\dot {q}}^{n})} are found from the ( n {\displaystyle n} -dimensional) equation p = ∂ L / ∂ q ˙ {\displaystyle \textstyle {\boldsymbol {p}}={\partial {\mathcal {L}}}/{\partial {\boldsymbol {\dot {q}}}}} which, by assumption, is uniquely solvable for ⁠ q ˙ {\displaystyle {\boldsymbol {\dot {q}}}} ⁠ . The ( 2 n {\displaystyle 2n} -dimensional) pair ( p , q ) {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})} is called phase space coordinates . (Also canonical coordinates ). 
In phase space coordinates ⁠ ( p , q ) {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})} ⁠ , the ( n {\displaystyle n} -dimensional) Euler–Lagrange equation ∂ L ∂ q − d d t ∂ L ∂ q ˙ = 0 {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {q}}}}-{\frac {d}{dt}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {\boldsymbol {q}}}}}=0} becomes Hamilton's equations in 2 n {\displaystyle 2n} dimensions d q d t = ∂ H ∂ p , d p d t = − ∂ H ∂ q . {\displaystyle {\frac {\mathrm {d} {\boldsymbol {q}}}{\mathrm {d} t}}={\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}},\quad {\frac {\mathrm {d} {\boldsymbol {p}}}{\mathrm {d} t}}=-{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}.} The Hamiltonian H ( p , q ) {\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}})} is the Legendre transform of the Lagrangian L ( q , q ˙ ) {\displaystyle {\mathcal {L}}({\boldsymbol {q}},{\dot {\boldsymbol {q}}})} , thus one has L ( q , q ˙ ) + H ( p , q ) = p q ˙ {\displaystyle {\mathcal {L}}({\boldsymbol {q}},{\dot {\boldsymbol {q}}})+{\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}})={\boldsymbol {p}}{\dot {\boldsymbol {q}}}} and thus ∂ H ∂ p = q ˙ ∂ L ∂ q = − ∂ H ∂ q , {\displaystyle {\begin{aligned}{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}}&={\dot {\boldsymbol {q}}}\\{\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {q}}}}&=-{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}},\end{aligned}}} Besides, since p = ∂ L / ∂ q ˙ {\displaystyle {\boldsymbol {p}}=\partial {\mathcal {L}}/\partial {\dot {\boldsymbol {q}}}} , the Euler–Lagrange equations yield p ˙ = d p d t = ∂ L ∂ q = − ∂ H ∂ q . 
{\displaystyle {\dot {\boldsymbol {p}}}={\frac {\mathrm {d} {\boldsymbol {p}}}{\mathrm {d} t}}={\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {q}}}}=-{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}.} Let P ( a , b , x a , x b ) {\displaystyle {\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})} be the set of smooth paths q : [ a , b ] → M {\displaystyle {\boldsymbol {q}}:[a,b]\to M} for which q ( a ) = x a {\displaystyle {\boldsymbol {q}}(a)={\boldsymbol {x}}_{a}} and q ( b ) = x b . {\displaystyle {\boldsymbol {q}}(b)={\boldsymbol {x}}_{b}.} The action functional S : P ( a , b , x a , x b ) → R {\displaystyle {\mathcal {S}}:{\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})\to \mathbb {R} } is defined via S [ q ] = ∫ a b L ( t , q ( t ) , q ˙ ( t ) ) d t = ∫ a b ( ∑ i = 1 n p i q ˙ i − H ( p , q , t ) ) d t , {\displaystyle {\mathcal {S}}[{\boldsymbol {q}}]=\int _{a}^{b}{\mathcal {L}}(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))\,dt=\int _{a}^{b}\left(\sum _{i=1}^{n}p_{i}{\dot {q}}^{i}-{\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)\right)\,dt,} where ⁠ q = q ( t ) {\displaystyle {\boldsymbol {q}}={\boldsymbol {q}}(t)} ⁠ , and p = ∂ L / ∂ q ˙ {\displaystyle {\boldsymbol {p}}=\partial {\mathcal {L}}/\partial {\boldsymbol {\dot {q}}}} (see above). A path q ∈ P ( a , b , x a , x b ) {\displaystyle {\boldsymbol {q}}\in {\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})} is a stationary point of S {\displaystyle {\mathcal {S}}} (and hence is an equation of motion) if and only if the path ( p ( t ) , q ( t ) ) {\displaystyle ({\boldsymbol {p}}(t),{\boldsymbol {q}}(t))} in phase space coordinates obeys the Hamilton equations. A simple interpretation of Hamiltonian mechanics comes from its application on a one-dimensional system consisting of one nonrelativistic particle of mass m . 
The value H ( p , q ) {\displaystyle H(p,q)} of the Hamiltonian is the total energy of the system, in this case the sum of kinetic and potential energy , traditionally denoted T and V , respectively. Here p is the momentum mv and q is the space coordinate. Then H = T + V , T = p 2 2 m , V = V ( q ) {\displaystyle {\mathcal {H}}=T+V,\qquad T={\frac {p^{2}}{2m}},\qquad V=V(q)} T is a function of p alone, while V is a function of q alone (i.e., T and V are scleronomic ). In this example, the time derivative of q is the velocity, and so the first Hamilton equation means that the particle's velocity equals the derivative of its kinetic energy with respect to its momentum. The time derivative of the momentum p equals the Newtonian force , and so the second Hamilton equation means that the force equals the negative gradient of potential energy. A spherical pendulum consists of a mass m moving without friction on the surface of a sphere . The only forces acting on the mass are the reaction from the sphere and gravity . Spherical coordinates are used to describe the position of the mass in terms of ( r , θ , φ ) , where r is fixed, r = ℓ . The Lagrangian for this system is [ 2 ] L = 1 2 m ℓ 2 ( θ ˙ 2 + sin 2 ⁡ θ φ ˙ 2 ) + m g ℓ cos ⁡ θ . {\displaystyle L={\frac {1}{2}}m\ell ^{2}\left({\dot {\theta }}^{2}+\sin ^{2}\theta \ {\dot {\varphi }}^{2}\right)+mg\ell \cos \theta .} Thus the Hamiltonian is H = P θ θ ˙ + P φ φ ˙ − L {\displaystyle H=P_{\theta }{\dot {\theta }}+P_{\varphi }{\dot {\varphi }}-L} where P θ = ∂ L ∂ θ ˙ = m ℓ 2 θ ˙ {\displaystyle P_{\theta }={\frac {\partial L}{\partial {\dot {\theta }}}}=m\ell ^{2}{\dot {\theta }}} and P φ = ∂ L ∂ φ ˙ = m ℓ 2 sin 2 θ φ ˙ . 
{\displaystyle P_{\varphi }={\frac {\partial L}{\partial {\dot {\varphi }}}}=m\ell ^{2}\sin ^{2}\!\theta \,{\dot {\varphi }}.} In terms of coordinates and momenta, the Hamiltonian reads H = [ 1 2 m ℓ 2 θ ˙ 2 + 1 2 m ℓ 2 sin 2 θ φ ˙ 2 ] ⏟ T + [ − m g ℓ cos ⁡ θ ] ⏟ V = P θ 2 2 m ℓ 2 + P φ 2 2 m ℓ 2 sin 2 ⁡ θ − m g ℓ cos ⁡ θ . {\displaystyle H=\underbrace {\left[{\frac {1}{2}}m\ell ^{2}{\dot {\theta }}^{2}+{\frac {1}{2}}m\ell ^{2}\sin ^{2}\!\theta \,{\dot {\varphi }}^{2}\right]} _{T}+\underbrace {{\Big [}-mg\ell \cos \theta {\Big ]}} _{V}={\frac {P_{\theta }^{2}}{2m\ell ^{2}}}+{\frac {P_{\varphi }^{2}}{2m\ell ^{2}\sin ^{2}\theta }}-mg\ell \cos \theta .} Hamilton's equations give the time evolution of coordinates and conjugate momenta in four first-order differential equations, θ ˙ = P θ m ℓ 2 φ ˙ = P φ m ℓ 2 sin 2 ⁡ θ P θ ˙ = P φ 2 m ℓ 2 sin 3 ⁡ θ cos ⁡ θ − m g ℓ sin ⁡ θ P φ ˙ = 0. {\displaystyle {\begin{aligned}{\dot {\theta }}&={P_{\theta } \over m\ell ^{2}}\\[6pt]{\dot {\varphi }}&={P_{\varphi } \over m\ell ^{2}\sin ^{2}\theta }\\[6pt]{\dot {P_{\theta }}}&={P_{\varphi }^{2} \over m\ell ^{2}\sin ^{3}\theta }\cos \theta -mg\ell \sin \theta \\[6pt]{\dot {P_{\varphi }}}&=0.\end{aligned}}} Momentum ⁠ P φ {\displaystyle P_{\varphi }} ⁠ , which corresponds to the vertical component of angular momentum ⁠ L z = ℓ sin ⁡ θ × m ℓ sin ⁡ θ φ ˙ {\displaystyle L_{z}=\ell \sin \theta \times m\ell \sin \theta \,{\dot {\varphi }}} ⁠ , is a constant of motion. That is a consequence of the rotational symmetry of the system around the vertical axis. Being absent from the Hamiltonian, azimuth φ {\displaystyle \varphi } is a cyclic coordinate , which implies conservation of its conjugate momentum. Hamilton's equations can be derived by a calculation with the Lagrangian ⁠ L {\displaystyle {\mathcal {L}}} ⁠ , generalized positions q i , and generalized velocities q ˙ i , where ⁠ i = 1 , … , n {\displaystyle i=1,\ldots ,n} ⁠ .
[ 3 ] Here we work off-shell , meaning ⁠ q i {\displaystyle q^{i}} ⁠ , ⁠ q ˙ i {\displaystyle {\dot {q}}^{i}} ⁠ , ⁠ t {\displaystyle t} ⁠ are independent coordinates in phase space, not constrained to follow any equations of motion (in particular, q ˙ i {\displaystyle {\dot {q}}^{i}} is not a derivative of ⁠ q i {\displaystyle q^{i}} ⁠ ). The total differential of the Lagrangian is: d L = ∑ i ( ∂ L ∂ q i d q i + ∂ L ∂ q ˙ i d q ˙ i ) + ∂ L ∂ t d t . {\displaystyle \mathrm {d} {\mathcal {L}}=\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}\,\mathrm {d} {\dot {q}}^{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .} The generalized momentum coordinates were defined as ⁠ p i = ∂ L / ∂ q ˙ i {\displaystyle p_{i}=\partial {\mathcal {L}}/\partial {\dot {q}}^{i}} ⁠ , so we may rewrite the equation as: d L = ∑ i ( ∂ L ∂ q i d q i + p i d q ˙ i ) + ∂ L ∂ t d t = ∑ i ( ∂ L ∂ q i d q i + d ( p i q ˙ i ) − q ˙ i d p i ) + ∂ L ∂ t d t . {\displaystyle {\begin{aligned}\mathrm {d} {\mathcal {L}}=&\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+p_{i}\mathrm {d} {\dot {q}}^{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\mathrm {d} t\\=&\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+\mathrm {d} (p_{i}{\dot {q}}^{i})-{\dot {q}}^{i}\,\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\,.\end{aligned}}} After rearranging, one obtains: d ( ∑ i p i q ˙ i − L ) = ∑ i ( − ∂ L ∂ q i d q i + q ˙ i d p i ) − ∂ L ∂ t d t . 
{\displaystyle \mathrm {d} \!\left(\sum _{i}p_{i}{\dot {q}}^{i}-{\mathcal {L}}\right)=\sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+{\dot {q}}^{i}\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .} The term in parentheses on the left-hand side is just the Hamiltonian H = ∑ p i q ˙ i − L {\textstyle {\mathcal {H}}=\sum p_{i}{\dot {q}}^{i}-{\mathcal {L}}} defined previously, therefore: d H = ∑ i ( − ∂ L ∂ q i d q i + q ˙ i d p i ) − ∂ L ∂ t d t . {\displaystyle \mathrm {d} {\mathcal {H}}=\sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+{\dot {q}}^{i}\,\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .} One may also calculate the total differential of the Hamiltonian H {\displaystyle {\mathcal {H}}} with respect to coordinates ⁠ q i {\displaystyle q^{i}} ⁠ , ⁠ p i {\displaystyle p_{i}} ⁠ , ⁠ t {\displaystyle t} ⁠ instead of ⁠ q i {\displaystyle q^{i}} ⁠ , ⁠ q ˙ i {\displaystyle {\dot {q}}^{i}} ⁠ , ⁠ t {\displaystyle t} ⁠ , yielding: d H = ∑ i ( ∂ H ∂ q i d q i + ∂ H ∂ p i d p i ) + ∂ H ∂ t d t . {\displaystyle \mathrm {d} {\mathcal {H}}=\sum _{i}\left({\frac {\partial {\mathcal {H}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {H}}}{\partial p_{i}}}\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {H}}}{\partial t}}\,\mathrm {d} t\ .} One may now equate these two expressions for ⁠ d H {\displaystyle d{\mathcal {H}}} ⁠ , one in terms of ⁠ L {\displaystyle {\mathcal {L}}} ⁠ , the other in terms of ⁠ H {\displaystyle {\mathcal {H}}} ⁠ : ∑ i ( − ∂ L ∂ q i d q i + q ˙ i d p i ) − ∂ L ∂ t d t = ∑ i ( ∂ H ∂ q i d q i + ∂ H ∂ p i d p i ) + ∂ H ∂ t d t . 
{\displaystyle \sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\dot {q}}^{i}\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ =\ \sum _{i}\left({\frac {\partial {\mathcal {H}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {H}}}{\partial p_{i}}}\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {H}}}{\partial t}}\,\mathrm {d} t\ .} Since these calculations are off-shell, one can equate the respective coefficients of ⁠ d q i {\displaystyle \mathrm {d} q^{i}} ⁠ , ⁠ d p i {\displaystyle \mathrm {d} p_{i}} ⁠ , ⁠ d t {\displaystyle \mathrm {d} t} ⁠ on the two sides: ∂ H ∂ q i = − ∂ L ∂ q i , ∂ H ∂ p i = q ˙ i , ∂ H ∂ t = − ∂ L ∂ t . {\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial p_{i}}}={\dot {q}}^{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\partial {\mathcal {L}} \over \partial t}\ .} On-shell, one substitutes parametric functions q i = q i ( t ) {\displaystyle q^{i}=q^{i}(t)} which define a trajectory in phase space with velocities ⁠ q ˙ i = d d t q i ( t ) {\displaystyle {\dot {q}}^{i}={\tfrac {d}{dt}}q^{i}(t)} ⁠ , obeying Lagrange's equations : d d t ∂ L ∂ q ˙ i − ∂ L ∂ q i = 0 . {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}=0\ .} Rearranging and writing in terms of the on-shell p i = p i ( t ) {\displaystyle p_{i}=p_{i}(t)} gives: ∂ L ∂ q i = p ˙ i . {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial q^{i}}}={\dot {p}}_{i}\ .} Thus Lagrange's equations are equivalent to Hamilton's equations: ∂ H ∂ q i = − p ˙ i , ∂ H ∂ p i = q ˙ i , ∂ H ∂ t = − ∂ L ∂ t . 
{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=-{\dot {p}}_{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial p_{i}}}={\dot {q}}^{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\frac {\partial {\mathcal {L}}}{\partial t}}\,.} In the case of time-independent H {\displaystyle {\mathcal {H}}} and ⁠ L {\displaystyle {\mathcal {L}}} ⁠ , i.e. ⁠ ∂ H / ∂ t = − ∂ L / ∂ t = 0 {\displaystyle \partial {\mathcal {H}}/\partial t=-\partial {\mathcal {L}}/\partial t=0} ⁠ , Hamilton's equations consist of 2 n first-order differential equations , while Lagrange's equations consist of n second-order equations. Hamilton's equations usually do not reduce the difficulty of finding explicit solutions, but important theoretical results can be derived from them, because coordinates and momenta are independent variables with nearly symmetric roles. Hamilton's equations have another advantage over Lagrange's equations: if a system has a symmetry, so that some coordinate q i {\displaystyle q_{i}} does not occur in the Hamiltonian (i.e. a cyclic coordinate ), the corresponding momentum coordinate p i {\displaystyle p_{i}} is conserved along each trajectory, and that coordinate can be reduced to a constant in the other equations of the set. This effectively reduces the problem from n coordinates to ( n − 1) coordinates: this is the basis of symplectic reduction in geometry. In the Lagrangian framework, the conservation of momentum also follows immediately, however all the generalized velocities q ˙ i {\displaystyle {\dot {q}}_{i}} still occur in the Lagrangian, and a system of equations in n coordinates still has to be solved. [ 4 ] The Lagrangian and Hamiltonian approaches provide the groundwork for deeper results in classical mechanics, and suggest analogous formulations in quantum mechanics : the path integral formulation and the Schrödinger equation . 
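Hamilton's equations for the spherical pendulum above can be integrated directly. This sketch (classical RK4 with invented parameters and initial data) confirms the two conserved quantities: the cyclic momentum P_φ, which the equations keep exactly constant, and the Hamiltonian itself.

```python
import numpy as np

# Spherical pendulum: integrate Hamilton's four first-order equations with
# classical RK4 and monitor the constants of motion (parameters invented).
m, l, g = 1.0, 1.0, 9.81

def rhs(s):
    theta, phi, Pth, Pph = s
    return np.array([
        Pth / (m * l**2),
        Pph / (m * l**2 * np.sin(theta)**2),
        Pph**2 * np.cos(theta) / (m * l**2 * np.sin(theta)**3) - m * g * l * np.sin(theta),
        0.0,                            # phi is cyclic: dP_phi/dt = 0
    ])

def hamiltonian(s):
    theta, phi, Pth, Pph = s
    return (Pth**2 / (2 * m * l**2)
            + Pph**2 / (2 * m * l**2 * np.sin(theta)**2)
            - m * g * l * np.cos(theta))

s = np.array([1.0, 0.0, 0.0, 0.5])      # theta, phi, P_theta, P_phi at t = 0
H0, Pphi0 = hamiltonian(s), s[3]
dt = 1e-3
for _ in range(2000):                   # integrate from t = 0 to t = 2
    k1 = rhs(s); k2 = rhs(s + 0.5*dt*k1)
    k3 = rhs(s + 0.5*dt*k2); k4 = rhs(s + dt*k3)
    s = s + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)
H_drift = abs(hamiltonian(s) - H0)
Pphi_drift = abs(s[3] - Pphi0)
```

P_φ stays constant to the bit, since its time derivative vanishes identically, while the energy is conserved up to the small truncation error of RK4.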
In its application to a given system, the Hamiltonian is often taken to be H = T + V {\displaystyle {\mathcal {H}}=T+V} where T {\displaystyle T} is the kinetic energy and V {\displaystyle V} is the potential energy. Using this relation can be simpler than first calculating the Lagrangian, and then deriving the Hamiltonian from the Lagrangian. However, the relation is not true for all systems. The relation holds true for nonrelativistic systems when all of the following conditions are satisfied [ 5 ] [ 6 ] ∂ V ( q , q ˙ , t ) ∂ q ˙ i = 0 , ∀ i {\displaystyle {\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}=0\;,\quad \forall i} ∂ T ( q , q ˙ , t ) ∂ t = 0 {\displaystyle {\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial t}}=0} T ( q , q ˙ ) = ∑ i = 1 n ∑ j = 1 n ( c i j ( q ) q ˙ i q ˙ j ) {\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})=\sum _{i=1}^{n}\sum _{j=1}^{n}{\biggl (}c_{ij}({\boldsymbol {q}}){\dot {q}}_{i}{\dot {q}}_{j}{\biggr )}} where t {\displaystyle t} is time, n {\displaystyle n} is the number of degrees of freedom of the system, and each c i j ( q ) {\displaystyle c_{ij}({\boldsymbol {q}})} is an arbitrary scalar function of q {\displaystyle {\boldsymbol {q}}} . In words, this means that the relation H = T + V {\displaystyle {\mathcal {H}}=T+V} holds true if T {\displaystyle T} does not contain time as an explicit variable (it is scleronomic ), V {\displaystyle V} does not contain generalised velocity as an explicit variable, and each term of T {\displaystyle T} is quadratic in generalised velocity. Preliminary to this proof, it is important to address an ambiguity in the related mathematical notation. 
While a change of variables can be used to equate L ( p , q , t ) = L ( q , q ˙ , t ) {\displaystyle {\mathcal {L}}({\boldsymbol {p}},{\boldsymbol {q}},t)={\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)} , it is important to note that ∂ L ( q , q ˙ , t ) ∂ q ˙ i ≠ ∂ L ( p , q , t ) ∂ q ˙ i {\displaystyle {\frac {\partial {\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}\neq {\frac {\partial {\mathcal {L}}({\boldsymbol {p}},{\boldsymbol {q}},t)}{\partial {\dot {q}}_{i}}}} . In this case, the right hand side always evaluates to 0. To perform a change of variables inside of a partial derivative, the multivariable chain rule should be used. Hence, to avoid ambiguity, the function arguments of any term inside of a partial derivative should be stated. Additionally, this proof uses the notation f ( a , b , c ) = f ( a , b ) {\displaystyle f(a,b,c)=f(a,b)} to imply that ∂ f ( a , b , c ) ∂ c = 0 {\displaystyle {\frac {\partial f(a,b,c)}{\partial c}}=0} . 
Starting from definitions of the Hamiltonian, generalized momenta, and Lagrangian for an n {\displaystyle n} degrees of freedom system H = ∑ i = 1 n ( p i q ˙ i ) − L ( q , q ˙ , t ) {\displaystyle {\mathcal {H}}=\sum _{i=1}^{n}{\biggl (}p_{i}{\dot {q}}_{i}{\biggr )}-{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)} p i ( q , q ˙ , t ) = ∂ L ( q , q ˙ , t ) ∂ q ˙ i {\displaystyle p_{i}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)={\frac {\partial {\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}} L ( q , q ˙ , t ) = T ( q , q ˙ , t ) − V ( q , q ˙ , t ) {\displaystyle {\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)-V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)} Substituting the generalized momenta into the Hamiltonian gives H = ∑ i = 1 n ( ∂ L ( q , q ˙ , t ) ∂ q ˙ i q ˙ i ) − L ( q , q ˙ , t ) {\displaystyle {\mathcal {H}}=\sum _{i=1}^{n}\left({\frac {\partial {\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}{\dot {q}}_{i}\right)-{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)} Substituting the Lagrangian into the result gives H = ∑ i = 1 n ( ∂ ( T ( q , q ˙ , t ) − V ( q , q ˙ , t ) ) ∂ q ˙ i q ˙ i ) − ( T ( q , q ˙ , t ) − V ( q , q ˙ , t ) ) = ∑ i = 1 n ( ∂ T ( q , q ˙ , t ) ∂ q ˙ i q ˙ i − ∂ V ( q , q ˙ , t ) ∂ q ˙ i q ˙ i ) − T ( q , q ˙ , t ) + V ( q , q ˙ , t ) {\displaystyle {\begin{aligned}{\mathcal {H}}&=\sum _{i=1}^{n}\left({\frac {\partial \left(T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)-V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)\right)}{\partial {\dot {q}}_{i}}}{\dot {q}}_{i}\right)-\left(T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)-V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)\right)\\&=\sum _{i=1}^{n}\left({\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}{\dot {q}}_{i}-{\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot 
{q}}},t)}{\partial {\dot {q}}_{i}}}{\dot {q}}_{i}\right)-T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)+V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)\end{aligned}}} Now assume that ∂ V ( q , q ˙ , t ) ∂ q ˙ i = 0 , ∀ i {\displaystyle {\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}=0\;,\quad \forall i} and also assume that ∂ T ( q , q ˙ , t ) ∂ t = 0 {\displaystyle {\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial t}}=0} Applying these assumptions results in H = ∑ i = 1 n ( ∂ T ( q , q ˙ ) ∂ q ˙ i q ˙ i − ∂ V ( q , t ) ∂ q ˙ i q ˙ i ) − T ( q , q ˙ ) + V ( q , t ) = ∑ i = 1 n ( ∂ T ( q , q ˙ ) ∂ q ˙ i q ˙ i ) − T ( q , q ˙ ) + V ( q , t ) {\displaystyle {\begin{aligned}{\mathcal {H}}&=\sum _{i=1}^{n}\left({\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}{\partial {\dot {q}}_{i}}}{\dot {q}}_{i}-{\frac {\partial V({\boldsymbol {q}},t)}{\partial {\dot {q}}_{i}}}{\dot {q}}_{i}\right)-T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})+V({\boldsymbol {q}},t)\\&=\sum _{i=1}^{n}\left({\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}{\partial {\dot {q}}_{i}}}{\dot {q}}_{i}\right)-T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})+V({\boldsymbol {q}},t)\end{aligned}}} Next assume that T is of the form T ( q , q ˙ ) = ∑ i = 1 n ∑ j = 1 n ( c i j ( q ) q ˙ i q ˙ j ) {\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})=\sum _{i=1}^{n}\sum _{j=1}^{n}{\biggl (}c_{ij}({\boldsymbol {q}}){\dot {q}}_{i}{\dot {q}}_{j}{\biggr )}} where each c i j ( q ) {\displaystyle c_{ij}({\boldsymbol {q}})} is an arbitrary scalar function of q {\displaystyle {\boldsymbol {q}}} . 
Differentiating this with respect to q ˙ l {\displaystyle {\dot {q}}_{l}} , l ∈ [ 1 , n ] {\displaystyle l\in [1,n]} , gives ∂ T ( q , q ˙ ) ∂ q ˙ l = ∑ i = 1 n ∑ j = 1 n ( ∂ [ c i j ( q ) q ˙ i q ˙ j ] ∂ q ˙ l ) = ∑ i = 1 n ∑ j = 1 n ( c i j ( q ) ∂ [ q ˙ i q ˙ j ] ∂ q ˙ l ) {\displaystyle {\begin{aligned}{\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}{\partial {\dot {q}}_{l}}}&=\sum _{i=1}^{n}\sum _{j=1}^{n}{\biggl (}{\frac {\partial \left[c_{ij}({\boldsymbol {q}}){\dot {q}}_{i}{\dot {q}}_{j}\right]}{\partial {\dot {q}}_{l}}}{\biggr )}\\&=\sum _{i=1}^{n}\sum _{j=1}^{n}{\biggl (}c_{ij}({\boldsymbol {q}}){\frac {\partial \left[{\dot {q}}_{i}{\dot {q}}_{j}\right]}{\partial {\dot {q}}_{l}}}{\biggr )}\end{aligned}}} Splitting the summation, evaluating the partial derivative, and rejoining the summation gives ∂ T ( q , q ˙ ) ∂ q ˙ l = ∑ i ≠ l n ∑ j ≠ l n ( c i j ( q ) ∂ [ q ˙ i q ˙ j ] ∂ q ˙ l ) + ∑ i ≠ l n ( c i l ( q ) ∂ [ q ˙ i q ˙ l ] ∂ q ˙ l ) + ∑ j ≠ l n ( c l j ( q ) ∂ [ q ˙ l q ˙ j ] ∂ q ˙ l ) + c l l ( q ) ∂ [ q ˙ l 2 ] ∂ q ˙ l = ∑ i ≠ l n ∑ j ≠ l n ( 0 ) + ∑ i ≠ l n ( c i l ( q ) q ˙ i ) + ∑ j ≠ l n ( c l j ( q ) q ˙ j ) + 2 c l l ( q ) q ˙ l = ∑ i = 1 n ( c i l ( q ) q ˙ i ) + ∑ j = 1 n ( c l j ( q ) q ˙ j ) {\displaystyle {\begin{aligned}{\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}{\partial {\dot {q}}_{l}}}&=\sum _{i\neq l}^{n}\sum _{j\neq l}^{n}{\biggl (}c_{ij}({\boldsymbol {q}}){\frac {\partial \left[{\dot {q}}_{i}{\dot {q}}_{j}\right]}{\partial {\dot {q}}_{l}}}{\biggr )}+\sum _{i\neq l}^{n}{\biggl (}c_{il}({\boldsymbol {q}}){\frac {\partial \left[{\dot {q}}_{i}{\dot {q}}_{l}\right]}{\partial {\dot {q}}_{l}}}{\biggr )}+\sum _{j\neq l}^{n}{\biggl (}c_{lj}({\boldsymbol {q}}){\frac {\partial \left[{\dot {q}}_{l}{\dot {q}}_{j}\right]}{\partial {\dot {q}}_{l}}}{\biggr )}+c_{ll}({\boldsymbol {q}}){\frac {\partial \left[{\dot {q}}_{l}^{2}\right]}{\partial {\dot {q}}_{l}}}\\&=\sum _{i\neq l}^{n}\sum _{j\neq l}^{n}{\biggl 
(}0{\biggr )}+\sum _{i\neq l}^{n}{\biggl (}c_{il}({\boldsymbol {q}}){\dot {q}}_{i}{\biggr )}+\sum _{j\neq l}^{n}{\biggl (}c_{lj}({\boldsymbol {q}}){\dot {q}}_{j}{\biggr )}+2c_{ll}({\boldsymbol {q}}){\dot {q}}_{l}\\&=\sum _{i=1}^{n}{\biggl (}c_{il}({\boldsymbol {q}}){\dot {q}}_{i}{\biggr )}+\sum _{j=1}^{n}{\biggl (}c_{lj}({\boldsymbol {q}}){\dot {q}}_{j}{\biggr )}\end{aligned}}} Summing (this multiplied by q ˙ l {\displaystyle {\dot {q}}_{l}} ) over l {\displaystyle l} results in ∑ l = 1 n ( ∂ T ( q , q ˙ ) ∂ q ˙ l q ˙ l ) = ∑ l = 1 n ( ( ∑ i = 1 n ( c i l ( q ) q ˙ i ) + ∑ j = 1 n ( c l j ( q ) q ˙ j ) ) q ˙ l ) = ∑ l = 1 n ∑ i = 1 n ( c i l ( q ) q ˙ i q ˙ l ) + ∑ l = 1 n ∑ j = 1 n ( c l j ( q ) q ˙ j q ˙ l ) = ∑ i = 1 n ∑ l = 1 n ( c i l ( q ) q ˙ i q ˙ l ) + ∑ l = 1 n ∑ j = 1 n ( c l j ( q ) q ˙ l q ˙ j ) = T ( q , q ˙ ) + T ( q , q ˙ ) = 2 T ( q , q ˙ ) {\displaystyle {\begin{aligned}\sum _{l=1}^{n}\left({\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}{\partial {\dot {q}}_{l}}}{\dot {q}}_{l}\right)&=\sum _{l=1}^{n}\left(\left(\sum _{i=1}^{n}{\biggl (}c_{il}({\boldsymbol {q}}){\dot {q}}_{i}{\biggr )}+\sum _{j=1}^{n}{\biggl (}c_{lj}({\boldsymbol {q}}){\dot {q}}_{j}{\biggr )}\right){\dot {q}}_{l}\right)\\&=\sum _{l=1}^{n}\sum _{i=1}^{n}{\biggl (}c_{il}({\boldsymbol {q}}){\dot {q}}_{i}{\dot {q}}_{l}{\biggr )}+\sum _{l=1}^{n}\sum _{j=1}^{n}{\biggl (}c_{lj}({\boldsymbol {q}}){\dot {q}}_{j}{\dot {q}}_{l}{\biggr )}\\&=\sum _{i=1}^{n}\sum _{l=1}^{n}{\biggl (}c_{il}({\boldsymbol {q}}){\dot {q}}_{i}{\dot {q}}_{l}{\biggr )}+\sum _{l=1}^{n}\sum _{j=1}^{n}{\biggl (}c_{lj}({\boldsymbol {q}}){\dot {q}}_{l}{\dot {q}}_{j}{\biggr )}\\&=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})+T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})\\&=2T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})\end{aligned}}} This simplification is a result of Euler's homogeneous function theorem . 
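The identity above can be verified symbolically for a small case. The following sketch (Python with SymPy, n = 2, arbitrary unspecified coefficient functions c_ij(q)) checks that a T quadratic in the generalized velocities satisfies Euler's homogeneous function theorem; it is illustrative only.

```python
import sympy as sp

# Symbolic check (n = 2, arbitrary coefficient functions c_ij(q)) that a T
# quadratic in the generalized velocities obeys sum_l qdot_l dT/dqdot_l = 2T.
q1, q2, qd1, qd2 = sp.symbols('q1 q2 qdot1 qdot2')
c = {(i, j): sp.Function(f'c{i}{j}')(q1, q2) for i in (1, 2) for j in (1, 2)}
qd = {1: qd1, 2: qd2}
T = sum(c[i, j] * qd[i] * qd[j] for i in (1, 2) for j in (1, 2))
euler_sum = sum(sp.diff(T, qd[l]) * qd[l] for l in (1, 2))
assert sp.expand(euler_sum - 2 * T) == 0   # Euler's homogeneous function theorem
```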
Hence, the Hamiltonian becomes H = ∑ i = 1 n ( ∂ T ( q , q ˙ ) ∂ q ˙ i q ˙ i ) − T ( q , q ˙ ) + V ( q , t ) = 2 T ( q , q ˙ ) − T ( q , q ˙ ) + V ( q , t ) = T ( q , q ˙ ) + V ( q , t ) {\displaystyle {\begin{aligned}{\mathcal {H}}&=\sum _{i=1}^{n}\left({\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}{\partial {\dot {q}}_{i}}}{\dot {q}}_{i}\right)-T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})+V({\boldsymbol {q}},t)\\&=2T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})-T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})+V({\boldsymbol {q}},t)\\&=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})+V({\boldsymbol {q}},t)\end{aligned}}} For a system of point masses, the requirement for T {\displaystyle T} to be quadratic in generalised velocity is always satisfied for the case where T ( q , q ˙ , t ) = T ( q , q ˙ ) {\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})} , which is a requirement for H = T + V {\displaystyle {\mathcal {H}}=T+V} anyway. Consider the kinetic energy for a system of N point masses. If it is assumed that T ( q , q ˙ , t ) = T ( q , q ˙ ) {\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})} , then it can be shown that r ˙ k ( q , q ˙ , t ) = r ˙ k ( q , q ˙ ) {\displaystyle {\dot {\mathbf {r} }}_{k}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)={\dot {\mathbf {r} }}_{k}({\boldsymbol {q}},{\boldsymbol {\dot {q}}})} (See Scleronomous § Application ). 
Therefore, the kinetic energy is T ( q , q ˙ ) = 1 2 ∑ k = 1 N ( m k r ˙ k ( q , q ˙ ) ⋅ r ˙ k ( q , q ˙ ) ) {\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})={\frac {1}{2}}\sum _{k=1}^{N}{\biggl (}m_{k}{\dot {\mathbf {r} }}_{k}({\boldsymbol {q}},{\boldsymbol {\dot {q}}})\cdot {\dot {\mathbf {r} }}_{k}({\boldsymbol {q}},{\boldsymbol {\dot {q}}}){\biggr )}} The chain rule for many variables can be used to expand the velocity r ˙ k ( q , q ˙ ) = d r k ( q ) d t = ∑ i = 1 n ( ∂ r k ( q ) ∂ q i q ˙ i ) {\displaystyle {\begin{aligned}{\dot {\mathbf {r} }}_{k}({\boldsymbol {q}},{\boldsymbol {\dot {q}}})&={\frac {d\mathbf {r} _{k}({\boldsymbol {q}})}{dt}}\\&=\sum _{i=1}^{n}\left({\frac {\partial \mathbf {r} _{k}({\boldsymbol {q}})}{\partial q_{i}}}{\dot {q}}_{i}\right)\end{aligned}}} Resulting in T ( q , q ˙ ) = 1 2 ∑ k = 1 N ( m k ( ∑ i = 1 n ( ∂ r k ( q ) ∂ q i q ˙ i ) ⋅ ∑ j = 1 n ( ∂ r k ( q ) ∂ q j q ˙ j ) ) ) = ∑ k = 1 N ∑ i = 1 n ∑ j = 1 n ( 1 2 m k ∂ r k ( q ) ∂ q i ⋅ ∂ r k ( q ) ∂ q j q ˙ i q ˙ j ) = ∑ i = 1 n ∑ j = 1 n ( ∑ k = 1 N ( 1 2 m k ∂ r k ( q ) ∂ q i ⋅ ∂ r k ( q ) ∂ q j ) q ˙ i q ˙ j ) = ∑ i = 1 n ∑ j = 1 n ( c i j ( q ) q ˙ i q ˙ j ) {\displaystyle {\begin{aligned}T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})&={\frac {1}{2}}\sum _{k=1}^{N}\left(m_{k}\left(\sum _{i=1}^{n}\left({\frac {\partial \mathbf {r} _{k}({\boldsymbol {q}})}{\partial q_{i}}}{\dot {q}}_{i}\right)\cdot \sum _{j=1}^{n}\left({\frac {\partial \mathbf {r} _{k}({\boldsymbol {q}})}{\partial q_{j}}}{\dot {q}}_{j}\right)\right)\right)\\&=\sum _{k=1}^{N}\sum _{i=1}^{n}\sum _{j=1}^{n}\left({\frac {1}{2}}m_{k}{\frac {\partial \mathbf {r} _{k}({\boldsymbol {q}})}{\partial q_{i}}}\cdot {\frac {\partial \mathbf {r} _{k}({\boldsymbol {q}})}{\partial q_{j}}}{\dot {q}}_{i}{\dot {q}}_{j}\right)\\&=\sum _{i=1}^{n}\sum _{j=1}^{n}\left(\sum _{k=1}^{N}\left({\frac {1}{2}}m_{k}{\frac {\partial \mathbf {r} _{k}({\boldsymbol {q}})}{\partial q_{i}}}\cdot {\frac {\partial \mathbf {r} _{k}({\boldsymbol 
{q}})}{\partial q_{j}}}\right){\dot {q}}_{i}{\dot {q}}_{j}\right)\\&=\sum _{i=1}^{n}\sum _{j=1}^{n}{\biggl (}c_{ij}({\boldsymbol {q}}){\dot {q}}_{i}{\dot {q}}_{j}{\biggr )}\end{aligned}}} This is of the required form. If the conditions for H = T + V {\displaystyle {\mathcal {H}}=T+V} are satisfied, then conservation of the Hamiltonian implies conservation of energy. This requires the additional condition that V {\displaystyle V} does not contain time as an explicit variable. ∂ V ( q , q ˙ , t ) ∂ t = 0 {\displaystyle {\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial t}}=0} In summary, the requirements for H = T + V = constant of time {\displaystyle {\mathcal {H}}=T+V={\text{constant of time}}} to be satisfied for a nonrelativistic system are [ 5 ] [ 6 ] that the potential V {\displaystyle V} be independent of the generalized velocities, that the kinetic energy T {\displaystyle T} be a homogeneous quadratic function of them, and that neither T {\displaystyle T} nor V {\displaystyle V} depend explicitly on time. Regarding extensions to the Euler–Lagrange formulation which use dissipation functions (See Lagrangian mechanics § Extensions to include non-conservative forces ), e.g. the Rayleigh dissipation function , energy is not conserved when a dissipation function has effect. The link between this and the former requirements can be explained by relating the extended and conventional Euler–Lagrange equations: grouping the extended terms into the potential function produces a velocity-dependent potential. Hence, the requirements are not satisfied when a dissipation function has effect. A sufficient illustration of Hamiltonian mechanics is given by the Hamiltonian of a charged particle in an electromagnetic field .
In Cartesian coordinates the Lagrangian of a non-relativistic classical particle in an electromagnetic field is (in SI Units ): L = ∑ i 1 2 m x ˙ i 2 + ∑ i q x ˙ i A i − q φ , {\displaystyle {\mathcal {L}}=\sum _{i}{\tfrac {1}{2}}m{\dot {x}}_{i}^{2}+\sum _{i}q{\dot {x}}_{i}A_{i}-q\varphi ,} where q is the electric charge of the particle, φ is the electric scalar potential , and the A i are the components of the magnetic vector potential that may all explicitly depend on x i {\displaystyle x_{i}} and ⁠ t {\displaystyle t} ⁠ . This Lagrangian, combined with Euler–Lagrange equation , produces the Lorentz force law m x ¨ = q E + q x ˙ × B , {\displaystyle m{\ddot {\mathbf {x} }}=q\mathbf {E} +q{\dot {\mathbf {x} }}\times \mathbf {B} \,,} and is called minimal coupling . The canonical momenta are given by: p i = ∂ L ∂ x ˙ i = m x ˙ i + q A i . {\displaystyle p_{i}={\frac {\partial {\mathcal {L}}}{\partial {\dot {x}}_{i}}}=m{\dot {x}}_{i}+qA_{i}.} The Hamiltonian, as the Legendre transformation of the Lagrangian, is therefore: H = ∑ i x ˙ i p i − L = ∑ i ( p i − q A i ) 2 2 m + q φ . {\displaystyle {\mathcal {H}}=\sum _{i}{\dot {x}}_{i}p_{i}-{\mathcal {L}}=\sum _{i}{\frac {\left(p_{i}-qA_{i}\right)^{2}}{2m}}+q\varphi .} This equation is used frequently in quantum mechanics . Under gauge transformation : A → A + ∇ f , φ → φ − f ˙ , {\displaystyle \mathbf {A} \rightarrow \mathbf {A} +\nabla f\,,\quad \varphi \rightarrow \varphi -{\dot {f}}\,,} where f ( r , t ) is any scalar function of space and time. 
The aforementioned Lagrangian, the canonical momenta, and the Hamiltonian transform like: L → L ′ = L + q d f d t , p → p ′ = p + q ∇ f , H → H ′ = H − q ∂ f ∂ t , {\displaystyle L\rightarrow L'=L+q{\frac {df}{dt}}\,,\quad \mathbf {p} \rightarrow \mathbf {p'} =\mathbf {p} +q\nabla f\,,\quad H\rightarrow H'=H-q{\frac {\partial f}{\partial t}}\,,} which still produces the same Hamilton's equation: ∂ H ′ ∂ x i | p i ′ = ∂ ∂ x i | p i ′ ( x ˙ i p i ′ − L ′ ) = − ∂ L ′ ∂ x i | p i ′ = − ∂ L ∂ x i | p i ′ − q ∂ ∂ x i | p i ′ d f d t = − d d t ( ∂ L ∂ x ˙ i | p i ′ + q ∂ f ∂ x i | p i ′ ) = − p ˙ i ′ {\displaystyle {\begin{aligned}\left.{\frac {\partial H'}{\partial {x_{i}}}}\right|_{p'_{i}}&=\left.{\frac {\partial }{\partial {x_{i}}}}\right|_{p'_{i}}({\dot {x}}_{i}p'_{i}-L')=-\left.{\frac {\partial L'}{\partial {x_{i}}}}\right|_{p'_{i}}\\&=-\left.{\frac {\partial L}{\partial {x_{i}}}}\right|_{p'_{i}}-q\left.{\frac {\partial }{\partial {x_{i}}}}\right|_{p'_{i}}{\frac {df}{dt}}\\&=-{\frac {d}{dt}}\left(\left.{\frac {\partial L}{\partial {{\dot {x}}_{i}}}}\right|_{p'_{i}}+q\left.{\frac {\partial f}{\partial {x_{i}}}}\right|_{p'_{i}}\right)\\&=-{\dot {p}}'_{i}\end{aligned}}} In quantum mechanics, the wave function will also undergo a local U(1) group transformation [ 7 ] during the Gauge Transformation, which implies that all physical results must be invariant under local U(1) transformations. 
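The Legendre transform that produced the electromagnetic Hamiltonian above can be checked symbolically. The following one-dimensional SymPy sketch treats the symbols as scalar stand-ins for a single Cartesian component, so it is illustrative only, not a full vector treatment.

```python
import sympy as sp

# One-dimensional SymPy sketch of the Legendre transform above; the symbols
# stand in for a single Cartesian component of the EM Lagrangian.
m, q, xdot, p, A, phi = sp.symbols('m q xdot p A phi', positive=True)
L = m * xdot**2 / 2 + q * xdot * A - q * phi      # EM Lagrangian (one component)
p_expr = sp.diff(L, xdot)                         # canonical momentum m*xdot + q*A
xdot_of_p = sp.solve(sp.Eq(p, p_expr), xdot)[0]   # invert for the velocity
H = (xdot * p - L).subs(xdot, xdot_of_p)          # Legendre transform
assert sp.simplify(H - ((p - q * A)**2 / (2 * m) + q * phi)) == 0
```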
The relativistic Lagrangian for a particle ( rest mass m {\displaystyle m} and charge ⁠ q {\displaystyle q} ⁠ ) is given by: L ( t ) = − m c 2 1 − x ˙ ( t ) 2 c 2 + q x ˙ ( t ) ⋅ A ( x ( t ) , t ) − q φ ( x ( t ) , t ) {\displaystyle {\mathcal {L}}(t)=-mc^{2}{\sqrt {1-{\frac {{{\dot {\mathbf {x} }}(t)}^{2}}{c^{2}}}}}+q{\dot {\mathbf {x} }}(t)\cdot \mathbf {A} \left(\mathbf {x} (t),t\right)-q\varphi \left(\mathbf {x} (t),t\right)} Thus the particle's canonical momentum is p ( t ) = ∂ L ∂ x ˙ = m x ˙ 1 − x ˙ 2 c 2 + q A {\displaystyle \mathbf {p} (t)={\frac {\partial {\mathcal {L}}}{\partial {\dot {\mathbf {x} }}}}={\frac {m{\dot {\mathbf {x} }}}{\sqrt {1-{\frac {{\dot {\mathbf {x} }}^{2}}{c^{2}}}}}}+q\mathbf {A} } that is, the sum of the kinetic momentum and the potential momentum. Solving for the velocity, we get x ˙ ( t ) = p − q A m 2 + 1 c 2 ( p − q A ) 2 {\displaystyle {\dot {\mathbf {x} }}(t)={\frac {\mathbf {p} -q\mathbf {A} }{\sqrt {m^{2}+{\frac {1}{c^{2}}}{\left(\mathbf {p} -q\mathbf {A} \right)}^{2}}}}} So the Hamiltonian is H ( t ) = x ˙ ⋅ p − L = c m 2 c 2 + ( p − q A ) 2 + q φ {\displaystyle {\mathcal {H}}(t)={\dot {\mathbf {x} }}\cdot \mathbf {p} -{\mathcal {L}}=c{\sqrt {m^{2}c^{2}+{\left(\mathbf {p} -q\mathbf {A} \right)}^{2}}}+q\varphi } This results in the force equation (equivalent to the Euler–Lagrange equation ) p ˙ = − ∂ H ∂ x = q x ˙ ⋅ ( ∇ A ) − q ∇ φ = q ∇ ( x ˙ ⋅ A ) − q ∇ φ {\displaystyle {\dot {\mathbf {p} }}=-{\frac {\partial {\mathcal {H}}}{\partial \mathbf {x} }}=q{\dot {\mathbf {x} }}\cdot ({\boldsymbol {\nabla }}\mathbf {A} )-q{\boldsymbol {\nabla }}\varphi =q{\boldsymbol {\nabla }}({\dot {\mathbf {x} }}\cdot \mathbf {A} )-q{\boldsymbol {\nabla }}\varphi } from which one can derive d d t ( m x ˙ 1 − x ˙ 2 c 2 ) = d d t ( p − q A ) = p ˙ − q ∂ A ∂ t − q ( x ˙ ⋅ ∇ ) A = q ∇ ( x ˙ ⋅ A ) − q ∇ φ − q ∂ A ∂ t − q ( x ˙ ⋅ ∇ ) A = q E + q x ˙ × B {\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {m{\dot {\mathbf 
{x} }}}{\sqrt {1-{\frac {{\dot {\mathbf {x} }}^{2}}{c^{2}}}}}}\right)&={\frac {\mathrm {d} }{\mathrm {d} t}}(\mathbf {p} -q\mathbf {A} )={\dot {\mathbf {p} }}-q{\frac {\partial \mathbf {A} }{\partial t}}-q({\dot {\mathbf {x} }}\cdot \nabla )\mathbf {A} \\&=q{\boldsymbol {\nabla }}({\dot {\mathbf {x} }}\cdot \mathbf {A} )-q{\boldsymbol {\nabla }}\varphi -q{\frac {\partial \mathbf {A} }{\partial t}}-q({\dot {\mathbf {x} }}\cdot \nabla )\mathbf {A} \\&=q\mathbf {E} +q{\dot {\mathbf {x} }}\times \mathbf {B} \end{aligned}}} The above derivation makes use of the vector calculus identity : 1 2 ∇ ( A ⋅ A ) = A ⋅ J A = A ⋅ ( ∇ A ) = ( A ⋅ ∇ ) A + A × ( ∇ × A ) . {\displaystyle {\tfrac {1}{2}}\nabla \left(\mathbf {A} \cdot \mathbf {A} \right)=\mathbf {A} \cdot \mathbf {J} _{\mathbf {A} }=\mathbf {A} \cdot (\nabla \mathbf {A} )=(\mathbf {A} \cdot \nabla )\mathbf {A} +\mathbf {A} \times (\nabla \times \mathbf {A} ).} An equivalent expression for the Hamiltonian as function of the relativistic (kinetic) momentum, ⁠ P = γ m x ˙ ( t ) = p − q A {\displaystyle \mathbf {P} =\gamma m{\dot {\mathbf {x} }}(t)=\mathbf {p} -q\mathbf {A} } ⁠ , is H ( t ) = x ˙ ( t ) ⋅ P ( t ) + m c 2 γ + q φ ( x ( t ) , t ) = γ m c 2 + q φ ( x ( t ) , t ) = E + V {\displaystyle {\mathcal {H}}(t)={\dot {\mathbf {x} }}(t)\cdot \mathbf {P} (t)+{\frac {mc^{2}}{\gamma }}+q\varphi (\mathbf {x} (t),t)=\gamma mc^{2}+q\varphi (\mathbf {x} (t),t)=E+V} This has the advantage that kinetic momentum P {\displaystyle \mathbf {P} } can be measured experimentally whereas canonical momentum p {\displaystyle \mathbf {p} } cannot. Notice that the Hamiltonian ( total energy ) can be viewed as the sum of the relativistic energy (kinetic+rest) , ⁠ E = γ m c 2 {\displaystyle E=\gamma mc^{2}} ⁠ , plus the potential energy , ⁠ V = q φ {\displaystyle V=q\varphi } ⁠ . 
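As a numerical sanity check of the relativistic Legendre transform above, the sketch below works in one dimension with arbitrary illustrative values (not drawn from the text) and confirms that ẋ·p − L agrees with the closed form c√(m²c² + (p − qA)²) + qφ.

```python
import math

# Numerical sanity check, in one dimension with arbitrary illustrative values,
# that xdot*p - L equals the closed form c*sqrt(m^2 c^2 + (p - qA)^2) + q*phi.
m, c, q, A, phi, p = 2.0, 3.0, 0.5, 1.2, 0.7, 4.0
kin = p - q * A                                   # kinetic momentum p - qA
xdot = kin / math.sqrt(m**2 + kin**2 / c**2)      # velocity solved from p
L = -m * c**2 * math.sqrt(1 - xdot**2 / c**2) + q * xdot * A - q * phi
H_legendre = xdot * p - L
H_closed = c * math.sqrt(m**2 * c**2 + kin**2) + q * phi
assert abs(H_legendre - H_closed) < 1e-9
```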
The Hamiltonian can induce a symplectic structure on a smooth even-dimensional manifold M 2 n in several equivalent ways, the best known being the following: [ 8 ] As a closed nondegenerate symplectic 2-form ω . According to Darboux's theorem , in a small neighbourhood around any point on M there exist suitable local coordinates p 1 , ⋯ , p n , q 1 , ⋯ , q n {\displaystyle p_{1},\cdots ,p_{n},\ q_{1},\cdots ,q_{n}} ( canonical or symplectic coordinates) in which the symplectic form becomes: ω = ∑ i = 1 n d p i ∧ d q i . {\displaystyle \omega =\sum _{i=1}^{n}dp_{i}\wedge dq_{i}\,.} The form ω {\displaystyle \omega } induces a natural isomorphism of the tangent space with the cotangent space : ⁠ T x M ≅ T x ∗ M {\displaystyle T_{x}M\cong T_{x}^{*}M} ⁠ . This is done by mapping a vector ξ ∈ T x M {\displaystyle \xi \in T_{x}M} to the 1-form ⁠ ω ξ ∈ T x ∗ M {\displaystyle \omega _{\xi }\in T_{x}^{*}M} ⁠ , where ω ξ ( η ) = ω ( η , ξ ) {\displaystyle \omega _{\xi }(\eta )=\omega (\eta ,\xi )} for all ⁠ η ∈ T x M {\displaystyle \eta \in T_{x}M} ⁠ . Due to the bilinearity and non-degeneracy of ⁠ ω {\displaystyle \omega } ⁠ , and the fact that ⁠ dim ⁡ T x M = dim ⁡ T x ∗ M {\displaystyle \dim T_{x}M=\dim T_{x}^{*}M} ⁠ , the mapping ξ → ω ξ {\displaystyle \xi \to \omega _{\xi }} is indeed a linear isomorphism . This isomorphism is natural in that it does not change with change of coordinates on M . {\displaystyle M.} Repeating over all ⁠ x ∈ M {\displaystyle x\in M} ⁠ , we end up with an isomorphism J − 1 : Vect ( M ) → Ω 1 ( M ) {\displaystyle J^{-1}:{\text{Vect}}(M)\to \Omega ^{1}(M)} between the infinite-dimensional space of smooth vector fields and that of smooth 1-forms. For every f , g ∈ C ∞ ( M , R ) {\displaystyle f,g\in C^{\infty }(M,\mathbb {R} )} and ⁠ ξ , η ∈ Vect ( M ) {\displaystyle \xi ,\eta \in {\text{Vect}}(M)} ⁠ , J − 1 ( f ξ + g η ) = f J − 1 ( ξ ) + g J − 1 ( η ) . 
{\displaystyle J^{-1}(f\xi +g\eta )=fJ^{-1}(\xi )+gJ^{-1}(\eta ).} (In algebraic terms, one would say that the C ∞ ( M , R ) {\displaystyle C^{\infty }(M,\mathbb {R} )} -modules Vect ( M ) {\displaystyle {\text{Vect}}(M)} and Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} are isomorphic). If ⁠ H ∈ C ∞ ( M × R t , R ) {\displaystyle H\in C^{\infty }(M\times \mathbb {R} _{t},\mathbb {R} )} ⁠ , then, for every fixed ⁠ t ∈ R t {\displaystyle t\in \mathbb {R} _{t}} ⁠ , ⁠ d H ∈ Ω 1 ( M ) {\displaystyle dH\in \Omega ^{1}(M)} ⁠ , and ⁠ J ( d H ) ∈ Vect ( M ) {\displaystyle J(dH)\in {\text{Vect}}(M)} ⁠ . J ( d H ) {\displaystyle J(dH)} is known as a Hamiltonian vector field . The respective differential equation on M {\displaystyle M} x ˙ = J ( d H ) ( x ) {\displaystyle {\dot {x}}=J(dH)(x)} is called Hamilton's equation . Here x = x ( t ) {\displaystyle x=x(t)} and J ( d H ) ( x ) ∈ T x M {\displaystyle J(dH)(x)\in T_{x}M} is the (time-dependent) value of the vector field J ( d H ) {\displaystyle J(dH)} at ⁠ x ∈ M {\displaystyle x\in M} ⁠ . A Hamiltonian system may be understood as a fiber bundle E over time R , with the fiber E t being the position space at time t ∈ R . The Lagrangian is thus a function on the jet bundle J over E ; taking the fiberwise Legendre transform of the Lagrangian produces a function on the dual bundle over time whose fiber at t is the cotangent space T ∗ E t , which comes equipped with a natural symplectic form , and this latter function is the Hamiltonian. The correspondence between Lagrangian and Hamiltonian mechanics is achieved with the tautological one-form . Any smooth real-valued function H on a symplectic manifold can be used to define a Hamiltonian system . The function H is known as "the Hamiltonian" or "the energy function." The symplectic manifold is then called the phase space . The Hamiltonian induces a special vector field on the symplectic manifold, known as the Hamiltonian vector field . 
The Hamiltonian vector field induces a Hamiltonian flow on the manifold. This is a one-parameter family of transformations of the manifold (the parameter of the curves is commonly called "the time"); in other words, an isotopy of symplectomorphisms , starting with the identity. By Liouville's theorem , each symplectomorphism preserves the volume form on the phase space . The collection of symplectomorphisms induced by the Hamiltonian flow is commonly called "the Hamiltonian mechanics" of the Hamiltonian system. The symplectic structure induces a Poisson bracket . The Poisson bracket gives the space of functions on the manifold the structure of a Lie algebra . If F and G are smooth functions on M then the smooth function ω ( J ( dF ), J ( dG )) is properly defined; it is called a Poisson bracket of functions F and G and is denoted { F , G } . The Poisson bracket has the following properties: Given a function f d d t f = ∂ ∂ t f + { f , H } , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}f={\frac {\partial }{\partial t}}f+\left\{f,{\mathcal {H}}\right\},} if there is a probability distribution ρ , then (since the phase space velocity ( p ˙ i , q ˙ i ) {\displaystyle ({\dot {p}}_{i},{\dot {q}}_{i})} has zero divergence and probability is conserved) its convective derivative can be shown to be zero and so ∂ ∂ t ρ = − { ρ , H } {\displaystyle {\frac {\partial }{\partial t}}\rho =-\left\{\rho ,{\mathcal {H}}\right\}} This is called Liouville's theorem . Every smooth function G over the symplectic manifold generates a one-parameter family of symplectomorphisms and if { G , H } = 0 , then G is conserved and the symplectomorphisms are symmetry transformations . A Hamiltonian may have multiple conserved quantities G i . If the symplectic manifold has dimension 2 n and there are n functionally independent conserved quantities G i which are in involution (i.e., { G i , G j } = 0 ), then the Hamiltonian is Liouville integrable . 
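The statement that { G , H } = 0 implies G is conserved can be illustrated with the canonical Poisson bracket in coordinates. The sketch below (SymPy, planar motion, an unspecified central potential V as an assumed example) verifies that angular momentum commutes with such a Hamiltonian.

```python
import sympy as sp

# Canonical Poisson bracket, illustrating that a quantity G with {G, H} = 0
# is conserved: here angular momentum L_z for any central potential V(r^2)
# in the plane (V is an unspecified smooth function; an assumed example).
q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')

def poisson(f, g):
    """Canonical bracket {f, g} = sum_i df/dq_i dg/dp_i - df/dp_i dg/dq_i."""
    return sum(sp.diff(f, qi) * sp.diff(g, pi) - sp.diff(f, pi) * sp.diff(g, qi)
               for qi, pi in [(q1, p1), (q2, p2)])

V = sp.Function('V')
H = (p1**2 + p2**2) / 2 + V(q1**2 + q2**2)   # kinetic term plus central potential
Lz = q1 * p2 - q2 * p1                       # angular momentum about the z-axis
assert sp.simplify(poisson(Lz, H)) == 0      # {L_z, H} = 0  =>  L_z is conserved
```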
The Liouville–Arnold theorem says that, locally, any Liouville integrable Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the conserved quantities G i as coordinates; the new coordinates are called action–angle coordinates . The transformed Hamiltonian depends only on the G i , and hence the equations of motion have the simple form G ˙ i = 0 , φ ˙ i = F i ( G ) {\displaystyle {\dot {G}}_{i}=0\quad ,\quad {\dot {\varphi }}_{i}=F_{i}(G)} for some function F . [ 9 ] There is an entire field focusing on small deviations from integrable systems governed by the KAM theorem . The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic ; concepts of measure, completeness, integrability and stability are poorly defined. An important special case consists of those Hamiltonians that are quadratic forms , that is, Hamiltonians that can be written as H ( q , p ) = 1 2 ⟨ p , p ⟩ q {\displaystyle {\mathcal {H}}(q,p)={\tfrac {1}{2}}\langle p,p\rangle _{q}} where ⟨ , ⟩ q is a smoothly varying inner product on the fibers T ∗ q Q , the cotangent space to the point q in the configuration space , sometimes called a cometric. This Hamiltonian consists entirely of the kinetic term. If one considers a Riemannian manifold or a pseudo-Riemannian manifold , the Riemannian metric induces a linear isomorphism between the tangent and cotangent bundles. (See Musical isomorphism ). Using this isomorphism, one can define a cometric. (In coordinates, the matrix defining the cometric is the inverse of the matrix defining the metric.) The solutions to the Hamilton–Jacobi equations for this Hamiltonian are then the same as the geodesics on the manifold. In particular, the Hamiltonian flow in this case is the same thing as the geodesic flow . The existence of such solutions, and the completeness of the set of solutions, are discussed in detail in the article on geodesics . See also Geodesics as Hamiltonian flows . 
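The remark that, in coordinates, the matrix defining the cometric is the inverse of the matrix defining the metric can be sketched numerically; the 2×2 metric and momentum values below are arbitrary assumed inputs.

```python
import numpy as np

# Illustrative 2x2 example (values assumed): the cometric is the inverse of
# the matrix defining the metric, and H = (1/2) p^T g^{-1} p is the purely
# kinetic Hamiltonian described above.
g = np.array([[2.0, 0.5],
              [0.5, 1.0]])        # metric at some fixed point q
cometric = np.linalg.inv(g)       # matrix of the cometric
p = np.array([0.3, -1.1])         # a covector (momentum) at q
H = 0.5 * p @ cometric @ p
assert np.allclose(g @ cometric, np.eye(2))
assert H > 0                      # positive-definite metric => H > 0 for p != 0
```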
When the cometric is degenerate, then it is not invertible. In this case, one does not have a Riemannian manifold, as one does not have a metric. However, the Hamiltonian still exists. In the case where the cometric is degenerate at every point q of the configuration space manifold Q , so that the rank of the cometric is less than the dimension of the manifold Q , one has a sub-Riemannian manifold . The Hamiltonian in this case is known as a sub-Riemannian Hamiltonian . Every such Hamiltonian uniquely determines the cometric, and vice versa. This implies that every sub-Riemannian manifold is uniquely determined by its sub-Riemannian Hamiltonian, and that the converse is true: every sub-Riemannian manifold has a unique sub-Riemannian Hamiltonian. The existence of sub-Riemannian geodesics is given by the Chow–Rashevskii theorem . The continuous, real-valued Heisenberg group provides a simple example of a sub-Riemannian manifold. For the Heisenberg group, the Hamiltonian is given by H ( x , y , z , p x , p y , p z ) = 1 2 ( p x 2 + p y 2 ) . {\displaystyle {\mathcal {H}}\left(x,y,z,p_{x},p_{y},p_{z}\right)={\tfrac {1}{2}}\left(p_{x}^{2}+p_{y}^{2}\right).} p z is not involved in the Hamiltonian. Hamiltonian systems can be generalized in various ways. Instead of simply looking at the algebra of smooth functions over a symplectic manifold , Hamiltonian mechanics can be formulated on general commutative unital real Poisson algebras . A state is a continuous linear functional on the Poisson algebra (equipped with some suitable topology ) such that for any element A of the algebra, A 2 maps to a nonnegative real number. A further generalization is given by Nambu dynamics . Hamilton's equations above work well for classical mechanics , but not for quantum mechanics , since the differential equations discussed assume that one can specify the exact position and momentum of the particle simultaneously at any point in time. 
However, the equations can be further generalized to then be extended to apply to quantum mechanics as well as to classical mechanics, through the deformation of the Poisson algebra over p and q to the algebra of Moyal brackets . Specifically, the more general form of the Hamilton's equation reads d f d t = { f , H } + ∂ f ∂ t , {\displaystyle {\frac {\mathrm {d} f}{\mathrm {d} t}}=\left\{f,{\mathcal {H}}\right\}+{\frac {\partial f}{\partial t}},} where f is some function of p and q , and H is the Hamiltonian. To find out the rules for evaluating a Poisson bracket without resorting to differential equations, see Lie algebra ; a Poisson bracket is the name for the Lie bracket in a Poisson algebra . These Poisson brackets can then be extended to Moyal brackets comporting to an inequivalent Lie algebra, as proven by Hilbrand J. Groenewold , and thereby describe quantum mechanical diffusion in phase space (See Phase space formulation and Wigner–Weyl transform ). This more algebraic approach not only permits ultimately extending probability distributions in phase space to Wigner quasi-probability distributions , but, at the mere Poisson bracket classical setting, also provides more power in helping analyze the relevant conserved quantities in a system.
https://en.wikipedia.org/wiki/Hamiltonian_mechanics
Hamiltonian quantum computation is a form of quantum computing . Unlike methods of quantum computation such as the adiabatic , measurement-based and circuit models, where external control is used to apply operations on a register of qubits, Hamiltonian quantum computers operate without external control. [ 1 ] [ 2 ] [ 3 ] Hamiltonian quantum computation was the pioneering model of quantum computation, first proposed by Paul Benioff in 1980. Benioff's motivation for building a quantum mechanical model of a computer was to have a quantum mechanical description of artificial intelligence and to create a computer that would dissipate the least amount of energy allowable by the laws of physics . [ 1 ] However, his model was neither time-independent nor local . [ 4 ] Richard Feynman , independently of Benioff, also wanted to provide a description of a computer based on the laws of quantum physics. He solved the problem of finding a time-independent and local Hamiltonian by proposing a continuous-time quantum walk that could perform universal quantum computation. [ 2 ] Superconducting qubits , [ 5 ] ultracold atoms and non-linear photonics [ 6 ] have been proposed as potential experimental implementations of Hamiltonian quantum computers. Given a list of quantum gates described as unitaries U 1 , U 2 . . .
U k {\displaystyle U_{1},U_{2}...U_{k}} , define a Hamiltonian H = ∑ i = 1 k − 1 | i + 1 ⟩ ⟨ i | ⊗ U i + 1 + | i ⟩ ⟨ i + 1 | ⊗ U i + 1 † {\displaystyle H=\sum _{i=1}^{k-1}|i+1\rangle \langle i|\otimes U_{i+1}+|i\rangle \langle i+1|\otimes U_{i+1}^{\dagger }} Evolving this Hamiltonian on a state | ϕ 0 ⟩ = | 100..00 ⟩ ⊗ | ψ 0 ⟩ {\displaystyle |\phi _{0}\rangle =|100..00\rangle \otimes |\psi _{0}\rangle } composed of a clock register ( | 100..00 ⟩ {\displaystyle |100..00\rangle } ) that contains k + 1 {\displaystyle k+1} qubits and a data register ( | ψ 0 ⟩ {\displaystyle |\psi _{0}\rangle } ) will output | ϕ k ⟩ = e − i H t | ϕ 0 ⟩ {\displaystyle |\phi _{k}\rangle =e^{-iHt}|\phi _{0}\rangle } . At a time t {\displaystyle t} , the state of the clock register can be | 000..01 ⟩ {\displaystyle |000..01\rangle } . When that happens, the state of the data register will be U k ⋯ U 2 U 1 | ψ 0 ⟩ {\displaystyle U_{k}\cdots U_{2}U_{1}|\psi _{0}\rangle } . The computation is complete and | ϕ k ⟩ = | 000..01 ⟩ ⊗ U k ⋯ U 2 U 1 | ψ 0 ⟩ {\displaystyle |\phi _{k}\rangle =|000..01\rangle \otimes U_{k}\cdots U_{2}U_{1}|\psi _{0}\rangle } . [ 7 ]
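The clock construction above can be sketched numerically. The following is a minimal illustration with two simplifications: the clock is modeled as a single (k+1)-level register rather than the unary qubit encoding, and the gates and input state are arbitrary example choices. It checks that H acting on the clock-0 state advances the clock one step while applying the first gate.

```python
import numpy as np

# Sketch of the clock Hamiltonian above, using a (k+1)-level clock register
# directly instead of the unary qubit encoding |100..0> (a simplification);
# the gates and input state are arbitrary illustrative choices.
def clock_hamiltonian(gates):
    k = len(gates)
    d = gates[0].shape[0]                          # data-register dimension
    H = np.zeros(((k + 1) * d, (k + 1) * d), dtype=complex)
    for i in range(k):                             # |i+1><i| (x) U_{i+1} + h.c.
        hop = np.zeros((k + 1, k + 1))
        hop[i + 1, i] = 1.0
        H += np.kron(hop, gates[i]) + np.kron(hop.T, gates[i].conj().T)
    return H

Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
X = np.array([[0.0, 1.0], [1.0, 0.0]])             # Pauli-X gate
H = clock_hamiltonian([Hd, X])

# Acting on |clock=0> (x) |psi>, H advances the clock one step and applies U_1:
psi = np.array([1.0, 0.0])
state = np.kron(np.eye(3)[0], psi)                 # clock at position 0
out = H @ state
assert np.allclose(out[2:4], Hd @ psi)             # clock-1 block holds U_1|psi>
```

Since H is Hermitian, evolving under e^{−iHt} then walks the clock forward and backward, as the text describes.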
https://en.wikipedia.org/wiki/Hamiltonian_quantum_computation
Within the field of social evolution , Hamiltonian spite is a term for spiteful behaviors occurring among conspecifics that have a cost for the actor and a negative impact upon the recipient. Such behavior was theorized by W. D. Hamilton to be based on genetic affinity, with spiteful acts targeting those genetically distant from the actor. W. D. Hamilton published an influential paper on altruism in 1964 to explain why genetic kin tend to help each other. [ 1 ] He argued that genetically related individuals are likely to carry copies of the same alleles ; thus, helping kin may ensure that copies of the actor's alleles are passed on to the next generations of both the recipient and the actor. While this became a widely accepted idea, it was less noted that Hamilton published a later paper that modified this view. [ 2 ] This paper argues that by measuring the genetic relatedness between any two (randomly chosen) individuals of a population several times, we can identify an average level of relatedness. Theoretical models predict that (1) it is adaptive for an individual to be altruistic to any other individuals that are more closely related to it than this average level, and also that (2) it is adaptive for an individual to be spiteful against any other individuals that are less closely related to it than this average level. The indirect adaptive benefits of such acts can surpass certain costs of the act (either helpful or harmful) itself. Hamilton mentioned birds and fishes exhibiting infanticide (more specifically: ovicide) as examples of such behaviors. Briefly, an individual can increase the chance that its alleles are passed to the next generations either by helping those that are more closely related, or by harming those that are less closely related than expected by chance. [ 3 ] [ 4 ] Though altruism and spitefulness appear to be two sides of the same coin, the latter is less accepted among evolutionary biologists.
First, unlike the case with the beneficiary of an altruistic act, targets of aggression are likely to act in revenge: bites will provoke bites. Thus harming non-kin may be more costly than helping kin. Second, presuming a panmictic population , the vast majority of pairs of individuals exhibit a roughly average level of relatedness. For a given individual, the majority of others are not worth helping or harming. While it is easy to identify the few most closely related ones (see: kin recognition ), it is hard to identify the most genetically distant ones. Most terrestrial vertebrates exhibit a certain degree of site fidelity, so levels of kinship tend to correlate negatively with spatial distance. While this may provide some cues to identify the least related individuals, it may also ensure that non-kin rarely if ever meet each other. Many animal species exhibit infanticide, i.e. adults tend to kill the eggs or the offspring of conspecifics, even if they do not feed on them (in the absence of cannibalism ). [ 5 ] This form of spitefulness is relatively free from the threat of revenge – provided that the parents and relatives of the target are either weak or far away. Infanticide may not be a form of spite, as in many cases the loss of offspring brings the female back into estrus, providing a mating advantage to an infanticidal male. This is seen in lions. [ 6 ] An individual carrying a long-lasting infection of virulent pathogens may benefit from (1) channelling the flow of pathogens from its own body away from its kin and (2) directing them toward non-kin conspecifics. The adaptive nature of this behavior has been supported by the analysis of theoretical models [ 7 ] [ 8 ] and also by analyses of the behavioral repertoire of different animal species. [ 9 ] Thus, tuberculosis -infected European badgers and rabies -infected dogs alike tend to emigrate from their natal ranges before starting to distribute the pathogens. 
Similarly, wild herds of Asian elephants tend to defecate into drinking-water holes, apparently to keep rival herds away. [ 10 ] Throughout human history, war has often emerged as a costly form of aggression, typically targeting the non-kin enemy. Naturally, most wars appear to be motivated by potential benefits other than genetic ones. Nevertheless, widespread infanticide during periods of war indicates Hamiltonian elements as well. Infanticide is a biologically spiteful action in that it costs the killer time and energy, and opens the killer to the threat of revenge, without any direct compensating benefits. [ citation needed ]
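The threshold logic of Hamilton's later model can be sketched in a few lines. The function below is a toy illustration, not taken from Hamilton's papers: an act is favored when the recipient's relatedness, measured relative to the population average, times the benefit conferred (negative for harm) exceeds the actor's cost. All names and numbers are hypothetical.

```python
# Toy sketch of Hamilton's rule with relatedness measured relative to the
# population average (all names and numbers are hypothetical illustrations).
def is_adaptive(relatedness, avg_relatedness, benefit_to_recipient, cost_to_actor):
    """True if the act raises the actor's inclusive fitness.

    benefit_to_recipient is positive for helping, negative for harming;
    cost_to_actor is always positive.
    """
    r = relatedness - avg_relatedness   # deviation from chance-level kinship
    return r * benefit_to_recipient - cost_to_actor > 0

# Helping above-average kin can pay off:
print(is_adaptive(0.5, 0.1, benefit_to_recipient=10.0, cost_to_actor=1.0))   # True
# So can harming below-average "negative relatives", i.e. spite:
print(is_adaptive(0.0, 0.1, benefit_to_recipient=-15.0, cost_to_actor=1.0))  # True
```

This is the sense in which altruism and spite are "two sides of the same coin": both factors in the first term simply change sign together.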
https://en.wikipedia.org/wiki/Hamiltonian_spite
Hamiltonian truncation is a numerical method used to study quantum field theories (QFTs) in d ≥ 2 {\displaystyle d\geq 2} spacetime dimensions. Hamiltonian truncation is an adaptation of the Rayleigh–Ritz method from quantum mechanics. It is closely related to the exact diagonalization method used to treat spin systems in condensed matter physics. [ 1 ] The method is typically used to study QFTs on spacetimes of the form R × M {\displaystyle \mathbb {R} \times M} , specifically to compute the spectrum of the Hamiltonian along R {\displaystyle \mathbb {R} } . A key feature of Hamiltonian truncation is that an explicit ultraviolet cutoff Λ {\displaystyle \Lambda } is introduced, akin to the lattice spacing a in lattice Monte Carlo methods. Since Hamiltonian truncation is a nonperturbative method, it can be used to study strong-coupling phenomena like spontaneous symmetry breaking . Local quantum field theories can be defined on any manifold . Often, the spacetime of interest includes a copy of R {\displaystyle \mathbb {R} } , like R d {\displaystyle \mathbb {R} ^{d}} (flat space), R × S d − 1 {\displaystyle \mathbb {R} \times S^{d-1}} (an infinite hollow cylinder ), R × T d − 1 {\displaystyle \mathbb {R} \times \mathbf {T} ^{d-1}} (space is taken to be a torus ) or even Anti-de Sitter space in global coordinates. On such a manifold we can take time to run along R {\displaystyle \mathbb {R} } , such that energies are conserved. Solving such a QFT amounts to finding the spectrum and eigenstates of the Hamiltonian H , which is difficult or impossible to do analytically. Hamiltonian truncation provides a strategy to compute the spectrum of H to arbitrary precision. 
The idea is that many QFT Hamiltonians can be written as the sum of a "free" part H 0 {\displaystyle H_{0}} and an "interacting" part that describes the interactions (for example a ϕ 4 {\displaystyle \phi ^{4}} term or a Yukawa coupling ), schematically H = H 0 + g V {\displaystyle H=H_{0}+gV} where V can be written as the integral of a local operator V {\displaystyle {\mathcal {V}}} over M . There may be multiple interaction terms g 1 V 1 + g 2 V 2 + … {\displaystyle g_{1}V_{1}+g_{2}V_{2}+\ldots } , but that case generalizes straightforwardly from the case with a single interaction g V {\displaystyle gV} . Hamiltonian truncation amounts to the following recipe: In a UV-finite quantum field theory, the resulting energies E α ( Λ ) {\displaystyle E_{\alpha }(\Lambda )} have a finite limit as the cutoff Λ {\displaystyle \Lambda } is taken to infinity, so at least in principle the exact spectrum of the Hamiltonian can be recovered. In practice the cutoff Λ {\displaystyle \Lambda } is always finite, and the procedure is performed on a computer. For a given cutoff Λ {\displaystyle \Lambda } , Hamiltonian truncation has a finite range of validity, meaning that cutoff errors become important when the coupling g is too large. To make this precise, let us take R to be the rough size of the manifold M , that is to say that up to some c-number coefficient. If the deformation V is the integral of a local operator of dimension Δ {\displaystyle \Delta } , then the coupling g will have mass dimension [ g ] = d − Δ {\displaystyle [g]=d-\Delta } , so the redefined coupling g ¯ ≡ g R d − Δ {\displaystyle {\bar {g}}\equiv gR^{d-\Delta }} is dimensionless. Depending on the order of magnitude of g ¯ {\displaystyle {\bar {g}}} , we can distinguish three different regimes: There are two intrinsic but related issues with Hamiltonian truncation: The first case is due to ultraviolet divergences of the quantum field theory in question. In this case, cutoff-dependent counterterms must be added to the Hamiltonian H in order to obtain a physically meaningful result. 
In order to understand the second problem, one can perform perturbative computations to understand the continuum limit analytically. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Let us spell this out using an example. We have in mind a perturbation of the form gV with V = ∫ M d x V ( 0 , x ) {\displaystyle V=\int _{M}d\mathbf {x} \,{\mathcal {V}}(0,\mathbf {x} )} , where V ( t , x ) {\displaystyle {\mathcal {V}}(t,\mathbf {x} )} is a local operator. Suppose that we want to compute the first corrections to the vacuum energy due to V . In Rayleigh–Schrödinger perturbation theory , we know that δ E Ω ( 2 ) = − ∫ 0 ∞ d E ρ Ω ( E ) E {\displaystyle \delta E_{\Omega }^{(2)}=-\int _{0}^{\infty }dE\,{\frac {\rho _{\Omega }(E)}{E}}} where ρ Ω ( E ) = ∑ i | ⟨ i | V | Ω ⟩ | 2 δ ( E − E i ) {\displaystyle \rho _{\Omega }(E)=\sum _{i}|\langle i|V|\Omega \rangle |^{2}\,\delta (E-E_{i})} where the sum runs over all states | i ⟩ {\displaystyle |i\rangle } other than the vacuum | Ω ⟩ {\displaystyle |\Omega \rangle } itself. Whether this integral converges or not depends on the large- E behavior of the spectral density ρ Ω ( E ) {\displaystyle \rho _{\Omega }(E)} . In turn, this depends on the short-distance behavior of the two-point correlation function of the operator V {\displaystyle {\mathcal {V}}} . Indeed, we can write ∫ 0 ∞ d E ρ Ω ( E ) e − E t = ∫ M d x ∫ M d y ⟨ V ( t , x ) V ( 0 , y ) ⟩ {\displaystyle \int _{0}^{\infty }dE\,\rho _{\Omega }(E)e^{-Et}=\int _{M}d\mathbf {x} \int _{M}d\mathbf {y} \,\langle {\mathcal {V}}(t,\mathbf {x} ){\mathcal {V}}(0,\mathbf {y} )\rangle } where V ( t , x ) = e H 0 t V ( 0 , x ) e − H 0 t {\displaystyle {\mathcal {V}}(t,\mathbf {x} )=e^{H_{0}t}{\mathcal {V}}(0,\mathbf {x} )e^{-H_{0}t}} evolves in Euclidean time in the interaction picture. Hence the large- E behavior of the spectral density encodes the short-time behavior of the ⟨ V ( t , x ) V ( 0 , y ) ⟩ {\displaystyle \langle {\mathcal {V}}(t,\mathbf {x} ){\mathcal {V}}(0,\mathbf {y} )\rangle } vacuum correlator, where both x,y are integrated over space. The large- E scaling can be computed in explicit theories; in general it goes as ρ Ω ( E ) ∼ c E 2 Δ V − d {\displaystyle \rho _{\Omega }(E)\sim c\,E^{2\Delta _{\mathcal {V}}-d}} where Δ V {\displaystyle \Delta _{\mathcal {V}}} is the scaling or mass dimension of the operator V {\displaystyle {\mathcal {V}}} and c is some constant. There are now two possibilities, depending on the value of γ ≡ d − 2 Δ V {\displaystyle \gamma \equiv d-2\Delta _{\mathcal {V}}} : A similar analysis applies to cutoff errors in excited states and at higher orders in perturbation theory. 
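The convergence criterion can be illustrated numerically. The sketch below is a model, not the article's computation: it takes a power-law spectral density ρ_Ω(E) ≈ E^(−γ) with γ = d − 2Δ_V (an assumption consistent with the scaling just discussed) and accumulates the second-order correction up to a finite cutoff Λ.

```python
import numpy as np

# Hedged illustration: model the second-order vacuum-energy shift as
#   delta_E2(Lambda) = -∫_1^Lambda dE rho(E)/E,   rho(E) = E**(-gamma),
# with gamma = d - 2*Delta_V, and integrate by the trapezoid rule on a
# logarithmic grid.
def delta_E2(gamma, cutoff, n=20000):
    E = np.geomspace(1.0, cutoff, n)
    f = E ** (-gamma) / E
    return -0.5 * np.sum((f[1:] + f[:-1]) * np.diff(E))

for gamma in (0.5, -0.5):
    print(gamma, [round(delta_E2(gamma, L), 2) for L in (1e2, 1e4, 1e6)])
# gamma > 0: the corrections approach a finite limit as Lambda grows;
# gamma < 0: they grow like Lambda**(-gamma), signalling a divergence
# that must be absorbed by counterterms.
```

Raising the cutoff thus either converges the correction (sufficiently relevant perturbations) or exposes the UV divergence described above.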
As an example, we can consider a massive scalar field ϕ ( t , x ) {\displaystyle \phi (t,\mathbf {x} )} on some spacetime R × M {\displaystyle \mathbb {R} \times M} , where M is compact (possibly having a boundary ). The total metric can be written as Let's consider the action where △ {\displaystyle \triangle } is the Laplacian on R × M {\displaystyle \mathbb {R} \times M} . The g =0 theory can be canonically quantized , which endows the field ϕ {\displaystyle \phi } with a mode decomposition where the creation and annihilation operators obey canonical commutation relations [ a m , a n † ] = δ m n {\displaystyle [a_{m},a_{n}^{\dagger }]=\delta _{mn}} . The single-particle energies ω n > 0 {\displaystyle \omega _{n}>0} and the mode functions f n ( x ) {\displaystyle f_{n}(\mathbf {x} )} depend on the spatial manifold M . The Hamiltonian at t =0 is then given by The Hilbert space of the g = 0 {\displaystyle g=0} theory is the Fock space of the modes { a n † } {\displaystyle \{a_{n}^{\dagger }\}} . That is to say that there exists a vacuum state | Ω ⟩ {\displaystyle |\Omega \rangle } obeying a n | Ω ⟩ = 0 {\displaystyle a_{n}|\Omega \rangle =0} for all n , and on top of that there are single- and multi-particle states. Explicitly, a general eigenstate of H 0 {\displaystyle H_{0}} is labeled by a tuple { k n } {\displaystyle \{k_{n}\}} of occupation numbers: where the k n {\displaystyle k_{n}} can take values in the integers: k n ∈ { 0 , 1 , 2 , … } {\displaystyle k_{n}\in \{0,1,2,\ldots \}} . Such a state has energy so finding a basis of low-energy states amounts to finding all tuples { k n } {\displaystyle \{k_{n}\}} obeying e ( k ) ≤ Λ {\displaystyle e(\mathbf {k} )\leq \Lambda } . Let's denote all such states schematically as | i ⟩ {\displaystyle |i\rangle } . Next, the matrix elements V i j {\displaystyle V_{ij}} can be computed explicitly using the canonical commutation relations. 
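The basis-building step just described can be sketched directly: enumerate all occupation-number tuples whose total free energy does not exceed the cutoff. The single-particle energies below (ω_n = n, vacuum energy set to zero) are an illustrative assumption, not tied to a specific manifold M.

```python
# Recursive enumeration of Fock-space basis states {k_n} with
# sum_n k_n * omega_n <= Lambda (energies measured from the vacuum).
def basis_states(omegas, Lambda):
    """Yield occupation-number tuples (k_1, ..., k_N) with energy <= Lambda."""
    if not omegas:
        yield ()
        return
    w = omegas[0]
    k = 0
    while k * w <= Lambda:
        for rest in basis_states(omegas[1:], Lambda - k * w):
            yield (k,) + rest
        k += 1

# Toy single-particle energies omega_n = n (an illustrative assumption):
states = list(basis_states([1.0, 2.0, 3.0], Lambda=3.0))
print(len(states))  # → 7 tuples with k1 + 2*k2 + 3*k3 <= 3
```

In a real computation the list of modes is itself finite (modes with ω_n > Λ can never be occupied), and the matrix elements V_ij are then evaluated between the states found here.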
Finally, the explicit Hamiltonian H ( Λ ) i j = e i δ i j + g V i j {\displaystyle H(\Lambda )_{ij}=e_{i}\delta _{ij}+gV_{ij}} has to be diagonalized. The resulting spectra can be used to study precision physics. Depending on the values of g and m 2 {\displaystyle m^{2}} , the above ϕ 4 {\displaystyle \phi ^{4}} theory can be in a symmetry-preserving or a symmetry-broken phase, which can be studied explicitly using the above algorithm. The continuous phase transition between these two phases can also be analyzed, in which case the spectrum and eigenstates of H contain information about the conformal field theory of the Ising universality class. [ 7 ] [ 8 ] [ 9 ] The truncated conformal space approach (TCSA) is a version of the Hamiltonian truncation that applies to perturbed conformal field theories . This approach was introduced by Yurov and Al. Zamolodchikov in 1990 [ 10 ] and has become a standard ingredient used to study two-dimensional QFTs. [ 11 ] The d -dimensional version of TCSA was first studied in 2014. [ 3 ] A RG flow emanating from a conformal field theory (CFT) is described by an action where V {\displaystyle {\mathcal {V}}} is a scalar operator in the CFT of scaling dimension Δ V ≤ d {\displaystyle \Delta _{\mathcal {V}}\leq d} . At large distances, such theories are strongly coupled. It is convenient to study such RG flows on the cylinder R × S d − 1 {\displaystyle \mathbb {R} \times S^{d-1}} , taking the sphere to have radius R and endowing the full space with coordinates ( t , n ) {\displaystyle (t,\mathbf {n} )} . The reason is that the unperturbed ( g =0) theory admits a simple description owing to radial quantization . Schematically, states | i ⟩ {\displaystyle |i\rangle } on the cylinder are in one-to-one correspondence with local operators O i {\displaystyle {\mathcal {O}}_{i}} inserted at the origin of flat space: where | Ω ⟩ {\displaystyle |\Omega \rangle } is the CFT vacuum state. 
The Hamiltonian on the cylinder is precisely the dilatation operator D of the CFT: the unperturbed energies are given by where Δ i {\displaystyle \Delta _{i}} is the scaling dimension of the operator O i {\displaystyle {\mathcal {O}}_{i}} . Finally, the matrix elements of the deformation V are proportional to OPE coefficients V × O j ∼ O i {\displaystyle {\mathcal {V}}\times {\mathcal {O}}_{j}\sim {\mathcal {O}}_{i}} in the original CFT. Real-time QFTs are often studied in lightcone coordinates Although the spectrum of the lightcone Hamiltonian P + = i ∂ / ∂ x + {\displaystyle P_{+}=i\partial /\partial x^{+}} is continuous, it is still possible to compute certain observables using truncation methods. The most commonly used scheme, used when the UV theory is conformal, is known as lightcone conformal truncation (LCT). [ 12 ] [ 13 ] Notably, the spatial manifold M is non-compact in this case, unlike the equal-time quantization described previously. See also the page for light-front computational methods , which describes related computational setups. Hamiltonian truncation computations are normally performed using a computer algebra system , or a programming language like Python or C++ . The number of low-energy states N ( Λ ) {\displaystyle N(\Lambda )} tends to grow rapidly with the UV cutoff, and it is common to perform Hamiltonian truncation computations taking into account several thousand states. Nonetheless, one is often only interested in the first O(10) energies and eigenstates of H . Instead of diagonalizing the full Hamiltonian explicitly (which is numerically very costly), approximation methods like Arnoldi iteration and the Lanczos algorithm are commonly used. In some cases, it is not possible to orthonormalize the low-energy states | i ⟩ {\displaystyle |i\rangle } , either because this is numerically expensive or because the underlying Hilbert space is not positive definite . 
In that case, one has to solve the generalized eigenvalue problem where H i j ( Λ ) = ⟨ i | H ( Λ ) | j ⟩ {\displaystyle H_{ij}(\Lambda )=\langle i|H(\Lambda )|j\rangle } and G i j = ⟨ i | j ⟩ {\displaystyle G_{ij}=\langle i|j\rangle } is the Gram matrix of the theory. In this formulation, the eigenstates of the truncated Hamiltonian are | ψ α ⟩ = v α i | i ⟩ {\displaystyle |\psi _{\alpha }\rangle =v_{\alpha }^{i}|i\rangle } . In practice, it is important to keep track of the symmetries of the theory, that is to say all generators G i {\displaystyle G_{i}} that satisfy [ G i , H ( Λ ) ] = 0 {\displaystyle [G_{i},H(\Lambda )]=0} . There are two types of symmetries in Hamiltonian truncation: When all states are organized in symmetry sectors with respect to the G i {\displaystyle G_{i}} the Hamiltonian is block diagonal , so the effort required to diagonalize H is reduced.
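The generalized eigenvalue problem H v = E G v can be handed to a standard dense solver. The 2×2 matrices below are purely illustrative (made-up entries, not from any specific theory):

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative generalized eigenvalue problem H v = E G v for a
# non-orthonormal truncated basis (matrix entries are invented).
H = np.array([[2.0, 0.5],
              [0.5, 3.0]])   # H_ij = <i|H(Lambda)|j>
G = np.array([[1.0, 0.2],
              [0.2, 1.0]])   # Gram matrix G_ij = <i|j>

energies, v = eigh(H, G)     # requires G to be positive definite
print(energies)              # truncated energies E_alpha, ascending

# Each column of v solves H v_alpha = E_alpha G v_alpha:
assert np.allclose(H @ v[:, 0], energies[0] * G @ v[:, 0])
```

When the Gram matrix is indefinite (a Hilbert space that is not positive definite), the symmetric solver no longer applies; `scipy.linalg.eig(H, G)` handles that case at the price of possibly complex eigenvalues.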
https://en.wikipedia.org/wiki/Hamiltonian_truncation
Colonel Hamish Stephen de Bretton-Gordon OBE (born September 1963) is a soldier. He was a British Army officer for 23 years and commanding officer of the UK's Joint Chemical, Biological, Radiological and Nuclear Regiment and NATO's Rapid Reaction CBRN Battalion. [ 1 ] He is a visiting lecturer in disaster management at Bournemouth University . [ 2 ] He has commented on chemical and biological weapons for the BBC , [ 3 ] ABC [ 4 ] and The Guardian [ 5 ] and on tank warfare for the Daily Telegraph . [ 6 ] On 4 January 1988, while being sponsored through university by the British Army as a university candidate, de Bretton-Gordon was commissioned as a second lieutenant (on probation) in the Royal Tank Regiment . [ 7 ] In September 1988, his commission was confirmed: he was given seniority in the rank of second lieutenant from 10 August 1985, and promoted to lieutenant backdated to 4 January 1988 with seniority from 10 August 1987. [ 8 ] He transferred from a short service commission to a regular commission on 29 January 1991, [ 9 ] and was promoted to captain on 10 August 1991. [ 10 ] In 1991, he saw active service in Iraq with the 14th/20th King's Hussars as part of the First Gulf War . [ 11 ] After attending the Australian Command and Staff College , he was promoted to major on 30 September 1995. [ 12 ] [ 13 ] He was promoted to lieutenant colonel on 30 June 2003. [ 14 ] In 2004, rather than receiving the command of a tank regiment as he'd expected, he was appointed commanding officer of the UK's Joint Chemical, Biological, Radiological and Nuclear Regiment . [ 15 ] In preparation for the command, he studied for a diploma in chemical biology at the Royal Military College of Science . [ 16 ] In the 2005 New Year Honours , he was appointed Officer of the Order of the British Empire (OBE). [ 17 ] He additionally commanded NATO 's Rapid Reaction CBRN Battalion between 2005 and 2007. [ 18 ] He was promoted to colonel on 30 June 2007. 
[ 19 ] From 2007 to 2010, he was based at HQ Land Command as assistant director intelligence, surveillance and reconnaissance. [ 13 ] He retired from the British Army on 12 September 2011. [ 20 ]
https://en.wikipedia.org/wiki/Hamish_de_Bretton-Gordon
Hammerkit is a company which has developed a platform as a service (PaaS) which allows web formats and repeatable solutions [ clarification needed ] to be created and distributed globally using its CloudStore . Hammerkit was a piece of software first developed by Jani Vähäsöyrinki, Heikki Luhtala, and Ari Tenhunen between 2002 and 2006. This team was joined by Robin Lindroos, and these four became the core team driving the development of Hammerkit as a web application development platform. A detailed history of the founding and early years of the Hammerkit platform and the players involved has been published by Ari Tenhunen. [ 1 ] The initial ideas started as a spin-off of Njet Communications' Anvil project [ 2 ] to develop a Java development language to make development of Java applications faster and easier. The team of Vähäsöyrinki, Luhtala and Tenhunen could see that the Composer application in Anvil was still too complex for web designers to use and set about creating a new toolset that became Hammerkit. It was the first truly component-based web application builder available on the web. From 2002 to 2006, new versions were released up to v3.5. A final release of Hammerkit v3.x was published in 2007. [ citation needed ] The team chose the name Hammerkit simply because they liked the simplicity of a hammer (everyone knows how to use one) and because of its association with the Anvil project name. [ citation needed ] In 2006, Hammerkit Oy was established to commercialise the software. Mark Sorsa-Leslie joined the team in October 2007 as Managing Director and in December 2008, the company was named as a Red Herring Global 100 award winner. [ 3 ] In July 2010, Hammerkit debuted version 4.0 of their platform. The major enhancement in v4.0 was the move to a fully hosted architecture and the renewal of the user interface to utilise drag-and-drop design rather than the previous point-and-click approach. 
[ 4 ] In December 2011, Hammerkit announced a new funding round to internationalize the business. A new office was opened in Liverpool , England and a new product, the CloudStore was launched to support the creation, reuse and distribution of web formats as repeatable web solutions. The concept is based on the approach utilised in the TV industry to create global formats that are localized for particular markets. Hammerkit now specialises in the creation of web formats for the global public relations industry, serving clients such as Edelman and Hill+Knowlton Strategies . The company was noted by Nick Jones, head of Digital at the UK prime minister's office and the cabinet office as an example of a technology that will deskill the task of creating web services in his view from Downing Street 2011 [ 5 ] published in The Drum . In late 2011, the company was announced as one of five winners of a World Summit Award in the e-business and commerce category [ 6 ] together with Star, Monaqasat, Hootsuite and Aeroscan. In March 2012, Hammerkit launched Hammerkit.org as a community-based platform to promote the creation and sharing of repeatable digital solutions. [ 7 ] In 2013, Hammerkit is populating its CloudStore with off-the-shelf applications for the PR industry to choose from. Hammerkit will carry on designing and developing websites for companies using its Hammerkit Studio. [ citation needed ]
https://en.wikipedia.org/wiki/Hammerkit
The Hammersley–Clifford theorem is a result in probability theory , mathematical statistics and statistical mechanics that gives necessary and sufficient conditions under which a strictly positive probability distribution can be represented as events generated by a Markov network (also known as a Markov random field ). It is the fundamental theorem of random fields . [ 1 ] It states that a probability distribution that has a strictly positive mass or density satisfies one of the Markov properties with respect to an undirected graph G if and only if it is a Gibbs random field , that is, its density can be factorized over the cliques (or complete subgraphs ) of the graph. The relationship between Markov and Gibbs random fields was initiated by Roland Dobrushin [ 2 ] and Frank Spitzer [ 3 ] in the context of statistical mechanics . The theorem is named after John Hammersley and Peter Clifford , who proved the equivalence in an unpublished paper in 1971. [ 4 ] [ 5 ] Simpler proofs using the inclusion–exclusion principle were given independently by Geoffrey Grimmett , [ 6 ] Preston [ 7 ] and Sherman [ 8 ] in 1973, with a further proof by Julian Besag in 1974. [ 9 ] It is a trivial matter to show that a Gibbs random field satisfies every Markov property . As an example of this fact, see the following: In the image to the right, a Gibbs random field over the provided graph has the form Pr ( A , B , C , D , E , F ) ∝ f 1 ( A , B , D ) f 2 ( A , C , D ) f 3 ( C , D , F ) f 4 ( C , E , F ) {\displaystyle \Pr(A,B,C,D,E,F)\propto f_{1}(A,B,D)f_{2}(A,C,D)f_{3}(C,D,F)f_{4}(C,E,F)} . If variables C {\displaystyle C} and D {\displaystyle D} are fixed, then the global Markov property requires that: A , B ⊥ E , F | C , D {\displaystyle A,B\perp E,F|C,D} (see conditional independence ), since C , D {\displaystyle C,D} forms a barrier between A , B {\displaystyle A,B} and E , F {\displaystyle E,F} . 
With C {\displaystyle C} and D {\displaystyle D} constant, Pr ( A , B , E , F | C = c , D = d ) ∝ [ f 1 ( A , B , d ) f 2 ( A , c , d ) ] ⋅ [ f 3 ( c , d , F ) f 4 ( c , E , F ) ] = g 1 ( A , B ) g 2 ( E , F ) {\displaystyle \Pr(A,B,E,F|C=c,D=d)\propto [f_{1}(A,B,d)f_{2}(A,c,d)]\cdot [f_{3}(c,d,F)f_{4}(c,E,F)]=g_{1}(A,B)g_{2}(E,F)} where g 1 ( A , B ) = f 1 ( A , B , d ) f 2 ( A , c , d ) {\displaystyle g_{1}(A,B)=f_{1}(A,B,d)f_{2}(A,c,d)} and g 2 ( E , F ) = f 3 ( c , d , F ) f 4 ( c , E , F ) {\displaystyle g_{2}(E,F)=f_{3}(c,d,F)f_{4}(c,E,F)} . This implies that A , B ⊥ E , F | C , D {\displaystyle A,B\perp E,F|C,D} . To establish that every positive probability distribution that satisfies the local Markov property is also a Gibbs random field, the following lemma, which provides a means for combining different factorizations, needs to be proved: Lemma 1 Let U {\displaystyle U} denote the set of all random variables under consideration, and let Θ , Φ 1 , Φ 2 , … , Φ n ⊆ U {\displaystyle \Theta ,\Phi _{1},\Phi _{2},\dots ,\Phi _{n}\subseteq U} and Ψ 1 , Ψ 2 , … , Ψ m ⊆ U {\displaystyle \Psi _{1},\Psi _{2},\dots ,\Psi _{m}\subseteq U} denote arbitrary sets of variables. (Here, given an arbitrary set of variables X {\displaystyle X} , X {\displaystyle X} will also denote an arbitrary assignment to the variables from X {\displaystyle X} .) 
If Pr ( U ) = f ( Θ ) ∏ i = 1 n g i ( Φ i ) = ∏ j = 1 m h j ( Ψ j ) {\displaystyle \Pr(U)=f(\Theta )\prod _{i=1}^{n}g_{i}(\Phi _{i})=\prod _{j=1}^{m}h_{j}(\Psi _{j})} for functions f , g 1 , g 2 , … g n {\displaystyle f,g_{1},g_{2},\dots g_{n}} and h 1 , h 2 , … , h m {\displaystyle h_{1},h_{2},\dots ,h_{m}} , then there exist functions h 1 ′ , h 2 ′ , … , h m ′ {\displaystyle h'_{1},h'_{2},\dots ,h'_{m}} and g 1 ′ , g 2 ′ , … , g n ′ {\displaystyle g'_{1},g'_{2},\dots ,g'_{n}} such that Pr ( U ) = ( ∏ j = 1 m h j ′ ( Θ ∩ Ψ j ) ) ( ∏ i = 1 n g i ′ ( Φ i ) ) {\displaystyle \Pr(U)={\bigg (}\prod _{j=1}^{m}h'_{j}(\Theta \cap \Psi _{j}){\bigg )}{\bigg (}\prod _{i=1}^{n}g'_{i}(\Phi _{i}){\bigg )}} In other words, ∏ j = 1 m h j ( Ψ j ) {\displaystyle \prod _{j=1}^{m}h_{j}(\Psi _{j})} provides a template for further factorization of f ( Θ ) {\displaystyle f(\Theta )} . In order to use ∏ j = 1 m h j ( Ψ j ) {\displaystyle \prod _{j=1}^{m}h_{j}(\Psi _{j})} as a template to further factorize f ( Θ ) {\displaystyle f(\Theta )} , all variables outside of Θ {\displaystyle \Theta } need to be fixed. To this end, let θ ¯ {\displaystyle {\bar {\theta }}} be an arbitrary fixed assignment to the variables from U ∖ Θ {\displaystyle U\setminus \Theta } (the variables not in Θ {\displaystyle \Theta } ). For an arbitrary set of variables X {\displaystyle X} , let θ ¯ [ X ] {\displaystyle {\bar {\theta }}[X]} denote the assignment θ ¯ {\displaystyle {\bar {\theta }}} restricted to the variables from X ∖ Θ {\displaystyle X\setminus \Theta } (the variables from X {\displaystyle X} , excluding the variables from Θ {\displaystyle \Theta } ). Moreover, to factorize only f ( Θ ) {\displaystyle f(\Theta )} , the other factors g 1 ( Φ 1 ) , g 2 ( Φ 2 ) , . . . , g n ( Φ n ) {\displaystyle g_{1}(\Phi _{1}),g_{2}(\Phi _{2}),...,g_{n}(\Phi _{n})} need to be rendered moot for the variables from Θ {\displaystyle \Theta } . 
To do this, the factorization Pr ( U ) = f ( Θ ) ∏ i = 1 n g i ( Φ i ) {\displaystyle \Pr(U)=f(\Theta )\prod _{i=1}^{n}g_{i}(\Phi _{i})} will be re-expressed as Pr ( U ) = ( f ( Θ ) ∏ i = 1 n g i ( Φ i ∩ Θ , θ ¯ [ Φ i ] ) ) ( ∏ i = 1 n g i ( Φ i ) g i ( Φ i ∩ Θ , θ ¯ [ Φ i ] ) ) {\displaystyle \Pr(U)={\bigg (}f(\Theta )\prod _{i=1}^{n}g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}]){\bigg )}{\bigg (}\prod _{i=1}^{n}{\frac {g_{i}(\Phi _{i})}{g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])}}{\bigg )}} For each i = 1 , 2 , . . . , n {\displaystyle i=1,2,...,n} : g i ( Φ i ∩ Θ , θ ¯ [ Φ i ] ) {\displaystyle g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])} is g i ( Φ i ) {\displaystyle g_{i}(\Phi _{i})} where all variables outside of Θ {\displaystyle \Theta } have been fixed to the values prescribed by θ ¯ {\displaystyle {\bar {\theta }}} . Let f ′ ( Θ ) = f ( Θ ) ∏ i = 1 n g i ( Φ i ∩ Θ , θ ¯ [ Φ i ] ) {\displaystyle f'(\Theta )=f(\Theta )\prod _{i=1}^{n}g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])} and g i ′ ( Φ i ) = g i ( Φ i ) g i ( Φ i ∩ Θ , θ ¯ [ Φ i ] ) {\displaystyle g'_{i}(\Phi _{i})={\frac {g_{i}(\Phi _{i})}{g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])}}} for each i = 1 , 2 , … , n {\displaystyle i=1,2,\dots ,n} so Pr ( U ) = f ′ ( Θ ) ∏ i = 1 n g i ′ ( Φ i ) = ∏ j = 1 m h j ( Ψ j ) {\displaystyle \Pr(U)=f'(\Theta )\prod _{i=1}^{n}g'_{i}(\Phi _{i})=\prod _{j=1}^{m}h_{j}(\Psi _{j})} What is most important is that g i ′ ( Φ i ) = g i ( Φ i ) g i ( Φ i ∩ Θ , θ ¯ [ Φ i ] ) = 1 {\displaystyle g'_{i}(\Phi _{i})={\frac {g_{i}(\Phi _{i})}{g_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])}}=1} when the values assigned to Φ i {\displaystyle \Phi _{i}} do not conflict with the values prescribed by θ ¯ {\displaystyle {\bar {\theta }}} , making g i ′ ( Φ i ) {\displaystyle g'_{i}(\Phi _{i})} "disappear" when all variables not in Θ {\displaystyle \Theta } are fixed to the values from θ ¯ {\displaystyle {\bar {\theta 
}}} . Fixing all variables not in Θ {\displaystyle \Theta } to the values from θ ¯ {\displaystyle {\bar {\theta }}} gives Pr ( Θ , θ ¯ ) = f ′ ( Θ ) ∏ i = 1 n g i ′ ( Φ i ∩ Θ , θ ¯ [ Φ i ] ) = ∏ j = 1 m h j ( Ψ j ∩ Θ , θ ¯ [ Ψ j ] ) {\displaystyle \Pr(\Theta ,{\bar {\theta }})=f'(\Theta )\prod _{i=1}^{n}g'_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])=\prod _{j=1}^{m}h_{j}(\Psi _{j}\cap \Theta ,{\bar {\theta }}[\Psi _{j}])} Since g i ′ ( Φ i ∩ Θ , θ ¯ [ Φ i ] ) = 1 {\displaystyle g'_{i}(\Phi _{i}\cap \Theta ,{\bar {\theta }}[\Phi _{i}])=1} , f ′ ( Θ ) = ∏ j = 1 m h j ( Ψ j ∩ Θ , θ ¯ [ Ψ j ] ) {\displaystyle f'(\Theta )=\prod _{j=1}^{m}h_{j}(\Psi _{j}\cap \Theta ,{\bar {\theta }}[\Psi _{j}])} Letting h j ′ ( Θ ∩ Ψ j ) = h j ( Ψ j ∩ Θ , θ ¯ [ Ψ j ] ) {\displaystyle h'_{j}(\Theta \cap \Psi _{j})=h_{j}(\Psi _{j}\cap \Theta ,{\bar {\theta }}[\Psi _{j}])} gives: f ′ ( Θ ) = ∏ j = 1 m h j ′ ( Θ ∩ Ψ j ) {\displaystyle f'(\Theta )=\prod _{j=1}^{m}h'_{j}(\Theta \cap \Psi _{j})} which finally gives: Pr ( U ) = ( ∏ j = 1 m h j ′ ( Θ ∩ Ψ j ) ) ( ∏ i = 1 n g i ′ ( Φ i ) ) {\displaystyle \Pr(U)={\bigg (}\prod _{j=1}^{m}h'_{j}(\Theta \cap \Psi _{j}){\bigg )}{\bigg (}\prod _{i=1}^{n}g'_{i}(\Phi _{i}){\bigg )}} Lemma 1 provides a means of combining two different factorizations of Pr ( U ) {\displaystyle \Pr(U)} . The local Markov property implies that for any random variable x ∈ U {\displaystyle x\in U} , that there exists factors f x {\displaystyle f_{x}} and f − x {\displaystyle f_{-x}} such that: Pr ( U ) = f x ( x , ∂ x ) f − x ( U ∖ { x } ) {\displaystyle \Pr(U)=f_{x}(x,\partial x)f_{-x}(U\setminus \{x\})} where ∂ x {\displaystyle \partial x} are the neighbors of node x {\displaystyle x} . Applying Lemma 1 repeatedly eventually factors Pr ( U ) {\displaystyle \Pr(U)} into a product of clique potentials (see the image on the right). End of Proof
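The Gibbs-to-Markov direction can also be checked numerically for the six-variable example given earlier. The sketch below uses binary variables and random strictly positive clique potentials (an arbitrary choice for illustration), builds the joint distribution from the four factors, and verifies that A,B ⊥ E,F | C,D:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Random strictly positive clique potentials for the example graph:
# p(A,B,C,D,E,F) ∝ f1(A,B,D) f2(A,C,D) f3(C,D,F) f4(C,E,F), all binary.
f1, f2, f3, f4 = (rng.uniform(0.5, 2.0, size=(2, 2, 2)) for _ in range(4))

p = np.zeros((2,) * 6)
for A, B, C, D, E, F in itertools.product((0, 1), repeat=6):
    p[A, B, C, D, E, F] = f1[A, B, D] * f2[A, C, D] * f3[C, D, F] * f4[C, E, F]
p /= p.sum()

# For every fixed (c, d), the conditional must factor as g1(A,B) * g2(E,F).
for c, d in itertools.product((0, 1), repeat=2):
    cond = p[:, :, c, d, :, :]
    cond = cond / cond.sum()
    pAB = cond.sum(axis=(2, 3))   # marginal over (E, F)
    pEF = cond.sum(axis=(0, 1))   # marginal over (A, B)
    assert np.allclose(cond, pAB[:, :, None, None] * pEF[None, None, :, :])
print("verified: A,B and E,F are conditionally independent given C,D")
```

Because the potentials are strictly positive, the joint distribution is strictly positive as well, matching the hypothesis of the theorem.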
https://en.wikipedia.org/wiki/Hammersley–Clifford_theorem
The Hammett acidity function ( H 0 ) is a measure of acidity that is used for very concentrated solutions of strong acids , including superacids . It was proposed by the physical organic chemist Louis Plack Hammett [ 1 ] [ 2 ] and is the best-known acidity function used to extend the measure of Brønsted–Lowry acidity beyond the dilute aqueous solutions for which the pH scale is useful. In highly concentrated solutions, simple approximations such as the Henderson–Hasselbalch equation are no longer valid due to the variations of the activity coefficients . The Hammett acidity function is used in fields such as physical organic chemistry for the study of acid-catalyzed reactions, because some of these reactions use acids in very high concentrations, or even neat (pure). [ 3 ] The Hammett acidity function, H 0 , can replace the pH in concentrated solutions. It is defined using an equation [ 4 ] [ 5 ] [ 6 ] analogous to the Henderson–Hasselbalch equation: H 0 = p K BH + + log ⁡ [ B ] [ BH + ] {\displaystyle H_{0}=\mathrm {p} K_{{\text{BH}}^{+}}+\log {\frac {[{\text{B}}]}{[{\text{BH}}^{+}]}}} where log(x) is the common logarithm of x, and p K BH + is −log( K ) for the dissociation of BH + , which is the conjugate acid of a very weak base B, with a very negative p K BH + . In this way, it is rather as if the pH scale has been extended to very negative values. Hammett originally used a series of anilines with electron-withdrawing groups for the bases. [ 3 ] Hammett also pointed out the equivalent form H 0 = − log ⁡ a H + γ B γ BH + {\displaystyle H_{0}=-\log {\frac {a_{{\text{H}}^{+}}\gamma _{\text{B}}}{\gamma _{{\text{BH}}^{+}}}}} where a is the activity, and the γ are thermodynamic activity coefficients . In dilute aqueous solution (pH 0–14) the predominant acid species is H 3 O + and the activity coefficients are close to unity, so H 0 is approximately equal to the pH. However, beyond this pH range, the effective hydrogen-ion activity changes much more rapidly than the concentration. [ 4 ] This is often due to changes in the nature of the acid species; for example in concentrated sulfuric acid , the predominant acid species ("H + ") is not H 3 O + but rather H 3 SO 4 + [ citation needed ] , which is a much stronger acid. 
The value H 0 = −12 for pure sulfuric acid must not be interpreted as pH = −12 (which would imply an impossibly high H 3 O + concentration of 10 +12 mol/L in ideal solution ). Instead it means that the acid species present (H 3 SO 4 + ) has a protonating ability equivalent to H 3 O + at a fictitious (ideal) concentration of 10 12 mol/L, as measured by its ability to protonate weak bases. Although the Hammett acidity function is the best known acidity function , other acidity functions have been developed by authors such as Arnett, Cox, Katritzky, Yates, and Stevens. [ 3 ] On this scale, pure H 2 SO 4 (18.4 M ) has an H 0 value of −12, and pyrosulfuric acid has H 0 ~ −15. [ 7 ] Note that the equation for the Hammett acidity function makes no reference to water. It is a generalization of the pH scale: in a dilute aqueous solution (where B is H 2 O), pH is very nearly equal to H 0 . By using a solvent-independent quantitative measure of acidity, the implications of the leveling effect are eliminated, and it becomes possible to directly compare the acidities of different substances (e.g. using p K a , HF is weaker than HCl or H 2 SO 4 in water but stronger than HCl in glacial acetic acid [ 8 ] [ 9 ] ). For mixtures (e.g., partly diluted acids in water), the acidity function depends on the composition of the mixture and has to be determined empirically. Graphs of H 0 vs mole fraction can be found in the literature for many acids. [ 3 ]
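Numerically, H 0 follows from the measured ionization ratio of an indicator base. The sketch below uses a hypothetical indicator with pK BH+ = −9.3; the numbers are illustrative, not measured values:

```python
import math

# H0 = pK_BH+ + log10([B]/[BH+]), evaluated for a hypothetical indicator.
def hammett_H0(pK_BHplus, ratio_B_to_BHplus):
    return pK_BHplus + math.log10(ratio_B_to_BHplus)

# An indicator with pK_BH+ = -9.3 that is observed to be 99.8% protonated
# ([B]/[BH+] = 0.002) implies a medium with H0 near -12:
print(hammett_H0(-9.3, 0.002))  # ≈ -12.0
```

In practice a series of overlapping indicators is used so that the ratio [B]/[BH+] stays measurable across the whole acidity range.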
https://en.wikipedia.org/wiki/Hammett_acidity_function
In organic chemistry , the Hammett equation describes a linear free-energy relationship relating reaction rates and equilibrium constants for many reactions involving benzoic acid derivatives with meta- and para- substituents to each other with just two parameters: a substituent constant and a reaction constant. [ 1 ] [ 2 ] This equation was developed and published by Louis Plack Hammett in 1937 [ 3 ] as a follow-up to qualitative observations in his 1935 publication. [ 4 ] The basic idea is that for any two reactions with two aromatic reactants differing only in the type of substituent, the change in free energy of activation is proportional to the change in Gibbs free energy . [ 5 ] This notion does not follow from elemental thermochemistry or chemical kinetics and was introduced by Hammett intuitively. [ a ] The basic equation is log ⁡ K K 0 = σ ρ {\displaystyle \log {\frac {K}{K_{0}}}=\sigma \rho } , relating the equilibrium constant K {\displaystyle {K}} for a given equilibrium reaction with substituent R to the reference constant K 0 {\displaystyle {K}_{0}} when R is a hydrogen atom. The substituent constant σ depends only on the specific substituent R, and the reaction constant ρ depends only on the type of reaction, not on the substituent used. [ 4 ] [ 3 ] The equation also holds for the rate constants k of a series of reactions with substituted benzene derivatives: log ⁡ k k 0 = σ ρ {\displaystyle \log {\frac {k}{k_{0}}}=\sigma \rho } , where k 0 {\displaystyle {k}_{0}} is the reference rate constant of the unsubstituted reactant, and k that of a substituted reactant. A plot of log ⁡ K K 0 {\displaystyle \log {\frac {K}{K_{0}}}} for a given equilibrium versus log ⁡ k k 0 {\displaystyle \log {\frac {k}{k_{0}}}} for a given reaction rate with many differently substituted reactants will give a straight line.
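Because log( k / k 0 ) = σρ, tabulated σ values let one estimate relative rates directly. A minimal sketch in Python, using approximate σ p values rounded from standard tables (the numbers are illustrative, not authoritative):

```python
def rate_ratio(rho, sigma):
    # Hammett equation in exponential form: k/k0 = 10**(rho * sigma)
    return 10 ** (rho * sigma)

# Approximate sigma_p values (rounded literature numbers, for illustration):
sigma_p = {"H": 0.00, "NO2": 0.78, "OMe": -0.27}

# rho = +2.498 is the value reported for alkaline ethyl benzoate hydrolysis.
for substituent, s in sigma_p.items():
    print(substituent, rate_ratio(2.498, s))
```

With a positive ρ, the electron-withdrawing nitro group accelerates the reaction by nearly two orders of magnitude, while the electron-donating methoxy group retards it.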
The starting point for the collection of the substituent constants is a chemical equilibrium for which the substituent constant is arbitrarily set to 0 and the reaction constant is set to 1: the deprotonation of benzoic acid (benzene carboxylic acid; R and R' both H) in water at 25 °C. Having obtained a value for K 0 , a series of equilibrium constants ( K ) is then determined by the same process, but now with variation of the para substituent—for instance, p-hydroxybenzoic acid (R=OH, R'=H) or p-aminobenzoic acid (R=NH 2 , R'=H). These values, combined in the Hammett equation with K 0 and remembering that ρ = 1, give the para substituent constants compiled in table 1 for amine , methoxy , ethoxy , dimethylamino , methyl , fluorine , bromine , chlorine , iodine , nitro and cyano substituents. Repeating the process with meta-substituents affords the meta substituent constants . This treatment does not include ortho-substituents , which would introduce steric effects . The σ values displayed in the table above reveal certain substituent effects. With ρ = 1, the group of substituents with increasing positive values—notably cyano and nitro —causes the equilibrium constant to increase compared to the hydrogen reference, meaning that the acidity of the carboxylic acid (depicted on the left of the equation) has increased. These substituents stabilize the negative charge on the carboxylate oxygen atom by an electron-withdrawing inductive effect (−I) and also by a negative mesomeric effect (−M). The next set of substituents are the halogens , for which the substituent effect is still positive but much more modest. The reason for this is that while the inductive effect is still negative, the mesomeric effect is positive, causing partial cancellation. The data also show that for these substituents, the meta effect is much larger than the para effect, because the mesomeric effect is greatly reduced in a meta substituent.
With meta substituents a carbon atom bearing the negative charge is further away from the carboxylic acid group (structure 2b). This effect is depicted in scheme 3 , where, in a para substituted arene 1a , one resonance structure 1b is a quinoid with positive charge on the X substituent, releasing electrons and thus destabilizing the Y substituent. This destabilizing effect is not possible when X has a meta orientation. Other substituents, like methoxy and ethoxy , can even have opposite signs for the substituent constant as a result of opposing inductive and mesomeric effects. Only alkyl and aryl substituents like methyl are electron-releasing in both respects. Of course, when the sign for the reaction constant is negative (next section), only substituents with a likewise negative substituent constant will increase equilibrium constants. Because the carbonyl group is unable to serve as a source of electrons for −M groups (in contrast to lone pair donors like OH), for reactions involving phenol and aniline starting materials the σ p values for electron-withdrawing groups will appear too small. For reactions where resonance effects are expected to have a major impact, a modified set of σ p − constants may give a better fit. This parameter is defined using the ionization constants of para substituted phenols, via a scaling factor chosen to match the values of σ p − with those of σ p for "non-anomalous" substituents, so as to maintain comparable ρ values: for ArOH ⇄ ArO − + H + , we define σ p − = 1 2.11 log 10 ⁡ ( K X K H ) {\displaystyle \sigma _{p}^{-}={\frac {1}{2.11}}\log _{10}\left({\frac {K_{{\ce {X}}}}{K_{{\ce {H}}}}}\right)} . Likewise, the carbonyl carbon of a benzoic acid is at a nodal position and unable to serve as a sink for +M groups (in contrast to a carbocation at the benzylic position). Thus for reactions involving carbocations at the α-position, the σ p values for electron-donating groups will appear insufficiently negative.
Based on similar considerations, a set of σ p + constants gives a better fit for reactions involving electron-donating groups at the para position and the formation of a carbocation at the benzylic site. The σ p + are based on the rate constants of the S N 1 reaction of cumyl chlorides in 90% acetone/water: for ArCMe 2 Cl + H 2 O → ArCMe 2 OH + HCl , we define σ p + = − 1 4.54 log 10 ⁡ ( k X k H ) {\displaystyle \sigma _{p}^{+}=-{\frac {1}{4.54}}\log _{10}\left({\frac {k_{{\ce {X}}}}{k_{{\ce {H}}}}}\right)} . Note that the scaling factor is negative, since an electron-donating group speeds up the reaction. For a reaction whose Hammett plot is being constructed, these alternative Hammett constants may need to be tested to see whether a better linearity is obtained. With knowledge of substituent constants it is now possible to obtain reaction constants for a wide range of organic reactions . The archetypal reaction is the alkaline hydrolysis of ethyl benzoate (R=R'=H) in a water/ethanol mixture at 30 °C. Measurement of the reaction rate k 0 , combined with that of many substituted ethyl benzoates, ultimately results in a reaction constant of +2.498. [ 3 ] [ needs update ] [ non-primary source needed ] Reaction constants are known for many other reactions and equilibria. Here is a selection of those provided by Hammett himself (with their values in parentheses): The reaction constant, or sensitivity constant, ρ , describes the susceptibility of the reaction to substituents, compared to the ionization of benzoic acid. It is equivalent to the slope of the Hammett plot. Information on the reaction and the associated mechanism can be obtained based on the value obtained for ρ . If the value of: These relations can be exploited to elucidate the mechanism of a reaction. As the value of ρ is related to the charge during the rate-determining step, mechanisms can be devised based on this information.
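Since ρ is the slope of the Hammett plot, it can be extracted from measured (σ, log k/k 0 ) pairs by a simple least-squares fit. A minimal sketch in pure Python; the data below are synthetic, constructed to lie exactly on a line, not measurements:

```python
def fit_rho(sigmas, log_k_ratios):
    """Least-squares line through a Hammett plot; returns (slope rho,
    intercept), where the y-values are log10(k/k0)."""
    n = len(sigmas)
    mx = sum(sigmas) / n
    my = sum(log_k_ratios) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(sigmas, log_k_ratios))
    sxx = sum((x - mx) ** 2 for x in sigmas)
    rho = sxy / sxx
    return rho, my - rho * mx

# Synthetic data lying exactly on a line with rho = 2.5:
sigmas = [-0.27, 0.0, 0.23, 0.78]
ys = [2.5 * s for s in sigmas]
rho, intercept = fit_rho(sigmas, ys)
print(rho, intercept)   # rho ~ 2.5, intercept ~ 0
```

A large positive fitted ρ would point to negative charge building up in the rate-determining step; a break in the slope would hint at a change of mechanism.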
If the mechanism for the reaction of an aromatic compound is thought to occur through one of two mechanisms, the compound can be modified with substituents with different σ values and kinetic measurements taken. Once these measurements have been made, a Hammett plot can be constructed to determine the value of ρ . If one of these mechanisms involves the formation of charge, this can be verified based on the ρ value. Conversely, if the Hammett plot shows that no charge is developed, i.e. a zero slope, the mechanism involving the building of charge can be discarded. Hammett plots may not always be perfectly linear. For instance, a curve may show a sudden change in slope, or ρ value. In such a case, it is likely that the mechanism of the reaction changes upon adding a different substituent. Other deviations from linearity may be due to a change in the position of the transition state. In such a situation, certain substituents may cause the transition state to appear earlier (or later) in the reaction mechanism. [ 7 ] [ page needed ] Three kinds of ground-state or static electrical influences predominate: The latter two influences are often treated together as a composite effect, but are treated here separately. Westheimer demonstrated that the electrical effects of π-substituted dipolar groups on the acidities of benzoic and phenylacetic acids can be quantitatively correlated, by assuming only direct electrostatic action of the substituent on the ionizable proton of the carboxyl group . Westheimer's treatment worked well except for those acids with substituents that have unshared electron pairs, such as –OH and –OCH 3 , as these substituents interact strongly with the benzene ring. [ 8 ] [ non-primary source needed ] [ 9 ] [ non-primary source needed ] [ needs update ] Roberts and Moreland studied the reactivities of 4-substituted bicyclo[2.2.2]octane-1-carboxylic acids and esters.
In such a molecule, transmission of electrical effects of substituents through the ring by resonance is not possible. Hence, the comparison hints at the role of the π-electrons in the transmission of substituent effects through aromatic systems . [ 10 ] [ non-primary source needed ] Reactivity of 4-substituted bicyclo[2.2.2]octane-1-carboxylic acids and esters was measured in three different processes, each of which had previously been used with the benzoic acid derivatives. A plot of log( k ) against log( K A ) showed a linear relationship. Such linear relationships correspond to linear free-energy relationships, which strongly imply that the effect of the substituents is exerted through changes of potential energy and that the steric and entropy terms remain almost constant through the series. The linear relationship fits well in the Hammett equation. For the 4-substituted bicyclo[2.2.2]octane-1-carboxylic acid derivatives, the substituent and reaction constants are designated σ′ and ρ′. Reactivity data indicate that the effects of substituent groups in determining the reactivities of substituted benzoic and bicyclo[2.2.2]octane-1-carboxylic acids are comparable. This implies that the aromatic π-electrons do not play a dominant role in the transmission of electrical effects of dipolar groups to the ionizable carboxyl group. The difference between ρ and ρ′ for the reactions of the acids with diphenyldiazomethane is probably due to an inverse relation to the solvent dielectric constant D e . For meta-directing groups ( electron withdrawing groups or EWG), σ meta and σ para are more positive than σ′. (The superscript c in the table denotes data from Hammett, 1940. [ 11 ] [ page needed ] ) For ortho-para directing groups ( electron donating groups or EDG), σ′ is more positive than σ meta and σ para . The difference between σ para and σ′ (σ para − σ′) is greater than that between σ meta and σ′ (σ meta − σ′).
This is expected as electron resonance effects are felt more strongly at the para positions. The (σ − σ′) values can be taken as a reasonable measure of the resonance effects. The plot of the Hammett equation is typically seen as being linear, with either a positive or negative slope correlating to the value of ρ. However, nonlinearity emerges in the Hammett plot when a substituent affects the rate of reaction or changes the rate-determining step or reaction mechanism of the reaction. To accommodate the former case, new sigma constants have been introduced to account for the deviation from linearity otherwise seen resulting from the effect of the substituent. σ + takes into account positive charge buildup occurring in the transition state of the reaction. Therefore, an electron donating group (EDG) will accelerate the rate of the reaction by resonance stabilization and will give the following sigma plot with a negative ρ value. [ 12 ] [ non-primary source needed ] σ − is designated in the case where negative charge buildup in the transition state occurs, and the rate of the reaction is consequently accelerated by electron withdrawing groups (EWG). The EWG withdraws electron density by resonance and effectively stabilizes the negative charge that is generated. The corresponding plot will show a positive ρ value. In the case of a nucleophilic acyl substitution the effect of the substituent, X, of the non-leaving group can in fact accelerate the rate of the nucleophilic addition reaction when X is an EWG. This is attributed to the resonance contribution of the EWG to withdraw electron density, thereby increasing the susceptibility for nucleophilic attack on the carbonyl carbon. A change in rate occurs when X is an EDG, as is evidenced when comparing the rates between X = Me and X = OMe, and nonlinearity is observed in the Hammett plot.
[ 13 ] [ non-primary source needed ] The effect of the substituent may change the rate-determining step (rds) in the mechanism of the reaction. A certain electronic effect may accelerate a certain step so that it is no longer the rds. [ 14 ] [ non-primary source needed ] A change in the mechanism of a reaction also results in nonlinearity in the Hammett plot. Typically, the model used for measuring the changes in rate in this instance is that of the SN2 reaction. [ 15 ] [ non-primary source needed ] However, it has been observed in some cases of an SN2 reaction that an EWG does not accelerate the reaction as would be expected [ 16 ] [ non-primary source needed ] and that the rate varies with the substituent. In fact, the sign of the charge and the degree to which it develops will be affected by the substituent in the case of the benzylic system. [ 15 ] [ non-primary source needed ] For example, the substituent may determine the mechanism to be an SN1 type reaction over an SN2 type reaction, in which case the resulting Hammett plot will indicate a rate acceleration due to an EDG, thus elucidating the mechanism of the reaction. Another deviation from the regular Hammett equation is explained by the charge of the nucleophile. [ 15 ] [ non-primary source needed ] Despite nonlinearity in benzylic SN2 reactions, electron withdrawing groups can either accelerate or retard the reaction. If the nucleophile is negatively charged (e.g. cyanide), the electron withdrawing group will increase the rate due to stabilization of the extra charge which is put on the carbon in the transition state. On the other hand, if the nucleophile is not charged (e.g. triphenylphosphine), an electron withdrawing group will slow down the reaction by decreasing the electron density in the antibonding orbital of the leaving group in the transition state.
Other equations now exist that refine the original Hammett equation: the Swain–Lupton equation , [ citation needed ] the Taft equation , [ citation needed ] the Grunwald–Winstein equation , [ citation needed ] and the Yukawa–Tsuno equation . [ citation needed ] An equation that addresses stereochemistry in aliphatic systems has also been developed. [ vague ] [ 17 ] [ non-primary source needed ] Core-electron binding energy (CEBE) shifts correlate linearly with the Hammett substituent constants ( σ ) in substituted benzene derivatives. [ 18 ] [ non-primary source needed ] Consider para-disubstituted benzene p-F-C 6 H 4 -Z, where Z is a substituent such as NH 2 , NO 2 , etc. The fluorine atom is para with respect to the substituent Z in the benzene ring. The image on the right shows the four distinct ring carbon atoms, C1( ipso ), C2( ortho ), C3( meta ), C4( para ), in the p-F-C 6 H 4 -Z molecule. The carbon with Z is defined as C1 (ipso) and the fluorinated carbon as C4 (para). This definition is followed even for Z = H. The left-hand side of ( 1 ) is called the CEBE shift, or ΔCEBE, and is defined as the difference between the CEBE of the fluorinated carbon atom in p-F-C 6 H 4 -Z and that of the fluorinated carbon in the reference molecule FC 6 H 5 . The right-hand side of ( 1 ) is a product of a parameter κ and a Hammett substituent constant at the para position, σ p . The parameter κ is defined by equation ( 3 ), where ρ and ρ * are the Hammett reaction constants for the reaction of the neutral molecule and the core-ionized molecule, respectively. ΔCEBEs of ring carbons in p-F-C 6 H 4 -Z were calculated with density functional theory to see how they correlate with Hammett σ-constants. Linear plots were obtained when the calculated CEBE shifts at the ortho, meta and para carbons were plotted against the Hammett σ o , σ m and σ p constants respectively.
Hence there is approximate agreement, in numerical value and in sign, between the CEBE shifts and their corresponding Hammett σ constants. [ 19 ] [ non-primary source needed ]
https://en.wikipedia.org/wiki/Hammett_equation
The Hammick reaction , named after Dalziel Hammick , is a chemical reaction in which the thermal decarboxylation of α- picolinic (or related) acids in the presence of carbonyl compounds forms 2- pyridyl -carbinols. [ 1 ] [ 2 ] [ 3 ] Using p -cymene as solvent has been shown to increase yields. [ 4 ] Upon heating, α-picolinic acid will spontaneously decarboxylate, forming the so-called 'Hammick intermediate' ( 3 ). This was initially thought to be an aromatic ylide , but is now believed to be a carbene . [ 5 ] [ 6 ] In the presence of a strong electrophile , such as an aldehyde or ketone , this species will undergo nucleophilic attack faster than proton transfer. After nucleophilic attack, intramolecular proton transfer yields the desired carbinol ( 6 ). The scope of the reaction is effectively limited to decarboxylating acids where the carboxyl group is α to the nitrogen (reactivity has been reported when the carboxyl group is located elsewhere on the molecule, but with low yields), [ 7 ] [ 8 ] thus suitable substrates are limited to the derivatives of α-picolinic acid, [ 3 ] [ 9 ] including the α-carboxylic acids of quinoline and isoquinoline .
https://en.wikipedia.org/wiki/Hammick_reaction
In coding theory , Hamming(7,4) is a linear error-correcting code that encodes four bits of data into seven bits by adding three parity bits . It is a member of a larger family of Hamming codes , but the term Hamming code often refers to this specific code, which Richard W. Hamming introduced in 1950. At the time, Hamming worked at Bell Telephone Laboratories and was frustrated with the error-prone punched card reader, which is why he started working on error-correcting codes. [ 1 ] The Hamming code adds three additional check bits to every four data bits of the message. Hamming's (7,4) algorithm can correct any single-bit error, or detect all single-bit and two-bit errors. In other words, the minimum Hamming distance between any two correct codewords is 3, and received words can be correctly decoded if they are at a distance of at most one from the codeword that was transmitted by the sender. This means that for transmission media where burst errors do not occur, Hamming's (7,4) code is effective (as the medium would have to be extremely noisy for two out of seven bits to be flipped). In quantum information , the Hamming (7,4) is used as the base for the Steane code , a type of CSS code used for quantum error correction . The goal of the Hamming codes is to create a set of parity bits that overlap so that a single-bit error in a data bit or a parity bit can be detected and corrected. While multiple overlaps can be created, the general method is presented in Hamming codes . This table describes which parity bits cover which transmitted bits in the encoded word. For example, p 2 provides an even parity for bits 2, 3, 6, and 7. It also details, reading down a column, which transmitted bit is covered by which parity bits. For example, d 1 is covered by p 1 and p 2 but not p 3 . This table will have a striking resemblance to the parity-check matrix ( H ) in the next section.
Furthermore, if the parity columns in the above table were removed, then the resemblance to rows 1, 2, and 4 of the code generator matrix ( G ) below will also be evident. So, by picking the parity bit coverage correctly, all errors with a Hamming distance of 1 can be detected and corrected, which is the point of using a Hamming code. Hamming codes can be computed in linear algebra terms through matrices because Hamming codes are linear codes . For the purposes of Hamming codes, two Hamming matrices can be defined: the code generator matrix G and the parity-check matrix H : As mentioned above, rows 1, 2, and 4 of G should look familiar, as they map the data bits to their parity bits: The remaining rows (3, 5, 6, 7) map the data to their position in encoded form, and there is only one 1 in each such row, so it is an identical copy. In fact, these four rows are linearly independent and form the identity matrix (by design, not coincidence). Also as mentioned above, the three rows of H should be familiar. These rows are used to compute the syndrome vector at the receiving end; if the syndrome vector is the null vector (all zeros) then the received word is error-free, and if non-zero then its value indicates which bit has been flipped. The four data bits, assembled as a vector p , are pre-multiplied by G (i.e., G T p {\displaystyle G^{T}p} ) and taken modulo 2 to yield the encoded value that is transmitted. The original 4 data bits are converted to seven bits (hence the name "Hamming(7,4)") with three parity bits added to ensure even parity using the above data bit coverages. The first table above shows the mapping between each data and parity bit and its final bit position (1 through 7), but this can also be presented in a Venn diagram . The first diagram in this article shows three circles (one for each parity bit) that enclose the data bits each parity bit covers. The second diagram (shown to the right) is identical but, instead, the bit positions are marked.
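The two matrices can be written down and sanity-checked in a few lines. A minimal sketch in plain Python (bit order p 1 p 2 d 1 p 3 d 2 d 3 d 4 ; the 7×4 matrix below plays the role of G T , the matrix that multiplies the data vector):

```python
# 7x4 generator (codeword = Gt @ data, mod 2), bit order p1 p2 d1 p3 d2 d3 d4
Gt = [
    [1, 1, 0, 1],  # p1 = d1 + d2 + d4
    [1, 0, 1, 1],  # p2 = d1 + d3 + d4
    [1, 0, 0, 0],  # d1
    [0, 1, 1, 1],  # p3 = d2 + d3 + d4
    [0, 1, 0, 0],  # d2
    [0, 0, 1, 0],  # d3
    [0, 0, 0, 1],  # d4
]

# 3x7 parity-check matrix: column i spells out position i in binary,
# least significant bit in the first row.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def matmul_mod2(A, B):
    """Matrix product with every entry reduced modulo 2."""
    return [[sum(a * b for a, b in zip(row, col)) % 2
             for col in zip(*B)] for row in A]

# Every codeword lies in the kernel of H, so H * Gt must be all zeros:
print(matmul_mod2(H, Gt))
```

The all-zeros product is exactly the statement that every valid codeword produces a null syndrome.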
For the remainder of this section, the following 4 bits (shown as a column vector) will be used as a running example: Suppose we want to transmit this data ( 1011 ) over a noisy communications channel , specifically a binary symmetric channel , meaning that error corruption does not favor either zero or one (it is symmetric in causing errors). Furthermore, all source vectors are assumed to be equiprobable. We take the product of G and p , with entries modulo 2, to determine the transmitted codeword x : This means that 0110011 would be transmitted instead of the raw data 1011 . Programmers concerned about multiplication should observe that each row of the result is the least significant bit of the population count of set bits resulting from the row and column being bitwise ANDed together, rather than multiplied. In the adjacent diagram, the seven bits of the encoded word are inserted into their respective locations; from inspection it is clear that the parity of each of the red, green, and blue circles is even: What will be shown shortly is that if, during transmission, a bit is flipped, then the parity of two or all three circles will be incorrect and the errored bit can be determined (even if it is one of the parity bits) by knowing that the parity of all three of these circles should be even. If no error occurs during transmission, then the received codeword r is identical to the transmitted codeword x : The receiver multiplies H and r to obtain the syndrome vector z , which indicates whether an error has occurred, and if so, for which codeword bit. Performing this multiplication (again, entries modulo 2): Since the syndrome z is the null vector , the receiver can conclude that no error has occurred. This conclusion is based on the observation that when the data vector is multiplied by G , a change of basis occurs into a vector subspace that is the kernel of H .
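The encoding step just described can be reproduced directly; here the mod-2 matrix product is unrolled into the three parity sums (a sketch, bit order p 1 p 2 d 1 p 3 d 2 d 3 d 4 ):

```python
def encode(d1, d2, d3, d4):
    """Hamming(7,4) encoding: three even-parity bits computed from
    the coverage table, output bit order p1 p2 d1 p3 d2 d3 d4."""
    p1 = (d1 + d2 + d4) % 2  # parity over positions 3, 5, 7
    p2 = (d1 + d3 + d4) % 2  # parity over positions 3, 6, 7
    p3 = (d2 + d3 + d4) % 2  # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

# The running example: data 1011 is transmitted as 0110011.
print(encode(1, 0, 1, 1))   # -> [0, 1, 1, 0, 0, 1, 1]
```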
As long as nothing happens during transmission, r will remain in the kernel of H and the multiplication will yield the null vector. Otherwise, suppose a single bit error has occurred, so that we can write r = x + e i (modulo 2), where e i is the i th unit vector , that is, a zero vector with a 1 in the i th place, counting from 1. Thus the above expression signifies a single bit error in the i th place. Now, if we multiply this vector by H : Since x is the transmitted data, it is without error, and as a result the product of H and x is zero. Thus H r = H e i . Now, the product of H with the i th standard basis vector picks out the i th column of H , so we know the error occurs in the place where this column of H occurs. For example, suppose we have introduced a bit error on bit #5. The diagram to the right shows the bit error (shown in blue text) and the bad parity created (shown in red text) in the red and green circles. The bit error can be detected by computing the parity of the red, green, and blue circles. If a bad parity is detected, then the data bit that overlaps only the bad parity circles is the bit with the error. In the above example, the red and green circles have bad parity, so the bit corresponding to the intersection of red and green but not blue indicates the errored bit. Now the syndrome corresponds to the fifth column of H . Furthermore, the general algorithm used ( see Hamming code#General algorithm ) was intentional in its construction, so that the syndrome of 101 corresponds to the binary value of 5, which indicates the fifth bit was corrupted. Thus, an error has been detected in bit 5, and can be corrected (simply flip or negate its value): This corrected received value indeed now matches the transmitted value x from above.
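The syndrome computation and correction can likewise be sketched in a few lines (a simplification: the three parity checks are unrolled rather than written as a matrix product, and at most one flipped bit is assumed):

```python
def syndrome(r):
    """Binary value of H*r (mod 2): the 1-based position of a single
    flipped bit, or 0 if all parity checks pass."""
    s1 = (r[0] + r[2] + r[4] + r[6]) % 2  # checks positions 1, 3, 5, 7
    s2 = (r[1] + r[2] + r[5] + r[6]) % 2  # checks positions 2, 3, 6, 7
    s3 = (r[3] + r[4] + r[5] + r[6]) % 2  # checks positions 4, 5, 6, 7
    return 4 * s3 + 2 * s2 + s1

def correct(r):
    """Flip the bit the syndrome points at (valid for <= 1 bit error)."""
    out, pos = list(r), syndrome(r)
    if pos:
        out[pos - 1] ^= 1
    return out

# Codeword 0110011 with bit 5 flipped: the syndrome reads 101 (binary) = 5.
received = [0, 1, 1, 0, 1, 1, 1]
print(syndrome(received))   # -> 5
print(correct(received))    # -> [0, 1, 1, 0, 0, 1, 1]
```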
Once the received vector has been determined to be error-free, or corrected if an error occurred (assuming only zero or one bit errors are possible), the received data needs to be decoded back into the original four bits. First, define a matrix R : Then the received value, p r , is equal to R r . Using the running example from above: It is not difficult to show that only single bit errors can be corrected using this scheme. Alternatively, Hamming codes can be used to detect single and double bit errors, by merely noting that the product of H and the received vector is nonzero whenever errors have occurred. In the adjacent diagram, bits 4 and 5 were flipped. This yields only one circle (green) with an invalid parity, but the errors are not recoverable. However, the Hamming (7,4) and similar Hamming codes cannot distinguish between single-bit errors and two-bit errors. That is, two-bit errors appear the same as one-bit errors. If error correction is performed on a two-bit error, the result will be incorrect. Similarly, Hamming codes cannot detect or recover from an arbitrary three-bit error. Consider the diagram: if the bit in the green circle (colored red) were 1, the parity checking would return the null vector, indicating that there is no error in the codeword. Since the source has only 4 bits, there are only 16 possible transmitted words. Included is the eight-bit value if an extra parity bit is used ( see Hamming(7,4) code with an additional parity bit ). (The data bits are shown in blue; the parity bits are shown in red; and the extra parity bit is shown in green.) The Hamming(7,4) code is closely related to the E 7 lattice and, in fact, can be used to construct it, or more precisely, its dual lattice E 7 ∗ (a similar construction for E 7 uses the dual code [7,3,4] 2 ).
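Extracting the data bits with R amounts to reading off positions 3, 5, 6 and 7. A minimal sketch continuing the running example:

```python
R = [  # 4x7 decoding matrix: selects bit positions 3, 5, 6, 7
    [0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 1],
]

def decode_data(r):
    """p_r = R*r (mod 2): recovers the 4 data bits from a 7-bit word."""
    return [sum(a * b for a, b in zip(row, r)) % 2 for row in R]

# The corrected running-example codeword 0110011 decodes back to 1011.
print(decode_data([0, 1, 1, 0, 0, 1, 1]))   # -> [1, 0, 1, 1]
```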
In particular, taking the set of all vectors x in Z 7 with x congruent (modulo 2) to a codeword of Hamming(7,4), and rescaling by 1/ √ 2 , gives the lattice E 7 ∗ . This is a particular instance of a more general relation between lattices and codes. For instance, the extended (8,4)-Hamming code, which arises from the addition of a parity bit, is also related to the E 8 lattice . [ 2 ]
https://en.wikipedia.org/wiki/Hamming(7,4)
In combinatorics , a Hamming ball is a metric ball for Hamming distance . The Hamming ball of radius r {\displaystyle r} centered at a string x {\displaystyle x} over some alphabet (often the alphabet {0,1}) is the set of all strings of the same length that differ from x {\displaystyle x} in at most r {\displaystyle r} positions. This may be denoted using the standard notation for metric balls, B ( x , r ) {\displaystyle B(x,r)} . For an alphabet X {\displaystyle X} and a string x {\displaystyle x} , the Hamming ball is a subset of the Hamming space X | x | {\displaystyle X^{|x|}} of strings of the same length as x {\displaystyle x} , and it is a proper subset whenever r < | x | {\displaystyle r<|x|} . The name Hamming ball comes from coding theory , where error correction codes can be defined as having disjoint Hamming balls around their codewords, [ 1 ] and covering codes can be defined as having Hamming balls around the codewords whose union is the whole Hamming space. [ 2 ] Some local search algorithms for SAT solvers such as WalkSAT operate by using random guessing or covering codes to find a Hamming ball that contains a desired solution, and then searching within this Hamming ball to find the solution. [ 2 ] A version of Helly's theorem for Hamming balls is known: for Hamming balls of radius r {\displaystyle r} (in Hamming spaces of dimension greater than r {\displaystyle r} ), if a family of balls has the property that every subfamily of at most 2 r + 1 {\displaystyle 2^{r+1}} balls has a common intersection, then the whole family has a common intersection. [ 3 ]
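The number of strings in a Hamming ball follows directly from the definition, by choosing which of the at-most-r positions differ and what each changed symbol becomes (this volume formula is standard, though not stated above). A minimal sketch:

```python
from math import comb

def hamming_ball_size(n, r, q=2):
    """Number of length-n strings over a q-letter alphabet that lie
    within Hamming distance r of any fixed center string."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

# Radius-1 balls in {0,1}^7 contain 8 strings; the 16 codewords of
# Hamming(7,4) carry disjoint balls that exactly tile the space:
print(hamming_ball_size(7, 1))          # -> 8
print(16 * hamming_ball_size(7, 1))     # -> 128 = 2**7
```

The tiling in the example is the coding-theory usage in action: Hamming(7,4) is a perfect code, so its disjoint radius-1 balls partition the whole Hamming space.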
https://en.wikipedia.org/wiki/Hamming_ball