Dataset columns: id (int64, 39 to 79M); url (string, length 32 to 168); text (string, length 7 to 145k); source (string, length 2 to 105); categories (list, length 1 to 6); token_count (int64, 3 to 32.2k); subcategories (list, length 0 to 27).
1,316,648
https://en.wikipedia.org/wiki/Constructive%20dilemma
Constructive dilemma is a valid rule of inference of propositional logic. It is the inference that, if P implies Q and R implies S and either P or R is true, then either Q or S has to be true. In sum, if two conditionals are true and at least one of their antecedents is, then at least one of their consequents must be too. Constructive dilemma is the disjunctive version of modus ponens, whereas destructive dilemma is the disjunctive version of modus tollens. The constructive dilemma rule can be stated: whenever instances of "P → Q", "R → S", and "P ∨ R" appear on lines of a proof, "Q ∨ S" can be placed on a subsequent line. Formal notation The constructive dilemma rule may be written in sequent notation: (P → Q), (R → S), (P ∨ R) ⊢ (Q ∨ S), where ⊢ is a metalogical symbol meaning that Q ∨ S is a syntactic consequence of P → Q, R → S, and P ∨ R in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic: ((P → Q) ∧ (R → S) ∧ (P ∨ R)) → (Q ∨ S), where P, Q, R, and S are propositions expressed in some formal system. Natural language example If I win a million dollars, I will donate it to an orphanage. If my friend wins a million dollars, he will donate it to a wildlife fund. Either I win a million dollars or my friend wins a million dollars. Therefore, either an orphanage will get a million dollars, or a wildlife fund will get a million dollars. The dilemma derives its name from the transfer of the disjunctive operator from the antecedents to the consequents. References Rules of inference Dilemmas Theorems in propositional logic
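Because the rule is a truth-functional tautology, it can be confirmed by brute-force enumeration of all sixteen truth assignments; a minimal Python check (function names are illustrative):

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b
    return (not a) or b

# ((P -> Q) and (R -> S) and (P or R)) -> (Q or S) for every assignment
assert all(
    implies(implies(p, q) and implies(r, s) and (p or r), q or s)
    for p, q, r, s in product([False, True], repeat=4)
)
print("Constructive dilemma holds for all 16 truth assignments.")
```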
Constructive dilemma
[ "Mathematics" ]
319
[ "Theorems in propositional logic", "Rules of inference", "Theorems in the foundations of mathematics", "Proof theory" ]
1,317,423
https://en.wikipedia.org/wiki/Voith%20Schneider%20Propeller
The Voith Schneider Propeller (VSP) is a specialized marine propulsion system (MPS) manufactured by the Voith Group based on a cyclorotor design. It is highly maneuverable, being able to change the direction of its thrust almost instantaneously. It is widely used on tugs and ferries. Operation From a circular plate, rotating around a vertical axis, a circular array of vertical blades (in the shape of hydrofoils) protrudes out of the bottom of the ship. Each blade can rotate around its own vertical axis. The internal gear changes the angle of attack of the blades in sync with the rotation of the plate, so that each blade can provide thrust in any direction. Unlike the azimuth thruster (where a conventional propeller is rotated about the vertical axis to direct its thrust, allowing a vessel to steer without the use of a rudder), the Voith-Schneider drive merely requires changing the pattern of orientation of the vertical blades. In a marine setting, this provides a drive that can be directed in any direction and thus does away with the need for a rudder. It is highly efficient and provides an almost instantaneous change of direction. These drives are becoming increasingly common in work boats such as fireboats and tugboats where extreme maneuverability is needed. Azimuth thrusters (and Kort nozzles) have both advantages and disadvantages when compared to cycloidal drives. The azimuth thruster is less efficient and slower to manoeuvre, but is likely to be cheaper in the short term. Life-cycle costs favour the Voith solution, something reflected in the residual value of a Voith water tractor. A choice is made on the basis of perceived performance requirements. Instead of a Kort nozzle, VSPs are often fitted with a "thrust plate" or "propeller guard" which acts as a nozzle at low speed, protects the VSP against grounding and provides another blocking location during drydocking. A low acoustic signature favours the device's use in minesweepers by minimising cavitation (usually produced at the tips of axial propellers), as the rotor does not need to rotate as fast for a given thrust. The underwater sound signature of the MV North Sea Giant (IMO: 9524073, MMSI: 248039000) dynamic positioning vessel was measured by the International Centre for Island Technology (ICIT) whilst installing a foundation monopile for the Voith tidal energy device in the Fall of Warness, Orkney (Beharie and Side, 2011). VSPs are offered with an input power range of 160 kW to 3900 kW. History The Voith Schneider propeller was originally a design for a hydro-electric turbine. Its Austrian inventor, Ernst Schneider, had a chance meeting on a train with a manager at Voith's subsidiary St. Pölten works; this led to the turbine being investigated by Voith's engineers, who discovered that although it was no more efficient than other water turbines, Schneider's design worked well as a pump by reversing the flow through the device. By changing the orientation of the vertical blades, it could be made to function as a combined propeller and rudder. In 1928 a prototype was installed in a 60-hp motor launch named Torqueo (Latin: I spin) and trials were carried out on Lake Constance. A number of German minesweepers (R boats) were fitted with VSPs; the first of these was the R8, built in 1929 by Lürssen. By 1931 VSPs were being fitted in new vessels on Lake Constance run by the German State Railways. The first such ship to use the Voith Schneider propeller was the excursion boat Kempten. 
Two German 1935-type M class minesweepers, M-1 and M-2, were fitted with VSPs. The first British ship to use Voith Schneider propellers was the double-ended Isle of Wight ferry MV Lymington, launched in 1938. Some 80 ships had been fitted with VSPs by the end of the 1930s, including the uncompleted 1938 German aircraft carrier Graf Zeppelin (two auxiliary units in the bow) and the Japanese submarine cable laying ship Toyo-maru (also 1938). The three vessels (John Burns, Ernest Bevin, and James Newman) which were in service for the Woolwich Ferry until 2018 featured Voith-Schneider propulsion systems. They were built in 1963 by the Caledon Shipbuilding & Engineering Company of Dundee and featured one VSP in the bow and a second in the stern for remarkable maneuverability. The Tay Ferries vessel Scotscraig, built by Caledon in the 1950s, also used VSPs. It was essentially a replacement copy of the earlier Abercraig ferry, which was built by Fleming and Ferguson's Paisley yard for Dundee Harbour crossings and launched in 1938. The Abercraig also featured VSPs. The US Navy built twelve VSP-equipped Osprey-class coastal minehunters in the 1990s. These vessels have since been decommissioned; six were sold to foreign navies, and six were sold for "dismantlement purposes only." The French Navy operates sixteen tugboats of the RPC12 type, which can provide a 12-tonne bollard pull thanks to two Voith Schneider propellers. The same device, mounted on a horizontal rather than a vertical axis, has been used to provide lift and propulsion on a few experimental aeroplanes, known as "cyclogyros". None of them were very successful. It has also more recently been proposed as an alternative to rotors for drone applications. See also References Sources Further reading External links Voith Turbo marine website Propellers Tugboats Marine propulsion
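The blade scheduling described under Operation can be sketched numerically. The sketch below assumes the classic textbook low-pitch cycloidal scheme – each blade held perpendicular to the line joining it to a movable steering point, whose offset from the rotor axis sets thrust direction and magnitude – an illustrative simplification, not Voith's actual control law:

```python
import math

def blade_angles(num_blades, rotor_radius, steer_x, steer_y):
    """For each blade position on the rotor circle, return (phase, chord)
    in degrees, with the blade chord held perpendicular to the line from
    the steering point to the blade (simplified cycloidal kinematics)."""
    angles = []
    for i in range(num_blades):
        phase = 2 * math.pi * i / num_blades
        bx = rotor_radius * math.cos(phase)   # blade position on the circle
        by = rotor_radius * math.sin(phase)
        link = math.atan2(by - steer_y, bx - steer_x)  # steering point -> blade
        chord = link + math.pi / 2                     # perpendicular chord
        angles.append((math.degrees(phase), math.degrees(chord) % 360.0))
    return angles

# Steering point offset to one side -> blades develop a net thrust component;
# zero offset -> chords stay tangent to the circle and no net thrust results.
for phase, chord in blade_angles(5, 1.0, 0.3, 0.0):
    print(f"blade at {phase:6.1f} deg -> chord {chord:6.1f} deg")
```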
Voith Schneider Propeller
[ "Engineering" ]
1,165
[ "Marine propulsion", "Marine engineering" ]
1,317,567
https://en.wikipedia.org/wiki/Vinyl%20alcohol
Vinyl alcohol, also called ethenol (IUPAC name; not ethanol) or ethylenol, is the simplest enol. With the formula CH2=CHOH, it is a labile compound that converts to acetaldehyde immediately upon isolation near room temperature. It is not a practical precursor to any compound. Synthesis Vinyl alcohol can be formed by the pyrolytic elimination of water from ethylene glycol at a temperature of 900 °C and low pressure. Such processes are of no practical importance. Tautomerization of vinyl alcohol to acetaldehyde Under normal conditions, vinyl alcohol converts (tautomerizes) to acetaldehyde: At room temperature, acetaldehyde (CH3CHO) is more stable than vinyl alcohol (CH2=CHOH) by 42.7 kJ/mol. Vinyl alcohol gas isomerizes to the aldehyde with a half-life of 30 min at room temperature. The uncatalyzed keto–enol tautomerism by a 1,3-hydrogen migration is forbidden by the Woodward–Hoffmann rules and therefore has a high activation barrier and is not a significant pathway at or near room temperature. However, even trace amounts of acids or bases (including water) can catalyze the reaction. Even with rigorous precautions to minimize adventitious moisture or proton sources, vinyl alcohol can only be stored for minutes to hours before it isomerizes to acetaldehyde. (Carbonic acid is another example of a substance that is stable when rigorously pure, but decomposes rapidly due to catalysis by trace moisture.) The tautomerization can also be catalyzed via a photochemical process. These findings suggest that the keto–enol tautomerization is a viable route under atmospheric and stratospheric conditions, relevant to a role for vinyl alcohol in the production of organic acids in the atmosphere. Vinyl alcohol can be stabilized by controlling the water concentration in the system and utilizing the kinetic favorability of the deuterium kinetic isotope effect (kH+/kD+ = 4.75, kH2O/kD2O = 12). Deuterium stabilization can be accomplished through hydrolysis of a ketene precursor in the presence of a slight stoichiometric excess of heavy water (D2O). Studies show that the tautomerization process is significantly inhibited at ambient temperatures (kt ≈ 10−6 M/s), and the half-life of the enol form can easily be increased to t1/2 = 42 minutes for first-order hydrolysis kinetics. Relationship to poly(vinyl alcohol) Because of the instability of vinyl alcohol, the thermoplastic polyvinyl alcohol (PVA or PVOH) is made indirectly by polymerization of vinyl acetate followed by hydrolysis of the ester bonds (Ac = acetyl; HOAc = acetic acid): As a ligand Several metal complexes are known that contain vinyl alcohol as a ligand. One example is Pt(acac)(η2-C2H3OH)Cl. Occurrence in interstellar medium Vinyl alcohol was detected in the molecular cloud Sagittarius B in 2001, the last of the three stable isomers of C2H4O (after acetaldehyde and ethylene oxide) to be detected in space. Its stability in the (dilute) interstellar medium shows that its tautomerization does not happen unimolecularly, a fact attributed to the size of the activation energy barrier to the rearrangement being insurmountable at temperatures present in interstellar space. The vinyl alcohol to acetaldehyde rearrangement is the only keto-enol tautomerisation to have been detected in deep space, induced by the provision of secondary electrons from galactic cosmic rays. References Enols Vinyl compounds Organic compounds with 2 carbon atoms
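The 42.7 kJ/mol stability gap and the 30-minute gas-phase half-life quoted above pin down, respectively, the keto–enol equilibrium ratio and a first-order rate constant; a short sketch of the arithmetic:

```python
import math

R = 8.314    # J/(mol*K), gas constant
T = 298.15   # K, room temperature
dG = 42.7e3  # J/mol, acetaldehyde more stable than vinyl alcohol

# Equilibrium ratio [vinyl alcohol]/[acetaldehyde] = exp(-dG/RT)
K = math.exp(-dG / (R * T))
print(f"enol/keto ratio at equilibrium: {K:.2e}")    # ~3e-8

# First-order rate constant from the 30 min gas-phase half-life
k = math.log(2) / (30 * 60)                          # s^-1
print(f"isomerization rate constant: {k:.2e} s^-1")  # ~3.9e-4 s^-1
```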
Vinyl alcohol
[ "Chemistry" ]
788
[ "Enols", "Organic compounds", "Functional groups", "Organic compounds with 2 carbon atoms" ]
24,053,892
https://en.wikipedia.org/wiki/C14H16N2
The molecular formula C14H16N2 (molar mass: 212.29 g/mol) may refer to: Atipamezole Diphenylethylenediamine Ergoline Naphthylpiperazine RS134-49 Tolidine Molecular formulas
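The quoted molar mass can be verified from standard atomic weights; a minimal sketch (weights hard-coded from the older CODATA values, which reproduce the 212.29 figure):

```python
# Standard atomic weights (g/mol), older CODATA values
WEIGHTS = {"C": 12.0107, "H": 1.00794, "N": 14.0067}

def molar_mass(counts):
    """counts: dict of element -> atom count, e.g. C14H16N2."""
    return sum(WEIGHTS[el] * n for el, n in counts.items())

print(f"{molar_mass({'C': 14, 'H': 16, 'N': 2}):.2f} g/mol")  # 212.29
```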
C14H16N2
[ "Physics", "Chemistry" ]
73
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,053,915
https://en.wikipedia.org/wiki/C20H26N4O
The molecular formula C20H26N4O (molar mass: 338.45 g/mol, exact mass: 338.2107 u) may refer to: Lisuride Molecular formulas
C20H26N4O
[ "Physics", "Chemistry" ]
58
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,053,935
https://en.wikipedia.org/wiki/C19H23N3O2
The molecular formula C19H23N3O2 (molar mass: 325.41 g/mol) may refer to: ABT-670, a potent, orally bioavailable dopamine agonist; Ergometrine, a primary ergot and morning glory alkaloid; Ergometrinine, an ergot alkaloid. Molecular formulas
C19H23N3O2
[ "Physics", "Chemistry" ]
96
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,053,946
https://en.wikipedia.org/wiki/Damped%20Lyman-alpha%20system
Damped Lyman alpha systems, or damped Lyman alpha absorption systems (DLAs), is a term used by astronomers for concentrations of neutral hydrogen gas that are detected in the spectra of quasars – a class of distant active galactic nuclei. They are defined to be systems where the column density (density projected along the line of sight to the quasar) of hydrogen is larger than 2 × 10²⁰ atoms/cm². The observed spectra consist of neutral hydrogen Lyman alpha absorption lines which are broadened by radiation damping. These systems can be observed in quantity at relatively high redshifts of 2–4, when they contained most of the neutral hydrogen in the universe. They are believed to be associated with the early stages of galaxy formation, as the high neutral hydrogen column densities of DLAs are also typical of sightlines in the Milky Way and other nearby galaxies. Since they are observed in absorption rather than in emission from their stars, they offer the opportunity to study the dynamics of the gas in early galaxies directly. See also Lyman-alpha blob Lyman-alpha emitter Lyman-alpha forest Lyman-break galaxy References Physical cosmology Astronomical spectroscopy Intergalactic media
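At the redshifts quoted above, the rest-frame 121.567 nm Lyman alpha line moves into the near-UV and optical, which is what makes ground-based observation toward quasars practical; a quick sketch of the standard (1 + z) scaling:

```python
LYA_REST_NM = 121.567  # rest-frame Lyman alpha wavelength (nm)

def observed_wavelength(z):
    """Observed wavelength of Lyman alpha at redshift z."""
    return LYA_REST_NM * (1 + z)

for z in (2.0, 3.0, 4.0):
    print(f"z = {z}: Lyman alpha observed at {observed_wavelength(z):.0f} nm")
# z = 2 -> ~365 nm, z = 4 -> ~608 nm: within reach of optical telescopes
```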
Damped Lyman-alpha system
[ "Physics", "Chemistry", "Astronomy" ]
236
[ "Astronomical sub-disciplines", "Spectrum (physical sciences)", "Outer space", "Theoretical physics", "Intergalactic media", "Astrophysics", "Astronomical spectroscopy", "Spectroscopy", "Physical cosmology" ]
24,058,550
https://en.wikipedia.org/wiki/Amide%20reduction
Amide reduction is a reaction in organic synthesis where an amide is reduced to either an amine or an aldehyde functional group. Catalytic hydrogenation Catalytic hydrogenation can be used to reduce amides to amines; however, the process often requires high hydrogenation pressures and reaction temperatures to be effective (i.e. often requiring pressures above 197 atm and temperatures exceeding 200 °C). Selective catalysts for the reaction include copper chromite, rhenium trioxide and rhenium(VII) oxide, or bimetallic catalysts. Amines from other hydride sources Reducing agents able to effect this reaction include metal hydrides such as lithium aluminium hydride, or lithium borohydride in mixed solvents of tetrahydrofuran and methanol. Iron catalysis by triiron dodecacarbonyl in combination with polymethylhydrosiloxane has been reported. Lawesson's reagent converts amides to thioamides, which are then catalytically desulfurized. Noncatalytic routes to aldehydes Some amides can be reduced to aldehydes in the Sonn-Müller method, but most routes to aldehydes involve a well-chosen organometallic reductant. With an excess of the amide relative to the hydride, lithium aluminium hydride reduces N,N-disubstituted amides to an aldehyde: R(CO)NRR' + LiAlH4 → RCHO + HNRR' With further reduction the alcohol is obtained. Schwartz's reagent reduces amides to aldehydes, and so does hydrosilylation with a suitable catalyst. References External links Amide reduction @ organic-chemistry.org Organic redox reactions
Amide reduction
[ "Chemistry" ]
364
[ "Organic redox reactions", "Organic reactions" ]
24,059,053
https://en.wikipedia.org/wiki/Agaricus%20benesii
Agaricus benesii is an agaric mushroom of the genus Agaricus known in English as the mull mushroom. This mushroom can be distinguished by a white cap that bruises pinkish-red when injured, a scaly lower stipe, and a conifer habitat. Similar to Agaricus californicus and A. xanthodermus, the cap discolors brown in age. A distinguishing feature of A. californicus and A. xanthodermus, however, is a thickened annulus at the margin, a phenolic odor, and a bruise that yellows instead of reddening. The yellowing occurs quickly in Agaricus xanthodermus, though faintly or not at all in A. californicus. Another similar species, Agaricus bernardii, also stains red and has white flesh, but is distinguished by its larger bulk, a sheathing veil, a briny odor, and a different habitat, namely grass. Description The cap is broad; it has a convex shape which, in age, becomes flat. The flesh is white, moderately thick, and firm. The odor is pungent, even though the mushroom has a mild taste. When injured, it turns a pinkish-red. The surface is white, dry, and innately fibrillose. At the margin, it is finely scaled, though it discolors into a brownish shade in age. The stem is tall and thick, and extends to the enlarged base. At maturity, the stem is stuffed. The surface is white, and turns smooth at the apex, while it is finely scaled below. The partial veil is white, membranous, and two-layered. The upper surface is striate, while the lower surface is composed of scaly patches, forming a small, superior annulus. The flesh is white, though it stains red quickly when injured. The gills are initially unattached to the stem, packed close together, and are pinkish-brown; in age they become blackish-brown. Spores are 5–6 by 3–4 μm, smooth, and elliptical. The spore print is a blackish-brown color. Habitat Found under Monterey Cypress and pines, Agaricus benesii often occurs in small groups or alone. It fruits from mid to late winter. See also List of Agaricus species References Specific General benesii Fungi described in 1925 Fungi of Europe Fungus species
Agaricus benesii
[ "Biology" ]
514
[ "Fungi", "Fungus species" ]
854,272
https://en.wikipedia.org/wiki/Creatine%20kinase
Creatine kinase (CK), also known as creatine phosphokinase (CPK) or phosphocreatine kinase, is an enzyme (EC 2.7.3.2) expressed by various tissues and cell types. CK catalyses the transfer of a phosphate group from adenosine triphosphate (ATP) to creatine, producing phosphocreatine (PCr) and adenosine diphosphate (ADP). This CK enzyme reaction is reversible and thus ATP can be generated from PCr and ADP. In tissues and cells that consume ATP rapidly, especially skeletal muscle, but also brain, photoreceptor cells of the retina, hair cells of the inner ear, spermatozoa and smooth muscle, PCr serves as an energy reservoir for the rapid buffering and regeneration of ATP in situ, as well as for intracellular energy transport by the PCr shuttle or circuit. Thus creatine kinase is an important enzyme in such tissues. Clinically, creatine kinase is assayed in blood tests as a marker of damage to CK-rich tissue, such as in myocardial infarction (heart attack), rhabdomyolysis (severe muscle breakdown), muscular dystrophy, autoimmune myositides, and acute kidney injury. Types In the cells, the cytosolic CK enzymes consist of two subunits, which can be either B (brain type) or M (muscle type). There are, therefore, three different isoenzymes: CK-MM, CK-BB and CK-MB. The genes for these subunits are located on different chromosomes: B on 14q32 and M on 19q13. In addition to those three cytosolic CK isoforms, there are two mitochondrial creatine kinase isoenzymes, the ubiquitous form and the sarcomeric form. The functional entity of the mitochondrial CK isoforms is an octamer consisting of four dimers. While mitochondrial creatine kinase is directly involved in the formation of phosphocreatine from mitochondrial ATP, cytosolic CK regenerates ATP from ADP, using PCr. This happens at intracellular sites where ATP is used in the cell, with CK acting as an in situ ATP regenerator. Isoenzyme patterns differ between tissues. Skeletal muscle expresses CK-MM (98%) and low levels of CK-MB (1%). The myocardium (heart muscle), in contrast, expresses CK-MM at 70% and CK-MB at 25–30%. CK-BB is predominantly expressed in brain and smooth muscle, including vascular and uterine tissue. Protein structure The first structure of a creatine kinase solved by X-ray protein crystallography was that of the octameric, sarcomeric muscle-type mitochondrial CK (s-mtCK) in 1996, followed by the structure of ubiquitous mitochondrial CK (u-mtCK) in 2000. The atomic structure of the banana-shaped, dimeric cytosolic brain-type BB-CK was solved in 1999 at a resolution of 1.4 Å. Cytosolic BB-CK and muscle-type MM-CK both form banana-shaped symmetric dimers, with one catalytic active site in each subunit. Functions Mitochondrial creatine kinase (CKm) is present in the mitochondrial intermembrane space, where it regenerates phosphocreatine (PCr) from mitochondrially generated ATP and creatine (Cr) imported from the cytosol. Apart from the two mitochondrial CK isoenzyme forms, that is, ubiquitous mtCK (present in non-muscle tissues) and sarcomeric mtCK (present in sarcomeric muscle), there are three cytosolic CK isoforms present in the cytosol, depending on the tissue. Whereas MM-CK is expressed in sarcomeric muscle, that is, skeletal and cardiac muscle, MB-CK is expressed in cardiac muscle, and BB-CK is expressed in smooth muscle and in most non-muscle tissues. Mitochondrial mtCK and cytosolic CK are connected in a so-called PCr/Cr-shuttle or circuit. 
PCr generated by mtCK in mitochondria is shuttled to cytosolic CK that is coupled to ATP-dependent processes, e.g. ATPases, such as the acto-myosin ATPase and calcium ATPase involved in muscle contraction, and the sodium/potassium ATPase involved in sodium retention in the kidney. The bound cytosolic CK accepts the PCr shuttled through the cell and uses ADP to regenerate ATP, which can then be used as an energy source by the ATPases (CK is associated intimately with the ATPases, forming a functionally coupled microcompartment). PCr is not only an energy buffer but also a cellular transport form of energy between subcellular sites of energy (ATP) production (mitochondria and glycolysis) and those of energy utilization (ATPases). Thus, CK enhances skeletal, cardiac, and smooth muscle contractility, and is involved in the generation of blood pressure. Further, the ADP-scavenging action of creatine kinase has been implicated in bleeding disorders: persons with highly elevated plasma CK could be prone to major bleeding. Laboratory testing CK is often determined routinely in a medical laboratory. It used to be determined specifically in patients with chest pain to recognize acute myocardial infarction, but this test has been largely replaced by troponin. Normal values at rest are usually between 60 and 400 IU/L, where one international unit (IU) of enzyme activity is the amount of enzyme that will catalyze 1 μmol of substrate per minute under specified conditions (temperature, pH, substrate concentrations, and activators). This test is not specific for the type of CK that is elevated. Creatine kinase in the blood may be high in health and disease. Exercise increases the outflow of creatine kinase into the blood stream for up to a week, and this is the most common cause of high CK in blood. Furthermore, high CK in the blood may reflect a high intracellular CK content, such as in persons of African descent. Finally, high CK in the blood may be an indication of damage to CK-rich tissue, such as in rhabdomyolysis, myocardial infarction, myositis and myocarditis. This means creatine kinase in blood may be elevated in a wide range of clinical conditions, including the use of medication such as statins; endocrine disorders such as hypothyroidism; and skeletal muscle diseases and disorders including malignant hyperthermia and neuroleptic malignant syndrome. Furthermore, isoenzyme determination has in the past been used extensively as an indication of myocardial damage in heart attacks. Troponin measurement has largely replaced this in many hospitals, although some centers still rely on CK-MB. Nomenclature This enzyme is often listed in medical literature under the incorrect name "creatinine kinase". Creatinine is not a substrate or a product of the enzyme. See also Reference ranges for blood tests CPK-MB test References External links Simply stated at mdausa.org CPK isoenzymes test CK at Lab Tests Online Chemical pathology EC 2.7.3
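Since one IU corresponds to 1 μmol of substrate turned over per minute, a lab value in IU/L converts directly to SI catalytic units (katal); a small sketch with a hypothetical reading:

```python
def iu_per_l_to_nkat_per_l(iu_per_l):
    """1 IU = 1 umol/min = (1e-6 / 60) mol/s, i.e. about 16.67 nkat."""
    return iu_per_l * 1e3 / 60.0   # nmol/s per litre = nkat/L

reading = 250.0                     # hypothetical CK result, IU/L
print(f"{reading} IU/L = {iu_per_l_to_nkat_per_l(reading):.1f} nkat/L")
print("within 60-400 IU/L resting range:", 60 <= reading <= 400)
```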
Creatine kinase
[ "Chemistry", "Biology" ]
1,535
[ "Biochemistry", "Chemical pathology" ]
854,294
https://en.wikipedia.org/wiki/DNA%20repair
DNA repair is a collection of processes by which a cell identifies and corrects damage to the DNA molecules that encode its genome. In human cells, both normal metabolic activities and environmental factors such as radiation can cause DNA damage, resulting in tens of thousands of individual molecular lesions per cell per day. Many of these lesions cause structural damage to the DNA molecule and can alter or eliminate the cell's ability to transcribe the gene that the affected DNA encodes. Other lesions induce potentially harmful mutations in the cell's genome, which affect the survival of its daughter cells after it undergoes mitosis. As a consequence, the DNA repair process is constantly active as it responds to damage in the DNA structure. When normal repair processes fail, and when cellular apoptosis does not occur, irreparable DNA damage may accumulate. This can eventually lead to malignant tumors, or cancer, as per the two-hit hypothesis. The rate of DNA repair depends on various factors, including the cell type, the age of the cell, and the extracellular environment. A cell that has accumulated a large amount of DNA damage or can no longer effectively repair its DNA may enter one of three possible states: an irreversible state of dormancy, known as senescence; cell suicide, also known as apoptosis or programmed cell death; or unregulated cell division, which can lead to the formation of a cancerous tumor. The DNA repair ability of a cell is vital to the integrity of its genome and thus to the normal functionality of that organism. Many genes that were initially shown to influence life span have turned out to be involved in DNA damage repair and protection. The 2015 Nobel Prize in Chemistry was awarded to Tomas Lindahl, Paul Modrich, and Aziz Sancar for their work on the molecular mechanisms of DNA repair processes. DNA damage DNA damage, due to environmental factors and normal metabolic processes inside the cell, occurs at a rate of 10,000 to 1,000,000 molecular lesions per cell per day. While even the lower bound of this range constitutes only 0.0003125% of the human genome's approximately 3.2 billion bases, unrepaired lesions in critical genes (such as tumor suppressor genes) can impede a cell's ability to carry out its function, appreciably increase the likelihood of tumor formation, and contribute to tumor heterogeneity. The vast majority of DNA damage affects the primary structure of the double helix; that is, the bases themselves are chemically modified. These modifications can in turn disrupt the molecules' regular helical structure by introducing non-native chemical bonds or bulky adducts that do not fit in the standard double helix. Unlike proteins and RNA, DNA usually lacks tertiary structure and therefore damage or disturbance does not occur at that level. DNA is, however, supercoiled and wound around "packaging" proteins called histones (in eukaryotes), and both superstructures are vulnerable to the effects of DNA damage. 
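The 0.0003125% figure corresponds to the lower bound of 10,000 lesions per day against roughly 3.2 billion bases; a one-liner to confirm the arithmetic:

```python
lesions_per_day = 10_000   # lower bound quoted above
genome_bases = 3.2e9       # approximate human genome size

fraction = lesions_per_day / genome_bases
print(f"{fraction * 100:.7f}% of bases per day")  # 0.0003125%
```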
Sources DNA damage can be subdivided into two main types: endogenous damage, such as attack by reactive oxygen species produced from normal metabolic byproducts (spontaneous mutation), especially the process of oxidative deamination; endogenous damage also includes replication errors; and exogenous damage caused by external agents, such as ultraviolet (UV) radiation (200–400 nm) from the sun or other artificial light sources; other radiation frequencies, including x-rays and gamma rays; hydrolysis or thermal disruption; certain plant toxins; human-made mutagenic chemicals, especially aromatic compounds that act as DNA intercalating agents; and viruses. The replication of damaged DNA before cell division can lead to the incorporation of wrong bases opposite damaged ones. Daughter cells that inherit these wrong bases carry mutations from which the original DNA sequence is unrecoverable (except in the rare case of a back mutation, for example, through gene conversion). Types There are several types of damage to DNA due to endogenous cellular processes: oxidation of bases [e.g. 8-oxo-7,8-dihydroguanine (8-oxoG)] and generation of DNA strand interruptions from reactive oxygen species; alkylation of bases (usually methylation), such as formation of 7-methylguanosine, 1-methyladenine, and O6-methylguanine; hydrolysis of bases, such as deamination, depurination, and depyrimidination; "bulky adduct formation" (e.g., benzo[a]pyrene diol epoxide-dG adduct, aristolactam I-dA adduct); mismatch of bases, due to errors in DNA replication, in which the wrong DNA base is stitched into place in a newly forming DNA strand, or a DNA base is skipped over or mistakenly inserted; monoadduct damage, caused by a change in a single nitrogenous base of DNA; and diadduct damage. Damage caused by exogenous agents comes in many forms. Some examples are: Absorption of UV light directly by DNA induces photochemical reactions, leading to the formation of pyrimidine dimers, and photoionization, provoking oxidative damage. UV-A light creates mostly free radicals. The damage caused by free radicals is called indirect DNA damage. Ionizing radiation such as that created by radioactive decay or in cosmic rays causes breaks in DNA strands. Intermediate-level ionizing radiation may induce irreparable DNA damage (leading to replicational and transcriptional errors needed for neoplasia, or possibly triggering viral interactions), leading to premature aging and cancer. Thermal disruption at elevated temperature increases the rate of depurination (loss of purine bases from the DNA backbone) and single-strand breaks. For example, hydrolytic depurination is seen in thermophilic bacteria, which grow in hot springs at 40–80 °C. The rate of depurination (300 purine residues per genome per generation) is too high in these species to be repaired by normal repair machinery, hence a possibility of an adaptive response cannot be ruled out. Industrial chemicals such as vinyl chloride and hydrogen peroxide, and environmental chemicals such as polycyclic aromatic hydrocarbons found in smoke, soot and tar, create a huge diversity of DNA adducts – ethanoates, oxidized bases, alkylated phosphodiesters and crosslinking of DNA, just to name a few. UV damage, alkylation/methylation, X-ray damage and oxidative damage are examples of induced damage. Spontaneous damage can include the loss of a base, deamination, sugar ring puckering and tautomeric shift. Constitutive (spontaneous) DNA damage caused by endogenous oxidants can be detected as a low level of histone H2AX phosphorylation in untreated cells. 
Nuclear versus mitochondrial In eukaryotic cells, DNA is found in two cellular locations – inside the nucleus and inside the mitochondria. Nuclear DNA (nDNA) exists as chromatin during non-replicative stages of the cell cycle and is condensed into aggregate structures known as chromosomes during cell division. In either state the DNA is highly compacted and wound up around bead-like proteins called histones. Whenever a cell needs to express the genetic information encoded in its nDNA, the required chromosomal region is unraveled, genes located therein are expressed, and then the region is condensed back to its resting conformation. Mitochondrial DNA (mtDNA) is located inside mitochondria organelles, exists in multiple copies, and is also tightly associated with a number of proteins to form a complex known as the nucleoid. Inside mitochondria, reactive oxygen species (ROS), or free radicals, byproducts of the constant production of adenosine triphosphate (ATP) via oxidative phosphorylation, create a highly oxidative environment that is known to damage mtDNA. A critical enzyme in counteracting the toxicity of these species is superoxide dismutase, which is present in both the mitochondria and cytoplasm of eukaryotic cells. Senescence and apoptosis Senescence, an irreversible process in which the cell no longer divides, is a protective response to the shortening of the chromosome ends, called telomeres. The telomeres are long regions of repetitive noncoding DNA that cap chromosomes and undergo partial degradation each time a cell undergoes division (see Hayflick limit). In contrast, quiescence is a reversible state of cellular dormancy that is unrelated to genome damage (see cell cycle). Senescence in cells may serve as a functional alternative to apoptosis in cases where the physical presence of a cell is required by the organism for spatial reasons; it acts as a "last resort" mechanism to prevent a cell with damaged DNA from replicating inappropriately in the absence of pro-growth cellular signaling. Unregulated cell division can lead to the formation of a tumor (see cancer), which is potentially lethal to an organism. Therefore, the induction of senescence and apoptosis is considered to be part of a strategy of protection against cancer. Mutation It is important to distinguish between DNA damage and mutation, the two major types of error in DNA. DNA damage and mutation are fundamentally different. Damage results in physical abnormalities in the DNA, such as single- and double-strand breaks, 8-hydroxydeoxyguanosine residues, and polycyclic aromatic hydrocarbon adducts. DNA damage can be recognized by enzymes, and thus can be correctly repaired if redundant information, such as the undamaged sequence in the complementary DNA strand or in a homologous chromosome, is available for copying. If a cell retains DNA damage, transcription of a gene can be prevented, and thus translation into a protein will also be blocked. Replication may also be blocked or the cell may die. In contrast to DNA damage, a mutation is a change in the base sequence of the DNA. A mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and thus a mutation cannot be repaired. At the cellular level, mutations can cause alterations in protein function and regulation. Mutations are replicated when the cell replicates. 
In a population of cells, mutant cells will increase or decrease in frequency according to the effects of the mutation on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damage and mutation are related because DNA damage often causes errors of DNA synthesis during replication or repair; these errors are a major source of mutation. Given these properties of DNA damage and mutation, it can be seen that DNA damage is a special problem in non-dividing or slowly-dividing cells, where unrepaired damage will tend to accumulate over time. On the other hand, in rapidly dividing cells, unrepaired DNA damage that does not kill the cell by blocking replication will tend to cause replication errors and thus mutation. The great majority of mutations that are not neutral in their effect are deleterious to a cell's survival. Thus, in a population of cells composing a tissue with replicating cells, mutant cells will tend to be lost. However, infrequent mutations that provide a survival advantage will tend to clonally expand at the expense of neighboring cells in the tissue. This advantage to the cell is disadvantageous to the whole organism because such mutant cells can give rise to cancer. Thus, DNA damage in frequently dividing cells, because it gives rise to mutations, is a prominent cause of cancer. In contrast, DNA damage in infrequently-dividing cells is likely a prominent cause of aging. Mechanisms Cells cannot function if DNA damage corrupts the integrity and accessibility of essential information in the genome (but cells remain superficially functional when non-essential genes are missing or damaged). Depending on the type of damage inflicted on the DNA's double helical structure, a variety of repair strategies have evolved to restore lost information. If possible, cells use the unmodified complementary strand of the DNA or the sister chromatid as a template to recover the original information. Without access to a template, cells use an error-prone recovery mechanism known as translesion synthesis as a last resort. Damage to DNA alters the spatial configuration of the helix, and such alterations can be detected by the cell. Once damage is localized, specific DNA repair molecules bind at or near the site of damage, inducing other molecules to bind and form a complex that enables the actual repair to take place. Direct reversal Cells are known to eliminate three types of damage to their DNA by chemically reversing it. These mechanisms do not require a template, since the types of damage they counteract can occur in only one of the four bases. Such direct reversal mechanisms are specific to the type of damage incurred and do not involve breakage of the phosphodiester backbone. The formation of pyrimidine dimers upon irradiation with UV light results in an abnormal covalent bond between adjacent pyrimidine bases. The photoreactivation process directly reverses this damage by the action of the enzyme photolyase, whose activation is obligately dependent on energy absorbed from blue/UV light (300–500 nm wavelength) to promote catalysis. Photolyase, an ancient enzyme present in bacteria, fungi, and most animals, no longer functions in humans, who instead use nucleotide excision repair to repair damage from UV irradiation. Another type of damage, methylation of guanine bases, is directly reversed by the enzyme methyl guanine methyl transferase (MGMT), the bacterial equivalent of which is called ogt. 
This is an expensive process because each MGMT molecule can be used only once; that is, the reaction is stoichiometric rather than catalytic. A generalized response to methylating agents in bacteria is known as the adaptive response and confers a level of resistance to alkylating agents upon sustained exposure by upregulation of alkylation repair enzymes. The third type of DNA damage reversed by cells is certain methylation of the bases cytosine and adenine. Single-strand damage When only one of the two strands of a double helix has a defect, the other strand can be used as a template to guide the correction of the damaged strand. In order to repair damage to one of the two paired molecules of DNA, there exist a number of excision repair mechanisms that remove the damaged nucleotide and replace it with an undamaged nucleotide complementary to that found in the undamaged DNA strand. Base excision repair (BER): damaged single bases or nucleotides are most commonly repaired by removing the base or the nucleotide involved and then inserting the correct base or nucleotide. In base excision repair, a glycosylase enzyme removes the damaged base from the DNA by cleaving the bond between the base and the deoxyribose. These enzymes remove a single base to create an apurinic or apyrimidinic site (AP site). Enzymes called AP endonucleases nick the damaged DNA backbone at the AP site. DNA polymerase then removes the damaged region using its 5' to 3' exonuclease activity and correctly synthesizes the new strand using the complementary strand as a template. The gap is then sealed by the enzyme DNA ligase. Nucleotide excision repair (NER): bulky, helix-distorting damage, such as pyrimidine dimerization caused by UV light, is usually repaired by a three-step process. First the damage is recognized, then a 12–24 nucleotide-long strand of DNA is removed by endonuclease cuts upstream and downstream of the damage site, and the removed DNA region is then resynthesized. NER is a highly evolutionarily conserved repair mechanism and is used in nearly all eukaryotic and prokaryotic cells. In prokaryotes, NER is mediated by Uvr proteins. In eukaryotes, many more proteins are involved, although the general strategy is the same. Mismatch repair systems are present in essentially all cells to correct errors that are not corrected by proofreading. These systems consist of at least two proteins. One detects the mismatch, and the other recruits an endonuclease that cleaves the newly synthesized DNA strand close to the region of damage. In E. coli, the proteins involved are the Mut class proteins: MutS, MutL, and MutH. In most eukaryotes, the analog for MutS is MSH and the analog for MutL is MLH. MutH is only present in bacteria. This is followed by removal of the damaged region by an exonuclease, resynthesis by DNA polymerase, and nick sealing by DNA ligase. Double-strand breaks Double-strand breaks, in which both strands in the double helix are severed, are particularly hazardous to the cell because they can lead to genome rearrangements. In fact, when a double-strand break is accompanied by a cross-linkage joining the two strands at the same point, neither strand can be used as a template for the repair mechanisms, so that the cell will not be able to complete mitosis when it next divides, and will either die or, in rare cases, undergo a mutation. 
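Mismatch detection, at its core, is a comparison of the newly synthesized strand against the complement of the template. A toy Python illustration of just that detection step (sequences invented; real MutS recognition is structural, not string comparison):

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def mismatch_positions(new_strand, template):
    """Positions where new_strand fails to pair with template.
    Strands are written 5'->3' and 3'->5' respectively, same length."""
    expected = template.translate(COMPLEMENT)
    return [i for i, (a, b) in enumerate(zip(new_strand, expected)) if a != b]

template = "TACGGCAT"   # template strand, 3'->5'
new      = "ATGCCGTT"   # newly synthesized strand; last base should be A
print(mismatch_positions(new, template))  # [7]
```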
Three mechanisms exist to repair double-strand breaks (DSBs): non-homologous end joining (NHEJ), microhomology-mediated end joining (MMEJ), and homologous recombination (HR): In NHEJ, DNA Ligase IV, a specialized DNA ligase that forms a complex with the cofactor XRCC4, directly joins the two ends. To guide accurate repair, NHEJ relies on short homologous sequences called microhomologies present on the single-stranded tails of the DNA ends to be joined. If these overhangs are compatible, repair is usually accurate. NHEJ can also introduce mutations during repair. Loss of damaged nucleotides at the break site can lead to deletions, and joining of nonmatching termini forms insertions or translocations. NHEJ is especially important before the cell has replicated its DNA, since there is no template available for repair by homologous recombination. There are "backup" NHEJ pathways in higher eukaryotes. Besides its role as a genome caretaker, NHEJ is required for joining hairpin-capped double-strand breaks induced during V(D)J recombination, the process that generates diversity in B-cell and T-cell receptors in the vertebrate immune system. MMEJ starts with short-range end resection by the MRE11 nuclease on either side of a double-strand break to reveal microhomology regions. In further steps, Poly (ADP-ribose) polymerase 1 (PARP1) is required and may act in an early step of MMEJ. There is pairing of microhomology regions, followed by recruitment of flap structure-specific endonuclease 1 (FEN1) to remove overhanging flaps. This is followed by recruitment of XRCC1–LIG3 to the site for ligating the DNA ends, leading to an intact DNA. MMEJ is always accompanied by a deletion, so that MMEJ is a mutagenic pathway for DNA repair. HR requires the presence of an identical or nearly identical sequence to be used as a template for repair of the break. The enzymatic machinery responsible for this repair process is nearly identical to the machinery responsible for chromosomal crossover during meiosis. This pathway allows a damaged chromosome to be repaired using a sister chromatid (available in G2 after DNA replication) or a homologous chromosome as a template. DSBs caused by the replication machinery attempting to synthesize across a single-strand break or unrepaired lesion cause collapse of the replication fork and are typically repaired by recombination. In an in vitro system, MMEJ occurred in mammalian cells at levels of 10–20% of HR when both HR and NHEJ mechanisms were also available. The extremophile Deinococcus radiodurans has a remarkable ability to survive DNA damage from ionizing radiation and other sources. At least two copies of the genome, with random DNA breaks, can form DNA fragments through annealing. Partially overlapping fragments are then used for synthesis of homologous regions through a moving D-loop that can continue extension until complementary partner strands are found. In the final step, there is crossover by means of RecA-dependent homologous recombination. Topoisomerases introduce both single- and double-strand breaks in the course of changing the DNA's state of supercoiling, which is especially common in regions near an open replication fork. Such breaks are not considered DNA damage because they are a natural intermediate in the topoisomerase biochemical mechanism and are immediately repaired by the enzymes that created them. Another type of DNA double-strand break originates from heat-sensitive or heat-labile DNA sites. These DNA sites are not initial DSBs. 
However, they convert to DSBs after treatment at elevated temperature. Ionizing irradiation can induce a highly complex form of DNA damage known as clustered damage. It consists of different types of DNA lesions in various locations of the DNA helix. Some of these closely located lesions can probably convert to DSBs by exposure to high temperatures, but the exact nature of these lesions and their interactions is not yet known. Translesion synthesis Translesion synthesis (TLS) is a DNA damage tolerance process that allows the DNA replication machinery to replicate past DNA lesions such as thymine dimers or AP sites. It involves switching out regular DNA polymerases for specialized translesion polymerases (i.e. DNA polymerase IV or V, from the Y polymerase family), often with larger active sites that can facilitate the insertion of bases opposite damaged nucleotides. The polymerase switching is thought to be mediated by, among other factors, the post-translational modification of the replication processivity factor PCNA. Translesion synthesis polymerases often have low fidelity (high propensity to insert wrong bases) on undamaged templates relative to regular polymerases. However, many are extremely efficient at inserting correct bases opposite specific types of damage. For example, Pol η mediates error-free bypass of lesions induced by UV irradiation, whereas Pol ι introduces mutations at these sites. Pol η is known to add the first adenine across the T^T photodimer using Watson-Crick base pairing, and the second adenine is added in its syn conformation using Hoogsteen base pairing. From a cellular perspective, risking the introduction of point mutations during translesion synthesis may be preferable to resorting to more drastic mechanisms of DNA repair, which may cause gross chromosomal aberrations or cell death. In short, the process involves specialized polymerases either bypassing or repairing lesions at locations of stalled DNA replication. For example, human DNA polymerase eta can bypass complex DNA lesions like the guanine-thymine intra-strand crosslink G[8,5-Me]T, although it can cause targeted and semi-targeted mutations. Paromita Raychaudhury and Ashis Basu studied the toxicity and mutagenesis of the same lesion in Escherichia coli by replicating a G[8,5-Me]T-modified plasmid in E. coli with specific DNA polymerase knockouts. Viability was very low in a strain lacking pol II, pol IV, and pol V, the three SOS-inducible DNA polymerases, indicating that translesion synthesis is conducted primarily by these specialized DNA polymerases. A bypass platform is provided to these polymerases by proliferating cell nuclear antigen (PCNA). Under normal circumstances, PCNA bound to polymerases replicates the DNA. At a site of lesion, PCNA is ubiquitinated, or modified, by the RAD6/RAD18 proteins to provide a platform for the specialized polymerases to bypass the lesion and resume DNA replication. After translesion synthesis, extension is required. This extension can be carried out by a replicative polymerase if the TLS is error-free, as in the case of Pol η, yet if TLS results in a mismatch, a specialized polymerase, Pol ζ, is needed to extend it. Pol ζ is unique in that it can extend terminal mismatches, whereas more processive polymerases cannot. 
So when a lesion is encountered, the replication fork will stall, PCNA will switch from a processive polymerase to a TLS polymerase such as Pol ι to fix the lesion, then PCNA may switch to Pol ζ to extend the mismatch, and last PCNA will switch to the processive polymerase to continue replication. Global response to DNA damage Cells exposed to ionizing radiation, ultraviolet light or chemicals are prone to acquire multiple sites of bulky DNA lesions and double-strand breaks. Moreover, DNA damaging agents can damage other biomolecules such as proteins, carbohydrates, lipids, and RNA. The accumulation of damage, specifically double-strand breaks or adducts stalling the replication forks, is among the known stimulation signals for a global response to DNA damage. The global response to damage is an act directed toward the cells' own preservation and triggers multiple pathways of macromolecular repair, lesion bypass, tolerance, or apoptosis. The common features of global response are induction of multiple genes, cell cycle arrest, and inhibition of cell division. Initial steps The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are two predominant factors employed to accomplish this remodeling process. Chromatin relaxation occurs rapidly at the site of a DNA damage. In one of the earliest steps, the stress-activated protein kinase c-Jun N-terminal kinase (JNK) phosphorylates SIRT6 on serine 10 in response to double-strand breaks or other DNA damage. This post-translational modification facilitates the mobilization of SIRT6 to DNA damage sites, and is required for efficient recruitment of poly (ADP-ribose) polymerase 1 (PARP1) to DNA break sites and for efficient repair of DSBs. PARP1 protein starts to appear at DNA damage sites in less than a second, with half maximum accumulation within 1.6 seconds after the damage occurs. PARP1 synthesizes polymeric adenosine diphosphate ribose (poly (ADP-ribose) or PAR) chains on itself. Next, the chromatin remodeler ALC1 quickly attaches to the product of PARP1 action, a poly-ADP ribose chain, and completes arrival at the DNA damage within 10 seconds of the occurrence of the damage. About half of the maximum chromatin relaxation, presumably due to action of ALC1, occurs by 10 seconds. This then allows recruitment of the DNA repair enzyme MRE11, to initiate DNA repair, within 13 seconds. γH2AX, the phosphorylated form of H2AX, is also involved in the early steps leading to chromatin decondensation after DNA double-strand breaks. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. DDB2 occurs in a heterodimeric complex with DDB1. 
This complex further complexes with the ubiquitin ligase protein CUL4A and with PARP1. This larger complex rapidly associates with UV-induced damage within chromatin, with half-maximum association completed in 40 seconds. The PARP1 protein, attached to both DDB1 and DDB2, then PARylates (creates a poly-ADP ribose chain) on DDB2, which attracts the DNA remodeling protein ALC1. Action of ALC1 relaxes the chromatin at the site of UV damage to DNA. This relaxation allows other proteins in the nucleotide excision repair pathway to enter the chromatin and repair UV-induced cyclobutane pyrimidine dimer damage. After rapid chromatin remodeling, cell cycle checkpoints are activated to allow DNA repair to occur before the cell cycle progresses. First, two kinases, ATM and ATR, are activated within 5 or 6 minutes after DNA is damaged. This is followed by phosphorylation of the cell cycle checkpoint protein Chk1, initiating its function, about 10 minutes after DNA is damaged. DNA damage checkpoints After DNA damage, cell cycle checkpoints are activated. Checkpoint activation pauses the cell cycle and gives the cell time to repair the damage before continuing to divide. DNA damage checkpoints occur at the G1/S and G2/M boundaries. An intra-S checkpoint also exists. Checkpoint activation is controlled by two master kinases, ATM and ATR. ATM responds to DNA double-strand breaks and disruptions in chromatin structure, whereas ATR primarily responds to stalled replication forks. These kinases phosphorylate downstream targets in a signal transduction cascade, eventually leading to cell cycle arrest. A class of checkpoint mediator proteins including BRCA1, MDC1, and 53BP1 has also been identified. These proteins seem to be required for transmitting the checkpoint activation signal to downstream proteins. The DNA damage checkpoint is a signal transduction pathway that blocks cell cycle progression in G1, G2 and metaphase and slows down the rate of S phase progression when DNA is damaged. It leads to a pause in the cell cycle, allowing the cell time to repair the damage before continuing to divide. Checkpoint proteins can be separated into four groups: phosphatidylinositol 3-kinase (PI3K)-like protein kinases, the proliferating cell nuclear antigen (PCNA)-like group, two serine/threonine (S/T) kinases, and their adaptors. Central to all DNA damage-induced checkpoint responses is a pair of large protein kinases belonging to the first group of PI3K-like protein kinases – the ATM (ataxia telangiectasia mutated) and ATR (ATM- and Rad3-related) kinases, whose sequence and functions have been well conserved in evolution. All DNA damage responses require either ATM or ATR because they have the ability to bind to the chromosomes at the site of DNA damage, together with accessory proteins that are platforms on which DNA damage response components and DNA repair complexes can be assembled. An important downstream target of ATM and ATR is p53, as it is required for inducing apoptosis following DNA damage. The cyclin-dependent kinase inhibitor p21 is induced by both p53-dependent and p53-independent mechanisms and can arrest the cell cycle at the G1/S and G2/M checkpoints by deactivating cyclin/cyclin-dependent kinase complexes. The prokaryotic SOS response The SOS response is the change in gene expression in Escherichia coli and other bacteria in response to extensive DNA damage. The prokaryotic SOS system is regulated by two key proteins: LexA and RecA. 
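The recruitment and signaling times scattered through the two preceding paragraphs can be lined up into a single ordered timeline; a minimal Python sketch (the event list simply restates the approximate figures quoted above, nothing here is new data):

```python
# Approximate early-response timeline quoted above (seconds after damage)
timeline = [
    ("PARP1 half-maximum accumulation", 1.6),
    ("ALC1 arrival / half-max chromatin relaxation", 10),
    ("MRE11 recruitment", 13),
    ("gamma-H2AX first detectable", 20),
    ("RNF8 detectable with gamma-H2AX", 30),
    ("DDB1-DDB2 half-max association (UV damage)", 40),
    ("ATM/ATR activation", 5.5 * 60),
    ("Chk1 phosphorylation", 10 * 60),
]
for event, t in sorted(timeline, key=lambda e: e[1]):
    print(f"{t:7.1f} s  {event}")
```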
The LexA homodimer is a transcriptional repressor that binds to operator sequences commonly referred to as SOS boxes. In Escherichia coli it is known that LexA regulates transcription of approximately 48 genes, including the lexA and recA genes. The SOS response is known to be widespread in the Bacteria domain, but it is mostly absent in some bacterial phyla, like the Spirochetes. The most common cellular signals activating the SOS response are regions of single-stranded DNA (ssDNA), arising from stalled replication forks or double-strand breaks, which are processed by DNA helicase to separate the two DNA strands. In the initiation step, RecA protein binds to ssDNA in an ATP hydrolysis driven reaction, creating RecA–ssDNA filaments. RecA–ssDNA filaments activate LexA autoprotease activity, which ultimately leads to cleavage of the LexA dimer and subsequent LexA degradation. The loss of LexA repressor induces transcription of the SOS genes and allows for further signal induction, inhibition of cell division and an increase in levels of proteins responsible for damage processing. In Escherichia coli, SOS boxes are 20-nucleotide-long sequences near promoters with palindromic structure and a high degree of sequence conservation. In other classes and phyla, the sequence of SOS boxes varies considerably, with different length and composition, but it is always highly conserved and one of the strongest short signals in the genome. The high information content of SOS boxes permits differential binding of LexA to different promoters and allows for timing of the SOS response. The lesion repair genes are induced at the beginning of the SOS response. The error-prone translesion polymerases, for example UmuCD'2 (also called DNA polymerase V), are induced later on as a last resort. Once the DNA damage is repaired or bypassed using polymerases or through recombination, the amount of single-stranded DNA in cells is decreased; lowering the amount of RecA filaments decreases the cleavage activity of the LexA homodimer, which then binds to the SOS boxes near promoters and restores normal gene expression. Eukaryotic transcriptional responses to DNA damage Eukaryotic cells exposed to DNA damaging agents also activate important defensive pathways by inducing multiple proteins involved in DNA repair, cell cycle checkpoint control, protein trafficking and degradation. Such a genome-wide transcriptional response is very complex and tightly regulated, thus allowing a coordinated global response to damage. Exposure of the yeast Saccharomyces cerevisiae to DNA damaging agents results in overlapping but distinct transcriptional profiles. Similarities to the environmental shock response indicate that a general global stress response pathway exists at the level of transcriptional activation. In contrast, different human cell types respond to damage differently, indicating an absence of a common global response. The probable explanation for this difference between yeast and human cells may be in the heterogeneity of mammalian cells. In an animal, different types of cells are distributed among different organs that have evolved different sensitivities to DNA damage. In general, the global response to DNA damage involves expression of multiple genes responsible for postreplication repair, homologous recombination, nucleotide excision repair, the DNA damage checkpoint, global transcriptional activation, genes controlling mRNA decay, and many others. 
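The LexA/RecA circuit described above behaves like a sensor thresholded on single-stranded DNA. A deliberately toy Python model of that regulatory logic (threshold and levels are invented for illustration, not measured values):

```python
def sos_state(ssdna_level, threshold=1.0):
    """Toy model of the LexA/RecA switch: ssDNA above threshold ->
    RecA filaments form -> LexA is cleaved -> SOS genes are induced."""
    reca_filaments = ssdna_level > threshold
    lexa_intact = not reca_filaments
    return "SOS genes repressed" if lexa_intact else "SOS genes induced"

for level in (0.2, 0.8, 1.5, 3.0):   # arbitrary ssDNA levels
    print(f"ssDNA = {level}: {sos_state(level)}")
```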
A large amount of damage to a cell leaves it with an important decision: undergo apoptosis and die, or survive at the cost of living with a modified genome. An increase in tolerance to damage can lead to an increased rate of survival that will allow a greater accumulation of mutations. Yeast Rev1 and human polymerase η are members of the Y family of translesion DNA polymerases present during the global response to DNA damage and are responsible for enhanced mutagenesis during a global response to DNA damage in eukaryotes.

Aging
Pathological effects of poor DNA repair
Experimental animals with genetic deficiencies in DNA repair often show decreased life span and increased cancer incidence. For example, mice deficient in the dominant NHEJ pathway and in telomere maintenance mechanisms get lymphoma and infections more often and, as a consequence, have shorter lifespans than wild-type mice. In a similar manner, mice deficient in a key repair and transcription protein that unwinds DNA helices have premature onset of aging-related diseases and consequent shortening of lifespan. However, not every DNA repair deficiency creates exactly the predicted effects; mice deficient in the NER pathway exhibited shortened life span without correspondingly higher rates of mutation. The maximum life spans of mice, naked mole-rats and humans are respectively ~3, ~30 and ~129 years. Of these, the shortest-lived species, the mouse, expresses DNA repair genes, including core genes in several DNA repair pathways, at a lower level than do humans and naked mole-rats. Furthermore, several DNA repair pathways in humans and naked mole-rats are up-regulated compared to the mouse. These observations suggest that elevated DNA repair facilitates greater longevity. If the rate of DNA damage exceeds the capacity of the cell to repair it, the accumulation of errors can overwhelm the cell and result in early senescence, apoptosis, or cancer. Inherited diseases associated with faulty DNA repair functioning result in premature aging, increased sensitivity to carcinogens and correspondingly increased cancer risk (see below). On the other hand, organisms with enhanced DNA repair systems, such as Deinococcus radiodurans, the most radiation-resistant known organism, exhibit remarkable resistance to the double-strand break-inducing effects of radioactivity, likely due to enhanced efficiency of DNA repair and especially NHEJ.

Longevity and caloric restriction
A number of individual genes have been identified as influencing variations in life span within a population of organisms. The effects of these genes are strongly dependent on the environment, in particular on the organism's diet. Caloric restriction reproducibly results in extended lifespan in a variety of organisms, likely via nutrient sensing pathways and decreased metabolic rate. The molecular mechanisms by which such restriction results in lengthened lifespan are as yet unclear (see for some discussion); however, the behavior of many genes known to be involved in DNA repair is altered under conditions of caloric restriction. Several agents reported to have anti-aging properties have been shown to attenuate the constitutive level of mTOR signaling, evidence of a reduction of metabolic activity, and concurrently to reduce the constitutive level of DNA damage induced by endogenously generated reactive oxygen species. For example, increasing the gene dosage of SIR-2, which regulates DNA packaging in the nematode worm Caenorhabditis elegans, can significantly extend lifespan.
The mammalian homolog of SIR-2 is known to induce downstream DNA repair factors involved in NHEJ, an activity that is especially promoted under conditions of caloric restriction. Caloric restriction has been closely linked to the rate of base excision repair in the nuclear DNA of rodents, although similar effects have not been observed in mitochondrial DNA. The C. elegans gene AGE-1, an upstream effector of DNA repair pathways, confers dramatically extended life span under free-feeding conditions but leads to a decrease in reproductive fitness under conditions of caloric restriction. This observation supports the antagonistic pleiotropy theory of the biological origins of aging, which suggests that genes conferring a large survival advantage early in life will be selected for even if they carry a corresponding disadvantage late in life.

Medicine and DNA repair modulation
Hereditary DNA repair disorders
Defects in the NER mechanism are responsible for several genetic disorders, including:
Xeroderma pigmentosum: hypersensitivity to sunlight/UV, resulting in increased skin cancer incidence and premature aging
Cockayne syndrome: hypersensitivity to UV and chemical agents
Trichothiodystrophy: sensitive skin, brittle hair and nails
Mental retardation often accompanies the latter two disorders, suggesting increased vulnerability of developmental neurons.

Other DNA repair disorders include:
Werner's syndrome: premature aging and retarded growth
Bloom's syndrome: sunlight hypersensitivity, high incidence of malignancies (especially leukemias)
Ataxia telangiectasia: sensitivity to ionizing radiation and some chemical agents

All of the above diseases are often called "segmental progerias" ("accelerated aging diseases") because those affected appear elderly and experience aging-related diseases at an abnormally young age, while not manifesting all the symptoms of old age. Other diseases associated with reduced DNA repair function include Fanconi anemia, hereditary breast cancer and hereditary colon cancer.

Cancer
Because of inherent limitations in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. There are at least 34 inherited human DNA repair gene mutations that increase cancer risk. Many of these mutations cause DNA repair to be less effective than normal. In particular, hereditary nonpolyposis colorectal cancer (HNPCC) is strongly associated with specific mutations in the DNA mismatch repair pathway. BRCA1 and BRCA2, two important genes whose mutations confer a greatly increased risk of breast cancer on carriers, are both associated with a large number of DNA repair pathways, especially NHEJ and homologous recombination. Cancer therapy procedures such as chemotherapy and radiotherapy work by overwhelming the capacity of the cell to repair DNA damage, resulting in cell death. Cells that are most rapidly dividing – most typically cancer cells – are preferentially affected. The side-effect is that other non-cancerous but rapidly dividing cells, such as progenitor cells in the gut, skin, and hematopoietic system, are also affected. Modern cancer treatments attempt to localize the DNA damage to cells and tissues only associated with cancer, either by physical means (concentrating the therapeutic agent in the region of the tumor) or by biochemical means (exploiting a feature unique to cancer cells in the body). In the context of therapies targeting DNA damage response genes, the latter approach has been termed 'synthetic lethality'.
Perhaps the best-known of these 'synthetic lethality' drugs is the poly(ADP-ribose) polymerase 1 (PARP1) inhibitor olaparib, which was approved by the Food and Drug Administration in 2015 for the treatment of BRCA-defective ovarian cancer in women. Tumor cells with partial loss of DNA damage response (specifically, homologous recombination repair) are dependent on another mechanism – single-strand break repair – which is a mechanism consisting, in part, of the PARP1 gene product. Olaparib is combined with chemotherapeutics to inhibit single-strand break repair induced by DNA damage caused by the co-administered chemotherapy. Tumor cells relying on this residual DNA repair mechanism are unable to repair the damage and hence are not able to survive and proliferate, whereas normal cells can repair the damage with the functioning homologous recombination mechanism. Many other drugs for use against other residual DNA repair mechanisms commonly found in cancer are currently under investigation. However, synthetic lethality therapeutic approaches have been questioned due to emerging evidence of acquired resistance, achieved through rewiring of DNA damage response pathways and reversion of previously inhibited defects.

DNA repair defects in cancer
It has become apparent over the past several years that the DNA damage response acts as a barrier to the malignant transformation of preneoplastic cells. Previous studies have shown an elevated DNA damage response in cell-culture models with oncogene activation and in preneoplastic colon adenomas. DNA damage response mechanisms trigger cell-cycle arrest and attempt to repair DNA lesions or promote cell death/senescence if repair is not possible. Replication stress is observed in preneoplastic cells due to increased proliferation signals from oncogenic mutations. Replication stress is characterized by: increased replication initiation/origin firing; increased transcription and collisions of transcription-replication complexes; nucleotide deficiency; and an increase in reactive oxygen species (ROS). Replication stress, along with the selection for inactivating mutations in DNA damage response genes in the evolution of the tumor, leads to downregulation and/or loss of some DNA damage response mechanisms, and hence loss of DNA repair and/or senescence/programmed cell death. In experimental mouse models, loss of DNA damage response-mediated cell senescence was observed after using a short hairpin RNA (shRNA) to inhibit the double-strand break response kinase ataxia telangiectasia mutated (ATM), leading to increased tumor size and invasiveness. Humans born with inherited defects in DNA repair mechanisms (for example, Li-Fraumeni syndrome) have a higher cancer risk. The prevalence of DNA damage response mutations differs across cancer types; for example, 30% of invasive breast carcinomas have mutations in genes involved in homologous recombination. In cancer, downregulation is observed across all DNA damage response mechanisms (base excision repair (BER), nucleotide excision repair (NER), DNA mismatch repair (MMR), homologous recombination repair (HR), non-homologous end joining (NHEJ) and translesion DNA synthesis (TLS)).
As well as mutations to DNA damage repair genes, mutations also arise in the genes responsible for arresting the cell cycle to allow sufficient time for DNA repair to occur, and some genes are involved in both DNA damage repair and cell cycle checkpoint control, for example ATM and checkpoint kinase 2 (CHEK2) – a tumor suppressor that is often absent or downregulated in non-small cell lung cancer.

Epigenetic DNA repair defects in cancer
Classically, cancer has been viewed as a set of diseases driven by progressive genetic abnormalities that include mutations in tumor-suppressor genes and oncogenes, as well as chromosomal aberrations. However, it has become apparent that cancer is also driven by epigenetic alterations. Epigenetic alterations refer to functionally relevant modifications to the genome that do not involve a change in the nucleotide sequence. Examples of such modifications are changes in DNA methylation (hypermethylation and hypomethylation) and histone modification, changes in chromosomal architecture (caused by inappropriate expression of proteins such as HMGA2 or HMGA1) and changes caused by microRNAs. Each of these epigenetic alterations serves to regulate gene expression without altering the underlying DNA sequence. These changes usually remain through cell divisions, last for multiple cell generations, and can be considered to be epimutations (equivalent to mutations). While large numbers of epigenetic alterations are found in cancers, the epigenetic alterations in DNA repair genes, causing reduced expression of DNA repair proteins, appear to be particularly important. Such alterations are thought to occur early in progression to cancer and to be a likely cause of the genetic instability characteristic of cancers. Reduced expression of DNA repair genes causes deficient DNA repair. When DNA repair is deficient, DNA damages remain in cells at a higher than usual level, and these excess damages cause increased frequencies of mutation or epimutation. Mutation rates increase substantially in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR). Chromosomal rearrangements and aneuploidy also increase in HRR-defective cells. Higher levels of DNA damage not only cause increased mutation, but also cause increased epimutation. During repair of DNA double-strand breaks, or repair of other DNA damages, incompletely cleared sites of repair can cause epigenetic gene silencing. Deficient expression of DNA repair proteins due to an inherited mutation can cause an increased risk of cancer. Individuals with an inherited impairment in any of 34 DNA repair genes (see article DNA repair-deficiency disorder) have an increased risk of cancer, with some defects causing up to a 100% lifetime chance of cancer (e.g. p53 mutations). However, such germline mutations (which cause highly penetrant cancer syndromes) are the cause of only about 1 percent of cancers.

Frequencies of epimutations in DNA repair genes
Deficiencies in DNA repair enzymes are occasionally caused by a newly arising somatic mutation in a DNA repair gene, but are much more frequently caused by epigenetic alterations that reduce or silence expression of DNA repair genes. For example, when 113 colorectal cancers were examined in sequence, only four had a missense mutation in the DNA repair gene MGMT, while the majority had reduced MGMT expression due to methylation of the MGMT promoter region (an epigenetic alteration).
Five different studies found that between 40% and 90% of colorectal cancers have reduced MGMT expression due to methylation of the MGMT promoter region. Similarly, out of 119 cases of mismatch repair-deficient colorectal cancers that lacked DNA repair gene PMS2 expression, PMS2 was deficient in six cases due to mutations in the PMS2 gene, while in 103 cases PMS2 expression was deficient because its pairing partner MLH1 was repressed due to promoter methylation (PMS2 protein is unstable in the absence of MLH1). In the other 10 cases, loss of PMS2 expression was likely due to epigenetic overexpression of the microRNA miR-155, which down-regulates MLH1. In a further example, epigenetic defects were found in various cancers (e.g. breast, ovarian, colorectal and head and neck). Two or three deficiencies in the expression of ERCC1, XPF or PMS2 occurred simultaneously in the majority of the 49 colon cancers evaluated by Facista et al.

The chart in this section shows some frequent DNA damaging agents, examples of DNA lesions they cause, and the pathways that deal with these DNA damages. At least 169 enzymes are either directly employed in DNA repair or influence DNA repair processes. Of these, 83 are directly employed in repairing the 5 types of DNA damages illustrated in the chart. Some of the more well-studied genes central to these repair processes are shown in the chart. The gene designations shown in red, gray or cyan indicate genes frequently epigenetically altered in various types of cancers. Wikipedia articles on each of the genes highlighted by red, gray or cyan describe the epigenetic alteration(s) and the cancer(s) in which these epimutations are found. Review articles and broad experimental survey articles also document most of these epigenetic DNA repair deficiencies in cancers. Red-highlighted genes are frequently reduced or silenced by epigenetic mechanisms in various cancers. When these genes have low or absent expression, DNA damages can accumulate. Replication errors past these damages (see translesion synthesis) can lead to increased mutations and, ultimately, cancer. Epigenetic repression of DNA repair genes in accurate DNA repair pathways appears to be central to carcinogenesis. The two gray-highlighted genes, RAD51 and BRCA2, are required for homologous recombinational repair. They are sometimes epigenetically over-expressed and sometimes under-expressed in certain cancers. As indicated in the Wikipedia articles on RAD51 and BRCA2, such cancers ordinarily have epigenetic deficiencies in other DNA repair genes. These repair deficiencies would likely cause increased unrepaired DNA damages. The over-expression of RAD51 and BRCA2 seen in these cancers may reflect selective pressures for compensatory RAD51 or BRCA2 over-expression and increased homologous recombinational repair to at least partially deal with such excess DNA damages. In those cases where RAD51 or BRCA2 are under-expressed, this would itself lead to increased unrepaired DNA damages. Replication errors past these damages (see translesion synthesis) could cause increased mutations and cancer, so that under-expression of RAD51 or BRCA2 would be carcinogenic in itself. Cyan-highlighted genes are in the microhomology-mediated end joining (MMEJ) pathway and are up-regulated in cancer. MMEJ is an additional error-prone, inaccurate repair pathway for double-strand breaks.
In MMEJ repair of a double-strand break, a homology of 5–25 complementary base pairs between the two strands is sufficient to align the strands, but mismatched ends (flaps) are usually present. MMEJ removes the extra nucleotides (flaps) where strands are joined, and then ligates the strands to create an intact DNA double helix. MMEJ almost always involves at least a small deletion, so it is a mutagenic pathway. FEN1, the flap endonuclease in MMEJ, is epigenetically increased by promoter hypomethylation and is over-expressed in the majority of cancers of the breast, prostate, stomach, pancreas and lung, as well as in neuroblastomas. PARP1 is also over-expressed when the ETS site in its promoter region is epigenetically hypomethylated, and this contributes to progression to endometrial cancer and BRCA-mutated serous ovarian cancer. Other genes in the MMEJ pathway are also over-expressed in a number of cancers (see MMEJ for a summary), and are also shown in cyan.

Genome-wide distribution of DNA repair in human somatic cells
Differential activity of DNA repair pathways across various regions of the human genome causes mutations to be very unevenly distributed within tumor genomes. In particular, the gene-rich, early-replicating regions of the human genome exhibit lower mutation frequencies than the gene-poor, late-replicating heterochromatin. One mechanism underlying this involves the histone modification H3K36me3, which can recruit mismatch repair proteins, thereby lowering mutation rates in H3K36me3-marked regions. Another important mechanism concerns nucleotide excision repair, which can be recruited by the transcription machinery, lowering somatic mutation rates in active genes and other open chromatin regions.

Epigenetic alterations due to DNA repair
Damage to DNA is very common and is constantly being repaired. Epigenetic alterations can accompany DNA repair of oxidative damage or double-strand breaks. In human cells, oxidative DNA damage occurs about 10,000 times a day and DNA double-strand breaks occur about 10 to 50 times a cell cycle in somatic replicating cells (see DNA damage (naturally occurring)). The selective advantage of DNA repair is to allow the cell to survive in the face of DNA damage. The selective advantage of epigenetic alterations that occur with DNA repair is not clear.

Repair of oxidative DNA damage can alter epigenetic markers
In the steady state (with endogenous damages occurring and being repaired), there are about 2,400 oxidatively damaged guanines that form 8-oxo-2'-deoxyguanosine (8-OHdG) in the average mammalian cell DNA. 8-OHdG constitutes about 5% of the oxidative damages commonly present in DNA. The oxidized guanines do not occur randomly among all guanines in DNA. There is a sequence preference for the guanine at a methylated CpG site (a cytosine followed by guanine along its 5' → 3' direction and where the cytosine is methylated (5-mCpG)). A 5-mCpG site has the lowest ionization potential for guanine oxidation. Oxidized guanine has mispairing potential and is mutagenic. Oxoguanine glycosylase (OGG1) is the primary enzyme responsible for the excision of the oxidized guanine during DNA repair. OGG1 finds and binds to an 8-OHdG within a few seconds. However, OGG1 does not immediately excise 8-OHdG. In HeLa cells, half-maximum removal of 8-OHdG occurs in 30 minutes, and in irradiated mice, the 8-OHdGs induced in the mouse liver are removed with a half-life of 11 minutes.
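Those removal times imply very fast turnover. As a quick worked illustration (assuming, purely for the sake of the arithmetic, that removal follows simple first-order kinetics with the cited 11-minute half-life), the fraction of lesions remaining after a time t is 0.5^(t/11):

```python
# Fraction of 8-OHdG lesions remaining under first-order removal with the
# cited 11-minute half-life (an illustrative assumption, not measured data).
import math

half_life = 11.0                        # minutes, from the text
k = math.log(2) / half_life             # first-order rate constant

for t in (5, 11, 30, 60):
    print(f"t = {t:2d} min: {math.exp(-k * t):6.1%} of lesions remain")
```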
When OGG1 is present at an oxidized guanine within a methylated CpG site, it recruits TET1 to the 8-OHdG lesion (see Figure). This allows TET1 to demethylate an adjacent methylated cytosine. Demethylation of cytosine is an epigenetic alteration. As an example, when human mammary epithelial cells were treated with H2O2 for six hours, 8-OHdG increased about 3.5-fold in DNA and this caused about 80% demethylation of the 5-methylcytosines in the genome. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene into messenger RNA. In cells treated with H2O2, one particular gene was examined, BACE1. The methylation level of the BACE1 CpG island was reduced (an epigenetic alteration) and this allowed an approximately 6.5-fold increase in expression of BACE1 messenger RNA. While a six-hour incubation with H2O2 causes considerable demethylation of 5-mCpG sites, shorter times of H2O2 incubation appear to promote other epigenetic alterations. Treatment of cells with H2O2 for 30 minutes causes the mismatch repair protein heterodimer MSH2-MSH6 to recruit DNA methyltransferase 1 (DNMT1) to sites of some kinds of oxidative DNA damage. This could cause increased methylation of cytosines (epigenetic alterations) at these locations. Jiang et al. treated HEK 293 cells with agents causing oxidative DNA damage (potassium bromate (KBrO3) or potassium chromate (K2CrO4)). Base excision repair (BER) of oxidative damage occurred with the DNA repair enzyme polymerase beta localizing to oxidized guanines. Polymerase beta is the main human polymerase in short-patch BER of oxidative DNA damage. Jiang et al. also found that polymerase beta recruited the DNA methyltransferase protein DNMT3b to BER repair sites. They then evaluated the methylation pattern at the single-nucleotide level in a small region of DNA including the promoter region and the early transcription region of the BRCA1 gene. Oxidative DNA damage from bromate modulated the DNA methylation pattern (caused epigenetic alterations) at CpG sites within the region of DNA studied. In untreated cells, CpGs located at −189, −134, −29, −19, +16, and +19 of the BRCA1 gene had methylated cytosines (where numbering is from the messenger RNA transcription start site, and negative numbers indicate nucleotides in the upstream promoter region). Bromate treatment-induced oxidation resulted in the loss of cytosine methylation at −189, −134, +16 and +19, while also leading to the formation of new methylation at the CpGs located at −80, −55, −21 and +8 after DNA repair was allowed.

Homologous recombinational repair alters epigenetic markers
At least four articles report the recruitment of DNA methyltransferase 1 (DNMT1) to sites of DNA double-strand breaks. During homologous recombinational repair (HR) of the double-strand break, the involvement of DNMT1 causes the two repaired strands of DNA to have different levels of methylated cytosines. One strand becomes frequently methylated at about 21 CpG sites downstream of the repaired double-strand break. The other DNA strand loses methylation at about six CpG sites that were previously methylated downstream of the double-strand break, as well as losing methylation at about five CpG sites that were previously methylated upstream of the double-strand break. When the chromosome is replicated, this gives rise to one daughter chromosome that is heavily methylated downstream of the previous break site and one that is unmethylated in the region both upstream and downstream of the previous break site.
With respect to the gene that was broken by the double-strand break, half of the progeny cells express that gene at a high level, while in the other half of the progeny cells expression of that gene is repressed. When clones of these cells were maintained for three years, the new methylation patterns were maintained over that time period. In mice with a CRISPR-mediated homology-directed recombination insertion in their genome, a large number of CpG sites within the double-strand break-associated insertion showed increased methylation.

Non-homologous end joining can cause some epigenetic marker alterations
Non-homologous end joining (NHEJ) repair of a double-strand break can cause a small number of demethylations of pre-existing cytosine DNA methylations downstream of the repaired double-strand break. Further work by Allen et al. showed that NHEJ of a DNA double-strand break in a cell could give rise to some progeny cells having repressed expression of the gene harboring the initial double-strand break and some progeny having high expression of that gene, due to epigenetic alterations associated with NHEJ repair. The frequency of epigenetic alterations causing repression of a gene after an NHEJ repair of a DNA double-strand break in that gene may be about 0.9%.

Evolution
The basic processes of DNA repair are highly conserved among both prokaryotes and eukaryotes and even among bacteriophages (viruses which infect bacteria); however, more complex organisms with more complex genomes have correspondingly more complex repair mechanisms. The ability of a large number of protein structural motifs to catalyze relevant chemical reactions has played a significant role in the elaboration of repair mechanisms during evolution. For an extremely detailed review of hypotheses relating to the evolution of DNA repair, see. The fossil record indicates that single-cell life began to proliferate on the planet at some point during the Precambrian period, although exactly when recognizably modern life first emerged is unclear. Nucleic acids became the sole and universal means of encoding genetic information, requiring DNA repair mechanisms that in their basic form have been inherited by all extant life forms from their common ancestor. The emergence of Earth's oxygen-rich atmosphere (known as the "oxygen catastrophe"), due to photosynthetic organisms, as well as the presence of potentially damaging free radicals in the cell due to oxidative phosphorylation, necessitated the evolution of DNA repair mechanisms that act specifically to counter the types of damage induced by oxidative stress. The mechanism by which this came about, however, is unclear.

Rate of evolutionary change
On some occasions, DNA damage is not repaired or is repaired by an error-prone mechanism that results in a change from the original sequence. When this occurs, mutations may propagate into the genomes of the cell's progeny. Should such an event occur in a germ line cell that will eventually produce a gamete, the mutation has the potential to be passed on to the organism's offspring. The rate of evolution in a particular species (or, in a particular gene) is a function of the rate of mutation. As a consequence, the rate and accuracy of DNA repair mechanisms have an influence over the process of evolutionary change. DNA damage protection and repair do not influence the rate of adaptation by gene regulation and by recombination and selection of alleles.
On the other hand, DNA damage repair and protection do influence the rate of accumulation of irreparable, advantageous, code-expanding, inheritable mutations, and slow down the evolutionary mechanism for expansion of the genome of organisms with new functionalities. The tension between evolvability and mutation repair and protection needs further investigation.

Technology
A technology named clustered regularly interspaced short palindromic repeats (shortened to CRISPR-Cas9) was introduced in 2012. The new technology allows anyone with molecular biology training to alter the genes of any species with precision, by inducing DNA damage at a specific point and then exploiting the cell's DNA repair mechanisms to insert new genes. It is cheaper, more efficient, and more precise than earlier genome-editing technologies. With the help of CRISPR-Cas9, scientists can edit parts of a genome by removing, adding, or altering sections of the DNA sequence.

See also
Accelerated aging disease
Aging
DNA
Cell cycle
DNA damage (naturally occurring)
DNA damage theory of aging
DNA replication
Direct DNA damage
Error detection and correction
Gene therapy
Human mitochondrial genetics
Indirect DNA damage
Life extension
Progeria
REPAIRtoire
SiDNA
The scientific journal DNA Repair under Mutation Research

References

External links
Roswell Park Cancer Institute DNA Repair Lectures
A comprehensive list of Human DNA Repair Genes
3D structures of some DNA repair enzymes
DNA repair special interest group
DNA Repair
DNA Damage and DNA Repair
Segmental Progeria

Cellular processes Molecular genetics Mutation Senescence Articles containing video clips
DNA repair
[ "Chemistry", "Biology" ]
13,604
[ "DNA repair", "Senescence", "Molecular genetics", "Cellular processes", "Molecular biology", "Metabolism" ]
855,138
https://en.wikipedia.org/wiki/Line%20element
In geometry, the line element or length element can be informally thought of as a line segment associated with an infinitesimal displacement vector in a metric space. The length of the line element, which may be thought of as a differential arc length, is a function of the metric tensor and is denoted by $ds$. Line elements are used in physics, especially in theories of gravitation (most notably general relativity) where spacetime is modelled as a curved pseudo-Riemannian manifold with an appropriate metric tensor.

General formulation
Definition of the line element and arc length
The coordinate-independent definition of the square of the line element ds in an n-dimensional Riemannian or pseudo-Riemannian manifold (in physics usually a Lorentzian manifold) is the "square of the length" of an infinitesimal displacement $d\mathbf{q}$ (in pseudo-Riemannian manifolds possibly negative), whose square root should be used for computing curve length:

$ds^2 = d\mathbf{q} \cdot d\mathbf{q} = g(d\mathbf{q}, d\mathbf{q})$

where g is the metric tensor, · denotes inner product, and $d\mathbf{q}$ an infinitesimal displacement on the (pseudo-)Riemannian manifold. By parametrizing a curve $\mathbf{q}(\lambda)$, we can define the arc length of the curve between the points $\mathbf{q}(\lambda_1)$ and $\mathbf{q}(\lambda_2)$ as the integral:

$s = \int_{\lambda_1}^{\lambda_2} \sqrt{\left| g\!\left(\frac{d\mathbf{q}}{d\lambda}, \frac{d\mathbf{q}}{d\lambda}\right) \right|}\, d\lambda$

To compute a sensible length of curves in pseudo-Riemannian manifolds, it is best to assume that the infinitesimal displacements have the same sign everywhere. E.g. in physics the square of a line element along a timelike curve would (in the (−, +, +, +) signature convention) be negative, and the negative square root of the square of the line element along the curve would measure the proper time passing for an observer moving along the curve. From this point of view, the metric also defines, in addition to the line element, the surface and volume elements, etc.

Identification of the square of the line element with the metric tensor
Since $d\mathbf{q}$ is an arbitrary displacement, the "square of the arc length" $ds^2$ completely defines the metric, and it is therefore usually best to consider the expression for $ds^2$ as a definition of the metric tensor itself, written in a suggestive but non-tensorial notation:

$ds^2 = g$

This identification of the square of arc length with the metric is even easier to see in n-dimensional general curvilinear coordinates $q = (q^1, q^2, \ldots, q^n)$, where it is written as a symmetric rank-2 tensor coinciding with the metric tensor:

$ds^2 = g_{ij}\, dq^i\, dq^j$

Here the indices i and j take values 1, 2, 3, ..., n and the Einstein summation convention is used. Common examples of (pseudo-)Riemannian spaces include three-dimensional space (no inclusion of time coordinates), and indeed four-dimensional spacetime.

Line elements in Euclidean space
Following are examples of how the line elements are found from the metric.

Cartesian coordinates
The simplest line element is in Cartesian coordinates, in which case the metric is just the Kronecker delta:

$g_{ij} = \delta_{ij}$

(here i, j = 1, 2, 3 for space) or in matrix form (i denotes row, j denotes column):

$g = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

The general curvilinear coordinates reduce to Cartesian coordinates:

$(q^1, q^2, q^3) = (x, y, z)$

so

$ds^2 = dx^2 + dy^2 + dz^2$

Orthogonal curvilinear coordinates
For all orthogonal coordinates the metric is given by:

$g = \begin{pmatrix} h_1^2 & 0 & 0 \\ 0 & h_2^2 & 0 \\ 0 & 0 & h_3^2 \end{pmatrix}$

where the $h_i$ for i = 1, 2, 3 are scale factors, so the square of the line element is:

$ds^2 = h_1^2\,(dq^1)^2 + h_2^2\,(dq^2)^2 + h_3^2\,(dq^3)^2$

Some examples of line elements in these coordinates are below.

General curvilinear coordinates
Given an arbitrary basis $\{\mathbf{e}_i\}$ of a space of dimension $n$, the metric is defined as the inner product of the basis vectors:

$g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j$

where the inner product is with respect to the ambient space (usually its Euclidean inner product).

In a coordinate basis
The coordinate basis is a special type of basis that is regularly used in differential geometry.
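The formula $ds^2 = g_{ij}\,dq^i\,dq^j$ turns directly into a numerical recipe: pick a parametrization, evaluate the quadratic form along the curve, and integrate its square root. A minimal sketch in Python for the polar-coordinate metric $ds^2 = dr^2 + r^2\,d\theta^2$ (the Euclidean plane with scale factors $h_r = 1$, $h_\theta = r$); the parametrization and step count are arbitrary choices:

```python
# Numerical arc length from a line element, here ds^2 = dr^2 + r^2 dθ^2
# (polar coordinates in the Euclidean plane). The unit circle gives 2π.
import math

def arc_length(r, theta, n=100_000):
    """Curve given as r(λ), θ(λ) for λ in [0, 1]; metric g = diag(1, r^2)."""
    s, dlam = 0.0, 1.0 / n
    for i in range(n):
        lam = (i + 0.5) * dlam
        # finite-difference velocities dr/dλ and dθ/dλ
        dr = (r(lam + 1e-7) - r(lam - 1e-7)) / 2e-7
        dth = (theta(lam + 1e-7) - theta(lam - 1e-7)) / 2e-7
        ds2 = dr * dr + r(lam) ** 2 * dth * dth   # g_ij dq^i dq^j
        s += math.sqrt(ds2) * dlam
    return s

print(arc_length(lambda lam: 1.0, lambda lam: 2 * math.pi * lam))  # ≈ 6.2832
```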
Line elements in 4d spacetime
Minkowski spacetime
The Minkowski metric is:

$\eta_{\mu\nu} = \pm \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

where one sign or the other is chosen; both conventions are used. This applies only for flat spacetime. The coordinates are given by the 4-position:

$\mathbf{x} = (x^0, x^1, x^2, x^3) = (ct, x, y, z)$

so the line element is:

$ds^2 = \pm\left(-c^2\,dt^2 + dx^2 + dy^2 + dz^2\right)$

Schwarzschild coordinates
In Schwarzschild coordinates the coordinates are $(t, r, \theta, \phi)$, the general metric being of the form:

$g_{\mu\nu} = \begin{pmatrix} -a(r)^2 & 0 & 0 & 0 \\ 0 & b(r)^2 & 0 & 0 \\ 0 & 0 & r^2 & 0 \\ 0 & 0 & 0 & r^2\sin^2\theta \end{pmatrix}$

(note the similarities with the metric in 3D spherical polar coordinates), so the line element is:

$ds^2 = -a(r)^2\,dt^2 + b(r)^2\,dr^2 + r^2\,d\theta^2 + r^2\sin^2\theta\,d\phi^2$

General spacetime
The coordinate-independent definition of the square of the line element ds in spacetime is:

$ds^2 = g(d\mathbf{x}, d\mathbf{x})$

In terms of coordinates:

$ds^2 = g_{\mu\nu}\, dx^\mu\, dx^\nu$

where for this case the indices $\mu$ and $\nu$ run over 0, 1, 2, 3 for spacetime. This is the spacetime interval - the measure of separation between two arbitrarily close events in spacetime. In special relativity it is invariant under Lorentz transformations. In general relativity it is invariant under arbitrary invertible differentiable coordinate transformations.

See also
Covariance and contravariance of vectors
First fundamental form
List of integration and measure theory topics
Metric tensor
Ricci calculus
Raising and lowering indices
Volume element

References

Affine geometry Riemannian geometry Special relativity General relativity
Line element
[ "Physics" ]
973
[ "Special relativity", "General relativity", "Theory of relativity" ]
855,150
https://en.wikipedia.org/wiki/Infinite%20divisibility
Infinite divisibility arises in different ways in philosophy, physics, economics, order theory (a branch of mathematics), and probability theory (also a branch of mathematics). One may speak of infinite divisibility, or the lack thereof, of matter, space, time, money, or abstract mathematical objects such as the continuum.

In philosophy
The origin of the idea in the Western tradition can be traced to the 5th century BCE, starting with the Ancient Greek pre-Socratic philosopher Democritus and his teacher Leucippus, who theorized matter's divisibility beyond what can be perceived by the senses, ultimately ending at an indivisible atom. The Indian philosopher Maharshi Kanada also proposed an atomistic theory, though there is ambiguity about when he lived, with estimates ranging from the 6th to the 2nd century BCE. Around 500 BCE, he postulated that if we go on dividing matter (padarth), we shall get smaller and smaller particles. Ultimately, a time will come when we shall come across the smallest particles, beyond which further division will not be possible. He named these particles Parmanu. Another Indian philosopher, Pakudha Katyayama, elaborated this doctrine and said that these particles normally exist in a combined form which gives us various forms of matter. Atomism is explored in Plato's dialogue Timaeus. Aristotle argues that both length and time are infinitely divisible, refuting atomism. Andrew Pyle gives a lucid account of infinite divisibility in the first few pages of his Atomism and its Critics. There he shows how infinite divisibility involves the idea that there is some extended item, such as an apple, which can be divided infinitely many times, where one never divides down to points, or to atoms of any sort. Many philosophers claim that infinite divisibility involves either a collection of an infinite number of items (since there are infinite divisions, there must be an infinite collection of objects), or (more rarely) point-sized items, or both. Pyle states that the mathematics of infinitely divisible extensions involves neither of these: there are infinite divisions, but only finite collections of objects, and they are never divided down to extensionless, point-like items. In Zeno's arrow paradox, Zeno questioned how an arrow can move if at one moment it is here and motionless and at a later moment is somewhere else, also motionless. In reference to Zeno's paradox of the arrow in flight, Alfred North Whitehead writes that "an infinite number of acts of becoming may take place in a finite time if each subsequent act is smaller in a convergent series".

In quantum physics
Until the discovery of quantum mechanics, no distinction was made between the question of whether matter is infinitely divisible and the question of whether matter can be cut into smaller parts ad infinitum. As a result, the Greek word átomos (ἄτομος), which literally means "uncuttable", is usually translated as "indivisible". Whereas the modern atom is indeed divisible, it actually is uncuttable: there is no partition of space such that its parts correspond to material parts of the atom. In other words, the quantum-mechanical description of matter no longer conforms to the cookie cutter paradigm. This casts fresh light on the ancient conundrum of the divisibility of matter.
The multiplicity of a material object (the number of its parts) depends on the existence, not of delimiting surfaces, but of internal spatial relations (relative positions between parts), and these lack determinate values. According to the Standard Model of particle physics, the particles that make up an atom (quarks and electrons) are point particles: they do not take up space. What makes an atom nevertheless take up space is not any spatially extended "stuff" that "occupies space", and that might be cut into smaller and smaller pieces, but the indeterminacy of its internal spatial relations. Physical space is often regarded as infinitely divisible: it is thought that any region in space, no matter how small, could be further split. Time is similarly considered as infinitely divisible. However, according to the best currently accepted theory in physics, the Standard Model, there is a distance (called the Planck length, 1.616229(38) × 10⁻³⁵ metres, named after one of the fathers of quantum theory, Max Planck) and therefore a time interval (the amount of time which light takes to traverse that distance in a vacuum, 5.39116(13) × 10⁻⁴⁴ seconds, known as the Planck time) at which the Standard Model is expected to break down, effectively making this the smallest physical scale about which meaningful statements can currently be made. To predict the physical behaviour of space-time and fundamental particles at smaller distances requires a new theory of quantum gravity, which would unify the hitherto incompatible theories of quantum mechanics and general relativity.

In economics
One dollar, or one euro, is divided into 100 cents; one can only pay in increments of a cent. It is quite commonplace for prices of some commodities, such as gasoline, to be in increments of a tenth of a cent per gallon or per litre. If gasoline costs $3.979 per gallon and one buys 10 gallons, then the "extra" 9/10 of a cent comes to ten times that: an "extra" 9 cents, so the cent in that case gets paid. Money is infinitely divisible in the sense that it is based upon the real number system. However, modern-day coins are not divisible (in the past, some coins were weighed with each transaction, and were considered divisible with no particular limit in mind). Beyond some point of precision in each transaction, further subdivision is useless, because such small amounts of money are insignificant to humans. The larger the quantity a price is multiplied by, the more that precision can matter. For example, when buying a million shares of stock, the buyer and seller might be interested in a tenth-of-a-cent price difference, but this is only a choice. Everything else in business measurement and choice is similarly divisible to the degree that the parties are interested. For example, financial reports may be reported annually, quarterly, or monthly. Some business managers run cash-flow reports more than once per day. Although time may be infinitely divisible, data on securities prices are reported at discrete times. For example, if one looks at records of stock prices in the 1920s, one may find the prices at the end of each day, but perhaps not at three-hundredths of a second after 12:47 PM. A new method could, in principle, report at double that rate, and nothing would prevent further increases in the speed of reporting. Perhaps paradoxically, technical mathematics applied to financial markets is often simpler if infinitely divisible time is used as an approximation.
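The gasoline arithmetic above is easy to check with exact decimal types; the price and quantity here are just the figures from the example:

```python
# Ten gallons at $3.979: the ten 9/10-cent increments add up to a payable
# 9 cents, so the total lands exactly on a whole number of cents.
from decimal import Decimal

price_per_gallon = Decimal("3.979")    # dollars
gallons = Decimal("10")
total = price_per_gallon * gallons
print(total)                           # 39.790 -> $39.79, payable in cents
print(total == Decimal("39.79"))       # True
```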
Even in those cases, a precision is chosen with which to work, and measurements are rounded to that approximation. In terms of human interaction, money and time are divisible, but only to the point where further division is not of value, and that point cannot be determined exactly.

In order theory
To say that the field of rational numbers is infinitely divisible (i.e. order-theoretically dense) means that between any two rational numbers there is another rational number. By contrast, the ring of integers is not infinitely divisible. Infinite divisibility does not imply gaplessness: the rationals do not enjoy the least upper bound property. That means that if one were to partition the rationals into two non-empty sets A and B where A contains all rationals less than some irrational number (π, say) and B all rationals greater than it, then A has no largest member and B has no smallest member. The field of real numbers, by contrast, is both infinitely divisible and gapless. Any linearly ordered set that is infinitely divisible and gapless, and has more than one member, is uncountably infinite. For a proof, see Cantor's first uncountability proof. Infinite divisibility alone implies infiniteness but not uncountability, as the rational numbers exemplify.

In probability distributions
To say that a probability distribution F on the real line is infinitely divisible means that if X is any random variable whose distribution is F, then for every positive integer n there exist n independent identically distributed random variables X1, ..., Xn whose sum is equal in distribution to X (those n other random variables do not usually have the same probability distribution as X). The Poisson distribution, the stuttering Poisson distribution, the negative binomial distribution, and the Gamma distribution are examples of infinitely divisible distributions, as are the normal distribution, Cauchy distribution and all other members of the stable distribution family. The skew-normal distribution is an example of a non-infinitely divisible distribution. (See Domínguez-Molina and Rocha-Arteaga (2007).) Every infinitely divisible probability distribution corresponds in a natural way to a Lévy process, i.e., a stochastic process { Xt : t ≥ 0 } with stationary independent increments (stationary means that for s < t, the probability distribution of Xt − Xs depends only on t − s; independent increments means that that difference is independent of the corresponding difference on any interval not overlapping with [s, t], and similarly for any finite number of intervals). This concept of infinite divisibility of probability distributions was introduced in 1929 by Bruno de Finetti.

See also
Divisible group, a mathematical group in which every element is an arbitrary multiple of some other element
Indecomposable distribution
Salami slicing
Zeno's paradoxes

References
Domínguez-Molina, J.A.; Rocha-Arteaga, A. (2007) "On the Infinite Divisibility of some Skewed Symmetric Distributions". Statistics and Probability Letters, 77 (6), 644–648

External links
Infinite Hierarchical Nesting of Matter (translation of Russian Wikipedia page)

Order theory Metaphysical properties Quantum mechanics Mereology
Infinite divisibility
[ "Physics", "Mathematics" ]
2,109
[ "Theoretical physics", "Quantum mechanics", "Order theory" ]
855,571
https://en.wikipedia.org/wiki/Nagios
Nagios is an event monitoring system that offers monitoring and alerting services for servers, switches, applications and services. It alerts users when things go wrong and alerts them a second time when the problem has been resolved. Ethan Galstad and a group of developers originally wrote Nagios as NetSaint; they continue to actively maintain both the official and unofficial plugins. Nagios is a recursive acronym: "Nagios Ain't Gonna Insist On Sainthood" – "sainthood" makes reference to the original name NetSaint, which was changed in response to a legal challenge by owners of a similar trademark. "Agios" (or "hagios") also transliterates the Greek word άγιος, which means "saint". Nagios was originally designed to run under Linux, but it also runs on other Unix variants. It is free software licensed under the terms of the GNU General Public License version 2 as published by the Free Software Foundation.

History
On 16 January 2014, Nagios Enterprises redirected the nagios-plugins.org domain to a web server controlled by Nagios Enterprises, without explicitly notifying the Nagios Plugins community team of the consequences of this action. Nagios Enterprises replaced the nagios-plugins team with a group of new, different members. The community team members who were replaced continued their work under the name Monitoring Plugins, using a new website at the domain monitoring-plugins.org.

Design
Nagios agents include:

NRPE
Nagios Remote Plugin Executor (NRPE) is a Nagios agent that allows remote system monitoring using scripts that are hosted on the remote systems. It allows for monitoring of resources such as disk usage, system load or the number of users currently logged in. Nagios periodically polls the agent on the remote system using the check_nrpe plugin. NRPE allows Nagios plugins to be executed remotely on other Linux/Unix machines, making it possible to monitor remote machine metrics (disk usage, CPU load, etc.). NRPE can also communicate with some Windows agent add-ons, so scripts can be executed and metrics checked on remote Windows machines as well. As of 28 January 2020, NRPE (version 4.0.1) has been deprecated.

NRDP
Nagios Remote Data Processor (NRDP) is a Nagios agent with a flexible data transport mechanism and processor. It is designed with an architecture that allows it to be easily extended and customized. NRDP uses standard ports and protocols (HTTP and XML) and can be implemented as a replacement for the Nagios Service Check Acceptor (NSCA).

NSClient++
This program is mainly used to monitor Windows machines. Installed on a remote system, NSClient++ listens on TCP port 12489. The Nagios plugin used to collect information from this add-on is called check_nt. Like NRPE, NSClient++ allows monitoring of so-called 'private services' (memory usage, CPU load, disk usage, running processes, etc.).

NCPA
The Nagios Cross Platform Agent is an open source project maintained by Nagios Enterprises. NCPA installs on Windows, Linux, and Mac OS X. It was created as a scalable API that allows flexibility and simplicity in monitoring hosts. NCPA allows multiple checks such as memory usage, CPU usage, disk usage, processes, services, and network usage. Active checks are queried through the API of the "NCPA Listener" service, while passive checks are sent via the "NCPA Passive" service.
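All of these agents ultimately run check plugins, and plugins communicate with Nagios through a simple contract: an exit code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN) and a single status line on standard output, optionally followed by performance data after a "|". A minimal sketch of a disk check in that style follows; the 80%/90% thresholds are illustrative choices, not Nagios defaults, and in practice such a script would be invoked by the scheduler directly or through an agent such as NRPE:

```python
#!/usr/bin/env python3
# Minimal sketch of a Nagios-style check plugin: exit code carries the
# state, stdout carries the status line plus optional "|" perfdata.
import shutil
import sys

WARN, CRIT = 80.0, 90.0            # percent-used thresholds (hypothetical)

usage = shutil.disk_usage("/")
pct = 100.0 * usage.used / usage.total
perfdata = f"used={pct:.1f}%;{WARN};{CRIT}"

if pct >= CRIT:
    print(f"DISK CRITICAL - {pct:.1f}% used | {perfdata}")
    sys.exit(2)
elif pct >= WARN:
    print(f"DISK WARNING - {pct:.1f}% used | {perfdata}")
    sys.exit(1)
print(f"DISK OK - {pct:.1f}% used | {perfdata}")
sys.exit(0)
```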
Nagios XI Nagios XI is a proprietary interface using Nagios Core as the back-end, written and maintained by the original author, Ethan Galstad, and Nagios Enterprises. CentOS and RHEL are the currently supported operating systems. It combines Nagios Core with other technologies. Its main database and the ndoutils module that is used alongside Nagios Core use MySQL. While the front-end of Nagios Core is mainly CGI with some PHP, most of the Nagios XI front-end and back-end are written in PHP including the subsystem, event handlers, and notifications, and Python is used to create capacity planning reports and other reports. RRDtool and Highcharts are included to create customizable graphs that can be displayed in dashboards. See also References Further reading Barth, Wolfgang; (2006) Nagios: System And Network Monitoring - No Starch Press Barth, Wolfgang; (2008) Nagios: System And Network Monitoring, 2nd edition - No Starch Press Turnbull, James; (2006) Pro Nagios 2.0 - San Francisco: Apress Josephsen, David; (2007) Building a Monitoring Infrastructure with Nagios - Prentice Hall Dondich, Taylor; (2006) Network Monitoring with Nagios - O'Reilly Schubert, Max et al.; (2008) Nagios 3 Enterprise Network Monitoring - Syngress Kocjan, Wojciech; (2008) "Learning Nagios 3.0" - Packt Publishing External links Internet Protocol based network software Free network management software Multi-agent network management software Network analyzers Linux security software System administration System monitors
Nagios
[ "Technology" ]
1,133
[ "Information systems", "System administration" ]
856,005
https://en.wikipedia.org/wiki/Rotation%20matrix
In linear algebra, a rotation matrix is a transformation matrix that is used to perform a rotation in Euclidean space. For example, using the convention below, the matrix

$R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

rotates points in the plane counterclockwise through an angle $\theta$ about the origin of a two-dimensional Cartesian coordinate system. To perform the rotation on a plane point with standard coordinates $\mathbf{v} = (x, y)$, it should be written as a column vector, and multiplied by the matrix $R$:

$R\mathbf{v} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{pmatrix}$

If $x$ and $y$ are the endpoint coordinates of a vector, where $x$ is cosine and $y$ is sine, then the above equations become the trigonometric summation angle formulae. Indeed, a rotation matrix can be seen as the trigonometric summation angle formulae in matrix form. One way to understand this is to say we have a vector at an angle 30° from the x-axis, and we wish to rotate that angle by a further 45°. We simply need to compute the vector endpoint coordinates at 75°.

The examples in this article apply to active rotations of vectors counterclockwise in a right-handed coordinate system (y counterclockwise from x) by pre-multiplication (R on the left). If any one of these is changed (such as rotating axes instead of vectors, a passive transformation), then the inverse of the example matrix should be used, which coincides with its transpose.

Since matrix multiplication has no effect on the zero vector (the coordinates of the origin), rotation matrices describe rotations about the origin. Rotation matrices provide an algebraic description of such rotations, and are used extensively for computations in geometry, physics, and computer graphics. In some literature, the term rotation is generalized to include improper rotations, characterized by orthogonal matrices with a determinant of −1 (instead of +1). These combine proper rotations with reflections (which invert orientation). In other cases, where reflections are not being considered, the label proper may be dropped. The latter convention is followed in this article.

Rotation matrices are square matrices, with real entries. More specifically, they can be characterized as orthogonal matrices with determinant 1; that is, a square matrix $R$ is a rotation matrix if and only if $R^\mathsf{T} = R^{-1}$ and $\det R = 1$. The set of all orthogonal matrices of size $n$ with determinant +1 is a representation of a group known as the special orthogonal group $\mathrm{SO}(n)$, one example of which is the rotation group SO(3). The set of all orthogonal matrices of size $n$ with determinant +1 or −1 is a representation of the (general) orthogonal group $\mathrm{O}(n)$.

In two dimensions
In two dimensions, the standard rotation matrix has the following form:

$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

This rotates column vectors by means of the following matrix multiplication,

$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$

Thus, the new coordinates $(x', y')$ of a point $(x, y)$ after rotation are

$x' = x\cos\theta - y\sin\theta$
$y' = x\sin\theta + y\cos\theta$

Examples
For example, when the vector $(1, 0)$ is rotated by an angle $\theta$, its new coordinates are $(\cos\theta, \sin\theta)$, and when the vector $(0, 1)$ is rotated by an angle $\theta$, its new coordinates are $(-\sin\theta, \cos\theta)$.

Direction
The direction of vector rotation is counterclockwise if $\theta$ is positive (e.g. 90°), and clockwise if $\theta$ is negative (e.g. −90°) for $R(\theta)$. Thus the clockwise rotation matrix is found as

$R(-\theta) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$

The two-dimensional case is the only non-trivial (i.e. not one-dimensional) case where the rotation matrices group is commutative, so that it does not matter in which order multiple rotations are performed. An alternative convention uses rotating axes, and the above matrices also represent a rotation of the axes clockwise through an angle $\theta$.
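A small numeric sketch of the 2D case, reproducing the 30° + 45° example from the text (the function name is mine):

```python
# Rotate the unit vector at 30° by a further 45°; the result should be
# the unit vector at 75°, i.e. (cos 75°, sin 75°).
import math

def rotate2d(x, y, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)   # counterclockwise rotation

x, y = math.cos(math.radians(30)), math.sin(math.radians(30))
print(rotate2d(x, y, math.radians(45)))
print(math.cos(math.radians(75)), math.sin(math.radians(75)))   # same pair
```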
Non-standard orientation of the coordinate system
If a standard right-handed Cartesian coordinate system is used, with the x-axis to the right and the y-axis up, the rotation $R(\theta)$ is counterclockwise. If a left-handed Cartesian coordinate system is used, with x directed to the right but y directed down, $R(\theta)$ is clockwise. Such non-standard orientations are rarely used in mathematics but are common in 2D computer graphics, which often have the origin in the top left corner and the y-axis down the screen or page. See below for other alternative conventions which may change the sense of the rotation produced by a rotation matrix.

Common 2D rotations
Particularly useful are the matrices

$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$

for 90°, 180°, and 270° counter-clockwise rotations.

Relationship with complex plane
Since

$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}^2 = -I$

the matrices of the shape

$\begin{pmatrix} x & -y \\ y & x \end{pmatrix}$

form a ring isomorphic to the field of the complex numbers $\mathbb{C}$. Under this isomorphism, the rotation matrices correspond to the circle of the unit complex numbers, the complex numbers of modulus 1.

If one identifies $\mathbb{R}^2$ with $\mathbb{C}$ through the linear isomorphism $(a, b) \mapsto a + ib$, the action of a matrix of the above form on vectors of $\mathbb{R}^2$ corresponds to the multiplication by the complex number $x + iy$, and rotations correspond to multiplication by complex numbers of modulus 1. As every rotation matrix can be written

$\begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$

the above correspondence associates such a matrix with the complex number

$\cos t + i \sin t = e^{it}$

(this last equality is Euler's formula).

In three dimensions
Basic 3D rotations
A basic 3D rotation (also called elemental rotation) is a rotation about one of the axes of a coordinate system. The following three basic rotation matrices rotate vectors by an angle $\theta$ about the x-, y-, or z-axis, in three dimensions, using the right-hand rule, which codifies their alternating signs:

$R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}$
$R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}$
$R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$

Notice that the right-hand rule only works when multiplying column vectors on the left, $R\mathbf{x}$. (The same matrices can also represent a clockwise rotation of the axes.) For column vectors, each of these basic vector rotations appears counterclockwise when the axis about which they occur points toward the observer, the coordinate system is right-handed, and the angle $\theta$ is positive. $R_z$, for instance, would rotate toward the y-axis a vector aligned with the x-axis, as can easily be checked by operating with $R_z$ on the vector $(1, 0, 0)$:

$R_z(90°) \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$

This is similar to the rotation produced by the above-mentioned two-dimensional rotation matrix. See below for alternative conventions which may apparently or actually invert the sense of the rotation produced by these matrices.

General 3D rotations
Other 3D rotation matrices can be obtained from these three using matrix multiplication. For example, the product

$R = R_z(\alpha)\, R_y(\beta)\, R_x(\gamma)$

represents a rotation whose yaw, pitch, and roll angles are $\alpha$, $\beta$ and $\gamma$, respectively. More formally, it is an intrinsic rotation whose Tait–Bryan angles are $\alpha$, $\beta$, $\gamma$, about axes z, y, x, respectively. Similarly, the product

$R = R_z(\gamma)\, R_y(\beta)\, R_x(\alpha)$

represents an extrinsic rotation whose (improper) Euler angles are $\alpha$, $\beta$, $\gamma$, about axes x, y, z. These matrices produce the desired effect only if they are used to premultiply column vectors, and (since in general matrix multiplication is not commutative) only if they are applied in the specified order (see Ambiguities for more details). The order of rotation operations is from right to left; the matrix adjacent to the column vector is the first to be applied, and then the one to the left.

Conversion from rotation matrix to axis–angle
Every rotation in three dimensions is defined by its axis (a vector along this axis is unchanged by the rotation) and its angle, the amount of rotation about that axis (Euler rotation theorem).
There are several methods to compute the axis and angle from a rotation matrix (see also axis–angle representation). Here, we only describe the method based on the computation of the eigenvectors and eigenvalues of the rotation matrix. It is also possible to use the trace of the rotation matrix.

Determining the axis
Given a rotation matrix $R$, a vector $\mathbf{u}$ parallel to the rotation axis must satisfy

$R\mathbf{u} = \mathbf{u}$

since the rotation of $\mathbf{u}$ around the rotation axis must result in $\mathbf{u}$. The equation above may be solved for $\mathbf{u}$, which is unique up to a scalar factor unless $R = I$. Further, the equation may be rewritten

$(R - I)\mathbf{u} = 0$

which shows that $\mathbf{u}$ lies in the null space of $R - I$. Viewed in another way, $\mathbf{u}$ is an eigenvector of $R$ corresponding to the eigenvalue $\lambda = 1$. Every rotation matrix must have this eigenvalue, the other two eigenvalues being complex conjugates of each other. It follows that a general rotation matrix in three dimensions has, up to a multiplicative constant, only one real eigenvector.

One way to determine the rotation axis is by using the skew-symmetric part of $R$. Since $R - R^\mathsf{T}$ is a skew-symmetric matrix, we can choose $\mathbf{u}$ such that

$[\mathbf{u}]_\times = R - R^\mathsf{T}$

The matrix–vector product then becomes a cross product of a vector with itself, ensuring that the result is zero:

$(R - R^\mathsf{T})\,\mathbf{u} = [\mathbf{u}]_\times \mathbf{u} = \mathbf{u} \times \mathbf{u} = 0$

Therefore, if

$\mathbf{u} = \begin{pmatrix} R_{32} - R_{23} \\ R_{13} - R_{31} \\ R_{21} - R_{12} \end{pmatrix}$

then $R\mathbf{u} = \mathbf{u}$. The magnitude of $\mathbf{u}$ computed this way is $\|\mathbf{u}\| = 2\sin\theta$, where $\theta$ is the angle of rotation. This does not work if $R$ is symmetric. Above, if $R - R^\mathsf{T}$ is zero, then all subsequent steps are invalid. In this case, it is necessary to diagonalize $R$ and find the eigenvector corresponding to an eigenvalue of 1.

Determining the angle
To find the angle of a rotation, once the axis of the rotation is known, select a vector $\mathbf{v}$ perpendicular to the axis. Then the angle of the rotation is the angle between $\mathbf{v}$ and $R\mathbf{v}$. A more direct method, however, is to simply calculate the trace: the sum of the diagonal elements of the rotation matrix. Care should be taken to select the right sign for the angle $\theta$ to match the chosen axis:

$\operatorname{tr}(R) = 1 + 2\cos\theta$

from which follows that the angle's absolute value is

$|\theta| = \arccos\left(\frac{\operatorname{tr}(R) - 1}{2}\right)$

For the rotation axis $\mathbf{n}$ (a unit vector), you can get the correct angle from

$\cos\theta = \frac{\operatorname{tr}(R) - 1}{2}, \qquad \sin\theta = \frac{(R_{32} - R_{23})\,n_1 + (R_{13} - R_{31})\,n_2 + (R_{21} - R_{12})\,n_3}{2}$

Rotation matrix from axis and angle
The matrix of a proper rotation $R$ by angle $\theta$ around the axis $\mathbf{u} = (u_x, u_y, u_z)$, a unit vector with $u_x^2 + u_y^2 + u_z^2 = 1$, is given by:

$R = \begin{pmatrix} \cos\theta + u_x^2(1 - \cos\theta) & u_x u_y (1 - \cos\theta) - u_z \sin\theta & u_x u_z (1 - \cos\theta) + u_y \sin\theta \\ u_y u_x (1 - \cos\theta) + u_z \sin\theta & \cos\theta + u_y^2(1 - \cos\theta) & u_y u_z (1 - \cos\theta) - u_x \sin\theta \\ u_z u_x (1 - \cos\theta) - u_y \sin\theta & u_z u_y (1 - \cos\theta) + u_x \sin\theta & \cos\theta + u_z^2(1 - \cos\theta) \end{pmatrix}$

A derivation of this matrix from first principles can be found in section 9.2 here. The basic idea to derive this matrix is dividing the problem into a few known simple steps:
First rotate the given axis and the point such that the axis lies in one of the coordinate planes (xy, yz or zx).
Then rotate the given axis and the point such that the axis is aligned with one of the two coordinate axes for that particular coordinate plane (x, y or z).
Use one of the fundamental rotation matrices to rotate the point depending on the coordinate axis with which the rotation axis is aligned.
Reverse rotate the axis-point pair such that it attains the final configuration as that was in step 2 (undoing step 2).
Reverse rotate the axis-point pair which was done in step 1 (undoing step 1).

This can be written more concisely as

$R = (\cos\theta)\, I + (\sin\theta)\, [\mathbf{u}]_\times + (1 - \cos\theta)\, (\mathbf{u} \otimes \mathbf{u})$

where $[\mathbf{u}]_\times$ is the cross product matrix of $\mathbf{u}$; the expression $\mathbf{u} \otimes \mathbf{u} = \mathbf{u}\mathbf{u}^\mathsf{T}$ is the outer product, and $I$ is the identity matrix. Alternatively, the matrix entries are:

$R_{jk} = \cos\theta\, \delta_{jk} + (1 - \cos\theta)\, u_j u_k - \sin\theta\, \varepsilon_{jkl}\, u_l$

where $\varepsilon_{jkl}$ is the Levi-Civita symbol with $\varepsilon_{123} = 1$. This is a matrix form of Rodrigues' rotation formula (or the equivalent, differently parametrized Euler–Rodrigues formula).

In $\mathbb{R}^3$, the rotation of a vector $\mathbf{x}$ around the axis $\mathbf{u}$ by an angle $\theta$ can be written as:

$R\mathbf{x} = \mathbf{x}\cos\theta + (\mathbf{u} \times \mathbf{x})\sin\theta + \mathbf{u}\,(\mathbf{u} \cdot \mathbf{x})(1 - \cos\theta)$

or equivalently:

$R\mathbf{x} = \mathbf{u}\,(\mathbf{u} \cdot \mathbf{x}) + \cos\theta\,\big(\mathbf{x} - \mathbf{u}\,(\mathbf{u} \cdot \mathbf{x})\big) + \sin\theta\,(\mathbf{u} \times \mathbf{x})$

This can also be written in tensor notation as:

$(R\mathbf{x})_j = R_{jk}\, x_k$

If the 3D space is right-handed and $\theta > 0$, this rotation will be counterclockwise when $\mathbf{u}$ points towards the observer (right-hand rule).
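A compact numerical sketch of the round trip: build R from an axis and angle with Rodrigues' formula, then recover the angle from the trace and the axis from the skew-symmetric part (function and variable names are mine):

```python
# Build a rotation matrix with Rodrigues' formula, then recover the angle
# from tr(R) = 1 + 2 cos(theta) and the axis from R - R^T.
import math

def rodrigues(u, theta):
    ux, uy, uz = u                       # u must be a unit vector
    c, s = math.cos(theta), math.sin(theta)
    C = 1 - c
    return [
        [c + ux*ux*C,     ux*uy*C - uz*s, ux*uz*C + uy*s],
        [uy*ux*C + uz*s,  c + uy*uy*C,    uy*uz*C - ux*s],
        [uz*ux*C - uy*s,  uz*uy*C + ux*s, c + uz*uz*C],
    ]

R = rodrigues((0.0, 0.6, 0.8), math.radians(40))
angle = math.acos((R[0][0] + R[1][1] + R[2][2] - 1) / 2)
axis = [R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1]]
axis = [a / (2 * math.sin(angle)) for a in axis]   # normalize by 2 sin θ
print(round(math.degrees(angle), 6), [round(a, 6) for a in axis])
# prints 40.0 [0.0, 0.6, 0.8]
```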
Explicitly, with a right-handed orthonormal basis, Note the striking merely apparent differences to the equivalent Lie-algebraic formulation below. Properties For any -dimensional rotation matrix acting on (The rotation is an orthogonal matrix) It follows that: A rotation is termed proper if , and improper (or a roto-reflection) if . For even dimensions , the eigenvalues of a proper rotation occur as pairs of complex conjugates which are roots of unity: for , which is real only for . Therefore, there may be no vectors fixed by the rotation (), and thus no axis of rotation. Any fixed eigenvectors occur in pairs, and the axis of rotation is an even-dimensional subspace. For odd dimensions , a proper rotation will have an odd number of eigenvalues, with at least one and the axis of rotation will be an odd dimensional subspace. Proof: Here is the identity matrix, and we use , as well as since is odd. Therefore, , meaning there is a nonzero vector with , that is , a fixed eigenvector. There may also be pairs of fixed eigenvectors in the even-dimensional subspace orthogonal to , so the total dimension of fixed eigenvectors is odd. For example, in 2-space , a rotation by angle has eigenvalues and , so there is no axis of rotation except when , the case of the null rotation. In 3-space , the axis of a non-null proper rotation is always a unique line, and a rotation around this axis by angle has eigenvalues . In 4-space , the four eigenvalues are of the form . The null rotation has . The case of is called a simple rotation, with two unit eigenvalues forming an axis plane, and a two-dimensional rotation orthogonal to the axis plane. Otherwise, there is no axis plane. The case of is called an isoclinic rotation, having eigenvalues repeated twice, so every vector is rotated through an angle . The trace of a rotation matrix is equal to the sum of its eigenvalues. For , a rotation by angle has trace . For , a rotation around any axis by angle has trace . For , and the trace is , which becomes for an isoclinic rotation. Examples The rotation matrix corresponds to a 90° planar rotation clockwise about the origin. The transpose of the matrix is its inverse, but since its determinant is −1, this is not a proper rotation matrix; it is a reflection across the line . The rotation matrix corresponds to a −30° rotation around the -axis in three-dimensional space. The rotation matrix corresponds to a rotation of approximately −74° around the axis in three-dimensional space. The permutation matrix is a rotation matrix, as is the matrix of any even permutation, and rotates through 120° about the axis . The matrix has determinant +1, but is not orthogonal (its transpose is not its inverse), so it is not a rotation matrix. The matrix is not square, and so cannot be a rotation matrix; yet yields a identity matrix (the columns are orthonormal). The matrix describes an isoclinic rotation in four dimensions, a rotation through equal angles (180°) through two orthogonal planes. The rotation matrix rotates vectors in the plane of the first two coordinate axes 90°, rotates vectors in the plane of the next two axes 180°, and leaves the last coordinate axis unmoved. Geometry In Euclidean geometry, a rotation is an example of an isometry, a transformation that moves points without changing the distances between them. Rotations are distinguished from other isometries by two additional properties: they leave (at least) one point fixed, and they leave "handedness" unchanged. 
In contrast, a translation moves every point, a reflection exchanges left- and right-handed ordering, a glide reflection does both, and an improper rotation combines a change in handedness with a normal rotation. If a fixed point is taken as the origin of a Cartesian coordinate system, then every point can be given coordinates as a displacement from the origin. Thus one may work with the vector space of displacements instead of the points themselves. Now suppose are the coordinates of the vector from the origin to point . Choose an orthonormal basis for our coordinates; then the squared distance to , by Pythagoras, is which can be computed using the matrix multiplication A geometric rotation transforms lines to lines, and preserves ratios of distances between points. From these properties it can be shown that a rotation is a linear transformation of the vectors, and thus can be written in matrix form, . The fact that a rotation preserves, not just ratios, but distances themselves, is stated as or Because this equation holds for all vectors, , one concludes that every rotation matrix, , satisfies the orthogonality condition, Rotations preserve handedness because they cannot change the ordering of the axes, which implies the special matrix condition, Equally important, it can be shown that any matrix satisfying these two conditions acts as a rotation. Multiplication The inverse of a rotation matrix is its transpose, which is also a rotation matrix: The product of two rotation matrices is a rotation matrix: For , multiplication of rotation matrices is generally not commutative. Noting that any identity matrix is a rotation matrix, and that matrix multiplication is associative, we may summarize all these properties by saying that the rotation matrices form a group, which for is non-abelian, called a special orthogonal group, and denoted by , , , or , the group of rotation matrices is isomorphic to the group of rotations in an space. This means that multiplication of rotation matrices corresponds to composition of rotations, applied in left-to-right order of their corresponding matrices. Ambiguities The interpretation of a rotation matrix can be subject to many ambiguities. In most cases the effect of the ambiguity is equivalent to the effect of a rotation matrix inversion (for these orthogonal matrices equivalently matrix transpose). Alias or alibi (passive or active) transformation The coordinates of a point may change due to either a rotation of the coordinate system (alias), or a rotation of the point (alibi). In the latter case, the rotation of also produces a rotation of the vector representing . In other words, either and are fixed while rotates (alias), or is fixed while and rotate (alibi). Any given rotation can be legitimately described both ways, as vectors and coordinate systems actually rotate with respect to each other, about the same axis but in opposite directions. Throughout this article, we chose the alibi approach to describe rotations. For instance, represents a counterclockwise rotation of a vector by an angle , or a rotation of by the same angle but in the opposite direction (i.e. clockwise). Alibi and alias transformations are also known as active and passive transformations, respectively. Pre-multiplication or post-multiplication The same point can be represented either by a column vector or a row vector . Rotation matrices can either pre-multiply column vectors (), or post-multiply row vectors (). However, produces a rotation in the opposite direction with respect to . 
Throughout this article, rotations produced on column vectors are described by means of a pre-multiplication. To obtain exactly the same rotation (i.e. the same final coordinates of point ), the equivalent row vector must be post-multiplied by the transpose of (i.e. ). Right- or left-handed coordinates The matrix and the vector can be represented with respect to a right-handed or left-handed coordinate system. Throughout the article, we assumed a right-handed orientation, unless otherwise specified. Vectors or forms The vector space has a dual space of linear forms, and the matrix can act on either vectors or forms. Decompositions Independent planes Consider the rotation matrix If acts in a certain direction, , purely as a scaling by a factor , then we have so that Thus is a root of the characteristic polynomial for , Two features are noteworthy. First, one of the roots (or eigenvalues) is 1, which tells us that some direction is unaffected by the matrix. For rotations in three dimensions, this is the axis of the rotation (a concept that has no meaning in any other dimension). Second, the other two roots are a pair of complex conjugates, whose product is 1 (the constant term of the quadratic), and whose sum is (the negated linear term). This factorization is of interest for rotation matrices because the same thing occurs for all of them. (As special cases, for a null rotation the "complex conjugates" are both 1, and for a 180° rotation they are both −1.) Furthermore, a similar factorization holds for any rotation matrix. If the dimension, , is odd, there will be a "dangling" eigenvalue of 1; and for any dimension the rest of the polynomial factors into quadratic terms like the one here (with the two special cases noted). We are guaranteed that the characteristic polynomial will have degree and thus eigenvalues. And since a rotation matrix commutes with its transpose, it is a normal matrix, so can be diagonalized. We conclude that every rotation matrix, when expressed in a suitable coordinate system, partitions into independent rotations of two-dimensional subspaces, at most of them. The sum of the entries on the main diagonal of a matrix is called the trace; it does not change if we reorient the coordinate system, and always equals the sum of the eigenvalues. This has the convenient implication for and rotation matrices that the trace reveals the angle of rotation, , in the two-dimensional space (or subspace). For a matrix the trace is , and for a matrix it is . In the three-dimensional case, the subspace consists of all vectors perpendicular to the rotation axis (the invariant direction, with eigenvalue 1). Thus we can extract from any rotation matrix a rotation axis and an angle, and these completely determine the rotation. Sequential angles The constraints on a rotation matrix imply that it must have the form with . Therefore, we may set and , for some angle . To solve for it is not enough to look at alone or alone; we must consider both together to place the angle in the correct quadrant, using a two-argument arctangent function. Now consider the first column of a rotation matrix, Although will probably not equal 1, but some value , we can use a slight variation of the previous computation to find a so-called Givens rotation that transforms the column to zeroing . This acts on the subspace spanned by the - and -axes. We can then repeat the process for the -subspace to zero . 
Acting on the full matrix, these two rotations produce the schematic form Shifting attention to the second column, a Givens rotation of the -subspace can now zero the value. This brings the full matrix to the form which is an identity matrix. Thus we have decomposed as An rotation matrix will have , or entries below the diagonal to zero. We can zero them by extending the same idea of stepping through the columns with a series of rotations in a fixed sequence of planes. We conclude that the set of rotation matrices, each of which has entries, can be parameterized by angles. In three dimensions this restates in matrix form an observation made by Euler, so mathematicians call the ordered sequence of three angles Euler angles. However, the situation is somewhat more complicated than we have so far indicated. Despite the small dimension, we actually have considerable freedom in the sequence of axis pairs we use; and we also have some freedom in the choice of angles. Thus we find many different conventions employed when three-dimensional rotations are parameterized for physics, or medicine, or chemistry, or other disciplines. When we include the option of world axes or body axes, 24 different sequences are possible. And while some disciplines call any sequence Euler angles, others give different names (Cardano, Tait–Bryan, roll-pitch-yaw) to different sequences. One reason for the large number of options is that, as noted previously, rotations in three dimensions (and higher) do not commute. If we reverse a given sequence of rotations, we get a different outcome. This also implies that we cannot compose two rotations by adding their corresponding angles. Thus Euler angles are not vectors, despite a similarity in appearance as a triplet of numbers. Nested dimensions A rotation matrix such as suggests a rotation matrix, is embedded in the upper left corner: This is no illusion; not just one, but many, copies of -dimensional rotations are found within -dimensional rotations, as subgroups. Each embedding leaves one direction fixed, which in the case of matrices is the rotation axis. For example, we have fixing the -axis, the -axis, and the -axis, respectively. The rotation axis need not be a coordinate axis; if is a unit vector in the desired direction, then where , , is a rotation by angle leaving axis fixed. A direction in -dimensional space will be a unit magnitude vector, which we may consider a point on a generalized sphere, . Thus it is natural to describe the rotation group as combining and . A suitable formalism is the fiber bundle, where for every direction in the base space, , the fiber over it in the total space, , is a copy of the fiber space, , namely the rotations that keep that direction fixed. Thus we can build an rotation matrix by starting with a matrix, aiming its fixed axis on (the ordinary sphere in three-dimensional space), aiming the resulting rotation on , and so on up through . A point on can be selected using numbers, so we again have numbers to describe any rotation matrix. In fact, we can view the sequential angle decomposition, discussed previously, as reversing this process. The composition of Givens rotations brings the first column (and row) to , so that the remainder of the matrix is a rotation matrix of dimension one less, embedded so as to leave fixed. Skew parameters via Cayley's formula When an rotation matrix , does not include a −1 eigenvalue, thus none of the planar rotations which it comprises are 180° rotations, then is an invertible matrix. 
Most rotation matrices fit this description, and for them it can be shown that is a skew-symmetric matrix, . Thus ; and since the diagonal is necessarily zero, and since the upper triangle determines the lower one, contains independent numbers. Conveniently, is invertible whenever is skew-symmetric; thus we can recover the original matrix using the Cayley transform, which maps any skew-symmetric matrix to a rotation matrix. In fact, aside from the noted exceptions, we can produce any rotation matrix in this way. Although in practical applications we can hardly afford to ignore 180° rotations, the Cayley transform is still a potentially useful tool, giving a parameterization of most rotation matrices without trigonometric functions. In three dimensions, for example, we have If we condense the skew entries into a vector, , then we produce a 90° rotation around the -axis for (1, 0, 0), around the -axis for (0, 1, 0), and around the -axis for (0, 0, 1). The 180° rotations are just out of reach; for, in the limit as , does approach a 180° rotation around the axis, and similarly for other directions. Decomposition into shears For the 2D case, a rotation matrix can be decomposed into three shear matrices (): This is useful, for instance, in computer graphics, since shears can be implemented with fewer multiplication instructions than rotating a bitmap directly. On modern computers, this may not matter, but it can be relevant for very old or low-end microprocessors. A rotation can also be written as two shears and scaling (): Group theory Below follow some basic facts about the role of the collection of all rotation matrices of a fixed dimension (here mostly 3) in mathematics and particularly in physics where rotational symmetry is a requirement of every truly fundamental law (due to the assumption of isotropy of space), and where the same symmetry, when present, is a simplifying property of many problems of less fundamental nature. Examples abound in classical mechanics and quantum mechanics. Knowledge of the part of the solutions pertaining to this symmetry applies (with qualifications) to all such problems and it can be factored out of a specific problem at hand, thus reducing its complexity. A prime example – in mathematics and physics – would be the theory of spherical harmonics. Their role in the group theory of the rotation groups is that of being a representation space for the entire set of finite-dimensional irreducible representations of the rotation group SO(3). For this topic, see Rotation group SO(3) § Spherical harmonics. The main articles listed in each subsection are referred to for more detail. Lie group The rotation matrices for each form a group, the special orthogonal group, . This algebraic structure is coupled with a topological structure inherited from in such a way that the operations of multiplication and taking the inverse are analytic functions of the matrix entries. Thus is for each a Lie group. It is compact and connected, but not simply connected. It is also a semi-simple group, in fact a simple group with the exception SO(4). The relevance of this is that all theorems and all machinery from the theory of analytic manifolds (analytic manifolds are in particular smooth manifolds) apply and the well-developed representation theory of compact semi-simple groups is ready for use. Lie algebra The Lie algebra of is given by and is the space of skew-symmetric matrices of dimension , see classical group, where is the Lie algebra of , the orthogonal group. 
For reference, the most common basis for the Lie algebra so(3) consists of the three skew-symmetric generators of rotations about the coordinate axes. Exponential map Connecting the Lie algebra to the Lie group is the exponential map, which is defined using the standard matrix exponential series. For any skew-symmetric matrix A, exp(A) is always a rotation matrix. An important practical example is the 3 × 3 case. In rotation group SO(3), it is shown that one can identify every A in so(3) with an Euler vector ω = θu, where u is a unit magnitude vector. By the properties of the identification, u is in the null space of A. Thus, u is left invariant by exp(A) and is hence a rotation axis. According to Rodrigues' rotation formula in matrix form, one obtains the matrix for a rotation around axis u by the angle θ. For full detail, see exponential map SO(3). Baker–Campbell–Hausdorff formula The BCH formula provides an explicit expression for Z = log(e^X e^Y) in terms of a series expansion of nested commutators of X and Y. This general expansion unfolds as Z = X + Y + ½[X, Y] + ⋯. In the SO(3) case, the general infinite expansion has a compact form, for suitable trigonometric function coefficients, detailed in the Baker–Campbell–Hausdorff formula for SO(3). As a group identity, the above holds for all faithful representations, including the doublet (spinor representation), which is simpler. The same explicit formula thus follows straightforwardly through Pauli matrices; see the 2 × 2 derivation for SU(2). The general n × n case is treated in the references. Spin group The Lie group of rotation matrices, SO(n), is not simply connected, so Lie theory tells us it is a homomorphic image of a universal covering group. Often the covering group, which in this case is called the spin group, denoted by Spin(n), is simpler and more natural to work with. In the case of planar rotations, SO(2) is topologically a circle, S¹. Its universal covering group, Spin(2), is isomorphic to the real line, R, under addition. Whenever angles of arbitrary magnitude are used one is taking advantage of the convenience of the universal cover. Every 2 × 2 rotation matrix is produced by a countable infinity of angles, separated by integer multiples of 2π. Correspondingly, the fundamental group of SO(2) is isomorphic to the integers, Z. In the case of spatial rotations, SO(3) is topologically equivalent to three-dimensional real projective space, RP³. Its universal covering group, Spin(3), is isomorphic to the 3-sphere, S³. Every 3 × 3 rotation matrix is produced by two opposite points on the sphere. Correspondingly, the fundamental group of SO(3) is isomorphic to the two-element group, Z₂. We can also describe Spin(3) as isomorphic to quaternions of unit norm under multiplication, or to certain 4 × 4 real matrices, or to 2 × 2 complex special unitary matrices, namely SU(2). The covering maps for the first and the last case can be written explicitly. For a detailed account of the SU(2) covering and the quaternionic covering, see spin group SO(3). Many features of these cases are the same for higher dimensions. The coverings are all two-to-one, with SO(n), n > 2, having fundamental group Z₂. The natural setting for these groups is within a Clifford algebra. One type of action of the rotations is produced by a kind of "sandwich", denoted by qvq∗. More importantly in applications to physics, the corresponding spin representation of the Lie algebra sits inside the Clifford algebra. It can be exponentiated in the usual way to give rise to a 2-valued representation, also known as projective representation of the rotation group. This is the case with SO(3) and SU(2), where the 2-valued representation can be viewed as an "inverse" of the covering map. By properties of covering maps, the inverse can be chosen one-to-one as a local section, but not globally.
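The exponential map described above is easy to exercise numerically. A minimal sketch (ours, assuming NumPy and SciPy), checking that the matrix exponential of a skew-symmetric matrix reproduces Rodrigues' formula R = I + sin(θ)K + (1 − cos(θ))K²:

import numpy as np
from scipy.linalg import expm

def skew(u):
    # Cross-product matrix [u]_x, an element of the Lie algebra so(3).
    return np.array([[0, -u[2], u[1]],
                     [u[2], 0, -u[0]],
                     [-u[1], u[0], 0]], dtype=float)

u = np.array([0.0, 0.0, 1.0])  # unit rotation axis
theta = 0.7                    # rotation angle (radians)

K = skew(u)
R_exp = expm(theta * K)  # exp of a skew-symmetric matrix is a rotation
R_rod = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
assert np.allclose(R_exp, R_rod)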
Infinitesimal rotations The matrices in the Lie algebra are not themselves rotations; the skew-symmetric matrices are derivatives, proportional differences of rotations. An actual "differential rotation", or infinitesimal rotation matrix has the form where is vanishingly small and , for instance with , The computation rules are as usual except that infinitesimals of second order are routinely dropped. With these rules, these matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. It turns out that the order in which infinitesimal rotations are applied is irrelevant. To see this exemplified, consult infinitesimal rotations SO(3). Conversions We have seen the existence of several decompositions that apply in any dimension, namely independent planes, sequential angles, and nested dimensions. In all these cases we can either decompose a matrix or construct one. We have also given special attention to rotation matrices, and these warrant further attention, in both directions . Quaternion Given the unit quaternion , the equivalent pre-multiplied (to be used with column vectors) rotation matrix is Now every quaternion component appears multiplied by two in a term of degree two, and if all such terms are zero what is left is an identity matrix. This leads to an efficient, robust conversion from any quaternion – whether unit or non-unit – to a rotation matrix. Given: we can calculate Freed from the demand for a unit quaternion, we find that nonzero quaternions act as homogeneous coordinates for rotation matrices. The Cayley transform, discussed earlier, is obtained by scaling the quaternion so that its component is 1. For a 180° rotation around any axis, will be zero, which explains the Cayley limitation. The sum of the entries along the main diagonal (the trace), plus one, equals , which is . Thus we can write the trace itself as ; and from the previous version of the matrix we see that the diagonal entries themselves have the same form: , , and . So we can easily compare the magnitudes of all four quaternion components using the matrix diagonal. We can, in fact, obtain all four magnitudes using sums and square roots, and choose consistent signs using the skew-symmetric part of the off-diagonal entries: Alternatively, use a single square root and division This is numerically stable so long as the trace, , is not negative; otherwise, we risk dividing by (nearly) zero. In that case, suppose is the largest diagonal entry, so will have the largest magnitude (the other cases are derived by cyclic permutation); then the following is safe. If the matrix contains significant error, such as accumulated numerical error, we may construct a symmetric matrix, and find the eigenvector, , of its largest magnitude eigenvalue. (If is truly a rotation matrix, that value will be 1.) The quaternion so obtained will correspond to the rotation matrix closest to the given matrix (Note: formulation of the cited article is post-multiplied, works with row vectors). Polar decomposition If the matrix is nonsingular, its columns are linearly independent vectors; thus the Gram–Schmidt process can adjust them to be an orthonormal basis. Stated in terms of numerical linear algebra, we convert to an orthogonal matrix, , using QR decomposition. However, we often prefer a closest to , which this method does not accomplish. For that, the tool we want is the polar decomposition (; ). 
To measure closeness, we may use any matrix norm invariant under orthogonal transformations. A convenient choice is the Frobenius norm, , squared, which is the sum of the squares of the element differences. Writing this in terms of the trace, , our goal is, Find minimizing , subject to . Though written in matrix terms, the objective function is just a quadratic polynomial. We can minimize it in the usual way, by finding where its derivative is zero. For a matrix, the orthogonality constraint implies six scalar equalities that the entries of must satisfy. To incorporate the constraint(s), we may employ a standard technique, Lagrange multipliers, assembled as a symmetric matrix, . Thus our method is: Differentiate with respect to (the entries of) , and equate to zero. Consider a example. Including constraints, we seek to minimize Taking the derivative with respect to , , , in turn, we assemble a matrix. In general, we obtain the equation so that where is orthogonal and is symmetric. To ensure a minimum, the matrix (and hence ) must be positive definite. Linear algebra calls the polar decomposition of , with the positive square root of . When is non-singular, the and factors of the polar decomposition are uniquely determined. However, the determinant of is positive because is positive definite, so inherits the sign of the determinant of . That is, is only guaranteed to be orthogonal, not a rotation matrix. This is unavoidable; an with negative determinant has no uniquely defined closest rotation matrix. Axis and angle To efficiently construct a rotation matrix from an angle and a unit axis , we can take advantage of symmetry and skew-symmetry within the entries. If , , and are the components of the unit vector representing the axis, and then Determining an axis and angle, like determining a quaternion, is only possible up to the sign; that is, and correspond to the same rotation matrix, just like and . Additionally, axis–angle extraction presents additional difficulties. The angle can be restricted to be from 0° to 180°, but angles are formally ambiguous by multiples of 360°. When the angle is zero, the axis is undefined. When the angle is 180°, the matrix becomes symmetric, which has implications in extracting the axis. Near multiples of 180°, care is needed to avoid numerical problems: in extracting the angle, a two-argument arctangent with equal to avoids the insensitivity of arccos; and in computing the axis magnitude in order to force unit magnitude, a brute-force approach can lose accuracy through underflow . A partial approach is as follows: The -, -, and -components of the axis would then be divided by . A fully robust approach will use a different algorithm when , the trace of the matrix , is negative, as with quaternion extraction. When is zero because the angle is zero, an axis must be provided from some source other than the matrix. Euler angles Complexity of conversion escalates with Euler angles (used here in the broad sense). The first difficulty is to establish which of the twenty-four variations of Cartesian axis order we will use. Suppose the three angles are , , ; physics and chemistry may interpret these as while aircraft dynamics may use One systematic approach begins with choosing the rightmost axis. Among all permutations of , only two place that axis first; one is an even permutation and the other odd. Choosing parity thus establishes the middle axis. That leaves two choices for the left-most axis, either duplicating the first or not. 
These three choices give us 3 × 2 × 2 = 12 variations; we double that to 24 by choosing static or rotating axes. This is enough to construct a matrix from three angles, but triples differing in many ways can give the same rotation matrix. For example, suppose we use the convention above; then we have the following equivalent pairs: (90°, 45°, −105°) ≡ (−270°, −315°, 255°) (multiples of 360°); (72°, 0°, 0°) ≡ (40°, 0°, 32°) (singular alignment); (45°, 60°, −30°) ≡ (−135°, −60°, 150°) (bistable flip). Angles for any order can be found using a concise common routine. The problem of singular alignment, the mathematical analog of physical gimbal lock, occurs when the middle rotation aligns the axes of the first and last rotations. It afflicts every axis order at either even or odd multiples of 90°. These singularities are not characteristic of the rotation matrix as such, and only occur with the usage of Euler angles. The singularities are avoided when considering and manipulating the rotation matrix as orthonormal row vectors (in 3D applications often named the right-vector, up-vector and out-vector) instead of as angles. The singularities are also avoided when working with quaternions. Vector to vector formulation In some instances it is interesting to describe a rotation by specifying how a vector is mapped into another through the shortest path (smallest angle). In three dimensions this completely describes the associated rotation matrix. In general, given two unit vectors x and y, a matrix can be constructed that belongs to SO(n) and maps x to y. Uniform random rotation matrices We sometimes need to generate a uniformly distributed random rotation matrix. It seems intuitively clear in two dimensions that this means the rotation angle is uniformly distributed between 0 and 2π. That intuition is correct, but does not carry over to higher dimensions. For example, if we decompose 3 × 3 rotation matrices in axis–angle form, the angle should not be uniformly distributed; the probability that (the magnitude of) the angle is at most θ should be (θ − sin θ)/π, for 0 ≤ θ ≤ π. Since SO(n) is a connected and locally compact Lie group, we have a simple standard criterion for uniformity, namely that the distribution be unchanged when composed with any arbitrary rotation (a Lie group "translation"). This definition corresponds to what is called Haar measure. The Cayley transform can be used to generate and test matrices according to this criterion. We can also generate a uniform distribution in any dimension using the subgroup algorithm. This recursively exploits the nested dimensions group structure of SO(n), as follows. Generate a uniform angle and construct a 2 × 2 rotation matrix. To step from n to n + 1, generate a vector v uniformly distributed on the n-sphere, embed the n × n matrix in the next larger size with last column (0, …, 0, 1), and rotate the larger matrix so the last column becomes v. As usual, we have special alternatives for the 3 × 3 case. Each of these methods begins with three independent random scalars uniformly distributed on the unit interval. One method takes advantage of the odd dimension to change a Householder reflection to a rotation by negation, and uses that to aim the axis of a uniform planar rotation. Another method uses unit quaternions. Multiplication of rotation matrices is homomorphic to multiplication of quaternions, and multiplication by a unit quaternion rotates the unit sphere. Since the homomorphism is a local isometry, we immediately conclude that to produce a uniform distribution on SO(3) we may use a uniform distribution on S³.
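A minimal sketch of this quaternion-based sampler (ours, assuming NumPy; the conversion is the standard unit-quaternion to pre-multiplied rotation-matrix formula):

import numpy as np

def random_rotation(rng=None):
    rng = np.random.default_rng() if rng is None else rng
    q = rng.normal(size=4)
    w, x, y, z = q / np.linalg.norm(q)  # uniform unit quaternion on S^3
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

R = random_rotation()
assert np.allclose(R.T @ R, np.eye(3))    # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)  # proper rotation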
In practice (as in the sketch above): create a four-element vector where each element is sampled from a normal distribution. Normalize its length, and you have a uniformly sampled random unit quaternion, which represents a uniformly sampled random rotation. Note that the aforementioned applies only to rotations in dimension 3. For a generalised idea of quaternions, one should look into rotors. Euler angles can also be used, though not with each angle uniformly distributed. For the axis–angle form, the axis is uniformly distributed over the unit sphere of directions, S², while the angle has the nonuniform distribution over [0, π] noted previously. See also Euler–Rodrigues formula Euler's rotation theorem Rodrigues' rotation formula Plane of rotation Axis–angle representation Rotation group SO(3) Rotation formalisms in three dimensions Rotation operator (vector space) Transformation matrix Yaw-pitch-roll system Kabsch algorithm Isometry Rigid transformation Rotations in 4-dimensional Euclidean space Trigonometric Identities Versor Remarks Notes References External links Rotation matrices at Mathworld Math Awareness Month 2000 interactive demo (requires Java) Rotation Matrices at MathPages A parametrization of SOn(R) by generalized Euler Angles Rotation about any point Transformation (function) Matrices Mathematical physics
Rotation matrix
[ "Physics", "Mathematics" ]
9,188
[ "Transformation (function)", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Matrices (mathematics)", "Geometry", "Mathematical physics" ]
2,684,988
https://en.wikipedia.org/wiki/Fluid%20mechanics
Fluid mechanics is the branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them. It has applications in a wide range of disciplines, including mechanical, aerospace, civil, chemical, and biomedical engineering, as well as geophysics, oceanography, meteorology, astrophysics, and biology. It can be divided into fluid statics, the study of fluids at rest; and fluid dynamics, the study of the effect of forces on fluid motion. It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms; that is, it models matter from a macroscopic viewpoint rather than from microscopic. Fluid mechanics, especially fluid dynamics, is an active field of research, typically mathematically complex. Many problems are partly or wholly unsolved and are best addressed by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. History The study of fluid mechanics goes back at least to the days of ancient Greece, when Archimedes investigated fluid statics and buoyancy and formulated his famous law known now as the Archimedes' principle, which was published in his work On Floating Bodies—generally considered to be the first major work on fluid mechanics. Iranian scholar Abu Rayhan Biruni and later Al-Khazini applied experimental scientific methods to fluid mechanics. Rapid advancement in fluid mechanics began with Leonardo da Vinci (observations and experiments), Evangelista Torricelli (invented the barometer), Isaac Newton (investigated viscosity) and Blaise Pascal (researched hydrostatics, formulated Pascal's law), and was continued by Daniel Bernoulli with the introduction of mathematical fluid dynamics in Hydrodynamica (1739). Inviscid flow was further analyzed by various mathematicians (Jean le Rond d'Alembert, Joseph Louis Lagrange, Pierre-Simon Laplace, Siméon Denis Poisson) and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille and Gotthilf Hagen. Further mathematical justification was provided by Claude-Louis Navier and George Gabriel Stokes in the Navier–Stokes equations, and boundary layers were investigated (Ludwig Prandtl, Theodore von Kármán), while various scientists such as Osborne Reynolds, Andrey Kolmogorov, and Geoffrey Ingram Taylor advanced the understanding of fluid viscosity and turbulence. Main branches Fluid statics Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium; and is contrasted with fluid dynamics, the study of fluids in motion. Hydrostatics offers physical explanations for many phenomena of everyday life, such as why atmospheric pressure changes with altitude, why wood and oil float on water, and why the surface of water is always level whatever the shape of its container. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics (for example, in understanding plate tectonics and anomalies in the Earth's gravitational field), to meteorology, to medicine (in the context of blood pressure), and many other fields. 
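As a small numerical illustration of the hydrostatic ideas above, the pressure under a column of fluid at rest grows linearly with depth, p = p0 + ρgh. The script below is our own sketch; the values (fresh water under a standard atmosphere) are illustrative assumptions, not from the article.

rho = 1000.0    # density of water, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2
p0 = 101325.0   # surface (atmospheric) pressure, Pa
h = 10.0        # depth below the surface, m

p = p0 + rho * g * h
print(f"pressure at {h} m depth: {p / 1000:.1f} kPa")  # ~199.4 kPa, about 2 atm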
Fluid dynamics Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow—the science of liquids and gases in motion. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, density, and temperature, as functions of space and time. It has several subdisciplines itself, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and movements on aircraft, determining the mass flow rate of petroleum through pipelines, predicting evolving weather patterns, understanding nebulae in interstellar space and modeling explosions. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics. Relationship to continuum mechanics Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress. Assumptions The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations. Fundamentally, every fluid mechanical system is assumed to obey: Conservation of mass Conservation of energy Conservation of momentum The continuum assumption For example, the assumption that mass is conserved means that for any fixed control volume (for example, a spherical volume)—enclosed by a control surface—the rate of change of the mass contained in that volume is equal to the rate at which mass is passing through the surface from outside to inside, minus the rate at which mass is passing from inside to outside. This can be expressed as an equation in integral form over the control volume. The is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Under the continuum assumption, macroscopic (observed/measurable) properties such as density, pressure, temperature, and bulk velocity are taken to be well-defined at "infinitesimal" volume elements—small in comparison to the characteristic length scale of the system, but large in comparison to molecular length scale. Fluid properties can vary continuously from one volume element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic speed flows, or molecular flows on nano scale. Those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis, but molecular approach (statistical mechanics) can be applied to find the fluid motion for larger Knudsen numbers. Navier–Stokes equations The Navier–Stokes equations (named after Claude-Louis Navier and George Gabriel Stokes) are differential equations that describe the force balance at a given point within a fluid. 
For an incompressible fluid with vector velocity field v, the Navier–Stokes equations are ∂v/∂t + (v · ∇)v = −∇(p/ρ) + ν∇²v, together with the incompressibility condition ∇ · v = 0. These differential equations are the analogues for deformable materials to Newton's equations of motion for particles – the Navier–Stokes equations describe changes in momentum (force) in response to pressure and viscosity, parameterized by the kinematic viscosity ν. Occasionally, body forces, such as the gravitational force or the Lorentz force, are added to the equations. Solutions of the Navier–Stokes equations for a given physical problem must be sought with the help of calculus. In practical terms, only the simplest cases can be solved exactly in this way. These cases generally involve non-turbulent, steady flow in which the Reynolds number is small. For more complex cases, especially those involving turbulence, such as global weather systems, aerodynamics, hydrodynamics and many more, solutions of the Navier–Stokes equations can currently only be found with the help of computers. This branch of science is called computational fluid dynamics. Inviscid and viscous fluids An inviscid fluid has no viscosity, ν = 0. In practice, an inviscid flow is an idealization, one that facilitates mathematical treatment. In fact, purely inviscid flows are only known to be realized in the case of superfluidity. Otherwise, fluids are generally viscous, a property that is often most important within a boundary layer near a solid surface, where the flow must match onto the no-slip condition at the solid. In some cases, the mathematics of a fluid mechanical system can be treated by assuming that the fluid outside of boundary layers is inviscid, and then matching its solution onto that for a thin laminar boundary layer. For fluid flow over a porous boundary, the fluid velocity can be discontinuous between the free fluid and the fluid in the porous media (this is related to the Beavers and Joseph condition). Further, it is useful at low subsonic speeds to assume that gas is incompressible—that is, the density of the gas does not change even though the speed and static pressure change. Newtonian versus non-Newtonian fluids A Newtonian fluid (named after Isaac Newton) is defined to be a fluid whose shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear. This definition means that, regardless of the forces acting on a fluid, it continues to flow. For example, water is a Newtonian fluid, because it continues to display fluid properties no matter how much it is stirred or mixed. A slightly less rigorous definition is that the drag of a small object being moved slowly through the fluid is proportional to the force applied to the object. (Compare friction). Important fluids, like water as well as most gases, behave—to good approximation—as a Newtonian fluid under normal conditions on Earth. By contrast, stirring a non-Newtonian fluid can leave a "hole" behind. This will gradually fill up over time—this behavior is seen in materials such as pudding, oobleck, or sand (although sand isn't strictly a fluid). Alternatively, stirring a non-Newtonian fluid can cause the viscosity to decrease, so the fluid appears "thinner" (this is seen in non-drip paints). There are many types of non-Newtonian fluids, as they are defined to be something that fails to obey a particular property—for example, most fluids with long molecular chains can react in a non-Newtonian manner.
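As noted above, exact solutions are essentially restricted to small Reynolds numbers, so a back-of-the-envelope Reynolds-number estimate, Re = ρvL/μ, is often the first step in classifying a flow. A minimal sketch (ours; the pipe-flow values are illustrative assumptions):

rho = 1000.0   # density of water, kg/m^3
v = 0.5        # mean flow speed, m/s
L = 0.05       # characteristic length (pipe diameter), m
mu = 1.0e-3    # dynamic viscosity of water, Pa*s

Re = rho * v * L / mu  # dimensionless ratio of inertial to viscous forces
print(f"Re = {Re:.0f}")  # 25000: far from the small-Re regime of exact solutions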
Equations for a Newtonian fluid The constant of proportionality between the viscous stress tensor and the velocity gradient is known as the viscosity. A simple equation to describe incompressible Newtonian fluid behavior is τ = μ (du/dy), where τ is the shear stress exerted by the fluid ("drag"), μ is the fluid viscosity, a constant of proportionality, and du/dy is the velocity gradient perpendicular to the direction of shear. For a Newtonian fluid, the viscosity, by definition, depends only on temperature, not on the forces acting upon it. If the fluid is incompressible the equation governing the viscous stress (in Cartesian coordinates) is τij = μ (∂vi/∂xj + ∂vj/∂xi), where τij is the shear stress on the ith face of a fluid element in the jth direction, vi is the velocity in the ith direction, and xj is the jth direction coordinate. If the fluid is not incompressible the general form for the viscous stress in a Newtonian fluid is τij = μ (∂vi/∂xj + ∂vj/∂xi − (2/3) δij ∇ · v) + κ δij ∇ · v, where κ is the second viscosity coefficient (or bulk viscosity). If a fluid does not obey this relation, it is termed a non-Newtonian fluid, of which there are several types. Non-Newtonian fluids can be either plastic, Bingham plastic, pseudoplastic, dilatant, thixotropic, rheopectic, or viscoelastic. In some applications, another rough broad division among fluids is made: ideal and non-ideal fluids. An ideal fluid is non-viscous and offers no resistance whatsoever to a shearing force. An ideal fluid really does not exist, but in some calculations, the assumption is justifiable. One example of this is the flow far from solid surfaces. In many cases, the viscous effects are concentrated near the solid boundaries (such as in boundary layers) while in regions of the flow field far away from the boundaries the viscous effects can be neglected and the fluid there is treated as if it were inviscid (ideal flow). When the viscosity is neglected, the term containing the viscous stress tensor in the Navier–Stokes equation vanishes. The equation reduced in this form is called the Euler equation. See also Transport phenomena Aerodynamics Applied mechanics Bernoulli's principle Communicating vessels Computational fluid dynamics Compressor map Secondary flow Different types of boundary conditions in fluid dynamics Fluid–structure interaction Immersed boundary method Stochastic Eulerian Lagrangian method Stokesian dynamics Smoothed-particle hydrodynamics References Further reading External links Free Fluid Mechanics books Annual Review of Fluid Mechanics. CFDWiki – the Computational Fluid Dynamics reference wiki. Educational Particle Image Velocimetry – resources and demonstrations Civil engineering
Fluid mechanics
[ "Engineering" ]
2,624
[ "Construction", "Civil engineering", "Fluid mechanics" ]
2,685,460
https://en.wikipedia.org/wiki/Level-spacing%20distribution
In mathematical physics, level spacing is the difference between consecutive elements in some set of real numbers. In particular, it is the difference between consecutive energy levels or eigenvalues of a matrix or linear operator. Mathematical physics
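The definition is straightforward to explore numerically. A minimal sketch (our illustration; the choice of a random real symmetric matrix is an assumption, not part of the definition):

import numpy as np

rng = np.random.default_rng(0)
n = 500
a = rng.normal(size=(n, n))
h = (a + a.T) / 2          # symmetric, so the eigenvalues are real

levels = np.sort(np.linalg.eigvalsh(h))
spacings = np.diff(levels)  # level spacings: differences of consecutive eigenvalues
print(spacings.mean())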
Level-spacing distribution
[ "Physics", "Mathematics" ]
47
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
2,685,562
https://en.wikipedia.org/wiki/Fredholm%20determinant
In mathematics, the Fredholm determinant is a complex-valued function which generalizes the determinant of a finite dimensional linear operator. It is defined for bounded operators on a Hilbert space which differ from the identity operator by a trace-class operator. The function is named after the mathematician Erik Ivar Fredholm. Fredholm determinants have had many applications in mathematical physics, the most celebrated example being Gábor Szegő's limit formula, proved in response to a question raised by Lars Onsager and C. N. Yang on the spontaneous magnetization of the Ising model. Definition Let be a Hilbert space and the set of bounded invertible operators on of the form , where is a trace-class operator. is a group because so is trace class if is. It has a natural metric given by , where is the trace-class norm. If is a Hilbert space with inner product , then so too is the th exterior power with inner product In particular gives an orthonormal basis of if is an orthonormal basis of . If is a bounded operator on , then functorially defines a bounded operator on by If is trace-class, then is also trace-class with This shows that the definition of the Fredholm determinant given by makes sense. Properties If is a trace-class operator defines an entire function such that The function is continuous on trace-class operators, with One can improve this inequality slightly to the following, as noted in Chapter 5 of Simon: If and are trace-class then The function defines a homomorphism of into the multiplicative group of nonzero complex numbers (since elements of are invertible). If is in and is invertible, If is trace-class, then Fredholm determinants of commutators A function from into is said to be differentiable if is differentiable as a map into the trace-class operators, i.e. if the limit exists in trace-class norm. If is a differentiable function with values in trace-class operators, then so too is and where Israel Gohberg and Mark Krein proved that if is a differentiable function into , then is a differentiable map into with This result was used by Joel Pincus, William Helton and Roger Howe to prove that if and are bounded operators with trace-class commutator , then Szegő limit formula Let and let be the orthogonal projection onto the Hardy space . If is a smooth function on the circle, let denote the corresponding multiplication operator on . The commutator is trace-class. Let be the Toeplitz operator on defined by then the additive commutator is trace-class if and are smooth. Berger and Shaw proved that If and are smooth, then is in . Harold Widom used the result of Pincus-Helton-Howe to prove that where He used this to give a new proof of Gábor Szegő's celebrated limit formula: where is the projection onto the subspace of spanned by and . Szegő's limit formula was proved in 1951 in response to a question raised by the work Lars Onsager and C. N. Yang on the calculation of the spontaneous magnetization for the Ising model. The formula of Widom, which leads quite quickly to Szegő's limit formula, is also equivalent to the duality between bosons and fermions in conformal field theory. A singular version of Szegő's limit formula for functions supported on an arc of the circle was proved by Widom; it has been applied to establish probabilistic results on the eigenvalue distribution of random unitary matrices. 
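Numerically, Fredholm determinants of integral operators (treated informally in the next section) are commonly approximated by discretizing the operator on a quadrature rule and taking an ordinary matrix determinant. The sketch below is our own illustration of that standard idea, not a construction from this article; the kernel is a toy example chosen so the answer is known in closed form.

import numpy as np

def fredholm_det(kernel, n=40):
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1].
    x, w = np.polynomial.legendre.leggauss(n)
    x, w = (x + 1) / 2, w / 2
    sw = np.sqrt(w)
    K = kernel(x[:, None], x[None, :])
    # det(I + K) approximated by the determinant of the symmetrized
    # discretization I + sqrt(W) K sqrt(W).
    return np.linalg.det(np.eye(n) + sw[:, None] * K * sw[None, :])

# Rank-one check: K(x, y) = x*y gives det(I + K) = 1 + integral of x^2 = 4/3.
print(fredholm_det(lambda x, y: x * y))  # ~1.3333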
Informal presentation for the case of integral operators The section below provides an informal definition for the Fredholm determinant of when the trace-class operator is an integral operator given by a kernel . A proper definition requires a presentation showing that each of the manipulations are well-defined, convergent, and so on, for the given situation for which the Fredholm determinant is contemplated. Since the kernel may be defined for a large variety of Hilbert spaces and Banach spaces, this is a non-trivial exercise. The Fredholm determinant may be defined as where is an integral operator. The trace of the operator and its alternating powers is given in terms of the kernel by and and in general The trace is well-defined for these kernels, since these are trace-class or nuclear operators. Applications The Fredholm determinant was used by physicist John A. Wheeler (1937, Phys. Rev. 52:1107) to help provide mathematical description of the wavefunction for a composite nucleus composed of antisymmetrized combination of partial wavefunctions by the method of Resonating Group Structure. This method corresponds to the various possible ways of distributing the energy of neutrons and protons into fundamental boson and fermion nucleon cluster groups or building blocks such as the alpha-particle, helium-3, deuterium, triton, di-neutron, etc. When applied to the method of Resonating Group Structure for beta and alpha stable isotopes, use of the Fredholm determinant: (1) determines the energy values of the composite system, and (2) determines scattering and disintegration cross sections. The method of Resonating Group Structure of Wheeler provides the theoretical bases for all subsequent Nucleon Cluster Models and associated cluster energy dynamics for all light and heavy mass isotopes (see review of Cluster Models in physics in N.D. Cook, 2006). References Determinants Fredholm theory Hilbert spaces Topological tensor products
Fredholm determinant
[ "Physics", "Engineering" ]
1,160
[ "Hilbert spaces", "Tensors", "Quantum mechanics", "Topological tensor products" ]
2,685,968
https://en.wikipedia.org/wiki/Attosecond%20physics
Attosecond physics, also known as attophysics, or more generally attosecond science, is a branch of physics that deals with light-matter interaction phenomena wherein attosecond (10−18 s) photon pulses are used to unravel dynamical processes in matter with unprecedented time resolution. Attosecond science mainly employs pump–probe spectroscopic methods to investigate the physical process of interest. Due to the complexity of this field of study, it generally requires a synergistic interplay between a state-of-the-art experimental setup and advanced theoretical tools to interpret the data collected from attosecond experiments. The main interests of attosecond physics are: Atomic physics: investigation of electron correlation effects, photo-emission delay and ionization tunneling. Molecular physics and molecular chemistry: role of electronic motion in molecular excited states (e.g. charge-transfer processes), light-induced photo-fragmentation, and light-induced electron transfer processes. Solid-state physics: investigation of exciton dynamics in advanced 2D materials, petahertz charge carrier motion in solids, spin dynamics in ferromagnetic materials. One of the primary goals of attosecond science is to provide advanced insights into the quantum dynamics of electrons in atoms, molecules and solids, with the long-term challenge of achieving real-time control of the electron motion in matter. The advent of broadband solid-state titanium-doped sapphire based (Ti:Sa) lasers (1986), chirped pulse amplification (CPA) (1988), spectral broadening of high-energy pulses (e.g. gas-filled hollow-core fiber via self-phase modulation) (1996), mirror-dispersion-controlled technology (chirped mirrors) (1994), and carrier-envelope offset stabilization (2000) enabled the creation of isolated attosecond light pulses (generated by the non-linear process of high harmonic generation in a noble gas) (2004, 2006), which gave birth to the field of attosecond science. The current world record for the shortest light pulse generated by human technology is 43 as. In 2022, Anne L'Huillier, Paul Corkum, and Ferenc Krausz were awarded the Wolf Prize in Physics for their pioneering contributions to ultrafast laser science and attosecond physics. This was followed by the 2023 Nobel Prize in Physics, where L'Huillier, Krausz and Pierre Agostini were recognized "for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter." Introduction Motivation The natural time scale of electron motion in atoms, molecules, and solids is the attosecond (1 as = 10−18 s). This fact is a direct consequence of quantum mechanics. For simplicity, consider a quantum particle in a superposition between the ground level, of energy ε0, and the first excited level, of energy ε1: with the coefficients chosen as the square roots of the quantum probabilities of observing the particle in the corresponding states. The two terms are the time-dependent ground and excited states, respectively, with ħ the reduced Planck constant. The expectation value of a generic Hermitian and symmetric operator A can be written as ⟨ψ(t)|A|ψ(t)⟩; as a consequence, the time evolution of this observable is: While the first two terms do not depend on time, the third, instead, does. This creates a dynamic for the observable with a characteristic time τ given by τ = 2πħ/(ε1 − ε0). As a consequence, for energy levels in the range of 10 eV, which is the typical electronic energy range in matter, the characteristic time of the dynamics of any associated physical observable is approximately 400 as.
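As a quick numerical check of this estimate (our own sketch, using CODATA constants):

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
eV = 1.602176634e-19    # joules per electronvolt

delta_e = 10 * eV       # typical electronic energy splitting
tau = 2 * math.pi * hbar / delta_e
print(f"tau = {tau * 1e18:.0f} as")  # ~414 as, i.e. roughly 400 attoseconds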
To measure the time evolution of such an observable, one needs to use a controlled tool, or a process, with an even shorter time duration that can interact with that dynamic. This is the reason why attosecond light pulses are used to disclose the physics of ultra-fast phenomena in the few-femtosecond and attosecond time domain. Generation of attosecond pulses To generate a traveling pulse with an ultrashort time duration, two key elements are needed: bandwidth and central wavelength of the electromagnetic wave. From Fourier analysis, the broader the available spectral bandwidth of a light pulse, the shorter, potentially, its time duration. There is, however, a lower limit on the minimum duration exploitable for a given pulse central wavelength. This limit is the optical cycle. Indeed, for a pulse centered in the low-frequency region, e.g. infrared (IR) at 800 nm, its minimum time duration is around T = λ/c ≈ 2.67 fs, where c is the speed of light; whereas, for a light field with central wavelength in the extreme ultraviolet (XUV) at 30 nm, the minimum duration is around 100 as. Thus, a smaller time duration requires the use of shorter, more energetic wavelengths, even down to the soft-X-ray (SXR) region. For this reason, standard techniques to create attosecond light pulses are based on radiation sources with broad spectral bandwidths and central wavelengths located in the XUV-SXR range. The most common sources that fit these requirements are free-electron lasers (FEL) and high harmonic generation (HHG) setups. Physical observables and experiments Once an attosecond light source is available, one has to drive the pulse towards the sample of interest and, then, measure its dynamics. The most suitable experimental observables to analyze the electron dynamics in matter are: Angular asymmetry in the velocity distribution of molecular photo-fragments. Quantum yield of molecular photo-fragments. XUV-SXR spectrum transient absorption. XUV-SXR spectrum transient reflectivity. Photo-electron kinetic energy distribution. Attosecond electron microscopy. The general strategy is to use a pump-probe scheme to "image" through one of the aforementioned observables the ultra-fast dynamics occurring in the material under investigation. Few-femtosecond IR-XUV/SXR attosecond pulse pump-probe experiments As an example, in a typical pump-probe experimental apparatus, an attosecond (XUV-SXR) pulse and an intense low-frequency infrared pulse with a time duration of a few to tens of femtoseconds are collinearly focused on the studied sample. At this point, by varying the delay of the attosecond pulse, which could be pump or probe depending on the experiment, with respect to the IR pulse (probe/pump), the desired physical observable is recorded. The subsequent challenge is to interpret the collected data and retrieve fundamental information on the hidden dynamics and quantum processes occurring in the sample. This can be achieved with advanced theoretical tools and numerical calculations. By exploiting this experimental scheme, several kinds of dynamics can be explored in atoms, molecules and solids; typically light-induced dynamics and out-of-equilibrium excited states within attosecond time resolution. Quantum mechanics foundations Attosecond physics typically deals with non-relativistic bounded particles and employs electromagnetic fields with a moderately high intensity. This fact allows one to set up a discussion in a non-relativistic and semi-classical quantum mechanics environment for light-matter interaction.
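Returning to the optical-cycle bound quoted above under "Generation of attosecond pulses", both figures follow from T = λ/c. A minimal sketch (ours):

c = 299792458.0  # speed of light, m/s

def optical_cycle(wavelength_m):
    # One optical cycle, the lower bound on pulse duration at that wavelength.
    return wavelength_m / c

print(f"{optical_cycle(800e-9) * 1e15:.2f} fs")  # IR, 800 nm -> ~2.67 fs
print(f"{optical_cycle(30e-9) * 1e18:.0f} as")   # XUV, 30 nm -> ~100 as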
Atoms: Resolution of the time-dependent Schrödinger equation in an electromagnetic field. The time evolution of a single electronic wave function in an atom is described by the Schrödinger equation (in atomic units): $i\frac{\partial}{\partial t}|\psi(t)\rangle = \hat{H}(t)|\psi(t)\rangle$, with $\hat{H}(t) = \hat{H}_0 + \hat{H}_{int}(t)$, where the light-matter interaction Hamiltonian, $\hat{H}_{int}(t)$, can be expressed in the length gauge, within the dipole approximation, as $\hat{H}_{int}(t) = -\hat{\mathbf{r}}\cdot\mathbf{E}(t)$, and $\hat{H}_0 = \frac{\hat{\mathbf{p}}^2}{2} + V(\hat{\mathbf{r}})$, where $V(\hat{\mathbf{r}})$ is the Coulomb potential of the atomic species considered; $\hat{\mathbf{p}}$ and $\hat{\mathbf{r}}$ are the momentum and position operators, respectively; and $\mathbf{E}(t)$ is the total electric field evaluated in the neighborhood of the atom. The formal solution of the Schrödinger equation is given by the propagator formalism: $|\psi(t)\rangle = \hat{U}(t, t_0)|\psi(t_0)\rangle$, where $|\psi(t_0)\rangle$ is the electron wave function at time $t_0$. This exact solution cannot be used for almost any practical purpose. However, it can be proved, using Dyson's equations, that the previous solution can also be written as:

$|\psi(t)\rangle = -i\int_{t_0}^{t} dt'\, \hat{U}(t, t')\,\hat{H}_{int}(t')\,\hat{U}_0(t', t_0)\,|\psi(t_0)\rangle \;+\; \hat{U}_0(t, t_0)\,|\psi(t_0)\rangle$

where $\hat{U}_0$ is the propagator generated by the bound (field-free) Hamiltonian $\hat{H}_0$ and $\hat{H}_{int}$ is the interaction Hamiltonian. The formal solution, previously written compactly through the full propagator, can now be regarded as a superposition of different quantum paths (or quantum trajectories), each of them with a peculiar interaction time $t'$ with the electric field. In other words, each quantum path is characterized by three steps: An initial evolution without the electromagnetic field, described by the factor $\hat{U}_0(t', t_0)$ in the integrand. Then, a "kick" from the electromagnetic field, $\hat{H}_{int}(t')$, that "excites" the electron; this event occurs at an arbitrary time $t'$ that univocally characterizes the quantum path. A final evolution driven by both the field and the Coulomb potential, given by $\hat{U}(t, t')$. In parallel, there is also a quantum path that does not perceive the field at all; this trajectory is indicated by the second term above. This process is entirely time-reversible, i.e. it can also occur in the opposite order. This equation is not straightforward to handle. However, physicists use it as the starting point for numerical calculations, more advanced discussions or several approximations. For strong-field interaction problems, where ionization may occur, one can imagine projecting it onto a certain continuum state (unbound state or free state) $|\mathbf{p}\rangle$, of momentum $\mathbf{p}$, so that:

$a(\mathbf{p}, t) = \langle\mathbf{p}|\psi(t)\rangle = -i\int_{t_0}^{t} dt'\, \langle\mathbf{p}|\hat{U}(t, t')\,\hat{H}_{int}(t')\,\hat{U}_0(t', t_0)|\psi(t_0)\rangle \;+\; \langle\mathbf{p}|\hat{U}_0(t, t_0)|\psi(t_0)\rangle$

where $a(\mathbf{p}, t)$ is the probability amplitude of finding the electron, at a certain time $t$, in the continuum state $|\mathbf{p}\rangle$. If this probability amplitude is greater than zero, the electron is photoionized. For the majority of applications, the second term is not considered, and only the first one is used in discussions, hence:

$a(\mathbf{p}, t) = -i\int_{t_0}^{t} dt'\, \langle\mathbf{p}|\hat{U}(t, t')\,\hat{H}_{int}(t')\,\hat{U}_0(t', t_0)|\psi(t_0)\rangle$

This expression is also known as the time-reversed S-matrix amplitude, and it gives the probability of photoionization by a generic time-varying electric field.

Strong field approximation (SFA): Strong field approximation (SFA), or Keldysh-Faisal-Reiss theory, is a physical model, initiated in 1964 by the Russian physicist Keldysh, that is currently used to describe the behavior of atoms (and molecules) in intense laser fields. SFA is the starting theory for discussing both high harmonic generation and attosecond pump-probe interaction with atoms. The main assumption made in SFA is that the free-electron dynamics is dominated by the laser field, while the Coulomb potential is regarded as a negligible perturbation. This reshapes the amplitude above into:

$a(\mathbf{p}, t) = -i\int_{t_0}^{t} dt'\, \langle\mathbf{p}|\hat{U}_V(t, t')\,\hat{H}_{int}(t')\,\hat{U}_0(t', t_0)|\psi(t_0)\rangle$

where $\hat{U}_V$ is the propagator generated by the Volkov Hamiltonian, $\hat{H}_V = \frac{1}{2}\left[\hat{\mathbf{p}} + \mathbf{A}(t)\right]^2$, here expressed for simplicity in the velocity gauge, with $\mathbf{E}(t) = -\frac{\partial\mathbf{A}(t)}{\partial t}$, where $\mathbf{A}(t)$ is the electromagnetic vector potential. At this point, to keep the discussion at its basic level, let us consider an atom with a single energy level, ionization energy $I_p$, populated by a single electron (single active electron approximation).
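As an aside to the Keldysh theory just introduced, the interaction regime of a given experiment is commonly classified with the Keldysh parameter. The sketch below uses two standard strong-field expressions, the ponderomotive energy $U_p \approx 9.33\times10^{-14}\, I[\mathrm{W/cm^2}]\,\lambda^2[\mu\mathrm{m}^2]$ eV and $\gamma = \sqrt{I_p/2U_p}$; the chosen intensity and the argon ionization potential are illustrative assumptions, not values taken from the text:

```python
import math

def ponderomotive_energy_ev(intensity_w_cm2: float, wavelength_um: float) -> float:
    """Cycle-averaged quiver energy of a free electron in the laser field, in eV."""
    return 9.33e-14 * intensity_w_cm2 * wavelength_um**2

def keldysh_parameter(ip_ev: float, up_ev: float) -> float:
    """gamma < 1: tunneling ionization dominates; gamma > 1: multiphoton regime."""
    return math.sqrt(ip_ev / (2.0 * up_ev))

up = ponderomotive_energy_ev(1e14, 0.8)   # 800 nm pulse at 1e14 W/cm^2 (assumed)
gamma = keldysh_parameter(15.76, up)      # argon, Ip = 15.76 eV (assumed)
print(f"Up = {up:.2f} eV, gamma = {gamma:.2f}")  # Up ~ 6.0 eV, gamma ~ 1.1
```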
We can take the initial time of the wave-function dynamics as $t_0 = -\infty$, and we can assume that initially the electron is in the atomic ground state $|\psi_0\rangle$, so that $|\psi(t_0)\rangle = |\psi_0\rangle$ and $\hat{U}_0(t', t_0)|\psi(t_0)\rangle = e^{iI_p t'}|\psi_0\rangle$ (up to a constant phase). Moreover, we can regard the continuum states as plane-wave states, $\langle\mathbf{r}|\mathbf{p}\rangle \propto e^{i\mathbf{p}\cdot\mathbf{r}}$. This is a rather simplified assumption; a more reasonable choice would have been to use the exact atomic scattering states as continuum states. The time evolution of simple plane-wave states with the Volkov Hamiltonian is given by:

$\hat{U}_V(t, t')\,|\mathbf{p} + \mathbf{A}(t')\rangle = e^{-\frac{i}{2}\int_{t'}^{t}\left[\mathbf{p} + \mathbf{A}(t'')\right]^2 dt''}\,|\mathbf{p} + \mathbf{A}(t)\rangle$

here, for consistency with the length-gauge expression above, the evolution has already been properly converted into the length gauge. As a consequence, the final momentum distribution of a single electron in a single-level atom, with ionization potential $I_p$, is expressed as:

$a(\mathbf{p}) = -i\int_{-\infty}^{+\infty} dt'\; \mathbf{E}(t')\cdot\mathbf{d}\!\left[\mathbf{p} + \mathbf{A}(t')\right]\, e^{-iS(\mathbf{p},\, t')}$

where $\mathbf{d}(\mathbf{v}) = \langle\mathbf{v}|\hat{\mathbf{r}}|\psi_0\rangle$ is the dipole expectation value (or transition dipole moment), and $S(\mathbf{p}, t') = \int_{t'}^{+\infty} dt''\left(\frac{\left[\mathbf{p} + \mathbf{A}(t'')\right]^2}{2} + I_p\right)$ is the semiclassical action; the momentum distribution is $|a(\mathbf{p})|^2$. This result is the basic tool to understand phenomena like: The high harmonic generation process, which is typically the result of strong-field interaction of noble gases with an intense low-frequency pulse. Attosecond pump-probe experiments with simple atoms. The debate on tunneling time.

Weak attosecond pulse-strong-IR-field-atom interactions: Attosecond pump-probe experiments with simple atoms are a fundamental tool to measure the time duration of an attosecond pulse and to explore several quantum properties of matter. This kind of experiment can be easily described within the strong field approximation by exploiting the result above, as discussed below. As a simple model, consider the interaction between a single active electron in a single-level atom and two fields: an intense femtosecond infrared (IR) pulse $\mathbf{E}_{IR}(t)$, and a weak attosecond pulse (centered in the extreme ultraviolet (XUV) region) $\mathbf{E}_{XUV}(t)$. Then, substituting the total field $\mathbf{E}(t) = \mathbf{E}_{IR}(t) + \mathbf{E}_{XUV}(t)$, with vector potential $\mathbf{A} = \mathbf{A}_{IR} + \mathbf{A}_{XUV}$, into the ionization amplitude splits it as $a(\mathbf{p}) = a_{XUV}(\mathbf{p}) + a_{IR}(\mathbf{p})$. At this point, we can regard these two contributions as direct ionization and strong field ionization (multiphoton regime), respectively. Typically, these two terms are relevant in different energetic regions of the continuum. Consequently, for typical experimental conditions, the latter process is disregarded, and only direct ionization from the attosecond pulse is considered. Then, since the attosecond pulse is weaker than the infrared one, $|\mathbf{A}_{XUV}| \ll |\mathbf{A}_{IR}|$ holds, and $\mathbf{A}_{XUV}$ is typically neglected in the amplitude. In addition, we can rewrite the attosecond pulse as a delayed function with respect to the IR field, $\mathbf{E}_{XUV}(t - \tau)$. Therefore, the probability distribution, $P(\mathbf{p}, \tau)$, of finding an electron ionized in the continuum with momentum $\mathbf{p}$, after the interaction has occurred (at $t \to +\infty$), in a pump-probe experiment with an intense IR pulse and a delayed attosecond XUV pulse, is given by:

$P(\mathbf{p}, \tau) = \left| -i\int_{-\infty}^{+\infty} dt\; \mathbf{E}_{XUV}(t - \tau)\cdot\mathbf{d}\!\left[\mathbf{p} + \mathbf{A}_{IR}(t)\right]\, e^{-iS(\mathbf{p},\, t)} \right|^2$

with $S(\mathbf{p}, t) = \int_{t}^{+\infty} dt'\left(\frac{\left[\mathbf{p} + \mathbf{A}_{IR}(t')\right]^2}{2} + I_p\right)$.

This expression describes the photoionization phenomenon of two-color interaction (XUV-IR) with a single-level atom and a single active electron. This peculiar result can be regarded as a quantum interference process between all the possible ionization paths started by the delayed XUV attosecond pulse, followed by motion in the continuum states driven by the strong IR field. The resulting 2D photo-electron (momentum, or equivalently energy, vs delay) distribution is called a streaking trace.

Techniques: Here are listed and discussed some of the most common techniques and approaches pursued in attosecond research centers.

Metrology with photo-electron spectroscopy (FROG-CRAB): A daily challenge in attosecond science is to characterize the temporal properties of the attosecond pulses used in any pump-probe experiment with atoms, molecules or solids.
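A toy classical picture of the streaking trace can be sketched in a few lines: in atomic units, an electron released at the XUV arrival time $\tau$ with field-free momentum $p_0$ acquires the final drift momentum $p \approx p_0 - A_{IR}(\tau)$, so scanning the delay maps the IR vector potential onto the photoelectron energy. All pulse parameters below are arbitrary illustrative choices, and the model deliberately ignores the dipole matrix element and the quantum phase retained in the full expression above:

```python
import numpy as np

# Atomic units throughout; parameters are illustrative, not from any experiment.
OMEGA_IR = 0.057   # IR angular frequency (~800 nm)
A0 = 0.4           # IR vector-potential amplitude
IP = 0.5           # ionization potential (hydrogen-like, 13.6 eV)
E_XUV = 2.0        # XUV photon energy (~54 eV)
P0 = np.sqrt(2.0 * (E_XUV - IP))  # field-free photoelectron momentum

def a_ir(t: float) -> float:
    """Few-cycle IR vector potential with a Gaussian envelope."""
    return A0 * np.exp(-(t / 120.0) ** 2) * np.cos(OMEGA_IR * t)

for tau in np.linspace(-200.0, 200.0, 9):    # XUV-IR delay scan
    p_final = P0 - a_ir(tau)                 # classical streaking shift
    energy_ev = 27.211 * p_final**2 / 2.0    # convert hartree -> eV
    print(f"delay {tau:7.1f} a.u. -> E = {energy_ev:6.2f} eV")
```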
The most used technique is based on frequency-resolved optical gating for complete reconstruction of attosecond bursts (FROG-CRAB). The main advantage of this technique is that it allows the well-established frequency-resolved optical gating (FROG) technique, developed in 1991 for picosecond-femtosecond pulse characterization, to be extended to the attosecond field. Complete reconstruction of attosecond bursts (CRAB) is an extension of FROG, and it is based on the same idea for the field reconstruction. In other words, FROG-CRAB is based on the conversion of an attosecond pulse into an electron wave-packet that is freed in the continuum by atomic photoionization, as already described by the streaking expression above. The role of the low-frequency driving laser pulse (e.g. an infrared pulse) is to act as a gate for the temporal measurement. Then, by exploring different delays between the low-frequency pulse and the attosecond pulse, a streaking trace (or streaking spectrogram) can be obtained. This 2D spectrogram is later analyzed by a reconstruction algorithm with the goal of retrieving both the attosecond pulse and the IR pulse, with no need for prior knowledge of either of them. However, as the streaking expression pinpoints, the intrinsic limit of this technique is the knowledge of the atomic dipole properties, in particular of the atomic dipole quantum phase. The reconstruction of both the low-frequency field and the attosecond pulse from a streaking trace is typically achieved through iterative algorithms, such as: the principal component generalized projections algorithm (PCGPA); the Volkov transform generalized projection algorithm (VTGPA); the extended ptychographic iterative engine (ePIE). See also Femtochemistry Femtotechnology Ultrashort pulse Chirped pulse amplification Free-electron laser Attosecond chronoscopy References Further reading Quantum mechanics Atomic, molecular, and optical physics Articles containing video clips Time-resolved spectroscopy
Attosecond physics
[ "Physics", "Chemistry" ]
3,424
[ "Spectrum (physical sciences)", "Theoretical physics", "Quantum mechanics", "Time-resolved spectroscopy", "Atomic, molecular, and optical physics", "Spectroscopy" ]
2,686,262
https://en.wikipedia.org/wiki/Piezophile
A piezophile (from Greek "piezo-" for pressure and "-phile" for loving) is an organism with optimal growth under high hydrostatic pressure, i.e., an organism that has its maximum rate of growth at a hydrostatic pressure equal to or above 10 MPa, when tested over all permissible temperatures. Originally, the term barophile was used for these organisms, but since the prefix "baro-" stands for weight, the term piezophile was given preference. Like all definitions of extremophiles, the definition of piezophiles is anthropocentric: humans consider moderate values of hydrostatic pressure to be those around 1 atm (= 0.1 MPa = 14.7 psi), whereas those "extreme" pressures are the normal living conditions for those organisms. Hyperpiezophiles are organisms that have their maximum growth rate above 50 MPa (= 493 atm = 7,252 psi). Though high hydrostatic pressure has deleterious effects on organisms growing at atmospheric pressure, piezophiles, which are found solely in high-pressure habitats such as the deep sea, in fact need high pressures for their optimum growth. Often their growth can continue at much higher pressures (such as 100 MPa) than those tolerated by organisms that normally grow at low pressures. The first obligate piezophile found was a psychrophilic bacterium called Colwellia marinimaniae strain MT-41, isolated from a decaying amphipod, Hirondellea gigas, from the bottom of the Mariana Trench. The first thermophilic piezophilic archaeon, Pyrococcus yayanosii strain CH1, was isolated from the Ashadze site, a deep-sea hydrothermal vent. Strain MT-41 has an optimal growth pressure of 70 MPa at 2 °C and strain CH1 has an optimal growth pressure of 52 MPa at 98 °C. They are unable to grow at pressures lower than or equal to 20 MPa, and both can grow at pressures above 100 MPa. The current record for the highest hydrostatic pressure at which growth has been observed is 140 MPa, shown by Colwellia marinimaniae MTCD1. The term "obligate piezophile" refers to organisms that are unable to grow under lower hydrostatic pressures, such as 0.1 MPa. In contrast, piezotolerant organisms are those that have their maximum rate of growth at a hydrostatic pressure under 10 MPa, but that nevertheless are able to grow at lower rates under higher hydrostatic pressures. Most of the Earth's biosphere (in terms of volume) is subject to high hydrostatic pressure, and the piezosphere comprises the deep sea (at depths of 1,000 m and greater) plus the deep subsurface (which can extend up to 5,000 m beneath the seafloor or the continental surface). The deep sea has a mean temperature around 1 to 3 °C, and it is dominated by psychropiezophiles. In contrast, the deep subsurface and hydrothermal vents in the seafloor are dominated by thermopiezophiles that prosper at temperatures above 45 °C (113 °F). Although the study of nutrient acquisition and metabolism within the piezosphere is still in its infancy, it is understood that most of the organic matter present consists of refractory complex polymers from the euphotic zone. Both heterotrophic metabolism and autotrophic fixation are present within the piezosphere, and additional research suggests significant metabolism of iron-bearing minerals and carbon monoxide. Additional research is required to fully understand and characterize piezosphere metabolism. Piezophilic adaptations: High pressure has several effects on biological systems. The application of pressure shifts equilibria towards states occupying smaller volumes, changes intermolecular distances and affects conformations.
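Before turning to the adaptations themselves, the depth figures above can be tied to the pressure thresholds in the definition with the hydrostatic relation $P = \rho g h$. The sketch below is only a rough estimate: it assumes a typical seawater density, ignores compressibility, and uses an approximate depth for the deepest trench floor:

```python
RHO_SEAWATER = 1025.0   # kg/m^3, typical near-surface value (assumed)
G = 9.81                # m/s^2
P_ATM_MPA = 0.101       # surface atmospheric pressure, MPa

def pressure_mpa(depth_m: float) -> float:
    """Approximate absolute hydrostatic pressure at a given ocean depth, in MPa."""
    return P_ATM_MPA + RHO_SEAWATER * G * depth_m / 1e6

for depth in (1_000, 5_000, 11_000):   # piezosphere boundary ... deep trench floor
    print(f"{depth:6d} m -> {pressure_mpa(depth):6.1f} MPa")
# ~10 MPa at 1,000 m matches the >= 10 MPa threshold in the definition above.
```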
This also affects the functionality of cells. Piezophiles employ several mechanisms to adapt to these high hydrostatic pressures: they regulate gene expression according to pressure, and they also adapt their biomolecules to differences in pressure.

Nucleic acids: High pressure stabilizes hydrogen bonds and stacking interactions of DNA, and thus favours the double-stranded duplex structure of DNA. However, to carry out processes like DNA replication, transcription and translation, the transition to a single-stranded structure is necessary, and this becomes more difficult as high pressure increases the melting temperature, Tm. These processes may therefore face difficulties.

Cell membranes: When pressure increases, the fluidity of the cell membrane decreases, because volume restrictions change the conformation and packing of the lipids. This decreases the permeability of the cell membrane to water and other molecules. In response to fluctuations in their environment, piezophiles change their membrane structures. Piezophilic bacteria do so by varying their acyl chain length, by accumulating unsaturated fatty acids, and by accumulating specific polar headgroups and branched fatty acids. Piezophilic archaea synthesize archaeol- and caldarchaeol-based polar lipids and bipolar tetraether lipids, incorporate cyclopentane rings, and increase unsaturation.

Proteins: The macromolecules most affected by pressure are proteins. Just like lipids, they change their conformation and packing to accommodate changes in pressure. This affects their multimeric conformation, their stability and also the structure of their catalytic sites, which changes their functionality. In pressure-intolerant species, proteins tend to compact and unfold under high pressures as the overall volume is reduced. Piezophilic proteins, however, tend to have fewer and smaller void spaces overall, which mitigates pressure-induced compaction and unfolding. There are also changes in the various interactions between amino acids. In general, piezophilic proteins are very resistant to pressure.

Enzymes: Due to the functional nature of enzymes, piezophiles must maintain enzyme activity to survive. High pressures tend to favor enzymes with higher flexibility at the cost of lower stability. Additionally, piezophilic enzymes often have high absolute (distinct from temperature- or pressure-relative) and relative catalytic activity. This allows the enzymes to maintain sufficient activity even with decreases due to temperature or pressure effects. Furthermore, some piezophilic enzymes show increasing catalytic activity with increasing pressure, though this is not a generalization for all piezophilic enzymes.

Overall effect on cells: As a result of high pressure, several functions may be lost in organisms that are pressure-intolerant. Effects can include loss of flagellar motility and enzyme function, and thus of metabolism. High pressure can also lead to cell death due to modifications of cellular structure, and it can cause an imbalance in oxidation and reduction reactions, generating relatively high concentrations of reactive oxygen species (ROS). An increased complement of anti-oxidation genes and proteins is found in piezophiles to combat ROS, which often cause cellular damage. See also Extremophile Thermophile Psychrophile Archaea Bacteria References Aquatic ecology Bacteria
Piezophile
[ "Biology" ]
1,453
[ "Prokaryotes", "Ecosystems", "Bacteria", "Aquatic ecology", "Microorganisms" ]
2,686,401
https://en.wikipedia.org/wiki/Aerospace%20physiology
Aerospace physiology is the study of the effects of high altitudes on the body, such as different pressures and levels of oxygen. At different altitudes the body may react in different ways, for example by increasing cardiac output and producing more erythrocytes. These changes cause greater energy expenditure in the body, causing muscle fatigue, but this varies with the level of altitude.

Effects of altitude: The physics that affect the body in the sky or in space are different from those on the ground. For example, barometric pressure differs at different heights. At sea level, barometric pressure is 760 mmHg; at 3,048 m above sea level, barometric pressure is 523 mmHg; and at 15,240 m, barometric pressure is 87 mmHg. As the barometric pressure decreases, the atmospheric partial pressure of oxygen decreases also; this partial pressure is always about 20% of the total barometric pressure. At sea level, the alveolar partial pressure of oxygen is 104 mmHg; by 6,000 meters above sea level, it falls to about 40 mmHg in a non-acclimated person, but only to about 52 mmHg in an acclimated person, because alveolar ventilation increases more in the acclimated person. Aviation physiology also covers the effects on humans and animals exposed for long periods of time inside pressurized cabins. The other main issue with altitude is hypoxia, caused both by the lower barometric pressure and by the decrease in oxygen as the body rises. With exposure to higher altitudes, the alveolar carbon dioxide partial pressure (PCO2) decreases from 40 mmHg (sea level) to lower levels: in a fully acclimated person, ventilation increases about fivefold and the alveolar carbon dioxide partial pressure falls to as low as about 6 mmHg. At an altitude of 3,040 meters, arterial oxygen saturation remains about 90%; above this altitude, arterial oxygen saturation decreases rapidly, to as low as 70% at 6,000 m, and it decreases further at higher altitudes.

g-forces: g-forces are mostly experienced by the body during flight, especially high-speed flight and space travel. They include positive g-force, negative g-force and zero g-force, caused by simple acceleration, deceleration and centripetal acceleration. When an airplane turns, the centripetal force is given by $f = \frac{mv^2}{r}$. This indicates that if speed increases, the centripetal force increases in proportion to the square of the speed. When an aviator is subjected to positive g-force during acceleration, blood moves towards the inferior part of the body, meaning that if the g-force is elevated, blood pressure in the veins of the lower body increases. This means less blood reaches the heart, affecting its ability to function because of the decreased circulation. The effects of negative g-force can be more dangerous, producing hyperemia and also psychotic episodes. In space, g-forces are almost zero, which is called microgravity, meaning that the person floats in the interior of the vessel. This happens because gravity acts on the spaceship and on the body equally: both are pulled by the same accelerating forces and in the same direction.

Hypoxia (medical). General effects: Hypoxia occurs when the bloodstream lacks oxygen. In an aerospace environment, this occurs because there is little or no oxygen. The work capacity of the body is reduced, decreasing the performance of all muscles (skeletal and cardiac). The decrease in work capacity is related to the decreased rate of oxygen transport.
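The barometric pressures quoted at the start of this article can be checked against the International Standard Atmosphere model; the troposphere formula below uses the usual ISA constants and is only valid up to about 11 km, so the 87 mmHg figure at 15,240 m would need the separate stratosphere expression:

```python
P0_PA = 101_325.0                  # ISA sea-level pressure, Pa
MMHG_PER_PA = 760.0 / 101_325.0    # unit conversion

def isa_pressure_mmhg(altitude_m: float) -> float:
    """ISA troposphere barometric formula; valid only below ~11,000 m."""
    if altitude_m > 11_000.0:
        raise ValueError("troposphere formula is not valid above ~11 km")
    return P0_PA * (1.0 - 2.25577e-5 * altitude_m) ** 5.25588 * MMHG_PER_PA

for h in (0.0, 3_048.0):
    print(f"{h:7.0f} m -> {isa_pressure_mmhg(h):5.0f} mmHg")  # 760 and ~523 mmHg
```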
Some acute effects of hypoxia include dizziness, lassitude, mental fatigue, muscle fatigue and euphoria. These effects appear in a non-acclimated person starting at an altitude of 3,650 meters above sea level. They increase with altitude, can result in cramps or convulsions at an altitude of 5,500 meters, and end in coma at an altitude of about 7,000 meters.

Mountaineering disease: One hypoxia-related syndrome is mountaineering disease. A non-acclimated person who stays for a significant amount of time at high altitude can develop a very high erythrocyte count and hematocrit. Pulmonary arterial pressure increases even if the person is acclimated, with dilatation of the right side of the heart. Peripheral arterial pressure decreases, leading to congestive cardiac insufficiency, and to death if the exposure is long enough. These effects are produced by the excess of erythrocytes, which causes a significant increase in blood viscosity. This diminishes blood flow in the tissues, so oxygen delivery decreases. The hypoxia also causes vasoconstriction of the pulmonary arterioles, which overloads the right portion of the heart. The arteriolar spasm diverts much of the blood flow through alternative pulmonary vessels, producing a short circuit in the blood flow that leaves the blood poorly oxygenated. The person will recover if given oxygen or if taken back to low altitude. Mountaineering disease and pulmonary edema are most common in those who climb rapidly to a high altitude. The illness starts from a few hours up to two or three days after ascent to high altitude. There are two forms: acute cerebral edema and acute pulmonary edema. The first is caused by vasodilatation of the cerebral blood vessels produced by the hypoxia; the second is caused by vasoconstriction of the pulmonary arterioles, also caused by the hypoxia.

Adaptation to low oxygen environments: Hypoxia is the principal stimulus that increases the number of erythrocytes, raising the hematocrit from 40% up to 60% and the hemoglobin concentration in blood from 15 g/dl up to 20–21 g/dl. In addition, the blood volume increases by about 20%, producing an increase in total body hemoglobin of about 50% or more. A person who stays for a period of time at higher altitudes acclimates, so the altitude produces fewer effects on the body. Several mechanisms help with acclimation: an increase of pulmonary ventilation, higher erythrocyte levels, an increase of the pulmonary diffusion capacity, and increased vascularization of the peripheral tissues. Arterial chemoreceptors are stimulated by exposure to a low partial pressure of oxygen and hence increase alveolar ventilation, up to a maximum of about 1.65 times. Almost immediately, compensation for the higher altitude begins with an increase of pulmonary ventilation, eliminating large amounts of CO2; the carbon dioxide partial pressure is reduced and the pH of the body fluids increases. These actions inhibit the respiratory center of the brainstem, but later this inhibition disappears and the respiratory center responds to the stimulation of the peripheral chemoreceptors caused by the hypoxia, increasing ventilation up to six times. Cardiac output increases by up to 30% after a person rises to high altitude, but it decreases back toward normal levels as the hematocrit increases, so that the quantity of oxygen delivered to the peripheral tissues remains relatively normal. In addition, new capillary growth (angiogenesis) increases the vascularity of the peripheral tissues.
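The ventilation-driven pH shift just described, and the renal compensation described next, can be made quantitative with the textbook Henderson-Hasselbalch equation for the bicarbonate buffer; the specific bicarbonate and PCO2 values below are illustrative textbook numbers, not values taken from this article:

```python
import math

def plasma_ph(hco3_meq_l: float, pco2_mmhg: float) -> float:
    """Henderson-Hasselbalch: pH = 6.1 + log10([HCO3-] / (0.03 * PCO2))."""
    return 6.1 + math.log10(hco3_meq_l / (0.03 * pco2_mmhg))

print(f"{plasma_ph(24.0, 40.0):.2f}")  # ~7.40: normal sea-level values
print(f"{plasma_ph(24.0, 20.0):.2f}")  # ~7.70: acute hyperventilation at altitude
print(f"{plasma_ph(12.0, 20.0):.2f}")  # ~7.40: after renal bicarbonate excretion
```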
The kidneys respond to the low carbon dioxide partial pressure by decreasing the secretion of hydrogen ions and increasing the excretion of bicarbonate. This renal compensation for the respiratory alkalosis gradually reduces the plasma concentration of HCO3− and returns the plasma pH toward normal. The respiratory center then responds fully to the stimulation of the peripheral chemoreceptors produced by the hypoxia once the kidneys have compensated for the alkalosis. References External links Airman Education Programs: Aerospace Physiology Training FAA: Aircrew Health and Safety Videos Aviation medicine Physiology
Aerospace physiology
[ "Physics", "Biology" ]
1,574
[ "Spacetime", "Space", "Aerospace", "Physiology" ]
2,686,467
https://en.wikipedia.org/wiki/Tryptone
Tryptone is an assortment of peptides formed by the digestion of casein by the protease trypsin. Tryptone is commonly used in microbiology to produce lysogeny broth (LB) for the growth of E. coli and other microorganisms. It provides a source of amino acids for the growing bacteria. Tryptone is similar to casamino acids, both being digests of casein, but casamino acids can be produced by acid hydrolysis and typically contain only free amino acids and few peptide chains; tryptone, by contrast, is the product of an incomplete enzymatic hydrolysis, with some oligopeptides present. Tryptone is also a component of some germination media used in plant propagation. See also Albumose Trypticase soy agar References Peptides Microbiological media ingredients
Tryptone
[ "Chemistry" ]
178
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
2,686,574
https://en.wikipedia.org/wiki/A%20Hole%20in%20Texas
A Hole in Texas is a novel by Herman Wouk. Published in 2004 by Little, Brown and Company, the book describes the adventures of a high-energy physicist following the surprise announcement that a Chinese physicist (with whom he had a long-ago romance) had discovered the long-sought Higgs boson. Parts of the plot are based on the aborted Superconducting Super Collider project. Literary significance and reception: Kirkus Reviews said that A Hole in Texas was "Ingenious. Absolutely ingenious." Publishers Weekly called it "Occasionally corny but also playful, thoughtful and passionate". The journal Science said that Wouk "accurately depicts science as an often interactive and collegial enterprise", and that the novel offers a "refreshing contrast with the treatments of mad scientists that are so abundant in literature and popular culture." The review in Nature had some criticism, saying that the "scientific explanations are pat and usually come in the form of long e-mails that bog down the plot", that the discussions of the Chinese people "verge on racism", and that the book's ending "falls flat". Notes 2004 American novels Novels about NASA Novels by Herman Wouk Novels set in Texas Works about particle physics
A Hole in Texas
[ "Physics" ]
258
[ "Works about particle physics", "Particle physics" ]
2,687,105
https://en.wikipedia.org/wiki/Lithium%20fluoride
Lithium fluoride is an inorganic compound with the chemical formula LiF. It is a colorless solid that transitions to white with decreasing crystal size. Its structure is analogous to that of sodium chloride, but it is much less soluble in water. It is mainly used as a component of molten salts. Partly because Li and F are both light elements, and partly because fluorine is highly reactive, formation of LiF from the elements releases one of the highest energies per mass of reactants, second only to that of BeO.

Manufacturing: LiF is prepared from lithium hydroxide or lithium carbonate with hydrogen fluoride.

Applications. Precursor to lithium hexafluorophosphate for batteries: Lithium fluoride is reacted with hydrogen fluoride (HF) and phosphorus pentachloride to make lithium hexafluorophosphate (LiPF6), an ingredient in lithium-ion battery electrolytes. Lithium fluoride alone does not absorb hydrogen fluoride to form a bifluoride salt.

In molten salts: Fluorine is produced by the electrolysis of molten potassium bifluoride. This electrolysis proceeds more efficiently when the electrolyte contains a few percent of LiF, possibly because it facilitates formation of a Li-C-F interface on the carbon electrodes. A useful molten salt, FLiNaK, consists of a mixture of LiF together with sodium fluoride and potassium fluoride. The primary coolant for the Molten-Salt Reactor Experiment was FLiBe (66 mol% LiF, 33 mol% BeF2).

Optics: Because of the large band gap of LiF, its crystals are transparent to short-wavelength ultraviolet radiation, more so than any other material. LiF is therefore used in specialized optics for the vacuum ultraviolet spectrum. (See also magnesium fluoride.) Lithium fluoride is also used as a diffracting crystal in X-ray spectrometry.

Radiation detectors: It is also used as a means to record ionizing radiation exposure from gamma rays, beta particles, and neutrons (indirectly, using the (n,alpha) nuclear reaction) in thermoluminescent dosimeters. 6LiF nanopowder enriched to 96% has been used as the neutron-reactive backfill material for microstructured semiconductor neutron detectors (MSND).

Nuclear reactors: Lithium fluoride (highly enriched in the common isotope lithium-7) forms the basic constituent of the preferred fluoride salt mixture used in liquid-fluoride nuclear reactors. Typically lithium fluoride is mixed with beryllium fluoride to form a base solvent (FLiBe), into which fluorides of uranium and thorium are introduced. Lithium fluoride is exceptionally chemically stable, and LiF/BeF2 mixtures (FLiBe) have low melting points and the best neutronic properties of fluoride salt combinations appropriate for reactor use. MSRE used two different mixtures in the two cooling circuits.

Cathode for PLEDs and OLEDs: Lithium fluoride is widely used in PLEDs and OLEDs as a coupling layer to enhance electron injection. The thickness of the LiF layer is usually around 1 nm. The dielectric constant (or relative permittivity, ε) of LiF is 9.0.

Natural occurrence: Naturally occurring lithium fluoride is known as the extremely rare mineral griceite. References Lithium compounds Fluorides Alkali metal fluorides Optical materials Crystals Metal halides Rock salt crystal structure
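The "energy per mass of reactants" claim in the opening paragraph can be illustrated with a back-of-the-envelope calculation; the standard enthalpies of formation below are approximate literature values supplied here as assumptions (tables differ slightly), not figures from this article:

```python
# Approximate standard enthalpies of formation (kJ/mol) and molar masses (g/mol).
# For formation from the elements, product mass equals reactant mass, so the
# per-gram figures apply to the reactants as well.
COMPOUNDS = {
    "LiF":  (-616.0, 25.94),
    "BeO":  (-609.0, 25.01),
    "NaCl": (-411.0, 58.44),   # ordinary rock salt, for contrast
}

for name, (dh_kj_mol, molar_mass) in COMPOUNDS.items():
    specific = -dh_kj_mol / molar_mass   # energy released per gram formed
    print(f"{name:4s}: {specific:5.1f} kJ/g")
# BeO slightly exceeds LiF (~24.4 vs ~23.7 kJ/g), consistent with
# the statement that LiF is "second only to BeO".
```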
Lithium fluoride
[ "Physics", "Chemistry", "Materials_science" ]
714
[ "Inorganic compounds", "Salts", "Materials", "Optical materials", "Crystallography", "Crystals", "Metal halides", "Fluorides", "Matter" ]
2,687,346
https://en.wikipedia.org/wiki/Interferon%20gamma
Interferon gamma (IFNG or IFN-γ) is a dimerized soluble cytokine that is the only member of the type II class of interferons. The existence of this interferon, which early in its history was known as immune interferon, was described by E. F. Wheelock as a product of human leukocytes stimulated with phytohemagglutinin, and by others as a product of antigen-stimulated lymphocytes. It was also shown to be produced in human lymphocytes, or in tuberculin-sensitized mouse peritoneal lymphocytes challenged with PPD (as used in the Mantoux test); the resulting supernatants were shown to inhibit growth of vesicular stomatitis virus. Those reports also contained the basic observation underlying the now widely employed interferon gamma release assay used to test for tuberculosis. In humans, the IFNG protein is encoded by the IFNG gene. Through cell signaling, interferon gamma plays a role in regulating the immune response of its target cell. A key signaling pathway that is activated by type II IFN is the JAK-STAT signaling pathway. IFNG plays an important role in both innate and adaptive immunity. Type II IFN is primarily secreted by CD4+ T helper 1 (Th1) cells, natural killer (NK) cells, and CD8+ cytotoxic T cells. The expression of type II IFN is upregulated and downregulated by cytokines. By activating signaling pathways in cells such as macrophages, B cells, and CD8+ cytotoxic T cells, it is able to promote inflammation, antiviral or antibacterial activity, and cell proliferation and differentiation. Type II IFN is serologically different from type I interferon, binds to different receptors, and is encoded by a separate chromosomal locus. Type II IFN has played a role in the development of cancer immunotherapy treatments due to its ability to prevent tumor growth.

Function: IFNG, or type II interferon, is a cytokine that is critical for innate and adaptive immunity against viral, some bacterial, and protozoan infections. IFNG is an important activator of macrophages and an inducer of major histocompatibility complex class II molecule expression. Aberrant IFNG expression is associated with a number of autoinflammatory and autoimmune diseases. The importance of IFNG in the immune system stems in part from its ability to inhibit viral replication directly, and most importantly from its immunostimulatory and immunomodulatory effects. IFNG is produced predominantly by natural killer (NK) cells and natural killer T (NKT) cells as part of the innate immune response, and by CD4 Th1 and CD8 cytotoxic T lymphocyte (CTL) effector T cells once antigen-specific immunity develops as part of the adaptive immune response. IFNG is also produced by non-cytotoxic innate lymphoid cells (ILC), a family of immune cells first discovered in the early 2010s. The primary cells that secrete type II IFN are CD4+ T helper 1 (Th1) cells, natural killer (NK) cells, and CD8+ cytotoxic T cells. It can also be secreted, to a lesser degree, by antigen-presenting cells (APCs) such as dendritic cells (DCs), macrophages (MΦs), and B cells. Type II IFN expression is upregulated by the production of interleukin cytokines, such as IL-12, IL-15 and IL-18, as well as by type I interferons (IFN-α and IFN-β). Meanwhile, IL-4, IL-10, transforming growth factor-beta (TGF-β) and glucocorticoids are known to downregulate type II IFN expression. Type II IFN is a cytokine, meaning it functions by signaling to other cells in the immune system and influencing their immune response. There are many immune cells type II IFN acts on.
Some of its main functions are to induce IgG isotype switching in B cells; upregulate major histocompatibility complex (MHC) class II expression on APCs; induce CD8+ cytotoxic T cell differentiation, activation, and proliferation; and activate macrophages. In macrophages, type II IFN stimulates IL-12 expression. IL-12 in turn promotes the secretion of IFNG by NK cells and Th1 cells, and it signals naive T helper cells (Th0) to differentiate into Th1 cells.

Structure: The IFNG monomer consists of a core of six α-helices, numbered 1 to 6, and an extended unfolded sequence in the C-terminal region. The biologically active dimer is formed by anti-parallel interlocking of the two monomers.

Receptor binding: Cellular responses to IFNG are activated through its interaction with a heterodimeric receptor consisting of interferon gamma receptor 1 (IFNGR1) and interferon gamma receptor 2 (IFNGR2). IFN-γ binding to the receptor activates the JAK-STAT pathway. Activation of the JAK-STAT pathway induces upregulation of interferon-stimulated genes (ISGs), including MHC II. IFNG also binds to the glycosaminoglycan heparan sulfate (HS) at the cell surface. However, in contrast to many other heparan sulfate binding proteins, where binding promotes biological activity, the binding of IFNG to HS inhibits its biological activity. Structural models of IFNG are commonly shortened at their C-termini by 17 amino acids: full-length IFNG is 143 amino acids long, while such models are 126 amino acids long. Affinity for heparan sulfate resides solely within the deleted sequence of 17 amino acids. Within this sequence of 17 amino acids lie two clusters of basic amino acids, termed D1 and D2, respectively. Heparan sulfate interacts with both of these clusters. In the absence of heparan sulfate, the presence of the D1 sequence increases the rate at which IFNG-receptor complexes form. Interactions between the D1 cluster of amino acids and the receptor may be the first step in complex formation. By binding to D1, HS may compete with the receptor and prevent active receptor complexes from forming. The biological significance of heparan sulfate's interaction with IFNG is unclear; however, binding of the D1 cluster to HS may protect it from proteolytic cleavage.

Signaling: IFNG binds to the type II cell-surface receptor, also known as the IFN gamma receptor (IFNGR), which is part of the class II cytokine receptor family. The IFNGR is composed of two subunits: IFNGR1 and IFNGR2. IFNGR1 is associated with JAK1 and IFNGR2 is associated with JAK2. Upon IFNG binding the receptor, IFNGR1 and IFNGR2 undergo conformational changes that result in the autophosphorylation and activation of JAK1 and JAK2. This leads to a signaling cascade and eventual transcription of target genes. The expression of 236 different genes has been linked to type II IFN-mediated signaling. The proteins expressed by type II IFN-mediated signaling are primarily involved in promoting inflammatory immune responses and regulating other cell-mediated immune responses, such as apoptosis, intracellular IgG trafficking, cytokine signaling and production, hematopoiesis, and cell proliferation and differentiation.
JAK-STAT pathway: One key pathway triggered by IFNG binding to IFNGRs is the Janus kinase and signal transducer and activator of transcription pathway, more commonly referred to as the JAK-STAT pathway. In the JAK-STAT pathway, activated JAK1 and JAK2 proteins regulate the phosphorylation of tyrosine in STAT1 transcription factors. The tyrosines are phosphorylated at a very specific location, allowing activated STAT1 proteins to interact with each other and come together to form STAT1-STAT1 homodimers. The STAT1-STAT1 homodimers can then enter the cell nucleus. They then initiate transcription by binding to gamma interferon activation site (GAS) elements, which are located in the promoter region of interferon-stimulated genes (ISGs) that express antiviral effector proteins, as well as positive and negative regulators of type II IFN signaling pathways. The JAK proteins also lead to the activation of phosphatidylinositol 3-kinase (PI3K). PI3K leads to the activation of protein kinase C delta type (PKC-δ), which phosphorylates the amino acid serine in STAT1 transcription factors. The phosphorylation of the serine in STAT1-STAT1 homodimers is essential for the full transcription process to occur.

Other signaling pathways: Other signaling pathways that are triggered by IFNG are the mTOR signaling pathway, the MAPK signaling pathway, and the PI3K/AKT signaling pathway.

Biological activity: IFNG is secreted by T helper cells (specifically, Th1 cells), cytotoxic T cells (TC cells), macrophages, mucosal epithelial cells and NK cells. IFNG is both an important autocrine signal for professional APCs in the early innate immune response, and an important paracrine signal in the adaptive immune response. The expression of IFNG is induced by the cytokines IL-12, IL-15, IL-18, and type I IFN. IFNG is the only type II interferon and it is serologically distinct from type I interferons; it is acid-labile, while the type I variants are acid-stable. IFNG has antiviral, immunoregulatory, and anti-tumor properties. It alters transcription in up to 30 genes, producing a variety of physiological and cellular responses. Among the effects are:
Promotes NK cell activity.
Increases antigen presentation and lysosome activity of macrophages.
Activates inducible nitric oxide synthase (iNOS).
Induces the production of IgG2a and IgG3 from activated plasma B cells.
Causes normal cells to increase expression of class I MHC molecules, as well as class II MHC on antigen-presenting cells, specifically through induction of antigen-processing genes, including subunits of the immunoproteasome (MECL1, LMP2, LMP7), as well as TAP and ERAAP, in addition possibly to the direct upregulation of MHC heavy chains and B2-microglobulin itself.
Promotes adhesion and binding required for leukocyte migration.
Induces the expression of intrinsic defense factors; for example, with respect to retroviruses, relevant genes include TRIM5alpha, APOBEC, and Tetherin, representing directly antiviral effects.
Primes alveolar macrophages against secondary bacterial infections.
IFNG is the primary cytokine that defines Th1 cells: Th1 cells secrete IFNG, which in turn causes more undifferentiated CD4+ cells (Th0 cells) to differentiate into Th1 cells, representing a positive feedback loop, while suppressing Th2 cell differentiation. (Equivalent defining cytokines for other cells include IL-4 for Th2 cells and IL-17 for Th17 cells.) NK cells and CD8+ cytotoxic T cells also produce IFNG.
IFNG suppresses osteoclast formation by rapidly degrading the RANK adaptor protein TRAF6 in the RANK-RANKL signaling pathway, which otherwise stimulates the production of NF-κB.

Activity in granuloma formation: A granuloma is the body's way of dealing with a substance it cannot remove or sterilize. Infectious causes of granulomas (infections are typically the most common cause of granulomas) include tuberculosis, leprosy, histoplasmosis, cryptococcosis, coccidioidomycosis, blastomycosis, and toxoplasmosis. Examples of non-infectious granulomatous diseases are sarcoidosis, Crohn's disease, berylliosis, giant-cell arteritis, granulomatosis with polyangiitis, eosinophilic granulomatosis with polyangiitis, pulmonary rheumatoid nodules, and aspiration of food and other particulate material into the lung. The infectious pathophysiology of granulomas is discussed primarily here. The key association between IFNG and granulomas is that IFNG activates macrophages so that they become more powerful in killing intracellular organisms. Activation of macrophages by IFNG from Th1 helper cells in mycobacterial infections allows the macrophages to overcome the inhibition of phagolysosome maturation caused by mycobacteria (a strategy the mycobacteria use to stay alive inside macrophages). The first steps in IFNG-induced granuloma formation are activation of Th1 helper cells by macrophages releasing IL-1 and IL-12 in the presence of intracellular pathogens, and presentation of antigens from those pathogens. Next the Th1 helper cells aggregate around the macrophages and release IFNG, which activates the macrophages. Further activation of macrophages causes a cycle of further killing of intracellular bacteria, and further presentation of antigens to Th1 helper cells with further release of IFNG. Finally, macrophages surround the Th1 helper cells and become fibroblast-like cells walling off the infection.

Activity during pregnancy: Uterine natural killer (NK) cells secrete high levels of chemoattractants, such as IFNG in mice. IFNG dilates and thins the walls of maternal spiral arteries to enhance blood flow to the implantation site. This remodeling aids in the development of the placenta as it invades the uterus in its quest for nutrients. IFNG knockout mice fail to initiate normal pregnancy-induced modification of decidual arteries. These models display abnormally low numbers of decidual cells or necrosis of the decidua. In humans, elevated levels of IFNG have been associated with increased risk of miscarriage. Correlation studies have observed high IFNG levels in women with a history of spontaneous miscarriage, when compared to women with no such history. Additionally, low IFNG levels are associated with women who successfully carry to term. It is possible that IFNG is cytotoxic to trophoblasts, which leads to miscarriage. However, causal research on the relationship between IFNG and miscarriage has not been performed due to ethical constraints.

Production: Recombinant human IFNG, as an expensive biopharmaceutical, has been expressed in different expression systems, including prokaryotic, protozoan, fungal (yeast), plant, insect and mammalian cells. Human IFNG is commonly expressed in Escherichia coli, marketed as ACTIMMUNE®; however, the resulting product of the prokaryotic expression system is not glycosylated and has a short half-life in the bloodstream after injection. The purification process from the bacterial expression system is also very costly.
Other expression systems, such as Pichia pastoris, have not shown satisfactory results in terms of yields.

Therapeutic use: Interferon gamma 1b is approved by the U.S. Food and Drug Administration to treat chronic granulomatous disease (CGD) and osteopetrosis. The mechanism by which IFNG benefits CGD is by enhancing the efficacy of neutrophils against catalase-positive bacteria through correction of the patients' oxidative metabolism. It was not approved to treat idiopathic pulmonary fibrosis (IPF). In 2002, the manufacturer InterMune issued a press release saying that phase III data demonstrated a survival benefit in IPF and reduced mortality by 70% in patients with mild to moderate disease. The U.S. Department of Justice charged that the release contained false and misleading statements. InterMune's chief executive, Scott Harkonen, was accused of manipulating the trial data, was convicted in 2009 of wire fraud, and was sentenced to fines and community service. Harkonen appealed his conviction to the U.S. Court of Appeals for the Ninth Circuit, and lost. Harkonen was granted a full pardon on January 20, 2021. Preliminary research on the role of IFNG in treating Friedreich's ataxia (FA), conducted by the Children's Hospital of Philadelphia, found no beneficial effects of short-term (<6 months) treatment. However, researchers in Turkey have reported significant improvements in patients' gait and stance after 6 months of treatment. Although not officially approved, interferon gamma has also been shown to be effective in treating patients with moderate to severe atopic dermatitis. Specifically, recombinant IFNG therapy has shown promise in patients with lowered IFNG expression, such as those with a predisposition to herpes simplex virus infection, and in pediatric patients.

Potential use in immunotherapy: IFNG induces an anti-proliferative state in cancer cells, while upregulating MHC I and MHC II expression, which increases immunorecognition and removal of pathogenic cells. IFNG also reduces metastasis in tumors by upregulating fibronectin, which negatively impacts tumor architecture. Increased IFNG mRNA levels in tumors at diagnosis have been associated with better responses to immunotherapy.

Cancer immunotherapy: The goal of cancer immunotherapy is to trigger an immune response by the patient's immune cells to attack and kill malignant (cancerous) tumor cells. Type II IFN deficiency has been linked to several types of cancer, including B-cell lymphoma and lung cancer. Furthermore, it has been found that patients receiving the drug durvalumab to treat non-small cell lung carcinoma and transitional cell carcinoma had higher response rates to the drug, and the drug stunted the progression of both types of cancer for a longer duration of time. Thus, promoting the upregulation of type II IFN has proven to be a crucial part of creating effective cancer immunotherapy treatments. IFNG is not yet approved for treatment in any cancer immunotherapy. However, improved survival was observed when IFNG was administered to patients with bladder carcinoma and melanoma. The most promising results were achieved in patients with stage 2 and 3 ovarian carcinoma. On the contrary, it has been stressed: "Interferon-γ secreted by CD8-positive lymphocytes upregulates PD-L1 on ovarian cancer cells and promotes tumour growth."
The in vitro study of IFNG in cancer cells is more extensive, and results indicate anti-proliferative activity of IFNG leading to growth inhibition or cell death, generally induced by apoptosis but sometimes by autophagy. In addition, it has been reported that mammalian glycosylation of recombinant human IFNG, expressed in HEK293, improves its therapeutic efficacy compared to the unglycosylated form expressed in E. coli.

Involvement in antitumor immunity: Type II IFN enhances Th1 cell, cytotoxic T cell, and APC activities, which results in an enhanced immune response against malignant tumor cells, leading to tumor cell apoptosis and necroptosis (cell death). Furthermore, type II IFN suppresses the activity of regulatory T cells, which are responsible for silencing immune responses against pathogens, thereby preventing the deactivation of the immune cells involved in the killing of the tumor cells. Type II IFN prevents tumor cell division by acting directly on the tumor cells, which results in increased expression of proteins that inhibit the tumor cells from continuing through the cell cycle (i.e., cell cycle arrest). Type II IFN can also prevent tumor growth by acting indirectly on endothelial cells lining the blood vessels close to the site of the tumor, cutting off blood flow to the tumor cells and thus the supply of resources necessary for tumor cell survival and proliferation.

Barriers: The importance of type II IFN in cancer immunotherapy has been acknowledged; current research is studying the effects of type II IFN on cancer, both as a solo form of treatment and as a form of treatment to be administered alongside other anticancer drugs. But type II IFN has not been approved by the Food and Drug Administration (FDA) to treat cancer, except for malignant osteopetrosis. This is most likely due to the fact that while type II IFN is involved in antitumor immunity, some of its functions may enhance the progression of a cancer. When type II IFN acts on tumor cells, it may induce the expression of a transmembrane protein known as programmed death-ligand 1 (PDL1), which allows the tumor cells to evade an attack from immune cells. Type II IFN-mediated signaling may also promote angiogenesis (formation of new blood vessels to the tumor site) and tumor cell proliferation.

Interactions: Interferon gamma has been shown to interact with interferon gamma receptor 1 and interferon gamma receptor 2.

Diseases: Interferon gamma has been shown to be a crucial player in the immune response against some intracellular pathogens, including that of Chagas disease. It has also been identified as having a role in seborrheic dermatitis. IFNG has a significant antiviral effect in herpes simplex virus 1 (HSV) infection. IFNG compromises the microtubules that HSV relies upon for transport into an infected cell's nucleus, inhibiting the ability of HSV to replicate. Studies in mice on acyclovir-resistant herpes have shown that IFNG treatment can significantly reduce the herpes viral load. The mechanism by which IFNG inhibits herpes reproduction is independent of T cells, which means that IFNG may be an effective treatment in individuals with low T-cell counts. Chlamydia infection is impacted by IFNG in host cells. In human epithelial cells, IFNG upregulates expression of indoleamine 2,3-dioxygenase, which in turn depletes tryptophan in hosts and impedes chlamydia's reproduction. Additionally, in rodent epithelial cells, IFNG upregulates a GTPase that inhibits chlamydial proliferation.
In both the human and rodent systems, chlamydia has evolved mechanisms to circumvent the negative effects of host cell behavior.

Regulation: There is evidence that interferon-gamma expression is regulated by a pseudoknotted element in its 5' UTR. There is also evidence that interferon-gamma is regulated, either directly or indirectly, by the microRNA miR-29. Furthermore, there is evidence that interferon-gamma expression is regulated via GAPDH in T cells. This interaction takes place in the 3' UTR, where binding of GAPDH prevents the translation of the mRNA sequence. References Further reading External links IFNepitope2 Prediction of IFN-gamma inducing peptides Antiviral drugs Cytokines Immunostimulants Drugs developed by Hoffmann-La Roche
Interferon gamma
[ "Chemistry", "Biology" ]
5,000
[ "Cytokines", "Antiviral drugs", "Biocides", "Signal transduction" ]
2,687,374
https://en.wikipedia.org/wiki/SHEEP%20%28symbolic%20computation%20system%29
SHEEP is one of the earliest interactive symbolic computation systems. It is specialized for computations with tensors, and was designed for the needs of researchers working with general relativity and other theories involving extensive tensor calculus computations. SHEEP is a freeware package (copyrighted, but free for educational and research use). The name "SHEEP" is a pun on the Lisp Algebraic Manipulator, or LAM, on which SHEEP is based. The package was written by Inge Frick, using earlier work by Ian Cohen and Ray d'Inverno, who had written the earlier ALAM (Atlas LISP Algebraic Manipulation), designed in 1970. SHEEP was an interactive computer package, whereas LAM and ALAM were batch processing languages. Jan E. Åman wrote an important package in SHEEP to carry out the Cartan-Karlhede algorithm. A more recent version of SHEEP, written by Jim Skea, runs under Cambridge Lisp, which is also used for REDUCE. See also GRTensorII Notes External links SHEEP download directory at Queen Mary, University of London Some sources of info on Sheep Review article by M.A.H.MacCallum in "Workshop on Dynamical Spacetimes and Numerical Relativity" edited by Joan Centrella Tensors
SHEEP (symbolic computation system)
[ "Physics", "Engineering" ]
251
[ "Tensors", "Relativity stubs", "Theory of relativity" ]
2,687,437
https://en.wikipedia.org/wiki/GRTensorII
GRTensorII is a Maple package designed for tensor computations, particularly in general relativity. This package was developed at Queen's University in Kingston, Ontario by Peter Musgrave, Denis Pollney and Kayll Lake. While there are many packages which perform tensor computations (including a standard Maple package), GRTensorII is particularly well suited for carrying out routine computations of useful quantities when working with (or searching for) exact solutions in general relativity. Its principal advantages include: convenient definition of new spacetimes and tensor expressions; efficient computation with frames; efficient computation of Ricci and Weyl spinor components and of the Petrov classification; efficient computation of the Carminati-McLenaghan invariants and other curvature invariants. Currently, GRTensorII does have some drawbacks: Maple is expensive; valuable subpackages for perturbation and junction computations have not been updated; no subpackage is yet publicly available in GRTensorII for executing the Cartan-Karlhede algorithm; sharing information with standard Maple packages can sometimes become awkward. References: Not a textbook, but excellent supplementary reading. The author uses GRTensorII, so this book is particularly well suited for use by students who have a working installation of this package. External links GRTensorII home page General relativity
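GRTensorII itself runs inside Maple, but the bookkeeping it automates is easy to illustrate with open-source tools. The sketch below computes Christoffel symbols and the Ricci scalar from scratch with SymPy for a round 2-sphere, a deliberately tiny stand-in for the 4D spacetime metrics such packages handle; it is an independent illustration of the kind of computation involved, not GRTensorII code:

```python
import sympy as sp

# Round 2-sphere of radius a; the expected Ricci scalar is R = 2/a**2.
theta, phi, a = sp.symbols('theta phi a', positive=True)
x = [theta, phi]
g = sp.Matrix([[a**2, 0],
               [0, a**2 * sp.sin(theta)**2]])
ginv = g.inv()
n = len(x)

# Christoffel symbols: Gamma^k_ij = (1/2) g^kl (d_i g_lj + d_j g_li - d_l g_ij)
Gamma = [[[sp.simplify(
            sp.Rational(1, 2) * sum(
                ginv[k, l] * (sp.diff(g[l, j], x[i])
                              + sp.diff(g[l, i], x[j])
                              - sp.diff(g[i, j], x[l]))
                for l in range(n)))
           for j in range(n)]
          for i in range(n)]
         for k in range(n)]

# Ricci tensor: R_ij = d_k Gamma^k_ij - d_j Gamma^k_ik
#                     + Gamma^k_kl Gamma^l_ij - Gamma^k_jl Gamma^l_ik
def ricci(i: int, j: int):
    r = sum(sp.diff(Gamma[k][i][j], x[k]) - sp.diff(Gamma[k][i][k], x[j])
            for k in range(n))
    r += sum(Gamma[k][k][l] * Gamma[l][i][j] - Gamma[k][j][l] * Gamma[l][i][k]
             for k in range(n) for l in range(n))
    return sp.simplify(r)

R = sp.simplify(sum(ginv[i, j] * ricci(i, j)
                    for i in range(n) for j in range(n)))
print(R)   # prints: 2/a**2
```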
GRTensorII
[ "Physics" ]
262
[ "General relativity", "Theory of relativity" ]
2,687,833
https://en.wikipedia.org/wiki/Net%20positive%20suction%20head
In a hydraulic circuit, net positive suction head (NPSH) may refer to one of two quantities in the analysis of cavitation: The available NPSH (NPSHA): a measure of how close the fluid at a given point is to flashing, and so to cavitation; technically it is the absolute pressure head minus the vapour pressure head of the liquid. The required NPSH (NPSHR): the head value at the suction side (e.g. the inlet of a pump) required to keep the fluid from cavitating (provided by the manufacturer). NPSH is particularly relevant inside centrifugal pumps and turbines, which are the parts of a hydraulic system most vulnerable to cavitation. If cavitation occurs, the drag coefficient of the impeller vanes will increase drastically, possibly stopping flow altogether, and prolonged exposure will damage the impeller.

NPSH in a pump: In a pump, cavitation will first occur at the inlet of the impeller. Denoting the inlet by i, the NPSHA at this point is defined as:

$\mathrm{NPSH_A} = \frac{p_i - p_v}{\rho g} + \frac{V_i^2}{2g}$

where $p_i$ is the absolute pressure at the inlet, $V_i$ is the average velocity at the inlet, $\rho$ is the fluid density, $g$ is the acceleration of gravity and $p_v$ is the vapor pressure of the fluid. Note that NPSH is equivalent to the sum of both the static and dynamic heads, that is, the stagnation head, minus the equilibrium vapor pressure head, hence "net positive suction head". Applying Bernoulli's equation for the control volume enclosing the suction free surface 0 and the pump inlet i, under the assumption that the kinetic energy at 0 is negligible, that the fluid is inviscid, and that the fluid density is constant:

$\frac{p_0}{\rho g} + z_0 = \frac{p_i}{\rho g} + \frac{V_i^2}{2g} + z_i$

Using the above application of Bernoulli to eliminate the velocity term and local pressure terms in the definition of NPSHA:

$\mathrm{NPSH_A} = \frac{p_0 - p_v}{\rho g} - (z_i - z_0)$

This is the standard expression for the available NPSH at the point; in a real (viscous) system, suction-line friction losses $h_f$ are subtracted as well. Cavitation will occur at the point i when the available NPSH is less than the NPSH required to prevent cavitation (NPSHR). For simple impeller systems, NPSHR can be derived theoretically, but very often it is determined empirically. Note that NPSHA and NPSHR are in absolute units and are usually expressed in "m" or "ft", not "psia". Experimentally, NPSHR is often defined as the NPSH3, the point at which the head output of the pump decreases by 3% at a given flow due to reduced hydraulic performance. On multi-stage pumps this is limited to a 3% drop in the first-stage head.

NPSH in a turbine: The calculation of NPSH in a reaction turbine is different from the calculation of NPSH in a pump, because the point at which cavitation first occurs is in a different place. In a reaction turbine, cavitation will first occur at the outlet of the impeller, at the entrance of the draft tube. Denoting the entrance of the draft tube by e, the NPSHA is defined in the same way as for pumps:

$\mathrm{NPSH_A} = \frac{p_e - p_v}{\rho g} + \frac{V_e^2}{2g}$

Applying Bernoulli's principle from the draft tube entrance e to the lower free surface 0, under the assumption that the kinetic energy at 0 is negligible, that the fluid density is constant, and accounting for the friction loss $h_f$ in the draft tube:

$\frac{p_e}{\rho g} + \frac{V_e^2}{2g} + z_e = \frac{p_0}{\rho g} + z_0 + h_f$

Using the above application of Bernoulli to eliminate the velocity term and local pressure terms in the definition of NPSHA:

$\mathrm{NPSH_A} = \frac{p_0 - p_v}{\rho g} - (z_e - z_0) + h_f$

Note that in turbines minor friction losses ($h_f$) alleviate the effect of cavitation, opposite to what happens in pumps.

NPSH design considerations: Vapour pressure is strongly dependent on temperature, and thus so are both NPSHR and NPSHA.
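Returning to the pump formula derived above, a minimal numerical sketch may help; the fluid properties are illustrative values for cold water, and the friction head is simply subtracted as noted in the derivation:

```python
RHO = 998.0   # water density, kg/m^3 (about 20 degC, assumed)
G = 9.81      # m/s^2

def npsh_available(p_surface_pa: float, p_vapour_pa: float,
                   z_inlet_above_surface_m: float, h_friction_m: float) -> float:
    """NPSHA = (p0 - pv)/(rho*g) - (z_i - z_0) - h_f, in metres of head."""
    return ((p_surface_pa - p_vapour_pa) / (RHO * G)
            - z_inlet_above_surface_m - h_friction_m)

# Open tank of 20 degC water (vapour pressure ~2.3 kPa), pump inlet 3 m above
# the liquid surface, 1 m of suction-pipe friction loss:
npsha = npsh_available(101_325.0, 2_300.0, 3.0, 1.0)
print(f"NPSHA = {npsha:.2f} m")   # ~6.1 m; must stay above the pump's NPSHR
```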
Centrifugal pumps are particularly vulnerable, especially when pumping a heated solution near its vapor pressure, whereas positive displacement pumps are less affected by cavitation, as they are better able to pump two-phase flow (the mixture of gas and liquid); however, the resultant flow rate of the pump will be diminished, because the gas volumetrically displaces a disproportionate share of liquid. Careful design is required to pump high temperature liquids with a centrifugal pump when the liquid is near its boiling point. The violent collapse of the cavitation bubble creates a shock wave that can carve material from internal pump components (usually the leading edge of the impeller) and creates noise often described as "pumping gravel". Additionally, the inevitable increase in vibration can cause other mechanical faults in the pump and associated equipment. Relationship to other cavitation parameters The NPSH appears in a number of other cavitation-relevant parameters. The suction head coefficient is a dimensionless measure of NPSH: $C_s = \frac{g\,\mathrm{NPSH}}{\omega^2 D^2}$, where $\omega$ is the angular velocity (in rad/s) of the turbo-machine shaft, and $D$ is the turbo-machine impeller diameter. Thoma's cavitation number is defined as: $\sigma = \frac{\mathrm{NPSH}}{H}$, where $H$ is the head across the turbo-machine. Some general NPSH examples (based on sea level). Example Number 1: A tank with a liquid level 2 metres above the pump intake, plus the atmospheric pressure of 10 metres, minus a 2 metre friction loss into the pump (say for pipe and valve loss), minus the NPSHR value (say 2.5 metres) from the manufacturer's curve for the pre-designed pump, gives an NPSHA (available) of 7.5 metres (not forgetting the flow duty). This equates to 3 times the NPSH required. This pump will operate well so long as all other parameters are correct. Remember that a positive or negative change in flow duty will change the reading on the pump manufacturer's NPSHR curve: the lower the flow, the lower the NPSHR, and vice versa. Lifting out of a well will also create negative NPSH; however, remember that atmospheric pressure at sea level is worth 10 metres! This helps us, as it gives us a bonus boost or "push" into the pump intake. (Remember that you only have 10 metres of atmospheric pressure as a bonus and nothing more.) Example Number 2: A well or bore with an operating level of 5 metres below the intake, minus a 2 metre friction loss into the pump (pipe loss), minus the NPSHR value (say 2.4 metres) from the manufacturer's curve, gives an NPSHA (available) of −9.4 metres. Adding the atmospheric pressure of 10 metres gives a positive NPSHA of 0.6 metres, which just meets the minimum requirement of 600 mm above NPSHR, so the pump should lift from the well. Using the situation from Example Number 2 above, but pumping 70 degrees Celsius (158 °F) water from a hot spring, creating negative NPSH, yields the following: Example Number 3: A well or bore running at 70 degrees Celsius (158 °F) with an operating level of 5 metres below the intake, minus a 2 metre friction loss into the pump (pipe loss), minus the NPSHR value (say 2.4 metres) from the manufacturer's curve, minus a temperature (vapour pressure) loss of 3 metres (10 feet), gives an NPSHA (available) of −12.4 metres. Adding the atmospheric pressure of 10 metres leaves a negative NPSHA of −2.4 metres. Remembering that the minimum requirement is 600 mm above the NPSHR, this pump will not be able to pump the 70 degree Celsius liquid; it will cavitate, lose performance and suffer damage.
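The head bookkeeping in the three worked examples reduces to a single sum, sketched below. The 10-metre water column for sea-level atmospheric pressure follows the text; the helper function, its name and its sign conventions are inventions for this illustration.

```python
# Reproduces the arithmetic of the three NPSH examples above.
ATMOSPHERE_M = 10.0  # atmospheric pressure at sea level, as head of water (m)

def npsha_margin(static_head_m: float, friction_loss_m: float,
                 npshr_m: float, vapour_loss_m: float = 0.0) -> float:
    """NPSHA margin (m) relative to the required NPSHR.

    static_head_m   -- liquid level relative to pump intake (+ above, - below)
    friction_loss_m -- pipe/valve losses into the pump
    npshr_m         -- NPSHR from the manufacturer's curve at the flow duty
    vapour_loss_m   -- extra head lost to vapour pressure for hot liquids
    """
    return (ATMOSPHERE_M + static_head_m - friction_loss_m
            - vapour_loss_m - npshr_m)

print(npsha_margin(+2.0, 2.0, 2.5))       # Example 1: +7.5 m -> comfortable
print(npsha_margin(-5.0, 2.0, 2.4))       # Example 2: +0.6 m -> marginal lift
print(npsha_margin(-5.0, 2.0, 2.4, 3.0))  # Example 3: -2.4 m -> will cavitate
```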
To work efficiently, the pump must be buried in the ground at a depth of 2.4 metres plus the required 600 mm minimum, a total depth of 3 metres into the pit (3.5 metres to be completely safe). A minimum of 600 mm (0.06 bar), and a recommended 1.5 metres (0.15 bar), of head pressure above the NPSHR value specified by the manufacturer is required to allow the pump to operate properly. Serious damage may occur if a large pump is sited incorrectly with an incorrect NPSHR value, and this may result in a very expensive pump or installation repair. NPSH problems may be solved by changing the NPSHR or by re-siting the pump. If the NPSHA is, say, 10 bar, then the pump in use will deliver exactly 10 bar more over the entire operational curve of the pump than its listed operational curve. Example: a pump with a maximum pressure head of 8 bar (80 metres) will actually run at 18 bar if the NPSHA is 10 bar, i.e. 8 bar (pump curve) plus 10 bar NPSHA = 18 bar. This phenomenon is what manufacturers use when they design multistage pumps (pumps with more than one impeller): each stacked impeller boosts the succeeding impeller to raise the pressure head. Some pumps can have up to 150 stages or more, in order to boost heads up to hundreds of metres. References Hydraulics Fluid mechanics
Net positive suction head
[ "Physics", "Chemistry", "Engineering" ]
1,860
[ "Physical systems", "Hydraulics", "Civil engineering", "Fluid mechanics", "Fluid dynamics" ]
2,690,240
https://en.wikipedia.org/wiki/David%20G.%20Cory
David G. Cory is a Professor of Chemistry at the University of Waterloo, where he holds the Canada Excellence Research Chair in Quantum Information Processing. He works at the Institute for Quantum Computing, and is also associated with the Waterloo Institute for Nanotechnology. Education and career Cory was educated at Case Western Reserve University, earning a bachelor's degree there in 1981 and a Ph.D. in chemistry in 1987. He carried out postdoctoral research at Radboud University Nijmegen in the Netherlands and at the Naval Research Laboratory in Washington, D.C. He was a Professor of Nuclear Engineering at the Massachusetts Institute of Technology prior to his 2010 appointment at Waterloo. At MIT, he worked on NMR, including his work on NMR quantum computation. Together with Amr Fahmy and Timothy Havel he developed the concept of pseudo-pure states and performed the first experimental demonstrations of NMR quantum computing. Cory's research also concerns the realization and application of quantum control in various physical systems and devices. In 2015, he and teams from the University of Waterloo, the National Institute of Standards and Technology and Boston University demonstrated the generation and control of orbital angular momentum of neutron beams using a fork-dislocation grating, extending existing work on optical and electron beams to neutrons. They subsequently demonstrated control of both the spin and orbital angular momentum degrees of freedom of neutron beams. See also NMR quantum computer Randomized benchmarking List of University of Waterloo people References External links Living people Year of birth missing (living people) Case Western Reserve University alumni MIT School of Engineering faculty Academic staff of the University of Waterloo 21st-century American chemists Quantum physicists Canadian chemical engineers 21st-century chemists Quantum information scientists Canadian physicists Physical chemists Fellows of the American Physical Society
David G. Cory
[ "Physics", "Chemistry" ]
356
[ "Physical chemists", "Quantum physicists", "Quantum mechanics" ]
2,690,303
https://en.wikipedia.org/wiki/Stanford%20Synchrotron%20Radiation%20Lightsource
The Stanford Synchrotron Radiation Lightsource (formerly Stanford Synchrotron Radiation Laboratory), a division of SLAC National Accelerator Laboratory, is operated by Stanford University for the Department of Energy. SSRL is a National User Facility which provides synchrotron radiation, a name given to electromagnetic radiation in the x-ray, ultraviolet, visible and infrared realms produced by electrons circulating in a storage ring (Stanford Positron Electron Asymmetric Ring - SPEAR) at nearly the speed of light. The extremely bright light that is produced can be used to investigate various forms of matter ranging from objects of atomic and molecular size to man-made materials with unusual properties. The information and knowledge obtained are of great value to society, with impact in areas such as the environment, future technologies, health, biology, basic research, and education. SSRL provides experimental facilities to some 2,000 academic and industrial scientists working in such varied fields as drug design, environmental cleanup, electronics, and x-ray imaging. It is located in San Mateo County, in the city of Menlo Park, California, close to the Stanford University main campus. History In 1972, the first x-ray beamline was constructed by Ingolf Lindau and Piero Pianetta as literally a "hole in the wall" extending off of the SPEAR storage ring. SPEAR had been built in an era of particle colliders, where physicists were more interested in smashing particles together in hope of discovering antimatter than in using x-ray radiation for solid state physics and chemistry. From those meager beginnings the Stanford Synchrotron Radiation Project (SSRP) began. Within a short time SSRP had five experimental hutches that each used the radiation originating from only one of the large SPEAR dipole (bending) magnets. Each one of those stations was equipped with a monochromator to select the radiation of interest, and experimenters would bring their samples and end stations from all over the world to study the unique effects only achieved through synchrotron radiation. The SLAC 2-mile linear accelerator was the original source for 3 GeV electrons, but by 1991 SPEAR had its own 3-section linac and energy-ramping booster ring. Today, the SPEAR storage ring is dedicated completely to the Stanford Synchrotron Radiation Lightsource as part of the SLAC National Accelerator Laboratory facility. SSRL currently operates 24/7 for about nine months each year; the remaining time is used for major maintenance and upgrades where direct access to the storage ring is needed. There are currently 17 beamlines and over 30 unique experimental stations which are made available to users from universities, government labs, and industry from all over the world. Directors Sebastian Doniach 1973-1977 Arthur Bienenstock 1978-1998 Keith Hodgson 1998-2005 Joachim Stöhr 2005-2009 Piero Pianetta 2009 Chi-Chang Kao 2010-2012 Piero Pianetta 2012-2014 Kelly Gaffney 2014-2019 Paul McIntyre 2019-present Facilities listed by Beamline and Station BL 7-3, 9-3, 4-3 These three beamlines are dedicated to biological x-ray absorption spectroscopy. Beamline 7-3 is an unfocused beamline and thus is best suited for XAS on dilute protein samples. Beamline 9-3 has an additional upstream focusing mirror, over 7-3, making it the preferred choice for photo-reducing samples or ones where multiple different spots are needed.
Beamline 4-3 was newly reopened as of 4/6/2009, bringing special capabilities for soft-energy (2.4-6 keV) studies in addition to hard x-rays. Beamline 4-3 now replaces 6-2 as the preferred location for sulfur K-edge experiments at SSRL. BL 6-2 With three upstream mirrors, two for focusing and a third for harmonic rejection, this beamline has become dedicated to transmission x-ray microscopy in the 4-12 keV range, soft x-ray absorption spectroscopy including rapid-scanning xRF imaging, and advanced spectroscopy such as XES (resonant and non-resonant x-ray emission spectroscopy), XRS (non-resonant x-ray Raman scattering) and RIXS (resonant inelastic X-ray scattering). BL 8-2, 10-1, 13-2 These three beamlines are specialized for soft x-ray absorption spectroscopy, including NEXAFS (near edge X-ray absorption fine structure), some light-atom ligand K-edge (carbon, nitrogen, oxygen, chlorine), PES (photoemission spectroscopy), and L-edge measurements. All experiments on these beamlines require special handling and advanced ultra high vacuum experience and techniques. BL 11-3 Materials science scattering, reflectivity and single crystal diffraction experiments. Uses to date include: study of structure in organic, metal, and semiconductor thin films and multilayers; study of charge-density waves in rare earth tri-tellurides; study of in-situ growth of biogenic minerals; partial determination of texture in recrystallized pumice; quick determination of single crystal orientation. BL 1-5, 7-1, 9-1, 9-2, 11-1, 11-3, 12-2 These beamlines are used for macromolecular x-ray crystallography. All of the beamlines are for general use, except for beamline 12-2, which was funded in part by Caltech via a gift from the Gordon and Betty Moore Foundation. As a result, 40% of beamtime on 12-2 is reserved for Caltech researchers. BL 4-2 Biological small-angle X-ray scattering beamline. External links SSRL Headline News A Monthly Digital Publication Lightsources.org Archives and History Office - Stanford Synchrotron Radiation Project (SSRP) References SSRL Home page: Synchrotron Radiation Lightsource Synchrotron radiation facilities Laboratories in California United States Department of Energy national laboratories Buildings and structures in San Mateo County, California Particle physics facilities University and college laboratories in the United States Research institutes in the San Francisco Bay Area
Stanford Synchrotron Radiation Lightsource
[ "Materials_science" ]
1,271
[ "Materials testing", "Synchrotron radiation facilities" ]
2,690,773
https://en.wikipedia.org/wiki/Starlink%20Project
The Starlink Project, referred to by users as Starlink and by developers as simply The Project, was a UK astronomical computing project which supplied general-purpose data reduction software. Until the late 1990s, it also supplied computing hardware and system administration personnel to UK astronomical institutes. In the former respect, it was analogous to the US IRAF project. The project was formally started in 1980, though the funding had been agreed, and some work begun, a year earlier. It was closed down when its funding was withdrawn by the Particle Physics and Astronomy Research Council in 2005. In 2006, the Joint Astronomy Centre released its own updated version of Starlink and took over maintenance; the task was passed again in mid-2015 to the East Asian Observatory. The most recent release, 2023A, appears in the release list below. Part of the software is relicensed under the GNU GPL while some of it remains under the original custom licence. History From its beginning, the project aimed to cope with the ever-increasing data volumes which astronomers had to handle. A 1982 paper exclaimed that astronomers were returning from observing runs (a week or so of observations at a remote telescope) with more than 10 Gigabits of data on tape; at the end of its life the project was rolling out libraries to handle data of more than 4 Gigabytes per single image. The project provided centrally-purchased (and thus discounted) hardware, professional system administrators, and the developers to write astronomical data-reduction applications for the UK astronomy community and beyond. At its peak size in the late 1980s and early 1990s, the project had a presence at around 30 sites, located at most of the UK universities with an astronomy department, plus facilities at the Joint Astronomy Centre, the home of UKIRT and the James Clerk Maxwell Telescope in Hawaii. The number of active developers fluctuated between five and more than a dozen. By 1982, the project had a staff of 17, serving about 400 users at six sites, using seven VAXen (six VAX-11/780s and one VAX-11/750, representing a total of about 6.5 GB of disk space). They were networked from the outset, first with DECNET and later with X.25. Between 1992 and 1995 the project switched to UNIX (and switched the networking to TCP/IP), supporting Digital UNIX on Alpha-based systems, and Solaris on systems from Sun Microsystems. By the late 1990s it was additionally supporting Linux, and by 2005 it was supporting Red Hat Linux, Solaris, and Tru64 UNIX. It was about this time that the project open-sourced its software (using the GNU General Public License; it had previously had an "academic use only" licence), and reworked its build system so that the software could be built on a much broader range of POSIX-like systems, including OS X and Cygwin. Though it was not explicitly funded to do so, the project was an early participant in the Virtual Observatory movement, and contributed to the IVOA. One of its VO applications was TOPCAT, development of which continues, with AstroGrid funding. Applications, libraries, and other facilities The project produced a number of applications and libraries, including: GAIA The main GUI application, which acts as a general astronomical image viewer, as well as a front end to many of the other applications. ORAC-DR The ORAC-DR data reduction system, developed at JAC Hawai'i, is a data processing pipeline for incoming data. It is in use for online data reduction at UKIRT and JCMT for a variety of instruments.
This is not a Starlink application as such, but it is tightly integrated with the Starlink suite, and by default uses Starlink software as its application engines. See the ORAC-DR home page for further details. KAPPA A suite of general-purpose data-analysis and visualisation tools, usable both from the command line and graphically. It provides general-purpose applications that have wide applicability, concentrating on image processing, data visualisation, and manipulating NDF components. It integrates with other Starlink packages. In a wider context, KAPPA offers facilities not in IRAF, for instance handling of data errors, quality masking, a graphics database, availability from the shell, as well as more n-dimensional applications, widespread use of data axes, and a different style. It integrates with instrument packages developed at UK observatories. With the automatic data conversion and the availability of KAPPA and other Starlink packages from within the IRAF command language, it is possible to pick the best of the relevant tools from both systems to get the job done. CCDPACK A package of programs for reducing CCD-like data. They allow you to debias, remove dark current, pre-flash, flatfield, register, resample, normalize and combine your data. AST A flexible and powerful library for handling World Coordinate Systems, partly based on the SLALIB library. If you are writing software for astronomy and need to use celestial coordinates (e.g. RA and Dec), spectral coordinates (e.g. wavelength, frequency, etc.), or other coordinate system information, then this library should be of interest. It provides solutions for most of the problems you will meet and allows you to write robust and flexible software. It is able to read and write WCS information in a variety of formats, including FITS-WCS. It has Fortran, C and Python bindings. SLALIB A library of routines intended to make accurate and reliable positional-astronomy applications easier to write. Most SLALIB routines are concerned with astronomical position and time, but a number have wider trigonometrical, numerical or general applications. As well as this GPL version, there is also a commercial version of SLALIB available from its original author. HDS HDS, the Hierarchical Data System, is a portable, flexible system for storing and retrieving data; it takes over from a computer's filing system at the level of an individual file. A conventional file effectively contains a 1-dimensional sequence of data elements, whereas an HDS file can contain a more complex structure. It predates the Hierarchical Data Format by several years. NDF NDF, the N-dimensional Data Format, is the project's principal data format. Built upon HDS, it stores bulk data in the form of n-dimensional arrays of numbers: mostly spectra, images, and cubes. It supports concepts such as quality, data errors, world coordinate systems, and metadata. It is also extensible to handle user-defined information. ADAM The ADAM environment was a standardised software environment developed initially by the Royal Greenwich Observatory, and then adopted and developed by Starlink between 1985 and 1990.
It was initially designed as a telescope control system, installed at the Anglo-Australian Telescope at Siding Spring Observatory, the William Herschel Telescope at the Isaac Newton Group of Telescopes on La Palma, and at the James Clerk Maxwell Telescope on Mauna Kea (where it is still working in legacy systems), but its role expanded to cover graphics, data access, interprocess communication, and the full range of functionality required to support a diverse range of interoperable applications. Although it is no longer seriously used for telescope control, other layers of it live on in the current versions of the Starlink applications and libraries. The project also produced a number of cookbooks on various astronomical topics. By the end, the project's code base consisted of around 100 components, totalling around 2,100,000 source lines of code written by the project or curated by it, in various languages including Fortran, C, C++, Java, Perl and Tcl/Tk, plus another 700,000 lines of customised third-party code. Obtaining the software At present, though funding for the project has ceased, the software is still available, either as pre-built distributions, or from a Git repository. The Astrophysics Source Code Library maintains an entry on Starlink. The Joint Astronomy Centre took over the maintenance of the Starlink codebase (with support from STFC), and made the following releases: Keoe (Vega) on 2006 September 7. Hokulei (Capella) in spring 2007 (March 1). Puana (Procyon) on 2007 August 22. Humu (Altair) on 2008 February 8. Lehuakona (Antares) on 2008 November 12. Nanahope (Pollux) on 2009 July 27. Hawaiki (Deneb) on 2010 January 20. Namaka (Lambda Scorpii) on 2011 February 8. Kapuahi (Aldebaran) on 2012 September 17. Hikianalia (Spica) on 2013 April 15. 2014A on 2014 July 24. The East Asian Observatory has now taken over co-ordination and maintenance of Starlink software, and it has made the following releases: 2015A on 2015 April 6. 2015B on 2015 December 17. 2016A on 2016 November 15. 2017A on 2017 August 10. 2018A on 2018 July 19. 2021A on 2021 December 27. 2023A on 2023 December 20. See also Space flight simulation game List of space flight simulation games Planetarium software List of observatory software References External links EAO Starlink page Astronomical imaging Astronomy in the United Kingdom Astronomy organizations Astronomy software College and university associations and consortia in the United Kingdom Cross-platform software Free astronomy software Grid computing projects Information technology organisations based in the United Kingdom Science and Technology Facilities Council
Starlink Project
[ "Astronomy" ]
1,954
[ "Astronomy software", "Works about astronomy", "Astronomy organizations" ]
2,690,851
https://en.wikipedia.org/wiki/Norbornene
Norbornene or norbornylene or norcamphene is a highly strained bridged cyclic hydrocarbon. It is a white solid with a pungent sour odor. The molecule consists of a cyclohexene ring with a methylene bridge between carbons 1 and 4. The molecule carries a double bond which induces significant ring strain and reactivity. Production Norbornene is made by a Diels–Alder reaction of cyclopentadiene and ethylene. Many substituted norbornenes can be prepared similarly. Related bicyclic compounds are norbornadiene, which has the same carbon skeleton but with two double bonds, and norbornane, which is prepared by hydrogenation of norbornene. Reactions Norbornene undergoes an acid-catalyzed hydration reaction to form norborneol. This reaction was of great interest in the elucidation of the non-classical carbocation controversy. Norbornene is used in the Catellani reaction and in norbornene-mediated meta-C−H activation. Certain substituted norbornenes undergo unusual substitution reactions owing to the generation of the 2-norbornyl cation. Being strained enes, norbornenes react readily with thiols in the thiol-ene reaction to form thioethers. This makes norbornene-functionalized monomers ideal for polymerization with thiol-based monomers to form thiol-ene networks. Polynorbornenes Norbornenes are important monomers in ring-opening metathesis polymerizations (ROMP). Typically these conversions are effected with ill-defined catalysts. Polynorbornenes exhibit high glass transition temperatures and high optical clarity. In addition to ROMP, norbornene monomers also undergo vinyl-addition polymerization, and norbornene is a popular monomer for use in cyclic olefin copolymers. Polynorbornene is used mainly in the rubber industry for antivibration (rail, building, industry), antiimpact (personal protective equipment, shoe parts, bumpers) and grip improvement (toy tires, racing tires, transmission systems, transport systems for copiers, feeders, etc.) Ethylidene norbornene is a related monomer derived from cyclopentadiene and butadiene. See also Nadic anhydride References Monomers Norbornanes
Norbornene
[ "Chemistry", "Materials_science" ]
493
[ "Monomers", "Polymer chemistry" ]
2,691,067
https://en.wikipedia.org/wiki/Injection%20pump
An injection pump is the device that pumps fuel into the cylinders of a diesel engine. Traditionally, the injection pump was driven indirectly from the crankshaft by gears, chains or a toothed belt (often the timing belt) that also drives the camshaft. It rotates at half crankshaft speed in a conventional four-stroke diesel engine. Its timing is such that the fuel is injected only very slightly before top dead centre of that cylinder's compression stroke. It is also common for the pump belt on gasoline engines to be driven directly from the camshaft. In some systems injection pressures can be as high as 620 bar (8992 psi). Safety Because of the need for positive injection into a very high-pressure environment, the pump develops great pressure: typically 15,000 psi (100 MPa) or more on newer systems. This is a good reason to take great care when working on diesel systems; escaping fuel at this sort of pressure can easily penetrate skin and clothes, and be injected into body tissues, with medical consequences serious enough to warrant amputation. Construction Earlier diesel pumps used an in-line layout with a series of cam-operated injection cylinders in a line, rather like a miniature inline engine. The pistons have a constant stroke volume, and injection volume (i.e., throttling) is controlled by rotating the cylinders against a cut-off port that aligns with a helical slot in the cylinder. When all the cylinders are rotated at once, they simultaneously vary their injection volume to produce more or less power from the engine. Inline pumps still find favour on large multi-cylinder engines such as those on trucks, construction plant, static engines and agricultural vehicles. For use on cars and light trucks, the rotary pump or distributor pump was developed. It uses a single injection cylinder driven from an axial cam plate, which injects into the individual fuel lines via a rotary distribution valve. Later incarnations, such as the Bosch VE pump, vary the injection timing with crankshaft speed to allow greater power at high crank speeds, and smoother, more economical running at slower crankshaft speeds. Some distributor injection pump (VE, from the German Verteilereinspritzpumpe) variants have a pressure-based system that allows the injection volume to increase over normal to allow a turbocharger- or supercharger-equipped engine to develop more power under boost conditions. All injection pumps incorporate a governor to cut fuel supply if the crankshaft rpm endangers the engine - the heavy moving parts of diesel engines do not tolerate overspeeding well, and catastrophic damage can occur if they are over-revved. Poorly maintained and worn engines can consume their lubrication oil through worn-out crankcase ventilation systems and 'run away', causing increasing engine speed until the engine destroys itself. This is because most diesel engines only regulate their speed by fuel supply control and most don't have a throttle valve to control air intake, other than those with EGR systems. New types Mechanical pumps are gradually being phased out in order to comply with international emissions directives, and to increase performance and economy. From the 1990s, an intermediate stage on the way to full electronic control was provided by pumps that used electronic control units to control some of the functions of the rotary pump but were still mechanically timed and powered by the engine. The first generation four- and five-cylinder VW/Audi TDI engines pioneered these pumps before switching to unit injectors.
These pumps were used to provide better injection control and refinement for car diesel engines as they changed from indirect injection to much more efficient but inherently less refined direct injection engines in the 1990s. The ECUs could even vary the damping of hydraulic engine mounts to aid refinement. The Bosch VP30, VP37 and VP44 are example pumps. Since then there has been a widespread change to common rail diesel systems and electronic unit direct injection systems. These allow higher pressures to be developed, much finer control of injection volumes, and multiple injection stages compared to mechanical systems. Injection pressures throughout the process should be above 1000–1200 bar for good spray formation and air–fuel mixing; a tendency in practice towards 1600–1800 bar and higher is noted. See also Unit injector References Pumps Fuel injection systems Diesel engine components Diesel engine technology
Injection pump
[ "Physics", "Chemistry" ]
860
[ "Pumps", "Hydraulics", "Physical systems", "Turbomachinery" ]
504,902
https://en.wikipedia.org/wiki/Electric%20field%20gradient
In atomic, molecular, and solid-state physics, the electric field gradient (EFG) measures the rate of change of the electric field at an atomic nucleus generated by the electronic charge distribution and the other nuclei. The EFG couples with the nuclear electric quadrupole moment of quadrupolar nuclei (those with spin quantum number greater than one-half) to generate an effect which can be measured using several spectroscopic methods, such as nuclear magnetic resonance (NMR), microwave spectroscopy, electron paramagnetic resonance (EPR, ESR), nuclear quadrupole resonance (NQR), Mössbauer spectroscopy or perturbed angular correlation (PAC). The EFG is non-zero only if the charges surrounding the nucleus violate cubic symmetry and therefore generate an inhomogeneous electric field at the position of the nucleus. EFGs are highly sensitive to the electronic density in the immediate vicinity of a nucleus. This is because the EFG operator scales as $r^{-3}$, where $r$ is the distance from the nucleus. This sensitivity has been used to study effects on charge distribution resulting from substitution, weak interactions, and charge transfer. Especially in crystals, the local structure can be investigated with the above methods using the EFG's sensitivity to local changes, like defects or phase changes. In crystals the EFG is of the order of $10^{21}\,\mathrm{V/m^2}$. Density functional theory has become an important tool for methods of nuclear spectroscopy to calculate EFGs and provide a deeper understanding of specific EFGs in crystals from measurements. Definition A given charge distribution of electrons and nuclei, $\rho(\mathbf{r})$, generates an electrostatic potential $V(\mathbf{r})$. The derivative of this potential is the negative of the electric field generated. The first derivatives of the field, or the second derivatives of the potential, constitute the electric field gradient. The nine components of the EFG are thus defined as the second partial derivatives of the electrostatic potential, evaluated at the position of a nucleus: $V_{ij} = \frac{\partial^2 V}{\partial x_i \,\partial x_j}$. For each nucleus, the components $V_{ij}$ are combined as a symmetric 3 × 3 matrix. Under the assumption that the charge distribution generating the electrostatic potential is external to the nucleus, the matrix is traceless, for in that situation Laplace's equation, $\nabla^2 V(\mathbf{r}) = 0$, holds. Relaxing this assumption, a more general form of the EFG tensor which retains the symmetry and traceless character is $\varphi_{ij} = V_{ij} - \tfrac{1}{3}\,\delta_{ij}\,\nabla^2 V$, where $\nabla^2 V(\mathbf{r})$ is evaluated at a given nucleus. As $V$ (and $\varphi$) is symmetric it can be diagonalized. The principal tensor components are usually denoted $V_{zz}$, $V_{yy}$ and $V_{xx}$ in order of decreasing modulus. Given the traceless character, only two of the principal components are independent. Typically these are described by $V_{zz}$ and the asymmetry parameter, $\eta$, defined as $\eta = \frac{V_{xx} - V_{yy}}{V_{zz}}$, with $|V_{zz}| \ge |V_{yy}| \ge |V_{xx}|$ and thus $0 \le \eta \le 1$. The electric field gradient as well as the asymmetry parameter can be evaluated numerically for large electrostatic systems. References Electrostatics Atomic physics Quantum chemistry Electric and magnetic fields in matter
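A short numerical sketch of the conventions just described: diagonalize a symmetric, traceless EFG tensor, order the principal components by decreasing modulus, and compute the asymmetry parameter. The example tensor is made up for illustration; only NumPy is assumed.

```python
import numpy as np

def efg_principal(efg: np.ndarray):
    """Diagonalize a symmetric, traceless 3x3 EFG tensor.

    Returns (Vzz, Vyy, Vxx, eta) with |Vzz| >= |Vyy| >= |Vxx| and
    eta = (Vxx - Vyy) / Vzz, so that 0 <= eta <= 1.
    """
    assert np.allclose(efg, efg.T), "EFG tensor must be symmetric"
    assert abs(np.trace(efg)) < 1e-10, "EFG tensor must be traceless"
    eigvals = np.linalg.eigvalsh(efg)           # principal components
    vxx, vyy, vzz = sorted(eigvals, key=abs)    # ascending modulus
    eta = (vxx - vyy) / vzz
    return vzz, vyy, vxx, eta

# Axially symmetric example, in units of 10^21 V/m^2: eta should be 0.
efg = np.diag([-0.5, -0.5, 1.0])
print(efg_principal(efg))   # (1.0, -0.5, -0.5, 0.0)
```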
Electric field gradient
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
625
[ "Quantum chemistry", "Quantum mechanics", "Electric and magnetic fields in matter", "Materials science", "Theoretical chemistry", "Condensed matter physics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
505,425
https://en.wikipedia.org/wiki/DNA%20polymerase%20I
DNA polymerase I (or Pol I) is an enzyme that participates in the process of prokaryotic DNA replication. Discovered by Arthur Kornberg in 1956, it was the first known DNA polymerase (and the first known of any kind of polymerase). It was initially characterized in E. coli and is ubiquitous in prokaryotes. In E. coli and many other bacteria, the gene that encodes Pol I is known as polA. The E. coli Pol I enzyme is composed of 928 amino acids, and is an example of a processive enzyme: it can sequentially catalyze multiple polymerisation steps without releasing the single-stranded template. The physiological function of Pol I is mainly to support repair of damaged DNA, but it also contributes to connecting Okazaki fragments by deleting RNA primers and replacing the ribonucleotides with DNA. Discovery In 1956, Arthur Kornberg and colleagues discovered Pol I by using Escherichia coli (E. coli) extracts to develop a DNA synthesis assay. The scientists added 14C-labeled thymidine so that a radioactive polymer of DNA, not RNA, could be retrieved. To initiate the purification of DNA polymerase, the researchers added streptomycin sulfate to the E. coli extract. This separated the extract into a nucleic acid-free supernatant (S-fraction) and a nucleic acid-containing precipitate (P-fraction). The P-fraction also contained Pol I and heat-stable factors essential for the DNA synthesis reactions. These factors were identified as nucleoside triphosphates, the building blocks of nucleic acids. The S-fraction contained multiple deoxynucleoside kinases. In 1959, the Nobel Prize in Physiology or Medicine was awarded to Arthur Kornberg and Severo Ochoa "for their discovery of the mechanisms involved in the biological synthesis of ribonucleic acid and deoxyribonucleic acid." Structure and function General structure Pol I mainly functions in the repair of damaged DNA. Structurally, Pol I is a member of the alpha/beta protein superfamily, which encompasses proteins in which α-helices and β-strands occur in irregular sequences. E. coli DNA Pol I consists of multiple domains with three distinct enzymatic activities. Three domains, often referred to as the thumb, finger and palm domains, work together to sustain DNA polymerase activity. A fourth domain, next to the palm domain, contains an exonuclease active site that removes incorrectly incorporated nucleotides in a 3' to 5' direction in a process known as proofreading. A fifth domain contains another exonuclease active site that removes DNA or RNA in a 5' to 3' direction and is essential for RNA primer removal during DNA replication and for removal of DNA during DNA repair processes. E. coli produces five different DNA polymerases: DNA Pol I, DNA Pol II, DNA Pol III, DNA Pol IV, and DNA Pol V. Structural and functional similarity to other polymerases In DNA replication, the leading DNA strand is continuously extended in the direction of replication fork movement, whereas the lagging strand is synthesized discontinuously in the opposite direction as Okazaki fragments. DNA polymerases also cannot initiate DNA chains, so these must be initiated by short RNA or DNA segments known as primers. In order for DNA polymerization to take place, two requirements must be met. First of all, all DNA polymerases must have both a template strand and a primer strand. Unlike RNA polymerases, DNA polymerases cannot initiate synthesis on a bare template strand. Synthesis must be initiated by a short RNA segment, known as an RNA primer, synthesized by primase in the 5' to 3' direction.
DNA synthesis then occurs by the addition of a dNTP to the 3' hydroxyl group at the end of the preexisting DNA strand or RNA primer. Secondly, DNA polymerases can only add new nucleotides to the preexisting strand through hydrogen bonding. Since all DNA polymerases have a similar structure, they all share a two-metal ion-catalyzed polymerase mechanism. One of the metal ions activates the primer 3' hydroxyl group, which then attacks the primary 5' phosphate of the dNTP. The second metal ion will stabilize the leaving oxygen's negative charge, and subsequently chelates the two exiting phosphate groups. The X-ray crystal structures of polymerase domains of DNA polymerases are described in analogy to human right hands. All DNA polymerases contain three domains. The first domain, which is known as the "fingers domain", interacts with the dNTP and the paired template base. The "fingers domain" also interacts with the template to position it correctly at the active site. Known as the "palm domain", the second domain catalyses the reaction of the transfer of the phosphoryl group. Lastly, the third domain, which is known as the "thumb domain", interacts with double stranded DNA. The exonuclease domain contains its own catalytic site and removes mispaired bases. Among the seven different DNA polymerase families, the "palm domain" is conserved in five of these families. The "finger domain" and "thumb domain" are not consistent in each family due to varying secondary structure elements from different sequences. Function Pol I possesses four enzymatic activities: A 5'→3' (forward) DNA-dependent DNA polymerase activity, requiring a 3' primer site and a template strand A 3'→5' (reverse) exonuclease activity that mediates proofreading A 5'→3' (forward) exonuclease activity mediating nick translation during DNA repair. A 5'→3' (forward) RNA-dependent DNA polymerase activity. Pol I operates on RNA templates with considerably lower efficiency (0.1–0.4%) than it does DNA templates, and this activity is probably of only limited biological significance. In order to determine whether Pol I was primarily used for DNA replication or in the repair of DNA damage, an experiment was conducted with a deficient Pol I mutant strain of E. coli. The mutant strain that lacked Pol I was isolated and treated with a mutagen. The mutant strain developed bacterial colonies that continued to grow normally and that also lacked Pol I. This confirmed that Pol I was not required for DNA replication. However, the mutant strain also displayed characteristics which involved extreme sensitivity to certain factors that damaged DNA, like UV light. Thus, this reaffirmed that Pol I was more likely to be involved in repairing DNA damage rather than DNA replication. Mechanism In the replication process, RNase H removes the RNA primer (created by primase) from the lagging strand and then polymerase I fills in the necessary nucleotides between the Okazaki fragments (see DNA replication) in a 5'→3' direction, proofreading for mistakes as it goes. It is a template-dependent enzyme—it only adds nucleotides that correctly base pair with an existing DNA strand acting as a template. It is crucial that these nucleotides are in the proper orientation and geometry to base pair with the DNA template strand so that DNA ligase can join the various fragments together into a continuous strand of DNA. Studies of polymerase I have confirmed that different dNTPs can bind to the same active site on polymerase I. 
Polymerase I is able to actively discriminate between the different dNTPs only after it undergoes a conformational change. Once this change has occurred, Pol I checks for proper geometry and proper alignment of the base pair, formed between the bound dNTP and a matching base on the template strand. Only base pairs with the correct geometry (A=T and G≡C) can fit in the active site. However, it is important to know that one in every 10⁴ to 10⁵ nucleotides is added incorrectly. Nevertheless, Pol I can fix this error in DNA replication using its selective method of active discrimination. Despite its early characterization, it quickly became apparent that polymerase I was not the enzyme responsible for most DNA synthesis: DNA replication in E. coli proceeds at approximately 1,000 nucleotides/second, while the rate of base pair synthesis by polymerase I averages only between 10 and 20 nucleotides/second. Moreover, its cellular abundance of approximately 400 molecules per cell did not correlate with the fact that there are typically only two replication forks in E. coli. Additionally, it is insufficiently processive to copy an entire genome, as it falls off after incorporating only 25–50 nucleotides. Its limited role in replication was established when, in 1969, John Cairns isolated a viable polymerase I mutant that lacked the polymerase activity. Cairns' lab assistant, Paula De Lucia, created thousands of cell-free extracts from E. coli colonies and assayed them for DNA-polymerase activity. The 3,478th clone contained the polA mutant, which was named by Cairns to credit "Paula" [De Lucia]. It was not until the discovery of DNA polymerase III that the main replicative DNA polymerase was finally identified. Research applications DNA polymerase I obtained from E. coli is used extensively for molecular biology research. However, the 5'→3' exonuclease activity makes it unsuitable for many applications. This undesirable enzymatic activity can be simply removed from the holoenzyme to leave a useful molecule called the Klenow fragment, widely used in molecular biology. In fact, the Klenow fragment was used during the first protocols of polymerase chain reaction (PCR) amplification until Thermus aquaticus, the source of the heat-tolerant Taq polymerase I, was discovered in 1976. Exposure of DNA polymerase I to the protease subtilisin cleaves the molecule into a smaller fragment, which retains only the DNA polymerase and proofreading activities. See also DNA polymerase II DNA polymerase III DNA polymerase V References EC 2.7.7 DNA replication Enzymes 1956 in biology
DNA polymerase I
[ "Biology" ]
2,108
[ "DNA replication", "Molecular genetics", "Genetics techniques" ]
505,457
https://en.wikipedia.org/wiki/DnaA
DnaA is a protein that activates initiation of DNA replication in bacteria. Based on the replicon model, a positively acting initiator molecule contacts a particular spot on a circular chromosome called the replicator to start DNA replication. It is a replication initiation factor which promotes the unwinding of DNA at oriC. The DnaA proteins found in all bacteria engage with the DnaA boxes to start chromosomal replication. The onset of the initiation phase of DNA replication is determined by the concentration of DnaA. DnaA accumulates during growth and then triggers the initiation of replication. Replication begins with active DnaA binding to 9-mer (9-bp) repeats upstream of oriC. Binding of DnaA leads to strand separation at the 13-mer repeats. This binding causes the DNA to loop in preparation for melting open by the helicase DnaB. Function DnaA exists mainly in two different forms, the active ATP-form and the inactive ADP-form. The level of active DnaA within a cell is low immediately after a cell has divided. Although the active form of DnaA requires ATP, the formation of the oriC/DnaA complex and subsequent DNA unwinding does not require ATP hydrolysis. The oriC site in E. coli has three AT-rich 13 base pair regions (DUEs) followed by four 9 bp regions with the sequence TTAT(C or A)CA(C or A)A. DnaA molecules bind to the 9 bp regions, which wrap around the proteins, causing the DNA at the AT-rich region to unwind. There are currently 11 DnaA binding sites identified within oriC, to which DnaA binds with differential affinity. When DNA replication is about to commence, DnaA occupies all of the high and low affinity binding sites. The denatured AT-rich region allows for the recruitment of DnaB (helicase), which complexes with DnaC (helicase loader). DnaC helps the helicase to bind to and to properly accommodate the ssDNA at the 13 bp region; this is accomplished by ATP hydrolysis, after which DnaC is released. Single-strand binding proteins (SSBs) stabilize the single DNA strands in order to maintain the replication bubble. DnaB is a 5'→3' helicase, so it travels on the lagging strand. It associates with DnaG (a primase) to form the only primer for the leading strand and to add RNA primers on the lagging strand. The interaction between DnaG and DnaB is necessary to control the length of Okazaki fragments on the lagging strand. DNA polymerase III is then able to start DNA replication. DnaA is made up of four domains: the first is the N-terminal domain that associates with regulatory proteins, the second is a helical linker region, the third is an AAA+ domain that binds ATP, and the fourth is the C-terminal DNA binding domain. DnaA contains two conserved regions: the first is located in the central part of the protein and corresponds to the ATP-binding domain, the second is located in the C-terminal half and is involved in DNA-binding. DnaA mutants The first strains to have the dnaA gene mutated were the temperature-sensitive K-12 strains CRT46 and CRT83, with the corresponding allele designations being dnaA46 and dnaA83. In contrast to dnaA mutants, the PC2 strain has a mutation in the dnaC gene, which codes for the loading factor for the DNA helicase DnaB. Synthesis DnaA has the ability to bind its own promoter. When DnaA binds to its own promoter it blocks RNA polymerase from binding the promoter and inhibits initiation of transcription. In this way, DnaA is able to regulate its own expression. This process is called autoregulation.
Regulation Each cell division cycle triggers a new round of chromosome replication with the accumulation of DnaA, the initiator protein, on the OriC region of DNA. It is crucial to regulate DnaA-ATP monomer interactions with oriC during helicase loading and unwinding of origin DNA for precise timing. DnaA recognition sites in Escherichia coli are arranged in OriC to facilitate staged pre-replication complex assembly, with DnaA interacting with low affinity sites and filling the gaps between high affinity sites as it oligomerizes. There may be numerous gap-filling strategies to link OriC functions to bacterial lifestyles in nature, which may account for the wide variability of OriC DnaA recognition site patterns. The two forms of DnaA, the active ATP-form and the inactive ADP-form, are regulated. The ATP-form is converted to the ADP-form through either regulatory inactivation of DnaA (RIDA), which in turn involves the Hda protein and the β sliding clamp (DnaN), or datA-dependent DnaA-ATP hydrolysis. The ADP-form is converted to the ATP-form by DnaA-reactivating sequences 1 and 2 (DARS1 and DARS2). Regulation of DnaA binding to DNA at OriC Since DNA replication must occur irreversibly and only once per cycle, the binding behavior of DnaA complexes to OriC is highly regulated, and therefore dependent on many other cellular mechanisms. While all OriC sites are bound at replication initiation, there are three high-affinity binding sites (R1, R2, and R4) that are typically occupied by DnaA for the majority of the cell cycle, thus making their binding somewhat less dependent on other events happening within the cell at a given point in time. By contrast, the lower affinity sites are typically only bound to DnaA complexes right before replication begins. There are currently eight identified sites with lower DnaA/OriC binding affinity: R5 (or R5M), I1, I2, I3 and R3, tau2, C1, C2 and C3. Between the R1 and R2 high affinity sites exist the R5M, tau2, I1, and I2 low affinity sites, and C3, C2, I3, and C1 exist between the R2 and R4 sites. The I sites, tau2, C2, and C3 preferentially bind to, and are more efficient at binding, DnaA in its ATP-bound active form (DnaA-ATP) prior to DNA strand separation, whereas the R1-R5 sites and C1 site have not demonstrated a preference for binding with DnaA-ATP over DnaA-ADP. OriC binding with active DnaA-ATP complexes at the lower affinity I sites, as well as the tau2, C2, and C3 sites, is required for the strand separation process to initiate in a time-regulated manner, meaning DnaA-ATP cannot be substituted with inactive DnaA-ADP complexes to initiate replication properly and with sufficient regulation. Recent studies suggest that while OriC sites bound entirely to DnaA-ADP complexes are capable of preparing the cell for DNA replication, they struggle to maintain the healthy and consistent regulation of replication frequency that cells containing OriC sites bound to DnaA-ATP complexes achieve, perhaps explaining why some sites bind preferentially to the active DnaA conformation over the inactive conformation. Two other proteins, the integration host factor (IHF) protein and the DnaA initiator-associating (DiaA) protein, help facilitate the binding of DnaA-ATP complexes to the OriC sites and set the stage for replication initiation to occur.
IHF plays a key functional role in positively regulating the binding of DnaA complexes to the lower affinity OriC sites as the cell prepares for replication, essentially levelling the playing field between the high and low affinity OriC sites in terms of their ability to bind with DnaA complexes. Cooperative binding is thought to be a mechanism in which the high-affinity sites supply the lower-affinity sites within their vicinity with DnaA-ATP complexes in the moments leading up to replication initiation. While DnaA can saturate all OriC binding sites in systems lacking IHF, a much higher concentration of DnaA is needed in the cellular environment for this to be achieved. However, in these situations, cells also experience a loss of synchronization in their replication initiation timing, indicating how important IHF is for maintaining consistent regulation of this process in cells and preventing a lag in the initiation of replication. When IHF is present in a cellular system, it enhances DnaA binding to low affinity OriC sites without any need to increase the baseline concentration of DnaA, further highlighting its importance in maintaining replication initiation timing. Conformationally, IHF assists in promoting the binding of DnaA-ATP complexes to the low affinity OriC sites at the right time by binding to a different site on OriC ahead of replication initiation, causing the DNA to bend in such a way that facilitates efficient binding with DnaA-ATP complexes. Prior to IHF binding to OriC, a different protein, the factor for inversion stimulation (FIS) protein, is bound to DNA for the majority of the cell cycle (with the exception of the events leading up to replication initiation), inhibiting the binding of IHF to DNA. Consequently, the binding of DnaA complexes to the lower affinity OriC sites is also inhibited, thus preventing the chromosomal replication process from starting prematurely and demonstrating how FIS positively regulates the maintenance of consistent cell cycle progression via inhibition. As FIS binding to OriC weakens, IHF begins to bind to OriC, increasing the low affinity sites' ability to bind to DnaA-ATP complexes. The switch from FIS binding to IHF binding is hypothesized to be brought about by the generation of more DnaA-ATP complexes while FIS is still bound to DNA, promoted by the high affinity sites already bound to DnaA; these complexes are recruited to the high affinity region and build up, exerting conformational stress on bound FIS (especially through accumulation at the R2 site, which is closest to the FIS binding site) and thereby weakening its binding to DNA. As a result, IHF can take advantage of the weakened state of FIS binding and bind to its own OriC site, causing DNA to bend and better align the accumulated DnaA-ATP complexes with the low affinity binding sites, thus facilitating their binding with DnaA-ATP. In the absence of the switch-like behavior that occurs with the transition from FIS to IHF binding to DNA, cells are unable to maintain control over the sequence of events that ensures replication initiation happens both irreversibly and only once per cell cycle. DiaA positively regulates the replication initiation timeline by facilitating the binding of DnaA-ATP complexes on OriC sites.
DiaA binds to DnaA in its tetrameric form (consisting of four DiaA protomers (individual proteins) bound to one another), specifically to the first domain of DnaA, in the same region where another protein, the replicative DNA helicase (DnaB), is presumed to bind with DnaA. Due to its tetrameric structure, DiaA has the ability to bind to multiple DnaA-ATP complexes at a time, as each protomer within the homotetramer contains a DnaA-ATP binding site. This beneficial characteristic of DiaA tetramers can aid in promoting the cooperative binding behavior of transferring DnaA-ATP molecules to different sites on the OriC region of DNA as the cell prepares to undergo chromosomal replication. DiaA also negatively regulates the chromosomal replication process by inhibiting the DnaB protein, whose presence and function are required for chromosomal replication, from binding to DnaA-ATP complexes assembled on OriC, thereby helping to preserve the strictly regulated sequence of events needed for a controlled replication process and preventing asynchronous initiation within the overall cell cycle. Thus, taken together, IHF and DiaA, along with the proteins they interact with in their respective binding mechanisms, are both very important for helping DnaA-ATP complexes bind to all the identified binding sites on OriC, including the low affinity sites, in a timely manner that ensures replication initiation occurs irreversibly and only a single time during the cell cycle. Once replication initiation has occurred and DNA has undergone strand separation successfully, a different process commences to make sure DnaA-ATP cannot bind directly to DNA again, mediated by negative regulators of replication initiation: the datA locus and the protein SeqA. When DNA unwinds post-initiation, new replication forks are generated, a process that subsequently leads to the unbinding of DnaA complexes from the OriC sites. DNA's GATC sites within OriC and at the region where the dnaA promoter exists become hemimethylated, and therefore experience a reduced ability to function and be expressed the same way as they would while fully methylated. SeqA is able to physically prevent replication from starting up again prematurely by binding to the hemimethylated GATC sites on OriC, many of which somewhat overlap with a couple of the low affinity DnaA binding sites, as well as IHF's binding site on OriC, essentially shielding IHF and DnaA from binding to OriC. However, the high affinity OriC DnaA complex binding sites are not blocked by SeqA binding to DNA, thus explaining how DnaA stays bound to the three high affinity sites throughout the majority of the cell cycle. When GATC sites are bound to SeqA while hemimethylated, they are also limited in their ability to support synthesis of new DnaA proteins, thus causing the DnaA concentration within the cell to decline post-initiation. Thus, with these sites blocked by SeqA, DnaA-ATP binding to some of the lower affinity sites is not possible for a combination of reasons. In studies performed with strains lacking the ability to produce SeqA, cells were unable to synchronously initiate replication once per cycle, mirroring the effects of what happens when cells lack IHF. Since the binding of the low affinity sites on OriC is basically the key event that kick-starts replication initiation and DNA's unwinding, SeqA's blocking of DnaA-ATP complex binding during the majority of the cell cycle is vital for keeping cells healthy by maintaining a consistent cycle.
DnaA protein structure There are four domains within the DnaA protein. An initial comparison of the Escherichia coli and Bacillus subtilis proteins led to the discovery of this domain structure, which revealed a relatively conserved N-terminus and a largely conserved C-terminus separated by a region that was mostly variable. As an example, the enterobacterial proteins have nearly identical N- and C-terminal sequences, but are characterized by numerous amino acid substitutions, deletions, and insertions in the variable region. There is an AAA+ family ATPase motif and an independent DNA binding domain in the C-terminal region. The structure of Escherichia coli domain IV complexed with a DnaA-box was determined by NMR. As a result, it was confirmed that DNA binding is mediated by a combination of a helix-turn-helix motif and a basic loop. When bound to ATP, but not to ADP, DnaA forms a super-helical structure with four monomers per turn. The structure of domain I has been determined from three additional bacterial species and Escherichia coli by NMR. Autoregulation of DnaA protein synthesis The research on dnaA(Ts) mutants provided the first proof that the dnaA gene is autoregulated. DnaA protein is still produced at non-permissive temperatures where it is inactive, but in some mutants it can be made active again by returning to a temperature that is conducive to growth. This reversible initiation capacity, which was larger than anticipated given the mass gain of the culture, could be seen in the absence of protein synthesis at the permissive temperature, and suggested that DnaA protein synthesis was derepressed at the high growth temperature. These results prompted a thorough investigation of the dnaA46 mutant under permissive, intermediate, and non-permissive growth conditions. The study's findings revealed that as growth temperature increased, the DnaA46 protein's activity decreased, leading to progressively decreasing DNA and origin concentrations at intermediate temperatures. An increase in initiation capacity was seen concurrently with a decrease in DnaA protein activity. Based on these observations, Hansen and Rasmussen (1977) argued that the DnaA protein had a positive effect in replication initiation and a negative role in its own synthesis. Two promoters providing transcripts entering the dnaA gene were found as a result of sequencing the dnaA promoter region and the dnaA gene. The dnaA promoter region has nine GATC sites within 225 base pairs, and a sequence similar to the repetitions (DnaA-boxes) in the oriC region was found between the two promoters. According to several studies, the DnaA protein negatively regulates both promoters. In these studies, it was discovered that dnaA transcription was upregulated 4- to 5-fold at non-permissive temperatures in dnaA(Ts) mutants and repressed by the same amount when DnaA protein was overproduced. The autoregulation of the dnaA gene requires the DnaA-box. The sequence of the dnaA2p promoter region has some intriguing characteristics. This promoter contains two GATC sites, one in the −10 sequence and the other in the −35 sequence, and both in vivo and in vitro, methylation increases transcription from this promoter by a factor of two. In addition, DnaA protein binds to regions upstream of the dnaA2p promoter with high affinity. See also Origin recognition complex References Further reading External links Biomolecules DNA replication
DnaA
[ "Chemistry", "Biology" ]
3,737
[ "Genetics techniques", "Natural products", "Organic compounds", "DNA replication", "Molecular genetics", "Structural biology", "Biomolecules", "Biochemistry", "Molecular biology" ]
505,470
https://en.wikipedia.org/wiki/Cooperativity
Cooperativity is a phenomenon displayed by systems involving identical or near-identical elements, which act dependently of each other, relative to a hypothetical standard non-interacting system in which the individual elements are acting independently. One manifestation of this is enzymes or receptors that have multiple binding sites where the affinity of the binding sites for a ligand is apparently increased, positive cooperativity, or decreased, negative cooperativity, upon the binding of a ligand to a binding site. For example, when an oxygen molecule binds to one of hemoglobin's four binding sites, the affinity to oxygen of the three remaining available binding sites increases; i.e. oxygen is more likely to bind to a hemoglobin bound to one oxygen than to an unbound hemoglobin. This is referred to as cooperative binding. We also see cooperativity in large chain molecules made of many identical (or nearly identical) subunits (such as DNA, proteins, and phospholipids), when such molecules undergo phase transitions such as melting, unfolding or unwinding. This is referred to as subunit cooperativity. However, the definition of cooperativity based on apparent increase or decrease in affinity to successive ligand binding steps is problematic, as the concept of "energy" must always be defined relative to a standard state. When we say that the affinity is increased upon binding of one ligand, it is empirically unclear what we mean since a non-cooperative binding curve is required to rigorously define binding energy and hence also affinity. A much more general and useful definition of positive cooperativity is: A process involving multiple identical incremental steps, in which intermediate states are statistically underrepresented relative to a hypothetical standard system (null hypothesis) where the steps occur independently of each other. Likewise, a definition of negative cooperativity would be a process involving multiple identical incremental steps, in which the intermediate states are overrepresented relative to a hypothetical standard state in which individual steps occur independently. These latter definitions for positive and negative cooperativity easily encompass all processes which we call "cooperative", including conformational transitions in large molecules (such as proteins) and even psychological phenomena of large numbers of people (which can act independently of each other, or in a co-operative fashion). Cooperative binding When a substrate binds to one enzymatic subunit, the rest of the subunits are stimulated and become active. Ligands can either have positive cooperativity, negative cooperativity, or non-cooperativity. An example of positive cooperativity is the binding of oxygen to hemoglobin. One oxygen molecule can bind to the ferrous iron of a heme molecule in each of the four chains of a hemoglobin molecule. Deoxy-hemoglobin has a relatively low affinity for oxygen, but when one molecule binds to a single heme, the oxygen affinity increases, allowing the second molecule to bind more easily, and the third and fourth even more easily. The oxygen affinity of 3-oxy-hemoglobin is ~300 times greater than that of deoxy-hemoglobin. This behavior leads the affinity curve of hemoglobin to be sigmoidal, rather than hyperbolic as with the monomeric myoglobin. By the same process, the ability for hemoglobin to lose oxygen increases as fewer oxygen molecules are bound. See also Oxygen-hemoglobin dissociation curve.
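The following minimal Python sketch illustrates this sigmoidal behaviour using the Hill equation discussed below. The parameter values (a Hill coefficient of 2.8 and a half-saturation pressure of 26) are illustrative assumptions loosely inspired by hemoglobin, not measured constants, and the EC10/EC90 estimation rule anticipates the Hill coefficient section further on.

import numpy as np

def hill_saturation(ligand, kd, n):
    """Fractional saturation for ligand concentration, half-saturation
    constant kd and Hill coefficient n (n > 1 means positive cooperativity)."""
    return ligand**n / (kd**n + ligand**n)

po2 = np.linspace(0.1, 100, 1000)
y_hb = hill_saturation(po2, kd=26.0, n=2.8)   # sigmoidal (cooperative)
y_mb = hill_saturation(po2, kd=26.0, n=1.0)   # hyperbolic (non-cooperative)

# Recover the Hill coefficient from the curve via the EC10/EC90 rule;
# y_hb is monotonically increasing, so searchsorted locates the thresholds.
ec10 = po2[np.searchsorted(y_hb, 0.1)]
ec90 = po2[np.searchsorted(y_hb, 0.9)]
print("estimated n_H:", np.log(81) / np.log(ec90 / ec10))   # ~2.8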
Negative cooperativity means that the opposite will be true; as ligands bind to the protein, the protein's affinity for the ligand will decrease, i.e. it becomes less likely for the ligand to bind to the protein. An example of this occurring is the relationship between glyceraldehyde-3-phosphate and the enzyme glyceraldehyde-3-phosphate dehydrogenase. Homotropic cooperativity refers to the fact that the molecule causing the cooperativity is the one that will be affected by it. Heterotropic cooperativity is where a third party substance causes the change in affinity. Both homotropic and heterotropic cooperativity can be of either the positive or the negative type, depending on whether they support or oppose further binding of ligand molecules to the enzyme. Subunit cooperativity Cooperativity is not only a phenomenon of ligand binding, but also applies anytime energetic interactions make it easier or more difficult for something to happen involving multiple units as opposed to with single units. (That is, easier or more difficult compared with what is expected when only accounting for the addition of multiple units). For example, unwinding of DNA involves cooperativity: Portions of DNA must unwind in order for DNA to carry out replication, transcription and recombination. Positive cooperativity among adjacent DNA nucleotides makes it easier to unwind a whole group of adjacent nucleotides than it is to unwind the same number of nucleotides spread out along the DNA chain. The cooperative unit size is the number of adjacent bases that tend to unwind as a single unit due to the effects of positive cooperativity. This phenomenon applies to other types of chain molecules as well, such as the folding and unfolding of proteins and in the "melting" of phospholipid chains that make up the membranes of cells. Subunit cooperativity is measured on the relative scale known as Hill's Constant. Hill equation A simple and widely used model for molecular interactions is the Hill equation, which provides a way to quantify cooperative binding by describing the fraction of saturated ligand binding sites as a function of the ligand concentration. Hill coefficient The Hill coefficient is a measure of ultrasensitivity (i.e. how steep the response curve is). From an operational point of view the Hill coefficient can be estimated as: $n_H = \frac{\log 81}{\log(EC_{90}/EC_{10})}$, where $EC_{10}$ and $EC_{90}$ are the input values needed to produce 10% and 90% of the maximal response, respectively. Response coefficient Global sensitivity measures such as the Hill coefficient do not characterise the local behaviours of the s-shaped curves. Instead, these features are well captured by the response coefficient measure defined as: $R(x) = \frac{\partial \ln y}{\partial \ln x} = \frac{x}{y}\,\frac{\partial y}{\partial x}$. In systems biology, such responses are referred to as elasticities. Link between Hill coefficient and response coefficient Altszyler et al. (2017) have shown that these ultrasensitivity measures can be linked by the following equation: $n_H = \langle R(x) \rangle_{[\log EC_{10},\,\log EC_{90}]}$, where $\langle x \rangle_{[a,b]}$ denotes the mean value of the variable $x$ over the range $[a,b]$. Ultrasensitivity in function composition Consider two coupled ultrasensitive modules, disregarding effects of sequestration of molecular components between layers. In this case, the expression for the system's dose-response curve, $F(x)$, results from the mathematical composition of the functions, $f_i$, which describe the input/output relationship of the isolated modules: $F(x) = f_2(f_1(x))$. Brown et al. (1997) have shown that the local ultrasensitivity of the different layers combines multiplicatively: $R_F(x) = R_{f_2}(f_1(x)) \cdot R_{f_1}(x)$. In connection with this result, Ferrell et al.
(1997) showed, for Hill-type modules, that the overall cascade global ultrasensitivity had to be less than or equal to the product of the global ultrasensitivity estimations of each cascade's layer, $n_H \le n_{H_1} \cdot n_{H_2}$, where $n_{H_1}$ and $n_{H_2}$ are the Hill coefficients of modules 1 and 2 respectively. Altszyler et al. (2017) have shown that the cascade's global ultrasensitivity can be analytically calculated from the Hill input working range of the composite system, i.e. the input values for the i-th layer such that the last layer (corresponding to layer 2 in this case) reaches 10% and 90% of its maximal output level. It follows that the system's Hill coefficient can be written as the product of two factors, $\nu_1$ and $\nu_2$, which characterize local average sensitivities over the relevant input region for each layer: $n_H = \nu_1 \nu_2$. For the more general case of a cascade of N modules, the Hill coefficient can be expressed as: $n_H = \prod_{i=1}^{N} \nu_i$. Supramultiplicativity Several authors have reported the existence of supramultiplicative behavior in signaling cascades (i.e. the ultrasensitivity of the combination of layers is higher than the product of individual ultrasensitivities), but in many cases the ultimate origin of supramultiplicativity remained elusive. The framework of Altszyler et al. (2017) naturally suggested a general scenario where supramultiplicative behavior could take place. This could occur when, for a given module, the corresponding Hill input working range was located in an input region with local ultrasensitivities higher than the global ultrasensitivity of the respective dose-response curve. References Biomolecules Cell signaling Chemical bonding Enzyme kinetics Receptors
Cooperativity
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
1,815
[ "Natural products", "Biochemistry", "Enzyme kinetics", "Signal transduction", "Organic compounds", "Receptors", "Condensed matter physics", "nan", "Biomolecules", "Structural biology", "Chemical bonding", "Chemical kinetics", "Molecular biology" ]
505,501
https://en.wikipedia.org/wiki/DNA%20polymerase%20III%20holoenzyme
DNA polymerase III holoenzyme is the primary enzyme complex involved in prokaryotic DNA replication. It was discovered by Thomas Kornberg (son of Arthur Kornberg) and Malcolm Gefter in 1970. The complex has high processivity (i.e. the number of nucleotides added per binding event) and, specifically referring to the replication of the E. coli genome, works in conjunction with four other DNA polymerases (Pol I, Pol II, Pol IV, and Pol V). Being the primary holoenzyme involved in replication activity, the DNA Pol III holoenzyme also has proofreading capabilities that correct replication mistakes by means of exonuclease activity reading 3'→5' and synthesizing 5'→3'. DNA Pol III is a component of the replisome, which is located at the replication fork.

Components The replisome is composed of the following:
2 DNA Pol III enzymes, each comprising α, ε and θ subunits. (It has been proven that there is a third copy of Pol III at the replisome.)
 the α subunit (encoded by the dnaE gene) has the polymerase activity.
 the ε subunit (dnaQ) has 3'→5' exonuclease activity.
 the θ subunit (holE) stimulates the ε subunit's proofreading.
2 β units (dnaN) which act as sliding DNA clamps; they keep the polymerase bound to the DNA.
2 τ units (dnaX) which act to dimerize two of the core enzymes (α, ε, and θ subunits).
1 γ unit (also dnaX) which acts as a clamp loader for the lagging strand Okazaki fragments, helping the two β subunits form a unit and bind to DNA. The γ complex is made up of 5 subunits: 3 γ subunits, 1 δ subunit (holA), and 1 δ' subunit (holB). The δ subunit is involved in copying of the lagging strand.
χ (holC) and ψ (holD) which form a 1:1 complex and bind to γ or τ. χ can also mediate the switch from RNA primer to DNA.

Activity DNA polymerase III synthesizes base pairs at a rate of around 1000 nucleotides per second.
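As a rough back-of-the-envelope check of this rate, the sketch below estimates the chromosome replication time; the genome size and the assumption of two replication forks are supplied here for illustration and are not taken from the article.

# Back-of-the-envelope replication time for the E. coli chromosome.
genome_bp = 4_600_000      # approximate E. coli genome size (assumed figure)
rate_nt_per_s = 1000       # per-fork synthesis rate quoted above
forks = 2                  # bidirectional replication from a single origin

seconds = genome_bp / (rate_nt_per_s * forks)
print(f"about {seconds / 60:.0f} minutes")   # about 38 minutes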
DNA Pol III activity begins after strand separation at the origin of replication. Because DNA synthesis cannot start de novo, an RNA primer, complementary to part of the single-stranded DNA, is synthesized by primase (an RNA polymerase):

 ("!" for RNA, "$" for DNA, "*" for polymerase)

          -------->
          * * * *
 ! ! ! !   _ _ _ _ _ _ _ _
          |      RNA      |  <--ribose (sugar)-phosphate backbone
 G U A U  |      Pol      |  <--RNA primer
 * * * *  |_ _ _ _ _ _ _ _|  <--hydrogen bonding
 C A T A G C A T C C         <--template ssDNA (single-stranded DNA)
 _ _ _ _ _ _ _ _ _ _         <--deoxyribose (sugar)-phosphate backbone
 $ $ $ $ $ $ $ $ $ $

Addition onto 3'OH As replication progresses and the replisome moves forward, DNA polymerase III arrives at the RNA primer and begins replicating the DNA, adding onto the 3'OH of the primer:

 * * * *
 ! ! ! !   _ _ _ _ _ _ _ _
          |      DNA      |  <--deoxyribose (sugar)-phosphate backbone
 G U A U  |      Pol      |  <--RNA primer
 * * * *  |_ _ _ III _ _ _|  <--hydrogen bonding
 C A T A G C A T C C         <--template ssDNA (single-stranded DNA)
 _ _ _ _ _ _ _ _ _ _         <--deoxyribose (sugar)-phosphate backbone
 $ $ $ $ $ $ $ $ $ $

Synthesis of DNA DNA polymerase III will then synthesize a continuous or discontinuous strand of DNA, depending on whether this is occurring on the leading or lagging strand (Okazaki fragment) of the DNA. DNA polymerase III has a high processivity and therefore synthesizes DNA very quickly. This high processivity is due in part to the β-clamps that "hold" onto the DNA strands.

            ----------->
            * * * *
 ! ! ! ! $ $ $ $ $ $
 _ _ _ _ _ _ _ _ _ _   _ _ _ _
                      |  DNA  |  <--deoxyribose (sugar)-phosphate backbone
 G U A U C G T A G G  |  Pol  |  <--RNA primer
 * * * * * * * * * *  |_III_ _|  <--hydrogen bonding
 C A T A G C A T C C             <--template ssDNA (single-stranded DNA)
 _ _ _ _ _ _ _ _ _ _             <--deoxyribose (sugar)-phosphate backbone
 $ $ $ $ $ $ $ $ $ $

Removal of primer After replication of the desired region, the RNA primer is removed by DNA polymerase I via the process of nick translation. The removal of the RNA primer allows DNA ligase to ligate the DNA-DNA nick between the new fragment and the previous strand. DNA polymerase I and III, along with many other enzymes, are all required for the high-fidelity, high-processivity replication of DNA. See also Beta clamp DNA polymerase DNA replication References External links Overview at Oregon State University Clamping down on pathogenic bacteria – how to shut down a key DNA polymerase complex EC 2.7.7 DNA replication Enzymes Protein complexes
DNA polymerase III holoenzyme
[ "Biology" ]
1,218
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
505,575
https://en.wikipedia.org/wiki/Phosphodiester%20bond
In chemistry, a phosphodiester bond occurs when exactly two of the hydroxyl groups (−OH) in phosphoric acid react with hydroxyl groups on other molecules to form two ester bonds. The "bond" involves the linkage C−O−P−O−C. Discussion of phosphodiesters is dominated by their prevalence in DNA and RNA, but phosphodiesters occur in other biomolecules, e.g. acyl carrier proteins, phospholipids and the cyclic forms of GMP and AMP (cGMP and cAMP). Phosphodiester Backbone of DNA and RNA Phosphodiester bonds make up the backbones of DNA and RNA. In the phosphodiester bonds of nucleic acids, a phosphate is attached to the 5' carbon of one nucleoside and to the 3' carbon of the adjacent nucleoside. Specifically, it is the phosphodiester bonds that link the 3' carbon atom of one sugar molecule and the 5' carbon atom of another (hence the name 3', 5' phosphodiester linkage used with reference to this kind of bond in DNA and RNA chains). The involved saccharide groups are deoxyribose in DNA and ribose in RNA. In order for the phosphodiester bond to form, joining the nucleosides, the tri-phosphate or di-phosphate forms of the nucleotide building blocks are broken apart to give off the energy required to drive the enzyme-catalyzed reaction. In DNA replication, for example, formation of the phosphodiester bonds is catalyzed by a DNA polymerase enzyme, using a pair of magnesium cations and other supporting structures. Formation of the bond occurs not only in DNA and RNA replication, but also in the repair and recombination of nucleic acids, and may require the involvement of various polymerases, primers, and/or ligases. During the replication of DNA, for example, the DNA polymerase I leaves behind a gap between the phosphates in the newly formed backbone. DNA ligase is able to form a phosphodiester bond between the nucleotides on each side of the gap. Phosphodiesters are negatively charged at pH 7. The negative charge attracts histones, metal cations such as magnesium, and polyamines. Repulsion between these negative charges influences the conformation of the polynucleic acids. Breaking the Phosphodiester Bond Hydrolysis (breaking) of phosphodiester bonds can be promoted in several ways. Phosphodiesterases are enzymes that catalyze the hydrolysis of the phosphodiester bond. These enzymes are involved in repairing DNA and RNA sequences, nucleotide salvage, and in the conversion of cGMP and cAMP to GMP and AMP, respectively. Hydrolysis of the phosphodiester bond also occurs chemically and spontaneously, without the aid of enzymes. For example, simple ribose (in RNA) has one more hydroxyl group than deoxyribose (in DNA), making the former less stable and more susceptible to alkaline hydrolysis, wherein relatively high pH conditions induce the breaking of the phosphodiester linkage between two ribonucleotides. The relative instability of RNA under hydroxyl attack of its phosphodiester bonds makes it inadequate for the storage of genomic information, but contributes to its usefulness in transcription and translation. See also Phosphodiesterase Phosphodiesterase inhibitor DNA replication, DNA, ATP Teichoic acid, DNase I PDE5 Nick (DNA) References Organophosphates Molecular biology Biology and pharmacology of chemical elements
Phosphodiester bond
[ "Chemistry", "Biology" ]
787
[ "Pharmacology", "Properties of chemical elements", "Biology and pharmacology of chemical elements", "Molecular biology", "Biochemistry" ]
506,276
https://en.wikipedia.org/wiki/Equivalent%20dose
Equivalent dose is a dose quantity H representing the stochastic health effects of low levels of ionizing radiation on the human body, i.e. the probability of radiation-induced cancer and genetic damage. It is derived from the physical quantity absorbed dose, but also takes into account the biological effectiveness of the radiation, which is dependent on the radiation type and energy. In the SI system of units, the unit of measure is the sievert (Sv). Application To enable consideration of stochastic health risk, calculations are performed to convert the physical quantity absorbed dose into equivalent dose, the details of which depend on the radiation type. For applications in radiation protection and dosimetry assessment, the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) have published recommendations and data on how to calculate equivalent dose from absorbed dose. Equivalent dose is designated by the ICRP as a "limiting quantity", used to specify exposure limits to ensure that "the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". This is a calculated value, as equivalent dose cannot be practically measured, and the purpose of the calculation is to generate a value of equivalent dose for comparison with observed health effects. Calculation Equivalent dose HT is calculated using the mean absorbed dose deposited in body tissue or organ T, multiplied by the radiation weighting factor WR which is dependent on the type and energy of the radiation R. The radiation weighting factor represents the relative biological effectiveness of the radiation and modifies the absorbed dose to take account of the different biological effects of various types and energies of radiation. The ICRP has assigned radiation weighting factors to specified radiation types dependent on their relative biological effectiveness: 1 for photons, electrons and muons; 2 for protons and charged pions; 20 for alpha particles, fission fragments and heavy ions; and a continuous function of neutron energy for neutrons. Calculating equivalent dose from absorbed dose: $H_T = \sum_R W_R \cdot D_{T,R}$, where $H_T$ is the equivalent dose in sieverts (Sv) absorbed by tissue T, $D_{T,R}$ is the absorbed dose in grays (Gy) in tissue T by radiation type R and $W_R$ is the radiation weighting factor defined by regulation. Thus for example, an absorbed dose of 1 Gy by alpha particles will lead to an equivalent dose of 20 Sv, and an equivalent dose of radiation is estimated to have the same biological effect as an equal amount of absorbed dose of gamma rays, which is given a weighting factor of 1. To obtain the equivalent dose for a mix of radiation types and energies, a sum is taken over all types of radiation energy doses. This takes into account the contributions of the varying biological effect of different radiation types.
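A minimal Python sketch of this weighted sum follows; the weighting factors are the ICRP 103 values quoted above, while the absorbed-dose inputs are made-up illustrative figures.

# Equivalent dose H_T = sum over radiation types R of W_R * D_T,R.
WEIGHTING_FACTORS = {"photon": 1, "electron": 1, "proton": 2, "alpha": 20}

def equivalent_dose_sv(absorbed_doses_gy):
    """absorbed_doses_gy maps radiation type -> mean absorbed dose in gray."""
    return sum(WEIGHTING_FACTORS[r] * d for r, d in absorbed_doses_gy.items())

# Hypothetical mixed field: 0.1 Gy of gamma photons plus 0.01 Gy of alphas
print(round(equivalent_dose_sv({"photon": 0.1, "alpha": 0.01}), 3), "Sv")  # 0.3 Sv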
History The concept of equivalent dose was developed in the 1950s. In its 1990 recommendations, the ICRP revised the definitions of some radiation protection quantities, and provided new names for the revised quantities. Some regulators, notably the International Committee for Weights and Measures (CIPM) and the US Nuclear Regulatory Commission, continue to use the old terminology of quality factors and dose equivalent, even though the underlying calculations have changed. Future use At the ICRP 3rd International Symposium on the System of Radiological Protection in October 2015, ICRP Task Group 79 reported on the "Use of Effective Dose as a Risk-related Radiological Protection Quantity". This included a proposal to discontinue use of equivalent dose as a separate protection quantity. This would avoid confusion between equivalent dose, effective dose and dose equivalent, and would use absorbed dose in Gy as a more appropriate quantity for limiting deterministic effects to the eye lens, skin, hands and feet. These proposals will need to go through the following stages: discussion within ICRP Committees, revision of the report by the Task Group, reconsideration by the Committees and Main Commission, and public consultation. Units The SI unit of measure for equivalent dose is the sievert, defined as one joule per kilogram. In the United States the roentgen equivalent man (rem), equal to 0.01 sievert, is still in common use, although regulatory and advisory bodies are encouraging transition to the sievert. Related quantities Limitation of equivalent dose calculation Equivalent dose HT is used for assessing stochastic health risk due to external radiation fields that penetrate uniformly through the whole body. However it needs further corrections when the field is applied only to part(s) of the body, or non-uniformly, to measure the overall stochastic health risk to the body. To enable this, a further dose quantity called effective dose must be used to take into account the varying sensitivity of different organs and tissues to radiation. Relationship to committed dose Whilst equivalent dose is used for the stochastic effects of external radiation, a similar approach is used for internal, or committed dose. The ICRP defines an equivalent dose quantity for individual committed dose, which is used to measure the effect of inhaled or ingested radioactive materials. A committed dose from an internal source represents the same effective risk as the same amount of equivalent dose applied uniformly to the whole body from an external source. Committed equivalent dose, $H_T(\tau)$, is the time integral of the equivalent dose rate in a particular tissue or organ that will be received by an individual following intake of radioactive material into the body by a Reference Person, where $\tau$ is the integration time in years. This refers specifically to the dose in a specific tissue or organ, in a similar way to external equivalent dose. The ICRP states "Radionuclides incorporated in the human body irradiate the tissues over time periods determined by their physical half-life and their biological retention within the body. Thus they may give rise to doses to body tissues for many months or years after the intake. The need to regulate exposures to radionuclides and the accumulation of radiation dose over extended periods of time has led to the definition of committed dose quantities". Equivalent dose versus dose equivalent Equivalent dose and dose equivalent are easily confused, but they are distinct concepts. Although the CIPM definition states that the linear energy transfer function of the ICRU is used in calculating the biological effect, the ICRP in 1990 developed the "protection" dose quantities named effective and equivalent dose, which are calculated from more complex computational models and are distinguished by not having the phrase dose equivalent in their name. Prior to 1990, the ICRP used the term "dose equivalent" to refer to the absorbed dose at a point multiplied by the quality factor at that point, where the quality factor was a function of linear energy transfer (LET).
Currently, the ICRP's definition of "equivalent dose" represents an average dose over an organ or tissue, and radiation weighting factors are used instead of quality factors. The phrase dose equivalent is only used for operational quantities which use Q for calculation, and the following are defined as such by the ICRU and ICRP: ambient dose equivalent, directional dose equivalent, and personal dose equivalent. In the US there are further differently named dose quantities which are not part of the ICRP system of quantities. Use of old factors The International Committee for Weights and Measures (CIPM) and the US Nuclear Regulatory Commission continue to use the old terminology of quality factors and dose equivalent. The NRC quality factors are independent of linear energy transfer, though not always equal to the ICRP radiation weighting factors. The NRC's definition of dose equivalent is "the product of the absorbed dose in tissue, quality factor, and all other necessary modifying factors at the location of interest." However, it is apparent from their definition of effective dose equivalent that "all other necessary modifying factors" excludes the tissue weighting factor. The radiation weighting factors for neutrons are also different between the US NRC and the ICRP. Dosimetry reports Cumulative equivalent dose due to external whole-body exposure is normally reported to nuclear energy workers in regular dosimetry reports. In the US, three different equivalent doses are typically reported: deep-dose equivalent (DDE), shallow dose equivalent (SDE), and eye dose equivalent. See also Banana equivalent dose Becquerel Counts per minute Curie Gray (unit) Ionizing radiation units Ionisation chamber Rad (unit) Roentgen (unit) Roentgen equivalent man Sievert References External links Dose equivalent - glossary of the European Nuclear Society - "The confusing world of radiation dosimetry" - M.A. Boyd, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems. Radioactivity quantities Radiobiology Radiation protection
Equivalent dose
[ "Physics", "Chemistry", "Mathematics", "Biology" ]
1,698
[ "Physical quantities", "Quantity", "Radiobiology", "Radioactivity quantities", "Radioactivity" ]
506,330
https://en.wikipedia.org/wiki/3SUM
In computational complexity theory, the 3SUM problem asks if a given set of real numbers contains three elements that sum to zero. A generalized version, k-SUM, asks the same question on k elements, rather than simply 3. 3SUM can be easily solved in $O(n^2)$ time, and matching lower bounds are known in some specialized models of computation. It was conjectured that any deterministic algorithm for the 3SUM requires $\Omega(n^2)$ time. In 2014, the original 3SUM conjecture was refuted by Allan Grønlund and Seth Pettie who gave a deterministic algorithm that solves 3SUM in $O(n^2 / (\log n / \log \log n)^{2/3})$ time. Additionally, Grønlund and Pettie showed that the 4-linear decision tree complexity of 3SUM is $O(n^{3/2} \sqrt{\log n})$. These bounds were subsequently improved. The current best known algorithm for 3SUM runs in $O(n^2 (\log \log n)^{O(1)} / \log^2 n)$ time. Kane, Lovett, and Moran showed that the 6-linear decision tree complexity of 3SUM is $O(n \log^2 n)$. The latter bound is tight (up to a logarithmic factor). It is still conjectured that 3SUM is unsolvable in $O(n^{2 - \Omega(1)})$ expected time. When the elements are integers in the range $[-N, \ldots, N]$, 3SUM can be solved in $O(n + N \log N)$ time by representing the input set as a bit vector, computing the set of all pairwise sums as a discrete convolution using the fast Fourier transform, and finally comparing this set to $-S$.

Quadratic algorithm Suppose the input array is $S[0..n-1]$. In integer (word RAM) models of computing, 3SUM can be solved in $O(n^2)$ time on average by inserting each number into a hash table, and then, for each index $i$ and $j$, checking whether the hash table contains the integer $-(S[i]+S[j])$. It is also possible to solve the problem in the same time in a comparison-based model of computing or real RAM, for which hashing is not allowed. The algorithm below first sorts the input array and then tests all possible pairs in a careful order that avoids the need to binary search for the pairs in the sorted list, achieving worst-case $O(n^2)$ time, as follows.

 sort(S);
 for i = 0 to n - 2 do
     a = S[i];
     start = i + 1;
     end = n - 1;
     while (start < end) do
         b = S[start]
         c = S[end];
         if (a + b + c == 0) then
             output a, b, c;
             // Continue search for all triplet combinations summing to zero.
             // We need to update both end and start together since the array values are distinct.
             start = start + 1;
             end = end - 1;
         else if (a + b + c > 0) then
             end = end - 1;
         else
             start = start + 1;
         end
     end
 end

The following example shows this algorithm's execution on a small sorted array. Current values of a are shown in red, values of b and c are shown in magenta.

 -25 -10 -7 -3 2 4 8 10  (a+b+c==-25)
 -25 -10 -7 -3 2 4 8 10  (a+b+c==-22)
 . . .
 -25 -10 -7 -3 2 4 8 10  (a+b+c==-7)
 -25 -10 -7 -3 2 4 8 10  (a+b+c==-7)
 -25 -10 -7 -3 2 4 8 10  (a+b+c==-3)
 -25 -10 -7 -3 2 4 8 10  (a+b+c==2)
 -25 -10 -7 -3 2 4 8 10  (a+b+c==0)

The correctness of the algorithm can be seen as follows. Suppose we have a solution a + b + c = 0. Since the pointers only move in one direction, we can run the algorithm until the leftmost pointer points to a. Run the algorithm until either one of the remaining pointers points to b or c, whichever occurs first. Then the algorithm will run until the last pointer points to the remaining term, giving the affirmative solution.
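A runnable sketch of the hash-based average-case approach described above, written in Python; as in the worked trace, it assumes the input values are distinct.

def three_sum(s):
    """Return a triple of values summing to zero, or None.
    Average O(n^2): hash the values, then test every pair."""
    index_of = {v: i for i, v in enumerate(s)}   # assumes distinct values
    n = len(s)
    for i in range(n - 1):
        for j in range(i + 1, n):
            k = index_of.get(-(s[i] + s[j]))
            if k is not None and k != i and k != j:
                return s[i], s[j], s[k]
    return None

print(three_sum([-25, -10, -7, -3, 2, 4, 8, 10]))   # (-10, 2, 8)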
Variants Non-zero sum Instead of looking for numbers whose sum is 0, it is possible to look for numbers whose sum is any constant C. The simplest way would be to modify the original algorithm to search the hash table for the integer $C - (S[i] + S[j])$. Another method: Subtract C/3 from all elements of the input array. In the modified array, find 3 elements whose sum is 0. For example, if A=[1,2,3,4] and if you are asked to find 3SUM for C=4, then subtract 4/3 from all the elements of A, and solve it in the usual 3sum way, i.e., $A' = [1 - 4/3,\ 2 - 4/3,\ 3 - 4/3,\ 4 - 4/3] = [-1/3,\ 2/3,\ 5/3,\ 8/3]$. Three different arrays Instead of searching for the 3 numbers in a single array, we can search for them in 3 different arrays. I.e., given three arrays X, Y and Z, find three numbers $x \in X$, $y \in Y$, $z \in Z$, such that $x + y + z = 0$. Call the 1-array variant 3SUM×1 and the 3-array variant 3SUM×3. Given a solver for 3SUM×1, the 3SUM×3 problem can be solved in the following way (assuming all elements are integers): For every element in X, Y and Z, set: $x' = 10x + 1$, $y' = 10y + 2$, $z' = 10z - 3$. Let S be a concatenation of the arrays X', Y' and Z'. Use the 3SUM×1 oracle to find three elements $a', b', c' \in S$ such that $a' + b' + c' = 0$. Return $a = (a'-1)/10$, $b = (b'-2)/10$, $c = (c'+3)/10$. By the way we transformed the arrays, it is guaranteed that $a' \in X'$, $b' \in Y'$, $c' \in Z'$, since the offsets can cancel modulo 10 only when one element is taken from each array. Convolution sum Instead of looking for arbitrary elements of the array such that $S[i] + S[j] = S[k]$, the convolution 3sum problem (Conv3SUM) looks for elements in specific locations: $S[i] + S[j] = S[i+j]$. Reduction from Conv3SUM to 3SUM Given a solver for 3SUM, the Conv3SUM problem can be solved in the following way. Define a new array T, such that for every index i: $T[i] = 2nS[i] + i$ (where n is the number of elements in the array, and the indices run from 0 to n-1). Solve 3SUM on the array T. Correctness proof: If in the original array there is a triple with $S[i] + S[j] = S[i+j]$, then $T[i] + T[j] = 2n(S[i] + S[j]) + (i + j) = T[i+j]$, so this solution will be found by 3SUM on T. Conversely, if in the new array there is a triple with $T[p] + T[q] = T[r]$, then $2n(S[p] + S[q]) + (p + q) = 2nS[r] + r$. Because $0 \le p + q \le 2n - 2$ and $0 \le r \le n - 1$, necessarily $p + q = r$ and $S[p] + S[q] = S[r]$, so this is a valid solution for Conv3SUM on S. Reduction from 3SUM to Conv3SUM Given a solver for Conv3SUM, the 3SUM problem can be solved in the following way. The reduction uses a hash function. As a first approximation, assume that we have a linear hash function, i.e. a function h such that: $h(x) + h(y) = h(x + y)$. Suppose that all elements are integers in the range: 0...N-1, and that the function h maps each element to an element in the smaller range of indices: 0...n-1. Create a new array T and send each element of S to its hash value in T, i.e., for every x in S: $T[h(x)] = x$. Initially, suppose that the mappings are unique (i.e. each cell in T accepts only a single element from S). Solve Conv3SUM on T. Now: If there is a solution for 3SUM: $x + y = z$, then: $h(x) + h(y) = h(z)$ and $T[h(x)] + T[h(y)] = T[h(x) + h(y)]$, so this solution will be found by the Conv3SUM solver on T. Conversely, if a Conv3SUM is found on T, then obviously it corresponds to a 3SUM solution on S since T is just a permutation of S. This idealized solution doesn't work, because any hash function might map several distinct elements of S to the same cell of T. The trick is to create an array $T^*$ by selecting a single random element from each cell of T, and run Conv3SUM on $T^*$. If a solution is found, then it is a correct solution for 3SUM on S. If no solution is found, then create a different random $T^*$ and try again. Suppose there are at most R elements in each cell of T. Then the probability of finding a solution (if a solution exists) is the probability that the random selection will select the correct element from each cell, which is $(1/R)^3$. By running Conv3SUM $O(R^3)$ times, the solution will be found with a high probability. Unfortunately, we do not have linear perfect hashing, so we have to use an almost linear hash function, i.e. a function h such that: $h(x) + h(y) = h(x + y)$ or $h(x) + h(y) = h(x + y) + 1$. This requires duplicating the elements of S when copying them into T, i.e., put every element both in $T[h(x)]$ (as before) and in $T[h(x) - 1]$. So each cell will have 2R elements, and we will have to run Conv3SUM $O(R^3)$ times.
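The deterministic direction of this pair of reductions (Conv3SUM to 3SUM) is easy to demonstrate directly; the sketch below uses the x+y=z convention from the proof above, with a brute-force search standing in for the 3SUM oracle.

def conv3sum_via_3sum(s):
    """Find (p, q) with s[p] + s[q] == s[p+q], using T[i] = 2*n*s[i] + i.
    Any triple T[p] + T[q] == T[r] forces p + q == r, as proved above."""
    n = len(s)
    t = [2 * n * v + i for i, v in enumerate(s)]
    position = {v: i for i, v in enumerate(t)}   # t values are always distinct
    for p in range(n):
        for q in range(p, n):                    # brute-force stand-in oracle
            if t[p] + t[q] in position:
                return p, q                      # s[p] + s[q] == s[p + q]
    return None

print(conv3sum_via_3sum([1, 2, 3, 5, 8]))   # (1, 2), since 2 + 3 == s[3] == 5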
3SUM-hardness A problem is called 3SUM-hard if solving it in subquadratic time implies a subquadratic-time algorithm for 3SUM. The concept of 3SUM-hardness was introduced by Gajentaan and Overmars (1995). They proved that a large class of problems in computational geometry are 3SUM-hard, including the following ones. (The authors acknowledge that many of these problems are contributed by other researchers.) Given a set of lines in the plane, are there three that meet in a point? Given a set of non-intersecting axis-parallel line segments, is there a line that separates them into two non-empty subsets? Given a set of infinite strips in the plane, do they fully cover a given rectangle? Given a set of triangles in the plane, compute their measure. Given a set of triangles in the plane, does their union have a hole? A number of visibility and motion planning problems, e.g., Given a set of horizontal triangles in space, can a particular triangle be seen from a particular point? Given a set of non-intersecting axis-parallel line segment obstacles in the plane, can a given rod be moved by translations and rotations between start and finish positions without colliding with the obstacles? By now there are a multitude of other problems that fall into this category. An example is the decision version of X + Y sorting: given sets of numbers $X$ and $Y$ of $n$ elements each, are there distinct $x_1, x_2 \in X$ and $y_1, y_2 \in Y$ for which $x_1 + y_1 = x_2 + y_2$? See also Subset sum problem Notes References Computational geometry Polynomial-time problems Unsolved problems in computer science
3SUM
[ "Mathematics" ]
2,097
[ "Unsolved problems in mathematics", "Unsolved problems in computer science", "Computational mathematics", "Computational problems", "Polynomial-time problems", "Computational geometry", "Mathematical problems" ]
506,383
https://en.wikipedia.org/wiki/Cryptosystem
In cryptography, a cryptosystem is a suite of cryptographic algorithms needed to implement a particular security service, such as confidentiality (encryption). Typically, a cryptosystem consists of three algorithms: one for key generation, one for encryption, and one for decryption. The term cipher (sometimes cypher) is often used to refer to a pair of algorithms, one for encryption and one for decryption. Therefore, the term cryptosystem is most often used when the key generation algorithm is important. For this reason, the term cryptosystem is commonly used to refer to public key techniques; however both "cipher" and "cryptosystem" are used for symmetric key techniques. Formal definition Mathematically, a cryptosystem or encryption scheme can be defined as a tuple $(\mathcal{P}, \mathcal{C}, \mathcal{K}, \mathcal{E}, \mathcal{D})$ with the following properties. $\mathcal{P}$ is a set called the "plaintext space". Its elements are called plaintexts. $\mathcal{C}$ is a set called the "ciphertext space". Its elements are called ciphertexts. $\mathcal{K}$ is a set called the "key space". Its elements are called keys. $\mathcal{E} = \{E_k : k \in \mathcal{K}\}$ is a set of functions $E_k : \mathcal{P} \rightarrow \mathcal{C}$. Its elements are called "encryption functions". $\mathcal{D} = \{D_k : k \in \mathcal{K}\}$ is a set of functions $D_k : \mathcal{C} \rightarrow \mathcal{P}$. Its elements are called "decryption functions". For each $e \in \mathcal{K}$, there is $d \in \mathcal{K}$ such that $D_d(E_e(p)) = p$ for all $p \in \mathcal{P}$. Note: typically this definition is modified in order to distinguish an encryption scheme as being either a symmetric-key or public-key type of cryptosystem. Examples A classical example of a cryptosystem is the Caesar cipher. A more contemporary example is the RSA cryptosystem. Another example of a cryptosystem is the Advanced Encryption Standard (AES). AES is a widely used symmetric encryption algorithm that has become the standard for securing data in various applications. Paillier cryptosystem is another example used to preserve and maintain privacy and sensitive information. It is featured in electronic voting, electronic lotteries and electronic auctions.
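To make the tuple definition concrete, here is a toy Caesar-style cryptosystem in Python; it is a sketch for exposition only (such a cipher is trivially breakable) in which the plaintext and ciphertext spaces are strings over the lowercase alphabet and the key space is the set of shifts 0–25.

import string

ALPHABET = string.ascii_lowercase      # P = C = strings over this alphabet
KEYS = range(26)                       # key space K

def encrypt(k, plaintext):             # E_k : P -> C
    return "".join(ALPHABET[(ALPHABET.index(ch) + k) % 26] for ch in plaintext)

def decrypt(k, ciphertext):            # D_k : C -> P, so D_k(E_k(p)) == p
    return encrypt((-k) % 26, ciphertext)

message = "attackatdawn"
assert decrypt(3, encrypt(3, message)) == message

Here the key generation algorithm is simply a uniform choice from KEYS, and decryption reuses encryption with the inverse shift, matching the requirement that for each encryption key there exists a corresponding decryption key.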
See also List of cryptosystems Semantic security References Cryptography
Cryptosystem
[ "Mathematics", "Engineering" ]
420
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
506,682
https://en.wikipedia.org/wiki/Fluid%20power
Fluid power is the use of fluids under pressure to generate, control, and transmit power. Fluid power is conventionally subdivided into hydraulics (using a liquid such as mineral oil or water) and pneumatics (using a gas such as compressed air or other gases). Although steam is also a fluid, steam power is usually classified separately from fluid power (implying hydraulics or pneumatics). Compressed-air and water-pressure systems were once used to transmit power from a central source to industrial users over extended geographic areas; fluid power systems today are usually within a single building or mobile machine. Fluid power systems perform work by a pressurized fluid bearing directly on a piston in a cylinder or in a fluid motor. A fluid cylinder produces a force resulting in linear motion, whereas a fluid motor produces torque resulting in rotary motion. Within a fluid power system, cylinders and motors (also called actuators) do the desired work. Control components such as valves regulate the system. Elements A fluid power system has a pump driven by a prime mover (such as an electric motor or internal combustion engine) that converts mechanical energy into fluid energy. Pressurized fluid is controlled and directed by valves into an actuator device such as a hydraulic cylinder or pneumatic cylinder, to provide linear motion, or a hydraulic motor or pneumatic motor, to provide rotary motion or torque. Rotary motion may be continuous or confined to less than one revolution. Hydraulic pumps Dynamic (non positive displacement) pumps This type is generally used for low-pressure, high volume flow applications. Since they are not capable of withstanding high pressures, they see little use in the fluid power field. Their maximum pressure is limited to 250-300 psi (1.7 - 2.0 MPa). This type of pump is primarily used for transporting fluids from one location to another. Centrifugal and axial flow propeller pumps are the two most common types of dynamic pumps. Positive displacement pumps This type is universally used for fluid power systems. With this pump, a fixed amount of fluid is ejected into the hydraulic system per revolution of pump shaft rotation. These pumps are capable of overcoming the pressure resulting from the mechanical loads on the system as well as the resistance to flow due to friction. These two features are highly desirable in fluid power pumps. These pumps also have the following advantages over non positive displacement pumps: High-pressure capability (up to 12,000 psi, ca. 80 MPa) Small compact size High volumetric efficiency Small changes in efficiency throughout the design pressure range Characteristics Fluid power systems can produce high power and high forces in small volumes, compared with electrically-driven systems. The forces that are exerted can be easily monitored within a system by gauges and meters. In comparison to systems that provide force through electricity or fuel, fluid power systems are known to have long service lives if maintained properly. The working fluid passing through a fluid motor inherently provides cooling of the motor, which must be separately arranged for an electric motor. Fluid motors normally produce no sparks, which are a source of ignition or explosions in hazardous areas containing flammable gases or vapors. Fluid power systems are susceptible to pressure and flow losses within pipes and control devices. Fluid power systems are equipped with filters and other measures to preserve the cleanliness of the working fluid. Any dirt in the system can cause wear of seals and leakage, or can obstruct control valves and cause erratic operation. The hydraulic fluid itself is sensitive to temperature and pressure along with being somewhat compressible; these effects can prevent systems from running properly and can lead to cavitation and aeration.
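A small sketch of the basic force relationship behind these characteristics, F = p·A for a piston of area A under pressure p; the bore size and operating pressure below are illustrative assumptions, not figures from any particular system.

import math

def cylinder_force_newtons(pressure_pa, bore_m):
    """Extension force of a hydraulic cylinder: F = p * A (cap-end area)."""
    area = math.pi * (bore_m / 2) ** 2   # piston area in square metres
    return pressure_pa * area

# Illustrative figures: a 50 mm bore cylinder at 20 MPa (about 2,900 psi)
print(f"{cylinder_force_newtons(20e6, 0.050) / 1000:.1f} kN")   # ~39.3 kN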
Application Mobile applications of fluid power are widespread. Nearly every self-propelled wheeled vehicle has either hydraulically-operated or pneumatically-operated brakes. Earthmoving equipment such as bulldozers, backhoes and others use powerful hydraulic systems for digging and also for propulsion. A very compact fluid power system is the automatic transmission found in many vehicles, which includes a hydraulic torque converter. Fluid power is also used in automated systems, where tools or work pieces are moved or held using fluid power. Variable-flow control valves and position sensors may be included in a servomechanism system for precision machine tools. Below is a more detailed list of applications and categories that fluid power is used for: Industrial (also known as fixed) metalworking injection molding controllers material handling Aerospace landing gears brakes Pneumatic and hydraulic systems compared Cost Pneumatics are less expensive to build and operate. Air is used as the compressed medium, so there is no requirement to drain or recover fluid. Hydraulic systems use larger working pressures, and require larger parts than pneumatics. Precision Unlike liquids, gases change volume significantly when pressurized, making it difficult to achieve precision. Common hydraulic circuit application Synchronizing In a synchronizing circuit, as one cylinder reaches a certain point another is activated, either by a hydraulic limit switch valve or by the build-up of pressure in the cylinder. These circuits are used in manufacturing. An example would be on an assembly line: a hydraulic arm is activated to grab an object, and when it reaches its point of extension or retraction, the other cylinder is activated to screw a cap or top onto the object. Hence the term synchronizing. Regenerative In a regenerative circuit, a double acting cylinder is used, supplied by a pump with a fixed output. The use of a regenerative circuit permits use of a smaller size pump for any given application. This works by re-routing the fluid to the cap end of the cylinder instead of back to the tank. For example, in a drilling process a regenerative circuit will allow drilling at a consistent speed, and retraction at a much faster speed. This gives the operator faster and more precise production. Electrical control Combinations of electrical control of fluid power elements are widespread in automated systems. A wide variety of measuring, sensing, or control elements are available in electrical form. These can be used to operate solenoid valves or servo valves that control the fluid power element. Electrical control may be used to allow, for example, remote control of a fluid power system without running long control lines to a remotely located manual control valve. See also Hydraulic circuit Hydraulic power network London Hydraulic Power Company Pneumatic circuit Pneumatic actuator References Esposito, Anthony, Fluid Power with Applications, Hydraulic Power System Analysis, A. Akers, M. Gassman, & R. Smith, Taylor & Francis, New York, 2006, Mechanical engineering
Fluid power
[ "Physics", "Engineering" ]
1,361
[ "Applied and interdisciplinary physics", "Physical quantities", "Power (physics)", "Fluid power", "Mechanical engineering" ]
507,305
https://en.wikipedia.org/wiki/Cement%20chemist%20notation
Cement chemist notation (CCN) was developed to simplify the formulas cement chemists use on a daily basis. It is a shorthand way of writing the chemical formula of oxides of calcium, silicon, and various metals. Abbreviations of oxides The main oxides present in cement (or in glass and ceramics) are abbreviated in the following way: C = CaO, S = SiO2, A = Al2O3, F = Fe2O3, M = MgO, K = K2O, N = Na2O, T = TiO2, P = P2O5, H = H2O, C̄ = CO2, and S̄ = SO3. Conversion of hydroxides in oxide and free water For the sake of mass balance calculations, hydroxides present in hydrated phases found in hardened cement paste, such as in portlandite, Ca(OH)2, must first be converted into oxide and water. To better understand the conversion process of hydroxide anions in oxide and water, it is necessary to consider the autoprotolysis of the hydroxyl anions; it implies a proton exchange between two OH−, like in a classical acid–base reaction: OH− + OH− → O2− + H2O, or also, 2 OH− → O2− + H2O. For portlandite this gives thus the following mass balance: Ca(OH)2 → CaO + H2O Thus portlandite can be written as CaO · H2O or CH. Main phases in Portland cement before and after hydration These oxides are used to build more complex compounds. The main crystalline phases described hereafter are related respectively to the composition of: Clinker and non-hydrated Portland cement, and; Hardened cement pastes obtained after hydration and cement setting. Clinker and non-hydrated Portland cement Four main phases are present in the clinker and in the non-hydrated Portland cement. They are formed at high temperature (1,450 °C) in the cement kiln and are the following: alite (C3S, 3CaO·SiO2), belite (C2S, 2CaO·SiO2), tricalcium aluminate (C3A, 3CaO·Al2O3), and tetracalcium aluminoferrite (C4AF, 4CaO·Al2O3·Fe2O3). The four compounds referred as C3S, C2S, C3A and C4AF are known as the main crystalline phases of Portland cement. The phase composition of a particular cement can be quantified through a complex set of calculation known as the Bogue formula. To avoid the flash setting of concrete, due to the very fast hydration of the tricalcium aluminate (C3A), calcium sulfate is interground with the cement clinker to prepare the cement powder. In cement chemist notation, CaSO4 (anhydrite) is abbreviated as CS̄, and CaSO4 · 2 H2O (gypsum) as CS̄H2. Similarly, in case of a limestone filler addition, CaCO3 can be noted CC̄. Hydrated cement paste Hydration products formed in hardened cement pastes (also known as HCPs) are more complicated, because many of these products have nearly the same formula and some are solid solutions with overlapping formulas. Some examples are given below: C-S-H (calcium silicate hydrates of variable composition), CH (portlandite, Ca(OH)2), AFt phases such as ettringite (C6AS̄3H32), and AFm phases such as calcium monosulfoaluminate (C4AS̄H12). The hyphens in C-S-H indicate a calcium silicate hydrate phase of variable composition, while 'CSH' would indicate a calcium silicate phase, CaH2SiO4. Use in ceramics, glass, and oxide chemistry The cement chemist notation is not restricted to cement applications but is in fact a more general notation of oxide chemistry applicable to other domains than cement chemistry sensu stricto. For instance, in ceramics applications, the kaolinite formula can also be written in terms of oxides, thus the corresponding formula for kaolinite, Al2Si2O5(OH)4, is Al2O3 · 2 SiO2 · 2 H2O or in CCN AS2H2.
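Because the notation is purely mechanical, its expansion is easy to automate. The following Python sketch uses the standard abbreviation table given above; it is deliberately simplified and does not handle the overbar oxides (C̄, S̄).

import re

OXIDES = {"C": "CaO", "S": "SiO2", "A": "Al2O3", "F": "Fe2O3",
          "M": "MgO", "H": "H2O"}

def expand_ccn(ccn):
    """Expand e.g. 'C3S' -> '3CaO·SiO2' (simplified; no overbar support)."""
    parts = []
    for letter, count in re.findall(r"([A-Z])(\d*)", ccn):
        prefix = count if count not in ("", "1") else ""
        parts.append(prefix + OXIDES[letter])
    return "·".join(parts)

for phase in ("C3S", "C2S", "C3A", "C4AF", "CH", "AS2H2"):
    print(phase, "=", expand_ccn(phase))
# e.g. C3S = 3CaO·SiO2, C4AF = 4CaO·Al2O3·Fe2O3, CH = CaO·H2O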
Possible use of CCN in mineralogy Although not a very developed practice in mineralogy, some chemical reactions involving silicate and oxide in the melt or in hydrothermal systems, and silicate weathering processes, could also be successfully described by applying the cement chemist notation to silicate mineralogy. An example could be the formal comparison of belite hydration and forsterite serpentinisation, dealing both with the hydration of two structurally similar alkaline-earth silicates, Ca2SiO4 and Mg2SiO4, respectively.
Calcium system belite hydration: 2 C2S + 4 H → C3S2H3 + CH (i.e., 2 Ca2SiO4 + 4 H2O → 3CaO·2SiO2·3H2O + Ca(OH)2)
Magnesium system forsterite serpentinisation: 2 M2S + 3 H → M3S2H2 + MH (i.e., 2 Mg2SiO4 + 3 H2O → Mg3Si2O5(OH)4 + Mg(OH)2)
The ratios Ca/Si (C/S) and Mg/Si (M/S) decrease from 2 for the dicalcium and dimagnesium silicate reagents to 1.5 for the hydrated silicate products of the hydration reaction. In other terms, the C-S-H and the serpentine are less rich in Ca and Mg respectively. This is why the reaction leads to the elimination of the excess of portlandite (Ca(OH)2) and brucite (Mg(OH)2), respectively, out of the silicate system, giving rise to the crystallization of both hydroxides as separate phases. The rapid reaction of belite hydration in the setting of cement is formally chemically analogous to the slow natural hydration of forsterite (the magnesium end-member of olivine) leading to the formation of serpentine and brucite in nature. However, the kinetics of hydration of poorly crystallized artificial belite is much swifter than the slow conversion/weathering of well crystallized Mg-olivine under natural conditions. This comparison suggests that mineralogists could probably also benefit from the concise formalism of the cement chemist notation in their works. See also Hydration of belite in cement (analogous to forsterite hydration) Hydration reaction of forsterite (olivine) in serpentinisation References External links Cement and Concrete Glossary Cement Chemistry of construction methods Concrete Chemical formulas Oxide minerals Silicates
Cement chemist notation
[ "Chemistry", "Engineering" ]
1,126
[ "Chemical formulas", "Concrete", "Structural engineering", "Chemical structures" ]
507,330
https://en.wikipedia.org/wiki/Immunostaining
In biochemistry, immunostaining is any use of an antibody-based method to detect a specific protein in a sample. The term "immunostaining" was originally used to refer to the immunohistochemical staining of tissue sections, as first described by Albert Coons in 1941. However, immunostaining now encompasses a broad range of techniques used in histology, cell biology, and molecular biology that use antibody-based staining methods. Techniques Immunohistochemistry Immunohistochemistry or IHC staining of tissue sections (or immunocytochemistry, which is the staining of cells), is perhaps the most commonly applied immunostaining technique. While the first cases of IHC staining used fluorescent dyes (see immunofluorescence), other non-fluorescent methods using enzymes such as peroxidase (see immunoperoxidase staining) and alkaline phosphatase are now used. These enzymes are capable of catalysing reactions that give a coloured product that is easily detectable by light microscopy. Alternatively, radioactive elements can be used as labels, and the immunoreaction can be visualized by autoradiography. Tissue preparation or fixation is essential for the preservation of cell morphology and tissue architecture. Inappropriate or prolonged fixation may significantly diminish the antibody binding capability. Many antigens can be successfully demonstrated in formalin-fixed paraffin-embedded tissue sections. However, some antigens will not survive even moderate amounts of aldehyde fixation. Under these conditions, tissues should be rapidly fresh frozen in liquid nitrogen and cut with a cryostat. The disadvantages of frozen sections include poor morphology, poor resolution at higher magnifications, difficulty in cutting over paraffin sections, and the need for frozen storage. Alternatively, vibratome sections do not require the tissue to be processed through organic solvents or high heat, which can destroy the antigenicity, or disrupted by freeze thawing. The disadvantage of vibratome sections is that the sectioning process is slow and difficult with soft and poorly fixed tissues, and that chatter marks or vibratome lines are often apparent in the sections. The detection of many antigens can be dramatically improved by antigen retrieval methods that act by breaking some of the protein cross-links formed by fixation to uncover hidden antigenic sites. This can be accomplished by heating for varying lengths of times (heat induced epitope retrieval or HIER) or using enzyme digestion (proteolytic induced epitope retrieval or PIER). One of the main difficulties with IHC staining is overcoming specific or non-specific background. Optimisation of fixation methods and times, pre-treatment with blocking agents, incubating antibodies with high salt, and optimising post-antibody wash buffers and wash times are all important for obtaining high quality immunostaining. In addition, the presence of both positive and negative controls for staining are essential for determining specificity. Flow cytometry A flow cytometer can be used for the direct analysis of cells expressing one or more specific proteins. Cells are immunostained in solution using methods similar to those used for immunofluorescence, and then analysed by flow cytometry. Flow cytometry has several advantages over IHC including: the ability to define distinct cell populations by their size and granularity; the capacity to gate out dead cells; improved sensitivity; and multi-colour analysis to measure several antigens simultaneously. 
However, flow cytometry can be less effective at detecting extremely rare cell populations, and there is a loss of architectural relationships in the absence of a tissue section. Flow cytometry also has a high capital cost associated with the purchase of a flow cytometer. Western blotting Western blotting allows the detection of specific proteins from extracts made from cells or tissues, before or after any purification steps. Proteins are generally separated by size using gel electrophoresis before being transferred to a synthetic membrane via dry, semi-dry, or wet blotting methods. The membrane can then be probed using antibodies using methods similar to immunohistochemistry, but without a need for fixation. Detection is typically performed using peroxidase linked antibodies to catalyse a chemiluminescent reaction. Western blotting is a routine molecular biology method that can be used to semi-quantitatively compare protein levels between extracts. The size separation prior to blotting allows the protein molecular weight to be gauged as compared with known molecular weight markers. Enzyme-linked immunosorbent assay The enzyme-linked immunosorbent assay or ELISA is a diagnostic method for quantitatively or semi-quantitatively determining protein concentrations from blood plasma, serum or cell/tissue extracts in a multi-well plate format (usually 96-wells per plate). Broadly, proteins in solution are absorbed to ELISA plates. Antibodies specific for the protein of interest are used to probe the plate. Background is minimised by optimising blocking and washing methods (as for IHC), and specificity is ensured via the presence of positive and negative controls. Detection methods are usually colorimetric or chemiluminescence based. Immuno-electron microscopy Electron microscopy or EM can be used to study the detailed microarchitecture of tissues or cells. Immuno-EM allows the detection of specific proteins in ultrathin tissue sections. Antibodies labelled with heavy metal particles (e.g. gold) can be directly visualised using transmission electron microscopy. While powerful in detecting the sub-cellular localisation of a protein, immuno-EM can be technically challenging, expensive, and require rigorous optimisation of tissue fixation and processing methods. Protein biotinylation in vivo was proposed to alleviate the problems caused by frequent incompatibility of antibody staining with fixation protocols that better preserve cell morphology. Methodological overview In immunostaining methods, an antibody is used to detect a specific protein epitope. These antibodies can be monoclonal or polyclonal. Detection of this first or primary antibody can be accomplished in multiple ways. The primary antibody can be directly labeled using an enzyme or fluorophore. The primary antibody can be labeled using a small molecule which interacts with a high affinity binding partner that can be linked to an enzyme or fluorophore. The biotin-streptavidin is one commonly used high affinity interaction. The primary antibody can be probed for using a broader species-specific secondary antibody that is labeled using an enzyme, or fluorophore. In the case of electron microscopy, antibodies are linked to a heavy metal particle (typically gold nanoparticles in the range 5-15nm diameter). As previously described, enzymes such as horseradish peroxidase or alkaline phosphatase are commonly used to catalyse reactions that give a coloured or chemiluminescent product. 
Fluorescent molecules can be visualised using fluorescence microscopy or confocal microscopy. Applications The applications of immunostaining are numerous, but are most typically used in clinical diagnostics and laboratory research. Clinically, IHC is used in histopathology for the diagnosis of specific types of cancers based on molecular markers. In laboratory science, immunostaining can be used for a variety of applications based on investigating the presence or absence of a protein, its tissue distribution, its sub-cellular localisation, and of changes in protein expression or degradation. See also Cutaneous conditions with immunofluorescence findings Immunostaining protocol List of histologic stains that aid in diagnosis of cutaneous conditions References Immunology Flow cytometry Protein methods
Immunostaining
[ "Chemistry", "Biology" ]
1,607
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Immunology", "Flow cytometry" ]
507,854
https://en.wikipedia.org/wiki/Helium%20flash
A helium flash is a very brief thermal runaway nuclear fusion of large quantities of helium into carbon through the triple-alpha process in the core of low-mass stars (between 0.8 and 2.0 solar masses) during their red giant phase. The Sun is predicted to experience a flash 1.2 billion years after it leaves the main sequence. A much rarer runaway helium fusion process can also occur on the surface of accreting white dwarf stars. Low-mass stars do not produce enough gravitational pressure to initiate normal helium fusion. As the hydrogen in the core is exhausted, some of the helium left behind is instead compacted into degenerate matter, supported against gravitational collapse by quantum mechanical pressure rather than thermal pressure. Subsequent hydrogen shell fusion further increases the mass of the core until it reaches a temperature of approximately 100 million kelvin, which is hot enough to initiate helium fusion (or "helium burning") in the core. However, a quality of degenerate matter is that increases in temperature do not produce an increase in the pressure of the matter until the thermal pressure becomes so very high that it exceeds degeneracy pressure. In main sequence stars, thermal expansion regulates the core temperature, but in degenerate cores this does not occur. Helium fusion increases the temperature, which increases the fusion rate, which further increases the temperature in a runaway reaction which quickly spans the entire core. This produces a flash of very intense helium fusion that lasts only a few minutes, but during that time produces energy at a rate comparable to the entire Milky Way galaxy. In the case of normal low-mass stars, the vast energy release causes much of the core to come out of degeneracy, allowing it to thermally expand. This consumes most of the total energy released by the helium flash, and any left-over energy is absorbed into the star's upper layers. Thus the helium flash is mostly undetectable by observation, and is described solely by astrophysical models. After the core's expansion and cooling, the star's surface rapidly cools and contracts in as little as 10,000 years until it is roughly 2% of its former radius and luminosity. It is estimated that the electron-degenerate helium core makes up about 40% of the star's mass and that 6% of the core is converted into carbon. Subflashes Subflashes are pulsational instabilities that occur after the main helium flash. They occur in stars that do not have well-defined convective or radiative boundaries. Subflashes can last several hours to days and can recur for many years, with each subsequent flash generally being weaker. Subflashes can be detected by applying Fourier transforms to light curve data. Red giants During the red giant phase of stellar evolution in stars with less than 2.0 solar masses, the nuclear fusion of hydrogen ceases in the core as it is depleted, leaving a helium-rich core. While fusion of hydrogen continues in the star's shell, helium continues to accumulate in the core, making the core denser, but the temperature is still unable to reach the level required for helium fusion, as happens in more massive stars. Thus the thermal pressure from fusion is no longer sufficient to counter the gravitational collapse and create the hydrostatic equilibrium found in most stars. This causes the star to start contracting and increasing in temperature until it eventually becomes compressed enough for the helium core to become degenerate matter. 
This degeneracy pressure is finally sufficient to stop further collapse of the most central material, but the rest of the core continues to contract and the temperature continues to rise until it reaches the point at which the helium can ignite and start to fuse. The explosive nature of the helium flash arises from its taking place in degenerate matter. Once the temperature reaches 100 million–200 million kelvin and helium fusion begins using the triple-alpha process, the temperature rapidly increases, further raising the helium fusion rate and, because degenerate matter is a good conductor of heat, widening the reaction region. However, since degeneracy pressure (which is purely a function of density) dominates thermal pressure (proportional to the product of density and temperature), the total pressure is only weakly dependent on temperature. Thus, the dramatic increase in temperature only causes a slight increase in pressure, so there is no stabilizing cooling expansion of the core. This runaway reaction quickly climbs to about 100 billion times the star's normal energy production (for a few seconds) until the temperature increases to the point that thermal pressure again becomes dominant, eliminating the degeneracy. The core can then expand and cool down and a stable burning of helium will continue. A star with mass greater than about 2.25 solar masses starts to burn helium without its core becoming degenerate, and so does not exhibit this type of helium flash. In a very low-mass star (less than about 0.5 solar masses), the core is never hot enough to ignite helium. The degenerate helium core will keep on contracting, and finally becomes a helium white dwarf. The helium flash is not directly observable on the surface by electromagnetic radiation. The flash occurs in the core deep inside the star, and the net effect will be that all released energy is absorbed by the entire core, causing the degenerate state to become nondegenerate. Earlier computations indicated that a nondisruptive mass loss would be possible in some cases, but later star modeling taking neutrino energy loss into account indicates no such mass loss. In a one solar mass star, the helium flash is estimated to release about 0.3% of the energy release of a type Ia supernova, which is triggered by an analogous ignition of carbon fusion in a carbon–oxygen white dwarf. Binary white dwarfs When hydrogen gas is accreted onto a white dwarf from a binary companion star, the hydrogen can fuse to form helium for a narrow range of accretion rates, but most systems develop a layer of hydrogen over the degenerate white dwarf interior. This hydrogen can build up to form a shell near the surface of the star. When the mass of hydrogen becomes sufficiently large, runaway fusion causes a nova. In a few binary systems where the hydrogen fuses on the surface, the mass of helium built up can burn in an unstable helium flash. In certain binary systems the companion star may have lost most of its hydrogen and donate helium-rich material to the compact star. Note that similar flashes occur on neutron stars. Helium shell flash Helium shell flashes are a somewhat analogous but much less violent, nonrunaway helium ignition event, taking place in the absence of degenerate matter. They occur periodically in asymptotic giant branch stars in a shell outside the core. This is late in the life of a star in its giant phase. The star has burnt most of the helium available in the core, which is now composed of carbon and oxygen. 
Helium fusion continues in a thin shell around this core, but then turns off as helium becomes depleted. This allows hydrogen fusion to start in a layer above the helium layer. After enough additional helium accumulates, helium fusion is reignited, leading to a thermal pulse which eventually causes the star to expand and brighten temporarily (the pulse in luminosity is delayed because it takes a number of years for the energy from restarted helium fusion to reach the surface). Such pulses may last a few hundred years, and are thought to occur periodically every 10,000 to 100,000 years. After the flash, helium fusion continues at an exponentially decaying rate for about 40% of the cycle as the helium shell is consumed. Thermal pulses may cause a star to shed circumstellar shells of gas and dust. See also Carbon detonation References Concepts in astrophysics Exotic matter Helium Nucleosynthesis Stellar evolution
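The runaway character of the flash can be summarized with the standard pressure scalings; this is a schematic textbook argument, not drawn from the article's cited models:

\[ P_{\mathrm{deg}} \propto \rho^{5/3} \quad \text{(non-relativistic electron degeneracy)}, \qquad P_{\mathrm{th}} \propto \rho T . \]

While \(P_{\mathrm{deg}} \gg P_{\mathrm{th}}\), the total pressure is almost independent of temperature, so heating produces no stabilizing expansion; and because the triple-alpha energy generation rate is extremely temperature-sensitive (roughly \(\epsilon_{3\alpha} \propto \rho^{2} T^{40}\) near \(10^{8}\,\mathrm{K}\)), any temperature rise accelerates fusion unchecked until the thermal pressure grows large enough to lift the degeneracy.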
Helium flash
[ "Physics", "Chemistry" ]
1,601
[ "Nuclear fission", "Concepts in astrophysics", "Astrophysics", "Stellar evolution", "Nucleosynthesis", "Exotic matter", "Nuclear physics", "Nuclear fusion", "Matter" ]
3,645,679
https://en.wikipedia.org/wiki/Axial%20piston%20pump
An axial piston pump is a positive displacement pump that has a number of pistons in a circular array within a cylinder block. It can be used as a stand-alone pump, a hydraulic motor or an automotive air conditioning compressor. Description An axial piston pump has a number of pistons (usually an odd number) arranged in a circular array within a housing which is commonly referred to as a cylinder block, rotor or barrel. This cylinder block is driven to rotate about its axis of symmetry by an integral shaft that is, more or less, aligned with the pumping pistons (usually parallel but not necessarily). Mating surfaces. One end of the cylinder block is convex and wears against a mating surface on a stationary valve plate. The inlet and outlet fluid of the pump pass through different parts of the sliding interface between the cylinder block and valve plate. The valve plate has two semi-circular ports that allow inlet of the operating fluid and exhaust of the outlet fluid respectively. Protruding pistons. The pumping pistons protrude from the opposite end of the cylinder block. There are numerous configurations used for the exposed ends of the pistons but in all cases they bear against a cam. In variable displacement units, the cam is movable and commonly referred to as a swashplate, yoke or hanger. For conceptual purposes, the cam can be represented by a plane, the orientation of which, in combination with shaft rotation, provides the cam action that leads to piston reciprocation and thus pumping. The angle between a vector normal to the cam plane and the cylinder block axis of rotation, called the cam angle, is one variable that determines the displacement of the pump or the amount of fluid pumped per shaft revolution. Variable displacement units have the ability to vary the cam angle during operation whereas fixed displacement units do not. Reciprocating pistons. As the cylinder block rotates, the exposed ends of the pistons are constrained to follow the surface of the cam plane. Since the cam plane is at an angle to the axis of rotation, the pistons must reciprocate axially as they precess about the cylinder block axis. The axial motion of the pistons is sinusoidal. During the rising portion of the piston's reciprocation cycle, the piston moves toward the valve plate. Also, during this time, the fluid trapped between the buried end of the piston and the valve plate is vented to the pump's discharge port through one of the valve plate's semi-circular ports - the discharge port. As the piston moves toward the valve plate, fluid is pushed or displaced through the discharge port of the valve plate. Effect of precession. When the piston is at the top of the reciprocation cycle (commonly referred to as top-dead-center or just TDC), the connection between the trapped fluid chamber and the pump's discharge port is closed. Shortly thereafter, that same chamber becomes open to the pump's inlet port. As the piston continues to precess about the cylinder block axis, it moves away from the valve plate thereby increasing the volume of the trapped chamber. As this occurs, fluid enters the chamber from the pump's inlet to fill the void. This process continues until the piston reaches the bottom of the reciprocation cycle - commonly referred to as bottom-dead-center or BDC. At BDC, the connection between the pumping chamber and inlet port is closed. Shortly thereafter, the chamber becomes open to the discharge port again and the pumping cycle starts over. Variable displacement. 
In a variable displacement pump, if the vector normal to the cam plane (swash plate) is set parallel to the axis of rotation, there is no movement of the pistons in their cylinders. Thus there is no output. Movement of the swash plate controls pump output from zero to maximum. There are two kinds of variable-displacement axial piston pumps: the direct displacement control pump and the servo control pump. A direct displacement control uses a mechanical lever attached to the swashplate of the axial piston pump. Higher system pressures require more force to move that lever, making direct displacement control suitable only for light or medium duty pumps. Heavy duty pumps require servo control. A direct displacement control pump contains linkages and springs and in some cases magnets, rather than a shaft to a motor located outside of the pump (thereby reducing the number of moving parts), keeping parts protected and lubricated and reducing the resistance against the flow of liquid. Pressure. In a typical pressure-compensated pump, the swash plate angle is adjusted through the action of a valve which uses pressure feedback so that the instantaneous pump output flow is exactly enough to maintain a designated pressure. If the load flow increases, pressure will momentarily decrease, but the pressure-compensation valve will sense the decrease and then increase the swash plate angle to increase pump output flow so that the desired pressure is restored. In reality most systems use pressure as a control for this type of pump. The operating pressure reaches, say, 200 bar (20 MPa or 2900 psi), the swash plate is driven towards zero angle (piston stroke nearly zero), and the inherent leaks in the system allow the pump to stabilize at the delivery volume that maintains the set pressure. As demand increases, the swash plate is moved to a greater angle, piston stroke increases and the volume of fluid increases; if the demand slackens, the pressure will rise, and the pumped volume diminishes as the pressure rises. At maximum system pressure the output is once again almost zero. If the fluid demand increases beyond the capacity of the pump to deliver, the system pressure will drop to near zero. The swash plate angle will remain at the maximum allowed, and the pistons will operate at full stroke. This continues until system flow-demand eases and the pump's capacity is greater than demand. As the pressure rises, the swash-plate angle modulates so as not to exceed the maximum pressure while meeting the flow demand. Design difficulties Designers have a number of problems to overcome in designing axial piston pumps. One is manufacturing a pump with the fine tolerances necessary for efficient operation. The mating faces between the rotary piston-cylinder assembly and the stationary pump body have to be an almost perfect seal while the rotary part turns at perhaps 3000 rpm. The pistons are usually less than half an inch (13 mm) in diameter with similar stroke lengths. Keeping the wall-to-piston seal tight means that very small clearances are involved and that materials have to be closely matched for similar coefficients of expansion. The pistons have to be drawn outwards in their cylinder by some means. On small pumps this can be done by means of a spring inside the cylinder that forces the piston up the cylinder. Inlet fluid pressure can also be arranged so that the fluid pushes the pistons up the cylinder. 
Often a vane pump is located on the same drive shaft to provide this pressure and it also allows the pump assembly to draw fluid against some suction head from the reservoir, which is not an attribute of the unaided axial piston pump. Another method of drawing pistons up the cylinder is to attach the cylinder heads to the surface of the swash plate. In that way the piston stroke is totally mechanical. However, the designer's problem of lubricating the swash plate face (a sliding contact) is made even more difficult. Internal lubrication of the pump is achieved by use of the operating fluid, normally called hydraulic fluid. Most hydraulic systems have a maximum operating temperature, limited by the fluid, of about 120 °C (250 °F), so using that fluid as a lubricant brings its own problems. In this type of pump the leakage from the face between the cylinder housing and the body block is used to cool and lubricate the exterior of the rotating parts. The leakage is then carried off to the reservoir or to the inlet side of the pump again. Hydraulic fluid that has been used is always cooled and passed through micrometre-sized filters before recirculating through the pump. Uses Despite the problems indicated above, this type of pump can contain most of the necessary circuit controls integrally (the swash-plate angle control) to regulate flow and pressure, be very reliable and allow the rest of the hydraulic system to be very simple and inexpensive. Axial piston pumps are used to power the hydraulic systems of jet aircraft, being gear-driven off the turbine engine's main shaft. The system used on the F-14 used a 9-piston pump that produced a standard system operating pressure of 3000 psi and a maximum flow of 84 gallons per minute. Automotive air conditioning compressors for cabin cooling are nowadays mostly based around the axial piston pump design (others are based on the scroll compressor or rotary vane pump instead) in order to contain their weight and space requirement in the vehicle's engine bay and reduce vibrations. They are available in fixed displacement and dynamically adjusted variable displacement variants, and, depending upon the compressor's design, the actual rotating swashplate either directly drives a set of pistons mated to its edges through a set of hemispherical metal shoes, or a nutating plate on which a set of pistons are mounted by means of rods. They are also used in some pressure washers. For example, Kärcher has several models powered by axial piston pumps with three pistons. Axial reciprocating motors are also used to power many machines. They operate on the same principle as described above, except that the circulating fluid is provided under considerable pressure and the piston housing is made to rotate and provide shaft power to another machine. A common use of an axial reciprocating motor is to power small earthmoving plant such as skid loader machines. Another use is to drive the screws of torpedoes. History The first example can be found on page 213 (or page 89 per book's pagination) in Le diverse et artificiose machine by Agostino Ramelli. References External links www.rotarypower.com, Manufacturer of Axial Piston Pumps Tecnapol, Axial Piston Pumps repair/rebuild Engine technology Pumps
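The geometric relationships described above (piston stroke set by the swash-plate angle, displacement proportional to piston count and area) can be made concrete in a short calculation. The following is a minimal Python sketch under idealized, leak-free assumptions with illustrative names; it is not a manufacturer's sizing formula:

import math

def displacement_per_rev(n_pistons, piston_diameter, pitch_diameter, cam_angle_deg):
    # Each piston's stroke equals the piston pitch-circle diameter times
    # the tangent of the cam (swash-plate) angle; displacement per shaft
    # revolution is number of pistons * piston area * stroke.
    stroke = pitch_diameter * math.tan(math.radians(cam_angle_deg))
    area = math.pi * piston_diameter ** 2 / 4.0
    return n_pistons * area * stroke  # same length units, cubed

def ideal_flow(displacement, rpm):
    # Ideal volumetric flow per second, ignoring leakage.
    return displacement * rpm / 60.0

# Zero swash-plate angle gives zero stroke and hence zero output,
# as described for variable-displacement pumps.
assert displacement_per_rev(9, 0.013, 0.07, 0.0) == 0.0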
Axial piston pump
[ "Physics", "Chemistry", "Technology" ]
2,071
[ "Pumps", "Engines", "Turbomachinery", "Physical systems", "Engine technology", "Hydraulics" ]
3,646,203
https://en.wikipedia.org/wiki/Expanding%20Earth
The expanding Earth or growing Earth was a hypothesis attempting to explain the position and relative movement of continents by an increase in the volume of Earth. With the recognition of plate tectonics in the 20th century, the idea was abandoned. Different forms of the hypothesis Expansion with constant mass In 1834, during the second voyage of HMS Beagle, Charles Darwin investigated stepped plains featuring raised beaches in Patagonia which indicated to him that a huge area of South America had been "uplifted to its present height by a succession of elevations which acted over the whole of this space with nearly an equal force". While his mentor Charles Lyell had suggested forces acting near the crust on smaller areas, Darwin hypothesized that uplift at this continental scale required "the gradual expansion of some central mass" [of the Earth] "acting by intervals on the outer crust" with the "elevations being concentric with form of globe (or certainly nearly so)". In 1835 he extended this concept to include the Andes Mountains as part of a curved enlargement of the Earth's crust due to "the action of one connected force". Not long afterwards, he abandoned this idea and proposed that as the mountains rose, the ocean floor subsided, explaining the formation of coral reefs. In 1889 and 1909 Roberto Mantovani published a hypothesis of Earth expansion and continental drift. He assumed that a closed continent covered the entire surface of a smaller Earth. Thermal expansion caused volcanic activity, which broke the land mass into smaller continents. These continents drifted away from each other because of further expansion at the rip-zones, where oceans currently lie. Although Alfred Wegener noticed some similarities to his own hypothesis of continental drift, he did not mention Earth expansion as the cause of drift in Mantovani's hypothesis. A compromise between Earth-expansion and Earth-contraction is the "theory of thermal cycles" by Irish physicist John Joly. He assumed that heat flow from radioactive decay inside Earth surpasses the cooling of Earth's exterior. Together with British geologist Arthur Holmes, Joly proposed a hypothesis in which Earth loses its heat by cyclic periods of expansion. By their hypothesis, expansion caused cracks and joints in Earth's interior that could fill with magma. This was succeeded by a cooling phase, where the magma would freeze and become solid rock again, causing Earth to shrink. Mass addition In 1888 Ivan Osipovich Yarkovsky suggested that some sort of aether is absorbed within Earth and transformed into new chemical elements, forcing the celestial bodies to expand. This was associated with his mechanical explanation of gravitation. The theses of Ott Christoph Hilgenberg (1933, 1974) and Nikola Tesla (1935) were likewise based on absorption and transformation of aether-energy into normal matter. After initially endorsing the idea of continental drift, Australian geologist Samuel Warren Carey advocated expansion from the 1950s (before the idea of plate tectonics was generally accepted) to his death, alleging that subduction and other events could not balance the sea-floor spreading at oceanic ridges, and describing yet unresolved paradoxes that continue to plague plate tectonics. Starting in 1956, he proposed some sort of mass increase in the planets and said that a final solution to the problem is only possible by cosmological processes associated with the expansion of the universe. 
Bruce Heezen initially interpreted his work on the mid-Atlantic ridge as confirming S. Warren Carey's Expanding Earth theory, but later ended his endorsement, finally convinced by the data and analysis of his assistant, Marie Tharp. The remaining proponents after the 1970s, like the Australian geologist James Maxlow, are mainly inspired by Carey's ideas. To date no scientific mechanism of action has been proposed for this addition of new mass. Although the Earth is constantly acquiring mass through the accumulation of rocks and dust from space, such accretion is only a minuscule fraction of the mass increase required by the growing Earth hypothesis. Decrease of the gravitational constant Paul Dirac suggested in 1938 that the universal gravitational constant had decreased during the billions of years of its existence. This caused German physicist Pascual Jordan to propose, in 1964, a modification of the theory of general relativity under which all planets slowly expand. This explanation is considered a viable hypothesis within the context of physics. Measurements of a possible variation of the gravitational constant placed an upper limit on its relative change per year that excludes Jordan's idea. Formation from a gas giant According to the hypothesis of J. Marvin Herndon (2005, 2013) the Earth originated in its protoplanetary stage from a Jupiter-like gas giant. During the development phases of the young Sun, which resembled those of a T Tauri star, the dense atmosphere of the gas giant was stripped off by infrared eruptions from the Sun. The remnant was a rocky planet. Due to the loss of pressure from its atmosphere it would have begun a progressive decompression. Herndon regards the energy released by this decompression as a primary energy source for geotectonic activity, to which some energy from radioactive decomposition processes was added. He names the resulting account of the changes in the course of Earth's history Whole-Earth Decompression Dynamics. He considered seafloor spreading at divergent plate boundaries as an effect of it. In his opinion, mantle convection, as used as a concept in the theory of plate tectonics, is physically impossible. His theory includes the effect of the solar wind (geomagnetic storms) as the cause of the reversals of the Earth's magnetic field. The question of mass increase is not addressed. Main arguments against Earth expansion The hypothesis had never developed a plausible and verifiable mechanism of action. During the 1960s, the theory of plate tectonics, based initially on the assumption that Earth's size remains constant and relating the subduction zones to burying of lithosphere at a scale comparable to seafloor spreading, became the accepted explanation in the Earth sciences. The scientific community finds that significant evidence contradicts the Expanding Earth theory, and that the evidence used for it is explained better by plate tectonics: Measurements with modern high-precision geodetic techniques and modeling of the measurements by the horizontal motions of independent rigid plates at the surface of a globe of free radius were proposed as evidence that Earth is not currently increasing in size to within a measurement accuracy of 0.2 mm per year. The main author of the study stated "Our study provides an independent confirmation that the solid Earth is not getting larger at present, within current measurement uncertainties". 
The motions of tectonic plates and subduction zones measured by a large range of geological, geodetic and geophysical techniques help verify plate tectonics. Imaging of lithosphere fragments within the mantle is evidence for lithosphere consumption by subduction. Paleomagnetic data have been used to calculate that the radius of Earth 400 million years ago was 102 ± 2.8 percent of the present radius. Examinations of data from the Paleozoic and Earth's moment of inertia suggest that there has not been any significant change of Earth's radius during the last 620 million years. See also Geophysical global cooling - a converse hypothesis :Category:Plate tectonics Timeline of the development of tectonophysics (before 1954) Timeline of the development of tectonophysics (after 1952) Notes Bibliography ; 1976: "The Expanding Earth", Developments in Geotectonics (10), Elsevier, ; digital edition 2013: ASIN B01E3II6VY. ;1988: "Theories of the Earth and Universe: A History of Dogma in the Earth Sciences", Stanford University Press, . ; 1993: Holmes' principles of physical geology, Chapman & Hall (4th ed.), . ; 1990: The Solid Earth, an introduction to Global Geophysics, Cambridge University Press, . ; 1999: Earth System History, W.H. Freeman & Co, . External links Historical Ott Christoph Hilgenberg: G. Scalera: Roberto Mantovani an Italian defender of the continental drift and planetary expansion Giancarlo Scalera: Variable Radius Cartography: Birth and Perspectives of a New Experimental Discipline G. Scalera, Braun: Ott Christoph Hilgenberg in twentieth-century geophysics G. Scalera: Samuel Warren Carey – Commemorative memoir Andrew Alden: Warren Carey, Last of the Giants Contemporary Database of Expansion Tectonic Scientists, living and deceased Structure of the Earth Geophysics Geodynamics Obsolete geology theories
Expanding Earth
[ "Physics" ]
1,740
[ "Applied and interdisciplinary physics", "Geophysics" ]
3,647,951
https://en.wikipedia.org/wiki/Laborat%C3%B3rio%20Nacional%20de%20Luz%20S%C3%ADncrotron
Laboratório Nacional de Luz Síncrotron (LNLS) is the Brazilian Synchrotron Light Laboratory, a research institution focused on physics, chemistry, materials science and the life sciences. It is located in the city of Campinas, sub-district of Barão Geraldo, state of São Paulo, Brazil. The Center, which is operated by the Brazilian Center of Research in Energy and Materials (CNPEM) under a contract with the National Research Council (CNPq) and the Ministry of Science and Technology of Brazil, has the only particle accelerator (a synchrotron) in Latin America, which was designed and built in Brazil by a team of physicists, technicians and engineers. Currently, the Brazilian Synchrotron has 6 different beamlines in operation for its user community, covering energies ranging from a few electronvolts to tens of kiloelectronvolts. The uses include: X-Ray Nanoscopy Coherent and Time-resolved X-ray Scattering X-ray Spectroscopy and Diffraction in Extreme Conditions Infrared Micro and Nanospectroscopy Resonant Inelastic X-ray Scattering and Photoelectron Spectroscopy Macromolecular Micro and Nanocrystallography These beamlines are part of Sirius, a 3 GeV synchrotron light source. The plan includes an initial 13 beamlines, with a final goal of 40, ranging from 10 eV to 100 keV. It was inaugurated in 2018. References External links Official LNLS Home Page Lightsources.org Sirius Project - LNLS Research institutes in Brazil Organisations based in Campinas Synchrotron radiation facilities
Laboratório Nacional de Luz Síncrotron
[ "Materials_science" ]
339
[ "Materials testing", "Synchrotron radiation facilities" ]
3,650,205
https://en.wikipedia.org/wiki/Single-machine%20scheduling
Single-machine scheduling or single-resource scheduling is an optimization problem in computer science and operations research. We are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on a single machine, in a way that optimizes a certain objective, such as the throughput. Single-machine scheduling is a special case of identical-machines scheduling, which is itself a special case of optimal job scheduling. Many problems, which are NP-hard in general, can be solved in polynomial time in the single-machine case. In the standard three-field notation for optimal job scheduling problems, the single-machine variant is denoted by 1 in the first field. For example, "1||ΣCj" is a single-machine scheduling problem with no constraints, where the goal is to minimize the sum of completion times. The makespan-minimization problem 1||Cmax, which is a common objective with multiple machines, is trivial with a single machine, since the makespan is always identical. Therefore, other objectives have been studied. Minimizing the sum of completion times The problem 1||ΣCj aims to minimize the sum of completion times. It can be solved optimally by the Shortest Processing Time First rule (SPT): the jobs are scheduled by ascending order of their processing time pj. The problem 1||ΣwjCj aims to minimize the weighted sum of completion times. It can be solved optimally by the Weighted Shortest Processing Time First rule (WSPT): the jobs are scheduled by ascending order of the ratio pj/wj. The problem 1|chains|ΣwjCj is a generalization of the above problem for jobs with dependencies in the form of chains. It can also be solved optimally by a suitable generalization of WSPT. Minimizing the cost of lateness The problem 1||Lmax aims to minimize the maximum lateness. For each job j, there is a due date dj. If it is completed after its due date, it suffers lateness defined as Lj = Cj − dj. 1||Lmax can be solved optimally by the Earliest Due Date First rule (EDD): the jobs are scheduled by ascending order of their deadline dj. The problem 1|prec|hmax generalizes 1||Lmax in two ways: first, it allows arbitrary precedence constraints on the jobs; second, it allows each job to have an arbitrary cost function hj, which is a function of its completion time (lateness is a special case of a cost function). The maximum cost can be minimized by a greedy algorithm known as Lawler's algorithm. The problem 1|rj|Lmax generalizes 1||Lmax by allowing each job to have a different release time rj by which it becomes available for processing. The presence of release times means that, in some cases, it may be optimal to leave the machine idle, in order to wait for an important job that is not released yet. Minimizing maximum lateness in this setting is NP-hard. But in practice, it can be solved using a branch-and-bound algorithm. Maximizing the profit of earliness In settings with deadlines, it is possible that, if the job is completed by the deadline, there is a profit pj. Otherwise, there is no profit. The goal is to maximize the profit. Single-machine scheduling with deadlines is NP-hard; Sahni presents both exact exponential-time algorithms and a polynomial-time approximation algorithm. Maximizing the throughput The problem 1||ΣUj aims to minimize the number of late jobs, regardless of the amount of lateness. It can be solved optimally by the Hodgson-Moore algorithm. It can also be interpreted as maximizing the number of jobs that complete on time; this number is called the throughput. 
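The ordering rules above translate directly into code. Below is a minimal Python sketch (the function names and the job representation are illustrative, not taken from any cited source) of SPT, WSPT, EDD, and the Hodgson-Moore rule; the first three return an optimal processing order, while the last returns the maximum number of on-time jobs:

import heapq

def spt_order(p):
    # 1||ΣCj: schedule by ascending processing time.
    return sorted(range(len(p)), key=lambda j: p[j])

def wspt_order(p, w):
    # 1||ΣwjCj: schedule by ascending ratio pj/wj.
    return sorted(range(len(p)), key=lambda j: p[j] / w[j])

def edd_order(d):
    # 1||Lmax: schedule by ascending due date.
    return sorted(range(len(d)), key=lambda j: d[j])

def hodgson_moore(jobs):
    # 1||ΣUj: jobs is a list of (processing_time, due_date) pairs.
    # Scan in EDD order; whenever the current set is late, discard
    # the longest job scheduled so far (kept in a max-heap).
    heap, t = [], 0
    for p, d in sorted(jobs, key=lambda job: job[1]):
        heapq.heappush(heap, -p)  # negate: heapq is a min-heap
        t += p
        if t > d:                 # deadline violated: drop longest job
            t += heapq.heappop(heap)
    return len(heap)              # number of on-time jobs

For example, hodgson_moore([(2, 3), (2, 4), (3, 5)]) returns 2: the total length 7 exceeds every due date, so no schedule completes all three jobs on time, but any two of them can both finish on time.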
The problem 1||ΣwjUj aims to minimize the total weight of late jobs. It is NP-hard, since the special case in which all jobs have the same deadline (denoted by 1|dj=d|ΣwjUj) is equivalent to the knapsack problem. The problem 1|rj|ΣwjUj generalizes 1||ΣwjUj by allowing different jobs to have different release times. The problem is NP-hard. However, when all job lengths are equal, the problem can be solved in polynomial time. It has several variants: The weighted optimization variant, 1|rj, pj=p|ΣwjUj, can be solved in polynomial time. The unweighted optimization variant, maximizing the number of jobs that finish on time, denoted 1|rj, pj=p|ΣUj, can also be solved in polynomial time using dynamic programming, when all release times and deadlines are integers. The decision variant - deciding whether it is possible that all given jobs complete on time - can be solved by several algorithms, the fastest of which runs in polynomial time. Jobs can have execution intervals. For each job j, there is a processing time tj and a start-time sj, so it must be executed in the interval [sj, sj+tj]. Since some of the intervals overlap, not all jobs can be completed. The goal is to maximize the number of completed jobs, that is, the throughput. More generally, each job may have several possible intervals, and each interval may be associated with a different profit. The goal is to choose at most one interval for each job, such that the total profit is maximized. For more details, see the page on interval scheduling. More generally, jobs can have time-windows, with both start-times and deadlines, which may be larger than the job length. Each job can be scheduled anywhere within its time-window. Bar-Noy, Bar-Yehuda, Freund, Naor and Schieber present a (1-ε)/2 approximation. 
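The single-interval case just described, where each job either runs exactly in [sj, sj+tj] or is discarded, is the classic interval scheduling problem, for which the earliest-finish-time greedy rule is optimal. A minimal Python sketch (the interval representation is an illustrative assumption):

def max_throughput(intervals):
    # intervals: list of (start, finish) pairs; returns a maximum-size
    # subset of pairwise non-overlapping intervals.
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_finish:  # compatible with all chosen intervals
            chosen.append((s, f))
            last_finish = f
    return chosen

Sorting by finish time ensures that each accepted interval leaves as much room as possible for the remaining ones, which is the standard exchange argument behind the rule's optimality.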
Jobs with non-constant length Workers and machines often become tired after working for a certain amount of time, and this makes them slower when processing future jobs. On the other hand, workers and machines may learn how to work better, and this makes them faster when processing future jobs. In both cases, the length (processing-time) of a job is not constant, but depends on the jobs processed before it. In this setting, even minimizing the maximum completion time becomes non-trivial. There are two common ways to model the change in job length. The job length may depend on the start time of the job. When the length is a weakly-increasing function of the start-time, it is called a deterioration effect; when it is weakly-decreasing, it is called a learning effect. The job length may depend on the sum of normal processing times of previously-processed jobs. When the length is a weakly-increasing function of this sum, it is often called an aging effect. Start-time-based length Cheng and Ding studied makespan minimization and maximum-lateness minimization when the actual length of job j scheduled at time sj is given by an increasing function of sj, where pj is the normal length of j. They proved the following results: When jobs can have arbitrary deadlines, the problems are strongly NP-hard, by reduction from 3-partition. When jobs can have one of two deadlines, the problems are NP-complete, by reduction from partition. When jobs can have arbitrary release times, the problems are strongly NP-hard, by reduction from the problem with arbitrary deadlines. When jobs can have one of two release times, either 0 or R, the problems are NP-complete. Kubiak and van-de-Velde studied makespan minimization when the fatigue starts only after a common due-date d. That is, the actual length of job j scheduled at time sj equals pj if sj ≤ d, and increases with sj at a job-dependent rate otherwise. So, if the job starts before d, its length does not change; if it starts after d, its length grows by a job-dependent rate. They show that the problem is NP-hard, give a pseudopolynomial-time algorithm, and give a branch-and-bound algorithm that solves instances with up to 100 jobs in reasonable time. They also study bounded deterioration, where pj stops growing if the job starts after a common maximum deterioration date D > d. For this case, they give two pseudopolynomial time algorithms. Cheng, Ding and Lin surveyed several studies of a deterioration effect, where the length of job j scheduled at time sj is either linear or piecewise linear in sj, and the change rate can be positive or negative. Sum-of-processing-times-based length The aging effect has two types: In the position-based aging model, the processing time of a job depends on the number of jobs processed before it, that is, on its position in the sequence. In the sum-of-processing-time-based aging model, the processing time of a job is a weakly-increasing function of the sum of the normal (=unaffected by aging) processing times of the jobs processed before it. Wang, Wang, Wang and Wang studied a sum-of-processing-time-based aging model in which the processing time of job j scheduled at position v is pj·(1 + p[1] + ... + p[v−1])^α, where p[l] denotes the normal processing time of the job scheduled at position l, and α is the "aging characteristic" of the machine. In this model, the maximum completion time of a given permutation is obtained by summing these position-dependent processing times. Rudek generalized the model in two ways: allowing the fatigue to be different than the processing time, and allowing a job-dependent aging characteristic. Here, the fatigue factor is built from an increasing function f that describes the dependence of the fatigue on the processing times of earlier jobs, and αj is the aging characteristic of job j. For this model, he proved the following results: Minimizing the maximum completion time and minimizing the maximum lateness are polynomial-time solvable. Minimizing the maximum completion time and minimizing the maximum lateness are strongly NP-hard if some jobs have deadlines. See also Interval scheduling Many solution techniques have been applied to solving single machine scheduling problems. Some of them are listed below. Genetic algorithms Neural networks Simulated annealing Ant colony optimization Tabu search References Optimal scheduling NP-complete problems
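A minimal Python sketch of the sum-of-processing-times aging model as reconstructed above (the exact functional form in the cited papers may differ in normalization details), evaluating the maximum completion time of a given permutation:

def makespan_with_aging(p, alpha):
    # p: normal processing times, listed in the order the jobs are run.
    # Actual time of the v-th job = p[v] * (1 + sum of earlier normal
    # times) ** alpha; returns the maximum completion time.
    t, s = 0.0, 0.0
    for pj in p:
        t += pj * (1.0 + s) ** alpha
        s += pj
    return t

With alpha = 0 this reduces to the ordinary makespan sum(p); a positive alpha models aging (later jobs take longer), a negative alpha a learning effect.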
Single-machine scheduling
[ "Mathematics", "Engineering" ]
2,041
[ "Optimal scheduling", "Industrial engineering", "Computational problems", "Mathematical problems", "NP-complete problems" ]
13,099,198
https://en.wikipedia.org/wiki/Model-driven%20integration
In software design, model-driven integration is a subset of model-driven architecture (MDA) which focuses purely on solving Application Integration problems using executable Unified Modeling Language (UML). External links "Model-Driven Integration in Financial Services" case-study by Metada, 2008 Systems engineering Unified Modeling Language
Model-driven integration
[ "Engineering" ]
66
[ "Systems engineering" ]
13,100,537
https://en.wikipedia.org/wiki/SHARE%20Operating%20System
The SHARE Operating System (SOS) is an operating system introduced in 1959 by the SHARE user group. It is an improvement on the General Motors GM-NAA I/O operating system, the first operating system for the IBM 704. The main objective was to improve the sharing of programs. The SHARE Operating System provided new methods to manage buffers and input/output devices. Like GM-NAA I/O, it allowed execution of programs written in assembly language. SOS initially ran on the IBM 709 computer and was then ported to its transistorized successor, the IBM 7090. A series of articles describing innovations in the system appears in the April 1959 Journal of the Association for Computing Machinery. In 1962, IBM discontinued support for SOS and announced an entirely new (and incompatible) operating system, IBM 7090/94 IBSYS. See also Multiple Console Time Sharing System Timeline of operating systems SQUOZE References External links Upload of the SHARE Operating System software and documentation (partial archive) 1959 software Free software operating systems IBM operating systems Discontinued operating systems
SHARE Operating System
[ "Technology" ]
252
[ "Operating system stubs", "Computing stubs" ]
13,103,839
https://en.wikipedia.org/wiki/Magnetic%20water%20treatment
Magnetic water treatment (also known as anti-scale magnetic treatment or AMT) is a disproven method of reducing the effects of hard water by passing it through a magnetic field as a non-chemical alternative to water softening. A 1996 study by Lawrence Livermore National Laboratory found no significant effect of magnetic water treatment on the formation of scale. Although magnets do affect water to a small degree, and water containing ions is more conductive than purer water, magnetic water treatment is an example of a plausible scientific hypothesis that failed experimental testing and was thus rejected. Claims made for products based on magnetic water treatment are not supported by scientific evidence. Vendors of magnetic water treatment devices frequently use photos and testimonials to support their claims, but omit quantitative detail and well-controlled studies. Advertisements and promotions generally omit system variables, such as corrosion or system mass balance analyticals, as well as measurements of post-treatment water such as the concentration of hardness ions or the distribution, structure, and morphology of suspended particles. See also Fouling Laundry ball Magnet therapy Pulsed-power water treatment References Water treatment Fouling Pseudoscience Magnetic devices
Magnetic water treatment
[ "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
230
[ "Water treatment", "Water pollution", "Environmental engineering", "Water technology", "Materials degradation", "Fouling" ]
13,105,832
https://en.wikipedia.org/wiki/Chloride%20process
The chloride process is used to separate titanium from its ores. The goal of the process is to obtain high-purity titanium dioxide from ores such as ilmenite (FeTiO3) and rutile (TiO2). The strategy exploits the volatility of TiCl4, which is readily purified and converted to the dioxide. Millions of tons of TiO2 are produced annually by this process, mainly for use as white pigments. As of 2017, the chloride process is used alongside the older sulfate process, which relies on hot sulfuric acid to extract iron and other impurities from ores. Process chemistry In this process, the feedstock is treated at 1000 °C with carbon and chlorine gas, giving titanium tetrachloride. Typical is the conversion starting from the ore ilmenite: 2 FeTiO3 + 7 Cl2 + 6 C → 2 TiCl4 + 2 FeCl3 + 6 CO The process is a variant of a carbothermic reaction, which exploits the reducing power of carbon. The titanium tetrachloride is purified by distillation. Other impurities are converted to the respective chlorides as well, but most are less volatile than TiCl4. Vanadium tetrachloride and vanadium oxytrichloride codistill with TiCl4, but these impurities can be removed by chemical reduction. The purified TiCl4 can subsequently be oxidized in an oxygen flame or plasma to give the pure titanium dioxide: TiCl4 + O2 → TiO2 + 2 Cl2 In this way, chlorine is recovered for recycling. Process engineering The standard chloride process for titanium dioxide base material consists of the following main production units: Oxidation Chlorination Condensation Purification The following auxiliary production units are necessary: Ore/coke storage Off-Gas Treatment Dust treatment Under steady-state conditions the chloride process is a continuous cycle in which chlorine changes from the oxidized state to the reduced state and back. The oxidized form of the chlorine is molecular chlorine (Cl2); the reduced form is titanium tetrachloride (TiCl4). The oxidizing agent is molecular oxygen (O2); the reducing agent is coke. Both must be fed into the process. The titanium is fed into the process in the form of ore, together with the coke. Titanium ore is a mixture of oxides. The added O2 leaves the process with the product TiO2; the added coke leaves the process, together with the oxygen from the titanium ore, in the form of CO and CO2. The other metals fed in leave the process in the form of metal chlorides. References External links Chemical processes Chlorine
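The two equations above fix the ideal steady-state material balance. A minimal Python sketch of the feed required per tonne of TiO2, assuming ideal stoichiometry and complete recycle of the chlorine returned by the oxidation step (illustrative, not plant data):

# Molar masses in g/mol
M_TIO2, M_CL2, M_C, M_O2 = 79.9, 70.9, 12.0, 32.0

def feed_per_tonne_tio2():
    # Per mole of TiO2 made from ilmenite:
    #   chlorination consumes 3.5 mol Cl2 and 3 mol C,
    #   oxidation returns 2 mol Cl2,
    #   so the net Cl2 make-up is 1.5 mol (the chlorine lost as FeCl3),
    #   and 1 mol O2 is consumed.
    mol = 1_000_000 / M_TIO2              # moles of TiO2 per tonne
    cl2_makeup = 1.5 * mol * M_CL2 / 1e6  # tonnes of make-up chlorine
    coke = 3.0 * mol * M_C / 1e6          # tonnes of coke
    oxygen = 1.0 * mol * M_O2 / 1e6       # tonnes of oxygen
    return cl2_makeup, coke, oxygen

# feed_per_tonne_tio2() -> roughly (1.33, 0.45, 0.40) tonnes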
Chloride process
[ "Chemistry" ]
558
[ "Chemical process engineering", "Chemical processes", "nan" ]
13,106,156
https://en.wikipedia.org/wiki/List%20of%20HTTP%20header%20fields
HTTP header fields are a list of strings sent and received by both the client program and server on every HTTP request and response. These headers are usually invisible to the end-user and are only processed or logged by the server and client applications. They define how information sent or received through the connection is encoded (as in Content-Encoding), the session verification and identification of the client (as in browser cookies, IP address, user-agent) or their anonymity (VPN or proxy masking, user-agent spoofing), how the server should handle data (as in Do-Not-Track or Global Privacy Control), and the age (the time it has resided in a shared cache) of the document being downloaded, amongst others. General format In HTTP version 1.x, header fields are transmitted after the request line (in case of a request HTTP message) or the response line (in case of a response HTTP message), which is the first line of a message. Header fields are colon-separated key-value pairs in clear-text string format, terminated by a carriage return (CR) and line feed (LF) character sequence. The end of the header section is indicated by an empty field line, resulting in the transmission of two consecutive CR-LF pairs. In the past, long lines could be folded into multiple lines; continuation lines were indicated by the presence of a space (SP) or horizontal tab (HT) as the first character on the next line. This folding was deprecated in RFC 7230. HTTP/2 and HTTP/3 instead use a binary protocol, where headers are encoded in a single HEADERS and zero or more CONTINUATION frames using HPACK (HTTP/2) or QPACK (HTTP/3), which both provide efficient header compression. The request or response line from HTTP/1 has also been replaced by several pseudo-header fields, each beginning with a colon (:). Field names A core set of fields is standardized by the Internet Engineering Task Force (IETF). The Field Names, Header Fields and Repository of Provisional Registrations are maintained by the IANA. Additional field names and permissible values may be defined by each application. Header field names are case-insensitive. This is in contrast to HTTP method names (GET, POST, etc.), which are case-sensitive. HTTP/2 makes some restrictions on specific header fields (see below). Non-standard header fields were conventionally marked by prefixing the field name with X-, but this convention was deprecated in June 2012 because of the inconveniences it caused when non-standard fields became standard. An earlier restriction on use of Downgraded- was lifted in March 2013. Field values A few fields can contain comments (e.g. in the User-Agent, Server, and Via fields), which can be ignored by software. Many field values may contain a quality (q) key-value pair separated by an equals sign, specifying a weight to use in content negotiation. For example, a browser may indicate that it accepts information in German or English, with German as preferred by setting the q value for de higher than that of en, as follows: Accept-Language: de; q=1.0, en; q=0.5 Size limits The standard imposes no limits on the size of each header field name or value, or on the number of fields. However, most servers, clients, and proxy software impose some limits for practical and security reasons. For example, the Apache 2.3 server by default limits the size of each field to 8,190 bytes, and there can be at most 100 header fields in a single request. 
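The quality-weight syntax shown in the Accept-Language example above is easy to parse. A minimal Python sketch (not a full RFC-grammar parser; it ignores quoting and other parameters):

def parse_q_list(value):
    # Parse e.g. 'de; q=1.0, en; q=0.5' into (item, q) pairs,
    # highest weight first; q defaults to 1.0 when omitted.
    parsed = []
    for entry in value.split(","):
        parts = [p.strip() for p in entry.split(";")]
        q = 1.0
        for param in parts[1:]:
            if param.lower().startswith("q="):
                q = float(param[2:])
        parsed.append((parts[0], q))
    return sorted(parsed, key=lambda pair: -pair[1])

# parse_q_list("de; q=1.0, en; q=0.5") -> [('de', 1.0), ('en', 0.5)]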
Request fields Standard request fields Common non-standard request fields Response fields Standard response fields Common non-standard response fields Effects of selected fields Avoiding caching If a web server responds with Cache-Control: no-cache then a web browser or other caching system (intermediate proxies) must not use the response to satisfy subsequent requests without first checking with the originating server (this process is called validation). This header field is part of HTTP version 1.1, and is ignored by some caches and browsers. It may be simulated by setting the Expires HTTP version 1.0 header field value to a time earlier than the response time. Note that no-cache does not instruct the browser or proxies about whether or not to cache the content. It just tells the browser and proxies to validate the cache content with the server before using it (this is done by using the If-Modified-Since, If-Unmodified-Since, If-Match, or If-None-Match attributes mentioned above). Sending a no-cache value thus instructs a browser or proxy to not use the cache contents merely based on "freshness criteria" of the cache content. Another common way to prevent old content from being shown to the user without validation is Cache-Control: max-age=0. This instructs the user agent that the content is stale and should be validated before use. The header field Cache-Control: no-store is intended to instruct a browser application to make a best effort not to write it to disk (i.e. not to cache it). The request that a resource should not be cached is no guarantee that it will not be written to disk. In particular, the HTTP/1.1 definition draws a distinction between history stores and caches. If the user navigates back to a previous page, a browser may still show a page that has been stored on disk in the history store. This is correct behavior according to the specification. Many user agents show different behavior in loading pages from the history store or cache depending on whether the protocol is HTTP or HTTPS. The Cache-Control: no-cache HTTP/1.1 header field is also intended for use in requests made by the client. It is a means for the browser to tell the server and any intermediate caches that it wants a fresh version of the resource. The Pragma: no-cache header field, defined in the HTTP/1.0 spec, has the same purpose. It, however, is only defined for the request header. Its meaning in a response header is not specified. The behavior of Pragma: no-cache in a response is implementation specific. While some user agents do pay attention to this field in responses, the HTTP/1.1 RFC specifically warns against relying on this behavior. See also HTTP header injection HTTP ETag List of HTTP status codes References External links Headers: Permanent Message Header Field Names : IETF HTTP State Management Mechanism : HTTP Semantics : HTTP Caching : HTTP/1.1 : HTTP/2 : HTTP/3 : Forwarded HTTP Extension : Prefer Header for HTTP HTTP/1.1 headers from a web server point of view Internet Explorer and Custom HTTP Headers - EricLaw's IEInternals - Site Home - MSDN Blogs HTTP header fields
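The directives discussed above amount to a small decision procedure for a cache. A simplified Python sketch (directive parsing is omitted; the dictionary form of the parsed Cache-Control header is an assumption for illustration, and real caches implement many more rules):

def must_revalidate(cache_control, age_seconds):
    # cache_control: parsed directives, e.g. {'no-cache': True} or
    # {'max-age': 0}. Returns True if the stored response may only be
    # used after validation with the origin server.
    if cache_control.get("no-store"):
        return True   # should not have been stored in the first place
    if cache_control.get("no-cache"):
        return True   # usable only after validation
    max_age = cache_control.get("max-age")
    if max_age is not None and age_seconds >= max_age:
        return True   # stale; max-age=0 forces validation immediately
    return False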
List of HTTP header fields
[ "Technology" ]
1,446
[ "Computing-related lists", "Internet-related lists" ]
26,937,802
https://en.wikipedia.org/wiki/Progress%20in%20Electromagnetics%20Research
Progress in Electromagnetics Research is a peer-reviewed open access scientific journal covering all aspects of electromagnetic theory and applications. It was established in 1989 as Electromagnetic Waves. The editors-in-chief are Weng Cho Chew (Purdue University) and Sailing He (Royal Institute of Technology). Jin Au Kong was the founding editor-in-chief. Abstracting and indexing The journal is abstracted and indexed by the Science Citation Index Expanded, Current Contents, Inspec, Scopus, and Compendex. It is also a member of CrossRef. According to the Journal Citation Reports, the journal had a 2019 impact factor of 1.898. However, it was not listed in 2012 because of "anomalous citation patterns resulting in a significant distortion of the Journal Impact Factor, so that the rank does not reflect the journal's citation performance in the literature". References External links Engineering journals Electromagnetism journals Academic journals established in 1989 English-language journals Electrical and electronic engineering journals
Progress in Electromagnetics Research
[ "Engineering" ]
205
[ "Electrical engineering", "Electronic engineering", "Electrical and electronic engineering journals" ]
26,939,239
https://en.wikipedia.org/wiki/Piperoxan
Piperoxan, also known as benodaine, was the first antihistamine to be discovered. This compound, derived from benzodioxan, was prepared in the early 1930s by Daniel Bovet and Ernest Fourneau at the Pasteur Institute in France. Fourneau had formerly investigated it as an α-adrenergic-blocking agent; Bovet and Fourneau then demonstrated that it also antagonized histamine-induced bronchospasm in guinea pigs, and published their findings in 1933. Bovet went on to win the 1957 Nobel Prize in Physiology or Medicine for his contribution. One of Bovet and Fourneau's students, Anne-Marie Staub, published the first structure–activity relationship (SAR) study of antihistamines in 1939. Piperoxan and its analogues themselves were not clinically useful due to the production of toxic effects in humans and were followed by phenbenzamine (Antergan) in the early 1940s, which was the first antihistamine to be marketed for medical use. Synthesis Condensation of catechol [120-80-9] (1) with epichlorohydrin in the presence of an aqueous base can be visualized as proceeding initially via the epoxide (2). Opening of the oxirane ring by the phenoxide anion then leads to 2-hydroxymethyl-1,4-benzodioxane [3663-82-9] (3). Halogenation with thionyl chloride gives 2-chloromethyl-1,4-benzodioxane [2164-33-2] (4). Displacement of the leaving group by piperidine completes the synthesis of piperoxan (5). References Abandoned drugs Alpha blockers Antihistamines Benzodioxans French inventions 1-Piperidinyl compounds
Piperoxan
[ "Chemistry" ]
393
[ "Pharmacology", "Alpha blockers", "Drug safety", "Abandoned drugs" ]
26,940,632
https://en.wikipedia.org/wiki/APC/C%20activator%20protein%20CDH1
Cdh1 (cdc20 homolog 1) is one of the substrate adaptor proteins of the anaphase-promoting complex (APC) in the budding yeast Saccharomyces cerevisiae. Functioning as an activator of the APC/C, Cdh1 regulates the activity and substrate specificity of this ubiquitin E3-ligase. The human homolog is encoded by the FZR1 gene, which is not to be confused with the CDH1 gene. Introduction Cdh1 plays a pivotal role in controlling cell division at the end of mitosis (telophase) and in the subsequent G1 phase of the cell cycle: By recognizing and binding proteins (like mitotic cyclins) which contain a destruction box (D-box) and an additional degradation signal (KEN box), Cdh1 recruits them in a C-box-dependent mechanism to the APC for ubiquitination and subsequent proteolysis. Cdh1 is required for the exit from mitosis. Furthermore, it is thought to be a possible target of a BUB2-dependent spindle checkpoint pathway. Function The anaphase-promoting complex/cyclosome (APC/C) is a ubiquitin E3-ligase complex. Once activated it attaches chains of ubiquitin molecules to its target substrates. These chains are recognised and the substrate is degraded by the proteasome. Cdh1 is one of the co-activator proteins of the APC/C and therefore contributes to the regulation of protein degradation by providing substrate specificity to the E3-ligase in a cell-cycle-regulated manner. Cdh1 can exist in several forms. It can be phosphorylated by CDKs, which inactivates it, and it can be dephosphorylated by Cdc14. In the dephosphorylated form it can interact with the APC/C and form the active ligase APCCdh1. Suppression of Cdh1 by RNA interference leads to an aberrant accumulation of APCCdh1 target proteins, such as cyclin A and B, the kinase Aurora B, PLK1, Skp2 and Cdc20, another APC/C co-activator. Stabilising the G1 phase The main function of Cdh1 is to suppress the re-accumulation of mitotic cyclins and other cell cycle determinants and therefore to stabilise the G1 phase. It is inactive in early stages of mitosis and only becomes active in the transition from late mitosis to G1. During the cell cycle, Cdk is activated by cyclins; this leads to mitotic entry and promotes APCCdc20 activation. APCCdc20 degrades the cyclins; this, together with the activation of Cdc14, leads to the formation of APCCdh1. APCCdh1 keeps the cyclin concentration low and the Cdk inactive, which maintains the G1 phase. G1/S transition APCCdh1 is thought to prevent premature S-phase entry by degrading mitotic cyclins in G1 and to regulate processes unrelated to the cell cycle. To enter S-phase, APCCdh1 must be inactivated. This is achieved through degradation of the complex and through phosphorylation of Cdh1. Exit from mitosis One characteristic of the budding yeast cell's exit from mitosis after chromosome segregation is the removal of the mitotic determinants. This requires the inactivation of mitotic CDKs, which are inactivated through ubiquitin-dependent pathways. The protein phosphatase Cdc14 dephosphorylates Cdh1 and therefore activates APCCdh1. As a result, the concentration of many APCCdh1 substrates (e.g. M-cyclins) drops at the cell's exit from mitosis. Cdh1 functions as a tumour suppressor Cdh1-deficient cells can proliferate but accumulate mitotic errors and have difficulties with cytokinesis. It has been shown that APCCdh1-mediated degradation of Plk1 plays an important role in preventing mitosis in cells that have DNA damage. In healthy cells Cdh1 stays inactive from late G1 to early mitosis. 
It becomes active again only in the transition from late mitosis to G1. A cell that has suffered DNA damage shows active Cdh1 even in late G1 and therefore blocks mitotic entry. One substrate of APCCdh1 is the transcription factor Ets2, which is activated by the Ras-Raf-MAPK signalling pathway and induces the expression of cyclin D1. This pathway stimulates cell proliferation. Increased expression of Ets2 has been associated with various cancer types, such as cervical cancer and oesophageal squamous cell carcinoma. Function of Cdh1 in non-dividing cells APCCdh1 has been shown to be active in adult brain and liver tissues. The complex seems to have a function in axon growth, in the morphology and plasticity of synapses, and in learning and memory. Structure The following structural information is based on the Cdh1 protein of Saccharomyces cerevisiae, also named Hct1. Cdh1 is a Cdc20 homolog and corresponds to Fizzy-related in Drosophila. The protein sequence of Cdh1 consists of 566 amino acids and has a molecular weight of 62.8 kDa. Cdh1 comprises different domains important for its proper function when it interacts with the APC/C complex and its various substrates. Activation and APC/C binding In the N-terminal region, at amino acid positions 55-61, the Cdh1 protein contains a C-box motif, which is required for the association with the APC/C complex. The residue R56 in particular seems to be important for binding to the APC/C in vitro and for Cdh1 function in vivo. Cdh1 contains multiple phosphorylation sites for the kinase Cdc28. When Cdh1 is hyperphosphorylated, its association with the APC/C is blocked, leading to the inactive form of Cdh1. Activation can be induced by dephosphorylation through the phosphatase Cdc14, which leads to the binding of Cdh1 to the APC/C. Cdh1 also includes a poly-Ser stretch in the N-terminal region, at residues 32-38. In general, serine, threonine and tyrosine side chains can act as phosphorylation sites for posttranslational modification. In the Cdh1 protein, amino acid modifications can be found at residue 156 (a phosphoserine) and residue 157 (a phosphothreonine). Cdh1 also contains a C-terminal Ile-Arg (IR) dipeptide motif at residues 565 and 566, which is suggested to bind to the Cdc27 subunit of the APC. Substrate binding Cdh1 has 7 WD repeats, located between the middle of the protein and the C-terminal end. They have a conserved core length of about 38 to 43 amino acids, which generally ends with tryptophan-aspartic acid (WD). WD-repeat proteins are assumed to form a circularized beta-propeller structure, which is thought to be essential for their biological function. The WD repeats in Cdh1 are suspected to be the binding sites for the APC/C substrates; thus Cdh1 appears to act as a linker between the APC/C complex and its substrates. The APC/C substrates contain a D-box and/or a KEN-box, which are important for the interaction with Cdh1. See also APC/c Cdc20 References External links SWISS-MODEL Repository-Model Details CDH1/YGL003C Summary Cell cycle Saccharomyces cerevisiae genes Proteins
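The mutual antagonism described above (CDK-cyclin phosphorylates and inactivates Cdh1, while APCCdh1 degrades cyclin) can be illustrated with a toy ordinary-differential-equation sketch. The equations and rate constants below are illustrative assumptions chosen for intuition only, not a published model of the yeast cell cycle.

```python
# Toy mass-action sketch of the Cdh1/CDK-cyclin antagonism described
# above. All equations and rate constants are illustrative assumptions.

def simulate(k_s, k_d=1.0, k_act=1.0, k_inact=10.0, dt=0.002, t_end=40.0):
    cyc, cdh1 = 0.0, 1.0  # start in G1: Cdh1 fully active, no cyclin
    for _ in range(int(t_end / dt)):
        # Cyclin: synthesis at rate k_s, degradation by active APC-Cdh1.
        dcyc = k_s - k_d * cdh1 * cyc
        # Cdh1: Cdc14-like activation of the inactive pool, minus
        # CDK-cyclin-dependent inactivating phosphorylation.
        dcdh1 = k_act * (1.0 - cdh1) - k_inact * cyc * cdh1
        cyc += dcyc * dt
        cdh1 += dcdh1 * dt
    return cyc, cdh1

for k_s in (0.04, 0.5):  # low vs. high cyclin synthesis rate
    cyc, cdh1 = simulate(k_s)
    print(f"k_s={k_s}: cyclin={cyc:6.2f}, active Cdh1={cdh1:4.2f}")
```

With low cyclin synthesis, active Cdh1 stays high and cyclin stays low (a stable G1-like state); with high synthesis, cyclin accumulates, inactivates Cdh1, and the system leaves G1, mirroring the switch behaviour described in the text.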
APC/C activator protein CDH1
[ "Chemistry", "Biology" ]
1,693
[ "Biomolecules by chemical classification", "Cellular processes", "Molecular biology", "Proteins", "Cell cycle" ]
26,941,794
https://en.wikipedia.org/wiki/Superconducting%20electric%20machine
Superconducting electric machines are electromechanical systems that rely on the use of one or more superconducting elements. Since superconductors have no DC resistance, such machines typically have greater efficiency. The parameter of greatest interest in a superconducting machine is the generation of a very high magnetic field, which is not possible in a conventional machine. This allows a substantial decrease in motor volume and hence a large increase in power density. However, since superconductors only have zero resistance below a superconducting transition temperature, Tc, that is hundreds of degrees lower than room temperature, cryogenics are required. History DC homopolar machines are among the oldest electric machines. Michael Faraday made a homopolar motor in 1831. Superconducting DC homopolar machines use superconductors in their stationary field windings and normal conductors in their rotating pickup winding. In 2005, the General Atomics company received a contract for the creation of a large low-speed superconducting homopolar motor for ship propulsion. Superconducting homopolar generators have been considered as pulsed power sources for laser weapon systems. However, homopolar machines have not been practical for most applications. In the past, experimental AC synchronous superconducting machines were made with rotors using low-temperature metal superconductors that exhibit superconductivity when cooled with liquid helium. These worked, but the high cost of liquid helium cooling made them too expensive for most applications. More recently, AC synchronous superconducting machines have been made with ceramic rotor conductors that exhibit high-temperature superconductivity. These have liquid-nitrogen-cooled ceramic superconductors in their rotors. The ceramic superconductors are also called high-temperature or liquid-nitrogen-temperature superconductors. Because liquid nitrogen is relatively inexpensive and easier to handle, there is greater interest in the ceramic superconductor machines than in the liquid-helium-cooled metal superconductor machines. Present interest Present interest in AC synchronous ceramic superconducting machines is in larger machines, such as the generators used in utility and ship power plants and the motors used in ship propulsion. American Superconductor and Northrop Grumman created and demonstrated a 36.5 MW ceramic superconductor ship propulsion motor. Because they are lightweight and therefore offer lower tower and construction costs, superconducting generators are seen as a promising technology for wind turbines. With superconducting generators, the weight and volume could be reduced compared to direct-drive synchronous generators, which could lower the cost of the whole turbine. The first commercial turbines were expected to be installed in approximately 2020. Advantages and disadvantages of superconducting electric machines Compared with a conventional-conductor machine, superconducting electric machines typically have the following advantages: Reduced resistive losses, but only in the rotor electromagnet. Reduced size and weight per unit of power capacity, not counting the refrigeration equipment. There are also the following disadvantages: The cost, size, weight, and complications of the cooling system. A sudden decrease or elimination of motor or generator action if the superconductors leave their superconductive state. A greater tendency for rotor speed instability. 
A superconducting rotor does not have the inherent damping of a conventional rotor, so its speed may hunt, or oscillate, around its synchronous speed. Motor bearings need to be able to withstand the cold or need to be insulated from the cold rotor. As a synchronous motor, electronic control is essential for practical operation. Electronic control introduces expensive harmonic loss in the supercooled rotor electromagnet. High-temperature superconductors versus low-temperature superconductors High-temperature superconductors (HTS) become superconducting at more easily obtainable liquid nitrogen temperatures; liquid nitrogen is much more economical than the liquid helium typically required by low-temperature superconductors. HTS are ceramics and are fragile relative to conventional metal-alloy superconductors such as niobium-titanium. Ceramic superconductors cannot be bolted or welded together to form superconducting junctions. Ceramic superconductors must be cast in their final shape when created. This may increase production costs. Ceramic superconductors can be more easily driven out of superconductivity by oscillating magnetic fields. This could be a problem during transient conditions, such as a sudden load or supply change. References Further reading Bumby, J. R., Superconducting Rotating Electrical Machines, Oxford: Clarendon Press, 192 pages, 1983. Kuhlmann, J. H., Design of Electrical Apparatus, 3rd edition; New York: John Wiley & Sons, Inc., 512 pages, 1950. <Note: this book does not consider superconducting machines. However, it provides excellent detailed design information that could be used when designing a superconducting machine.> Tubbs, S. P., Design and Analysis of a Superconducting High Speed Synchronous/Induction Motor, ProQuest Direct Complete Database, Publication No. AAT LD03278, 227 pages, 1995. <Literature evaluation, analysis, experimental results, and a large bibliography.> External links American Superconductor, AC synchronous superconducting ceramic motors and generators http://www.amsc.com/ Electric motors Electrical generators
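The power-density claim made above can be made concrete with a rough sizing estimate. A common first-order rule for rotating machines is that torque scales with the product of air-gap shear stress (flux density times linear current loading) and rotor volume. The sketch below is a back-of-the-envelope illustration; all numbers are invented assumptions, not data for any real machine.

```python
# Rough sizing sketch: torque ~ 2 * sigma * V_rotor, where the air-gap
# shear stress sigma ~ B * K (flux density times linear current loading).
# All numbers below are illustrative assumptions.

def rotor_volume(torque_nm, b_gap_tesla, current_loading_a_per_m):
    sigma = b_gap_tesla * current_loading_a_per_m  # shear stress, N/m^2
    return torque_nm / (2.0 * sigma)               # rotor volume, m^3

TORQUE = 4.0e6  # N*m, order of a large, low-speed ship-propulsion motor
K = 8.0e4       # A/m current loading, kept the same for both cases

v_conv = rotor_volume(TORQUE, b_gap_tesla=0.9, current_loading_a_per_m=K)
v_sc = rotor_volume(TORQUE, b_gap_tesla=3.0, current_loading_a_per_m=K)

print(f"conventional rotor volume:    {v_conv:.1f} m^3")
print(f"superconducting rotor volume: {v_sc:.1f} m^3")
print(f"volume ratio: {v_conv / v_sc:.1f}x smaller")
```

At the same current loading, raising the air-gap field from roughly 1 T to several tesla shrinks the rotor volume in proportion, though the refrigeration plant offsets part of this saving, as the disadvantages list above notes.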
Superconducting electric machine
[ "Physics", "Technology", "Engineering" ]
1,127
[ "Electrical generators", "Machines", "Engines", "Electric motors", "Physical systems", "Electrical engineering" ]
26,941,845
https://en.wikipedia.org/wiki/COLE%20Publishing
COLE Publishing is a privately held company with offices in Wisconsin and Minnesota. The company produces nine focused trade publications for the liquid waste and environmental wastewater industries. Its titles include Pumper, Cleaner, PRO (Portable Restroom Operator), Onsite Installer, Municipal Sewer & Water (MSW), Treatment Plant Operator (TPO), Gas, Oil & Mining Contractor (GOMC), digDifferent and Plumber print and online magazines. The company also founded the annual WWETT (Water & Wastewater Equipment, Treatment & Transport) Show, formerly the Pumper & Cleaner Environmental Expo International, the world’s largest annual trade show for environmental service professionals and one of the fastest-growing trade shows in the United States, attracting over 14,000 visitors from more than 50 countries. The WWETT Show was sold to Informa Exhibitions in February 2016. History COLE Publishing was founded by Bob Kendall and Pete Lawonn in June 1979. Lawonn pumped septic systems and had a spare 2,000-gallon vacuum tank he needed to sell, but no effective way to market it. Word-of-mouth and newspaper advertising didn’t seem like the best ways to reach contractors who might be interested in his product. The late John DiVall of Jay’s Waste Equipment advised them that there was a need for an industry trade journal. Taking the advice to heart, Kendall and Lawonn began publication. The first issue of Midwest Pumper was mailed to 2,500 contractors in eight states. Today, Pumper is the flagship magazine for COLE Publishing and COLE Inc. References External links Official site Publishing companies of the United States Companies based in Wisconsin Publishing companies established in 1979 Sewerage
COLE Publishing
[ "Chemistry", "Engineering", "Environmental_science" ]
346
[ "Sewerage", "Environmental engineering", "Water pollution" ]
26,942,963
https://en.wikipedia.org/wiki/Welding%20joint
In metalworking, a welding joint is a point or edge where two or more pieces of metal or plastic are joined together. Joints are formed by welding two or more workpieces according to a particular geometry. Five types of joints are recognized by the American Welding Society: butt, corner, edge, lap, and tee. These types may have various configurations at the joint where the actual welding can occur. Butt welds Butt welds are welds in which the two pieces of metal to be joined lie in the same plane. These welds require only minimal preparation and are used with thin sheet metals that can be welded in a single pass. Common defects that can weaken a butt weld include entrapment of slag, excessive porosity, and cracking. For strong welds, the goal is to use the least amount of welding material possible. Butt welds are prevalent in automated welding processes, such as submerged-arc welding, because of their relative ease of preparation. When metals are welded without human guidance, there is no operator to compensate for non-ideal joint preparation, so the simple geometry of the butt weld lets it be fed through automated welding machines efficiently. Types There are many types of butt welds, but all fall within one of these categories: single-welded butt joints, double-welded butt joints, and open or closed butt joints. A single-welded butt joint has been welded from one side only. A double-welded butt joint is created when the weld has been made from both sides; with double welding, the depths of the two welds can vary slightly. A closed weld is a joint in which the two pieces to be joined are touching during the welding process. An open weld is a joint in which the two pieces have a small gap between them during welding. Square butt joints The square groove is a butt welding joint in which the two pieces are flat and parallel to each other. This joint is simple to prepare, economical to use, and provides satisfactory strength, but it is limited by joint thickness. The closed square butt weld is a square-groove joint with no spacing between the pieces. This joint type is common with gas and arc welding. For thicker joints, the edge of each member of the joint must be prepared to a particular geometry to provide accessibility for welding and to ensure the desired weld soundness and strength. The opening or gap at the root of the joint and the included angle of the groove should be selected to require the least weld metal that still gives the needed access and meets strength requirements. Only metal up to about 4.5 mm thick is usually used for square butt joints. V-joints Single-V welds are similar to a bevel joint, but instead of only one side having a bevelled edge, both sides of the weld joint are bevelled. In thick metals, and when welding can be performed from both sides of the workpiece, a double-V joint is used. When welding thicker metals, a double-V joint requires less filler material because there are two narrower V-joints compared to a single wider V-joint. The double-V joint also helps compensate for warping forces: with a single-V joint, stress tends to warp the piece in one direction when the V is filled, but with a double-V joint the welds on both sides of the material impose opposing stresses, keeping the material straight. J-joints Single-J butt welds have one piece shaped like a J, which easily accepts filler material, while the other piece is square. 
A J-groove is formed either with special cutting machinery or by grinding the joint edge into the form of a J. Although a J-groove is more difficult and costly to prepare than a V-groove, a single J-groove on metal between half an inch and three-quarters of an inch thick provides a stronger weld that requires less filler material. Double-J butt welds have one piece with a J shape on both sides while the other piece is square. U-joints Single-U butt welds have both edges of the weld surface shaped like a J, so that together they form a U. Double-U joints have a U formation on both the top and bottom of the prepared joint. U-joints are the most expensive edges to prepare and weld. They are usually used on thick base metals where a V-groove would be at such an extreme angle that it would cost too much to fill. Tee-joints The tee weld joint is formed when two bars or sheets are joined perpendicular to each other in the form of a T shape. This weld can be made by the resistance butt welding process or by extrusion welding; usually two flat pieces of poly (polyethylene) are welded at 90 degrees to each other and extrusion-welded on both sides. Others Thin sheet metals are often flanged to produce edge-flange or corner-flange welds. These welds are typically made without the addition of filler metal because the flange melts and provides all the filler needed. Pipes and tubing can be made by rolling and welding together strips, sheets, or plates of material. Flare-groove joints are used for welding metals that, because of their shape, form a convenient groove for welding, such as a pipe against a flat surface. Selection of the right weld joint depends on the thickness of the material and the process used (a rule-of-thumb sketch follows below). Square welds are the most economical for pieces thinner than 3/8 inch, because they do not require the edge to be prepared. Double-groove welds are the most economical for thicker pieces because they require less weld material and time. The use of fusion welding is common for closed single-bevel, closed single-J, open single-J, and closed double-J butt joints. The use of gas and arc welding is ideal for double-bevel, closed double-bevel, open double-bevel, single-bevel, and open single-bevel butt welds. Ideal joint thicknesses for the various types of butt weld are tabulated in welding references; when the thickness of a butt weld is specified, it is measured at the thinner part and does not include the weld reinforcement. Cruciform A cruciform joint is a joint in which four spaces are created by the welding of three plates of metal at right angles. Cruciform joints suffer fatigue when subjected to continuously varying loads. In the American Bureau of Shipping Rules for Steel Vessels, cruciform joints may be considered a double barrier if the two substances requiring a double barrier are in opposite corners diagonally. Double barriers are often required to separate oil and seawater, chemicals and potable water, etc. Plate edge preparation In common welding practice, the welding surface must be prepared to ensure the strongest weld possible. Preparation is needed for all forms of welding and all types of joints. Generally, butt welds require very little preparation, but some is still needed for the best results. Plate edges can be prepared for butt joints in various ways, but the five most common techniques are oxyacetylene cutting (oxy-fuel welding and cutting), machining, chipping, grinding, and air carbon-arc cutting or gouging. Each technique has its own advantages. 
For steel materials, oxyacetylene cutting is the most common form of preparation. This technique is advantageous because of its speed, low cost, and adaptability. Machining is the most effective for reproducibility and the mass production of parts. J- and U-joints are commonly prepared by machining because of the need for high accuracy. Chipping is used to prepare parts that were produced by casting. Grinding is reserved for small sections that cannot be prepared by other methods. Air carbon-arc cutting is common in industries that work with stainless steels, cast iron, or ordinary carbon steel. Prior to welding dissimilar materials, one or both faces of the groove can be buttered. The buttered layer can be the same alloy as the filler metal or a different filler metal that acts as a buffer between the two metals to be joined. Standards AWS A3.0: "Standard welding terms and definitions" ISO 9692: "Welding and allied processes. Recommendations for joint preparation." BS 499-2C: "Welding terms and symbols. European arc welding symbols in chart form" See also Fillet weld Mechanical joint References and notes Welding
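The joint-selection guidance referenced above can be condensed into a small rule-of-thumb helper. The 3/8-inch square-groove limit and the preference for double grooves on thick, two-side-accessible plate come from the text; the 0.75-inch cut-off between V- and U-groove preparation is an illustrative assumption, and real selection also depends on process, position, and code requirements.

```python
def suggest_butt_joint(thickness_in: float, weldable_both_sides: bool = True) -> str:
    """Rule-of-thumb butt-joint selection by plate thickness in inches.

    The sub-3/8 in square-groove rule and the double-groove preference
    follow the guidance above; the 0.75 in V/U cut-off is an assumption.
    """
    if thickness_in < 3.0 / 8.0:
        # Thin plate: no edge preparation needed, square groove is cheapest.
        return "square groove"
    if thickness_in <= 0.75:
        return "double-V groove" if weldable_both_sides else "single-V groove"
    # Very thick plate: a V-groove would need too much filler, so use a U.
    return "double-U groove" if weldable_both_sides else "single-U groove"

for t in (0.25, 0.5, 1.25):
    print(f"{t:>5.2f} in -> {suggest_butt_joint(t)}")
```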
Welding joint
[ "Engineering" ]
1,801
[ "Welding", "Mechanical engineering" ]
26,944,086
https://en.wikipedia.org/wiki/Multi-channel%20length
Multi-channel length is a technique for reducing power leakage, in both active and idle modes, in CMOS (MOSFET) technology. Other techniques to reduce leakage, such as power gating and SRAM retention, are targeted at reducing leakage power when the device, or portions of it, are not operating. Short-channel-length devices provide higher performance than longer-channel-length devices, but the longer channel length has significantly reduced subthreshold leakage current. In this generation of the power-management toolbox, two channel lengths (a nominal short channel and a longer channel) are used for the speed-versus-leakage trade-off. Timing-critical paths are constructed of short-channel-length cells, but on non-timing-critical paths, the longer-channel-length cells can be used to trade speed for lower leakage (a simplified sketch follows below). Multiple-channel-length synthesis achieves up to 30% leakage reduction. One additional use of longer-channel-length transistors is for always-on logic and for special power-management cells (isolation cells, always-on buffers, etc.) where speed is not critical. References Rusu, S.; Tam, S.; Muljono, H.; Ayers, D.; Chang, J.; Cherkauer, B.; Stinson, J.; Benoit, J.; Varada, R.; Leung, J.; Limaye, R. D.; Vora, S.; "A 65-nm Dual-Core Multithreaded Xeon Processor With 16-MB L3 Cache," IEEE Journal of Solid-State Circuits, vol. 42, no. 1, pp. 17-25, Jan. 2007. Gammie, G.; Wang, A.; Mair, H.; Lagerquist, R.; Minh Chau; Royannez, P.; Gururajarao, S.; Uming Ko; "SmartReflex Power and Performance Management Technologies for 90 nm, 65 nm, and 45 nm Mobile Application Processors," Proceedings of the IEEE, vol. 98, no. 2, pp. 144-159, Feb. 2010. 40-nm FPGA Power Management and Advantages, Altera Inc., December 2008, ver. 1.2. Power standards Digital electronics Electronic design automation Electronics optimization MOSFETs
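In synthesis flows, this trade-off is typically implemented as a post-optimization cell swap: cells on paths with positive timing slack are replaced by their longer-channel (slower, lower-leakage) variants. The sketch below illustrates that idea; the delay and leakage numbers are invented, not characterization data for any real standard-cell library.

```python
# Simplified multi-channel-length leakage optimization (illustrative).
# Each cell has a short-channel and a long-channel variant; the long
# variant is slower but leaks far less. Swap to long-L wherever the
# extra delay still fits within the path's timing slack.

CELLS = [
    # (name, slack_ps, leak_short_nA, leak_long_nA, extra_delay_ps)
    ("u1", 120.0, 50.0, 12.0, 30.0),
    ("u2",  10.0, 40.0, 10.0, 30.0),  # timing-critical: keep short L
    ("u3", 300.0, 60.0, 15.0, 35.0),
    ("u4",  45.0, 55.0, 14.0, 32.0),
]

def optimize(cells):
    leak_before = sum(c[2] for c in cells)
    leak_after = 0.0
    for name, slack, leak_s, leak_l, d_extra in cells:
        if slack >= d_extra:   # long-channel variant still meets timing
            leak_after += leak_l
        else:                  # keep the fast short-channel cell
            leak_after += leak_s
    return leak_before, leak_after

before, after = optimize(CELLS)
print(f"leakage before: {before:.0f} nA, after: {after:.0f} nA "
      f"({100 * (before - after) / before:.0f}% reduction)")
```

The achievable reduction depends on what fraction of the design is timing-critical; with the invented numbers above most cells can be swapped, while a heavily speed-constrained design would see far less benefit.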
Multi-channel length
[ "Engineering" ]
475
[ "Electrical engineering", "Electronic engineering", "Digital electronics", "Power standards" ]
294,218
https://en.wikipedia.org/wiki/Molecular%20assembler
A molecular assembler, as defined by K. Eric Drexler, is a "proposed device able to guide chemical reactions by positioning reactive molecules with atomic precision". A molecular assembler is a kind of molecular machine. Some biological molecules such as ribosomes fit this definition, because they receive instructions from messenger RNA and then assemble specific sequences of amino acids to construct protein molecules. However, the term "molecular assembler" usually refers to theoretical human-made devices. Beginning in 2007, the British Engineering and Physical Sciences Research Council has funded the development of ribosome-like molecular assemblers. Clearly, molecular assemblers are possible in this limited sense. A technology roadmap project, led by the Battelle Memorial Institute and hosted by several U.S. National Laboratories, has explored a range of atomically precise fabrication technologies, including both early-generation and longer-term prospects for programmable molecular assembly; the report was released in December 2007. In 2008, the Engineering and Physical Sciences Research Council provided funding of £1.5 million over six years (£1,942,235.57, $2,693,808.00 in 2021) for research working towards mechanized mechanosynthesis, in partnership with the Institute for Molecular Manufacturing, amongst others. Likewise, the term "molecular assembler" has been used in science fiction and popular culture to refer to a wide range of fantastic atom-manipulating nanomachines. Much of the controversy regarding "molecular assemblers" results from confusion over the use of the name for both technical concepts and popular fantasies. In 1992, Drexler introduced the related but better-understood term "molecular manufacturing", which he defined as the programmed "chemical synthesis of complex structures by mechanically positioning reactive molecules, not by manipulating individual atoms". This article mostly discusses "molecular assemblers" in the popular sense. These include hypothetical machines that manipulate individual atoms and machines with organism-like self-replicating abilities, mobility, the ability to consume food, and so forth. These are quite different from devices that merely (as defined above) "guide chemical reactions by positioning reactive molecules with atomic precision". Because synthetic molecular assemblers have never been constructed, and because of the confusion regarding the meaning of the term, there has been much controversy as to whether "molecular assemblers" are possible or simply science fiction. Confusion and controversy also stem from their classification as nanotechnology, which is an active area of laboratory research that has already been applied to the production of real products; until recently, however, there were no research efforts into the actual construction of "molecular assemblers". Nonetheless, a 2013 paper by David Leigh's group, published in the journal Science, details a new method of synthesizing a peptide in a sequence-specific manner by using an artificial molecular machine that is guided by a molecular strand. This functions in the same way as a ribosome building proteins by assembling amino acids according to a messenger RNA blueprint. The structure of the machine is based on a rotaxane, which is a molecular ring sliding along a molecular axle. The ring carries a thiolate group, which removes amino acids in sequence from the axle, transferring them to a peptide assembly site. 
In 2018, the same group published a more advanced version of this concept in which the molecular ring shuttles along a polymeric track to assemble an oligopeptide that can fold into an α-helix that can perform the enantioselective epoxidation of a chalcone derivative (in a way reminiscent of the ribosome assembling an enzyme). In another paper published in Science in March 2015, chemists at the University of Illinois reported a platform that automates the synthesis of 14 classes of small molecules, with thousands of compatible building blocks. In 2017, David Leigh's group reported a molecular robot that could be programmed to construct any one of four different stereoisomers of a molecular product by using a nanomechanical robotic arm to move a molecular substrate between different reactive sites of an artificial molecular machine. An accompanying News and Views article, titled 'A molecular assembler', outlined the operation of the molecular robot as effectively a prototypical molecular assembler. Nanofactories A nanofactory is a proposed system in which nanomachines (resembling molecular assemblers, or industrial robot arms) would combine reactive molecules via mechanosynthesis to build larger atomically precise parts. These, in turn, would be assembled by positioning mechanisms of assorted sizes to build macroscopic (visible) but still atomically precise products. A typical nanofactory would fit in a desktop box, in the vision of K. Eric Drexler published in Nanosystems: Molecular Machinery, Manufacturing and Computation (1992), a notable work of "exploratory engineering". During the 1990s, others extended the nanofactory concept, including an analysis of nanofactory convergent assembly by Ralph Merkle, a systems design of a replicating nanofactory architecture by J. Storrs Hall, Forrest Bishop's "Universal Assembler", the patented exponential assembly process by Zyvex, and a top-level systems design for a 'primitive nanofactory' by Chris Phoenix (director of research at the Center for Responsible Nanotechnology). All of these nanofactory designs (and more) are summarized in Chapter 4 of Kinematic Self-Replicating Machines (2004) by Robert Freitas and Ralph Merkle. The Nanofactory Collaboration, founded by Freitas and Merkle in 2000, is a focused, ongoing effort involving 23 researchers from 10 organizations and 4 countries that is developing a practical research agenda specifically aimed at positionally controlled diamond mechanosynthesis and diamondoid nanofactory development. In 2005, an animated short film of the nanofactory concept was produced by John Burch, in collaboration with Drexler. Such visions have been the subject of much debate, on several intellectual levels. No one has discovered an insurmountable problem with the underlying theories, and no one has proved that the theories can be translated into practice. However, the debate continues, with some of it summarized in the molecular nanotechnology article. If nanofactories could be built, severe disruption to the world economy would be one of many possible negative impacts, though it could be argued that this disruption would have little negative effect if everyone had such nanofactories. Great benefits also would be anticipated. Various works of science fiction have explored these and similar concepts. The potential for such devices was part of the mandate of a major UK study led by mechanical engineering professor Dame Ann Dowling. 
Self-replication "Molecular assemblers" have been confused with self-replicating machines. Because of the nanoscale size of a typical science-fiction universal molecular assembler, an extremely large number of such devices would be needed to produce a practical quantity of a desired product. However, a single such theoretical molecular assembler might be programmed to self-replicate, constructing many copies of itself. This would allow an exponential rate of production. Then, after sufficient quantities of the molecular assemblers were available, they would be re-programmed for production of the desired product. However, if self-replication of molecular assemblers were not restrained, it might lead to competition with naturally occurring organisms. This has been called ecophagy or the grey goo problem. One proposed method of building molecular assemblers is to mimic the evolutionary processes employed by biological systems. Biological evolution proceeds by random variation combined with culling of the less-successful variants and reproduction of the more-successful variants. Production of complex molecular assemblers might be evolved from simpler systems, since "A complex system that works is invariably found to have evolved from a simple system that worked. . . . A complex system designed from scratch never works and can not be patched up to make it work. You have to start over, beginning with a system that works." However, most published safety guidelines include "recommendations against developing ... replicator designs which permit surviving mutation or undergoing evolution". Most assembler designs keep the "source code" external to the physical assembler. At each step of a manufacturing process, that step is read from an ordinary computer file and "broadcast" to all the assemblers. If any assembler gets out of range of that computer, or when the link between that computer and the assemblers is broken, or when that computer is unplugged, the assemblers stop replicating. Such a "broadcast architecture" is one of the safety features recommended by the "Foresight Guidelines on Molecular Nanotechnology", and a map of the 137-dimensional replicator design space recently published by Freitas and Merkle provides numerous practical methods by which replicators can be safely controlled by good design. Drexler and Smalley debate One of the most outspoken critics of some concepts of "molecular assemblers" was Professor Richard Smalley (1943–2005), who won the Nobel Prize for his contributions to the field of nanotechnology. Smalley believed that such assemblers were not physically possible and introduced scientific objections to them. His two principal technical objections were termed the "fat fingers problem" and the "sticky fingers problem". He believed these would exclude the possibility of "molecular assemblers" that worked by precisely picking and placing individual atoms. Drexler and coworkers responded to these two issues in a 2001 publication. Smalley also believed that Drexler's speculations about the apocalyptic dangers of self-replicating machines that have been equated with "molecular assemblers" would threaten public support for the development of nanotechnology. To address the debate between Drexler and Smalley regarding molecular assemblers, Chemical & Engineering News published a point-counterpoint consisting of an exchange of letters addressing the issues. 
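The exponential-production and "broadcast architecture" ideas above can be illustrated with a toy calculation: replicators double each cycle only while a central controller keeps broadcasting instructions, and halt as soon as the broadcast stops. This is purely an illustration of the control concept, not a model of any proposed device.

```python
# Toy illustration of exponential self-replication under a broadcast
# architecture: assemblers copy themselves only while instructions are
# being broadcast, and halt immediately when the link is cut.

def replicate(initial=1, doubling_cycles=30, broadcast_on_until=20):
    count = initial
    for cycle in range(doubling_cycles):
        if cycle >= broadcast_on_until:
            break  # broadcast stopped: replication halts at once
        count *= 2
    return count

print(replicate())                       # 2**20 = 1,048,576 assemblers
print(replicate(broadcast_on_until=30))  # 2**30 ~ 1.07e9 assemblers
```

Twenty doublings already yield over a million assemblers from one seed, which is why the design question centers on making the halt condition external and unconditional rather than on limiting the replication rate itself.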
Regulation Speculation on the power of systems that have been called "molecular assemblers" has sparked a wider political discussion on the implications of nanotechnology, in part because nanotechnology is a very broad term that could include "molecular assemblers". Discussion of the possible implications of fantastic molecular assemblers has prompted calls for regulation of current and future nanotechnology. There are very real concerns about the potential health and ecological impact of nanotechnology that is being integrated into manufactured products. Greenpeace, for instance, commissioned a report concerning nanotechnology in which it expressed concern about the toxicity of nanomaterials that have been introduced into the environment. However, that report makes only passing reference to "assembler" technology. The UK Royal Society and Royal Academy of Engineering also commissioned a report, "Nanoscience and nanotechnologies: opportunities and uncertainties", on the larger social and ecological implications of nanotechnology. This report does not discuss the threat posed by potential so-called "molecular assemblers". Formal scientific review In 2006, the U.S. National Academy of Sciences released the report of a study of molecular manufacturing (not molecular assemblers per se) as part of a longer report, A Matter of Size: Triennial Review of the National Nanotechnology Initiative. The study committee reviewed the technical content of Nanosystems, and in its conclusion states that no current theoretical analysis can be considered definitive regarding several questions of potential system performance, and that optimal paths for implementing high-performance systems cannot be predicted with confidence. It recommends funding for experimental research to produce experimental demonstrations in this area: "Although theoretical calculations can be made today, the eventually attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems cannot be reliably predicted at this time. Thus, the eventually attainable perfection and complexity of manufactured products, while they can be calculated in theory, cannot be predicted with confidence. Finally, the optimum research paths that might lead to systems which greatly exceed the thermodynamic efficiencies and other capabilities of biological systems cannot be reliably predicted at this time. Research funding that is based on the ability of investigators to produce experimental demonstrations that link to abstract models and guide long-term vision is most appropriate to achieve this goal." Gray goo One potential scenario that has been envisioned is out-of-control self-replicating molecular assemblers in the form of gray goo, which consume carbon to continue their replication. If unchecked, such mechanical replication could potentially consume whole ecoregions or the whole Earth (ecophagy), or it could simply outcompete natural lifeforms for necessary resources such as carbon, ATP, or UV light (which some nanomotor examples run on). However, the ecophagy and 'grey goo' scenarios, like synthetic molecular assemblers, are based upon still-hypothetical technologies that have not yet been demonstrated experimentally. 
See also Nanotechnology Molecular machine Bioethics Biosafety Biosecurity Biotechnology Ecocide Ecophagy Santa Claus machine 3D printing Nanotechnology in fiction References External links Molecular Dynamics Studio (2016) free open-source multi-scale modeling and simulation program for nano-composites with special support for structural DNA nanotechnology (originally Nanoengineer-1 by Nanorex) Nano-Hive: Nanospace Simulator (2006) free software for modeling nanotech entities Foresight Guidelines for Responsible Nanotechnology Development (2006) of molecular manufacturing technologies Center for Responsible Nanotechnology (2008) Molecular Assembler website (2008) Rage Against the (Green) Machine (2003) in Wired Government launches nano study UK EducationGuardian, 11 June 2003 Unraveling the Big Debate over Small Machines (2004) from BetterHumans.com Design considerations for an assembler (1995) by Ralph Merkle Kinematic Self-Replicating Machines — online technical book: first comprehensive survey of molecular assemblers (2004) by Robert Freitas and Ralph Merkle Design of a Primitive Nanofactory (2003) Video - Nanofactory in Action (2006) Nanofactory technology Review of Molecular Manufacturing Integrated Nanosystems for Atomically Precise Manufacturing — United States Department of Energy Workshop – August 5–6, 2015 Nanotechnology Molecular machines Self-replication
Molecular assembler
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering", "Biology" ]
2,888
[ "Machines", "Behavior", "Reproduction", "Materials science", "Self-replication", "Physical systems", "Molecular machines", "Nanotechnology" ]
294,316
https://en.wikipedia.org/wiki/Hydraulic%20jump
A hydraulic jump is a phenomenon in the science of hydraulics which is frequently observed in open channel flow such as rivers and spillways. When liquid at high velocity discharges into a zone of lower velocity, a rather abrupt rise occurs in the liquid surface. The rapidly flowing liquid is abruptly slowed and increases in height, converting some of the flow's initial kinetic energy into an increase in potential energy, with some energy irreversibly lost through turbulence to heat. In an open channel flow, this manifests as the fast flow rapidly slowing and piling up on top of itself, similar to how a shockwave forms. The phenomenon was first observed and documented by Leonardo da Vinci in the 1500s. The mathematics were first described by Giorgio Bidone of Turin University in a paper published in 1820, Experiences sur le remou et sur la propagation des ondes. The phenomenon is dependent upon the initial fluid speed. If the initial speed of the fluid is below the critical speed, then no jump is possible. For initial flow speeds which are not significantly above the critical speed, the transition appears as an undulating wave. As the initial flow speed increases further, the transition becomes more abrupt, until at high enough speeds the transition front will break and curl back upon itself. When this happens, the jump can be accompanied by violent turbulence, eddying, air entrainment, and surface undulations, or waves. There are two main manifestations of hydraulic jumps, and historically different terminology has been used for each. However, the mechanisms behind them are similar because they are simply variations of each other seen from different frames of reference, so the same physics and analysis techniques can be used for both types. The different manifestations are: The stationary hydraulic jump – rapidly flowing water transitions in a stationary jump to slowly moving water, as shown in Figures 1 and 2. The tidal bore – a wall or undulating wave of water moves upstream against water flowing downstream, as shown in Figures 3 and 4. If one considers a frame of reference which moves along with the wave front, then the wave front is stationary relative to the frame and has the same essential behavior as the stationary jump. A related case is a cascade – a wall or undulating wave of water moves downstream, overtaking a shallower downstream flow of water, as shown in Figure 5. If considered from a frame of reference which moves with the wave front, this is amenable to the same analysis as a stationary jump. These phenomena are addressed in an extensive literature from a number of technical viewpoints. Hydraulic jumps are also sometimes used for mixing chemicals. Classes of hydraulic jumps Hydraulic jumps can be seen in both a stationary form, known as a "hydraulic jump", and a dynamic or moving form, known as a positive surge or "hydraulic jump in translation". They can be described using the same analytic approaches and are simply variants of a single phenomenon. Moving hydraulic jump A tidal bore is a hydraulic jump which occurs when the incoming tide forms a wave (or waves) of water that travels up a river or narrow bay against the direction of the current. As is true for hydraulic jumps in general, bores take on various forms depending upon the difference in the water level upstream and down, ranging from an undular wavefront to a shock-wave-like wall of water. 
Figure 3 shows a tidal bore with the characteristics common to shallow upstream water – a large elevation difference is observed. Figure 4 shows a tidal bore with the characteristics common to deep upstream water – a small elevation difference is observed and the wavefront undulates. In both cases the tidal wave moves at the speed characteristic of waves in water of the depth found immediately behind the wave front. A key feature of tidal bores and positive surges is the intense turbulent mixing induced by the passage of the bore front and by the following wave motion. Another variation of the moving hydraulic jump is the cascade. In the cascade, a series of roll waves or undulating waves of water moves downstream, overtaking a shallower downstream flow of water. A moving hydraulic jump is called a surge. In a positive surge, the wave travels faster in the upper portion than in the lower portion. Stationary hydraulic jump A stationary hydraulic jump is the type most frequently seen on rivers and on engineered features such as outfalls of dams and irrigation works. They occur when a flow of liquid at high velocity discharges into a zone of the river or engineered structure which can only sustain a lower velocity. When this occurs, the water slows in a rather abrupt rise (a step or standing wave) on the liquid surface. Comparing the characteristics before and after the jump, one finds that the incoming flow is fast and shallow (supercritical), while the outgoing flow is slower and deeper (subcritical). The other stationary hydraulic jump occurs when a rapid flow encounters a submerged object which throws the water upward. The mathematics behind this form is more complex and needs to take into account the shape of the object and the flow characteristics of the fluid around it. Analysis of the hydraulic jump on a liquid surface In spite of the apparent complexity of the flow transition, application of simple analytic tools to a two-dimensional analysis is effective in providing analytic results which closely parallel both field and laboratory results. Analysis shows: Height of the jump: the relationship between the depths before and after the jump as a function of flow rate Energy loss in the jump Location of the jump on a natural or an engineered structure Character of the jump: undular or abrupt Height of the jump The height of the jump is derived from the application of the equations of conservation of mass and momentum. There are several methods of predicting the height of a hydraulic jump. They all reach common conclusions that: The ratio of the water depth before and after the jump depends solely on the ratio of the velocity of the water entering the jump to the speed of the wave over-running the moving water. The height of the jump can be many times the initial depth of the water. For a known flow rate, as shown by the figure below, the approximation that the momentum flux is the same just upstream and downstream of the jump, together with the energy principle, yields an expression for the energy loss in the hydraulic jump. Hydraulic jumps are commonly used as energy dissipators downstream of dam spillways. Applying the continuity principle In fluid dynamics, the equation of continuity is effectively an equation of conservation of mass. Considering any fixed closed surface within an incompressible moving fluid, the fluid flows into a given volume at some points and flows out at other points along the surface with no net change in mass within the space, since the density is constant. 
In the case of a rectangular channel, the equality of mass flux upstream ($\rho v_1 h_1$) and downstream ($\rho v_2 h_2$) gives $\rho v_1 h_1 = \rho v_2 h_2$, or $v_1 h_1 = v_2 h_2$, with $\rho$ the fluid density, $v_1$ and $v_2$ the depth-averaged flow velocities upstream and downstream, and $h_1$ and $h_2$ the corresponding water depths. Conservation of momentum flux For a straight prismatic rectangular channel, the conservation of momentum flux across the jump, assuming constant density, can be expressed as $\tfrac{1}{2} g h_1^2 + v_1^2 h_1 = \tfrac{1}{2} g h_2^2 + v_2^2 h_2$. In a rectangular channel, this conservation equation can be further simplified to the dimensionless M-y equation form, which is widely used in hydraulic jump analysis in open channel flow. Jump height in terms of flow Dividing by the constant $g$ and introducing the result from continuity ($v_2 = v_1 h_1 / h_2$) gives $\tfrac{h_1^2}{2} + \tfrac{v_1^2 h_1}{g} = \tfrac{h_2^2}{2} + \tfrac{v_1^2 h_1^2}{g h_2}$, which, after some algebra, simplifies to $\left(\tfrac{h_2}{h_1}\right)^2 + \tfrac{h_2}{h_1} - 2\,Fr_1^2 = 0$, where $Fr_1^2 = \tfrac{v_1^2}{g h_1}$. Here $Fr_1$ is the dimensionless Froude number, and relates inertial to gravitational forces in the upstream flow. Solving this quadratic yields $\tfrac{h_2}{h_1} = \tfrac{1}{2}\left(-1 \pm \sqrt{1 + 8\,Fr_1^2}\right)$. Negative answers do not yield meaningful physical solutions, so this reduces to $\tfrac{h_2}{h_1} = \tfrac{1}{2}\left(\sqrt{1 + 8\,Fr_1^2} - 1\right)$, known as the Bélanger equation. The result may be extended to an irregular cross-section. This produces three solution classes: When $Fr_1 = 1$, then $h_2 = h_1$ (i.e., there is no jump) When $Fr_1 < 1$, then $h_2 < h_1$ (i.e., there is a negative jump – this can be shown as not conserving energy and is only physically possible if some force were to accelerate the fluid at that point) When $Fr_1 > 1$, then $h_2 > h_1$ (i.e., there is a positive jump) The last condition is equivalent to $v_1 > \sqrt{g h_1}$. Since $\sqrt{g h}$ is the speed of a shallow gravity wave, this is equivalent to stating that the initial velocity represents supercritical flow (Froude number > 1) while the final velocity represents subcritical flow (Froude number < 1). Undulations downstream of the jump Practically this means that water accelerated by large drops can create stronger standing waves (undular bores) in the form of hydraulic jumps as it decelerates at the base of the drop. Such standing waves, when found downstream of a weir or natural rock ledge, can form an extremely dangerous "keeper" with a water wall that "keeps" floating objects (e.g., logs, kayaks, or kayakers) recirculating in the standing wave for extended periods. Energy dissipation by a hydraulic jump One of the most important engineering applications of the hydraulic jump is to dissipate energy in channels, dam spillways, and similar structures so that the excess kinetic energy does not damage these structures. The rate of energy dissipation or head loss across a hydraulic jump is a function of the hydraulic jump inflow Froude number and the height of the jump. The energy loss at a hydraulic jump expressed as a head loss is $\Delta E = \tfrac{(h_2 - h_1)^3}{4 h_1 h_2}$ (a worked numerical sketch follows below). Location of hydraulic jump in a streambed or an engineered structure In the design of a dam, the energy of the fast-flowing stream over a spillway must be partially dissipated to prevent erosion of the streambed downstream of the spillway, which could ultimately lead to failure of the dam. This can be done by arranging for the formation of a hydraulic jump to dissipate energy. To limit damage, this hydraulic jump normally occurs on an apron engineered to withstand hydraulic forces and to prevent local cavitation and other phenomena which accelerate erosion. In the design of a spillway and apron, the engineers select the point at which a hydraulic jump will occur. Obstructions or slope changes are routinely designed into the apron to force a jump at a specific location. Obstructions are unnecessary, as the slope change alone is normally sufficient. 
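The Bélanger relation and the head-loss formula above translate directly into a short calculation. The sketch below is a minimal worked example in SI units; the inflow depth and velocity are arbitrary illustrative values.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def sequent_depth(h1, v1):
    """Downstream (sequent) depth h2 from the Belanger equation."""
    fr1 = v1 / math.sqrt(G * h1)  # upstream Froude number
    if fr1 <= 1.0:
        raise ValueError("flow is not supercritical: no jump forms")
    return 0.5 * h1 * (math.sqrt(1.0 + 8.0 * fr1**2) - 1.0)

def head_loss(h1, h2):
    """Energy loss across the jump, expressed as a head in metres."""
    return (h2 - h1) ** 3 / (4.0 * h1 * h2)

h1, v1 = 0.30, 6.0  # e.g. shallow, fast flow at the toe of a spillway
h2 = sequent_depth(h1, v1)
print(f"Fr1 = {v1 / math.sqrt(G * h1):.2f}")
print(f"sequent depth h2 = {h2:.2f} m, head loss = {head_loss(h1, h2):.2f} m")
```

For this inflow (Fr1 of about 3.5), the depth jumps from 0.30 m to roughly 1.34 m and about 0.7 m of head is dissipated, which is why jumps are deliberately provoked below spillways.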
To trigger the hydraulic jump without obstacles, an apron is designed such that the flat slope of the apron retards the rapidly flowing water from the spillway. If the apron slope is insufficient to maintain the original high velocity, a jump will occur. Two methods of designing an induced jump are common: If the downstream flow is restricted by the downstream channel such that water backs up onto the foot of the spillway, that downstream water level can be used to identify the location of the jump. If the spillway continues to drop for some distance, but the slope changes such that it will no longer support supercritical flow, the depth in the lower subcritical flow region is sufficient to determine the location of the jump. In both cases, the final depth of the water is determined by the downstream characteristics. The jump will occur if and only if the downstream water level $h$ exceeds the depth sequent to the inflowing (supercritical) water level $h_0$, i.e. satisfies the condition $h > \tfrac{h_0}{2}\left(\sqrt{1 + 8\,Fr^2} - 1\right)$, where $Fr$ is the upstream Froude number, $g$ is the acceleration due to gravity (essentially constant for this case), and $h_0$ is the initial (upstream) height of the fluid. Air entrainment in hydraulic jumps The hydraulic jump is characterised by a highly turbulent flow. Macro-scale vortices develop in the jump roller and interact with the free surface, leading to air bubble entrainment and the formation of splashes and droplets in the two-phase flow region. The air-water flow is associated with turbulence, which can also lead to sediment transport. The turbulence may be strongly affected by the bubble dynamics. Physically, the mechanisms involved in these processes are complex. The air entrainment occurs in the form of air bubbles and air packets entrapped at the impingement of the upstream jet flow with the roller. The air packets are broken up into very small air bubbles as they are entrained in the shear region, characterised by large air contents and maximum bubble count rates. Once the entrained bubbles are advected into regions of lesser shear, bubble collisions and coalescence lead to larger air entities that are driven toward the free surface by a combination of buoyancy and turbulent advection. Tabular summary of the analytic conclusions A standard classification by the inflow Froude number $Fr_1$ runs from undular jumps (roughly $Fr_1$ of 1 to 1.7) through weak, oscillating, and steady jumps to strong jumps ($Fr_1$ above about 9). NB: this classification is very rough. Undular hydraulic jumps have been observed with inflow/prejump Froude numbers up to 3.5 to 4. Hydraulic jump variations A number of variations are amenable to similar analysis: Shallow fluid hydraulic jumps The hydraulic jump in a sink Figure 2 above illustrates an example of a hydraulic jump, often seen in a kitchen sink. Around the place where the tap water hits the sink, a smooth-looking flow pattern will occur. A little further away, a sudden "jump" in the water level will be present. This is a hydraulic jump. A circular impinging jet creates a thin film of liquid that spreads radially, with a circular hydraulic jump occurring downstream. For laminar jets, the thin film and the hydraulic jump can be remarkably smooth and steady. In 1993, Liu and Lienhard demonstrated the role of surface tension in setting the structure of hydraulic jumps in these thin films. Many subsequent studies have explored surface tension and pattern formation in such jumps. A 2018 study experimentally and theoretically investigated the relative contributions of surface tension and gravity to the circular hydraulic jump. 
To rule out the role of gravity in the formation of a circular hydraulic jump, the authors performed experiments on horizontal, vertical and inclined surfaces, finding that irrespective of the orientation of the substrate, for the same flow rate and physical properties of the liquid, the initial hydraulic jump happens at the same location. They proposed a model for the phenomenon and found the general criterion for a thin-film hydraulic jump to be $\tfrac{1}{We} + \tfrac{1}{Fr^2} = 1$, where $We$ is the local Weber number and $Fr$ is the local Froude number. For kitchen-sink-scale hydraulic jumps, the Froude number remains high; therefore, the effective criterion for the thin-film hydraulic jump is $We = 1$. In other words, a thin-film hydraulic jump occurs when the liquid momentum per unit width equals the surface tension of the liquid. However, this model remains heavily contested. Internal wave hydraulic jumps Hydraulic jumps in abyssal fan formation Turbidity currents can result in internal hydraulic jumps (i.e., hydraulic jumps as internal waves in fluids of different density) in abyssal fan formation. The internal hydraulic jumps have been associated with salinity- or temperature-induced stratification as well as with density differences due to suspended materials. When the slope of the bed (over which the turbidity current flows) flattens, the slower rate of flow is mirrored by increased sediment deposition below the flow, producing a gradual backward slope. Where a hydraulic jump occurs, the signature is an abrupt backward slope, corresponding to the rapid reduction in the flow rate at the point of the jump. Atmospheric hydraulic jumps Hydraulic jumps occur in the atmosphere in air flowing over mountains. A hydraulic jump also occurs at the tropopause interface between the stratosphere and troposphere downwind of the overshooting top of very strong supercell thunderstorms. A related situation is the Morning Glory cloud observed, for example, in Northern Australia, sometimes called an undular jump. Industrial and recreational applications for hydraulic jumps Industrial The hydraulic jump is the most commonly used choice of design engineers for energy dissipation below spillways and outlets. A properly designed hydraulic jump can dissipate 60-70% of the energy in the basin itself, limiting the damage to structures and the streambed. Even with such efficient energy dissipation, stilling basins must be carefully designed to avoid serious damage due to uplift, vibration, cavitation, and abrasion. An extensive literature has been developed for this type of engineering. Recreational While travelling down a river, kayaking and canoeing paddlers will often stop and playboat in standing waves and hydraulic jumps. The standing waves and shock fronts of hydraulic jumps make for popular locations for such recreation. Similarly, kayakers and surfers have been known to ride tidal bores up rivers. Hydraulic jumps have been used by glider pilots in the Andes and Alps, and to ride Morning Glory effects in Australia. See also References and notes Further reading Fluid dynamics Hydraulics Wave mechanics Vertical position
Hydraulic jump
[ "Physics", "Chemistry", "Engineering" ]
3,282
[ "Vertical position", "Physical phenomena", "Physical quantities", "Distance", "Chemical engineering", "Classical mechanics", "Physical systems", "Waves", "Wave mechanics", "Hydraulics", "Piping", "Fluid dynamics" ]
294,995
https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange%20equation
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange. Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, which states that at any point where a differentiable function attains a local extremum its derivative is zero. In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context the Euler equations are usually called Lagrange equations. In classical mechanics, it is equivalent to Newton's laws of motion; indeed, the Euler–Lagrange equations will produce the same equations as Newton's laws. This is particularly useful when analyzing systems whose force vectors are particularly complicated. It has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field. History The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766. Statement Let $(X, L)$ be a real dynamical system with $n$ degrees of freedom. Here $X$ is the configuration space and $L = L(t, \boldsymbol{q}, \boldsymbol{v})$ the Lagrangian, i.e. a smooth real-valued function such that $\boldsymbol{q} \in X$ and $\boldsymbol{v}$ is an $n$-dimensional "vector of speed". (For those familiar with differential geometry, $X$ is a smooth manifold, and $L : \mathbb{R} \times TX \to \mathbb{R}$, where $TX$ is the tangent bundle of $X$.) Let $\mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b)$ be the set of smooth paths $\boldsymbol{q} : [a, b] \to X$ for which $\boldsymbol{q}(a) = \boldsymbol{x}_a$ and $\boldsymbol{q}(b) = \boldsymbol{x}_b$. The action functional $S : \mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b) \to \mathbb{R}$ is defined via $S[\boldsymbol{q}] = \int_a^b L(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t))\, dt$. A path $\boldsymbol{q} \in \mathcal{P}(a, b, \boldsymbol{x}_a, \boldsymbol{x}_b)$ is a stationary point of $S$ if and only if $\frac{\partial L}{\partial q^i}(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t)) - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}^i}(t, \boldsymbol{q}(t), \dot{\boldsymbol{q}}(t)) = 0$ for $i = 1, \dots, n$. Here, $\dot{\boldsymbol{q}}(t)$ is the time derivative of $\boldsymbol{q}(t)$. When we say stationary point, we mean a stationary point of $S$ with respect to any small perturbation in $\boldsymbol{q}$. See proofs below for more rigorous detail. Example A standard example is finding the real-valued function y(x) on the interval [a, b], such that y(a) = c and y(b) = d, for which the path length along the curve traced by y is as short as possible: $S[y] = \int_a^b \sqrt{1 + y'(x)^2}\, dx$, the integrand function being $L(x, y, y') = \sqrt{1 + y'^2}$. The partial derivatives of L are: $\frac{\partial L}{\partial y'} = \frac{y'}{\sqrt{1 + y'^2}}$ and $\frac{\partial L}{\partial y} = 0$. By substituting these into the Euler–Lagrange equation, we obtain $\frac{d}{dx} \frac{y'(x)}{\sqrt{1 + y'(x)^2}} = 0$, that is, the function must have a constant first derivative, and thus its graph is a straight line. Generalizations Single function of single variable with higher derivatives The stationary values of the functional $S[f] = \int_{x_0}^{x_1} L(x, f, f', \dots, f^{(n)})\, dx$ can be obtained from the Euler–Lagrange equation $\frac{\partial L}{\partial f} - \frac{d}{dx} \frac{\partial L}{\partial f'} + \frac{d^2}{dx^2} \frac{\partial L}{\partial f''} - \dots + (-1)^n \frac{d^n}{dx^n} \frac{\partial L}{\partial f^{(n)}} = 0$ under fixed boundary conditions for the function itself as well as for the first $n-1$ derivatives (i.e. for all $f^{(i)}$, $i \in \{0, \dots, n-1\}$). The endpoint values of the highest derivative $f^{(n)}$ remain flexible. 
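The arc-length example above can be checked mechanically with a computer-algebra system. The SymPy sketch below is a minimal verification added here for illustration: it forms the Euler–Lagrange equation for $L = \sqrt{1 + y'^2}$ and confirms that the solutions are straight lines.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.Symbol("x")
y = sp.Function("y")

# Integrand of the arc-length functional: L = sqrt(1 + y'(x)^2)
L = sp.sqrt(1 + y(x).diff(x) ** 2)

# Form the Euler-Lagrange equation dL/dy - d/dx (dL/dy') = 0.
eq = euler_equations(L, y(x), x)[0]
print(sp.simplify(eq))  # equivalent to y''(x) = 0

# y'' = 0 integrates to a straight line y = C1 + C2*x.
print(sp.dsolve(y(x).diff(x, 2), y(x)))
```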
Several functions of single variable with single derivative If the problem involves finding several functions (f_1, f_2, ..., f_m) of a single independent variable (x) that define an extremum of the functional S = ∫ L(x, f_1, ..., f_m, f_1′, ..., f_m′) dx, then the corresponding Euler–Lagrange equations are ∂L/∂f_i − d/dx (∂L/∂f_i′) = 0 for i = 1, ..., m. Single function of several variables with single derivative A multi-dimensional generalization comes from considering a function on n variables. If Ω is some surface, then S = ∫_Ω L(x_1, ..., x_n, f, f_{x_1}, ..., f_{x_n}) dx is extremized only if f satisfies the partial differential equation ∂L/∂f − Σ_{j=1}^{n} ∂/∂x_j (∂L/∂f_{x_j}) = 0. When n = 2 and the functional is the energy functional, this leads to the soap-film minimal surface problem. Several functions of several variables with single derivative If there are several unknown functions f_1, ..., f_m to be determined and several variables x_1, ..., x_n such that S = ∫_Ω L(x_1, ..., x_n, f_1, ..., f_m, f_{1,x_1}, ..., f_{m,x_n}) dx, the system of Euler–Lagrange equations is ∂L/∂f_i − Σ_{j=1}^{n} ∂/∂x_j (∂L/∂f_{i,x_j}) = 0 for i = 1, ..., m. Single function of two variables with higher derivatives If there is a single unknown function f to be determined that is dependent on two variables x_1 and x_2 and if the functional depends on higher derivatives of f up to n-th order such that S = ∫_Ω L(x_1, x_2, f, f_{,1}, f_{,2}, f_{,11}, f_{,12}, f_{,22}, ..., f_{,22...2}) dx, then the Euler–Lagrange equation is ∂L/∂f − ∂/∂x_1 (∂L/∂f_{,1}) − ∂/∂x_2 (∂L/∂f_{,2}) + ∂²/∂x_1² (∂L/∂f_{,11}) + ∂²/(∂x_1∂x_2) (∂L/∂f_{,12}) + ∂²/∂x_2² (∂L/∂f_{,22}) − ... + (−1)^n ∂^n/∂x_2^n (∂L/∂f_{,22...2}) = 0, which can be represented shortly as: Σ_{j=0}^{n} (−1)^j ∂^j_{μ_1...μ_j} (∂L/∂f_{,μ_1...μ_j}) = 0, wherein μ_1, ..., μ_j are indices that span the number of variables, that is, here they go from 1 to 2. Here summation over the indices is only over μ_1 ≤ μ_2 ≤ ... ≤ μ_j in order to avoid counting the same partial derivative multiple times, for example f_{,12} = f_{,21} appears only once in the previous equation. Several functions of several variables with higher derivatives If there are p unknown functions f_i to be determined that are dependent on m variables x_1 ... x_m and if the functional depends on higher derivatives of the f_i up to n-th order such that S = ∫_Ω L(x_1, ..., x_m; f_1, ..., f_p; f_{1,1}, ..., f_{p,m}; ...) dx, where μ_1 ... μ_j are indices that span the number of variables, that is they go from 1 to m. Then the Euler–Lagrange equation is Σ_{j=0}^{n} (−1)^j ∂^j_{μ_1...μ_j} (∂L/∂f_{i,μ_1...μ_j}) = 0 for each i = 1, ..., p, where the summation over the μ_1 ... μ_j avoids counting the same derivative several times, just as in the previous subsection. Field theories Generalization to manifolds Let M be a smooth manifold, and let C^∞([a, b]) denote the space of smooth functions f : [a, b] → M. Then, for functionals S : C^∞([a, b]) → ℝ of the form S[f] = ∫_a^b (L ∘ ḟ)(t) dt, where L : TM → ℝ is the Lagrangian, the statement dS_f = 0 is equivalent to the statement that, for all t ∈ [a, b], each coordinate frame trivialization (x^i, X^i) of a neighborhood of ḟ(t) yields the following dim M equations: d/dt (∂L/∂X^i) = ∂L/∂x^i, evaluated along ḟ. Euler–Lagrange equations can also be written in a coordinate-free form as L_Δ θ_L = dL, where θ_L is the canonical momenta 1-form corresponding to the Lagrangian L. The vector field generating time translations is denoted by Δ and the Lie derivative is denoted by L_Δ. One can use local charts (q^α, q̇^α) in which θ_L = (∂L/∂q̇^α) dq^α and use coordinate expressions for the Lie derivative to see equivalence with coordinate expressions of the Euler–Lagrange equation. The coordinate-free form is particularly suitable for geometrical interpretation of the Euler–Lagrange equations. See also Lagrangian mechanics Hamiltonian mechanics Analytical mechanics Beltrami identity Functional derivative Notes References Roubicek, T.: Calculus of variations. Chap. 17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, pp. 551–588. Eponymous equations of mathematics Eponymous equations of physics Ordinary differential equations Partial differential equations Calculus of variations Articles containing proofs Leonhard Euler
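As a sketch of the several-variables case above, the same SymPy helper (an illustration under the same assumptions as the previous snippet) recovers Laplace's equation from the Dirichlet energy integrand, the n = 2 energy-functional case mentioned above.

# Sketch: E-L equation of the energy integrand L = (f_x1^2 + f_x2^2)/2.
import sympy as sp
from sympy.calculus.euler import euler_equations

x1, x2 = sp.symbols('x1 x2')
f = sp.Function('f')

L = (f(x1, x2).diff(x1)**2 + f(x1, x2).diff(x2)**2) / 2

eq = euler_equations(L, f(x1, x2), [x1, x2])[0]
print(eq)   # -f_x1x1 - f_x2x2 = 0, i.e. Laplace's equation for f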
Euler–Lagrange equation
[ "Physics", "Mathematics" ]
1,362
[ "Articles containing proofs", "Eponymous equations of physics", "Equations of physics" ]
295,194
https://en.wikipedia.org/wiki/Georgi%E2%80%93Glashow%20model
In particle physics, the Georgi–Glashow model is a particular Grand Unified Theory (GUT) proposed by Howard Georgi and Sheldon Glashow in 1974. In this model, the Standard Model gauge groups SU(3) × SU(2) × U(1) are combined into a single simple gauge group SU(5). The unified group SU(5) is then thought to be spontaneously broken into the Standard Model subgroup below a very high energy scale called the grand unification scale. Since the Georgi–Glashow model combines leptons and quarks into single irreducible representations, there exist interactions which do not conserve baryon number, although they still conserve the quantum number B − L associated with the symmetry of the common representation. This yields a mechanism for proton decay, and the rate of proton decay can be predicted from the dynamics of the model. However, proton decay has not yet been observed experimentally, and the resulting lower limit on the lifetime of the proton contradicts the predictions of this model. Nevertheless, the elegance of the model has led particle physicists to use it as the foundation for more complex models which yield longer proton lifetimes, particularly SO(10) in basic and SUSY variants. (For a more elementary introduction to how the representation theory of Lie algebras is related to particle physics, see the article Particle physics and representation theory.) Also, this model suffers from the doublet–triplet splitting problem. Construction SU(5) acts on ℂ⁵ and hence on its exterior algebra Λ*ℂ⁵. Choosing a splitting ℂ⁵ = ℂ² ⊕ ℂ³ restricts SU(5) to S(U(2) × U(3)), yielding block-diagonal matrices; the natural covering map SU(2) × SU(3) × U(1) → S(U(2) × U(3)) has kernel ℤ₆, hence S(U(2) × U(3)) is isomorphic to the Standard Model's true gauge group (SU(3) × SU(2) × U(1))/ℤ₆. For the zeroth power Λ⁰ℂ⁵, this acts trivially to match a left-handed neutrino. For the first exterior power Λ¹ℂ⁵ = ℂ⁵, the Standard Model's group action preserves the splitting ℂ² ⊕ ℂ³. The ℂ² transforms trivially in SU(3), as a doublet in SU(2), and under the Y = 1/2 representation of U(1) (in the conventional normalization of weak hypercharge); this matches a right-handed anti-lepton (as the doublet is self-conjugate in SU(2)). The ℂ³ transforms as a triplet in SU(3), a singlet in SU(2), and under the Y = −1/3 representation of U(1); this matches a right-handed down quark. The second power Λ²ℂ⁵ is obtained via the formula Λ²(ℂ² ⊕ ℂ³) = Λ²ℂ² ⊕ (ℂ² ⊗ ℂ³) ⊕ Λ²ℂ³. As SU(5) preserves the canonical volume form of ℂ⁵, Hodge duals give the upper three powers by Λ^k ℂ⁵ ≅ (Λ^(5−k) ℂ⁵)*. Thus the Standard Model's representation of one generation of fermions and antifermions lies within Λ*ℂ⁵. Similar motivations apply to the Pati–Salam model, and to SO(10), E6, and other supergroups of SU(5). Explicit Embedding of the Standard Model (SM) Owing to its relatively simple gauge group, SU(5) GUTs can be written in terms of vectors and matrices, which allows for an intuitive understanding of the Georgi–Glashow model. The fermion sector is then composed of an anti-fundamental 5̄ and an antisymmetric 10. In terms of SM degrees of freedom, these can be written as a 5̄ containing the right-handed down antiquark and the lepton doublet, and a 10 containing the quark doublet, the right-handed up antiquark and the right-handed positron, with (u_L, d_L) the left-handed up- and down-type quarks, u_R and d_R their right-handed counterparts, ν the neutrino, and e_L, e_R the left- and right-handed electron, respectively. In addition to the fermions, we need to break the electroweak symmetry; this is achieved in the Georgi–Glashow model via a fundamental 5_H which contains the SM Higgs, with H⁺ and H⁰ the charged and neutral components of the SM Higgs, respectively. Note that the remaining colour-triplet components of the 5_H are not SM particles and are thus a prediction of the Georgi–Glashow model. The SM gauge fields can be embedded explicitly as well. For that we recall that a gauge field transforms as an adjoint, and thus can be written as A = A^a T^a with T^a the generators. 
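The block structure described above can be checked numerically. The sketch below (assuming NumPy, and assuming the common convention Y = diag(−1/3, −1/3, −1/3, 1/2, 1/2) on ℂ⁵ = ℂ³ ⊕ ℂ²; conventions differ between references by normalization) verifies that Y is traceless and commutes with block-diagonal SU(3) × SU(2) generators but not with the off-diagonal directions that become the X and Y bosons.

# Sketch: weak hypercharge generator inside SU(5), convention as stated above.
import numpy as np

Y = np.diag([-1/3, -1/3, -1/3, 1/2, 1/2])
print(np.isclose(np.trace(Y), 0))                # True: Y is a traceless generator

g3 = np.zeros((5, 5)); g3[0, 1] = g3[1, 0] = 1   # a generator in the SU(3) block
g2 = np.zeros((5, 5)); g2[3, 4] = g2[4, 3] = 1   # a generator in the SU(2) block
xb = np.zeros((5, 5)); xb[0, 3] = xb[3, 0] = 1   # off-diagonal "X/Y boson" direction

for g in (g3, g2, xb):
    print(np.allclose(Y @ g - g @ Y, 0))          # True, True, False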
Now, if we restrict ourselves to generators with non-zero entries only in the upper 3 × 3 block, in the lower 2 × 2 block, or on the diagonal, we can identify the first set with the colour gauge fields, the second with the weak fields, and the diagonal generator with the hypercharge field (up to some normalization). Using the embedding, we can explicitly check that the fermionic fields transform as they should. This explicit embedding can be found in the literature or in the original paper by Georgi and Glashow. Breaking SU(5) SU(5) breaking occurs when a scalar field (which we will denote as Φ), analogous to the Higgs field and transforming in the adjoint of SU(5), acquires a vacuum expectation value (vev) proportional to the weak hypercharge generator Y. When this occurs, SU(5) is spontaneously broken to the subgroup of SU(5) commuting with the group generated by Y. Using the embedding from the previous section, we can explicitly check that this subgroup contains the SM gauge fields, since their generators commute with Y. Computation of similar commutators further shows that all other gauge fields acquire masses. To be precise, the unbroken subgroup is actually (SU(3) × SU(2) × U(1))/ℤ₆. Under this unbroken subgroup, the adjoint 24 transforms as (8, 1)_0 ⊕ (1, 3)_0 ⊕ (1, 1)_0 ⊕ (3, 2)_{−5/6} ⊕ (3̄, 2)_{5/6} to yield the gauge bosons of the Standard Model plus the new X and Y bosons. See restricted representation. The Standard Model's quarks and leptons fit neatly into representations of SU(5). Specifically, the left-handed fermions combine into 3 generations of 5̄ ⊕ 10 ⊕ 1. Under the unbroken subgroup these transform as (3̄, 1)_{1/3} ⊕ (1, 2)_{−1/2} (from the 5̄), (3, 2)_{1/6} ⊕ (3̄, 1)_{−2/3} ⊕ (1, 1)_1 (from the 10), and (1, 1)_0 (from the 1) to yield precisely the left-handed fermionic content of the Standard Model, where in every generation d^c, u^c, e^c, and ν^c correspond to anti-down-type quark, anti-up-type quark, anti-down-type lepton, and anti-up-type lepton, respectively, and Q and L correspond to the quark and lepton doublets. Fermions transforming as 1 under SU(5) are now thought to be necessary because of the evidence for neutrino oscillations, unless a way is found to introduce an infinitesimal Majorana coupling for the left-handed neutrinos. Since the second homotopy group of the vacuum manifold is nontrivial (π₂ ≅ ℤ), this model predicts 't Hooft–Polyakov monopoles. Because the electromagnetic charge is a linear combination of some SU(2) generator with Y/2, these monopoles also have quantized magnetic charges, where by magnetic, here we mean magnetic electromagnetic charges. Minimal supersymmetric SU(5) The minimal supersymmetric SU(5) model assigns a matter parity to the chiral superfields with the matter fields having odd parity and the Higgs having even parity to protect the electroweak Higgs from quadratic radiative mass corrections (the hierarchy problem). In the non-supersymmetric version the action is invariant under a similar symmetry because the matter fields are all fermionic and thus must appear in the action in pairs, while the Higgs fields are bosonic. Chiral superfields As complex representations: Superpotential A generic invariant renormalizable superpotential is a (complex) invariant cubic polynomial in the superfields. It is a linear combination of the following terms: the first column is an abbreviation of the second column (neglecting proper normalization factors), where capital indices are SU(5) indices, and i and j are the generation indices. The last two rows presuppose that the multiplicity of the sterile-neutrino representation is not zero (i.e. that a sterile neutrino exists). The 10_i 10_j 5_H coupling has coefficients which are symmetric in i and j. The sterile-neutrino coupling has coefficients which are symmetric in i and j. The number of sterile neutrino generations need not be three, unless the SU(5) is embedded in a higher unification scheme such as SO(10). Vacua The vacua correspond to the mutual zeros of the F- and D-terms. 
Let us first look at the case where the VEVs of all the chiral fields are zero except for the adjoint Φ. The Φ sector The zeros correspond to finding the stationary points of the superpotential subject to the traceless constraint Tr Φ = 0. So, ∂W/∂Φ = λ 1, where λ is a Lagrange multiplier. Up to an SU(5) (unitary) transformation, the VEV can be taken diagonal, and the three solutions are Φ = 0, Φ ∝ diag(1, 1, 1, 1, −4) and Φ ∝ diag(2, 2, 2, −3, −3). The three cases are called case I, II, and III and they break the gauge symmetry into SU(5), (SU(4) × U(1))/ℤ₄ and (SU(3) × SU(2) × U(1))/ℤ₆ respectively (the stabilizer of the VEV). In other words, there are at least three different superselection sectors, which is typical for supersymmetric theories. Only case III makes any phenomenological sense and so, we will focus on this case from now onwards. It can be verified that this solution together with zero VEVs for all the other chiral multiplets is a zero of the F-terms and D-terms. The matter parity remains unbroken (right up to the TeV scale). Decomposition The gauge algebra 24 decomposes as (8, 1)_0 ⊕ (1, 3)_0 ⊕ (1, 1)_0 ⊕ (3, 2)_{−5/6} ⊕ (3̄, 2)_{5/6}. This 24 is a real representation, so the last two terms need explanation. Both (3, 2)_{−5/6} and (3̄, 2)_{5/6} are complex representations. However, the direct sum of both representations decomposes into two irreducible real representations and we only take half of the direct sum, i.e. one of the two real irreducible copies. The first three components are left unbroken. The adjoint Higgs also has a similar decomposition, except that it is complex. The Higgs mechanism causes one real half of the (3, 2)_{−5/6} ⊕ (3̄, 2)_{5/6} of the adjoint Higgs to be absorbed. The other real half acquires a mass coming from the D-terms. And the other three components of the adjoint Higgs, (8, 1)_0, (1, 3)_0 and (1, 1)_0, acquire GUT scale masses coming from self pairings of the superpotential. The sterile neutrinos, if any exist, would also acquire a GUT scale Majorana mass coming from the corresponding superpotential coupling. Because of matter parity, the matter representations 5̄ and 10 remain chiral. It is the Higgs fields 5_H and 5̄_H which are interesting. The two relevant superpotential terms here are the coupling of the Higgs pair to the adjoint and the direct mass term for the Higgs pair. Unless there happens to be some fine tuning, we would expect both the triplet terms and the doublet terms to pair up, leaving us with no light electroweak doublets. This is in complete disagreement with phenomenology. See doublet–triplet splitting problem for more details. Fermion masses Problems of the Georgi–Glashow model Proton decay in SU(5) Unification of the Standard Model via an SU(5) group has significant phenomenological implications. Most notable of these is proton decay, which is present in SU(5) with and without supersymmetry. This is allowed by the new vector bosons introduced from the adjoint representation of SU(5), which also contains the gauge bosons of the Standard Model forces. Since these new gauge bosons are in (3, 2)_{−5/6} bifundamental representations, they violate baryon and lepton number. As a result, the new operators should cause protons to decay at a rate inversely proportional to the fourth power of their masses. This process is called dimension-6 proton decay and is an issue for the model, since the proton is experimentally determined to have a lifetime greater than the age of the universe. This means that an SU(5) model is severely constrained by this process. As well as these new gauge bosons, in SU(5) models the Higgs field is usually embedded in a 5 representation of the GUT group. The caveat of this is that since the Higgs field is an SU(2) doublet, the remaining part, an SU(3) triplet, must be some new field, usually called D or T. This new scalar would be able to generate proton decay as well and, assuming the most basic Higgs vacuum alignment, would be massless, allowing the process at very high rates. 
While not an issue in the Georgi–Glashow model, a supersymmetrized SU(5) model would have additional proton decay operators due to the superpartners of the Standard Model fermions. The lack of detection of proton decay (in any form) brings into question the veracity of SU(5) GUTs of all types; however, while the models are highly constrained by this result, they are not in general ruled out. Mechanism In the lowest-order Feynman diagram corresponding to the simplest source of proton decay in SU(5), a left-handed and a right-handed up quark annihilate, yielding an X+ boson which decays to a right-handed (or left-handed) positron and a left-handed (or right-handed) anti-down quark: This process conserves weak isospin, weak hypercharge, and color. GUTs equate anti-color with having two colors, and SU(5) defines left-handed normal leptons as "white" and right-handed antileptons as "black". The first vertex only involves fermions of the 10 representation, while the second only involves fermions in the 5̄ (or 10), demonstrating the preservation of SU(5) symmetry. Mass relations Since SM states are regrouped into 5̄ and 10 representations, their Yukawa matrices have the following relations: the down-quark and charged-lepton Yukawas are transposes of each other. In particular this predicts m_e ≈ m_d, m_μ ≈ m_s and m_τ ≈ m_b at energies close to the scale of unification. This is however not realized in nature. Doublet–triplet splitting As mentioned in the above section, the colour triplet of the 5_H which contains the SM Higgs can mediate dimension-6 proton decay. Since protons seem to be quite stable, such a triplet has to acquire a quite large mass in order to suppress the decay. This is however problematic. For that, consider the scalar part of the Georgi–Glashow Lagrangian. Here we have denoted the adjoint used to break SU(5) to the SM by Φ with its VEV, and the defining representation 5_H, which contains the SM Higgs doublet and the colour triplet that can induce proton decay. As mentioned, we require the triplet mass to lie near the unification scale in order to sufficiently suppress proton decay. On the other hand, the doublet mass is typically of order the electroweak scale in order to be consistent with observations. Looking at the resulting mass terms, it becomes clear that one has to be very precise in choosing the parameters: any two random parameters will not do, since then the doublet and triplet masses could be of the same order! This is known as the doublet–triplet (DT) splitting problem: in order to be consistent we have to 'split' the masses of the doublet and the triplet, but for that we need to fine-tune the two contributions. There are, however, some solutions to this problem which can work quite well in SUSY models; reviews of the DT splitting problem can be found in the literature. Neutrino masses Like the SM, the original Georgi–Glashow model does not include neutrino masses. However, since neutrino oscillation has been observed, such masses are required. The solutions to this problem follow the same ideas which have been applied to the SM: on the one hand, one can include a singlet which can then generate either Dirac masses or Majorana masses; as in the SM, one can also implement the type-I seesaw mechanism, which then generates naturally light masses. On the other hand, one can just parametrize the ignorance about neutrinos using the dimension-5 Weinberg operator, with the Yukawa matrix required for the mixing between flavours. References Grand Unified Theory Supersymmetric quantum field theory
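The scaling quoted above (decay rate suppressed by the fourth power of the heavy boson mass) gives the classic order-of-magnitude estimate of the proton lifetime. The sketch below is purely illustrative: the formula Gamma ~ alpha^2 * m_p^5 / M_X^4 is a dimensional-analysis estimate, and the values of alpha and M_X are assumptions, not quantities taken from this article.

# Order-of-magnitude proton lifetime from Gamma ~ alpha^2 * m_p^5 / M_X^4.
hbar_GeV_s = 6.58e-25       # hbar in GeV*s
m_p = 0.938                 # proton mass [GeV]
M_X = 1e15                  # assumed heavy gauge boson mass [GeV]
alpha = 1 / 40              # assumed unified gauge coupling strength

gamma = alpha**2 * m_p**5 / M_X**4      # decay width in GeV
tau_years = hbar_GeV_s / gamma / 3.15e7
print(f"tau ~ {tau_years:.0e} years")   # roughly 1e31 years for these inputs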
Georgi–Glashow model
[ "Physics" ]
3,131
[ "Supersymmetric quantum field theory", "Unsolved problems in physics", "Physics beyond the Standard Model", "Grand Unified Theory", "Supersymmetry", "Symmetry" ]
295,221
https://en.wikipedia.org/wiki/Pati%E2%80%93Salam%20model
In physics, the Pati–Salam model is a Grand Unified Theory (GUT) proposed in 1974 by Abdus Salam and Jogesh Pati. Like other GUTs, its goal is to explain the seeming arbitrariness and complexity of the Standard Model in terms of a simpler, more fundamental theory that unifies what are in the Standard Model disparate particles and forces. The Pati–Salam unification is based on there being four quark color charges, dubbed red, green, blue and violet (or originally lilac), instead of the conventional three, with the new "violet" quark being identified with the leptons. The model also has left–right symmetry and predicts the existence of a high-energy right-handed weak interaction with heavy W' and Z' bosons and right-handed neutrinos. Originally the fourth color was labelled "lilac" to alliterate with "lepton". Pati–Salam is an alternative to the Georgi–Glashow unification also proposed in 1974. Both can be embedded within an SO(10) unification model. Core theory The Pati–Salam model states that the gauge group is either SU(4) × SU(2)_L × SU(2)_R or (SU(4) × SU(2)_L × SU(2)_R)/ℤ₂ and the fermions form three families, each consisting of the representations (4, 2, 1) and (4̄, 1, 2). This needs some explanation. The center of SU(4) × SU(2)_L × SU(2)_R is ℤ₄ × ℤ₂ × ℤ₂. The ℤ₂ in the quotient refers to the two-element subgroup generated by the element of the center corresponding to the order-two element of ℤ₄ and the −1 elements of SU(2)_L and SU(2)_R. These representations include the right-handed neutrino. See neutrino oscillations. There is also a (4, 1, 2) and/or a (4̄, 1, 2) scalar field called the Higgs field which acquires a non-zero VEV. This results in a spontaneous symmetry breaking from SU(4) × SU(2)_L × SU(2)_R to the Standard Model group (or from the ℤ₂ quotient to the corresponding quotient), and the representations branch accordingly. See restricted representation. Of course, calling the representations things like (4, 2, 1) and (4̄, 1, 2) is purely a physicist's convention, not a mathematician's convention, where representations are either labelled by Young tableaux or Dynkin diagrams with numbers on their vertices, but still, it is standard among GUT theorists. The weak hypercharge, Y, is the sum of the two matrices (B − L)/2, acting as diag(1/6, 1/6, 1/6, −1/2) on the SU(4) index, and T³_R = diag(1/2, −1/2), acting on the SU(2)_R index. It is possible to extend the Pati–Salam group so that it has two connected components. The relevant group is now the semidirect product (SU(4) × SU(2)_L × SU(2)_R) ⋊ ℤ₂. The last ℤ₂ also needs explaining. It corresponds to an automorphism of the (unextended) Pati–Salam group which is the composition of an involutive outer automorphism of SU(4) (one which isn't an inner automorphism) with the interchange of the left and right copies of SU(2). This explains the name left and right and is one of the main motivations for originally studying this model. This extra "left-right symmetry" restores the concept of parity which had been shown not to hold at low energy scales for the weak interaction. In this extended model, (4, 2, 1) ⊕ (4̄, 1, 2) is an irrep and so is its Higgs counterpart. This is the simplest extension of the minimal left-right model unifying QCD with B−L. Since the second homotopy group of the vacuum manifold is nontrivial, this model predicts monopoles. See 't Hooft–Polyakov monopole. This model was invented by Jogesh Pati and Abdus Salam. This model doesn't predict gauge-mediated proton decay (unless it is embedded within an even larger GUT group). Differences from the SU(5) unification As mentioned above, both the Pati–Salam and Georgi–Glashow unification models can be embedded in an SO(10) unification. The difference between the two models then lies in the way that the symmetry is broken, generating different particles that may or may not be important at low scales and accessible by current experiments. If we look at the individual models, the most important difference is in the origin of the weak hypercharge. 
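The charge assignments described above can be spot-checked with a few lines of code. The sketch below is an illustration: the B − L values and the relation Q = T3_L + T3_R + (B − L)/2 are the standard conventions filled in above, and it reproduces the Standard Model electric charges for one family.

# Sketch: Q = T3_L + T3_R + (B-L)/2 for the Pati-Salam fermion content.
for name, BL in (("quark", 1/3), ("lepton", -1)):
    for t3 in (+0.5, -0.5):
        q_left = t3 + 0 + BL / 2     # state in (4, 2, 1): T3_R = 0
        q_right = 0 + t3 + BL / 2    # right-handed partner carries T3_R instead
        print(name, t3, "->", q_left, q_right)
# quarks: 2/3 and -1/3; leptons: 0 (neutrino) and -1 (electron)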
In the SU(5) model by itself there is no left-right symmetry (although there could be one in a larger unification in which the model is embedded), and the weak hypercharge is treated separately from the color charge. In the Pati–Salam model, part of the weak hypercharge (the B − L piece) starts being unified with the color charge in the SU(4) group, while the other part of the weak hypercharge is in the SU(2)_R. When those two groups break, the two parts together eventually unify into the usual weak hypercharge U(1)_Y. Minimal supersymmetric Pati–Salam Spacetime The superspace extension of Minkowski spacetime Spatial symmetry N = 1 SUSY over Minkowski spacetime with R-symmetry Gauge symmetry group SU(4) × SU(2)_L × SU(2)_R Global internal symmetry Vector superfields Those associated with the gauge symmetry Chiral superfields As complex representations: Superpotential A generic invariant renormalizable superpotential is a (complex) gauge- and R-symmetry-invariant cubic polynomial in the superfields. It is a linear combination of the following terms, where i and j are the generation indices. Left-right extension We can extend this model to include left-right symmetry. For that, we need additional chiral multiplets. Sources Graham G. Ross, Grand Unified Theories, Benjamin/Cummings, 1985. Anthony Zee, Quantum Field Theory in a Nutshell, Princeton U. Press, Princeton, 2003. References External links Fusion of all three quarks is the only decay mechanism mediated by the Higgs particle, not the gauge bosons, in the Pati–Salam model. The Algebra of Grand Unified Theories by John Huerta. Slide show: contains an overview of the Pati–Salam model. Motivation for the Pati–Salam model. Grand Unified Theory Abdus Salam
Pati–Salam model
[ "Physics" ]
1,136
[ "Unsolved problems in physics", "Physics beyond the Standard Model", "Grand Unified Theory" ]
295,684
https://en.wikipedia.org/wiki/Molybdenum%20disulfide
Molybdenum disulfide (or moly) is an inorganic compound composed of molybdenum and sulfur. Its chemical formula is MoS2. The compound is classified as a transition metal dichalcogenide. It is a silvery black solid that occurs as the mineral molybdenite, the principal ore for molybdenum. MoS2 is relatively unreactive. It is unaffected by dilute acids and oxygen. In appearance and feel, molybdenum disulfide is similar to graphite. It is widely used as a dry lubricant because of its low friction and robustness. Bulk MoS2 is a diamagnetic, indirect-bandgap semiconductor similar to silicon, with a bandgap of 1.23 eV. Production MoS2 is naturally found as either molybdenite, a crystalline mineral, or jordisite, a rare low-temperature form of molybdenite. Molybdenite ore is processed by flotation to give relatively pure MoS2. The main contaminant is carbon. MoS2 also arises by thermal treatment of virtually all molybdenum compounds with hydrogen sulfide or elemental sulfur and can be produced by metathesis reactions from molybdenum pentachloride. Structure and physical properties Crystalline phases All forms of MoS2 have a layered structure, in which a plane of molybdenum atoms is sandwiched by planes of sulfide ions. These three strata form a monolayer of MoS2. Bulk MoS2 consists of stacked monolayers, which are held together by weak van der Waals interactions. Crystalline MoS2 exists in one of two phases, 2H-MoS2 and 3R-MoS2, where the "H" and the "R" indicate hexagonal and rhombohedral symmetry, respectively. In both of these structures, each molybdenum atom exists at the center of a trigonal prismatic coordination sphere and is covalently bonded to six sulfide ions. Each sulfur atom has pyramidal coordination and is bonded to three molybdenum atoms. Both the 2H- and 3R-phases are semiconducting. A third, metastable crystalline phase known as 1T-MoS2 was discovered by intercalating 2H-MoS2 with alkali metals. This phase has trigonal symmetry and is metallic. The 1T-phase can be stabilized through doping with electron donors such as rhenium, or converted back to the 2H-phase by microwave radiation. The 2H/1T-phase transition can be controlled via the incorporation of sulfur (S) vacancies. Allotropes Nanotube-like and buckyball-like molecules composed of MoS2 are known. Exfoliated MoS2 flakes While bulk MoS2 in the 2H-phase is known to be an indirect-band gap semiconductor, monolayer MoS2 has a direct band gap. The layer-dependent optoelectronic properties of MoS2 have promoted much research in 2-dimensional MoS2-based devices. 2D MoS2 can be produced by exfoliating bulk crystals to produce single-layer to few-layer flakes either through a dry, micromechanical process or through solution processing. Micromechanical exfoliation, also pragmatically called "Scotch-tape exfoliation", involves using an adhesive material to repeatedly peel apart a layered crystal by overcoming the van der Waals forces. The crystal flakes can then be transferred from the adhesive film to a substrate. This facile method was first used by Konstantin Novoselov and Andre Geim to obtain graphene from graphite crystals. However, it cannot be employed to produce uniform single layers of MoS2 because of the weaker adhesion of MoS2 to the substrate (either silicon, glass or quartz); the aforementioned scheme is good for graphene only. While Scotch tape is generally used as the adhesive tape, PDMS stamps can also satisfactorily cleave MoS2 if it is important to avoid contaminating the flakes with residual adhesive. Liquid-phase exfoliation can also be used to produce monolayer to multi-layer MoS2 in solution. 
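As a quick numerical illustration of the band gaps quoted above (1.23 eV for bulk, and a direct gap of roughly 1.8 eV for a monolayer, discussed further below), the sketch converts gap energies to the corresponding photon wavelengths via lambda = hc/E. This is a back-of-the-envelope aid, not a statement about measured emission spectra.

# Converting the quoted band gaps to photon wavelengths (lambda = hc/E).
HC_EV_NM = 1239.84   # hc in eV*nm
for label, gap_ev in (("bulk, indirect", 1.23), ("monolayer, direct", 1.8)):
    print(f"{label}: {HC_EV_NM / gap_ev:.0f} nm")
# bulk: ~1008 nm (near infrared); monolayer: ~689 nm (red)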
A few methods include lithium intercalation to delaminate the layers and sonication in a high-surface-tension solvent. Mechanical properties MoS2 excels as a lubricating material (see below) due to its layered structure and low coefficient of friction. Interlayer sliding dissipates energy when a shear stress is applied to the material. Extensive work has been performed to characterize the coefficient of friction and shear strength of MoS2 in various atmospheres. The shear strength of MoS2 increases as the coefficient of friction increases. This property is called superlubricity. At ambient conditions, the coefficient of friction for MoS2 was determined to be 0.150, with a corresponding estimated shear strength of 56.0 MPa. Direct methods of measuring the shear strength indicate that the value is closer to 25.3 MPa. The wear resistance of MoS2 in lubricating applications can be increased by doping MoS2 with Cr. Microindentation experiments on nanopillars of Cr-doped MoS2 found that the yield strength increased from an average of 821 MPa for pure MoS2 (at 0% Cr) to 1017 MPa at 50% Cr. The increase in yield strength is accompanied by a change in the failure mode of the material. While the pure MoS2 nanopillar fails through a plastic bending mechanism, brittle fracture modes become apparent as the material is loaded with increasing amounts of dopant. The widely used method of micromechanical exfoliation has been carefully studied in MoS2 to understand the mechanism of delamination in few-layer to multi-layer flakes. The exact mechanism of cleavage was found to be layer dependent. Flakes thinner than 5 layers undergo homogeneous bending and rippling, while flakes around 10 layers thick delaminated through interlayer sliding. Flakes with more than 20 layers exhibited a kinking mechanism during micromechanical cleavage. The cleavage of these flakes was also determined to be reversible due to the nature of van der Waals bonding. In recent years, MoS2 has been utilized in flexible electronic applications, promoting more investigation into the elastic properties of this material. Nanoscopic bending tests using AFM cantilever tips were performed on micromechanically exfoliated MoS2 flakes that were deposited on a holey substrate. The Young's modulus of monolayer flakes was 270 GPa, while thicker flakes were also stiffer, with a Young's modulus of 330 GPa. Molecular dynamics simulations found the in-plane Young's modulus of MoS2 to be 229 GPa, which matches the experimental results within error. Bertolazzi and coworkers also characterized the failure modes of the suspended monolayer flakes. The strain at failure ranges from 6 to 11%. The average yield strength of monolayer MoS2 is 23 GPa, which is close to the theoretical fracture strength for defect-free MoS2. The band structure of MoS2 is sensitive to strain. Chemical reactions Molybdenum disulfide is stable in air and attacked only by aggressive reagents. It reacts with oxygen upon heating, forming molybdenum trioxide: 2 MoS2 + 7 O2 → 2 MoO3 + 4 SO2. Chlorine attacks molybdenum disulfide at elevated temperatures to form molybdenum pentachloride: 2 MoS2 + 7 Cl2 → 2 MoCl5 + 2 S2Cl2. Intercalation reactions Molybdenum disulfide is a host for formation of intercalation compounds. This behavior is relevant to its use as a cathode material in batteries. One example is a lithiated material, LixMoS2. With butyl lithium, the product is LiMoS2. Applications Lubricant Due to weak van der Waals interactions between the sheets of sulfide atoms, MoS2 has a low coefficient of friction. MoS2 in particle sizes in the range of 1–100 μm is a common dry lubricant. 
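As a rough consistency check of the elastic numbers above, linear elasticity (stress ~ modulus * strain) applied to the quoted monolayer modulus and strain-at-failure range brackets the quoted ~23 GPa average strength. This is only indicative, since fracture is not perfectly linear-elastic.

# Rough check: stress implied by E * strain versus the quoted ~23 GPa strength.
E_MONOLAYER_PA = 270e9            # monolayer modulus quoted above [Pa]
for strain in (0.06, 0.11):       # quoted strain-at-failure range
    sigma_gpa = E_MONOLAYER_PA * strain / 1e9
    print(f"strain {strain:.0%}: sigma ~ {sigma_gpa:.0f} GPa")
# prints ~16 GPa and ~30 GPa, bracketing the quoted 23 GPa average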
Few alternatives exist that confer high lubricity and stability at up to 350 °C in oxidizing environments. Sliding friction tests of MoS2 using a pin-on-disc tester at low loads (0.1–2 N) give friction coefficient values of <0.1. MoS2 is often a component of blends and composites that require low friction. For example, it is added to graphite to improve sticking. A variety of oils and greases containing MoS2 are used, because they retain their lubricity even in cases of almost complete oil loss, thus finding a use in critical applications such as aircraft engines. When added to plastics, MoS2 forms a composite with improved strength as well as reduced friction. Polymers that may be filled with MoS2 include nylon (trade name Nylatron), Teflon and Vespel. Self-lubricating composite coatings for high-temperature applications consist of molybdenum disulfide and titanium nitride, using chemical vapor deposition. Examples of applications of MoS2-based lubricants include two-stroke engines (such as motorcycle engines), bicycle coaster brakes, automotive CV and universal joints, ski waxes and bullets. Other layered inorganic materials that exhibit lubricating properties (collectively known as solid lubricants or dry lubricants) include graphite, which requires volatile additives, and hexagonal boron nitride. Catalysis MoS2 is employed as a cocatalyst for desulfurization in petrochemistry, for example, hydrodesulfurization. The effectiveness of the MoS2 catalysts is enhanced by doping with small amounts of cobalt or nickel. The intimate mixture of these sulfides is supported on alumina. Such catalysts are generated in situ by treating molybdate/cobalt- or nickel-impregnated alumina with H2S or an equivalent reagent. Catalysis does not occur at the regular sheet-like regions of the crystallites, but instead at the edge of these planes. MoS2 finds use as a hydrogenation catalyst for organic synthesis. As it is derived from a common transition metal, rather than a group 10 metal, MoS2 is chosen when price or resistance to sulfur poisoning are of primary concern. MoS2 is effective for the hydrogenation of nitro compounds to amines and can be used to produce secondary amines via reductive amination. The catalyst can also effect hydrogenolysis of organosulfur compounds, aldehydes, ketones, phenols and carboxylic acids to their respective alkanes. However, it suffers from low activity, often requiring hydrogen pressures above 96 MPa and temperatures above 185 °C. Research MoS2 plays an important role in condensed matter physics research. Hydrogen evolution MoS2 and related molybdenum sulfides are efficient catalysts for hydrogen evolution, including the electrolysis of water; thus, they are possibly useful to produce hydrogen for use in fuel cells. Oxygen reduction and evolution MoS2@Fe-N-C core/shell nanospheres with an atomically Fe-doped surface and interface (MoS2/Fe-N-C) can be used as a bifunctional electrocatalyst for oxygen reduction and evolution reactions (ORR and OER) because of the reduced energy barrier due to Fe-N4 dopants and the unique nature of the MoS2/Fe-N-C interface. Microelectronics As in graphene, the layered structures of MoS2 and other transition metal dichalcogenides exhibit electronic and optical properties that can differ from those in bulk. Bulk MoS2 has an indirect band gap of 1.2 eV, while MoS2 monolayers have a direct 1.8 eV electronic bandgap, supporting switchable transistors and photodetectors. MoS2 nanoflakes can be used for solution-processed fabrication of layered memristive and memcapacitive devices through engineering a MoOx/MoS2 heterostructure sandwiched between silver electrodes. 
MoS2-based memristors are mechanically flexible, optically transparent and can be produced at low cost. The sensitivity of a graphene field-effect transistor (FET) biosensor is fundamentally restricted by the zero band gap of graphene, which results in increased leakage and reduced sensitivity. In digital electronics, transistors control current flow throughout an integrated circuit and allow for amplification and switching. In biosensing, the physical gate is removed and the binding between embedded receptor molecules and the charged target biomolecules to which they are exposed modulates the current. MoS2 has been investigated as a component of flexible circuits. In 2017, a 115-transistor, 1-bit microprocessor implementation was fabricated using two-dimensional MoS2. MoS2 has been used to create 2D 2-terminal memristors and 3-terminal memtransistors. Valleytronics Due to the lack of spatial inversion symmetry, odd-layer MoS2 is a promising material for valleytronics because both the conduction-band minimum (CBM) and valence-band maximum (VBM) have two energy-degenerate valleys at the corners of the first Brillouin zone, providing an exciting opportunity to store the information of 0s and 1s at different discrete values of the crystal momentum. Because the Berry curvature is even under spatial inversion (P) and odd under time reversal (T), the valley Hall effect cannot survive when both P and T symmetries are present. To excite the valley Hall effect in specific valleys, circularly polarized light was used for breaking the T symmetry in atomically thin transition-metal dichalcogenides. In monolayer MoS2, the T and mirror symmetries lock the spin and valley indices of the sub-bands split by the spin-orbit couplings, both of which are flipped under T; the spin conservation suppresses the inter-valley scattering. Therefore, monolayer MoS2 has been deemed an ideal platform for realizing an intrinsic valley Hall effect without extrinsic symmetry breaking. Photonics and photovoltaics MoS2 also possesses mechanical strength and electrical conductivity, and can emit light, opening possible applications such as photodetectors. MoS2 has been investigated as a component of photoelectrochemical (e.g. for photocatalytic hydrogen production) applications and for microelectronics applications. Superconductivity of monolayers Under an electric field, MoS2 monolayers have been found to superconduct at temperatures below 9.4 K. See also Molybdenum diselenide References External links Molybdenum(IV) compounds Disulfides Non-petroleum based lubricants Dry lubricants Semiconductor materials Transition metal dichalcogenides Hydrogenation catalysts Monolayers
Molybdenum disulfide
[ "Physics", "Chemistry" ]
2,924
[ "Monolayers", "Hydrogenation catalysts", "Semiconductor materials", "Hydrogenation", "Atoms", "Matter" ]
295,772
https://en.wikipedia.org/wiki/Epact
The epact (from the Greek epaktai hēmerai, "added days") used to be described by medieval computists as the age of a phase of the Moon in days on 22 March; in the newer Gregorian calendar, however, the epact is reckoned as the age of the ecclesiastical moon on 1 January. Its principal use is in determining the date of Easter by computistical methods. It varies (usually by 11 days) from year to year, because of the difference between the solar year of 365–366 days and the lunar year of 354–355 days. Lunar calendar Epacts can also be used to relate dates in the lunar calendar to dates in the common solar calendar. Solar and lunar years A solar calendar year has 365 days (366 days in leap years). A lunar calendar year has 12 lunar months which alternate between 30 and 29 days for a total of 354 days (in leap years, one of the lunar months has a day added; since a lunar year lasts a little over 354 days, a leap year arises every second or third year rather than every fourth.) If a solar and lunar year start on the same day, then after one year the start of the solar year is 11 days after the start of the lunar year. These excess days are epacts, and have to be added to the lunar year to complete the solar year; or from the complementary perspective they are added to the day of the solar year to determine the day in the lunar year. After two years the difference is 22 days, and after 3 years, 33 days. Whenever the epact reaches or exceeds 30 days, an extra (embolismic or intercalary) lunar month is inserted into the lunar calendar, and the epact is reduced by 30 days. Leap days extend both the solar and lunar year, so they do not affect epact calculations for any other dates. 19-year cycle The solar calendar year is slightly shorter than 365¼ days, while the synodic month, on average, is slightly longer than 29½ days; neither is a whole number of days. This gets corrected in the following way. Nineteen tropical years are deemed to be as long as 235 synodic months (Metonic cycle). A cycle can last 6939 or 6940 full days, depending on whether there are 4 or 5 leap days in this 19-year period. After 19 years the lunations should fall the same way in the solar years, so the epact should repeat after 19 years. However, 19 × 11 = 209, and this is not an integer multiple of the full cycle of 30 epact numbers (209 modulo 30 = 29, not 0). So after 19 years the epact must be corrected by +1 in order for the cycle to repeat over 19 years. This is the saltus lunae ("leap of the moon"). The sequence number of the year in the 19-year cycle is called the golden number. The extra 209 days fill 7 embolismic months, for a total of 19 × 12 + 7 = 235 lunations. Lilian (Gregorian) epacts When the Gregorian calendar reform was instituted in 1582, the lunar cycle previously used with the Julian calendar to complete the calculation of Easter dates was adjusted also, in accordance with a (modification of the) scheme devised by Aloysius Lilius. There were two adjustments to the old lunar cycle: a "solar equation", decrementing the epact by 1, whenever the Gregorian calendar drops a leap day (3 times in 400 calendar years), and a "lunar equation", incrementing the epact by 1, 8 times in 2500 calendar years (7 times after an interval of 300 years, and the 8th time after an interval of 400 years). 
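The yearly arithmetic described above (add 11 each year, reduce modulo 30, and apply the saltus lunae at the close of each 19-year cycle) can be sketched directly in code; the starting epact below is an arbitrary illustrative value, not a historically anchored one.

# Sketch of the 19-year epact cycle with the saltus lunae in the final year.
def epact_cycle(start=0, years=19):
    epact, table = start, []
    for golden in range(1, years + 1):             # golden number = year's position
        table.append((golden, epact))
        step = 11 + (1 if golden == years else 0)  # saltus lunae in year 19
        epact = (epact + step) % 30
    return table, epact

table, nxt = epact_cycle()
print(table)
print(nxt == 0)   # True: 19*11 + 1 = 210 = 7*30, so the cycle repeats exactly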
The revised "solar equation" was intended to adjust for the Gregorian change in the solar calendar, if they were applied at 1 January of the Julian calendar instead of the Gregorian calendar as the reformers implemented it; moreover the corrections to the solar calendar are leap days, whereas there are 30 epact values for a mean lunar month of and a bit: Therefore changing the epact by 1 day does not exactly compensate for a dropped leap day. The "lunar equation" only approximately adjusts for what had (by 1582) been seen after many centuries of recording, that the Moon moves a little faster than the expectation of the rate used for it in the old lunar cycle. By 1582 it was noted (for example, in the text of the bull Inter gravissimas itself) that the new and full moons were at that point occurring "four days and something more" sooner than the old lunar cycle indicated. History The discovery of the epact for computing the date of Easter has been attributed to Patriarch Demetrius I of Alexandria, who held office from 189–232 . In the year 214 he used the epact to produce an Easter calendar, which has not survived, which used an eight-year luni-solar cycle. A subsequent application of the epact to an Easter calendar, using a sixteen-year cycle, is found in the Paschal Table of Hippolytus, a 112 year list of Easter dates beginning in the year 222 which is inscribed on the side of a statue found in Rome. Augustalis, whose dates had been disputed from the third to the fifth century, computed a ("little tablet") of Easter dates. As reconstructed, it uses epacts (here the age of the moon on 1 January) and an 84 year luni-solar cycle to compute the dates of Easter using a base date of 213 . If we accept Augustalis's earlier dates, his laterculus extends from 213–312  and Augustalis originated the use of epacts to compute the date of Easter. As early as the fourth century we see Easter computus using the epact and the nineteen-year Metonic cycle in Alexandria, and subsequent computistical tables were influenced by the structure of the Alexandrian calendar. The epact was taken as the age of the Moon on 26 Phamenoth (22 March in the Julian calendar) but that value of the epact also corresponded to the age of the Moon on the last epagomenal day of the preceding year. Thus the epact can be seen as having been established at the beginning of the current year. Subsequent Easter tables, such as those of Bishop Theophilus of Alexandria, which covered 100 years beginning in 380 , and of his successor Bishop Cyril, which covered 95 years beginning in 437  discussed the computation of the epact in their introductory texts. Under the influence of Dionysius Exiguus and later, of Bede, the Alexandrian Easter Tables were adopted throughout Europe where they established the convention that the epact was the age of the Moon on 22 March. This Dionysian epact fell into disuse after the introduction of a perpetual calendar based on the golden number, which made the calculation of epacts unnecessary for ordinary computistical calculations. Two factors led to the creation of three new forms of the epact in the fifteenth and sixteenth centuries. The first was the increasing error of computistical techniques, which led to the introduction of a new Julian epact around 1478 , to be used for practical computations of the phase of the Moon for medical or astrological purposes. With the Gregorian reform of the calendar in 1582, two additional epacts came into use. 
The first was the Lilian epact, developed by Aloysius Lilius as an element of the ecclesiastical computations using the Gregorian calendar. The Lilian epact included corrections for the motions of the Sun and the Moon that broke the fixed relationship between the epact and the golden number. The second new epact was a simple adjustment of the practical Julian epact to account for the ten-day change produced by the Gregorian calendar. See also Computus Wikisource English translation of the (Latin) 1582 papal bull 'Inter gravissimas' instituting Gregorian calendar reform Coptic Epact Numbers References External links Epacts from the Catholic Encyclopedia Calendars Units of time
Epact
[ "Physics", "Mathematics" ]
1,617
[ "Calendars", "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
295,829
https://en.wikipedia.org/wiki/Reflection%20%28mathematics%29
In mathematics, a reflection (also spelled reflexion) is a mapping from a Euclidean space to itself that is an isometry with a hyperplane as the set of fixed points; this set is called the axis (in dimension 2) or plane (in dimension 3) of reflection. The image of a figure by a reflection is its mirror image in the axis or plane of reflection. For example, the mirror image of the small Latin letter p for a reflection with respect to a vertical axis (a vertical reflection) would look like q. Its image by reflection in a horizontal axis (a horizontal reflection) would look like b. A reflection is an involution: when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state. The term reflection is sometimes used for a larger class of mappings from a Euclidean space to itself, namely the non-identity isometries that are involutions. The set of fixed points (the "mirror") of such an isometry is an affine subspace, but is possibly smaller than a hyperplane. For instance a reflection through a point is an involutive isometry with just one fixed point; the image of the letter p under it would look like a d. This operation is also known as a central inversion, and exhibits Euclidean space as a symmetric space. In a Euclidean vector space, the reflection in the point situated at the origin is the same as vector negation. Other examples include reflections in a line in three-dimensional space. Typically, however, unqualified use of the term "reflection" means reflection in a hyperplane. Some mathematicians use "flip" as a synonym for "reflection". Construction In plane (or, respectively, 3-dimensional) geometry, to find the reflection of a point drop a perpendicular from the point to the line (plane) used for reflection, and extend it the same distance on the other side. To find the reflection of a figure, reflect each point in the figure. To reflect point P through the line AB using compass and straightedge, proceed as follows (see figure): Step 1 (red): construct a circle with center at P and some fixed radius r to create points A′ and B′ on the line AB, which will be equidistant from P. Step 2 (green): construct circles centered at A′ and B′ having radius r. P and Q will be the points of intersection of these two circles. Point Q is then the reflection of point P through line AB. Properties The matrix for a reflection is orthogonal with determinant −1 and eigenvalues −1, 1, 1, ..., 1. The product of two such matrices is a special orthogonal matrix that represents a rotation. Every rotation is the result of reflecting in an even number of hyperplanes through the origin, and every improper rotation is the result of reflecting in an odd number. Thus reflections generate the orthogonal group, and this result is known as the Cartan–Dieudonné theorem. Similarly the Euclidean group, which consists of all isometries of Euclidean space, is generated by reflections in affine hyperplanes. In general, a group generated by reflections in affine hyperplanes is known as a reflection group. The finite groups generated in this way are examples of Coxeter groups. Reflection across a line in the plane Reflection across an arbitrary line through the origin in two dimensions can be described by the following formula: Ref_l(v) = 2 (v · l / l · l) l − v, where v denotes the vector being reflected, l denotes any vector in the line across which the reflection is performed, and v · l denotes the dot product of v with l. 
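A small numerical sketch of the line-reflection formula above (assuming NumPy; the function name is illustrative) shows both the formula and the involution property in action.

# Reflection of v across the line spanned by l: 2*(v.l)/(l.l)*l - v.
import numpy as np

def reflect_across_line(v, l):
    v, l = np.asarray(v, float), np.asarray(l, float)
    return 2 * np.dot(v, l) / np.dot(l, l) * l - v

v = np.array([3.0, 1.0])
l = np.array([1.0, 1.0])                     # the line y = x
print(reflect_across_line(v, l))             # [1. 3.]: coordinates swap
print(reflect_across_line(reflect_across_line(v, l), l))   # [3. 1.]: involution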
Note the formula above can also be written as Ref_l(v) = 2 Proj_l(v) − v, saying that a reflection of v across l is equal to 2 times the projection of v on l, minus the vector v. Reflections in a line have the eigenvalues 1 and −1. Reflection through a hyperplane in n dimensions Given a vector a in Euclidean space ℝⁿ, the formula for the reflection in the hyperplane through the origin, orthogonal to a, is given by Ref_a(v) = v − 2 (v · a / a · a) a, where v · a denotes the dot product of v with a. Note that the second term in the above equation is just twice the vector projection of v onto a. One can easily check that Ref_a(v) = −v, if v is parallel to a, and Ref_a(v) = v, if v is perpendicular to a. Using the geometric product, the formula is Ref_a(v) = −a v a / a². Since these reflections are isometries of Euclidean space fixing the origin, they may be represented by orthogonal matrices. The orthogonal matrix corresponding to the above reflection is the matrix R = I − 2 (a aᵀ)/(aᵀ a), where I denotes the n × n identity matrix and aᵀ is the transpose of a. Its entries are R_ij = δ_ij − 2 a_i a_j / ‖a‖², where δ_ij is the Kronecker delta. The formula for the reflection in the affine hyperplane {v : v · a = c} not through the origin is Ref_{a,c}(v) = v − 2 (v · a − c)/(a · a) a. See also Additive inverse Coordinate rotations and reflections Householder transformation Inversive geometry Plane of rotation Reflection mapping Reflection group Reflection symmetry Notes References External links Reflection in Line at cut-the-knot Understanding 2D Reflection and Understanding 3D Reflection by Roger Germundsson, The Wolfram Demonstrations Project. Euclidean symmetries Functions and mappings Linear operators Transformation (function)
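The matrix form above is easy to verify numerically. The sketch below (assuming NumPy) builds the reflection matrix for a sample vector a and checks the properties stated in this article: orthogonality, determinant −1, and the eigenvalue pattern −1, 1, ..., 1.

# Householder-style reflection matrix R = I - 2*a*a^T / (a^T a).
import numpy as np

a = np.array([1.0, 2.0, 2.0])
R = np.eye(3) - 2 * np.outer(a, a) / np.dot(a, a)

print(np.allclose(R @ R.T, np.eye(3)))   # True: R is orthogonal
print(round(np.linalg.det(R), 6))        # -1.0
print(np.sort(np.linalg.eigvalsh(R)))    # [-1.  1.  1.]
print(R @ a)                             # -a: the normal direction is flipped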
Reflection (mathematics)
[ "Physics", "Mathematics" ]
1,000
[ "Functions and mappings", "Mathematical analysis", "Euclidean symmetries", "Transformation (function)", "Mathematical objects", "Linear operators", "Mathematical relations", "Geometry", "Symmetry" ]
295,917
https://en.wikipedia.org/wiki/Noncommutative%20geometry
Noncommutative geometry (NCG) is a branch of mathematics concerned with a geometric approach to noncommutative algebras, and with the construction of spaces that are locally presented by noncommutative algebras of functions, possibly in some generalized sense. A noncommutative algebra is an associative algebra in which the multiplication is not commutative, that is, for which xy does not always equal yx; or more generally an algebraic structure in which one of the principal binary operations is not commutative; one also allows additional structures, e.g. topology or norm, to be possibly carried by the noncommutative algebra of functions. An approach giving deep insight about noncommutative spaces is through operator algebras, that is, algebras of bounded linear operators on a Hilbert space. Perhaps one of the typical examples of a noncommutative space is the "noncommutative torus", which played a key role in the early development of this field in the 1980s and led to noncommutative versions of vector bundles, connections, curvature, etc. Motivation The main motivation is to extend the commutative duality between spaces and functions to the noncommutative setting. In mathematics, spaces, which are geometric in nature, can be related to numerical functions on them. In general, such functions will form a commutative ring. For instance, one may take the ring C(X) of continuous complex-valued functions on a topological space X. In many cases (e.g., if X is a compact Hausdorff space), we can recover X from C(X), and therefore it makes some sense to say that X has commutative topology. More specifically, in topology, compact Hausdorff topological spaces can be reconstructed from the Banach algebra of functions on the space (Gelfand–Naimark). In commutative algebraic geometry, algebraic schemes are locally prime spectra of commutative unital rings (A. Grothendieck), and every quasi-separated scheme can be reconstructed up to isomorphism of schemes from the category of quasicoherent sheaves of -modules (P. Gabriel–A. Rosenberg). For Grothendieck topologies, the cohomological properties of a site are invariants of the corresponding category of sheaves of sets viewed abstractly as a topos (A. Grothendieck). In all these cases, a space is reconstructed from the algebra of functions or its categorified version—some category of sheaves on that space. Functions on a topological space can be multiplied and added pointwise, hence they form a commutative algebra; in fact these operations are local in the topology of the base space, hence the functions form a sheaf of commutative rings over the base space. The dream of noncommutative geometry is to generalize this duality to the duality between noncommutative algebras, or sheaves of noncommutative algebras, or sheaf-like noncommutative algebraic or operator-algebraic structures, and geometric entities of certain kinds, and give an interaction between the algebraic and geometric description of those via this duality. Given that commutative rings correspond to usual affine schemes, and commutative C*-algebras to usual topological spaces, the extension to noncommutative rings and algebras requires a non-trivial generalization of topological spaces as "non-commutative spaces". For this reason there is some talk about non-commutative topology, though the term also has other meanings. Applications in mathematical physics Some applications in particle physics are described in the entries noncommutative standard model and noncommutative quantum field theory. 
The sudden rise in interest in noncommutative geometry in physics follows the speculations about its role in M-theory made in 1997. Motivation from ergodic theory Some of the theory developed by Alain Connes to handle noncommutative geometry at a technical level has roots in older attempts, in particular in ergodic theory. The proposal of George Mackey to create a virtual subgroup theory, with respect to which ergodic group actions would become homogeneous spaces of an extended kind, has by now been subsumed. Noncommutative C*-algebras, von Neumann algebras The (formal) duals of non-commutative C*-algebras are often now called non-commutative spaces. This is by analogy with the Gelfand representation, which shows that commutative C*-algebras are dual to locally compact Hausdorff spaces. In general, one can associate to any C*-algebra S a topological space Ŝ; see spectrum of a C*-algebra. For the duality between localizable measure spaces and commutative von Neumann algebras, noncommutative von Neumann algebras are called non-commutative measure spaces. Noncommutative differentiable manifolds A smooth Riemannian manifold M is a topological space with a lot of extra structure. From its algebra of continuous functions C(M), we only recover M topologically. The algebraic invariant that recovers the Riemannian structure is a spectral triple. It is constructed from a smooth vector bundle E over M, e.g. the exterior algebra bundle. The Hilbert space L2(M, E) of square integrable sections of E carries a representation of C(M) by multiplication operators, and we consider an unbounded operator D in L2(M, E) with compact resolvent (e.g. the signature operator), such that the commutators [D, f] are bounded whenever f is smooth. A deep theorem states that M as a Riemannian manifold can be recovered from this data. This suggests that one might define a noncommutative Riemannian manifold as a spectral triple (A, H, D), consisting of a representation of a C*-algebra A on a Hilbert space H, together with an unbounded operator D on H, with compact resolvent, such that [D, a] is bounded for all a in some dense subalgebra of A. Research in spectral triples is very active, and many examples of noncommutative manifolds have been constructed. Noncommutative affine and projective schemes In analogy to the duality between affine schemes and commutative rings, we define a category of noncommutative affine schemes as the dual of the category of associative unital rings. There are certain analogues of Zariski topology in that context so that one can glue such affine schemes to more general objects. There are also generalizations of the Cone and of the Proj of a commutative graded ring, mimicking a theorem of Serre on Proj. Namely the category of quasicoherent sheaves of O-modules on a Proj of a commutative graded algebra is equivalent to the category of graded modules over the ring localized on Serre's subcategory of graded modules of finite length; there is also an analogous theorem for coherent sheaves when the algebra is Noetherian. This theorem is extended as a definition of noncommutative projective geometry by Michael Artin and J. J. Zhang, who also add some general ring-theoretic conditions (e.g. Artin–Schelter regularity). Many properties of projective schemes extend to this context. For example, there exists an analog of the celebrated Serre duality for noncommutative projective schemes of Artin and Zhang. A. L. 
Rosenberg has created a rather general relative concept of noncommutative quasicompact scheme (over a base category), abstracting Grothendieck's study of morphisms of schemes and covers in terms of categories of quasicoherent sheaves and flat localization functors. There is also another interesting approach via localization theory, due to Fred Van Oystaeyen, Luc Willaert and Alain Verschoren, where the main concept is that of a schematic algebra. Invariants for noncommutative spaces Some of the motivating questions of the theory are concerned with extending known topological invariants to formal duals of noncommutative (operator) algebras and other replacements and candidates for noncommutative spaces. One of the main starting points of Alain Connes' direction in noncommutative geometry is his discovery of a new homology theory associated to noncommutative associative algebras and noncommutative operator algebras, namely the cyclic homology and its relations to the algebraic K-theory (primarily via the Connes–Chern character map). The theory of characteristic classes of smooth manifolds has been extended to spectral triples, employing the tools of operator K-theory and cyclic cohomology. Several generalizations of now-classical index theorems allow for effective extraction of numerical invariants from spectral triples. The fundamental characteristic class in cyclic cohomology, the JLO cocycle, generalizes the classical Chern character. Examples of noncommutative spaces In the phase space formulation of quantum mechanics, the symplectic phase space of classical mechanics is deformed into a non-commutative phase space generated by the position and momentum operators. The noncommutative standard model is a proposed extension of the standard model of particle physics. The noncommutative torus, deformation of the function algebra of the ordinary torus, can be given the structure of a spectral triple. This class of examples has been studied intensively and still functions as a test case for more complicated situations. Snyder space Noncommutative algebras arising from foliations. Examples related to dynamical systems arising from number theory, such as the Gauss shift on continued fractions, give rise to noncommutative algebras that appear to have interesting noncommutative geometries. Connection In the sense of Connes A Connes connection is a noncommutative generalization of a connection in differential geometry. It was introduced by Alain Connes, and was later generalized by Joachim Cuntz and Daniel Quillen. Definition Given a right A-module E, a Connes connection on E is a linear map ∇ : E → E ⊗_A Ω¹(A), where Ω¹(A) is the bimodule of noncommutative 1-forms, that satisfies the Leibniz rule ∇(sa) = ∇(s)a + s ⊗ da for all s in E and a in A. See also Commutativity Fuzzy sphere Koszul connection Moyal product Noncommutative algebraic geometry Noncommutative topology Phase space formulation Quasi-free algebra Citations References References for Connes connection Further reading External links Introduction to Quantum Geometry by Micho Đurđevich (An easier introduction that is still rather technical) Noncommutative geometry on arxiv.org MathOverflow, Theories of Noncommutative Geometry Noncommutative geometry and particle physics connection in noncommutative geometry in nLab Connection (mathematics) Differential geometry Mathematical quantization Quantum gravity
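To make the noncommutative torus mentioned above concrete at a rational angle θ = p/q, the standard clock and shift matrices give a finite-dimensional representation of its defining relation UV = e^(2πiθ)VU. The sketch below (assuming NumPy) is an illustration of that standard construction, not code from any reference cited here.

# Clock and shift matrices: a q x q model of the rational noncommutative torus.
import numpy as np

p, q = 1, 5
w = np.exp(2j * np.pi * p / q)

U = np.diag([w**k for k in range(q)])    # clock matrix
V = np.roll(np.eye(q), 1, axis=0)        # shift matrix (cyclic permutation)

print(np.allclose(U @ V, w * (V @ U)))         # True: U V = w V U
print(np.allclose(U @ U.conj().T, np.eye(q)))  # True: U is unitary
print(np.allclose(V @ V.conj().T, np.eye(q)))  # True: V is unitary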
Noncommutative geometry
[ "Physics" ]
2,250
[ "Unsolved problems in physics", "Quantum mechanics", "Quantum gravity", "Mathematical quantization", "Physics beyond the Standard Model" ]
296,036
https://en.wikipedia.org/wiki/Technicolor%20%28physics%29
Technicolor theories are models of physics beyond the Standard Model that address electroweak gauge symmetry breaking, the mechanism through which W and Z bosons acquire masses. Early technicolor theories were modelled on quantum chromodynamics (QCD), the "color" theory of the strong nuclear force, which inspired their name. Instead of introducing elementary Higgs bosons to explain observed phenomena, technicolor models were introduced to dynamically generate masses for the W and Z bosons through new gauge interactions. Although asymptotically free at very high energies, these interactions must become strong and confining (and hence unobservable) at lower energies that have been experimentally probed. This dynamical approach is natural and avoids issues of quantum triviality and the hierarchy problem of the Standard Model. However, since the Higgs boson discovery at the CERN LHC in 2012, the original models are largely ruled out. Nonetheless, it remains a possibility that the Higgs boson is a composite state. In order to produce quark and lepton masses, technicolor or composite Higgs models have to be "extended" by additional gauge interactions. Particularly when modelled on QCD, extended technicolor was challenged by experimental constraints on flavor-changing neutral currents and precision electroweak measurements. The specific extensions of particle dynamics for technicolor or composite Higgs bosons are unknown. Much technicolor research focuses on exploring strongly interacting gauge theories other than QCD, in order to evade some of these challenges. A particularly active framework is "walking" technicolor, which exhibits nearly conformal behavior caused by an infrared fixed point with strength just above that necessary for spontaneous chiral symmetry breaking. Whether walking can occur and lead to agreement with precision electroweak measurements is being studied through non-perturbative lattice simulations. Experiments at the Large Hadron Collider have discovered the mechanism responsible for electroweak symmetry breaking, i.e., the Higgs boson, with mass approximately 125 GeV; such a particle is not generically predicted by technicolor models. However, the Higgs boson may be a composite state, e.g., built of top and anti-top quarks as in the Bardeen–Hill–Lindner theory. Composite Higgs models are generally solved by the top quark infrared fixed point, and may require a new dynamics at extremely high energies such as topcolor. Introduction The mechanism for the breaking of electroweak gauge symmetry in the Standard Model of elementary particle interactions remains unknown. The breaking must be spontaneous, meaning that the underlying theory manifests the symmetry exactly (the gauge-boson fields are massless in the equations of motion), but the solutions (the ground state and the excited states) do not. In particular, the physical W and Z gauge bosons become massive. This phenomenon, in which the W and Z bosons also acquire an extra polarization state, is called the "Higgs mechanism". Despite the precise agreement of the electroweak theory with experiment at energies accessible so far, the necessary ingredients for the symmetry breaking remain hidden, yet to be revealed at higher energies. The simplest mechanism of electroweak symmetry breaking introduces a single complex field and predicts the existence of the Higgs boson.
Typically, the Higgs boson is "unnatural" in the sense that quantum mechanical fluctuations produce corrections to its mass that lift it to such high values that it cannot play the role for which it was introduced. Unless the Standard Model breaks down at energies less than a few TeV, the Higgs mass can be kept small only by a delicate fine-tuning of parameters. Technicolor avoids this problem by hypothesizing a new gauge interaction coupled to new massless fermions. This interaction is asymptotically free at very high energies and becomes strong and confining as the energy decreases to the electroweak scale of 246 GeV. These strong forces spontaneously break the massless fermions' chiral symmetries, some of which are weakly gauged as part of the Standard Model. This is the dynamical version of the Higgs mechanism. The electroweak gauge symmetry is thus broken, producing masses for the W and Z bosons. The new strong interaction leads to a host of new composite, short-lived particles at energies accessible at the Large Hadron Collider (LHC). This framework is natural because there are no elementary Higgs bosons and, hence, no fine-tuning of parameters. Quark and lepton masses also break the electroweak gauge symmetries, so they, too, must arise spontaneously. A mechanism for incorporating this feature is known as extended technicolor. Technicolor and extended technicolor face a number of phenomenological challenges, in particular issues of flavor-changing neutral currents, precision electroweak tests, and the top quark mass. Technicolor models also do not generically predict Higgs-like bosons as light as 125 GeV; such a particle was discovered by experiments at the Large Hadron Collider in 2012. Some of these issues can be addressed with a class of theories known as "walking technicolor". Early technicolor Technicolor is the name given to the theory of electroweak symmetry breaking by new strong gauge-interactions whose characteristic energy scale is the weak scale itself, Λ_EW ≈ 246 GeV. The guiding principle of technicolor is "naturalness": basic physical phenomena should not require fine-tuning of the parameters in the Lagrangian that describes them. What constitutes fine-tuning is to some extent a subjective matter, but a theory with elementary scalar particles typically is very finely tuned (unless it is supersymmetric). The quadratic divergence in the scalar's mass requires adjustments of a part in M_φ²/M_bare², where M_bare is the cutoff of the theory, the energy scale at which the theory changes in some essential way. In the standard electroweak model with M_bare ≈ M_GUT (the grand-unification mass scale), and with the Higgs boson mass M_φ ≈ 100–1000 GeV, the mass is tuned to at least a part in 10^24. By contrast, a natural theory of electroweak symmetry breaking is an asymptotically free gauge theory with fermions as the only matter fields. The technicolor gauge group G is often assumed to be SU(N_TC). Based on analogy with quantum chromodynamics (QCD), it is assumed that there are one or more doublets of massless Dirac "technifermions" transforming vectorially under the same complex representation of G. Thus, there is a chiral symmetry of these fermions, e.g., SU(N_f)_L ⊗ SU(N_f)_R, if they all transform according to the same complex representation of G. Continuing the analogy with QCD, the running gauge coupling α_TC(μ) triggers spontaneous chiral symmetry breaking, the technifermions acquire a dynamical mass, and a number of massless Goldstone bosons result.
If the technifermions transform under SU(2) ⊗ U(1) as left-handed doublets and right-handed singlets, three linear combinations of these Goldstone bosons couple to three of the electroweak gauge currents. In 1973 Jackiw and Johnson and Cornwall and Norton studied the possibility that a (non-vectorial) gauge interaction of fermions can break itself; i.e., is strong enough to form a Goldstone boson coupled to the gauge current. Using Abelian gauge models, they showed that, if such a Goldstone boson is formed, it is "eaten" by the Higgs mechanism, becoming the longitudinal component of the now massive gauge boson. Technically, the polarization function Π(q²) appearing in the gauge boson propagator develops a pole at q² = 0 with residue F², the square of the Goldstone boson's decay constant, and the gauge boson acquires mass M ≈ gF. In 1973, Weinstein showed that composite Goldstone bosons whose constituent fermions transform in the "standard" way under SU(2) ⊗ U(1) generate the weak boson masses M_W = M_Z cos θ_W (1). This standard-model relation is achieved with elementary Higgs bosons in electroweak doublets; it is verified experimentally to better than 1%. Here, g and g′ are the SU(2) and U(1) gauge couplings and tan θ_W = g′/g defines the weak mixing angle. The important idea of a new strong gauge interaction of massless fermions at the electroweak scale driving the spontaneous breakdown of its global chiral symmetry, of which an SU(2) ⊗ U(1) subgroup is weakly gauged, was first proposed in 1979 by Weinberg and by Susskind. This "technicolor" mechanism is natural in that no fine-tuning of parameters is necessary. Extended technicolor Elementary Higgs bosons perform another important task. In the Standard Model, quarks and leptons are necessarily massless because they transform under SU(2) ⊗ U(1) as left-handed doublets and right-handed singlets. The Higgs doublet couples to these fermions. When it develops its vacuum expectation value, it transmits this electroweak breaking to the quarks and leptons, giving them their observed masses. (In general, electroweak-eigenstate fermions are not mass eigenstates, so this process also induces the mixing matrices observed in charged-current weak interactions.) In technicolor, something else must generate the quark and lepton masses. The only natural possibility, one avoiding the introduction of elementary scalars, is to enlarge G_TC to allow technifermions to couple to quarks and leptons. This coupling is induced by gauge bosons of the enlarged group. The picture, then, is that there is a large "extended technicolor" (ETC) gauge group in which technifermions, quarks, and leptons live in the same representations. At one or more high scales Λ_ETC, ETC is broken down to TC, and quarks and leptons emerge as the TC-singlet fermions. When α_TC(μ) becomes strong at the scale Λ_TC ≈ Λ_EW, the technifermion condensate ⟨T̄T⟩_TC ≈ 4πF³ forms. (The condensate is the vacuum expectation value of the technifermion bilinear T̄T. The estimate here is based on naive dimensional analysis of the quark condensate in QCD, expected to be correct as an order of magnitude.) Then, the transitions generating quark and lepton masses can proceed through the technifermion's dynamical mass by the emission and reabsorption of ETC bosons whose masses M_ETC ≈ g_ETC Λ_ETC are much greater than Λ_TC. The quarks and leptons develop masses given approximately by m_q(M_ETC) ≈ (g_ETC²/M_ETC²) ⟨T̄T⟩_ETC ≈ ⟨T̄T⟩_TC/Λ_ETC² (2). Here, ⟨T̄T⟩_ETC is the technifermion condensate renormalized at the ETC boson mass scale, ⟨T̄T⟩_ETC = ⟨T̄T⟩_TC exp(∫ dμ/μ γ_m(μ)), with the integral running from Λ_TC to M_ETC (3), where γ_m(μ) is the anomalous dimension of the technifermion bilinear T̄T at the scale μ. The second estimate in Eq.
(2) depends on the assumption that, as happens in QCD, α_TC(μ) becomes weak not far above Λ_TC, so that the anomalous dimension γ_m of T̄T is small there. Extended technicolor was introduced in 1979 by Dimopoulos and Susskind, and by Eichten and Lane. For a quark of mass m_q ≈ 1 GeV, and with Λ_TC ≈ 246 GeV, one estimates Λ_ETC ≈ 15 TeV. Therefore, assuming an ETC coupling of order one, M_ETC will be at least this large. In addition to the ETC proposal for quark and lepton masses, Eichten and Lane observed that the size of the ETC representations required to generate all quark and lepton masses suggests that there will be more than one electroweak doublet of technifermions. If so, there will be more (spontaneously broken) chiral symmetries and therefore more Goldstone bosons than are eaten by the Higgs mechanism. These must acquire mass by virtue of the fact that the extra chiral symmetries are also explicitly broken, by the standard-model interactions and the ETC interactions. These "pseudo-Goldstone bosons" are called technipions, π_T. An application of Dashen's theorem gives for the ETC contribution to their mass F_T² M_πT² ≈ (g_ETC²/M_ETC²) ⟨T̄T T̄T⟩_ETC ≈ ⟨T̄T⟩_ETC²/Λ_ETC² (4). The second approximation in Eq. (4) assumes that ⟨T̄T T̄T⟩_ETC ≈ ⟨T̄T⟩_ETC². For F_T ≈ F_EW ≈ 246 GeV and Λ_ETC ≈ 15 TeV, this contribution to M_πT is about 50 GeV. Since ETC interactions generate the quark and lepton masses and the coupling of technipions to quark and lepton pairs, one expects the couplings to be Higgs-like; i.e., roughly proportional to the masses of the quarks and leptons. This means that technipions are expected to predominantly decay to the heaviest possible quark and lepton pairs. Perhaps the most important restriction on the ETC framework for quark mass generation is that ETC interactions are likely to induce flavor-changing neutral current processes such as K_L → μe, and interactions that induce K⁰–K̄⁰ and B⁰–B̄⁰ mixing. The reason is that the algebra of the ETC currents involved in mass generation implies flavor-changing ETC currents which, when written in terms of fermion mass eigenstates, have no reason to conserve flavor. The strongest constraint comes from requiring that ETC interactions mediating K⁰–K̄⁰ mixing contribute less than the Standard Model. This implies an effective Λ_ETC greater than 1000 TeV. The actual Λ_ETC may be reduced somewhat if CKM-like mixing angle factors are present. If these interactions are CP-violating, as they well may be, the constraint from the ε-parameter is that the effective Λ_ETC > 10^4 TeV. Such huge ETC mass scales imply tiny quark and lepton masses and ETC contributions to M_πT of at most a few GeV, in conflict with LEP searches for π_T at the Z⁰. Extended technicolor is a very ambitious proposal, requiring that quark and lepton masses and mixing angles arise from experimentally accessible interactions. If there exists a successful model, it would not only predict the masses and mixings of quarks and leptons (and technipions), it would explain why there are three families of each: they are the ones that fit into the ETC representations of the quarks, leptons, and technifermions. It should not be surprising that the construction of a successful model has proven to be very difficult. Walking technicolor Since quark and lepton masses are proportional to the bilinear technifermion condensate divided by the ETC mass scale squared, their tiny values can be avoided if the condensate is enhanced above the weak-TC estimate in Eq. (2). During the 1980s, several dynamical mechanisms were advanced to do this.
In 1981 Holdom suggested that, if α_TC(μ) evolves to a nontrivial fixed point in the ultraviolet, with a large positive anomalous dimension γ_m for T̄T, realistic quark and lepton masses could arise with Λ_ETC large enough to suppress ETC-induced mixing. However, no example of a nontrivial ultraviolet fixed point in a four-dimensional gauge theory has been constructed. In 1985 Holdom analyzed a technicolor theory in which a "slowly varying" α_TC(μ) was envisioned. His focus was to separate the chiral breaking and confinement scales, but he also noted that such a theory could enhance ⟨T̄T⟩_ETC and thus allow the ETC scale to be raised. In 1986 Akiba and Yanagida also considered enhancing quark and lepton masses, by simply assuming that α_TC is constant and strong all the way up to the ETC scale. In the same year Yamawaki, Bando, and Matumoto again imagined an ultraviolet fixed point in a non-asymptotically free theory to enhance the technifermion condensate. In 1986 Appelquist, Karabali and Wijewardhana discussed the enhancement of fermion masses in an asymptotically free technicolor theory with a slowly running, or "walking", gauge coupling. The slowness arose from the screening effect of a large number of technifermions, with the analysis carried out through two-loop perturbation theory. In 1987 Appelquist and Wijewardhana explored this walking scenario further. They took the analysis to three loops, noted that the walking can lead to a power law enhancement of the technifermion condensate, and estimated the resultant quark, lepton, and technipion masses. The condensate enhancement arises because the associated technifermion mass decreases slowly, roughly linearly, as a function of its renormalization scale. This corresponds to the condensate anomalous dimension γ_m in Eq. (3) approaching unity (see below). In the 1990s, the idea emerged more clearly that walking is naturally described by asymptotically free gauge theories dominated in the infrared by an approximate fixed point. Unlike the speculative proposal of ultraviolet fixed points, fixed points in the infrared are known to exist in asymptotically free theories, arising at two loops in the beta function provided that the fermion count N_f is large enough. This has been known since the first two-loop computation in 1974 by Caswell. If N_f is close to the value at which asymptotic freedom is lost, the resultant infrared fixed point is parametrically weak and reliably accessible in perturbation theory. This weak-coupling limit was explored by Banks and Zaks in 1982. The fixed-point coupling α_IR becomes stronger as N_f is reduced from that value. Below some critical value N_f^c the coupling becomes strong enough (α_IR > α_χSB) to break spontaneously the massless technifermions' chiral symmetry. Since the analysis must typically go beyond two-loop perturbation theory, the definition of the running coupling α_TC(μ), its fixed point value α_IR, and the strength α_χSB necessary for chiral symmetry breaking depend on the particular renormalization scheme adopted. For N_f just below N_f^c, the evolution of α_TC(μ) is governed by the infrared fixed point and it will evolve slowly (walk) for a range of momenta above the breaking scale Λ_TC. To overcome the suppression of the masses of first and second generation quarks involved in K⁰–K̄⁰ mixing, this range must extend almost to their ETC scale, of order 1000 TeV. Cohen and Georgi argued that γ_m = 1 is the signal of spontaneous chiral symmetry breaking, i.e., that γ_m(α_χSB) = 1. Therefore, in the walking-TC region, γ_m ≈ 1 and, from Eqs.
(2) and (3), the light quark masses are enhanced approximately by the factor Λ_ETC/Λ_TC. The idea that α_TC(μ) walks for a large range of momenta when α_IR lies just above α_χSB was suggested by Lane and Ramana. They made an explicit model, discussed the walking that ensued, and used it in their discussion of walking technicolor phenomenology at hadron colliders. This idea was developed in some detail by Appelquist, Terning, and Wijewardhana. Combining a perturbative computation of the infrared fixed point with an approximation of α_χSB based on the Schwinger–Dyson equation, they estimated the critical value N_f^c and explored the resultant electroweak physics. Since the 1990s, most discussions of walking technicolor have been in the framework of theories assumed to be dominated in the infrared by an approximate fixed point. Various models have been explored, some with the technifermions in the fundamental representation of the gauge group and some employing higher representations. The possibility that the technicolor condensate can be enhanced beyond what is discussed in the walking literature has also been considered recently by Luty and Okui under the name "conformal technicolor". They envision an infrared stable fixed point, but with a very large anomalous dimension for the operator T̄T. It remains to be seen whether this can be realized, for example, in the class of theories currently being examined using lattice techniques. Top quark mass The enhancement described above for walking technicolor may not be sufficient to generate the measured top quark mass, even for an ETC scale as low as a few TeV. However, this problem could be addressed if the effective four-technifermion coupling resulting from ETC gauge boson exchange is strong and tuned just above a critical value. The analysis of this strong-ETC possibility is that of a Nambu–Jona–Lasinio model with an additional (technicolor) gauge interaction. The technifermion masses are small compared to the ETC scale (the cutoff on the effective theory), but nearly constant out to this scale, leading to a large top quark mass. No fully realistic ETC theory for all quark masses has yet been developed incorporating these ideas. A related study was carried out by Miransky and Yamawaki. A problem with this approach is that it involves some degree of parameter fine-tuning, in conflict with technicolor's guiding principle of naturalness. A large body of closely related work in which the Higgs is a composite state, composed of top and anti-top quarks, comprises the top quark condensate, topcolor and top-color-assisted technicolor models, in which new strong interactions are ascribed to the top quark and other third-generation fermions. Technicolor on the lattice Lattice gauge theory is a non-perturbative method applicable to strongly interacting technicolor theories, allowing first-principles exploration of walking and conformal dynamics. In 2007, Catterall and Sannino used lattice gauge theory to study SU(2) gauge theories with two flavors of Dirac fermions in the symmetric representation, finding evidence of conformality that has been confirmed by subsequent studies. As of 2010, the situation for SU(3) gauge theory with fermions in the fundamental representation is not as clear-cut. In 2007, Appelquist, Fleming, and Neil reported evidence that a non-trivial infrared fixed point develops in such theories when there are twelve flavors, but not when there are eight.
While some subsequent studies confirmed these results, others reported different conclusions, depending on the lattice methods used, and there is not yet consensus. Further lattice studies exploring these issues, as well as considering the consequences of these theories for precision electroweak measurements, are underway by several research groups. Technicolor phenomenology Any framework for physics beyond the Standard Model must conform with precision measurements of the electroweak parameters. Its consequences for physics at existing and future high-energy hadron colliders, and for the dark matter of the universe, must also be explored. Precision electroweak tests In 1990, the phenomenological parameters S, T, and U were introduced by Peskin and Takeuchi to quantify contributions to electroweak radiative corrections from physics beyond the Standard Model. They have a simple relation to the parameters of the electroweak chiral Lagrangian. The Peskin–Takeuchi analysis was based on the general formalism for weak radiative corrections developed by Kennedy, Lynn, Peskin and Stuart, and alternate formulations also exist. The S, T, and U parameters describe corrections to the electroweak gauge boson propagators from physics beyond the Standard Model. They can be written in terms of polarization functions of electroweak currents and their spectral representation, in such a way that only new, beyond-standard-model physics is included. The quantities are calculated relative to a minimal Standard Model with some chosen reference mass of the Higgs boson, taken to range from the experimental lower bound of 117 GeV to 1000 GeV where its width becomes very large. For these parameters to describe the dominant corrections to the Standard Model, the mass scale of the new physics must be much greater than M_W and M_Z, and the coupling of quarks and leptons to the new particles must be suppressed relative to their coupling to the gauge bosons. This is the case with technicolor, so long as the lightest technivector mesons, ρ_T and a_T, are heavier than 200–300 GeV. The S-parameter is sensitive to all new physics at the TeV scale, while T is a measure of weak-isospin breaking effects. The U-parameter is generally not useful; most new-physics theories, including technicolor theories, give negligible contributions to it. The S and T parameters are determined by global fit to experimental data including Z-pole data from LEP at CERN, top quark and W-mass measurements at Fermilab, and measured levels of atomic parity violation. The resultant bounds on these parameters are given in the Review of Particle Properties. Assuming U = 0, the S and T parameters are small and, in fact, consistent with zero, with central values quoted for a reference Higgs mass of 117 GeV and a known shift in the central values when the Higgs mass is increased to 300 GeV. These values place tight restrictions on beyond-standard-model theories – when the relevant corrections can be reliably computed. The S parameter estimated in QCD-like technicolor theories is significantly greater than the experimentally allowed value. The computation was done assuming that the spectral integral for S is dominated by the lightest ρ_T and a_T resonances, or by scaling effective Lagrangian parameters from QCD. In walking technicolor, however, the physics at the TeV scale and beyond must be quite different from that of QCD-like theories. In particular, the vector and axial-vector spectral functions cannot be dominated by just the lowest-lying resonances.
It is unknown whether higher energy contributions to S are a tower of identifiable ρ_T and a_T states or a smooth continuum. It has been conjectured that ρ_T and a_T partners could be more nearly degenerate in walking theories (approximate parity doubling), reducing their contribution to S. Lattice calculations are underway or planned to test these ideas and obtain reliable estimates of S in walking theories. The restriction on the T-parameter poses a problem for the generation of the top-quark mass in the ETC framework. The enhancement from walking can allow the associated ETC scale to be as large as a few TeV, but – since the ETC interactions must be strongly weak-isospin breaking to allow for the large top-bottom mass splitting – the contribution to the T parameter, as well as the rate for the decay Z → b̄b, could be too large. Hadron collider phenomenology Early studies generally assumed the existence of just one electroweak doublet of technifermions, or of one techni-family including one doublet each of color-triplet techniquarks and color-singlet technileptons (four electroweak doublets in total). The number N_D of electroweak doublets determines the technipion decay constant needed to produce the correct electroweak scale, as F_T = F_EW/√N_D = 246 GeV/√N_D. In the minimal, one-doublet model, three Goldstone bosons (technipions, π_T) have decay constant F_T = F_EW = 246 GeV and are eaten by the electroweak gauge bosons. The most accessible collider signal is the production through quark-antiquark annihilation in a hadron collider of spin-one technivectors ρ_T, and their subsequent decay into a pair of longitudinally polarized weak bosons, W_L± and Z_L⁰. At an expected mass of 1.5–2.0 TeV and width of 300–400 GeV, such ρ_T's would be difficult to discover at the LHC. A one-family model has a large number of physical technipions, with F_T = F_EW/√4 = 123 GeV. There is a collection of correspondingly lower-mass color-singlet and octet technivectors decaying into technipion pairs. The π_T's are expected to decay to the heaviest possible quark and lepton pairs. Despite their lower masses, these technivectors are wider than in the minimal model, and the backgrounds to the π_T decays are likely to be insurmountable at a hadron collider. This picture changed with the advent of walking technicolor. A walking gauge coupling occurs if α_χSB lies just below the IR fixed point value α_IR, which requires either a large number of electroweak doublets in the fundamental representation of the gauge group or a few doublets in higher-dimensional TC representations. In the latter case, the constraints on ETC representations generally imply other technifermions in the fundamental representation as well. In either case, there are technipions π_T with decay constant F_T ≪ F_EW, so that the lightest technivectors accessible at the LHC – ρ_T, ω_T, a_T (with I^G J^PC = 1+ 1−−, 0− 1−−, 1− 1++) – have masses well below a TeV. The class of theories with many technifermions, and thus small F_T, is called low-scale technicolor. A second consequence of walking technicolor concerns the decays of the spin-one technihadrons. Since technipion masses are proportional to the enhanced condensate (see Eq. (4)), walking enhances them much more than it does other technihadron masses. Thus, it is very likely that the lightest M_ρT < 2M_πT and that the two and three-π_T decay channels of the light technivectors are closed. This further implies that these technivectors are very narrow. Their most probable two-body channels are W_Lπ_T, W_LW_L, γπ_T and γW_L. The coupling of the lightest technivectors to W_L is proportional to F_T/F_EW.
Thus, all their decay rates are suppressed by powers of F_T/F_EW or the fine-structure constant, giving total widths of a few GeV (for ρ_T) to a few tenths of a GeV (for ω_T and a_T). A more speculative consequence of walking technicolor is motivated by consideration of its contribution to the S-parameter. As noted above, the usual assumptions made to estimate S_TC are invalid in a walking theory. In particular, the spectral integrals used to evaluate S_TC cannot be dominated by just the lowest-lying ρ_T and a_T and, if S_TC is to be small, the masses and weak-current couplings of the ρ_T and a_T could be more nearly equal than they are in QCD. Low-scale technicolor phenomenology, including the possibility of a more parity-doubled spectrum, has been developed into a set of rules and decay amplitudes. An April 2011 announcement of an excess in jet pairs produced in association with a W boson measured at the Tevatron has been interpreted by Eichten, Lane and Martin as a possible signal of the technipion of low-scale technicolor. The general scheme of low-scale technicolor makes little sense if the limit on M_ρT is pushed past about 700 GeV. The LHC should be able to discover it or rule it out. Searches there involving decays to technipions and thence to heavy quark jets are hampered by backgrounds from t̄t production; its rate is 100 times larger than that at the Tevatron. Consequently, the discovery of low-scale technicolor at the LHC relies on all-leptonic final-state channels with favorable signal-to-background ratios. Dark matter Technicolor theories naturally contain dark matter candidates. Almost certainly, models can be built in which the lowest-lying technibaryon, a technicolor-singlet bound state of technifermions, is stable enough to survive the evolution of the universe. If the technicolor theory is low-scale (F_T ≪ F_EW), the baryon's mass should be no more than 1–2 TeV. If not, it could be much heavier. The technibaryon must be electrically neutral and satisfy constraints on its abundance. Given the limits on spin-independent dark-matter-nucleon cross sections from dark-matter search experiments for the masses of interest, it may have to be electroweak neutral (weak isospin I = 0) as well. These considerations suggest that the "old" technicolor dark matter candidates may be difficult to produce at the LHC. A different class of technicolor dark matter candidates light enough to be accessible at the LHC was introduced by Francesco Sannino and his collaborators. These states are pseudo Goldstone bosons possessing a global charge that makes them stable against decay. See also Higgsless model Topcolor Top quark condensate Infrared fixed point References Mass Electroweak theory Physics beyond the Standard Model
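As a quick plausibility check (not part of the original article), the following sketch reproduces two order-of-magnitude estimates quoted above, using the naive-dimensional-analysis condensate ⟨T̄T⟩_TC ≈ 4πF³ and the vacuum-saturation assumption stated in the text; all numbers are rough.

import math

# Rough numerical check of two estimates quoted above (all inputs in GeV).
F = 246.0                            # technipion decay constant, minimal model
condensate = 4 * math.pi * F**3      # <TT>_TC ~ 4*pi*F^3 (naive dimensional analysis)

# ETC scale needed for m_q ~ 1 GeV via m_q ~ <TT>_TC / Lambda_ETC^2 (Eq. 2):
m_q = 1.0
Lambda_ETC = math.sqrt(condensate / m_q)
print(f"Lambda_ETC ~ {Lambda_ETC / 1e3:.1f} TeV")   # ~13.7 TeV, cf. "about 15 TeV"

# ETC contribution to the technipion mass via Eq. (4) with vacuum saturation,
# M_piT ~ <TT>_TC / (F * Lambda_ETC):
M_piT = condensate / (F * Lambda_ETC)
print(f"M_piT ~ {M_piT:.0f} GeV")                   # ~56 GeV, cf. "about 50 GeV"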
Technicolor (physics)
[ "Physics", "Mathematics" ]
6,724
[ "Scalar physical quantities", "Physical phenomena", "Physical quantities", "Quantity", "Mass", "Unsolved problems in physics", "Electroweak theory", "Size", "Particle physics", "Fundamental interactions", "Wikipedia categories named after physical quantities", "Physics beyond the Standard Mode...
296,060
https://en.wikipedia.org/wiki/Vacuum%20expectation%20value
In quantum field theory the vacuum expectation value (also called condensate or simply VEV) of an operator is its average or expectation value in the vacuum. The vacuum expectation value of an operator O is usually denoted by ⟨O⟩. One of the most widely used examples of an observable physical effect that results from the vacuum expectation value of an operator is the Casimir effect. This concept is important for working with correlation functions in quantum field theory. It is also important in spontaneous symmetry breaking. Examples are: The Higgs field has a vacuum expectation value of 246 GeV. This nonzero value underlies the Higgs mechanism of the Standard Model. This value is given by v = 2M_W/g = (√2 G_F)^(−1/2) ≈ 246 GeV, where M_W is the mass of the W boson, G_F the reduced Fermi constant, and g the weak isospin coupling, in natural units. It is also near the limit of the most massive nuclei, at v = 264.3 Da. The chiral condensate in quantum chromodynamics, about a factor of a thousand smaller than the above, gives a large effective mass to quarks, and distinguishes between phases of quark matter. This underlies the bulk of the mass of most hadrons. The gluon condensate in quantum chromodynamics may also be partly responsible for masses of hadrons. The observed Lorentz invariance of space-time allows only the formation of condensates which are Lorentz scalars and have vanishing charge. Thus, fermion condensates must be of the form ⟨ψ̄ψ⟩, where ψ is the fermion field. Similarly a tensor field, G_μν, can only have a scalar expectation value such as ⟨G_μν G^μν⟩. In some vacua of string theory, however, non-scalar condensates are found. If these describe our universe, then Lorentz symmetry violation may be observable. See also Correlation function (quantum field theory) Dark energy Spontaneous symmetry breaking Vacuum energy Wightman axioms References External links Quantum field theory Standard Model
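The quoted 246 GeV can be checked directly from the Fermi constant; a one-line computation (the PDG value of G_F is assumed):

import math

# Higgs VEV from the Fermi constant, v = (sqrt(2) * G_F)^(-1/2), natural units.
G_F = 1.1663787e-5                 # Fermi constant in GeV^-2 (PDG value)
v = (math.sqrt(2) * G_F) ** -0.5
print(f"v = {v:.2f} GeV")          # ~246.22 GeV, the value quoted above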
Vacuum expectation value
[ "Physics" ]
407
[ "Standard Model", "Quantum field theory", "Quantum mechanics", "Particle physics", "Quantum physics stubs" ]
296,077
https://en.wikipedia.org/wiki/Instanton
An instanton (or pseudoparticle) is a notion appearing in theoretical and mathematical physics. An instanton is a classical solution to equations of motion with a finite, non-zero action, either in quantum mechanics or in quantum field theory. More precisely, it is a solution to the equations of motion of the classical field theory on a Euclidean spacetime. In such quantum theories, solutions to the equations of motion may be thought of as critical points of the action. The critical points of the action may be local maxima of the action, local minima, or saddle points. Instantons are important in quantum field theory because: they appear in the path integral as the leading quantum corrections to the classical behavior of a system, and they can be used to study the tunneling behavior in various systems such as a Yang–Mills theory. Relevant to dynamics, families of instantons permit instantons, i.e. different critical points of the equation of motion, to be related to one another. In physics instantons are particularly important because the condensation of instantons (and noise-induced anti-instantons) is believed to be the explanation of the noise-induced chaotic phase known as self-organized criticality. Mathematics Mathematically, a Yang–Mills instanton is a self-dual or anti-self-dual connection in a principal bundle over a four-dimensional Riemannian manifold that plays the role of physical space-time in non-abelian gauge theory. Instantons are topologically nontrivial solutions of Yang–Mills equations that absolutely minimize the energy functional within their topological type. The first such solutions were discovered in the case of four-dimensional Euclidean space compactified to the four-dimensional sphere, and turned out to be localized in space-time, prompting the names pseudoparticle and instanton. Yang–Mills instantons have been explicitly constructed in many cases by means of twistor theory, which relates them to algebraic vector bundles on algebraic surfaces, and via the ADHM construction, or hyperkähler reduction (see hyperkähler manifold), a geometric invariant theory procedure. The groundbreaking work of Simon Donaldson, for which he was later awarded the Fields medal, used the moduli space of instantons over a given four-dimensional differentiable manifold as a new invariant of the manifold that depends on its differentiable structure and applied it to the construction of homeomorphic but not diffeomorphic four-manifolds. Many methods developed in studying instantons have also been applied to monopoles. This is because magnetic monopoles arise as solutions of a dimensional reduction of the Yang–Mills equations. Quantum mechanics An instanton can be used to calculate the transition probability for a quantum mechanical particle tunneling through a potential barrier. One example of a system with an instanton effect is a particle in a double-well potential. In contrast to a classical particle, there is non-vanishing probability that it crosses a region of potential energy higher than its own energy. Motivation of considering instantons Consider the quantum mechanics of a single particle moving in the double-well potential V(x) = (1/4)(x² − 1)² (a convenient normalization). The potential energy takes its minimal value at x = ±1, and these are called classical minima because the particle tends to lie in one of them in classical mechanics. There are two lowest energy states in classical mechanics. In quantum mechanics, we solve the Schrödinger equation to identify the energy eigenstates.
If we do this, we will find only the unique lowest-energy state instead of two states. The ground-state wave function localizes at both of the classical minima instead of only one of them because of the quantum interference or quantum tunneling. Instantons are the tool to understand why this happens within the semi-classical approximation of the path-integral formulation in Euclidean time. We will first see this by using the WKB approximation that approximately computes the wave function itself, and will move on to introduce instantons by using the path integral formulation. WKB approximation One way to calculate this probability is by means of the semi-classical WKB approximation, which requires the value of ħ to be small. The time independent Schrödinger equation for the particle reads −(ħ²/2m) d²ψ/dx² + V(x)ψ = Eψ. If the potential were constant, the solution would be a plane wave, up to a proportionality factor, ψ ∝ exp(±ikx) with k = √(2m(E − V))/ħ. This means that if the energy of the particle is smaller than the potential energy, one obtains an exponentially decreasing function. The associated tunneling amplitude is proportional to exp(−(1/ħ) ∫ from a to b of √(2m(V(x) − E)) dx), where a and b are the beginning and endpoint of the tunneling trajectory. Path integral interpretation via instantons Alternatively, the use of path integrals allows an instanton interpretation and the same result can be obtained with this approach. In path integral formulation, the transition amplitude can be expressed as ⟨x_f|e^(−iHt/ħ)|x_i⟩ = ∫ d[x(t)] e^(iS[x]/ħ). Following the process of Wick rotation (analytic continuation) to Euclidean spacetime (it → τ), one gets ⟨x_f|e^(−Hτ/ħ)|x_i⟩ = ∫ d[x(τ)] e^(−S_E[x]/ħ), with the Euclidean action S_E = ∫ dτ [(1/2)(dx/dτ)² + V(x)]. The potential energy changes sign under the Wick rotation and the minima transform into maxima, so that the inverted potential exhibits two "hills" of maximal energy. Let us now consider the local minimum of the Euclidean action S_E with the double-well potential V(x) = (1/4)(x² − 1)², where the positions of the minima have been set to ±1 just for simplicity of computation. Since we want to know how the two classically lowest energy states are connected, let us set x_i = −1 and x_f = 1. For these boundary conditions, we can rewrite the Euclidean action as S_E = ∫ dτ (1/2)(dx/dτ − √(2V))² + ∫ √(2V) dx ≥ ∫ from −1 to 1 of √(2V(x)) dx. The above inequality is saturated by the solution of dx/dτ = √(2V(x)) with the conditions x(−∞) = −1 and x(+∞) = 1. Such solutions exist, and the solution takes the simple form x(τ) = tanh((τ − τ₀)/√2), where τ₀ is an arbitrary constant. Since this solution jumps from one classical vacuum x = −1 to the other classical vacuum x = 1 almost instantaneously around τ = τ₀, it is called an instanton. Explicit formula for double-well potential The explicit formula for the eigenenergies of the Schrödinger equation with double-well potential has been given by Müller–Kirsten, with derivation by both a perturbation method (plus boundary conditions) applied to the Schrödinger equation, and explicit derivation from the path integral (and WKB). Defining suitable parameters of the Schrödinger equation and the potential, the eigenvalues are found as a perturbation series supplemented by an exponentially small, nonperturbative splitting term. Clearly these eigenvalues are asymptotically degenerate, as expected as a consequence of the harmonic part of the potential. Results Results obtained from the mathematically well-defined Euclidean path integral may be Wick-rotated back and give the same physical results as would be obtained by appropriate treatment of the (potentially divergent) Minkowskian path integral.
As can be seen from this example, calculating the transition probability for the particle to tunnel through a classically forbidden region (V(x) > E) with the Minkowskian path integral corresponds to calculating the transition probability to tunnel through a classically allowed region (with potential −V(x)) in the Euclidean path integral (pictorially speaking – in the Euclidean picture – this transition corresponds to a particle rolling from one hill of a double-well potential standing on its head to the other hill). This classical solution of the Euclidean equations of motion is often named "kink solution" and is an example of an instanton. In this example, the two "vacua" (i.e. ground states) of the double-well potential turn into hills in the Euclideanized version of the problem. Thus, the instanton field solution of the (Euclidean, i. e., with imaginary time) (1 + 1)-dimensional field theory – first quantized quantum mechanical description – can be interpreted as a tunneling effect between the two vacua (ground states – higher states require periodic instantons) of the physical (1-dimensional space + real time) Minkowskian system. In the case of the double-well potential written above, the instanton, i.e. the solution of the Euclidean equation of motion d²x/dτ² = V′(x) (i.e. with vanishing Euclidean energy), is x(τ) = tanh((τ − τ₀)/√2), where τ is the Euclidean time. Note that a naïve perturbation theory around one of those two vacua alone (of the Minkowskian description) would never show this non-perturbative tunneling effect, dramatically changing the picture of the vacuum structure of this quantum mechanical system. In fact the naive perturbation theory has to be supplemented by boundary conditions, and these supply the nonperturbative effect, as is evident from the above explicit formula and analogous calculations for other potentials such as a cosine potential (cf. Mathieu function) or other periodic potentials (cf. e.g. Lamé function and spheroidal wave function) and irrespective of whether one uses the Schrödinger equation or the path integral. Therefore, the perturbative approach may not completely describe the vacuum structure of a physical system. This may have important consequences, for example, in the theory of "axions" where the non-trivial QCD vacuum effects (like the instantons) spoil the Peccei–Quinn symmetry explicitly and transform massless Nambu–Goldstone bosons into massive pseudo-Nambu–Goldstone ones. Periodic instantons In one-dimensional field theory or quantum mechanics one defines as "instanton" a field configuration which is a solution of the classical (Newton-like) equation of motion with Euclidean time and finite Euclidean action. In the context of soliton theory the corresponding solution is known as a kink. In view of their analogy with the behaviour of classical particles such configurations or solutions, as well as others, are collectively known as pseudoparticles or pseudoclassical configurations. The "instanton" (kink) solution is accompanied by another solution known as "anti-instanton" (anti-kink), and instanton and anti-instanton are distinguished by "topological charges" +1 and −1 respectively, but have the same Euclidean action. "Periodic instantons" are a generalization of instantons. In explicit form they are expressible in terms of Jacobian elliptic functions which are periodic functions (effectively generalisations of trigonometrical functions). In the limit of infinite period these periodic instantons – frequently known as "bounces", "bubbles" or the like – reduce to instantons.
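As a concrete check of the kink formulas above (in the same (1/4)(x² − 1)² normalization adopted here for illustration), one can verify the solution and compute its Euclidean action symbolically:

import sympy as sp

# Verify the double-well instanton in the normalization V(x) = (x**2 - 1)**2 / 4
# used above, and compute its Euclidean action.
tau, y = sp.symbols('tau y', real=True)
x = sp.tanh(tau / sp.sqrt(2))                 # candidate kink x(tau), with tau_0 = 0

V = (y**2 - 1)**2 / 4
eom = sp.diff(x, tau, 2) - sp.diff(V, y).subs(y, x)   # x'' - V'(x)
assert sp.simplify(eom) == 0                  # Euclidean equation of motion holds

# For a zero-energy solution, S_E reduces to the integral of (dx/dtau)^2:
S = sp.integrate(sp.diff(x, tau)**2, (tau, -sp.oo, sp.oo))
print(sp.simplify(S))                         # 2*sqrt(2)/3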
The stability of these pseudoclassical configurations can be investigated by expanding the Lagrangian defining the theory around the pseudoparticle configuration and then investigating the equation of small fluctuations around it. For all versions of quartic potentials (double-well, inverted double-well) and periodic (Mathieu) potentials these equations were discovered to be Lamé equations, see Lamé function. The eigenvalues of these equations are known and permit in the case of instability the calculation of decay rates by evaluation of the path integral. Instantons in reaction rate theory In the context of reaction rate theory, periodic instantons are used to calculate the rate of tunneling of atoms in chemical reactions. The progress of a chemical reaction can be described as the movement of a pseudoparticle on a high dimensional potential energy surface (PES). The thermal rate constant k can then be related to the imaginary part of the free energy F, k ∝ Im F, whereby F is obtained from the canonical partition function Z, which is calculated by taking the trace of the Boltzmann operator in the position representation. Using a Wick rotation and identifying the Euclidean time with ħβ = ħ/(k_B T), one obtains a path integral representation for the partition function in mass-weighted coordinates. The path integral is then approximated via a steepest descent integration, which takes into account only the contributions from the classical solutions and quadratic fluctuations around them. This yields an expression for the rate constant in mass-weighted coordinates involving the Euclidean actions of a periodic instanton and of the trivial solution of the pseudoparticle at rest, which represents the reactant state configuration. Inverted double-well formula As for the double-well potential one can derive the eigenvalues for the inverted double-well potential. In this case, however, the eigenvalues are complex. Defining parameters in analogy with the double-well case, the eigenvalues as given by Müller-Kirsten acquire an exponentially small imaginary part; this imaginary part agrees with the well known result of Bender and Wu. Quantum field theory In studying quantum field theory (QFT), the vacuum structure of a theory may draw attention to instantons. Just as a double-well quantum mechanical system illustrates, a naïve vacuum may not be the true vacuum of a field theory. Moreover, the true vacuum of a field theory may be an "overlap" of several topologically inequivalent sectors, so called "topological vacua". A well understood and illustrative example of an instanton and its interpretation can be found in the context of a QFT with a non-abelian gauge group, a Yang–Mills theory. For a Yang–Mills theory these inequivalent sectors can be (in an appropriate gauge) classified by the third homotopy group of SU(2) (whose group manifold is the 3-sphere S³). A certain topological vacuum (a "sector" of the true vacuum) is labelled by a topological invariant, the Pontryagin index. As the third homotopy group of S³ has been found to be the set of integers, there are infinitely many topologically inequivalent vacua, denoted by |N⟩, where N is their corresponding Pontryagin index. An instanton is a field configuration fulfilling the classical equations of motion in Euclidean spacetime, which is interpreted as a tunneling effect between these different topological vacua. It is again labelled by an integer number, its Pontryagin index, Q. One can imagine an instanton with index Q to quantify tunneling between topological vacua |N⟩ and |N + Q⟩.
If Q = 1, the configuration is named BPST instanton after its discoverers Alexander Belavin, Alexander Polyakov, Albert S. Schwarz and Yu. S. Tyupkin. The true vacuum of the theory is labelled by an "angle" theta and is an overlap of the topological sectors: |θ⟩ = Σ_N e^(iNθ) |N⟩. Gerard 't Hooft first performed the field theoretic computation of the effects of the BPST instanton in a theory coupled to fermions in 1976. He showed that zero modes of the Dirac equation in the instanton background lead to a non-perturbative multi-fermion interaction in the low energy effective action. Yang–Mills theory The classical Yang–Mills action on a principal bundle with structure group G, base M, connection A, and curvature (Yang–Mills field tensor) F is S_YM = ∫_M |F|² dvol, where dvol is the volume form on M. If the inner product on g, the Lie algebra of G in which F takes values, is given by the Killing form on g, then this may be denoted as ∫_M Tr(F ∧ ∗F), since F ∧ ∗F = ⟨F, F⟩ dvol. For example, in the case of the gauge group U(1), F will be the electromagnetic field tensor. From the principle of stationary action, the Yang–Mills equations follow. They are dF = 0 and d∗F = 0. The first of these is an identity, because dF = d²A = 0, but the second is a second-order partial differential equation for the connection A, and if the Minkowski current vector does not vanish, the zero on the rhs. of the second equation is replaced by the current J. But notice how similar these equations are; they differ by a Hodge star. Thus a solution to the simpler first order (non-linear) equation ∗F = ±F is automatically also a solution of the Yang–Mills equation. This simplification occurs on 4-manifolds with Riemannian metric, where ∗∗ = +1 on 2-forms, so that ∗ has eigenvalues ±1 there. Such solutions usually exist, although their precise character depends on the dimension and topology of the base space M, the principal bundle P, and the gauge group G. In nonabelian Yang–Mills theories, DF = 0 and D∗F = 0, where D is the exterior covariant derivative. Furthermore, the Bianchi identity DF = 0 is satisfied. In quantum field theory, an instanton is a topologically nontrivial field configuration in four-dimensional Euclidean space (considered as the Wick rotation of Minkowski spacetime). Specifically, it refers to a Yang–Mills gauge field A which approaches pure gauge at spatial infinity. This means the field strength vanishes at infinity. The name instanton derives from the fact that these fields are localized in space and (Euclidean) time – in other words, at a specific instant. The case of instantons on the two-dimensional space may be easier to visualise because it admits the simplest case of the gauge group, namely U(1), that is an abelian group. In this case the field A can be visualised as simply a vector field. An instanton is a configuration where, for example, the arrows point away from a central point (i.e., a "hedgehog" state). In Euclidean four dimensions, abelian instantons are impossible. The field configuration of an instanton is very different from that of the vacuum. Because of this instantons cannot be studied by using Feynman diagrams, which only include perturbative effects. Instantons are fundamentally non-perturbative. The Yang–Mills energy is given by E = (1/2) ∫ Tr(F ∧ ∗F), where ∗ is the Hodge dual. If we insist that the solutions to the Yang–Mills equations have finite energy, then the curvature of the solution at infinity (taken as a limit) has to be zero. This means that the Chern–Simons invariant can be defined at the 3-space boundary. This is equivalent, via Stokes' theorem, to taking the integral ∫ Tr(F ∧ F). This is a homotopy invariant and it tells us which homotopy class the instanton belongs to.
Since the integral of a nonnegative integrand is always nonnegative, 0 ≤ (1/2) ∫ Tr[(cos θ F + sin θ ∗F) ∧ ∗(cos θ F + sin θ ∗F)] = E + (1/2) sin 2θ ∫ Tr(F ∧ F) for all real θ. So, this means E ≥ (1/2) |∫ Tr(F ∧ F)|. If this bound is saturated, then the solution is a BPS state. For such states, either ∗F = F or ∗F = −F depending on the sign of the homotopy invariant. In the Standard Model instantons are expected to be present both in the electroweak sector and the chromodynamic sector, however, their existence has not yet been experimentally confirmed. Instanton effects are important in understanding the formation of condensates in the vacuum of quantum chromodynamics (QCD) and in explaining the mass of the so-called 'eta-prime particle', a Goldstone-boson which has acquired mass through the axial current anomaly of QCD. Note that there is sometimes also a corresponding soliton in a theory with one additional space dimension. Recent research on instantons links them to topics such as D-branes and Black holes and, of course, the vacuum structure of QCD. For example, in oriented string theories, a Dp brane is a gauge theory instanton in the world volume (p + 5)-dimensional U(N) gauge theory on a stack of N D(p + 4)-branes. Various numbers of dimensions Instantons play a central role in the nonperturbative dynamics of gauge theories. The kind of physical excitation that yields an instanton depends on the number of dimensions of the spacetime, but, surprisingly, the formalism for dealing with these instantons is relatively dimension-independent. In 4-dimensional gauge theories, as described in the previous section, instantons are gauge bundles with a nontrivial four-form characteristic class. If the gauge symmetry is a unitary group or special unitary group then this characteristic class is the second Chern class, which vanishes in the case of the gauge group U(1). If the gauge symmetry is an orthogonal group then this class is the first Pontrjagin class. In 3-dimensional gauge theories with Higgs fields, 't Hooft–Polyakov monopoles play the role of instantons. In his 1977 paper Quark Confinement and Topology of Gauge Theories, Alexander Polyakov demonstrated that instanton effects in 3-dimensional QED coupled to a scalar field lead to a mass for the photon. In 2-dimensional abelian gauge theories worldsheet instantons are magnetic vortices. They are responsible for many nonperturbative effects in string theory, playing a central role in mirror symmetry. In 1-dimensional quantum mechanics, instantons describe tunneling, which is invisible in perturbation theory. 4d supersymmetric gauge theories Supersymmetric gauge theories often obey nonrenormalization theorems, which restrict the kinds of quantum corrections which are allowed. Many of these theorems only apply to corrections calculable in perturbation theory and so instantons, which are not seen in perturbation theory, provide the only corrections to these quantities. Field theoretic techniques for instanton calculations in supersymmetric theories were extensively studied in the 1980s by multiple authors. Because supersymmetry guarantees the cancellation of fermionic vs. bosonic non-zero modes in the instanton background, the involved 't Hooft computation of the instanton saddle point reduces to an integration over zero modes. In N = 1 supersymmetric gauge theories instantons can modify the superpotential, sometimes lifting all of the vacua. In 1984, Ian Affleck, Michael Dine and Nathan Seiberg calculated the instanton corrections to the superpotential in their paper Dynamical Supersymmetry Breaking in Supersymmetric QCD.
More precisely, they were only able to perform the calculation when the theory contains one less flavor of chiral matter than the number of colors in the special unitary gauge group, because in the presence of fewer flavors an unbroken nonabelian gauge symmetry leads to an infrared divergence and in the case of more flavors the contribution is equal to zero. For this special choice of chiral matter, the vacuum expectation values of the matter scalar fields can be chosen to completely break the gauge symmetry at weak coupling, allowing a reliable semi-classical saddle point calculation to proceed. By then considering perturbations by various mass terms they were able to calculate the superpotential in the presence of arbitrary numbers of colors and flavors, valid even when the theory is no longer weakly coupled. In N = 2 supersymmetric gauge theories the superpotential receives no quantum corrections. However the correction to the metric of the moduli space of vacua from instantons was calculated in a series of papers. First, the one instanton correction was calculated by Nathan Seiberg in Supersymmetry and Nonperturbative beta Functions. The full set of corrections for SU(2) Yang–Mills theory was calculated by Nathan Seiberg and Edward Witten in "Electric–magnetic duality, monopole condensation, and confinement in N=2 supersymmetric Yang–Mills theory", in the process creating a subject that is today known as Seiberg–Witten theory. They extended their calculation to SU(2) gauge theories with fundamental matter in Monopoles, duality and chiral symmetry breaking in N=2 supersymmetric QCD. These results were later extended for various gauge groups and matter contents, and the direct gauge theory derivation was also obtained in most cases. For gauge theories with gauge group U(N) the Seiberg–Witten geometry has been derived from gauge theory using Nekrasov partition functions in 2003 by Nikita Nekrasov and Andrei Okounkov and independently by Hiraku Nakajima and Kota Yoshioka. In N = 4 supersymmetric gauge theories the instantons do not lead to quantum corrections for the metric on the moduli space of vacua. Explicit solutions on R4 An ansatz given by Corrigan and Fairlie provides a solution to the anti-self dual Yang–Mills equations with gauge group SU(2) from any harmonic function on ℝ⁴. The ansatz gives explicit expressions for the gauge field and can be used to construct solutions with arbitrarily large instanton number. Defining the antisymmetric su(2)-valued objects σ_μν built from the Pauli matrices, where Greek indices run from 1 to 4, Latin indices run from 1 to 3, and the σ_a form a basis of su(2) satisfying [σ_a, σ_b] = 2iε_abc σ_c, the gauge field A_μ = σ_μν ∂^ν ln ρ is a solution as long as ρ is harmonic. In four dimensions, the fundamental solution to Laplace's equation is 1/|x − y|² for any fixed y. Superposing these gives solutions of the form ρ(x) = 1 + Σ_(i=1..N) λ_i/|x − x_i|² with positive constants λ_i, describing N-soliton solutions. All solutions of instanton number 1 or 2 are of this form, but for larger instanton number there are solutions not of this form. See also References and notes Notes Citations General Instantons in Gauge Theories, a compilation of articles on instantons, edited by Mikhail A. Shifman. Solitons and Instantons, R. Rajaraman (Amsterdam: North Holland, 1987). The Uses of Instantons, by Sidney Coleman in Proc. Int. School of Subnuclear Physics (Erice, 1977); and in Aspects of Symmetry p. 265, Sidney Coleman, Cambridge University Press, 1985; and in Instantons in Gauge Theories. Solitons, Instantons and Twistors, M. Dunajski, Oxford University Press. The Geometry of Four-Manifolds, S.K. Donaldson, P.B. Kronheimer, Oxford University Press, 1990.
External links Quantum mechanics Gauge theories Differential geometry Quantum chromodynamics Anomalies (physics)
Instanton
[ "Physics" ]
5,130
[ "Theoretical physics", "Quantum mechanics" ]
296,079
https://en.wikipedia.org/wiki/Bogomol%27nyi%E2%80%93Prasad%E2%80%93Sommerfield%20bound
In the classical bosonic sector of a supersymmetric field theory, the Bogomol'nyi–Prasad–Sommerfield (BPS) bound (named after Evgeny Bogomolny, M.K. Prasad, and Charles Sommerfield) provides a lower limit on the energy of static field configurations, depending on their topological charges or boundary conditions at spatial infinity. This bound manifests as a series of inequalities for solutions of the classical bosonic field equations. Saturating this bound, meaning the energy of the configuration equals the bound, leads to a simplified set of first-order partial differential equations known as the Bogomolny equations. Classical solutions that saturate the BPS bound are called "BPS states". These BPS states are not only important solutions within the classical bosonic theory but also play a crucial role in the full quantum supersymmetric theory, often corresponding to stable, non-perturbative states in both field theory and string theory. Their existence and properties are deeply connected to the underlying supersymmetry of the theory, even though the bound itself can be formulated within the bosonic sector alone. In theoretical physics, specifically in theories with extended supersymmetry, the BPS bound is a lower limit on the mass of a physical state in terms of its charges. States that saturate this bound are known as BPS states, and they have special properties, such as being invariant under some fraction of the supersymmetry transformations. The acronym BPS stands for Bogomol'nyi, Prasad, and Sommerfield, who first derived the bound in the context of magnetic monopoles in Yang-Mills theory in 1975. Overview Supersymmetry is a theoretical framework that relates bosons and fermions, particles with integer and half-integer spin, respectively. Extended supersymmetry involves multiple supersymmetry generators, denoted by $Q_\alpha^A$, where $\alpha$ is a spinor index and $A$ labels the different supersymmetry generators. The supersymmetry algebra includes anticommutators of these generators, which typically involve the Hamiltonian (energy operator) $H$, momentum operators $P_i$, and central charges $Z^{AB}$ and their conjugates $\bar{Z}_{AB}$. Central charges are operators that commute with all other operators in the supersymmetry algebra and are typically topological charges. The BPS bound arises from the positivity of the norm of states in the Hilbert space. Consider the following anticommutators from the supersymmetry algebra:
$$\{Q_\alpha^A,\, \bar{Q}_{\dot{\beta}\,B}\} = 2\,(\sigma^\mu)_{\alpha\dot{\beta}}\, P_\mu\, \delta^A{}_B, \qquad \{Q_\alpha^A,\, Q_\beta^B\} = 2\,\epsilon_{\alpha\beta}\, Z^{AB},$$
where $(\sigma^\mu)$ are the Pauli matrices (or their higher-dimensional generalizations) supplemented by the identity, $P_\mu$ is the four-momentum, and $Z^{AB}$ are the central charges. Taking the expectation value of these anticommutators in a physical state $|\psi\rangle$ (more precisely, of $\{\mathcal{R},\mathcal{R}^\dagger\}$ for a linear combination $\mathcal{R}$ of supercharges chosen so that the central charge enters with a definite phase), we obtain:
$$0 \leq \langle\psi|\,\{\mathcal{R},\,\mathcal{R}^\dagger\}\,|\psi\rangle \;\propto\; \left(M - |Z|\right)\langle\psi|\psi\rangle.$$
Since the norm of any state is non-negative, this leads to the inequality:
$$M \;\geq\; |Z|,$$
where $M$ is the mass of the state (in the rest frame, where $P_\mu = (M, 0, 0, 0)$) and $Z$ is a linear combination of the central charges. This inequality is the BPS bound. BPS states States that satisfy the BPS bound, i.e., $M = |Z|$, are called BPS states. They have the following important properties: Short multiplets: BPS states form shorter irreducible representations of the supersymmetry algebra compared to generic states. This is because some of the supersymmetry generators annihilate the BPS states, reducing the number of states in the multiplet. Stability: BPS states are often stable against quantum corrections. Their mass is protected from renormalization, meaning it does not change as one varies the parameters of the theory. This stability makes BPS states crucial for studying non-perturbative aspects of supersymmetric theories.
Supersymmetry preservation: BPS states preserve a fraction of the supersymmetry. Specifically, if a state saturates the BPS bound, some linear combination of the supersymmetry generators must annihilate the state:
$$\epsilon^\alpha_A\, Q_\alpha^A\, |\psi\rangle = 0$$
for some spinor $\epsilon$. The number of independent supersymmetry generators that annihilate the state determines the fraction of supersymmetry preserved. Examples BPS bounds and states appear in various contexts in theoretical physics: Magnetic Monopoles: The original BPS bound was derived for magnetic monopoles in Yang-Mills theory. The mass of the monopole is bounded by its magnetic charge. Solitons and D-branes: In string theory, BPS states include solitons like D-branes. The mass of a D-brane is determined by its tension, which is related to its charge under the Ramond-Ramond fields. Supersymmetric Gauge Theories: BPS states play a crucial role in understanding the dynamics of supersymmetric gauge theories, such as N=4 Super Yang-Mills theory. They provide insights into the non-perturbative behavior of these theories and are related to the concept of S-duality. Black Holes: In supergravity and string theory, extremal black holes can be BPS states. Their mass is related to their charge and angular momentum. Significance The BPS bound and BPS states are powerful tools for studying supersymmetric theories. They provide a window into the non-perturbative regime of these theories and allow for exact calculations of certain quantities. BPS states have played a crucial role in the development of string theory, the AdS/CFT correspondence, and our understanding of black hole physics. Example Monopoles and dyons The concept of the BPS bound first arose in the study of magnetic monopoles in non-abelian gauge theories. Specifically, it was shown that the mass of a 't Hooft-Polyakov monopole and a Julia-Zee dyon is bounded from below by a quantity proportional to its topological charge. Solutions that saturate this bound are called BPS monopoles or BPS dyons, and they possess special properties and play a significant role in both classical and quantum field theory. 't Hooft-Polyakov monopole The 't Hooft-Polyakov monopole is a static, finite-energy solution in a non-abelian gauge theory, typically SU(2), spontaneously broken to U(1) by a scalar Higgs field in the adjoint representation. The Lagrangian for the bosonic sector is given by:
$$\mathcal{L} = -\frac{1}{4}\, F_{\mu\nu}^a F^{a\,\mu\nu} + \frac{1}{2}\, (D_\mu \Phi)^a (D^\mu \Phi)^a - V(\Phi),$$
where $F_{\mu\nu}^a$ is the field strength tensor, $D_\mu$ is the covariant derivative, $\Phi$ is the Higgs field in the adjoint representation, and $V(\Phi)$ is the Higgs potential, often taken to be $V(\Phi) = \frac{\lambda}{4}(\Phi^a \Phi^a - v^2)^2$, where $v$ is the vacuum expectation value of the Higgs field. The energy of a static field configuration is given by:
$$E = \int d^3x \left[\, \frac{1}{2}\, B_i^a B_i^a + \frac{1}{2}\, (D_i \Phi)^a (D_i \Phi)^a + V(\Phi) \,\right],$$
where $B_i^a = \frac{1}{2}\,\varepsilon_{ijk} F_{jk}^a$ is the magnetic field. In the limit where the Higgs self-coupling $\lambda$ goes to zero (the BPS limit), it can be shown that the energy is bounded by the magnetic charge:
$$E \;\geq\; v\,|g|,$$
where $g$ is the magnetic charge, with $g = \frac{4\pi n}{e}$ for an integer winding number $n$. This inequality is the BPS bound for the monopole. The BPS bound is saturated when the following Bogomolny equations are satisfied:
$$B_i^a = \pm\, (D_i \Phi)^a.$$
Solutions to these first-order equations are the BPS monopoles, which have the minimal energy for a given magnetic charge. The choice of sign in the Bogomolny equation determines whether the monopole is a regular monopole or an antimonopole. Julia–Zee dyon The Julia–Zee dyon is a generalization of the 't Hooft-Polyakov monopole that also carries electric charge. This is achieved by adding a term proportional to $\theta\, F_{\mu\nu}^a \tilde{F}^{a\,\mu\nu}$ to the Lagrangian, where $\theta$ is the vacuum angle and $\tilde{F}^{\mu\nu} = \frac{1}{2}\,\varepsilon^{\mu\nu\rho\sigma} F_{\rho\sigma}$ is the dual field strength tensor.
This term introduces a coupling between the electric and magnetic fields. The energy of a static dyon configuration is bounded by:
$$E \;\geq\; v\,\sqrt{q^2 + g^2},$$
where $q$ is the electric charge. This is the BPS bound for the dyon. The bound is saturated when the following generalized Bogomolny equations are satisfied:
$$B_i^a = \cos\gamma\; (D_i \Phi)^a, \qquad E_i^a = \sin\gamma\; (D_i \Phi)^a,$$
where $E_i^a = F_{0i}^a$ is the electric field and $\gamma$ is a constant angle fixed by the ratio of electric to magnetic charge ($\tan\gamma = q/g$). Solutions to these equations are the BPS dyons, which have the minimal energy for given electric and magnetic charges. Significance The BPS monopoles and dyons are important because they are stable, finite-energy solutions that saturate a classical energy bound. Their existence and properties are closely related to the topology of the gauge group and the Higgs field. Moreover, these classical solutions have quantum counterparts, and the BPS bound plays a crucial role in understanding the spectrum of the quantum theory. In supersymmetric extensions of the theory, BPS states correspond to short representations of the supersymmetry algebra and are protected from quantum corrections. Extremal Reissner–Nordström black holes The Reissner-Nordström metric describes the spacetime geometry around a spherically symmetric, electrically charged black hole. The metric is characterized by two parameters: the mass $M$ and the electric charge $Q$ of the black hole. The ADM mass, a concept from general relativity that defines the total mass-energy of an asymptotically flat spacetime, is simply $M$ for the Reissner-Nordström solution. The event horizon(s) of the Reissner-Nordström black hole are located at the radial coordinates where the metric function vanishes. This leads to the equation (in geometrized units, $G = c = 1$):
$$1 - \frac{2M}{r} + \frac{Q^2}{r^2} = 0.$$
The solutions are given by:
$$r_\pm = M \pm \sqrt{M^2 - Q^2}.$$
When $M > |Q|$, there are two horizons: an outer event horizon at $r_+$ and an inner Cauchy horizon at $r_-$. When $M < |Q|$, there are no horizons, and the singularity is naked, which is generally considered unphysical. The critical case, where $M = |Q|$, is known as the extremal Reissner-Nordström black hole, and in this case, the two horizons coincide:
$$r_+ = r_- = M.$$
In the context of supergravity theories (supersymmetric extensions of general relativity), extremal Reissner-Nordström black holes can be interpreted as BPS states. This connection arises from the fact that the extremality condition can be viewed as a saturation of a classical inequality, analogous to the BPS bound in supersymmetric field theories. Classical inequalities and energy conditions In general relativity, various energy conditions are imposed on the stress-energy tensor to ensure physically reasonable behavior of matter and energy. One such condition is the weak energy condition, which states that for any timelike vector $t^\mu$, the following inequality holds:
$$T_{\mu\nu}\, t^\mu t^\nu \;\geq\; 0.$$
This essentially means that the energy density measured by any observer is non-negative. For the electromagnetic field, which is the source of the Reissner-Nordström black hole's charge, the weak energy condition translates to:
$$\frac{1}{2}\left(E^2 + B^2\right) \;\geq\; 0,$$
where $E$ and $B$ are electric and magnetic field strengths. For the Reissner-Nordström solution, the total energy, which is the ADM mass $M$, can be decomposed into contributions from the gravitational field and the electromagnetic field. It can be shown that the electromagnetic contribution is precisely equal to $|Q|$, the absolute value of the charge. The weak energy condition then implies that the gravitational contribution must be non-negative. This leads to the inequality:
$$M \;\geq\; |Q|.$$
This inequality is strikingly similar to the BPS bound, with the ADM mass playing the role of the energy and the electric charge playing the role of the central charge.
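As a quick numerical illustration of the horizon formula above, the following Python sketch computes the two horizon radii in geometrized units (G = c = 1); the function name and the sample values are illustrative only:

import math

def rn_horizons(M, Q):
    # Horizon radii of a Reissner-Nordstrom black hole in geometrized
    # units (G = c = 1): the two roots of 1 - 2M/r + Q^2/r^2 = 0.
    disc = M**2 - Q**2
    if disc < 0:
        return None              # M < |Q|: no horizons (naked singularity)
    root = math.sqrt(disc)
    return M + root, M - root    # (r_plus, r_minus)

print(rn_horizons(1.0, 0.5))     # sub-extremal: (about 1.866, 0.134)
print(rn_horizons(1.0, 1.0))     # extremal M = |Q|: horizons coincide at (1.0, 1.0)

The extremal case, where the two printed radii coincide, is exactly the configuration discussed next.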
The extremal Reissner-Nordström black hole, with $M = |Q|$, saturates this inequality, making it a classical analog of a BPS state. Supersymmetry and extremal black holes In the context of supergravity, the extremal Reissner-Nordström black hole is not just an analog but a true BPS state. The supersymmetry algebra in this theory includes central charges that are proportional to the electric and magnetic charges of the black hole. The BPS bound then becomes:
$$M \;\geq\; \sqrt{Q^2 + P^2},$$
where $P$ is the magnetic charge (which is zero for the Reissner-Nordström solution we are considering). The extremal black hole, with $M = |Q|$, saturates this bound and preserves half of the supersymmetry. This means that some of the supersymmetry transformations leave the black hole solution invariant. The fact that extremal black holes can be BPS states has profound implications. It suggests a deep connection between gravity, supersymmetry, and the quantum nature of spacetime. BPS black holes are stable against quantum corrections and provide valuable insights into the microscopic structure of black holes and the nature of quantum gravity. See also Supersymmetry Central charge Magnetic monopole D-brane Extremal black hole References Partial differential equations Quantum field theory Solitons
Bogomol'nyi–Prasad–Sommerfield bound
[ "Physics" ]
2,522
[ "Quantum field theory", "Quantum mechanics" ]
296,428
https://en.wikipedia.org/wiki/Admittance
In electrical engineering, admittance is a measure of how easily a circuit or device will allow a current to flow. It is defined as the reciprocal of impedance, analogous to how conductance and resistance are defined. The SI unit of admittance is the siemens (symbol S); the older, synonymous unit is mho, and its symbol is ℧ (an upside-down uppercase omega Ω). Oliver Heaviside coined the term admittance in December 1887. Heaviside used $Y$ to represent the magnitude of admittance, but it quickly became the conventional symbol for admittance itself through the publications of Charles Proteus Steinmetz. Heaviside probably chose $Y$ simply because it is next to $Z$ in the alphabet, the conventional symbol for impedance. Admittance $Y$, measured in siemens, is defined as the inverse of impedance $Z$, measured in ohms:
$$Y = \frac{1}{Z}.$$
Resistance is a measure of the opposition of a circuit to the flow of a steady current, while impedance takes into account not only the resistance but also dynamic effects (known as reactance). Likewise, admittance is not only a measure of the ease with which a steady current can flow, but also the dynamic effects of the material's susceptance to polarization:
$$Y = G + jB,$$
where $Y$ is the admittance (siemens); $G$ is the conductance (siemens); $B$ is the susceptance (siemens); and $j$ is the imaginary unit ($j^2 = -1$). The dynamic effects of the material's susceptance relate to the universal dielectric response, the power law scaling of a system's admittance with frequency under alternating current conditions. Conversion from impedance to admittance The impedance, $Z$, is composed of real and imaginary parts,
$$Z = R + jX,$$
where $R$ is the resistance (ohms); and $X$ is the reactance (ohms). Admittance, just like impedance, is a complex number, made up of a real part (the conductance, $G$), and an imaginary part (the susceptance, $B$), thus:
$$Y = G + jB,$$
where $G$ (conductance) and $B$ (susceptance) are given by:
$$G = \frac{R}{R^2 + X^2}, \qquad B = -\frac{X}{R^2 + X^2}.$$
The magnitude and phase of the admittance are given by:
$$|Y| = \sqrt{G^2 + B^2} = \frac{1}{\sqrt{R^2 + X^2}}, \qquad \angle Y = \arctan\left(\frac{B}{G}\right) = \arctan\left(\frac{-X}{R}\right),$$
where $G$ is the conductance, measured in siemens; and $B$ is the susceptance, also measured in siemens. Note that (as shown above) the signs of reactances become reversed in the admittance domain; i.e. capacitive susceptance is positive and inductive susceptance is negative. Shunt admittance in electrical power systems modeling In the context of electrical modeling of transformers and transmission lines, shunt components that provide paths of least resistance in certain models are generally specified in terms of their admittance. Each side of most transformer models contains shunt components which model magnetizing current and core losses. These shunt components can be referenced to the primary or secondary side. For simplified transformer analysis, admittance from shunt elements can be neglected. When shunt components have non-negligible effects on system operation, the shunt admittance must be considered. In the usual equivalent-circuit diagram, all shunt admittances are referred to the primary side. The real and imaginary components of the shunt admittance, conductance and susceptance, are represented by $G_c$ and $B_m$, respectively. Transmission lines can span hundreds of kilometers, over which the line's capacitance can affect voltage levels. For short-length transmission line analysis, which applies to lines shorter than about 80 km (50 miles), this capacitance can be ignored and shunt components are not necessary in the model.
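Returning to the impedance-to-admittance conversion above, it is easy to verify numerically; here is a minimal Python sketch using the language's built-in complex arithmetic (the sample values R = 3 Ω and X = 4 Ω are illustrative only):

# Convert an impedance Z = R + jX into an admittance Y = G + jB.
# Python writes the imaginary unit as 1j (electrical engineering's j).
R, X = 3.0, 4.0              # resistance and reactance, ohms (illustrative)
Z = complex(R, X)            # Z = 3 + 4j ohms
Y = 1 / Z                    # admittance, siemens
G, B = Y.real, Y.imag        # conductance and susceptance

# Cross-check against the closed-form expressions above:
assert abs(G - R / (R**2 + X**2)) < 1e-12      # G = R/(R^2 + X^2) = 0.12 S
assert abs(B - (-X / (R**2 + X**2))) < 1e-12   # B = -X/(R^2 + X^2) = -0.16 S
print(G, B, abs(Y))          # 0.12 -0.16 0.2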
Lines from about 80 to 250 km, generally considered to be in the medium-line category, contain a shunt admittance governed by
$$Y = y\,l = j\omega C\,l,$$
where $Y$ is the total shunt admittance; $y$ is the shunt admittance per unit length; $l$ is the length of the transmission line; and $C$ is the capacitance of the line. See also Nodal admittance matrix SI electromagnetism units Immittance References Physical quantities Electrical resistance and conductance
Admittance
[ "Physics", "Mathematics" ]
801
[ "Physical phenomena", "Physical quantities", "Quantity", "Wikipedia categories named after physical quantities", "Physical properties", "Electrical resistance and conductance" ]
296,435
https://en.wikipedia.org/wiki/Electrical%20susceptance
In electrical engineering, susceptance ($B$) is the imaginary part of admittance ($Y = G + jB$), where the real part is conductance ($G$). The reciprocal of admittance is impedance ($Z = R + jX$), where the imaginary part is reactance ($X$) and the real part is resistance ($R$). In SI units, susceptance is measured in siemens (S). Origin The term was coined by C.P. Steinmetz in an 1894 paper. In some sources Oliver Heaviside is given credit for coining the term, or with introducing the concept under the name permittance. This claim is mistaken according to Steinmetz's biographer. The term susceptance does not appear anywhere in Heaviside's collected works, and Heaviside used the term permittance to mean capacitance, not susceptance. Formula The general equation defining admittance is given by
$$Y = G + jB,$$
where $Y$ is the admittance, $G$ is the conductance, $B$ is the susceptance, and $j$ is the imaginary unit ($j^2 = -1$). The admittance ($Y$) is the reciprocal of the impedance ($Z$), if the impedance is not zero:
$$Y = \frac{1}{Z} = \frac{1}{R + jX} = \frac{R - jX}{R^2 + X^2},$$
and
$$B = \operatorname{Im}(Y) = \frac{-X}{R^2 + X^2},$$
where the susceptance is the imaginary part of the admittance. The magnitude of admittance is given by:
$$|Y| = \sqrt{G^2 + B^2}.$$
And similar formulas transform admittance into impedance, hence susceptance ($B$) into reactance ($X$):
$$Z = \frac{1}{Y} = \frac{1}{G + jB} = \frac{G - jB}{G^2 + B^2},$$
hence
$$X = \operatorname{Im}(Z) = \frac{-B}{G^2 + B^2}.$$
The reactance and susceptance are only reciprocals in the absence of either resistance or conductance (only if either $R = 0$ or $G = 0$, either of which implies the other, as long as $X \neq 0$, or equivalently as long as $B \neq 0$). Relation to capacitance In electronic and semiconductor devices, transient or frequency-dependent current between terminals contains both conduction and displacement components. Conduction current is related to moving charge carriers (electrons, holes, ions, etc.), while displacement current is caused by time-varying electric field. Carrier transport is affected by electric field and by a number of physical phenomena, such as carrier drift and diffusion, trapping, injection, contact-related effects, and impact ionization. As a result, device admittance is frequency-dependent, and the simple electrostatic formula for capacitance, $C = q/V$, is not applicable. A more general definition of capacitance, encompassing the electrostatic formula, is:
$$C = \frac{B(\omega)}{\omega} = \frac{\operatorname{Im}\left(Y(\omega)\right)}{\omega},$$
where $Y(\omega)$ is the device admittance, and $B(\omega)$ is the susceptance, both evaluated at the angular frequency in question, and $\omega$ is that angular frequency. It is common for electrical components to have slightly reduced capacitances at extreme frequencies, due to slight inductance of the internal conductors used to make capacitors (not just the leads), and permittivity changes in insulating materials with frequency: $C$ is very nearly, but not quite, a constant. Relationship to reactance Reactance is defined as the imaginary part of electrical impedance, and is analogous to but not generally equal to the negative reciprocal of the susceptance – that is their reciprocals are equal and opposite only in the special case where the real parts vanish (either zero resistance or zero conductance). In the special case of entirely zero admittance or exactly zero impedance, the relations are encumbered by infinities. However, for purely-reactive impedances (which are purely-susceptive admittances), the susceptance is equal to the negative reciprocal of the reactance, except when either is zero.
In mathematical notation:
$$B = -\frac{1}{X} \qquad (R = 0).$$
The minus sign is not present in the relationship between electrical resistance and the analogue of conductance, but otherwise a similar relation holds for the special case of reactance-free impedance (or susceptance-free admittance):
$$G = \frac{1}{R} \qquad (X = 0,\; B = 0).$$
If the imaginary unit is included, we get
$$jB = \frac{1}{jX}$$
for the resistance-free case since
$$\frac{1}{j} = -j.$$
Applications High susceptance materials are used in susceptors built into microwavable food packaging for their ability to convert microwave radiation into heat. See also Electrical measurements SI electromagnetism units References Physical quantities Electrical engineering
Electrical susceptance
[ "Physics", "Mathematics", "Engineering" ]
787
[ "Physical phenomena", "Physical quantities", "Quantity", "Electrical engineering", "Physical properties" ]
25,451,462
https://en.wikipedia.org/wiki/SEMAT
SEMAT (Software Engineering Method and Theory) is an initiative to reshape software engineering such that software engineering qualifies as a rigorous discipline. The initiative was launched in December 2009 by Ivar Jacobson, Bertrand Meyer, and Richard Soley with a call for action statement and a vision statement. The initiative was envisioned as a multi-year effort for bridging the gap between the developer community and the academic community and for creating a community giving value to the whole software community. The work is now structured in four different but strongly related areas: Practice, Education, Theory, and Community. The Practice area primarily addresses practices. The Education area is concerned with all issues related to training for both the developers and the academics including students. The Theory area is primarily addressing the search for a General Theory in Software Engineering. Finally, the Community area works with setting up legal entities, creating websites and community growth. It was expected that the Practice area, the Education area and the Theory area would at some point in time integrate in a way of value to all of them: the Practice area would be a "customer" of the Theory area, and direct the research to useful results for the developer community. The Theory area would give a solid and practical platform for the Practice area. And, the Education area would communicate the results in proper ways. Practice area The first step was here to develop a common ground or a kernel including the essence of software engineering – things we always have, always do, always produce when developing software. The second step was envisioned to add value on top of this kernel in the form of a library of practices to be composed to become specific methods, specific for all kinds of reasons such as the preferences of the team using it, kind of software being built, etc. The first step is as of this writing just about to be concluded. The results are a kernel including universal elements for software development – called the Essence Kernel, and a language – called the Essence Language - to describe these elements (and elements built on top of the kernel (practices, methods, and more). Essence, including both the kernel and language, has been published as an OMG standard in beta status in July 2013 and is expected to become a formally adopted standard in early 2014. The second step has just started, and the Practice area will be divided into a number of separate but interconnected tracks: the practice (library track), the tool track are so far identified and work has started or is about to get started. The practice track is currently working on a Users Guide. Education area The area focuses on leveraging the work of SEMAT in software engineering education, both within academia and industry. It promotes global education based on a common ground called Essence. The area's target groups are instructors such as university professors and industrial coaches as well as their students and learning practitioners. The goal of the area is to create educational courses and course materials that are internationally viable, identify pedagogical approaches that are appropriate and effective for specific target groups and disseminate experience and lessons learned. The area includes members from a number of universities and institutes worldwide. Most members have already been involved in leveraging aspects of SEMAT in the context of their software engineering courses. 
They are gathering their resources and starting a common venture towards defining a new generation of SEMAT-powered software engineering curricula. As of 2018, some studies of utilizing Essence in educational settings exist. One example of the use of Essence in university education was a software engineering course carried out in Norwegian University of Science and Technology. A study was conducted by introducing Essence into a project-based software engineering course, with the aim of understanding what difficulties the students faced in using Essence, and whether they considered it to have been useful. The results indicated that Essence could also be useful for novice software engineers by (1) encouraging them to look up and study new practices and methods in order to create their own, (2) encouraging them to adjust their way-of-working reflectively and in a situation-specific manner, (3) helping them structure their way of working. The findings of another study introducing students to Essence through a digital game supported these findings: the students felt that Essence will be useful to them in future, real-world projects, and that they wish to utilize it in them. Theory area An important part of SEMAT is that a general theory of software engineering is planned to emerge with significant benefits. A series of workshops held under the title SEMAT Workshop on a General Theory of Software Engineering (GTSE) are a key component in awareness building around general theories. In addition to community awareness building, SEMAT also aims to contribute with a specific general theory of software engineering. This theory should be solidly based on the SEMAT Essence language and kernel, and should support software engineering practitioners' goal-oriented decision making. As argued elsewhere, such support is predicated on the predictive capabilities of the theory. Thus, the SEMAT Essence should be augmented to allow the prediction of critical software engineering phenomena. The GTSE workshop series assists in the development of the SEMAT general software engineering theory by engaging a larger community in the search for, development of, and evaluation of promising theories, which may be used as a base for the SEMAT theory. Organizational structure Main organization SEMAT is chaired by Sumeet S. Malhotra of Tata Consultancy Services. The CEO of the organization is Ste Nadin of Fujitsu. The Executive Management Committee of SEMAT are Ivar Jacobson, Ste Nadin, Sumeet S. Malhotra, Paul E. McMahon, Michael Goedicke and Cecile Peraire. Japan Chapter Japan Chapter was established in April 2013, and it has more than 250 members as of November 2013. Member activities include carrying out seminars about SEMAT, considering utilization of SEMAT Essence for integrating different requirements engineering techniques and body of knowledges (BoKs), and translating articles into Japanese. Korea Chapter The chapter was inaugurated with about 50 members in October 2013. Member activities include: 2e Consulting started rewriting their IT service engagement methods using the Essence kernel, and uEngine Solutions started developing a tool to orchestrate Essence-kernel based practices into a project method. Korean government supported KAIST to conduct research in Essence. Latin American Chapter Semat Latin American Chapter was created in August 2011 in Medellin (Colombia) by Ivar Jacobson during the Latin American Software Engineering Symposium. 
This Chapter has 9 Executive Committee members from Colombia, Venezuela, Peru, Brazil, Argentina, Chile, and Mexico, chaired by Dr. Carlos Zapata from Colombia. More than 80 people signed the initial declaration of the Chapter and nowadays the Chapter members are in charge of disseminating the Semat ideas in all Latin America. Chapter members have participated in various Latin American conferences, including the Latin American Conference on Informatics (CLEI), the Ibero American Software Engineering and Knowledge Engineering Journeys (JIISIC), the Colombian Computing Conference (CCC), and the Chilean Computing Meeting (ECC). The Chapter contributed in the submission sent in response to the OMG call for proposals and currently studies didactic strategies for teaching the Semat kernel by games, theoretical studies about some kernel elements, and practical representations of several software development and quality methods by using the Semat kernel. Some of the members also translated the Essence book and some other Semat materials and papers into Spanish. Russia Chapter Russian Chapter has about 20 members. A few universities have incorporated SEMAT in their training courses , including Moscow State University, Moscow Institute of Physics and Technology, Higher School of Economics, Moscow State University of Economics, Statistics, and Informatics. The chapter and some commercial companies are carrying out seminars about SEMAT. INCOSE Russian Chapter is working on an extension of SEMAT to Systems Engineering. EC-leasing is working on an extension of the Kernel for Software Life Cycle. Russian Chapter attended in two conferences: Actual Problems of System and Software Engineering and SECR with SEMAT section and articles. Translation of the Essence book into Russian is in progress. Practical Applications of SEMAT Ideas developed by the SEMAT community have been applied by both industry and academia. Notable examples include: Reinsurance company Munich Re have assembled a family of "collaboration models" to cover the whole spectrum of software and application work. Four collaboration models — exploratory, standard, maintenance, and support — have been built on the same kernel from the same set of 12 practices. Tools supporting SEMAT The first tool that supported the authoring and development of SEMAT practices based on a kernel was the EssWork Practice Workbench tool provided by Ivar Jacobson International. The Practice Workbench tool was made available to the SEMAT community in June 2012 and is now publicly available and free to use. The Practice Workbench is an Integrated Practice Development Environment with support for collaborative practice and method development. 
Key features of the Practice Workbench include: Interactive presentation of the Essence Kernel Practice authoring and extension using the Essence Language Method composition Innovative card-based representation Publication of methods, practices and kernels as card-based HTML web-sites Export to the EssWork deployment environment Other publicly available tools supporting SEMAT's Essence include: SematAcc, the Essence Accelerator System, designed to speed up the learning of Essence Theory in Software Engineering and to easily test it with any software project The Essence Board Game, intended to teach the basics of Essence in a fun fashion Essencery, an Open Source alternative for composing methods using the Essence graphical language syntax References External links The SEMAT Initiative: A Call for Action Why We Need a Theory for Software Engineering Methods Need Theory SEMAT - Software Engineering Method and Theory The Essence of Software Engineering: The SEMAT Kernel Software engineering organizations Software engineering Operations research
SEMAT
[ "Mathematics", "Technology", "Engineering" ]
1,980
[ "Systems engineering", "Software engineering organizations", "Computer engineering", "Applied mathematics", "Software engineering", "Operations research", "Information technology" ]
25,451,813
https://en.wikipedia.org/wiki/Plasmonic%20solar%20cell
A plasmonic-enhanced solar cell, commonly referred to simply as plasmonic solar cell, is a type of solar cell (including thin-film or wafer-based cells) that converts light into electricity with the assistance of plasmons, but where the photovoltaic effect occurs in another material. A direct plasmonic solar cell is a solar cell that converts light into electricity using plasmons as the active, photovoltaic material. The active material thickness varies from that of traditional silicon PV (~100–200 μm wafers) to less than 2 μm thick, and theoretically could be as thin as 100 nm. The devices can be supported on substrates cheaper than silicon, such as glass, steel, plastic or other polymeric materials (e.g. paper). One of the challenges for thin film solar cells is that they do not absorb as much light as thicker solar cells made with materials with the same absorption coefficient. Methods for light trapping are important for thin film solar cells. Plasmonic-enhanced cells improve absorption by scattering light using metal nano-particles excited at their localized surface plasmon resonance. Plasmonic core-shell nanoparticles located in the front of the thin film solar cells can aid the weak absorption of Si solar cells in the near-infrared region: the fraction of light scattered into the substrate and the maximum optical path length enhancement can be as high as 0.999 and 3133, respectively. On the other hand, direct plasmonic solar cells exploit the fact that incoming light at the plasmon resonance frequency induces electron oscillations at the surface of the nanoparticles. The oscillating electrons can then be captured by a conductive layer, producing an electrical current. The voltage produced is dependent on the bandgap of the conductive layer and the potential of the electrolyte in contact with the nanoparticles. There is still considerable research necessary to enable these technologies to reach their full potential and enable the commercialization of plasmonic solar cells. History Devices There are currently three different generations of solar cells. The first generation (those in the market today) are made with crystalline semiconductor wafers, with crystalline silicon making "up to 93% market share and about 75 GW installed in 2016". Current solar cells trap light by creating pyramids on the surface which have dimensions bigger than most thin film solar cells. Making the surface of the substrate rough (typically by growing SnO2 or ZnO on the surface) with dimensions on the order of the incoming wavelengths and depositing the SC on top has been explored. This method increases the photocurrent, but the thin film solar cells would then have poor material quality. The second generation solar cells are based on thin film technologies such as those presented here. These solar cells focus on lowering the amount of material used as well as increasing the energy production. Third generation solar cells are currently being researched. They focus on reducing the cost of the second generation solar cells. The third generation SCs are discussed in more detail under the "Recent advancements" section. Design The design for plasmonic-enhanced solar cells varies depending on the method being used to trap and scatter light across the surface and through the material. Nanoparticle cells A common design is to deposit metal nano-particles on the top surface of the solar cell. When light hits these metal nano-particles at their surface plasmon resonance, the light is scattered in many different directions.
This allows light to travel along the solar cell and bounce between the substrate and the nano-particles, enabling the solar cell to absorb more light. The concentrated near-field intensity induced by the localized surface plasmon of the metal nanoparticles promotes the optical absorption of semiconductors. Recently, the plasmonic asymmetric modes of nanoparticles have been found to favor broadband optical absorption and to promote the electrical properties of solar cells. The simultaneous plasmon-optical and plasmon-electrical effects of nanoparticles reveal a promising feature of the nanoparticle plasmon. Recently, the core (metal)-shell (dielectric) nanoparticle has demonstrated zero backward scattering with enhanced forward scattering on a Si substrate when the surface plasmon is located in front of a solar cell. The core-shell nanoparticles can simultaneously support both electric and magnetic resonances, demonstrating entirely new properties when compared with bare metallic nanoparticles if the resonances are properly engineered. Despite these effects, the application of metal nanoparticles at the solar cells' front can bring considerable optical losses, chiefly due to partial shading and reflection of the impinging light. Instead, their integration at the rear side of thin-film devices, particularly in between the absorber layer and the rear metallic contact (acting as reflective mirror), can circumvent such issues since the particles interact only with the longer-wavelength light that is weakly absorbed by the cell, for which the plasmonic scattering effects can allow pronounced photocurrent gains. Such a so-called plasmonic back-reflector configuration has allowed the highest PV efficiency enhancements, for instance as demonstrated in thin-film silicon solar cells. Metal film cells Other methods utilizing surface plasmons for harvesting solar energy are available. One other type of structure is to have a thin film of silicon and a thin layer of metal deposited on the lower surface. The light will travel through the silicon and generate surface plasmons at the interface of the silicon and metal. This generates electric fields inside the silicon since electric fields do not travel very far into metals. If the electric field is strong enough, electrons can be moved and collected to produce a photocurrent. The thin film of metal in this design must have nanometer-sized grooves which act as waveguides for the incoming light in order to couple as many photons into the silicon thin film as possible. Principles General When a photon is absorbed in the substrate of a solar cell, an electron and hole are separated. Once the electrons and holes are separated, they will want to recombine since they are of opposite charge. If the electrons can be collected prior to this happening, they can be used as a current for an external circuit. Designing the thickness of a solar cell is always a trade-off between minimizing this recombination (thinner layers) and absorbing more photons (thicker layer). Nano-particles Scattering and Absorption The basic principles for the functioning of plasmonic-enhanced solar cells include scattering and absorption of light due to the deposition of metal nano-particles. Silicon does not absorb light very well. For this reason, more light needs to be scattered across the surface in order to increase the absorption. It has been found that metal nano-particles help to scatter the incoming light across the surface of the silicon substrate.
The equations that govern the scattering and absorption of light can be shown as:
$$C_{\text{scat}} = \frac{1}{6\pi}\left(\frac{2\pi}{\lambda}\right)^4 |\alpha|^2, \qquad C_{\text{abs}} = \frac{2\pi}{\lambda}\,\operatorname{Im}[\alpha].$$
The first shows the scattering of light for particles which have diameters below the wavelength of light; the second shows the absorption for a point dipole model. Here
$$\alpha = 3V\left[\frac{\epsilon_p/\epsilon_m - 1}{\epsilon_p/\epsilon_m + 2}\right]$$
is the polarizability of the particle. $V$ is the particle volume. $\epsilon_p$ is the dielectric function of the particle. $\epsilon_m$ is the dielectric function of the embedding medium. When $\epsilon_p = -2\,\epsilon_m$ the polarizability of the particle becomes large. This polarizability value is known as the surface plasmon resonance. The dielectric function for metals with low absorption can be defined as:
$$\epsilon_p(\omega) = 1 - \frac{\omega_p^2}{\omega^2}.$$
In the previous equation, $\omega_p$ is the bulk plasma frequency. This is defined as:
$$\omega_p^2 = \frac{N e^2}{m\,\epsilon_0},$$
where $N$ is the density of free electrons, $e$ is the electronic charge and $m$ is the effective mass of an electron. $\epsilon_0$ is the dielectric constant of free space. The equation for the surface plasmon resonance in free space can therefore be represented by the condition:
$$\epsilon_p(\omega_{sp}) = -2\,\epsilon_m.$$
Many of the plasmonic solar cells use nano-particles to enhance the scattering of light. These nano-particles take the shape of spheres, and therefore the surface plasmon resonance frequency for spheres is desirable. By solving the previous equations, the surface plasmon resonance frequency for a sphere in free space ($\epsilon_m = 1$) can be shown as:
$$\omega_{sp} = \frac{\omega_p}{\sqrt{3}}.$$
As an example, at the surface plasmon resonance for a silver nanoparticle, the scattering cross-section is about 10 times the geometric cross-section of the nanoparticle. The goal of the nano-particles is to trap light on the surface of the SC. The absorption of light is not important for the nanoparticle; rather, it is important for the SC. One would think that if the nanoparticle is increased in size, then the scattering cross-section becomes larger. This is true; however, when compared with the size of the nanoparticle, the ratio of the scattering cross-section to the geometric cross-section is reduced. Particles with a large scattering cross section tend to have a broader plasmon resonance range. Wavelength dependence Surface plasmon resonance mainly depends on the density of free electrons in the particle. The order of densities of electrons for different metals is shown below along with the type of light which corresponds to the resonance. Aluminum - Ultra-violet Silver - Ultra-violet Gold - Visible Copper - Visible If the dielectric constant for the embedding medium is varied, the resonant frequency can be shifted. Higher indices of refraction will lead to a longer resonant wavelength. Light trapping The metal nano-particles are deposited at a distance from the substrate in order to trap the light between the substrate and the particles. The particles are embedded in a material on top of the substrate. The material is typically a dielectric, such as silicon or silicon nitride. When performing experiments and simulations on the amount of light scattered into the substrate due to the distance between the particle and substrate, air is used as the embedding material as a reference. It has been found that the amount of light radiated into the substrate decreases with distance from the substrate. This means that nano-particles on the surface are desirable for radiating light into the substrate, but if there is no distance between the particle and substrate, then the light is not trapped and more light escapes. The surface plasmons are the excitations of the conduction electrons at the interface of the metal and the dielectric. Metallic nano-particles can be used to couple and trap freely propagating plane waves into the semiconductor thin film layer. Light can be folded into the absorbing layer to increase the absorption.
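Tying the quasi-static expressions above together, the following Python sketch evaluates the polarizability and scattering cross-section of a small sphere with a lossless Drude dielectric function; the plasma frequency and radius are order-of-magnitude placeholders, not values fitted to a specific metal:

import math

# Illustrative, order-of-magnitude inputs (not for a specific metal):
omega_p = 1.4e16                  # bulk plasma frequency, rad/s
radius = 20e-9                    # particle radius, m
eps_m = 1.0                       # embedding medium: free space
c = 2.998e8                       # speed of light, m/s
V = (4.0 / 3.0) * math.pi * radius**3

def polarizability(omega):
    # alpha = 3V (eps_p/eps_m - 1)/(eps_p/eps_m + 2), with a lossless
    # Drude dielectric function eps_p = 1 - (omega_p/omega)^2.
    eps_p = 1.0 - (omega_p / omega)**2
    ratio = eps_p / eps_m
    return 3.0 * V * (ratio - 1.0) / (ratio + 2.0)

omega_sp = omega_p / math.sqrt(3.0)   # sphere resonance in free space
omega = 0.9 * omega_sp                # evaluate slightly below resonance
lam = 2.0 * math.pi * c / omega       # free-space wavelength
alpha = polarizability(omega)
C_scat = (1.0 / (6.0 * math.pi)) * (2.0 * math.pi / lam)**4 * abs(alpha)**2
# For this lossless model Im(alpha) = 0, so C_abs = (2*pi/lam)*Im(alpha)
# vanishes; a damped (complex) Drude model would give finite absorption.
print(C_scat)                         # scattering cross-section, m^2

As the driving frequency approaches omega_sp, the denominator of the polarizability approaches zero and the cross-section diverges in this idealized lossless model, which is the resonance behavior described above.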
The localized surface plasmons in metal nano-particles and the surface plasmon polaritons at the interface of metal and semiconductor are of interest in the current research. In recently reported papers, the shape and size of the metal nano-particles are key factors to determine the incoupling efficiency. The smaller particles have larger incoupling efficiency due to the enhanced near-field coupling. However, very small particles suffer from large ohmic losses. Nevertheless, in certain types of nanostructured solar cells, such as the emerging quantum-dot intermediate band solar cells, the highly intense scattered near-field produced in the vicinity of plasmonic nanoparticles may be exploited for local absorption amplification in the quantum dots that are embedded in a host semiconductor. Metal film As light is incident upon the surface of the metal film, it excites surface plasmons. The surface plasmon frequency is specific for the material, but through the use of gratings on the surface of the film, different frequencies can be obtained. The surface plasmons are also preserved through the use of waveguides, as these make it easier for the surface plasmons to travel along the surface and minimize the losses due to resistance and radiation. The electric field generated by the surface plasmons influences the electrons to travel toward the collecting substrate. Materials Applications There are many applications for plasmonic-enhanced solar cells. The need for cheaper and more efficient solar cells is considerable. In order for solar cells to be considered cost-effective, they need to provide energy for a smaller price than that of traditional power sources such as coal and gasoline. The movement toward a greener world has helped to spark research in the area of plasmonic-enhanced solar cells. Currently, solar cells cannot exceed efficiencies of about 30% (first generation). With new technologies (third generation), efficiencies of up to 40-60% can be expected. With a reduction of materials through the use of thin film technology (second generation), prices can be driven lower. Certain applications for plasmonic-enhanced solar cells would be for space exploration vehicles. A main contribution for this would be the reduced weight of the solar cells. An external fuel source would also not be needed if enough power could be generated from the solar cells. This would drastically help to reduce the weight as well. Solar cells have a great potential to help rural electrification. An estimated two million villages near the equator have limited access to electricity and fossil fuels, and approximately 25% of people in the world do not have access to electricity. When the cost of extending power grids, running rural electricity and using diesel generators is compared with the cost of solar cells, in many cases the solar cells are superior. If the cost of the current solar cell technology is decreased even further, and its efficiency improved, then many rural communities and villages around the world could obtain electricity when current methods are out of the question. Specific applications for rural communities would be water pumping systems, residential electric supply and street lights.
A particularly interesting application would be for health systems in countries where motorized vehicles are not overly abundant. Solar cells could be used to provide the power to refrigerate medications in coolers during transport. Solar cells could also provide power to lighthouses, buoys, or even battleships out in the ocean. Industrial companies could use them to power telecommunications systems or monitoring and control systems along pipelines. If the solar cells could be produced on a large scale and be cost effective, then entire power stations could be built in order to provide power to the electrical grids. With a reduction in size, they could be implemented on both commercial and residential buildings with a much smaller footprint. Other applications are in hybrid systems. The solar cells could help to power high-consumption devices such as automobiles in order to reduce the amount of fossil fuels used. In consumer electronics devices, solar cells could be used to replace batteries for low-power electronics. This would save money and it would also reduce the amount of waste going into landfills. Recent advancements Choice of plasmonic metal nano-particles Proper choice of plasmonic metal nanoparticles is crucial for the maximum light absorption in the active layer. Front-surface-located nanoparticles of silver and gold (Ag and Au) are the most widely used materials due to their surface plasmon resonances being located in the visible range, therefore interacting more strongly with the peak solar intensity. However, such noble metal nanoparticles always introduce reduced light coupling into Si at the short wavelengths below the surface plasmon resonance due to the detrimental Fano effect, i.e. the destructive interference between the scattered and unscattered light. Moreover, the noble metal nano-particles are impractical to use for large-scale solar cell manufacture due to their high cost and scarcity in the Earth's crust. Recently, Zhang et al. demonstrated that low-cost and earth-abundant aluminium (Al) nano-particles can outperform the widely used Ag and Au nanoparticles. Al nanoparticles, with their surface plasmon resonances located in the UV region below the desired solar spectrum edge at 300 nm, can avoid the reduction and introduce extra enhancement in the shorter wavelength range. Shape choice of nano-particles Light trapping for absorption enhancement As discussed earlier, being able to concentrate and scatter light from the surface or the back side of the plasmonic-enhanced solar cell will help to increase efficiencies, particularly when employing thin photovoltaic materials. Recently, researchers at Sandia National Laboratories have developed a photonic waveguide which collects light at a certain wavelength and traps it within the structure. This new structure can contain 95% of the light that enters it compared to 30% for other traditional waveguides. It can also direct the light to within one wavelength, roughly ten times better than traditional waveguides. The wavelength this device captures can be selected by changing the structure of the lattice that makes up the structure. If this structure is used to trap light and keep it in the structure until the solar cell can absorb it, the efficiency of the solar cell could be increased dramatically. Another recent advancement in plasmonic-enhanced solar cells is using other methods to aid in the absorption of light. One method being researched is the use of metal wires on top of the substrate to scatter the light.
This would help by utilizing a larger area of the surface of the solar cell for light scattering and absorption. The danger in using lines instead of dots would be creating a reflective layer which would reject light from the system. This is very undesirable for solar cells. This would be very similar to the thin metal film approach, but it also utilizes the scattering effect of the nano-particles. Yue et al. used a type of new materials, called topological insulators, to increase the absorption of ultrathin a-Si solar cells. The topological insulator nanostructure has intrinsically core-shell configuration. The core is dielectric and has ultrahigh refractive index. The shell is metallic and support surface plasmon resonances. Through integrating the nanocone arrays into a-Si thin film solar cells, up to 15% enhancement of light absorption was predicted in the ultraviolet and visible ranges. Third generation The goal of third generation solar cells is to increase the efficiency using second generation solar cells (thin film) and using materials that are found abundantly on earth. This has also been a goal of the thin film solar cells. With the use of common and safe materials, third generation solar cells should be able to be manufactured in mass quantities, further reducing the costs. The initial costs would be high in order to produce the manufacturing processes, but after that they should be cheap. The way third generation solar cells will be able to improve efficiency is to absorb a wider range of frequencies. The current thin film technology has been limited to one frequency due to the use of single band gap devices. Multiple energy levels The idea for multiple energy level solar cells is to basically stack thin film solar cells on top of each other. Each thin film solar cell would have a different band gap which means that if part of the solar spectrum was not absorbed by the first cell then the one just below would be able to absorb part of the spectrum. These can be stacked and an optimal band gap can be used for each cell in order to produce the maximum amount of power. There are multiple options for how each cell can be connected, such as serial or parallel. The serial connection is desired because the output of the solar cell would just be two leads. The lattice structure in each of the thin film cells needs to be the same. If it is not then there will be losses. The processes used for depositing the layers are complex. They include Molecular Beam Epitaxy and Metal Organic Vapour Phase Epitaxy. The current efficiency record is made with this process but doesn't have exact matching lattice constants. The losses due to this are not as effective because the differences in lattices allows for more optimal band gap material for the first two cells. This type of cell is expected to be able to be 50% efficient. Lower-quality materials that use cheaper deposition processes are being researched as well. These devices are not as efficient, but the price, size and power combined allow them to be just as cost effective. Since the processes are simpler and the materials are more readily available, the mass production of these devices is more economical. Hot carrier cells A problem with solar cells is that the high energy photons that hit the surface are converted to heat. This is a loss for the cell because the incoming photons are not converted into usable energy. The idea behind the hot carrier cell is to utilize some of that incoming energy which is converted to heat. 
If the electrons and holes can be collected while hot, a higher voltage can be obtained from the cell. The problem with doing this is that the contacts which collect the electrons and holes will cool the material. Thus far, keeping the contacts from cooling the cell has been theoretical. Another way of improving the efficiency of the solar cell using the heat generated is to have a cell which allows lower energy photons to excite electron and hole pairs. This requires a small bandgap. Using a selective contact, the lower energy electrons and holes can be collected while allowing the higher energy ones to continue moving through the cell. The selective contacts are made using a double barrier resonant tunneling structure. The carriers are cooled when they scatter with phonons. If a material has a large phonon bandgap then the carriers will carry more of the heat to the contact and it won't be lost in the lattice structure. One material which has a large phonon bandgap is indium nitride. The hot carrier cells are in their infancy but are beginning to move toward the experimental stage. Plasmonic-electrical solar cells Having unique features of tunable resonances and unprecedented near-field enhancement, the plasmon is an enabling technique for light management. Recently, the performance of thin-film solar cells has been markedly improved by introducing metallic nanostructures. The improvements are mainly attributed to the plasmonic-optical effects for manipulating light propagation, absorption, and scattering. The plasmonic-optical effects could: (1) boost optical absorption of active materials; (2) spatially redistribute light absorption at the active layer due to the localized near-field enhancement around metallic nanostructures. Beyond the plasmonic-optical effects, the effects of plasmonically modified recombination, transport and collection of photocarriers (electrons and holes), hereafter named plasmonic-electrical effects, have been proposed by Sha et al. For boosting device performance, they conceived a general design rule, tailored to arbitrary electron to hole mobility ratio, to decide the transport paths of photocarriers. The design rule suggests that the electron to hole transport length ratio should be balanced with the electron to hole mobility ratio. In other words, the transport time of electrons and holes (from initial generation sites to corresponding electrodes) should be the same. The general design rule can be realized by spatially redistributing light absorption at the active layer of devices (with the plasmonic-electrical effect). They also demonstrated the breaking of the space-charge limit in plasmonic-electrical organic solar cells. Ultra-thin plasmonic wafer solar cells Reducing the silicon wafer thickness with minimal efficiency loss represents a mainstream trend in increasing the cost-effectiveness of wafer-based solar cells. Recently, Zhang et al. have demonstrated that, using the advanced light trapping strategy with a properly designed nano-particle architecture, the wafer thickness can be dramatically reduced to only around 1/10 of the current thickness (180 μm) without any solar cell efficiency loss at 18.2%.
Nano-particle integrated ultra-thin solar cells with only 3% of the current wafer thickness can potentially achieve 15.3% efficiency combining the absorption enhancement with the benefit of thinner wafer induced open circuit voltage increase. This represents a 97% material saving with only 15% relative efficiency loss. These results demonstrate the feasibility and prospect of achieving high-efficiency ultra-thin silicon wafer cells with plasmonic light trapping. Direct plasmonic solar cells The development of direct plasmonic solar cells that use plasmonic nanoparticles directly as light absorbers is much more recent than plasmonic-enhanced cells. In 2013 it was confirmed that hot carriers in plasmonic nanoparticles can be generated by excitation of localized surface plasmon resonance. The hot electrons were shown to be injected into a TiO2 conduction band, confirming their usability for light conversion to electricity. In 2019 another article was published describing how the hot electrons counterpart, the hot holes, can also be injected into a p-type semiconductor. This separation of charges enables direct use of plasmonic nanoparticles as light absorbers in photovoltaic cells. A spin-off company from Uppsala university, Peafowl Solar Power, is developing direct plasmonic solar cell technology for commercial applications such as transparent solar cells for dynamic glass. References Solar cells Plasmonics
Plasmonic solar cell
[ "Physics", "Chemistry", "Materials_science" ]
5,204
[ "Plasmonics", "Surface science", "Condensed matter physics", "Nanotechnology", "Solid state engineering" ]
25,453,985
https://en.wikipedia.org/wiki/Atomic%20clock
An atomic clock is a clock that measures time by monitoring the resonant frequency of atoms. It is based on atoms having different energy levels. Electron states in an atom are associated with different energy levels, and in transitions between such states they interact with a very specific frequency of electromagnetic radiation. This phenomenon serves as the basis for the International System of Units' (SI) definition of a second: The second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9,192,631,770 when expressed in the unit Hz, which is equal to s−1. This definition is the basis for the system of International Atomic Time (TAI), which is maintained by an ensemble of atomic clocks around the world. The system of Coordinated Universal Time (UTC) that is the basis of civil time implements leap seconds to allow clock time to track changes in Earth's rotation to within one second while being based on clocks that are based on the definition of the second, though leap seconds will be phased out in 2035. The accurate timekeeping capabilities of atomic clocks are also used for navigation by satellite networks such as the European Union's Galileo Programme and the United States' GPS. The timekeeping accuracy of the involved atomic clocks is important because the smaller the error in time measurement, the smaller the error in distance obtained by multiplying the time by the speed of light (a timing error of a nanosecond, or 1 billionth of a second (10−9 s), translates into an almost 30-centimetre (1 ft) distance and hence positional error). The main variety of atomic clock uses caesium atoms cooled to temperatures that approach absolute zero. The primary standard for the United States, the National Institute of Standards and Technology (NIST)'s caesium fountain clock named NIST-F2, measures time with an uncertainty of 1 second in 300 million years (a relative uncertainty of about 10−16). NIST-F2 was brought online on 3 April 2014. History The Scottish physicist James Clerk Maxwell proposed measuring time with the vibrations of light waves in his 1873 Treatise on Electricity and Magnetism: 'A more universal unit of time might be found by taking the periodic time of vibration of the particular kind of light whose wave length is the unit of length.' Maxwell argued this would be more accurate than the Earth's rotation, which defines the mean solar second for timekeeping. During the 1930s, the American physicist Isidor Isaac Rabi built equipment for atomic beam magnetic resonance frequency clocks. The accuracy of mechanical, electromechanical and quartz clocks is reduced by temperature fluctuations. This led to the idea of measuring the frequency of an atom's vibrations to keep time much more accurately, as proposed by James Clerk Maxwell, Lord Kelvin, and Isidor Rabi. Rabi proposed the concept in 1945, which led to a demonstration of a clock based on ammonia in 1949. This led to the first practical accurate atomic clock with caesium atoms being built at the National Physical Laboratory in the United Kingdom in 1955 by Louis Essen in collaboration with Jack Parry. In 1949, Alfred Kastler and Jean Brossel developed a technique called optical pumping for electron energy level transitions in atoms using light. This technique is useful for creating much stronger magnetic resonance and microwave absorption signals. 
Unfortunately, this caused a side effect: a light shift of the resonant frequency. Claude Cohen-Tannoudji and others managed to reduce the light shifts to acceptable levels. Ramsey developed a method, nowadays commonly known as Ramsey interferometry, for higher frequencies and narrower resonances in the oscillating fields. Kolsky, Phipps, Ramsey, and Silsbee used this technique for molecular beam spectroscopy in 1950. After 1956, atomic clocks were studied by many groups, such as the National Institute of Standards and Technology (formerly the National Bureau of Standards) in the USA, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the National Research Council (NRC) in Canada, the National Physical Laboratory in the United Kingdom, the International Time Bureau (French: Bureau International de l'Heure, abbreviated BIH) at the Paris Observatory, the National Radio Company, Bomac, Varian, Hewlett–Packard and Frequency & Time Systems. During the 1950s, the National Radio Company sold more than 50 units of the first atomic clock, the Atomichron. In 1964, engineers at Hewlett-Packard released the 5060 rack-mounted model of caesium clocks. Definition of the second In 1968, the SI defined the duration of the second to be 9,192,631,770 vibrations of the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom. Prior to that it was defined by there being 31,556,925.9747 seconds in the tropical year 1900. In 1997, the International Committee for Weights and Measures (CIPM) added that the preceding definition refers to a caesium atom at rest at a temperature of absolute zero. Following the 2019 revision of the SI, the definition of every base unit except the mole and almost every derived unit relies on the definition of the second. Timekeeping researchers are currently working on developing an even more stable atomic reference for the second, with a plan to find a more precise definition of the second as atomic clocks improve based on optical clocks or the Rydberg constant around 2030. Metrology advancements and optical clocks Technological developments such as lasers and optical frequency combs in the 1990s led to increasing accuracy of atomic clocks. Lasers enable optical-range control over atomic state transitions, which have a much higher frequency than microwaves, while the optical frequency comb measures such high-frequency oscillations of light very accurately. The first advance beyond the precision of caesium clocks occurred at NIST in 2010 with the demonstration of a "quantum logic" optical clock that used aluminum ions to achieve a precision of 8.6 × 10−18. Optical clocks are a very active area of research in the field of metrology as scientists work to develop clocks based on the elements ytterbium, mercury, aluminum, and strontium. Scientists at JILA demonstrated a strontium clock with a frequency precision of 2.1 × 10−18 in 2015. Scientists at NIST developed a quantum logic clock that measured a single aluminum ion in 2019 with a frequency uncertainty of 9.4 × 10−19. At JILA in September 2021, scientists demonstrated an optical strontium clock with a differential frequency precision of 7.6 × 10−21 between atomic ensembles separated by one millimetre. The second is expected to be redefined when the field of optical clocks matures, sometime around the year 2030 or 2034. In order for this to occur, optical clocks must be consistently capable of measuring frequency with accuracy at the low 10−18 level or better. 
In addition, methods for reliably comparing different optical clocks around the world in national metrology labs must be demonstrated, and the comparison must show relative clock frequency agreement at a similar level. Chip-scale atomic clocks In addition to increased accuracy, the development of chip-scale atomic clocks has expanded the number of places atomic clocks can be used. In August 2004, NIST scientists demonstrated a chip-scale atomic clock that was 100 times smaller than an ordinary atomic clock and had a much smaller power consumption. The atomic clock was about the size of a grain of rice with a frequency of about 9 GHz. This technology became available commercially in 2011. Atomic clocks on the scale of one chip require less than 30 milliwatts of power. The National Institute of Standards and Technology created a program, NIST on a Chip, to develop compact ways of measuring time with a device just a few millimeters across. Metrologists are currently (2022) designing atomic clocks that implement new developments such as ion traps and optical combs to reach greater accuracies. Measuring time with atomic clocks Clock mechanism An atomic clock is based on a system of atoms which may be in one of two possible energy states. A group of atoms in one state is prepared, then subjected to microwave radiation. If the radiation is of the correct frequency, a number of atoms will transition to the other energy state. The closer the frequency is to the inherent oscillation frequency of the atoms, the more atoms will switch states. Such correlation allows very accurate tuning of the frequency of the microwave radiation. Once the microwave radiation is adjusted to a known frequency where the maximum number of atoms switch states, the atom, and thus its associated transition frequency, can be used as a timekeeping oscillator to measure elapsed time. All timekeeping devices use oscillatory phenomena to accurately measure time, whether it is the rotation of the Earth for a sundial, the swinging of a pendulum in a grandfather clock, the vibrations of springs and gears in a watch, or voltage changes in a quartz crystal watch. However, all of these are easily affected by temperature changes and are not very accurate. The most accurate clocks use atomic vibrations to keep track of time. Clock transition states in atoms are insensitive to temperature and other environmental factors, and the oscillation frequency is much higher than that of any of the other clocks (in the microwave frequency regime and higher). One of the most important factors in a clock's performance is the atomic line quality factor Q, defined as the ratio of the absolute frequency ν of the resonance to the linewidth Δν of the resonance itself: Q = ν/Δν. Atomic resonances have a much higher Q than mechanical devices. Atomic clocks can also be isolated from environmental effects to a much higher degree. Atomic clocks have the benefit that atoms are universal, which means that the oscillation frequency is also universal. This is different from quartz and mechanical time measurement devices that do not have a universal frequency. A clock's quality can be specified by two parameters: accuracy and stability. Accuracy is a measurement of the degree to which the clock's ticking rate can be counted on to match some absolute standard such as the inherent hyperfine frequency of an isolated atom or ion. Stability describes how the clock performs when averaged over time to reduce the impact of noise and other short-term fluctuations (see precision). 
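As a rough illustration of the quality factor defined above, the following worked comparison uses the microwave caesium and optical strontium frequencies quoted later in this article; the 1 Hz linewidths are assumed round values for illustration, not measured figures for any particular device:

```latex
% Illustrative quality factors Q = \nu / \Delta\nu.
% The 1 Hz linewidths are assumed round values, not measured figures.
Q_{\text{Cs}} \approx \frac{9.19 \times 10^{9}\,\text{Hz}}{1\,\text{Hz}} \approx 10^{10},
\qquad
Q_{\text{Sr}} \approx \frac{4.29 \times 10^{14}\,\text{Hz}}{1\,\text{Hz}} \approx 4 \times 10^{14}.
```

Under these assumptions the optical transition offers a quality factor more than four orders of magnitude higher than the microwave one, which is the basic reason optical clocks can be so much more stable.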
The instability of an atomic clock is specified by its Allan deviation σy(τ). The limiting instability due to atom or ion counting statistics is given by σy(τ) ≈ (Δν/ν) · √(Tc/(N·τ)), where Δν is the spectroscopic linewidth of the clock system, ν is the transition frequency, N is the number of atoms or ions used in a single measurement, Tc is the time required for one cycle, and τ is the averaging period. This means instability is smaller when the linewidth Δν is smaller and when √N (the signal-to-noise ratio) is larger. The stability improves as the time τ over which the measurements are averaged increases from seconds to hours to days. The stability is most heavily affected by the oscillator frequency ν. This is why optical clocks such as strontium clocks (429 terahertz) are much more stable than caesium clocks (9.19 GHz). Modern clocks such as atomic fountains or optical lattices that use sequential interrogation are found to generate a type of noise that mimics and adds to the instability inherent in atom or ion counting. This effect is called the Dick effect and is typically the primary stability limitation for the newer atomic clocks. It is an aliasing effect; high frequency noise components in the local oscillator ("LO") are heterodyned to near zero frequency by harmonics of the repeating variation in feedback sensitivity to the LO frequency. The effect places new and stringent requirements on the LO, which must now have low phase noise in addition to high stability, thereby increasing the cost and complexity of the system. For the case of an LO with flicker frequency noise, for which the LO instability is independent of τ, with interrogation time Ti and duty factor d = Ti/Tc, the resulting Allan deviation can be approximated as a contribution proportional to the LO instability times √(Tc/τ). This expression shows the same dependence on τ as does the counting-statistics limit above, and, for many of the newer clocks, is significantly larger. Analysis of the effect and its consequence as applied to optical standards has been treated in a major review (Ludlow, et al., 2015) that lamented "the pernicious influence of the Dick effect", and in several other papers. Tuning and optimization The core of the traditional radio frequency atomic clock is a tunable microwave cavity containing a gas. In a hydrogen maser clock the gas emits microwaves (the gas mases) on a hyperfine transition, the field in the cavity oscillates, and the cavity is tuned for maximum microwave amplitude. Alternatively, in a caesium or rubidium clock, the beam or gas absorbs microwaves and the cavity contains an electronic amplifier to make it oscillate. For both types, the atoms in the gas are prepared in one hyperfine state prior to filling them into the cavity. For the second type, the number of atoms that change hyperfine state is detected and the cavity is tuned for a maximum of detected state changes. Most of the complexity of the clock lies in this adjustment process. The adjustment tries to correct for unwanted side-effects, such as frequencies from other electron transitions, temperature changes, and the spreading in frequencies caused by the vibration of molecules, including Doppler broadening. One way of doing this is to sweep the microwave oscillator's frequency across a narrow range to generate a modulated signal at the detector. The detector's signal can then be demodulated to apply feedback to control long-term drift in the radio frequency. In this way, the quantum-mechanical properties of the atomic transition frequency of the caesium can be used to tune the microwave oscillator to the same frequency, except for a small amount of experimental error. 
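A minimal numerical sketch of the counting-statistics limit given above; the linewidth, atom number and cycle time are assumed round values chosen for illustration, not specifications of any real clock:

```python
import math

def qpn_allan_deviation(linewidth_hz, freq_hz, n_atoms, cycle_s, tau_s):
    """Counting-statistics-limited Allan deviation:
    sigma_y(tau) ~ (dnu / nu) * sqrt(T_c / (N * tau))."""
    return (linewidth_hz / freq_hz) * math.sqrt(cycle_s / (n_atoms * tau_s))

# Assumed round numbers: 1 Hz linewidth, 10^6 atoms, 1 s cycle,
# averaged over one day, for a 9.19 GHz (caesium-like) and a
# 429 THz (strontium-like) transition frequency.
tau = 86_400.0
for name, nu in [("caesium, 9.19 GHz", 9.19e9), ("strontium, 429 THz", 4.29e14)]:
    sigma = qpn_allan_deviation(1.0, nu, 1_000_000, 1.0, tau)
    print(f"{name}: sigma_y(1 day) ~ {sigma:.1e}")
```

Running this gives roughly 4e-16 for the microwave case and 8e-21 for the optical case, showing how the higher operating frequency directly buys stability when all other parameters are held equal.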
When a clock is first turned on, it takes a while for the oscillator to stabilize. In practice, the feedback and monitoring mechanism is much more complex. Many of the newer clocks, including microwave clocks such as trapped ion or fountain clocks, and optical clocks such as lattice clocks, use a sequential interrogation protocol rather than the frequency modulation interrogation described above. An advantage of sequential interrogation is that it can accommodate much higher Qs, with ringing times of seconds rather than milliseconds. These clocks also typically have a dead time, during which the atom or ion collections are analyzed, renewed and driven into a proper quantum state, after which they are interrogated with a signal from a local oscillator (LO) for a time of perhaps a second or so. Analysis of the final state of the atoms is then used to generate a correction signal to keep the LO frequency locked to that of the atoms or ions. Accuracy The accuracy of atomic clocks has improved continuously since the first prototype in the 1950s. The first generation of atomic clocks were based on measuring caesium, rubidium, and hydrogen atoms. In a time period from 1959 to 1998, NIST developed a series of seven caesium-133 microwave clocks named NBS-1 to NBS-6 and NIST-7, after the agency changed its name from the National Bureau of Standards to the National Institute of Standards and Technology. The first clock had an accuracy of 10−11, and the last clock had an accuracy of 10−15. The clocks were the first to use a caesium fountain, which was introduced by Jerrold Zacharias, and laser cooling of atoms, which was demonstrated by Dave Wineland and his colleagues in 1978. The next step in atomic clock advances involves going from accuracies of 10−15 to accuracies of 10−17 and even 10−18. The goal is to redefine the second when clocks become so accurate that they will not lose or gain more than a second in the age of the universe. To do so, scientists must demonstrate the accuracy of clocks that use strontium and ytterbium and optical lattice technology. Such clocks are also called optical clocks, where the energy level transitions used are in the optical regime (giving rise to an even higher oscillation frequency), and which thus have much higher accuracy compared to traditional atomic clocks. The goal of an atomic clock with 10−16 accuracy was first reached at the United Kingdom's National Physical Laboratory's NPL-CsF2 caesium fountain clock and the United States' NIST-F2. The increase in precision from NIST-F1 to NIST-F2 is due to liquid nitrogen cooling of the microwave interaction region; the largest source of uncertainty in NIST-F1 is the effect of black-body radiation from the warm chamber walls. The performance of primary and secondary frequency standards contributing to International Atomic Time (TAI) is evaluated. The evaluation reports of individual (mainly primary) clocks are published online by the International Bureau of Weights and Measures (BIPM). Comparing atomic clocks Time standards A number of national metrology laboratories maintain atomic clocks, including the Paris Observatory, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the National Institute of Standards and Technology (NIST) in Colorado and Maryland, USA, JILA at the University of Colorado Boulder, the National Physical Laboratory (NPL) in the United Kingdom, and the All-Russian Scientific Research Institute for Physical-Engineering and Radiotechnical Metrology. 
They do this by designing and building frequency standards that produce electric oscillations at a frequency whose relationship to the transition frequency of caesium-133 is known, in order to achieve a very low uncertainty. These primary frequency standards estimate and correct various frequency shifts, including relativistic Doppler shifts linked to atomic motion, the thermal radiation of the environment (blackbody shift) and several other factors. The best primary standards currently produce the SI second with a relative uncertainty approaching 1 × 10−16. It is important to note that at this level of accuracy, the differences in the gravitational field in the device cannot be ignored. The standard is then considered in the framework of general relativity to provide a proper time at a specific point. The International Bureau of Weights and Measures (BIPM) provides a list of frequencies that serve as secondary representations of the second. This list contains the frequency values and respective standard uncertainties for the rubidium microwave transition and other optical transitions, including neutral atoms and single trapped ions. These secondary frequency standards can be intrinsically far more accurate; however, the uncertainties given in the list are limited by the uncertainty of the central caesium standard against which the secondary standards are calibrated. Primary frequency standards can be used to calibrate the frequency of other clocks used in national laboratories. These are usually commercial caesium clocks having very good long-term frequency stability, maintaining a frequency with a fractional stability better than 1 × 10−14 over a few months. The uncertainty of the primary standard frequencies, as noted above, is around 1 × 10−16. Hydrogen masers, which rely on the 1.4 GHz hyperfine transition in atomic hydrogen, are also used in time metrology laboratories. Masers outperform any commercial caesium clock in terms of short-term frequency stability. In the past, these instruments have been used in all applications that require a steady reference across time periods of less than one day (frequency stability of about 1 × 10−15 for averaging times of a few hours). Because some active hydrogen masers have a modest but predictable frequency drift with time, they have become an important part of the BIPM's ensemble of commercial clocks that implement International Atomic Time. Synchronization with satellites The time readings of clocks operated in metrology labs operating with the BIPM need to be known very accurately. Some operations require synchronization of atomic clocks separated by great distances, over thousands of kilometers. Global Navigational Satellite Systems (GNSS) provide a satisfactory solution to the problem of time transfer. Atomic clocks are used to broadcast time signals in the United States Global Positioning System (GPS), the Russian Federation's Global Navigation Satellite System (GLONASS), the European Union's Galileo system and China's BeiDou system. The signal received from one satellite in a metrology laboratory equipped with a receiver with an accurately known position allows the time difference between the local time scale and the GNSS system time to be determined with an uncertainty of a few nanoseconds when averaged over 15 minutes. Receivers allow the simultaneous reception of signals from several satellites, and make use of signals transmitted on two frequencies. 
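One standard technique built on such measurements (not named in the text above, but common practice) is common-view time transfer: two laboratories observe the same satellite at the same instant, and differencing their results cancels the satellite time scale and much of its error. A minimal sketch, with measurement values invented for illustration; real processing also corrects for ionospheric, tropospheric and geometric delays:

```python
# Common-view time transfer: two labs observe the same satellite at the
# same instant; differencing the measurements cancels the GNSS time scale.
utc_a_minus_gpst = 25.3e-9   # lab A: UTC(A) - GPST in seconds (invented value)
utc_b_minus_gpst = 11.8e-9   # lab B: UTC(B) - GPST in seconds (invented value)

# GPST (and its common error) drops out of the difference:
utc_a_minus_utc_b = utc_a_minus_gpst - utc_b_minus_gpst
print(f"UTC(A) - UTC(B) = {utc_a_minus_utc_b * 1e9:.1f} ns")
```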
As more satellites are launched and start operations, time measurements will become more accurate. These methods of time comparison must make corrections for the effects of special relativity and general relativity of a few nanoseconds. In June 2015, the National Physical Laboratory (NPL) in Teddington, UK; the French department of Time-Space Reference Systems at the Paris Observatory (LNE-SYRTE); the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), in Braunschweig; and Italy's Istituto Nazionale di Ricerca Metrologica (INRiM) in Turin started tests to improve the accuracy of current state-of-the-art satellite comparisons by a factor of 10, but the comparison will still be limited to a relative accuracy of about 1 × 10−16. These four European labs are developing and host a variety of experimental optical clocks that harness different elements in different experimental set-ups, and want to compare their optical clocks against each other and check whether they agree. International timekeeping National laboratories usually operate a range of clocks. These are operated independently of one another and their measurements are sometimes combined to generate a scale that is more stable and more accurate than that of any individual contributing clock. This scale allows for time comparisons between different clocks in the laboratory. These atomic time scales are generally referred to as TA(k) for laboratory k. Coordinated Universal Time (UTC) is the result of comparing clocks in national laboratories around the world to International Atomic Time (TAI), then adding leap seconds as necessary. TAI is a weighted average of around 450 clocks in some 80 time institutions. The relative stability of TAI is around 1 × 10−16. Before TAI is published, the frequency of the result is compared with the SI second at various primary and secondary frequency standards. This requires relativistic corrections to be applied to the location of the primary standard, which depend on the distance between the equal gravity potential and the rotating geoid of Earth. The values of the rotating geoid and the TAI change slightly each month and are available in the BIPM Circular T publication. The TAI time-scale is deferred by a few weeks as the average of atomic clocks around the world is calculated. TAI is not distributed in everyday timekeeping. Instead, an integer number of leap seconds are added or subtracted to correct for the Earth's rotation, producing UTC. The number of leap seconds is changed so that mean solar noon at the prime meridian (Greenwich) does not deviate from UTC noon by more than 0.9 seconds. National metrology institutions maintain an approximation of UTC referred to as UTC(k) for laboratory k. UTC(k) is distributed by the BIPM's Consultative Committee for Time and Frequency. The offset UTC−UTC(k) is calculated every 5 days, and the results are published monthly. The atomic clocks realizing UTC(k) keep it within 100 nanoseconds of UTC. In some countries, UTC(k) is the legal time that is distributed by radio, television, telephone, Internet, fiber-optic cables, time signal transmitters, and speaking clocks. In addition, GNSS provides time information accurate to a few tens of nanoseconds or better. Fiber Optics In a next phase, these labs strive to transmit comparison signals in the visible spectrum through fibre-optic cables. This will allow their experimental optical clocks to be compared with an accuracy similar to the expected accuracies of the optical clocks themselves. 
Some of these labs have already established fibre-optic links, and tests have begun on sections between Paris and Teddington, and Paris and Braunschweig. Fibre-optic links between experimental optical clocks also exist between the American NIST lab and its partner lab JILA, both in Boulder, Colorado, but these span much shorter distances than the European network and are between just two labs. According to Fritz Riehle, a physicist at PTB, "Europe is in a unique position as it has a high density of the best clocks in the world". In August 2016 the French LNE-SYRTE in Paris and the German PTB in Braunschweig reported the comparison and agreement of two fully independent experimental strontium lattice optical clocks in Paris and Braunschweig at an uncertainty of 5 × 10−17, via a newly established phase-coherent frequency link connecting Paris and Braunschweig using 1,415 km of telecom fibre-optic cable. The fractional uncertainty of the whole link was assessed to be well below that of the clocks themselves, making comparisons of even more accurate clocks possible. In 2021, NIST compared transmission of signals from a series of experimental atomic clocks located about 1.5 km apart at the NIST lab, its partner lab JILA, and the University of Colorado, all in Boulder, Colorado, over air and fiber-optic cable, to a precision at the 10−18 level. Microwave atomic clocks Caesium The SI second is defined as 9,192,631,770 cycles of the radiation corresponding to the unperturbed ground-state hyperfine transition of the caesium-133 atom. Caesium standards are therefore regarded as primary time and frequency standards. Caesium clocks include the NIST-F1 clock, developed in 1999, and the NIST-F2 clock, developed in 2013. Caesium has several properties that make it a good choice for an atomic clock. Whereas a hydrogen atom moves at 1,600 m/s at room temperature and a nitrogen atom moves at 510 m/s, a caesium atom moves at a much slower speed of 130 m/s due to its greater mass. The hyperfine frequency of caesium (~9.19 GHz) is also higher than that of other elements such as rubidium (~6.8 GHz) and hydrogen (~1.4 GHz). The high frequency of caesium allows for more accurate measurements. Caesium reference tubes suitable for national standards currently last about seven years and cost about US$35,000. Primary frequency and time standards like the United States Time Standard atomic clocks, NIST-F1 and NIST-F2, use far higher power. Block diagram In a caesium beam frequency reference, timing signals are derived from a high stability voltage-controlled quartz crystal oscillator (VCXO) that is tunable over a narrow range. The output frequency of the VCXO (typically 5 MHz) is multiplied by a frequency synthesizer to obtain microwaves at the frequency of the caesium atomic hyperfine transition (about 9.19 GHz). The output of the frequency synthesizer is amplified and applied to a chamber containing caesium gas, which absorbs the microwaves. The output current of the caesium chamber increases as absorption increases. The remainder of the circuitry simply adjusts the running frequency of the VCXO to maximize the output current of the caesium chamber, which keeps the oscillator tuned to the resonance frequency of the hyperfine transition. Rubidium The BIPM defines the unperturbed ground-state hyperfine transition frequency of the rubidium-87 atom, 6 834 682 610.904 312 6 Hz, in terms of the caesium standard frequency. Atomic clocks based on rubidium standards are therefore regarded as secondary representations of the second. Rubidium standard clocks are prized for their low cost, small size and short-term stability. 
They are used in many commercial, portable and aerospace applications. Modern rubidium standard tubes last more than ten years, and can cost as little as US$50. Some commercial applications use a rubidium standard periodically corrected by a global positioning system receiver (see GPS disciplined oscillator). This achieves excellent short-term accuracy, with long-term accuracy equal to (and traceable to) the US national time standards. Hydrogen The BIPM defines the unperturbed optical transition frequency of the hydrogen-1 neutral atom, 1 233 030 706 593 514 Hz, in terms of the caesium standard frequency. Atomic clocks based on hydrogen standards are therefore regarded as secondary representations of the second. Hydrogen masers have superior short-term stability compared to other standards, but lower long-term accuracy. The long-term stability of hydrogen maser standards decreases because of changes in the cavity's properties over time. The relative error of hydrogen masers is 5 × 10−16 for periods of 1000 seconds. This makes hydrogen masers good for radio astronomy, in particular for very long baseline interferometry. Hydrogen masers are used as flywheel oscillators in laser-cooled atomic frequency standards and for broadcasting time signals from national standards laboratories, although they need to be corrected as they drift from the correct frequency over time. The hydrogen maser is also useful for experimental tests of the effects of special relativity and general relativity, such as gravitational redshift. Other types of atomic clocks Quantum clocks In March 2008, physicists at NIST described a quantum logic clock based on individual ions of beryllium and aluminium. This clock was compared to NIST's mercury ion clock. These were the most accurate clocks that had been constructed, with neither clock gaining nor losing time at a rate that would exceed a second in over a billion years. In February 2010, NIST physicists described a second, enhanced version of the quantum logic clock based on individual ions of magnesium and aluminium. Considered the world's most precise clock in 2010 with a fractional frequency inaccuracy of 8.6 × 10−18, it offers more than twice the precision of the original. In July 2019, NIST scientists demonstrated such an Al+ quantum logic clock with a total uncertainty of 9.4 × 10−19, which is the first demonstration of such a clock with uncertainty below 10−18. Nuclear clock concept One theoretical possibility for improving the performance of atomic clocks is to use a nuclear energy transition (between different nuclear isomers) rather than the atomic electron transitions which current atomic clocks measure. Most nuclear transitions operate at far too high a frequency to be measured, but the exceptionally low excitation energy of thorium-229m produces "gamma rays" in the ultraviolet frequency range. In 2003, Ekkehard Peik and Christian Tamm noted this makes a clock possible with current optical frequency-measurement techniques. In 2012, it was shown that a nuclear clock based on a single ion could provide a total fractional frequency inaccuracy of about 1 × 10−19, which was better than existing 2019 optical atomic clock technology. Although a precise clock remains an unrealized theoretical possibility, efforts through the 2010s to measure the transition energy culminated in the 2024 measurement of the optical frequency with sufficient accuracy that an experimental optical nuclear clock can now be constructed. 
Although neutral atoms decay in microseconds by internal conversion, this pathway is energetically prohibited in ions, as the second and higher ionization energies are greater than the nuclear excitation energy, giving the ions a long radiative half-life, on the order of a thousand seconds. It is the combination of high transition frequency and long isomer lifetime (and hence narrow linewidth) which gives the clock a high quality factor. A nuclear energy transition offers the following potential advantages: Higher frequency. All other things being equal, a higher-frequency transition offers greater stability for simple statistical reasons (fluctuations are averaged over more cycles). Insensitivity to environmental effects. Due to its small size and the shielding effect of the surrounding electrons, an atomic nucleus is much less sensitive to ambient electromagnetic fields than is an electron in an orbital. Greater number of atoms. Because of the aforementioned insensitivity to ambient fields, it is not necessary to have the clock atoms well-separated in a dilute gas. Current measurements take advantage of the Mössbauer effect and place the thorium ions in a solid, which allows billions of atoms to be interrogated. Potential for redefining the second As of 2022, the second is best realised with caesium primary standard clocks such as IT-CsF2, NIST-F2, NPL-CsF2, PTB-CSF2, SU–CsFO2 or SYRTE-FO2. These clocks work by laser-cooling a cloud of caesium atoms to microkelvin temperatures in a magneto-optic trap. These cold atoms are then launched vertically by laser light. The atoms then undergo Ramsey excitation in a microwave cavity. The fraction of excited atoms is then detected by laser beams. These clocks have a systematic uncertainty of around 5 × 10−16, which is equivalent to about 50 picoseconds per day. A system of several fountains worldwide contributes to International Atomic Time. These caesium clocks also underpin optical frequency measurements. The advantage of optical clocks can be explained by the statement that the instability scales as σ ∝ 1/(f · S/N), where σ is the instability, f is the frequency, and S/N is the signal-to-noise ratio; increasing the operating frequency therefore directly reduces the instability. Optical clocks are based on forbidden optical transitions in ions or atoms. They have frequencies of hundreds of terahertz, with a natural linewidth of typically 1 Hz, so the Q-factor is about 10¹⁵, or even higher. They have better stabilities than microwave clocks, which means that they can facilitate evaluation of lower uncertainties. They also have better time resolution, which means the clock "ticks" faster. Optical clocks use either a single ion, or an optical lattice with 10⁴–10⁶ atoms. Rydberg constant A definition based on the Rydberg constant would involve fixing its value to an exact number, in the same way the caesium frequency is fixed today. The Rydberg constant describes the energy levels in a hydrogen atom with the nonrelativistic approximation En ≈ −hcR∞/n². The only viable way to fix the Rydberg constant involves trapping and cooling hydrogen. Unfortunately, this is difficult because hydrogen is very light and the atoms move very fast, causing Doppler shifts. The radiation needed to cool the hydrogen, Lyman-alpha radiation at 121.6 nm, is also difficult to generate. Another hurdle involves improving the uncertainty in quantum electrodynamics (QED) calculations. In the Report of the 25th meeting of the Consultative Committee for Units (2021), three options were considered for the redefinition of the second sometime around 2026, 2030, or 2034. The first redefinition approach considered was a definition based on a single atomic reference transition. The second redefinition approach considered was a definition based on a collection of frequencies. 
The third redefinition approach considered was a definition based on fixing the numerical value of a fundamental constant, such as making the Rydberg constant the basis for the definition. The committee concluded there was no feasible way to redefine the second with the third option, since no physical constant is currently known to enough digits to enable realizing the second with a fixed constant. Requirements A redefinition must include improved optical clock reliability. TAI must be contributed to by optical clocks before the BIPM affirms a redefinition. A consistent method of sending signals, such as fiber optics, must be developed before the second is redefined. Secondary representations of the second Representations of the second other than the SI caesium standard are motivated by the increasing accuracy of other atomic clocks. In particular, the high frequencies and small linewidths of optical clocks promise significantly improved signal-to-noise ratio and instability. Further secondary representations would aid in the preparation of a future redefinition of the second. A list of frequencies recommended for secondary representations of the second has been maintained by the International Bureau of Weights and Measures (BIPM) since 2006 and is available online. The list contains the frequency values and the respective standard uncertainties for the rubidium microwave transition and for several optical transitions. These secondary frequency standards can be accurate at the 10−18 level; however, the uncertainties provided in the list are orders of magnitude larger, since they are limited by the linking to the caesium primary standard that currently (2018) defines the second. Twenty-first century experimental atomic clocks that provide non-caesium-based secondary representations of the second are becoming so precise that they are likely to be used as extremely sensitive detectors for other things besides measuring frequency and time. For example, the frequency of atomic clocks is altered slightly by gravity, magnetic fields, electrical fields, force, motion, temperature and other phenomena. The experimental clocks tend to continue to improve, and leadership in performance has shifted back and forth between various types of experimental clocks. Applications The development of atomic clocks has led to many scientific and technological advances such as precise global and regional navigation satellite systems, and applications in the Internet, which depend critically on frequency and time standards. Atomic clocks are installed at sites of time signal radio transmitters. They are used at some long-wave and medium-wave broadcasting stations to deliver a very precise carrier frequency. Atomic clocks are used in many scientific disciplines, such as for long-baseline interferometry in radio astronomy. Global navigation satellite systems The Global Positioning System (GPS) operated by the United States Space Force provides very accurate timing and frequency signals. A GPS receiver works by measuring the relative time delay of signals from a minimum of four, but usually more, GPS satellites, each of which has at least two onboard caesium and as many as two rubidium atomic clocks. The relative times are mathematically transformed into three absolute spatial coordinates and one absolute time coordinate. GPS Time (GPST) is a continuous time scale and theoretically accurate to about 14 nanoseconds. However, most receivers lose accuracy in the interpretation of the signals and are only accurate to 100 nanoseconds. 
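A toy version of that transformation, solving four pseudorange equations for position and receiver clock bias with a Gauss-Newton iteration, might look like the following sketch. The satellite positions and receiver location are fabricated for illustration; real receivers additionally handle orbit models, atmospheric delays and relativistic corrections:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sats, pseudoranges, iters=10):
    """Solve |x - s_i| + c*b = rho_i for position x and clock bias b."""
    x = np.zeros(4)  # state: [x, y, z, c*b], start at Earth's centre
    for _ in range(iters):
        d = np.linalg.norm(sats - x[:3], axis=1)       # geometric ranges
        residual = pseudoranges - (d + x[3])           # measurement misfit
        J = np.hstack([(x[:3] - sats) / d[:, None],    # d(rho_i)/d(x,y,z)
                       np.ones((len(sats), 1))])       # d(rho_i)/d(c*b)
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x[:3], x[3] / C

# Fabricated satellite positions (m) and a receiver with a 100 us clock error
sats = np.array([[2.0e7, 0.0, 1.5e7], [-1.5e7, 1.2e7, 1.8e7],
                 [0.5e7, -2.0e7, 1.6e7], [1.0e7, 1.8e7, 1.4e7]])
truth, bias = np.array([3.0e6, 4.0e6, 4.0e6]), 1e-4
rho = np.linalg.norm(sats - truth, axis=1) + C * bias
pos, est_bias = solve_position(sats, rho)
print(pos, est_bias)  # recovers the fabricated position and clock bias
```

The key point the sketch makes is that the receiver clock error is an unknown solved alongside position, which is why four satellites, not three, are the minimum.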
GPST is related to but differs from TAI (International Atomic Time) and UTC (Coordinated Universal Time). GPST remains at a constant offset from TAI (TAI − GPST = 19 seconds) and, like TAI, does not implement leap seconds. Periodic corrections are performed to the on-board clocks in the satellites to keep them synchronized with ground clocks. The GPS navigation message includes the difference between GPST and UTC. As of July 2015, GPST is 17 seconds ahead of UTC because of the leap second added to UTC on 30 June 2015. Receivers subtract this offset from GPS Time to calculate UTC. The GLObal NAvigation Satellite System (GLONASS) operated by the Russian Aerospace Defence Forces provides an alternative to the Global Positioning System (GPS) and is the second navigational system in operation with global coverage and of comparable precision. GLONASS Time (GLONASST) is generated by the GLONASS Central Synchroniser and is typically accurate to better than 1,000 nanoseconds. Unlike GPS, the GLONASS time scale implements leap seconds, like UTC. The Galileo Global Navigation Satellite System is operated by the European GNSS Agency and the European Space Agency. Galileo started offering global Early Operational Capability (EOC) on 15 December 2016, providing the third, and first non-military operated, global navigation satellite system. Galileo System Time (GST) is a continuous time scale which is generated on the ground at the Galileo Control Centre in Fucino, Italy, by the Precise Timing Facility, based on averages of different atomic clocks and maintained by the Galileo Central Segment, and synchronised with TAI with a nominal offset below 50 nanoseconds. According to the European GNSS Agency, Galileo offers 30 nanoseconds timing accuracy. The March 2018 Quarterly Performance Report by the European GNSS Service Centre reported the UTC Time Dissemination Service Accuracy was ≤ 7.6 nanoseconds, computed by accumulating samples over the previous 12 months, and exceeding the ≤ 30 ns target. Each Galileo satellite has two passive hydrogen masers and two rubidium atomic clocks for onboard timing. The Galileo navigation message includes the differences between GST, UTC and GPST, to promote interoperability. In the summer of 2021, the European Union settled on a passive hydrogen maser for the second generation of Galileo satellites, starting in 2023, with an expected lifetime of 12 years per satellite. The masers are about 2 feet long with a weight of 40 pounds. The BeiDou-2/BeiDou-3 satellite navigation system is operated by the China National Space Administration. BeiDou Time (BDT) is a continuous time scale starting at 1 January 2006 at 0:00:00 UTC and is synchronised with UTC within 100 ns. BeiDou became operational in China in December 2011, with 10 satellites in use, and began offering services to customers in the Asia-Pacific region in December 2012. On 27 December 2018 the BeiDou Navigation Satellite System started to provide global services with a reported timing accuracy of 20 ns. The final, 35th, BeiDou-3 satellite for global coverage was launched into orbit on 23 June 2020. Experimental space clock In April 2015, NASA announced that it planned to deploy a Deep Space Atomic Clock (DSAC), a miniaturized, ultra-precise mercury-ion atomic clock, into outer space. NASA said that the DSAC would be much more stable than other navigational clocks. The clock was successfully launched on 25 June 2019, activated on 23 August 2019 and deactivated two years later on 18 September 2021. 
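The time-scale bookkeeping described in the GPS paragraph above amounts to a pair of constant offsets; a minimal sketch follows, where the leap-second count is the July 2015 value quoted in the text and must be updated whenever a new leap second is inserted:

```python
TAI_MINUS_GPST = 19   # fixed by construction of GPS Time, seconds
TAI_MINUS_UTC = 36    # leap-second total as of July 2015; changes over time

def gpst_minus_utc(tai_minus_utc=TAI_MINUS_UTC):
    """Seconds a receiver subtracts from GPS Time to obtain UTC."""
    return tai_minus_utc - TAI_MINUS_GPST

print(gpst_minus_utc())  # -> 17, matching the July 2015 offset in the text
```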
Military usage In 2022, DARPA announced a drive to upgrade the U.S. military's timekeeping systems for greater precision over time when sensors do not have access to GPS satellites, with a plan to reach a precision target far beyond current systems. The Robust Optical Clock Network will balance usability and accuracy as it is developed over 4 years. Time signal radio transmitters A radio clock is a clock that automatically synchronizes itself by means of radio time signals received by a radio receiver. Some manufacturers may label radio clocks as atomic clocks, because the radio signals they receive originate from atomic clocks. Normal low-cost consumer-grade receivers that rely on the amplitude-modulated time signals have a practical accuracy uncertainty of ± 0.1 second. This is sufficient for many consumer applications. Instrument grade time receivers provide higher accuracy. Radio clocks incur a propagation delay of approximately 1 ms for every 300 kilometres (186 mi) of distance from the radio transmitter. Many governments operate transmitters for timekeeping purposes. General relativity General relativity predicts that clocks tick slower deeper in a gravitational field, and this gravitational redshift effect has been well documented. Atomic clocks are effective at testing general relativity on ever smaller scales. A project to observe twelve atomic clocks from 11 November 1999 to October 2014 resulted in a further demonstration that Einstein's theory of general relativity is accurate at small scales. In 2021 a team of scientists at JILA measured the difference in the passage of time due to gravitational redshift between two layers of atoms separated by one millimeter, using a strontium optical clock cooled to 100 nanokelvins, with a fractional frequency precision of 7.6 × 10−21. Given its quantum nature and the fact that time is a relativistic quantity, atomic clocks can be used to see how time is influenced by general relativity and quantum mechanics at the same time. Financial systems Atomic clocks keep accurate records of transactions between buyers and sellers to the millisecond or better, particularly in high-frequency trading. Accurate timekeeping is needed to prevent illegal trading ahead of time, in addition to ensuring fairness to traders on the other side of the globe. The current system, known as NTP, is only accurate to a millisecond. Transportable Optical Clocks Many of the most accurate optical clocks are big and only available in large metrology labs. Thus they are not readily useful for space-limited factories or other industrial environments that could use an atomic clock for GPS accuracy. Researchers have designed a strontium optical lattice clock that can be moved around in an air-conditioned car trailer. They achieved a relative uncertainty of 7.4 × 10−17 compared to a stationary one. See also Caesium standard Clock drift Dick effect List of atomic clocks Network Time Protocol Optical clock Primary Atomic Reference Clock in Space Pulsar clock Speaking clock Time metrology Time transfer Explanatory notes References Resonance Electronic test equipment Metrology Time measurement systems
Atomic clock
[ "Physics", "Chemistry", "Technology", "Engineering" ]
8,893
[ "Resonance", "Physical phenomena", "Physical quantities", "Time measurement systems", "Time", "Electronic test equipment", "Measuring instruments", "Waves", "Scattering", "Spacetime" ]
1,940,824
https://en.wikipedia.org/wiki/Cyanogen%20iodide
Cyanogen iodide or iodine cyanide (ICN) is a pseudohalogen composed of iodine and the cyanide group. It is a highly toxic inorganic compound. It occurs as white crystals that react slowly with water to form hydrogen cyanide. Synthesis Cyanogen iodide is prepared by combining I2 and a cyanide, most commonly sodium cyanide, in ice-cold water. The product is extracted with diethyl ether. I2 + NaCN → NaI + ICN Applications Cyanogen iodide has been used in taxidermy as a preservative because of its toxicity. History Cyanogen iodide was first synthesized in 1824 by the French chemist Georges-Simon Serullas (1774–1832). Cyanogen iodide was considered one of the impurities in commercially sold iodine before the 1930s. Hazards Cyanogen iodide is toxic if inhaled or ingested and may be fatal if swallowed or absorbed through the skin. Cyanogen iodide may cause convulsions, paralysis and death from respiratory failure. It is a strong irritant and may cause burns to the eyes and skin on contact. If cyanogen iodide is heated enough to undergo complete decomposition, it may release toxic fumes of nitrogen oxides, cyanide and iodide. A fire may cause the release of poisonous gas. Cyanogen iodide decomposes on contact with acids, bases, ammonia and alcohols, and with heating. ICN slowly reacts with water or carbon dioxide to produce hydrogen cyanide. It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. Solutions in pyridine Cyanogen iodide solutions in pyridine conduct electric current. Dilute solutions of ICN in pyridine are colorless at first, but upon standing become successively yellow, orange, red-brown and deep red-brown. This effect is accompanied by a change in conductivity, which in turn is due to the formation of an electrolyte. When the electrical conductivity of ICN solutions is compared with that of iodine–pyridine solutions, the formation of the electrolyte in ICN is seen to proceed much more slowly. The results confirm that cyanides are much weaker salts in pyridine than are iodides, although cyanogen iodide dissolves in pyridine to give solutions whose electrical conductivity increases over time toward a maximum value. External links References Iodine compounds Triatomic molecules Cyano compounds Nonmetal halides Pseudohalogens
Cyanogen iodide
[ "Physics", "Chemistry" ]
597
[ "Pseudohalogens", "Inorganic compounds", "Molecules", "Triatomic molecules", "Matter" ]
1,942,466
https://en.wikipedia.org/wiki/High%20resolution%20electron%20energy%20loss%20spectroscopy
High resolution electron energy loss spectroscopy (HREELS) is a tool used in surface science. The inelastic scattering of electrons from surfaces is utilized to study electronic excitations or vibrational modes of the surface of a material or of molecules adsorbed to a surface. In contrast to other electron energy loss spectroscopies (EELS), HREELS deals with small energy losses in the range of 10−3 eV to 1 eV. It plays an important role in the investigation of surface structure, catalysis, dispersion of surface phonons and the monitoring of epitaxial growth. Overview of HREELS In general, electron energy loss spectroscopy is based on the energy losses of electrons when inelastically scattered on matter. An incident beam of electrons with a known energy (Ei) is scattered on a sample. The scattering of these electrons can excite the electronic structure of the sample. If this is the case, the scattered electron loses the specific energy (ΔE) needed to cause the excitation. Those scattering processes are called inelastic. It may be easiest to imagine that the energy loss is, for example, due to the excitation of an electron from an atomic K-shell to the M-shell. The energy for this excitation is taken away from the electron's kinetic energy. The energies of the scattered electrons (Es) are measured and the energy loss ΔE = Ei − Es can be calculated. From the measured data an intensity versus energy loss diagram is established. In the case of scattering by phonons the so-called energy loss can also be a gain of energy (similar to anti-Stokes Raman spectroscopy). These energy losses allow one, by comparison with other experiments or with theory, to draw conclusions about the surface properties of a sample. Excitations of the surface structure are usually of very low energy, ranging from 10−3 eV to 10 eV. In HREELS spectra the electrons suffer only small energy losses, so, as in Raman scattering, the interesting features are all located very close together and, in particular, very close to the strong elastic scattering peak. Hence EELS spectrometers require a high energy resolution. Therefore, this regime of EELS is called high resolution EELS. In this context, resolution shall be defined as the energy difference at which two features in a spectrum are just distinguishable, divided by the mean energy of those features: R = ΔE/E. In the case of EELS, the first thing to think of in order to achieve high resolution is using incident electrons of a very precisely defined energy and a high quality analyzer. Furthermore, high resolution is only possible when the energies of the incident electrons are not far bigger than the energy losses. For HREELS the energy of the incident electrons is therefore mostly significantly smaller than 10² eV. Considering that 10² eV electrons have a mean free path of around 1 nm (corresponding to a few monolayers), which decreases with lower energies, this automatically implies that HREELS is a surface sensitive technique. This is the reason why HREELS must be measured in reflection mode and must be implemented in ultra high vacuum (UHV). This is in contrast to core level EELS, which operates at very high energies and can therefore also be found in transmission electron microscopes (TEM). Instrumental developments have also enabled vibrational spectroscopy to be performed in TEM. In HREELS not only the electron energy loss can be measured; often the angular distribution of electrons of a certain energy loss, relative to the specular direction, gives interesting insight into the structures on a surface. 
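A short numeric check of the resolution argument above: applying the spectrometer's relative resolution to the incident beam energy (a simplification of the definition just given), we can ask whether two loss features are separable. The beam energies, peak positions and relative resolution below are assumed illustrative values, not data from any instrument:

```python
def resolvable(loss1_ev, loss2_ev, beam_ev, rel_resolution):
    """Two loss features are distinguishable if their separation exceeds
    the absolute resolution rel_resolution * beam_ev of the spectrometer."""
    return abs(loss1_ev - loss2_ev) > rel_resolution * beam_ev

# Two vibrational losses 10 meV apart, relative resolution 10^-3:
print(resolvable(0.100, 0.110, 5.0, 1e-3))    # 5 meV window  -> True
print(resolvable(0.100, 0.110, 100.0, 1e-3))  # 100 meV window -> False
```

This is the quantitative reason HREELS works at beam energies well below 10² eV: at fixed relative resolution, only a low-energy beam makes millielectronvolt losses separable.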
Physics of HREELS As mentioned above, HREELS involves an inelastic scattering process on a surface. For those processes the conservation of energy, as well as the conservation of the momentum's projection parallel to the surface, hold: Es = Ei − ΔE and ki,|| = ks,|| + q|| + G, where E are energies, k and q are wave vectors and G denotes a reciprocal lattice vector. One should mention at this point that for non-perfect surfaces G is not in every case a well defined quantum number, which has to be considered when using the second relation. Variables subscripted with i denote values of incident electrons, those subscripted with s values of scattered electrons. "||" denotes parallel to the surface. For the description of the inelastic scattering processes due to the excitation of vibrational modes of adsorbates, different approaches exist. The simplest approach distinguishes between regimes of small and large scattering angles: Dipole scattering The so-called dipole scattering can be applied when the scattered beam is very near to the specular direction. In this case a macroscopic theory can be applied to explain the results. It can be approached using the so-called dielectric theory introduced by Lucas and Šunjić, of which a quantum mechanical treatment was first presented by E. Evans and D.L. Mills in the early 1970s. Alternatively there is a more unfamiliar model which only holds exactly for perfect conductors: A unit cell at the surface does not have a homogeneous surrounding, hence it is supposed to have an electrical dipole moment. When a molecule is adsorbed to the surface there can be an additional dipole moment, and the total dipole moment P is present. This dipole moment causes a long range electronic potential in the vacuum above the surface. On this potential the incident electron can scatter inelastically, which means it excites vibrations in the dipole structure. The total dipole moment can then be written as a static part plus a part oscillating at the vibrational frequency. When the adsorbate sticks to a metal surface, image dipoles arise in the metal. Hence for a dipole adsorbed normal to the surface, the dipole moment "seen" from the vacuum doubles, whereas the dipole moment of a dipole adsorbed parallel to the surface vanishes. Hence an incident electron can excite the adsorbed dipole only when it is adsorbed normal to the surface, and the vibrational mode can then be detected in the energy loss spectrum. If the dipole is adsorbed parallel, then no energy losses will be detected and the vibrational modes of the dipole are missing from the energy loss spectrum. When measuring the intensity of the electron energy loss peaks and comparing to other experimental results or to theoretical models, it can also be determined whether a molecule is adsorbed normal to the surface or tilted by an angle. The dielectric model also holds when the material on which the molecule adsorbs is not a metal. The picture shown above is then the limit ε → ∞, where ε denotes the relative dielectric constant. As the incident electron in this model is scattered in the region above the surface, it does not directly impact the surface, and because the amount of momentum transferred is small, the scattering is mostly in the specular direction. Impact scattering Impact scattering is the regime which deals with electrons that are scattered further away from the specular direction. In those cases no macroscopic theory exists and a microscopic theory, such as quantum mechanical dispersion theory, has to be applied. 
Symmetry considerations then also result in certain selection rules (it is also assumed that the energy loss in the inelastic scattering process is negligible): When the scattering plane is a plane of reflection symmetry, then the scattering amplitude vanishes for every ks in the scattering plane for modes that are odd under the reflection. When the plane perpendicular to the surface and to the scattering plane is a plane of reflection symmetry and time reversal symmetry holds, then the scattering amplitudes in the specular direction vanish for modes whose normal coordinates are odd under the reflection. When the axis normal to the surface is an axis of two-fold symmetry, and time reversal symmetry holds, then the scattering amplitudes in the specular direction vanish for modes whose normal modes are odd under the twofold rotation. All those selection rules make it possible to identify the normal coordinates of the adsorbed molecules. Intermediate negative ion resonance In intermediate negative ion resonance the electron forms a compound state with an adsorbed molecule during the scattering process. However, the lifetimes of those states are so short that this type of scattering is barely observed. All of these regimes can be described at once with the help of a single microscopic theory. Selection rules for dipole scattering from the perspective of vibrational eigenmodes A microscopic theory makes it possible to approach the selection rule for dipole scattering in a more exact way. The scattering cross section is only non-vanishing in the case of a non-zero matrix element ⟨ψf|P⊥|ψi⟩, where ψi denotes the initial and ψf the final vibrational state, and P⊥ the component of the dipole moment perpendicular to the surface. As the dipole moment is something like a charge times a length, P⊥ has the same symmetry properties as the coordinate normal to the surface, which is totally symmetric. Hence the product of ψi and ψf must also be a totally symmetric function, otherwise the matrix element vanishes. Hence excitations from the totally symmetric ground state of a molecule are only possible to a totally symmetric vibrational state. This is the surface selection rule for dipole scattering. Note that it says nothing about the intensity of the scattering or the displacement of the atoms of the adsorbate; it is the total dipole moment that is the operator in the matrix element. This is important, as a vibration of the atoms parallel to the surface can also cause a vibration of the dipole moment normal to the surface. Therefore, the result in the "dipole scattering" section above is not exactly correct. When trying to gain information from selection rules, one must carefully consider whether a pure dipole or impact scattering regime is investigated. Further symmetry-breaking due to strong bindings to the surface must be considered. Another problem is that in the case of larger molecules many vibrational modes are often degenerate, a degeneracy which could again be lifted by strong molecule–surface interactions. Those interactions can also generate completely new dipole moments which the molecule does not have on its own. But when investigating carefully, it is mostly possible to get a very good picture of how the molecule adheres to the surface by analysis of the normal dipole modes. High resolution electron energy loss spectrometer As the electrons used for HREELS are of low energy, they have a very short mean free path not only in the sample materials but also under normal atmospheric conditions. Therefore, one has to set up the spectrometer in UHV. 
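The scale of the vacuum requirement can be estimated from gas-kinetic theory; the following rough sketch uses an assumed textbook molecular diameter and only crudely approximates the low-energy electron mean free path by the molecular one:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(pressure_pa, temp_k=300.0, diameter_m=3.7e-10):
    """Gas-kinetic mean free path: lambda = k*T / (sqrt(2) * pi * d^2 * p).
    The molecular diameter is an assumed typical value for air."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa)

for label, p_pa in [("1 atm", 1.013e5), ("UHV, 1e-10 mbar", 1e-8)]:
    print(f"{label}: lambda ~ {mean_free_path(p_pa):.1e} m")
```

Under these assumptions the path between collisions grows from tens of nanometres at atmospheric pressure to hundreds of kilometres in UHV, which is why the electron beam can only survive the metre-scale flight path of the spectrometer in ultra high vacuum.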
High resolution electron energy loss spectrometer As the electrons used for HREELS are of low energy, they have a very short mean free path not only in the sample material but also under normal atmospheric conditions. Therefore, the spectrometer has to be set up in UHV. The spectrometer is in general a computer-simulated design that optimizes the resolution while keeping an acceptable electron flux. The electrons are generated in an electron source by heating a tungsten cathode, which is encapsulated by a negatively charged, so-called repeller that prevents stray electrons from reaching the detector unit. The electrons can leave the source only through a lens system, e.g. a slot lens system consisting of several slits, each at a different potential. The purpose of this system is to focus the electrons onto the entrance of the monochromator unit, to obtain a high initial electron flux. The monochromator is usually a concentric hemispherical analyser (CHA). In more sensitive setups an additional pre-monochromator is used. The task of the monochromator is to reduce the energy of the passing electrons to a few eV with the help of electron lenses. It further lets only those electrons pass which have the chosen initial energy. To achieve a good overall resolution it is important to start with incident electrons of a well-defined energy; one normally chooses a resolution of ΔE/E ≈ 10−2 for the monochromator. This means that electrons leaving the monochromator with, e.g., 10 eV have an energy accurate to 10−1 eV. The beam's flux is then on the order of 10−8 A to 10−10 A. The radii of the CHA are of the order of some tens of millimetres, and the deflector electrodes have a sawtooth profile so that electrons reflected from the walls are backscattered, which reduces the background of electrons with the wrong E_i. The electrons are then focused by a lens system onto the sample. These lenses are, in contrast to those of the emitter system, very flexible, as it is important to obtain a good focus on the sample. To enable measurements of angular distributions, all these elements are mounted on a rotatable table whose axis is centred at the sample. The beam's own negative space charge causes it to broaden, which can be counteracted by charging the top and bottom plates of the CHA deflectors negatively; this in turn changes the deflection angle and has to be considered when designing the experiment. In the scattering process at the sample the electrons can lose energies from several 10−2 eV up to a few electronvolts. The scattered electron beam, whose flux is around a factor of 10−3 lower than that of the incident beam, then enters the analyzer, another CHA. The analyzer CHA again allows only electrons of certain energies to pass on to the analyzing unit, a channel electron multiplier (CEM). For this analyzing CHA the same considerations apply as for the monochromator, except that a higher resolution than in the monochromator is wanted; hence the radial dimensions of this CHA are usually bigger by about a factor of 2. Due to aberrations of the lens systems the beam has also broadened, so to sustain a high enough electron flux into the analyzer its apertures are also about a factor of 2 bigger. To make the analysis more accurate, and especially to reduce the background of electrons scattered inside the deflector, two analyzers are often used, or additional apertures are added behind the analyzer, since scattered electrons of the wrong energy normally leave the CHAs at large angles. In this way energy losses of 10−2 eV to 10 eV can be detected with accuracies of about 10−2 eV.
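The resolution figures quoted above can be related to the analyser geometry through the standard first-order expression for a hemispherical deflector, ΔE = E_pass (w/(2R0) + α²/4), with slit width w, mean radius R0 and entrance half-angle α. A small sketch with purely illustrative numbers, not those of any specific instrument:

def cha_resolution(pass_energy_eV, slit_width_mm, mean_radius_mm, half_angle_rad):
    # base energy resolution of a concentric hemispherical analyser (CHA):
    # slit-width term plus angular-aberration term
    return pass_energy_eV * (slit_width_mm / (2.0 * mean_radius_mm)
                             + half_angle_rad**2 / 4.0)

# 10 eV pass energy, 0.1 mm slits, 50 mm mean radius, ~1 degree half-angle:
dE = cha_resolution(10.0, 0.1, 50.0, 0.017)
print(f"Delta E ~ {dE * 1000:.1f} meV")   # ~11 meV, i.e. Delta E / E ~ 1e-3

Doubling the mean radius at fixed slit width halves the slit term, which is one reason the analyzer CHA is typically built about a factor of 2 larger, as noted above.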
General problems of HREEL spectrometers Due to the electron flux the apertures can become negatively charged, which makes them effectively smaller for the passing electrons. This has to be considered in the design of the setup, as it is difficult enough to keep the different potentials of the repeller, lenses, screening elements and reflector constant. Unstable potentials on lenses or CHA deflectors would cause fluctuations in the measured signal. Similar problems are caused by external electric or magnetic fields: either they cause fluctuations in the signal, or they add a constant offset. This is why the sample is normally shielded by equipotential metal electrodes to keep the region of the sample field-free, so that neither the probe electrons nor the sample are affected by external electric fields. Furthermore, a cylinder of a material with a high magnetic permeability, e.g. mu-metal, is built around the whole spectrometer to keep magnetic fields or field inhomogeneities at the experiment down to 10 mG or 1 mG/cm. For the same reason the whole experiment, except for the lenses, which are normally made of coated copper, is built of antimagnetic stainless steel, and insulating parts are avoided wherever possible. See also Electron energy loss spectroscopy References Bibliography External links Department of Chemistry University of Guelph, (HR)EELS Queen Mary University of London, HREELS in context with IR Leibniz-Institut für Festkörper- und Werkstoffforschung Dresden, HREELS Scientific techniques Vibrational spectroscopy Electron spectroscopy
High resolution electron energy loss spectroscopy
[ "Physics", "Chemistry" ]
3,070
[ "Electron spectroscopy", "Vibrational spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
1,942,497
https://en.wikipedia.org/wiki/Vibronic%20coupling
Vibronic coupling (also called nonadiabatic coupling or derivative coupling) in a molecule involves the interaction between electronic and nuclear vibrational motion. The term "vibronic" originates from the combination of the terms "vibrational" and "electronic", denoting the idea that in a molecule, vibrational and electronic interactions are interrelated and influence each other. The magnitude of vibronic coupling reflects the degree of such interrelation. In theoretical chemistry, the vibronic coupling is neglected within the Born–Oppenheimer approximation. Vibronic couplings are crucial to the understanding of nonadiabatic processes, especially near points of conical intersections. The direct calculation of vibronic couplings used to be uncommon due to difficulties associated with its evaluation, but has recently gained popularity due to increased interest in the quantitative prediction of internal conversion rates, as well as the development of cheap but rigorous ways to analytically calculate the vibronic couplings, especially at the TDDFT level. Definition Vibronic coupling describes the mixing of different electronic states as a result of small vibrations. It is commonly quantified by the derivative coupling vector d_IJ = ⟨Ψ_I|∇_R Ψ_J⟩, where Ψ_I and Ψ_J are adiabatic electronic wave functions and ∇_R denotes the gradient with respect to the nuclear coordinates R. Evaluation The evaluation of vibronic coupling often involves complex mathematical treatment. Numerical gradients The form of vibronic coupling is essentially the derivative of the wave function. Each component of the vibronic coupling vector can be calculated with numerical differentiation methods using wave functions at displaced geometries. This is the procedure used in MOLPRO. First-order accuracy can be achieved with the forward difference formula d_IJ·e_k ≈ ⟨Ψ_I(R)|Ψ_J(R + δ e_k)⟩/δ (using ⟨Ψ_I(R)|Ψ_J(R)⟩ = 0 for I ≠ J), and second-order accuracy with the central difference formula d_IJ·e_k ≈ [⟨Ψ_I(R)|Ψ_J(R + δ e_k)⟩ − ⟨Ψ_I(R)|Ψ_J(R − δ e_k)⟩]/(2δ). Here, e_k is a unit vector along direction k, δ is the displacement step, and the brackets denote overlap (transition density) integrals between the two electronic states. Evaluation of the electronic wave functions of both electronic states is required at N displaced geometries for first-order accuracy, and at 2N displacements to achieve second-order accuracy, where N is the number of nuclear degrees of freedom. This can be extremely computationally demanding for large molecules. As with other numerical differentiation methods, the evaluation of the nonadiabatic coupling vector with this method is numerically unstable, limiting the accuracy of the result. Moreover, the calculation of the two transition densities in the numerator is not straightforward. The wave functions of both electronic states are expanded with Slater determinants or configuration state functions (CSFs). The contribution from the change of the CSF basis is too demanding to evaluate using numerical methods, and is usually ignored by employing an approximate diabatic CSF basis. This will also cause further inaccuracy of the calculated coupling vector, although this error is usually tolerable. Analytic gradient methods Evaluating derivative couplings with analytic gradient methods has the advantage of high accuracy and very low cost, usually much cheaper than a single point calculation. This means an acceleration factor of 2N. However, the process involves intense mathematical treatment and programming. As a result, few programs have currently implemented analytic evaluation of vibronic couplings at wave function theory levels. Details about this method can be found in ref. For the implementation for SA-MCSCF and MRCI in COLUMBUS, please see ref.
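A toy illustration of the finite-difference scheme above, in Python (a sketch: a 2×2 model Hamiltonian stands in for a real electronic-structure calculation, and its eigenvectors play the role of the wave functions):

import numpy as np

def adiabatic_states(x, c=0.1):
    # 2x2 model "electronic Hamiltonian" along one nuclear coordinate x
    H = np.array([[x,  c],
                  [c, -x]])
    E, U = np.linalg.eigh(H)
    return E, U          # columns of U play the role of adiabatic wave functions

def d12_forward(x, delta=1e-5):
    # forward-difference derivative coupling d12 = <1| d/dx |2>
    _, U0 = adiabatic_states(x)
    _, U1 = adiabatic_states(x + delta)
    for k in range(2):   # fix the arbitrary sign of each eigenvector
        if U0[:, k] @ U1[:, k] < 0:
            U1[:, k] = -U1[:, k]
    return (U0[:, 0] @ U1[:, 1]) / delta

for x in (0.0, 0.5, 2.0):
    # analytic value for this model: |d12| = c / (2 * (x**2 + c**2))
    print(f"x = {x}: d12 = {d12_forward(x):+.4f}")

The coupling peaks where the two states are nearly degenerate (x = 0, the avoided crossing) and decays away from it.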
TDDFT-based methods The computational cost of evaluating the vibronic coupling using (multireference) wave function theory has led to the idea of evaluating them at the TDDFT level, which indirectly describes the excited states of a system without describing its excited state wave functions. However, the derivation of the TDDFT vibronic coupling theory is not trivial, since there are no electronic wave functions in TDDFT that are available for plugging into the defining equation of the vibronic coupling. In 2000, Chernyak and Mukamel showed that in the complete basis set (CBS) limit, knowledge of the reduced transition density matrix between a pair of states (both at the unperturbed geometry) suffices to determine the vibronic couplings between them. The vibronic couplings between two electronic states are given by contracting their reduced transition density matrix with the geometric derivatives of the nuclear attraction operator, followed by dividing by the energy difference of the two electronic states: d_IJ = Tr[γ_IJ ∇_R V_ne]/(E_J − E_I), where γ_IJ is the reduced transition density matrix and V_ne is the nuclear attraction operator. This enables one to calculate the vibronic couplings at the TDDFT level, since although TDDFT does not give excited state wave functions, it does give reduced transition density matrices, not only between the ground state and an excited state, but also between two excited states. The proof of the Chernyak-Mukamel formula is straightforward and involves the Hellmann-Feynman theorem. While the formula provides useful accuracy for a plane-wave basis (see e.g. ref.), it converges extremely slowly with respect to the basis set if an atomic orbital basis set is used, due to the neglect of the Pulay force. Therefore, modern implementations in molecular codes typically use expressions that include the Pulay force contributions, derived from the Lagrangian formalism. They are more expensive than the Chernyak-Mukamel formula, but still much cheaper than the vibronic couplings at wave function theory levels (more specifically, they are roughly as expensive as the SCF gradient for ground state-excited state vibronic couplings, and as expensive as the TDDFT gradient for excited state-excited state vibronic couplings). Moreover, they are much more accurate than the Chernyak-Mukamel formula for realistically sized atomic orbital basis sets. In programs where even the Chernyak-Mukamel formula is not implemented, there exists a third way to calculate the vibronic couplings, which gives the same results as the Chernyak-Mukamel formula. The key observation is that the contribution of an atom to the Chernyak-Mukamel vibronic coupling can be expressed as the nuclear charge of the atom times the electric field generated by the transition density (the so-called transition electric field), evaluated at the position of that atom. Therefore, Chernyak-Mukamel vibronic couplings can in principle be calculated by any program that both supports TDDFT and can compute the electric field generated by an arbitrary electron density at an arbitrary position. This technique was used to compute vibronic couplings using early versions of Gaussian, before Gaussian implemented vibronic couplings with the Pulay term.
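The structure of this expression, a transition matrix element of the geometric derivative of the Hamiltonian divided by the energy gap, can be made concrete on the same two-state toy model used above (a sketch of the off-diagonal Hellmann-Feynman form, not an actual TDDFT calculation):

import numpy as np

def d12_hellmann_feynman(x, c=0.1):
    # off-diagonal Hellmann-Feynman form d12 = <1| dH/dx |2> / (E2 - E1),
    # the skeleton of the Chernyak-Mukamel expression
    H  = np.array([[x, c], [c, -x]])
    dH = np.array([[1.0, 0.0], [0.0, -1.0]])   # analytic derivative of H wrt x
    E, U = np.linalg.eigh(H)
    return (U[:, 0] @ dH @ U[:, 1]) / (E[1] - E[0])

for x in (0.0, 0.5, 2.0):
    print(f"x = {x}: d12 = {d12_hellmann_feynman(x):+.4f}")
# matches the finite-difference values up to an arbitrary overall sign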
This happens in the neighbourhood of an avoided crossing of potential energy surfaces corresponding to distinct electronic states of the same spin symmetry. In the vicinity of conical intersections, where potential energy surfaces of the same spin symmetry cross, the magnitude of the vibronic coupling approaches infinity. In either case the adiabatic or Born–Oppenheimer approximation fails and vibronic couplings have to be taken into account. The large magnitude of the vibronic coupling near avoided crossings and conical intersections allows wave functions to propagate from one adiabatic potential energy surface to another, giving rise to nonadiabatic phenomena such as radiationless decay. Therefore, one of the most important applications of vibronic couplings is the quantitative calculation of internal conversion rates, through e.g. nonadiabatic molecular dynamics (including but not limited to surface hopping and path integral molecular dynamics). When the potential energy surfaces of both the initial and the final electronic state are approximated by multidimensional harmonic oscillators, one can compute the internal conversion rate by evaluating the vibration correlation function, which is much cheaper than nonadiabatic molecular dynamics and is free from random noise; this gives a fast method to compute the rates of relatively slow internal conversion processes, for which nonadiabatic molecular dynamics methods are not affordable. The singularity of the vibronic coupling at conical intersections is responsible for the existence of the geometric phase, which was discovered by Longuet-Higgins in this context. Difficulties and alternatives Although crucial to the understanding of nonadiabatic processes, direct evaluation of vibronic couplings has been very limited until very recently. Evaluation of vibronic couplings is often associated with severe difficulties in mathematical formulation and program implementation. As a result, the algorithms to evaluate vibronic couplings at wave function theory levels, or between two excited states, are not yet implemented in many quantum chemistry program suites. By comparison, vibronic couplings between the ground state and an excited state at the TDDFT level, which are easy to formulate and cheap to calculate, are more widely available. The evaluation of vibronic couplings typically requires correct description of at least two electronic states in regions where they are strongly coupled. This usually requires the use of multi-reference methods such as MCSCF and MRCI, which are computationally demanding and delicate quantum-chemical methods. However, there are also applications where vibronic couplings are needed but the relevant electronic states are not strongly coupled, for example when calculating slow internal conversion processes; in this case even methods like TDDFT, which fails near ground state-excited state conical intersections, can give useful accuracy. Moreover, TDDFT can describe the vibronic coupling between two excited states in a qualitatively correct fashion, even if the two excited states are very close in energy and therefore strongly coupled (provided that the equation-of-motion (EOM) variant of the TDDFT vibronic coupling is used in place of the time-dependent perturbation theory (TDPT) variant).
Therefore, the unsuitability of TDDFT for calculating ground state-excited state vibronic couplings near a ground state-excited state conical intersection can be bypassed by choosing a third state as the reference state of the TDDFT calculation (i.e. the ground state is treated like an excited state), leading to the popular approach of using spin-flip TDDFT to evaluate ground state-excited state vibronic couplings. When even an approximate calculation is unrealistic, the magnitude of the vibronic coupling is often introduced as an empirical parameter determined by reproducing experimental data. Alternatively, one can avoid explicit use of derivative couplings by switching from the adiabatic to the diabatic representation of the potential energy surfaces. Although rigorous validation of a diabatic representation requires knowledge of the vibronic coupling, it is often possible to construct such diabatic representations by referencing the continuity of physical quantities such as the dipole moment, charge distribution or orbital occupations. However, such construction requires detailed knowledge of the molecular system and introduces significant arbitrariness. Diabatic representations constructed with different methods can yield different results, and the reliability of the result relies on the discretion of the researcher. Theoretical development The first discussion of the effect of vibronic coupling on molecular spectra is given in the paper by Herzberg and Teller. Calculations of the lower excited levels of benzene by Sklar in 1937 (with the valence bond method) and later in 1938 by Goeppert-Mayer and Sklar (with the molecular orbital method) demonstrated a correspondence between the theoretical predictions and experimental results of the benzene spectrum. This was the first qualitative computation of the efficiencies of various vibrations at inducing absorption intensity. See also References Quantum chemistry Molecular vibration Dynamics (mechanics)
Vibronic coupling
[ "Physics", "Chemistry" ]
2,324
[ "Physical phenomena", "Quantum chemistry", "Spectrum (physical sciences)", "Molecular physics", "Molecular vibration", "Quantum mechanics", "Classical mechanics", "Theoretical chemistry", "Motion (physics)", "Dynamics (mechanics)", " molecular", "Atomic", "Spectroscopy", " and optical phys...
1,943,694
https://en.wikipedia.org/wiki/Single-molecule%20magnet
A single-molecule magnet (SMM) is a metal-organic compound that has superparamagnetic behavior below a certain blocking temperature at the molecular scale. In this temperature range, an SMM exhibits magnetic hysteresis of purely molecular origin. In contrast to conventional bulk magnets and molecule-based magnets, collective long-range magnetic ordering of magnetic moments is not necessary. Although the term "single-molecule magnet" was first employed in 1996, the first single-molecule magnet, [Mn12O12(OAc)16(H2O)4] (nicknamed "Mn12"), was reported in 1991. This manganese oxide compound features a central Mn(IV)4O4 cube surrounded by a ring of 8 Mn(III) units connected through bridging oxo ligands, and displays slow magnetic relaxation behavior up to temperatures of ca. 4 K. Efforts in this field primarily focus on raising the operating temperatures of single-molecule magnets to liquid nitrogen temperature or room temperature in order to enable applications in magnetic memory. Along with raising the blocking temperature, efforts are being made to develop SMMs with high energy barriers to prevent fast spin reorientation. Recent acceleration in this field of research has resulted in significant enhancements of single-molecule magnet operating temperatures to above 70 K. Measurement Arrhenius behavior of magnetic relaxation Because of single-molecule magnets' magnetic anisotropy, the magnetic moment has usually only two stable orientations antiparallel to each other, separated by an energy barrier. The stable orientations define the molecule's so-called "easy axis". At finite temperature, there is a finite probability for the magnetization to flip and reverse its direction. Identical to a superparamagnet, the mean time between two flips is called the Néel relaxation time and is given by the following Néel–Arrhenius equation: τ = τ0 exp(Ueff/(kBT)), where: τ is the magnetic relaxation time, or the average amount of time that it takes for the molecule's magnetization to randomly flip as a result of thermal fluctuations; τ0 is a length of time, characteristic of the material, called the attempt time or attempt period (its reciprocal is called the attempt frequency), with a typical value between 10−9 and 10−10 seconds; Ueff is the energy barrier associated with the magnetization moving from its initial easy axis direction, through a "hard plane", to the other easy axis direction (the barrier Ueff is generally reported in cm−1 or in kelvins); kB is the Boltzmann constant; and T is the temperature. This magnetic relaxation time, τ, can be anywhere from a few nanoseconds to years or much longer. Magnetic blocking temperature The so-called magnetic blocking temperature, TB, is defined as the temperature below which the relaxation of the magnetization becomes slow compared to the time scale of a particular investigation technique. Historically, the blocking temperature for single-molecule magnets has been defined as the temperature at which the molecule's magnetic relaxation time, τ, is 100 seconds. This definition is the current standard for comparison of single-molecule magnet properties, but otherwise is not technologically significant. There is typically a correlation between increasing an SMM's blocking temperature and energy barrier. The average blocking temperature for SMMs is 4 K. Dy-metallocenium salts are the most recent SMMs to achieve magnetic hysteresis at the highest temperature so far, greater than that of liquid nitrogen.
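These two definitions can be sketched numerically (all values illustrative; Ueff is expressed here in kelvin, i.e. as U/kB, and the attempt time is set to a typical 10−9 s):

import math

def relaxation_time(T, U_eff_K, tau0=1e-9):
    # Neel-Arrhenius law: tau = tau0 * exp(U_eff / (kB * T)), U_eff in kelvin
    return tau0 * math.exp(U_eff_K / T)

def blocking_temperature(U_eff_K, tau0=1e-9, tau_ref=100.0):
    # temperature at which tau reaches the conventional 100 s reference
    return U_eff_K / math.log(tau_ref / tau0)

print(f"T_B      = {blocking_temperature(60.0):.2f} K")   # ~2.4 K for a 60 K barrier
print(f"tau(2 K) = {relaxation_time(2.0, 60.0):.2e} s")   # ~1e4 s: blocked below T_B

In real systems the relaxation is often not perfectly Arrhenius (e.g. because of through-barrier quantum tunneling), which is why Ueff only loosely predicts TB, as discussed under Performance below.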
Intramolecular magnetic exchange The magnetic coupling between the spins of the metal ions is mediated by superexchange interactions and can be described by the following isotropic Heisenberg Hamiltonian: H = −2 Σi<j Jij Si·Sj, where Jij is the coupling constant between spin i (operator Si) and spin j (operator Sj). For positive J the coupling is called ferromagnetic (parallel alignment of spins) and for negative J the coupling is called antiferromagnetic (antiparallel alignment of spins). A single-molecule magnet is characterized by: a high spin ground state, a high zero-field splitting (due to high magnetic anisotropy), and negligible magnetic interaction between molecules. The combination of these properties can lead to an energy barrier, so that at low temperatures the system can be trapped in one of the high-spin energy wells.
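A minimal sketch of this Hamiltonian for the smallest possible cluster, two spin-1/2 centres (the −2J Σ Si·Sj convention above is assumed; illustrative only):

import numpy as np

# spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def heisenberg_dimer(J):
    # H = -2J S1.S2 acting on the 4-dimensional two-spin product space
    return sum(-2.0 * J * np.kron(s, s) for s in (sx, sy, sz))

for J in (+1.0, -1.0):
    E = np.linalg.eigvalsh(heisenberg_dimer(J)).round(3)
    print(f"J = {J:+.0f}: energy levels {E}")
# J > 0 (ferromagnetic): the triplet (three levels) lies lowest;
# J < 0 (antiferromagnetic): the singlet lies lowest

In a real cluster, such couplings between many centres determine the high-spin ground state, whose levels are then split by the magnetic anisotropy to produce the relaxation barrier.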
Barrier to magnetic relaxation A single-molecule magnet can have a positive or negative magnetic moment, and the energy barrier between these two states greatly determines the molecule's relaxation time. This barrier depends on the total spin of the molecule's ground state and on its magnetic anisotropy. The latter quantity can be studied with EPR spectroscopy. Performance The performance of single-molecule magnets is typically defined by two parameters: the effective barrier to slow magnetic relaxation, Ueff, and the magnetic blocking temperature, TB. While these two variables are linked, only the latter variable, TB, directly reflects the performance of the single-molecule magnet in practical use. In contrast, Ueff, the thermal barrier to slow magnetic relaxation, only correlates to TB when the molecule's magnetic relaxation behavior is perfectly Arrhenius in nature. The table below lists representative and record 100-s magnetic blocking temperatures and Ueff values that have been reported for single-molecule magnets. Abbreviations: OAc=acetate, Cpttt=1,2,4‐tri(tert‐butyl)cyclopentadienide, CpMe5= 1,2,3,4,5-penta(methyl)cyclopentadienide, CpiPr4H= 1,2,3,4-tetra(isopropyl)cyclopentadienide, CpiPr4Me= 1,2,3,4-tetra(isopropyl)-5-(methyl)cyclopentadienide, CpiPr4Et= 1-(ethyl)-2,3,4,5-tetra(isopropyl)cyclopentadienide, CpiPr5= 1,2,3,4,5-penta(isopropyl)cyclopentadienide *indicates parameters from magnetically dilute samples Types Metal clusters Metal clusters formed the basis of the first decade-plus of single-molecule magnet research, beginning with the archetype of single-molecule magnets, "Mn12". This complex is a polymetallic manganese (Mn) complex having the formula [Mn12O12(OAc)16(H2O)4], where OAc stands for acetate. It has the remarkable property of showing an extremely slow relaxation of its magnetization below a blocking temperature. [Mn12O12(OAc)16(H2O)4]·4H2O·2AcOH, which is called "Mn12-acetate", is a common form of this used in research. Single-molecule magnets are also based on iron clusters because they potentially have large spin states. In addition, the biomolecule ferritin is also considered a nanomagnet. In the cluster Fe8Br the cation Fe8 stands for [Fe8O2(OH)12(tacn)6]8+, with tacn representing 1,4,7-triazacyclononane. The ferrous cube complex [Fe4(sae)4(MeOH)4] was the first example of a single-molecule magnet involving an Fe(II) cluster, and the core of this complex is a slightly distorted cube with Fe and O atoms on alternating corners. Remarkably, this single-molecule magnet exhibits non-collinear magnetism, in which the atomic spin moments of the four Fe atoms point in opposite directions along two nearly perpendicular axes. Theoretical computations showed that approximately two magnetic electrons are localized on each Fe atom, with the other atoms being nearly nonmagnetic, and the spin–orbit-coupling potential energy surface has three local energy minima with a magnetic anisotropy barrier just below 3 meV. Applications Many types of single-molecule magnets have been discovered, and many potential uses have been proposed. Single-molecule magnets represent a molecular approach to nanomagnets (nanoscale magnetic particles). Due to the typically large, bistable spin anisotropy, single-molecule magnets promise the realization of perhaps the smallest practical unit for magnetic memory, and thus are possible building blocks for a quantum computer. Consequently, many groups have devoted great effort to the synthesis of additional single-molecule magnets. Single-molecule magnets have been considered as potential building blocks for quantum computers. A single-molecule magnet is a system of many interacting spins with clearly defined low-lying energy levels. The high symmetry of the single-molecule magnet allows for a simplification of the spins, which can be controlled in external magnetic fields. Single-molecule magnets display strong anisotropy, a property which allows a material to assume different properties in different orientations. Anisotropy ensures that a collection of independent spins would be advantageous for quantum computing applications. A large number of independent spins, compared to a single spin, permits the creation of a larger qubit and therefore a larger capacity of memory. Superposition and interference of the independent spins also allow for further simplification of classical computation algorithms and queries. Theoretically, quantum computers can overcome the physical limitations presented by classical computers by encoding and decoding quantum states. Single-molecule magnets have been utilized for the Grover algorithm, a quantum search theory. The quantum search problem typically requests a specific element to be retrieved from an unordered database. Classically the element would be retrieved after N/2 attempts on average, whereas a quantum search utilizes superpositions of data in order to retrieve the element, theoretically reducing the search to a single query. A study conducted by Leuenberger and Loss specifically utilized crystals to amplify the moment of the single-molecule magnets Mn12 and Fe8. Mn12 and Fe8 were both found to be ideal for memory storage, with a retrieval time of approximately 10−10 seconds. Another approach to information storage with the SMM Fe4 involves the application of a gate voltage for a state transition from neutral to anionic. Using electrically gated molecular magnets offers the advantage of control over the cluster of spins during a shortened time scale. The electric field can be applied to the SMM using a tunneling microscope tip or a strip-line. The corresponding changes in conductance are unaffected by the magnetic states, showing that information storage could be performed at much higher temperatures than the blocking temperature. The specific mode of information transfer includes transfer from DVD to another readable medium, as shown with Mn12 patterned molecules on polymers. Another application for SMMs is in magnetocaloric refrigerants.
A machine learning approach using experimental data has been able to predict novel SMMs that would have large entropy changes, and would therefore be more suitable for magnetic refrigeration. Three hypothetical SMMs are proposed for experimental synthesis: Cr2Gd2(OAc)5+, Mn2Gd2(OAc)5+, and [Fe4Gd6(O3PCH2Ph)6(O2CtBu)14(MeCN)2]. The main SMM characteristics that contribute to the entropy properties include dimensionality and the coordinating ligands. In addition, single-molecule magnets have provided physicists with useful test-beds for the study of quantum mechanics. Macroscopic quantum tunneling of the magnetization was first observed in Mn12O12, characterized by evenly spaced steps in the hysteresis curve. The periodic quenching of this tunneling rate in the compound Fe8 has been observed and explained with geometric phases. See also Ferromagnetism Antiferromagnetism Magnetic anisotropy Single-molecule experiment Magnetism Superparamagnetism Magnetochemistry References External links Molecular Magnetism Web, Jürgen Schnack Condensed matter physics Quantum magnetism Types of magnets
Single-molecule magnet
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,481
[ "Phases of matter", "Quantum mechanics", "Materials science", "Quantum magnetism", "Condensed matter physics", "Matter" ]
1,943,968
https://en.wikipedia.org/wiki/Cativa%20process
The Cativa process is a method for the production of acetic acid by the carbonylation of methanol. The technology, which is similar to the Monsanto process, was developed by BP Chemicals and is licensed by BP Plc. The process is based on an iridium-containing catalyst, such as the complex [Ir(CO)2I2]− (1). The Cativa and Monsanto processes are sufficiently similar that they can use the same chemical plant. Initial studies by Monsanto had shown iridium to be less active than rhodium for the carbonylation of methanol. Subsequent research, however, showed that the iridium catalyst could be promoted by ruthenium, and this combination leads to a catalyst that is superior to the rhodium-based systems. The switch from rhodium to iridium also allows the use of less water in the reaction mixture. This change reduces the number of drying columns necessary, decreases the formation of by-products, such as propionic acid, and suppresses the water gas shift reaction. The catalytic cycle for the Cativa process begins with the reaction of methyl iodide with the square planar active catalyst species (1) to form the octahedral iridium(III) species (2), the fac-isomer of [Ir(CO)2(CH3)I3]−. This oxidative addition reaction involves the formal insertion of the iridium(I) centre into the carbon-iodine bond of methyl iodide. After ligand exchange (iodide for carbon monoxide), the migratory insertion of carbon monoxide into the iridium-carbon bond, step (3) to (4), results in the formation of a square pyramidal species with a bound acetyl ligand. The active catalyst species (1) is regenerated by the reductive elimination of acetyl iodide from (4), a de-insertion reaction. The acetyl iodide is hydrolysed to produce the acetic acid product, in the process generating hydroiodic acid, which is in turn used to convert the starting material (methanol) to the methyl iodide used in the first step. References Organometallic chemistry Industrial processes Catalysis Organoiridium compounds BP
Cativa process
[ "Chemistry" ]
481
[ "Catalysis", "Chemical kinetics", "Organometallic chemistry" ]
1,944,381
https://en.wikipedia.org/wiki/Falling%20film%20evaporator
A falling film evaporator is an industrial device to concentrate solutions, especially those with heat-sensitive components. The evaporator is a special type of heat exchanger. General In general, evaporation takes place inside vertical tubes, but there are also applications where the process fluid evaporates on the outside of horizontal or vertical tubes. In all cases, the process fluid to be evaporated flows downwards by gravity as a continuous film. The fluid creates a film along the tube walls, progressing downwards (falling) - hence the name. The fluid distributor has to be designed carefully in order to maintain an even liquid distribution for all tubes along which the solution falls. Typical distributors are usually called ferrules due to their concentric shape. In the majority of applications the heating medium is placed on the outside of the tubes. High heat transfer coefficients are required in order to achieve equally balanced heat transfer resistances; therefore, condensing steam is commonly used as a heating medium. For internally evaporating fluids, separation between the liquid phase (the solution) and the gaseous phase takes place inside the tubes. To satisfy conservation of mass as this process proceeds, the downward vapor velocity increases, raising the shear force acting on the liquid film and therefore also the velocity of the solution. The result can be a high film velocity of a progressively thinner film, resulting in increasingly turbulent flow. The combination of these effects allows very high heat transfer coefficients. The heat transfer coefficient on the evaporating side of the tube is mostly determined by the hydrodynamic flow conditions of the film. For low mass flows or high viscosities the film flow can be laminar, in which case heat transfer is controlled purely by conduction through the film; in this regime the heat transfer coefficient decreases with increased mass flow. With increased mass flow the film becomes wavy laminar and then turbulent; under turbulent conditions the heat transfer coefficient increases with increased flow. Evaporation takes place at very low mean temperature differences between heating medium and process stream, typically between 3 and 6 K, so these devices are ideal for heat recovery in multi-stage processes. A further advantage of the falling film evaporator is the very short residence time of the liquid and the absence of superheating of the same. Not considering the vapour separator, the residence time inside the tubes is measured in seconds, making it ideal for heat-sensitive products such as milk, fruit juice, pharmaceuticals, and many others. Falling film evaporators are also characterised by very low pressure drops; therefore, they are often used in deep vacuum applications. Fouling Due to the intimate contact of the liquid with the heating surface, these evaporators are sensitive to fouling from precipitating solids; the low liquid velocity at the inlet is usually not sufficient to perform an effective self-cleaning of the tubes. Falling film evaporators are therefore used with clean, non-precipitating liquids. A typical application in the chemical industry is the concentration of caustic soda.
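For the laminar, conduction-controlled regime described above, classical Nusselt falling-film theory gives simple estimates of film thickness and heat transfer coefficient. The sketch below uses illustrative, water-like property values; real designs rely on empirical correlations, especially for the wavy and turbulent regimes.

def falling_film_laminar(gamma, mu, rho, k, g=9.81):
    # gamma: mass flow per unit wetted perimeter, kg/(m s); mu: viscosity, Pa s;
    # rho: density, kg/m^3; k: thermal conductivity, W/(m K)
    Re    = 4.0 * gamma / mu                                  # film Reynolds number
    delta = (3.0 * mu * gamma / (rho**2 * g)) ** (1.0 / 3.0)  # laminar film thickness, m
    h     = k / delta                                         # conduction across the film
    return Re, delta, h

Re, delta, h = falling_film_laminar(gamma=0.05, mu=1e-3, rho=1000.0, k=0.6)
print(f"Re = {Re:.0f}, film thickness = {delta * 1e3:.2f} mm, h = {h:.0f} W/(m^2 K)")
# thicker films (higher gamma) conduct worse: in the laminar regime h falls
# as the mass flow rises, exactly the trend noted above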
Falling film evaporators versus flooded evaporators Falling film evaporators have a number of advantages over their flooded evaporator counterparts. They require a lower charge, as the entire shell (in the case of horizontal evaporators) or all the tubes (in the case of a vertical evaporator) need not be filled with liquid, since a thin film is used to cover the surfaces instead. In industries such as heating and air-conditioning this can save significant money due to the high cost of a refrigerant charge. Falling film evaporators also show improved heat transfer characteristics over their flooded counterparts, particularly in cases with low heat flux. A number of disadvantages exist, the primary one being the comparative lack of understanding of falling film evaporators, particularly horizontal falling film evaporators. Furthermore, the fluid distribution for horizontal falling film evaporators is a challenge, as the performance is severely limited if an uneven distribution of film over the tubes is created. Horizontal versus vertical falling film evaporators Horizontal falling film evaporators have a number of potential advantages over their vertical counterparts in the petrochemical industry, such as the ability to use tubes with external enhancements; while internally-enhanced tubes are available for vertical falling film evaporators, external enhancements are typically superior for boiling applications. The chief disadvantage of horizontal falling film evaporators is that if a corrosive or fouling liquid is to be evaporated, it will have to be placed on the shell side. This is against best practice, as it is easier to clean fouling on the inside of tubes than on the outside. See also Climbing and falling film plate evaporator References External links Falling Film Evaporators Wolverine Tube Heat Transfer Databooks Evaporators
Falling film evaporator
[ "Chemistry", "Engineering" ]
985
[ "Chemical equipment", "Distillation", "Evaporators" ]
1,944,711
https://en.wikipedia.org/wiki/Indel
Indel (insertion-deletion) is a molecular biology term for an insertion or deletion of bases in the genome of an organism. Indels ≥ 50 bases in length are classified as structural variants. In coding regions of the genome, unless the length of an indel is a multiple of 3, it will produce a frameshift mutation. For example, a common microindel which results in a frameshift causes Bloom syndrome in Jewish and Japanese populations. Indels can be contrasted with point mutations: an indel inserts or deletes nucleotides from a sequence, while a point mutation is a form of substitution that replaces one of the nucleotides without changing the overall number in the DNA. Indels can also be contrasted with tandem base mutations (TBM), which may result from fundamentally different mechanisms. A TBM is defined as a substitution at adjacent nucleotides (primarily substitutions at two adjacent nucleotides, but substitutions at three adjacent nucleotides have been observed). Indels, being either insertions or deletions, can be used as genetic markers in natural populations, especially in phylogenetic studies. It has been shown that genomic regions with multiple indels can also be used for species-identification procedures. An indel change of a single base pair in the coding part of an mRNA results in a frameshift during mRNA translation that could lead to an inappropriate (premature) stop codon in a different frame. Indels that are not multiples of 3 are particularly uncommon in coding regions but relatively common in non-coding regions. There are approximately 192-280 frameshifting indels in each person. Indels are likely to represent between 16% and 25% of all sequence polymorphisms in humans. In most known genomes, including the human genome, indel frequency tends to be markedly lower than that of single nucleotide polymorphisms (SNP), except near highly repetitive regions, including homopolymers and microsatellites. The term "indel" has been co-opted in recent years by genome scientists for use in the sense described above. This is a change from its original use and meaning, which arose from systematics. In systematics, researchers could find differences between sequences, such as from two different species, but it was impossible to infer whether one species lost the sequence or the other species gained it. For example, species A has a run of 4 G nucleotides at a locus and species B has 5 G's at the same locus. If the ancestral state is unknown, one cannot tell whether species A lost one G (a "deletion" event) or species B gained one G (an "insertion" event). When one cannot infer the phylogenetic direction of the sequence change, the sequence change event is referred to as an "indel". Using passenger-immunoglobulin mouse models, a study found that the most prevalent indel events are the activation-induced cytidine deaminase (AID)-dependent ±1-base pair (bp) indels, which can lead to deleterious outcomes, whereas longer in-frame indels were rare outcomes. See also Insertion (genetics) Deletion (genetics) References Mutation Molecular biology
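The reading-frame arithmetic behind the "multiple of 3" rule can be illustrated with a toy translation function (the sequence and the deliberately tiny codon table are hypothetical, for illustration only):

def translate(seq):
    # read codons from the first base until a stop codon; '?' marks codons
    # missing from this deliberately tiny table
    table = {"ATG": "M", "GCT": "A", "GAA": "E", "TGG": "W", "TAA": "*"}
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = table.get(seq[i:i + 3], "?")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

coding = "ATGGCTGAATGGTAA"                       # codons: ATG GCT GAA TGG TAA
print(translate(coding))                         # MAEW (normal product)
print(translate(coding[:3] + "G" + coding[3:]))  # 1-bp insertion: frameshift, garbled
print(translate(coding[:3] + coding[6:]))        # 3-bp deletion: MEW, frame preserved

A 1-bp indel shifts every downstream codon, while a 3-bp indel removes or adds one amino acid and leaves the rest of the frame intact.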
Indel
[ "Chemistry", "Biology" ]
683
[ "Biochemistry", "Molecular biology" ]
1,944,887
https://en.wikipedia.org/wiki/Synovial%20bursa
A synovial bursa, usually simply bursa (plural bursae or bursas), is a small fluid-filled sac lined by synovial membrane with an inner capillary layer of viscous synovial fluid (similar in consistency to that of a raw egg white). It provides a cushion between bones and tendons and/or muscles around a joint. This helps to reduce friction between the bones and allows free movement. Bursae are found around most major joints of the body. Structure Based on location, there are three types of bursa: subcutaneous, submuscular and subtendinous. A subcutaneous bursa is located between the skin and an underlying bone. It allows skin to move smoothly over the bone. Examples include the prepatellar bursa located over the kneecap and the olecranon bursa at the tip of the elbow. A submuscular bursa is found between a muscle and an underlying bone, or between adjacent muscles. These prevent rubbing of the muscle during movements. A large submuscular bursa, the trochanteric bursa, is found at the lateral hip, between the greater trochanter of the femur and the overlying gluteus maximus muscle. A subtendinous bursa is found between a tendon and a bone. Examples include the subacromial bursa that protects the tendon of the shoulder muscle as it passes under the acromion of the scapula, and the suprapatellar bursa that separates the tendon of the large anterior thigh muscle from the distal femur just above the knee. An adventitious bursa is a non-native bursa. When any surface of the body is subjected to repeated stress, an adventitious bursa develops under it. Examples are student's elbow and bunion. Clinical significance Infection or irritation of a bursa leads to bursitis (inflammation of a bursa). The general term for disease of bursae is "bursopathy." Etymology Bursa is Medieval Latin for "purse", so named for the bag-like function of an anatomical bursa. Bursae or bursas is its plural form. See also Bursa of Fabricius (a lymphatic organ in birds) Bursectomy Knee bursae Shoulder joint#Bursae External links Diagram of elbow with olecranon bursa References Source text Soft tissue Musculoskeletal system
Synovial bursa
[ "Biology" ]
514
[ "Organ systems", "Musculoskeletal system" ]
1,944,940
https://en.wikipedia.org/wiki/Internal%20fertilization
Internal fertilization is the union of an egg and sperm cell during sexual reproduction inside the female body. Internal fertilization, unlike its counterpart, external fertilization, brings more control over reproduction to the female. For internal fertilization to happen there needs to be a method for the male to introduce the sperm into the female's reproductive tract. Most taxa that reproduce by internal fertilization are gonochoric. Male mammals, reptiles, and certain other vertebrates transfer sperm into the female's vagina or cloaca through an intromittent organ during copulation. In most birds, the cloacal kiss is used, the two animals pressing their cloacas together while transferring sperm. Salamanders, spiders, some insects and some molluscs undertake internal fertilization by transferring a spermatophore, a bundle of sperm, from the male to the female. Following fertilization, the embryos are laid as eggs in oviparous organisms, or continue to develop inside the reproductive tract of the mother to be born later as live young in viviparous organisms. Evolution of internal fertilization Internal fertilization evolved many times in animals. According to David B. Dusenbery, the features associated with internal fertilization most likely resulted from oogamy. It has been argued that internal fertilization evolved because of sexual selection through sperm competition. In amphibians, internal fertilization evolved from external fertilization. Methods of internal fertilization Fertilization that takes place inside the female body is called internal fertilization; in animals it is accomplished in the following different ways: Copulation, which involves the insertion of the penis or other intromittent organ into the vagina (in most mammals) or into the cloaca in monotremes, most reptiles, some birds, the tailed frog, some fish and the extinct dinosaurs, as well as in some invertebrate animals. Cloacal kiss, in which the two animals touch their cloacae together in order to transfer the sperm of the male to the female. It is used in most birds and in the tuatara, which do not have an intromittent organ. Via spermatophore, a sperm-containing cap placed by the male in the female's cloaca. Usually, the sperm is stored in spermathecae on the roof of the cloaca until it is needed at the time of oviposition. It is used by some salamander and newt species, by the Arachnida, some insects and some mollusks. In sponges, sperm cells are released into the water to fertilize ova that are retained by the female; some species of sponge instead participate in external fertilization, where the ova are released. Expulsion At some point, the growing egg or offspring must be expelled. There are several possible modes of reproduction. These are traditionally classified as follows: Oviparity, as in most invertebrates and reptiles, monotremes, dinosaurs and all birds, which lay eggs that continue to develop after being laid and hatch later. Viviparity, as in almost all mammals (such as whales, kangaroos and humans), which bear their young live. The developing young spend proportionately more time within the female's reproductive tract. The young are later released to survive on their own, with varying amounts of help from the parent(s) of the species. Ovoviviparity, as in the garter snake, most vipers, and the Madagascar hissing cockroach, which have eggs (with shells) that hatch as they are laid, making it resemble live birth.
Advantages to internal fertilization Internal fertilization allows for: Female mate choice, which gives the female the ability to choose her partner before and after mating. The female cannot do this with external fertilization, because she may have limited control of who is fertilizing her eggs and when they are being fertilized. Choosing the conditions of reproduction, such as location and time. In external fertilization a female can only choose the time at which she releases her eggs, not when they are fertilized. This is similar, in some ways, to cryptic female choice. Egg protection on dry land. While oviparous animals have either a jelly-like ovum or a hard shell enclosing their egg, internally fertilizing animals grow their eggs and offspring inside themselves. This offers protection from predators and from dehydration on land, and allows for a higher chance of survival given the regulated temperature and protected area within the mother. Disadvantages to internal fertilization Gestation adds risks for the mother, which arise from its extra energy demands. In most cases internal fertilization also goes along with sexual intercourse, which comes with risks of its own: mating is infrequent and only works well during peak fertility, whereas externally fertilizing animals are able to release eggs and sperm, usually into the water, without needing a specific partner in order to reproduce. Fewer offspring are produced through internal fertilization in comparison to external fertilization, both because the mother cannot hold and grow as many offspring as she could lay eggs, and because she cannot obtain enough resources to provide for a larger number of offspring. Fish Some species of fish, such as guppies, have the ability to fertilize internally: the male inserts a tubular fin into the female's reproductive opening and deposits sperm into her reproductive tract. Other species of fish are mouthbrooders, which means that one fish holds the eggs in its mouth for incubation. Many cichlids are maternal mouthbrooders: the female lays the eggs and picks them up in her mouth, and the males then encourage the female to open her mouth so they can fertilize the eggs while they are held there. Internal fertilization in cartilaginous fishes shares the same evolutionary origin as that of the internally fertilizing reptiles, birds, and mammals. In these internally fertilizing fish, there is also no noticeable change in tonality while the sperm is transferred to the reproductive tract. Amphibians Most amphibians have external fertilization, but there are exceptions, such as salamanders, which mostly have internal fertilization. Salamanders do not use intercourse for sexual reproduction due to their lack of an external penis. Rather, the male salamander produces an encased capsule of sperm and nutrients called a spermatophore. The male deposits a spermatophore on the ground and the female picks it up with her cloaca (a combined urinary and genital opening) and fertilizes her eggs with it. Over time, amphibians have been found to be evolving toward increasing use of internal fertilization. Among vertebrates it became common for the higher groups to fertilize internally because of the transition from water to land during vertebrate evolution.
Internal fertilization gives these amphibians the advantage of selecting a time and place for reproduction. Birds Most birds do not have penises, but achieve internal fertilization via cloacal contact (the "cloacal kiss"). In these birds, males and females contact their cloacas together, typically briefly, and transfer sperm to the female. However, waterfowl such as ducks and geese have penises and are able to use them for internal fertilization. While birds have internal fertilization, most species no longer have phallus structures, making them the only vertebrate taxon to lack the phallus while still practising internal fertilization. See also Insemination Fertilization References Reproduction in animals
Internal fertilization
[ "Biology" ]
1,656
[ "Reproduction in animals", "Behavior", "Reproduction" ]
1,945,043
https://en.wikipedia.org/wiki/Whorl
A whorl is an individual circle, oval, volution or equivalent in a whorled pattern, which consists of a spiral or multiple concentric objects (including circles, ovals and arcs). In nature For mollusc whorls, the body whorl in a mollusc shell is the most recently formed whorl of a spiral shell, terminating in the aperture. Artificial objects See also Whirl (disambiguation) References External links Patterns Geometric shapes
Whorl
[ "Mathematics" ]
100
[ "Geometric shapes", "Mathematical objects", "Geometric objects", "Geometry", "Geometry stubs" ]
1,945,275
https://en.wikipedia.org/wiki/Wigner%20effect
The Wigner effect (named for its discoverer, Eugene Wigner), also known as the discomposition effect or Wigner's disease, is the displacement of atoms in a solid caused by neutron radiation. Any solid can display the Wigner effect. The effect is of most concern in neutron moderators, such as graphite, intended to reduce the speed of fast neutrons, thereby turning them into thermal neutrons capable of sustaining a nuclear chain reaction involving uranium-235. Cause To cause the Wigner effect, neutrons that collide with the atoms in a crystal structure must have enough energy to displace them from the lattice. This amount (the threshold displacement energy) is approximately 25 eV. A neutron's energy can vary widely, but it is not uncommon to have energies up to and exceeding 10 MeV (10,000,000 eV) in the centre of a nuclear reactor. A neutron with a significant amount of energy will create a displacement cascade in a matrix via elastic collisions. For example, a 1 MeV neutron striking graphite will create 900 displacements. Not all displacements will create defects, because some of the struck atoms will find and fill the vacancies that were either small pre-existing voids or vacancies newly formed by the other struck atoms. Frenkel defect The atoms that do not find a vacancy come to rest in non-ideal locations; that is, not along the symmetrical lines of the lattice. These interstitial atoms (or simply "interstitials") and their associated vacancies are a Frenkel defect. Because these atoms are not in the ideal location, they have a Wigner energy associated with them, much as a ball at the top of a hill has gravitational potential energy. When a large number of interstitials have accumulated, they risk releasing all of their energy suddenly, creating a rapid, great increase in temperature. Sudden, unplanned increases in temperature can present a large risk for certain types of nuclear reactors with low operating temperatures. One such release was the indirect cause of the Windscale fire. Accumulation of energy in irradiated graphite has been recorded as high as 2.7 kJ/g, enough to raise the temperature by thousands of degrees, but is typically much lower than this. Not linked to Chernobyl disaster Despite some reports, Wigner energy buildup had nothing to do with the cause of the Chernobyl disaster: this reactor, like all contemporary power reactors, operated at a high enough temperature to allow the displaced graphite structure to realign itself before any potential energy could be stored. Wigner energy may have played some part following the prompt critical neutron spike, when the accident entered the graphite fire phase of events. Dissipation of Wigner energy A buildup of Wigner energy can be relieved by heating the material. This process is known as annealing. In graphite this occurs at about 250 °C. Intimate Frenkel pairs In 2003, it was postulated that Wigner energy can be stored by the formation of metastable defect structures in graphite. Notably, the large energy release observed at 200–250 °C has been described in terms of a metastable interstitial-vacancy pair. The interstitial atom becomes trapped on the lip of the vacancy, and there is a barrier for it to recombine to give perfect graphite. Citations General references Glasstone, Samuel, and Alexander Sesonske [1963] (1994). Nuclear Reactor Engineering. Boston: Springer. Condensed matter physics Crystallographic defects Neutron Nuclear technology Physical phenomena Radiation effects
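A back-of-the-envelope check of the figures above (a sketch only; graphite's room-temperature specific heat, roughly 0.71 J/(g·K), is assumed constant here, which overstates the rise because the specific heat grows with temperature):

def adiabatic_temperature_rise(stored_energy_J_per_g, c_p_J_per_gK=0.71):
    # sudden release of stored Wigner energy heats the graphite adiabatically
    return stored_energy_J_per_g / c_p_J_per_gK

print(f"{adiabatic_temperature_rise(2700.0):.0f} K rise for the record 2.7 kJ/g")
print(f"{adiabatic_temperature_rise(100.0):.0f} K rise for an illustrative 100 J/g")

The first figure, close to 3800 K, is consistent with the "thousands of degrees" quoted above; the second shows why more typical stored energies are far less dramatic, though still hazardous in a low-temperature reactor.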
Wigner effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
738
[ "Physical phenomena", "Crystallographic defects", "Phases of matter", "Materials science", "Nuclear technology", "Crystallography", "Radiation", "Condensed matter physics", "Nuclear physics", "Radiation effects", "Materials degradation", "Matter" ]
1,945,347
https://en.wikipedia.org/wiki/Link%20budget
A link budget is an accounting of all of the power gains and losses that a communication signal experiences in a telecommunication system; from a transmitter, through a communication medium such as radio waves, cable, waveguide, or optical fiber, to the receiver. It is an equation giving the received power from the transmitter power, after the attenuation of the transmitted signal due to propagation, as well as the antenna gains and feedline and other losses, and amplification of the signal in the receiver or any repeaters it passes through. A link budget is a design aid, calculated during the design of a communication system to determine the received power, to ensure that the information is received intelligibly with an adequate signal-to-noise ratio. Randomly varying channel gains such as fading are taken into account by adding some margin depending on the anticipated severity of their effects. The amount of margin required can be reduced by the use of mitigating techniques such as antenna diversity or multiple-input and multiple-output (MIMO). A simple link budget equation looks like this: Received power (dBm) = transmitted power (dBm) + gains (dB) − losses (dB) Power levels are expressed in decibel-milliwatts (dBm); power gains and losses are expressed in decibels (dB), which is a logarithmic measurement, so adding decibels is equivalent to multiplying the actual power ratios. In radio systems For a line-of-sight radio system, the primary source of loss is the decrease of the signal power as it spreads over an increasing area while it propagates, proportional to the square of the distance (geometric spreading). Transmitting antennas can be omnidirectional, directional, or sectorial, depending on the way in which the antenna power is oriented. An omnidirectional antenna will distribute the power equally in every direction of a plane, so the radiation pattern has the shape of a sphere squeezed between two parallel flat surfaces. They are widely used in many applications, for instance in Wi-Fi access points. Directional antennas concentrate the power in a specific direction, called the boresight, and are widely used in point-to-point applications, like wireless bridges and satellite communications. Sectorial antennas concentrate the power in a wider region, typically embracing 45°, 60°, 90° or 120°. They are routinely deployed on cellular towers. Simplifications needed The free space loss is easily calculated using the Friis transmission equation, which states that the loss is proportional to the square of the distance and the square of the frequency. Additional losses are incurred in most radio links, including atmospheric attenuation by gases, rain, fog and clouds; fading due to variations of the channel; multipath losses; and antenna misalignment. In non-line-of-sight links, diffraction and reflection losses are the most important, since the direct path is not available. Transmission line and polarization loss In practical situations (deep space telecommunications, weak signal DXing etc.) other sources of signal loss must also be accounted for: The transmitting and receiving antennas may be partially cross-polarized. The cabling between the radios and antennas may introduce significant additional loss. Fresnel zone losses due to a partially obstructed line of sight path. Doppler shift induced signal power losses in the receiver.
Endgame If the estimated received power is sufficiently large (typically relative to the receiver sensitivity), which may be dependent on the communications protocol in use, the link will be useful for sending data. The amount by which the received power exceeds the receiver sensitivity is called the link margin. Equation A link budget equation including all these effects, expressed logarithmically, might look like this: P_RX = P_TX + G_TX − L_TX − L_FS − L_M + G_RX − L_RX where: P_RX, received power (dBm); P_TX, transmitter output power (dBm); G_TX, transmitter antenna gain (dBi); L_TX, transmitter losses (coax, connectors, ...) (dB); L_FS, path loss, usually free-space loss (dB); L_M, miscellaneous losses (fading margin, body loss, polarization mismatch, other losses, ...) (dB); G_RX, receiver antenna gain (dBi); L_RX, receiver losses (coax, connectors, ...) (dB). The loss due to propagation between the transmitting and receiving antennas, often called the path loss, can be written in dimensionless form by normalizing the distance to the wavelength: L_FS (dB) = 20 log10(4πd/λ) (where distance d and wavelength λ are in the same units). When substituted into the link budget equation above, the result is the logarithmic form of the Friis transmission equation. In some cases, it is convenient to consider the loss due to distance and wavelength separately, but in that case, it is important to keep track of which units are being used, as each choice involves a differing constant offset. Some examples are provided below. L_FS (dB) ≈ 32.45 dB + 20 log10[frequency (MHz)] + 20 log10[distance (km)] L_FS (dB) ≈ −27.55 dB + 20 log10[frequency (MHz)] + 20 log10[distance (m)] L_FS (dB) ≈ 36.6 dB + 20 log10[frequency (MHz)] + 20 log10[distance (miles)] These alternative forms can be derived by substituting wavelength with the ratio of propagation velocity (c, approximately 3×10^8 m/s) divided by frequency, and by inserting the proper conversion factors between km or miles and meters, and between MHz and Hz (1/s). Non-line-of-sight radio Because of building obstructions such as walls and ceilings, propagation losses indoors can be significantly higher. This occurs because of a combination of attenuation by walls and ceilings, and blockage due to equipment, furniture, and even people. For example, a "2 by 4" wood stud wall with drywall on both sides results in about 6 dB loss per wall at 2.4 GHz. Older buildings may have even greater internal losses than new buildings due to materials and line-of-sight issues. Experience has shown that line-of-sight propagation holds only for about the first 3 meters. Beyond 3 meters, propagation losses indoors can increase at up to 30 dB per 30 meters in dense office environments. This is a good rule of thumb, in that it is conservative (it overstates path loss in most cases). Actual propagation losses may vary significantly depending on building construction and layout. The attenuation of the signal is highly dependent on the frequency of the signal. In waveguides and cables Guided media such as coaxial and twisted-pair electrical cable, radio-frequency waveguide and optical fiber have losses that are exponential with distance, so the path loss is expressed in dB per unit distance. This means that there is always a crossover distance beyond which the loss in a guided medium will exceed that of a line-of-sight path of the same length. Long-distance fiber-optic communication became practical only with the development of ultra-transparent glass fibers. A typical path loss for single-mode fiber is 0.2 dB/km, far lower than any other guided medium. 
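As a quick illustration of the equations above, here is a minimal Python sketch (all parameter values are illustrative assumptions, not figures from this article) that computes free-space path loss from the Friis relation and then evaluates the logarithmic link budget:

```python
import math

def free_space_path_loss_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss (Friis), in dB: 20*log10(4*pi*d/lambda)."""
    c = 3.0e8  # propagation velocity, m/s (approximate)
    wavelength = c / frequency_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

def received_power_dbm(p_tx_dbm, g_tx_dbi, l_tx_db, l_fs_db, l_m_db, g_rx_dbi, l_rx_db):
    """Logarithmic link budget: P_RX = P_TX + G_TX - L_TX - L_FS - L_M + G_RX - L_RX."""
    return p_tx_dbm + g_tx_dbi - l_tx_db - l_fs_db - l_m_db + g_rx_dbi - l_rx_db

# Example with assumed values: a 2.4 GHz point-to-point link over 5 km.
l_fs = free_space_path_loss_db(5_000, 2.4e9)      # ~114 dB
p_rx = received_power_dbm(p_tx_dbm=20, g_tx_dbi=12, l_tx_db=2,
                          l_fs_db=l_fs, l_m_db=10, g_rx_dbi=12, l_rx_db=2)
print(f"Path loss: {l_fs:.1f} dB, received power: {p_rx:.1f} dBm")
```

Rewriting the same formula with distance in km and frequency in MHz reproduces the 32.45 dB constant offset quoted above.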
Earth–Moon–Earth communications Link budgets are important in Earth–Moon–Earth (EME) communications. As the albedo of the Moon is very low (maximally 12% but usually closer to 7%), and the path loss over the 770,000-kilometre return distance is extreme (around 250 to 310 dB, depending on the VHF–UHF band used, modulation format and Doppler shift effects), high power (more than 100 watts) and high-gain antennas (more than 20 dB) must be used. In practice, this limits the use of this technique to the spectrum at VHF and above. The Moon must be above the horizon in order for EME communications to be possible. Voyager program The Voyager program spacecraft have the highest known path loss (308 dB as of 2002) and lowest link budgets of any telecommunications circuit. The Deep Space Network has been able to maintain the link at a higher than expected bitrate through a series of improvements, such as increasing the antenna size from 64 m to 70 m for a 1.2 dB gain, and upgrading to low-noise electronics for a 0.5 dB gain in 2000–2001. During the Neptune flyby, in addition to the 70-m antenna, two 34-m antennas and twenty-seven 25-m antennas were used to increase the gain by 5.6 dB, providing additional link margin to be used for a 4× increase in bitrate. See also Antenna gain-to-noise-temperature Friis transmission equation Isotropic radiator Multipath propagation Optical power budget Radiation pattern RF planning References External links Link budget calculator for wireless LAN Point-to-point link budget calculator MUOS Link budget calculator/planner Example LTE, GSM and UMTS Link Budgets Python link budget calculator for satellites Small satellites link budget (with python examples) Budgets Telecommunications engineering Radio frequency propagation
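Since decibels are logarithmic, a power ratio r corresponds to 10·log10(r) dB. A quick sanity check of the Voyager figures above in Python (note that the quoted 1.2 dB for the 64 m to 70 m upgrade exceeds naive aperture-area scaling, so it presumably includes other improvements as well):

```python
import math

def ratio_to_db(ratio):
    """Convert a power ratio to decibels."""
    return 10 * math.log10(ratio)

print(f"A 4x bitrate increase needs ~{ratio_to_db(4):.1f} dB of margin")    # ~6.0 dB
print(f"5.6 dB of arraying gain is a {10**(5.6/10):.1f}x power ratio")      # ~3.6x
print(f"64 m -> 70 m aperture-area gain: ~{ratio_to_db((70/64)**2):.1f} dB") # ~0.8 dB
```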
Link budget
[ "Physics", "Engineering" ]
1,822
[ "Physical phenomena", "Telecommunications engineering", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves", "Electrical engineering" ]
21,056,112
https://en.wikipedia.org/wiki/Powder%20mixture
A powder is an assembly of dry particles dispersed in air. If two different powders are mixed perfectly, theoretically, three types of powder mixtures can be obtained: the random mixture, the ordered mixture or the interactive mixture. Different powder types A powder is called free-flowing if the particles do not stick together. If particles are cohesive, they cling to one another to form aggregates. The significance of cohesion increases with decreasing size of the powder particles; particles smaller than 100 μm are generally cohesive. Random mixture A random mixture can be obtained if two different free-flowing powders of approximately the same particle size, density and shape are mixed (see figure A). Only primary particles are present in this type of mixture, i.e., the particles are not cohesive and do not cling to one another. The mixing time will determine the quality of the random mixture. However, if powders with particles of different size, density or shape are mixed, segregation can occur. Segregation causes separation of the powders as, for example, lighter particles tend to travel to the top of the mixture whereas heavier particles collect at the bottom. Ordered mixture The term ordered mixture was first introduced to describe a completely homogeneous mixture in which the two components adhere to each other to form ordered units. However, a completely homogeneous mixture is only achievable in theory, and other denotations were introduced later, such as adhesive mixture or interactive mixture. Interactive mixture If a free-flowing powder is mixed with a cohesive powder, an interactive mixture can be obtained. The cohesive particles adhere to the free-flowing particles (now called carrier particles) to form interactive units, as shown in figure B. An interactive mixture should not contain free aggregates of the cohesive powder, which means that all small particles must be adhered to the larger ones. The difference from an ordered mixture is that the carrier particles need not all be the same size, and a different number of small particles may be attached to each one. A narrow size range of the carrier particles is preferred, to avoid segregation of the interactive units. In practice, a combination of a random mixture and an interactive mixture may be obtained, consisting of carrier particles, aggregates of the small particles and interactive units. Formation The formation of interactive mixtures cannot automatically be assumed, especially if smaller carrier particles or a greater proportion of fine particles are used. If an interactive mixture is to be formed, enough force must be exerted by the carrier particles during dry mixing to break up the aggregates formed by the fine particles. Adhesion can then be achieved if the adhesive forces exceed the gravitational forces that would otherwise lead to separation of the constituents. Applications Interactive mixtures can, for example, be used in the manufacturing of tablets, enhancing the dissolution of poorly soluble drugs, or for nasal administration. One common application is inhalation therapy, where the concept has been used in the development of alternatives to pressurised metered-dose inhalers. The quality by design (QbD) initiative of the U.S. Food and Drug Administration requires a process to be controllable and predictable. Theories and methods to characterize powder mixtures have facilitated the implementation of QbD approaches to predict flow properties of powder mixtures. 
For example, a QbD approach has been shown to be useful for predicting flow performance and finding the design space during formulation development. References External links Improving Powder Flow During Pharmaceutical Operations, an Rx Times article Granularity of materials Food technology Materials science Routes of administration Mixture
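To make the adhesion condition in the Formation section concrete, a back-of-the-envelope Python sketch compares the sphere-plate van der Waals attraction, F = A·R/(6·z²), with particle weight. The Hamaker constant, contact separation and density below are assumed, typical-order values, and real (rough) particles adhere far more weakly than this idealized model suggests:

```python
import math

A = 1e-19     # Hamaker constant, J (assumed typical value)
z = 4e-10     # contact separation, m (assumed)
rho = 1500.0  # particle density, kg/m^3 (assumed)
g = 9.81      # gravitational acceleration, m/s^2

def van_der_waals_force(radius_m):
    """Sphere-plate van der Waals approximation: F = A*R / (6*z^2)."""
    return A * radius_m / (6 * z ** 2)

def weight(radius_m):
    """Gravitational force on a spherical particle."""
    return (4 / 3) * math.pi * radius_m ** 3 * rho * g

for r in (2.5e-6, 50e-6, 500e-6):  # 5 um, 100 um, 1 mm diameter particles
    print(f"d = {2*r*1e6:.0f} um: adhesion/weight = {van_der_waals_force(r)/weight(r):.1e}")
```

With these numbers, adhesion exceeds weight by roughly five orders of magnitude for a 5 μm particle but only by a factor of a few at 1 mm, consistent with the statement that particles below about 100 μm behave cohesively.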
Powder mixture
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
711
[ "Pharmacology", "Applied and interdisciplinary physics", "Routes of administration", "Materials science", "Materials", "Powders", "nan", "Particle technology", "Granularity of materials", "Matter" ]
21,057,904
https://en.wikipedia.org/wiki/Japanese%20encephalitis%20vaccine
Japanese encephalitis vaccine is a vaccine that protects against Japanese encephalitis. The vaccines are more than 90% effective. The duration of protection with the vaccine is not clear, but its effectiveness appears to decrease over time. Doses are given either by injection into a muscle or just under the skin. It is recommended as part of routine immunizations in countries where the disease is a problem. One or two doses are given depending on the version of the vaccine. Extra doses are not typically needed in areas where the disease is common. In those with HIV/AIDS or those who are pregnant, an inactivated vaccine should be used. Immunization of travellers who plan to spend time outdoors in areas where the disease is common is recommended. The vaccines are relatively safe. Pain and redness may occur at the site of injection. Fifteen different vaccines are available: some are based on recombinant DNA techniques, others on weakened virus, and others on inactivated virus. The Japanese encephalitis vaccines first became available in the 1930s. It is on the World Health Organization's List of Essential Medicines. Efficacy Randomized controlled trials of JE-VAX have shown that a two-dose schedule provides protection for one year. History Japanese encephalitis vaccines first became available in the 1930s. One of them was an inactivated mouse brain-derived vaccine (the Nakayama and/or Beijing-1 strain), made by BIKEN and marketed by Sanofi Pasteur as JE-VAX, until production ceased in 2005. The other was an inactivated vaccine cultivated on primary hamster kidney cells (the Beijing-3 strain). The Beijing-3 strain was the main variant of the vaccine used in the People's Republic of China from 1968 until 2005. Three second-generation vaccines have entered markets since then: SA14-14-2, IC51 and ChimeriVax-JE. The live-attenuated SA14-14-2 strain was introduced in China in 1988. It is much cheaper than alternative vaccines, and is administered to 20 million Chinese children each year. A purified, formalin-inactivated, whole-virus vaccine known as IC51 (marketed in Australia and New Zealand as JESPECT and elsewhere as IXIARO) was licensed for use in the United States, Australia, and Europe during the spring of 2009. It is based on an SA14-14-2 strain and cultivated in Vero cells. In September 2012, the Indian firm Biological E. Limited launched an inactivated cell-culture-derived vaccine based on the SA 14-14-2 strain, which was developed in a technology transfer agreement with Intercell and is a thiomersal-free vaccine. Another vaccine, a live-attenuated recombinant chimeric virus vaccine developed using the yellow fever virus, known as ChimeriVax-JE (marketed as IMOJEV), was licensed for use in Australia in August 2010 and in Thailand in December 2012. References External links Inactivated vaccines Vaccines World Health Organization essential medicines (vaccines) Wikipedia medicine articles ready to translate
Japanese encephalitis vaccine
[ "Biology" ]
623
[ "Vaccination", "Vaccines" ]
21,064,035
https://en.wikipedia.org/wiki/Locks%20with%20ordered%20sharing
In databases and transaction processing, the term locks with ordered sharing refers to several variants of the two-phase locking (2PL) concurrency control protocol generated by relaxing the blocking semantics of locks upon conflicts: rather than blocking a conflicting lock request, the lock is granted, provided the conflicting operations are executed in the order in which the locks were acquired. Further softening of the lock semantics eliminates thrashing. See also Autonomic computing References D. Agrawal, A. El Abbadi, A. E. Lang: The Performance of Protocols Based on Locks with Ordered Sharing, IEEE Transactions on Knowledge and Data Engineering, Volume 6, Issue 5, October 1994, pp. 805–818. Mahmoud, H. A., Arora, V., Nawab, F., Agrawal, D., & El Abbadi, A. (2014). Maat: Effective and scalable coordination of distributed transactions in the cloud. Proceedings of the VLDB Endowment, 7(5), 329–340. Data management Databases Concurrency control Transaction processing
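The following Python toy (a sketch of the general idea, not the protocol from the referenced paper) illustrates the relaxed semantics: conflicting lock requests are granted immediately, and each conflict is recorded as an ordering constraint that the scheduler must respect when executing the operations:

```python
from collections import defaultdict

class OrderedSharingLockTable:
    """Toy lock table illustrating ordered sharing: conflicting locks are
    granted rather than blocked, but each grant records an ordering
    constraint that the execution scheduler must respect."""

    CONFLICTS = {("read", "write"), ("write", "read"), ("write", "write")}

    def __init__(self):
        self.holders = defaultdict(list)  # item -> [(txn, mode), ...] in grant order
        self.order_constraints = []       # (earlier_txn, later_txn) pairs

    def acquire(self, txn, item, mode):
        for holder, held_mode in self.holders[item]:
            if holder != txn and (held_mode, mode) in self.CONFLICTS:
                # Ordered sharing: grant anyway, but fix the execution order.
                self.order_constraints.append((holder, txn))
        self.holders[item].append((txn, mode))

    def release(self, txn, item):
        self.holders[item] = [h for h in self.holders[item] if h[0] != txn]

table = OrderedSharingLockTable()
table.acquire("T1", "x", "write")
table.acquire("T2", "x", "read")  # granted immediately; T1's write must run first
print(table.order_constraints)    # [('T1', 'T2')]
```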
Locks with ordered sharing
[ "Technology" ]
186
[ "Data management", "Data" ]
21,065,169
https://en.wikipedia.org/wiki/Cartan%E2%80%93Eilenberg%20resolution
In homological algebra, the Cartan–Eilenberg resolution is, in a sense, a resolution of a chain complex. It can be used to construct hyper-derived functors. It is named in honor of Henri Cartan and Samuel Eilenberg. Definition Let $\mathcal{A}$ be an Abelian category with enough projectives, and let $A_{*}$ be a chain complex with objects in $\mathcal{A}$. Then a Cartan–Eilenberg resolution of $A_{*}$ is an upper half-plane double complex $P_{*,*}$ (i.e., $P_{p,q} = 0$ for $q < 0$) consisting of projective objects of $\mathcal{A}$, together with an "augmentation" chain map $\varepsilon \colon P_{p,0} \to A_p$, such that: If $A_p = 0$, then the $p$-th column is zero, i.e. $P_{p,q} = 0$ for all $q$. For any fixed column $p$: the complex of boundaries $B_p(P, d^{h})$, obtained by applying the horizontal differential to the $(p+1)$-st column of $P$, forms a projective resolution of the boundaries $B_p(A)$; the complex $H_p(P, d^{h})$, obtained by taking the homology of each row with respect to the horizontal differential, forms a projective resolution of the degree-$p$ homology $H_p(A)$. It can be shown that for each $p$, the column $P_{p,*}$ is a projective resolution of $A_p$. There is an analogous definition using injective resolutions and cochain complexes. The existence of Cartan–Eilenberg resolutions can be proved via the horseshoe lemma. Hyper-derived functors Given a right exact functor $F \colon \mathcal{A} \to \mathcal{B}$, one can define the left hyper-derived functors of $F$ on a chain complex $A_{*}$ by: constructing a Cartan–Eilenberg resolution $\varepsilon \colon P_{*,*} \to A_{*}$; applying the functor $F$ to $P_{*,*}$; and taking the homology of the resulting total complex, so that $\mathbb{L}_i F(A_{*}) = H_i\bigl(\operatorname{Tot}(F(P))\bigr)$. Similarly, one can also define right hyper-derived functors for left exact functors. See also Hyperhomology References Homological algebra
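As an illustration of what hyper-derived functors provide, the following LaTeX fragment states the two standard hyperhomology spectral sequences relating them to the ordinary derived functors; this is a sketch of the usual statement for suitably bounded complexes, with notation following common textbook conventions (e.g., Weibel's):

```latex
% Two convergent spectral sequences for the left hyper-derived functors
% of a right exact functor F, applied to a bounded-below complex A_*:
\begin{aligned}
  {}^{I}E^{2}_{p,q}  &= H_{p}\bigl(L_{q}F(A_{*})\bigr) \;\Longrightarrow\; \mathbb{L}_{p+q}F(A_{*}),\\
  {}^{II}E^{2}_{p,q} &= \bigl(L_{p}F\bigr)\bigl(H_{q}(A_{*})\bigr) \;\Longrightarrow\; \mathbb{L}_{p+q}F(A_{*}).
\end{aligned}
```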
Cartan–Eilenberg resolution
[ "Mathematics" ]
329
[ "Fields of abstract algebra", "Mathematical structures", "Category theory", "Homological algebra" ]
21,066,540
https://en.wikipedia.org/wiki/Ion%20gel
An ion gel (or ionogel) is a composite material consisting of an ionic liquid immobilized by an inorganic or a polymer matrix. The material has the quality of maintaining high ionic conductivity while in the solid state. To create an ion gel, the solid matrix is mixed or synthesized in situ with an ionic liquid. A common practice is to utilize a block copolymer which is polymerized in solution with an ionic liquid, so that a self-assembled nanostructure is generated in which the ions are selectively soluble. Ion gels can also be made using non-copolymer polymers such as cellulose, oxides such as silicon dioxide, or refractory materials such as boron nitride. Types of Ion Gels Ion gels can be divided into two broad classes based on the major component of the matrix in the composite: polymeric and inorganic. These broad classes can be further subdivided based on the chemical class of the matrix. Across typical ion gel applications, the matrix components are required to be electrically insulating, so that they separate contacts within a device and the composite supplies ionic conductivity alone. The choice of matrix material affects the ionic conductivity as well as the mechanical properties of the final composite. Inorganic Classes: Non-metal oxide (e.g. SiO2) Functionalized non-metal oxide Metal oxide Ionic-liquid-tethered nanoparticles Metal-organic framework Refractory materials (e.g. boron nitride) Polymeric Classes: Poly(ethylene oxide) Poly(methyl methacrylate) Poly(vinylidene fluoride) Poly(ethylene glycol) diacrylate Poly(acrylonitrile) Although these subtypes cover many materials in this broad class, there are still hybrid materials that fall outside these categorizations. Examples have been demonstrated of ion gels with both polymeric and inorganic materials, providing both flexibility and strength in the final composite. Applications Ion gels have been utilized in many electrical device systems, such as in capacitors as dielectrics, as insulators for field-effect transistors, and as electrolytes for lithium-ion batteries. The solid yet flexible form of ion gels is attractive for modern mobile devices such as formable screens, health monitoring systems, and solid-state batteries. Especially for solid-state battery applications, the high viscosity of ion gels provides sufficient strength to serve as both an electrolyte and a separator between the anode and cathode. In addition, ion gels are sought after in battery applications because the viscoelastic flow of the gel under stress creates a high-quality electrode/electrolyte contact compared to other solid-state electrolytes. Thermal Stability Ion gels can sustain upwards of 300 °C before the onset of degradation. The high-temperature capability is typically limited by the underlying ionic liquid, which can have a wide range of thermal stability but is typically stable to at least 250 °C. This high-temperature stability has been exploited to operate lithium-ion battery cells at lab scale up to 175 °C, which is well beyond the capabilities of current commercial electrolytes. Mechanical Properties Given the variety of ion gels, the mechanical properties of this broad class of materials span a wide range. Mechanical properties are often tailored towards the desired application. Applications that require high flexibility target a highly elastic matrix material such as a cross-linked polymer. 
These types of elastomeric materials offer a high degree of elastic strain with full recovery, which is desirable in wearable devices that will undergo many stress cycles during their lifetime. Additionally, these types of materials can achieve up to 135% strain at failure, indicating a degree of ductility. Applications that require a higher-strength ion gel will often use a refractory matrix to generate composite strengthening. This is particularly desirable in lithium-ion battery applications, which seek to deter the growth of lithium dendrites in the cell that can result in an internal short circuit. A relationship has been established in lithium-ion batteries between high-modulus, strong, solid electrolytes and a reduction in lithium dendrite growth. A strong ion gel composite can thereby improve the longevity of lithium-ion batteries through reduced internal short-circuit failures. The elastic resistance to flow of ion gels is often measured via dynamic mechanical spectroscopy. This method reveals the storage modulus as well as the loss modulus, which define the stress-strain response of the gel. All ion gels are in the quasi-solid to solid-state regime, meaning that the storage modulus is higher than the loss modulus (i.e. elastic behavior prevails over the energy-dissipating liquid-like behavior). The magnitude of the storage modulus and its ratio to the loss modulus dictate the strength and the toughness of the composite material. Storage modulus values for ion gels can vary from approximately 1.0 kPa for typical polymeric-based matrices up to approximately 1.0 MPa for refractory-based matrices. The structure of the composite matrix can play a large role in the outcome of the final bulk mechanical properties. This is especially true for inorganic-based matrix materials. Several lab-scale examples have demonstrated a general trend that smaller matrix particle sizes can result in orders-of-magnitude increases in storage modulus. This has been attributed to the higher surface-area-to-volume ratio of the matrix particles and the higher concentration of nanoscale interactions between the particles and the immobilized ionic liquid. Higher interaction forces between the components in the ion gel composite result in a higher force required for plastic deformation and an overall stiffer material. Another degree of freedom in ion gel design lies in the ratio of matrix to ionic liquid in the final composite. As the concentration of ionic liquid in the matrix increases, the material will in general become more liquid-like, corresponding to a decrease in storage modulus. Conversely, a decrease in concentration will generally strengthen the material and, depending on the matrix material, can generate a more elastomeric or brittle stress-strain response. The general tradeoff of a reduced ionic liquid concentration is a subsequent decrease in the ionic conductivity of the overall composite, making optimization necessary for the specific application. References Materials
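For reference, in an oscillatory (dynamic mechanical) test the storage and loss moduli follow from the stress amplitude σ0, strain amplitude γ0 and phase lag δ as G' = (σ0/γ0)·cos δ and G'' = (σ0/γ0)·sin δ, with gel-like (solid) behavior corresponding to G' > G''. A small Python sketch with assumed, illustrative numbers for a soft polymeric ion gel:

```python
import math

def dynamic_moduli(stress_amp_pa, strain_amp, phase_lag_rad):
    """Storage (G') and loss (G'') moduli from an oscillatory shear test."""
    g_star = stress_amp_pa / strain_amp          # magnitude of complex modulus
    g_prime = g_star * math.cos(phase_lag_rad)   # elastic (storage) part
    g_dprime = g_star * math.sin(phase_lag_rad)  # viscous (loss) part
    return g_prime, g_dprime

# Illustrative (assumed) numbers: 120 Pa stress at 10% strain, 10 degree lag.
gp, gpp = dynamic_moduli(120.0, 0.1, math.radians(10))
print(f"G' = {gp:.0f} Pa, G'' = {gpp:.0f} Pa, tan(delta) = {gpp/gp:.2f}")
# G' > G'' (tan(delta) < 1): the elastic, gel-like response dominates,
# and the ~1 kPa storage modulus sits in the polymeric regime quoted above.
```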
Ion gel
[ "Physics" ]
1,287
[ "Materials", "Matter" ]
8,451,369
https://en.wikipedia.org/wiki/Podocin
Podocin is a protein component of the filtration slits of podocytes. Glomerular capillary endothelial cells, the glomerular basement membrane and the filtration slits function as the filtration barrier of the kidney glomerulus. Mutations in the podocin gene NPHS2 can cause nephrotic syndrome, such as focal segmental glomerulosclerosis (FSGS) or minimal change disease (MCD). Symptoms may develop in the first few months of life (congenital nephrotic syndrome) or later in childhood. Structure Podocin is a membrane protein of the band-7-stomatin family, consisting of 383 amino acids. It has a transmembrane domain forming a hairpin structure, with two cytoplasmic ends at the N- and C-terminus, the latter of which interacts with the cytosolic tail of nephrin, with CD2AP serving as an adaptor. Function Podocin is localized on the membranes of podocyte foot processes (pedicels) where it oligomerizes in lipid rafts together with nephrin to form the filtration slits. References Proteins
Podocin
[ "Chemistry" ]
255
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
8,453,546
https://en.wikipedia.org/wiki/Open%20Regulatory%20Annotation%20Database
The Open Regulatory Annotation Database (also known as ORegAnno) is designed to promote community-based curation of regulatory information. Specifically, the database contains information about regulatory regions, transcription factor binding sites, regulatory variants, and haplotypes. Overview Data Management For each entry, cross-references are maintained to EnsEMBL, dbSNP, Entrez Gene, the NCBI Taxonomy database and PubMed. The information within ORegAnno is regularly mapped and provided as a UCSC Genome Browser track. Furthermore, each entry is associated with its experimental evidence, embedded as an Evidence Ontology within ORegAnno. This allows researchers to analyze regulatory data using their own criteria for the suitability of the supporting evidence. Software and data access The project is open source: all data and all software produced in the project can be freely accessed and used. Database contents As of December 20, 2006, ORegAnno contained 4220 regulatory sequences (excluding deprecated records) for 2190 transcription factor binding sites, 1853 regulatory regions (enhancers, promoters, etc.), 170 regulatory polymorphisms, and 7 regulatory haplotypes for 17 different organisms (predominantly Drosophila melanogaster, Homo sapiens, Mus musculus, Caenorhabditis elegans, and Rattus norvegicus, in that order). These records were obtained by manual curation of 828 publications by 45 ORegAnno users from the gene regulation community. The ORegAnno publication queue contained 4215 publications, of which 858 were closed, 34 were in progress (open status), and 3321 were awaiting annotation (pending status). ORegAnno is continually updated and therefore current database contents should be obtained from www.oreganno.org. RegCreative Jamboree 2006 The RegCreative jamboree was stimulated by a community initiative to curate in perpetuity the genomic sequences which have been experimentally determined to control gene expression. This objective is of fundamental importance to evolutionary analysis and translational research, as regulatory mechanisms are widely implicated in species-specific adaptation and the etiology of disease. This initiative culminated in the formation of an international consortium of like-minded scientists dedicated to accomplishing this task. The RegCreative jamboree was the first opportunity for these groups to meet, to accurately assess the current state of knowledge in gene regulation and to begin to develop standards by which to curate regulatory information. In total, 44 researchers attended the workshop, from 9 different countries and 23 institutions. Funding was also obtained from ENFIN, the BioSapiens Network, FWO Research Foundation, Genome Canada and Genome British Columbia. The specific outcomes of the RegCreative meeting to date are: Prior to the RegCreative Jamboree, attendees were asked to participate in an interannotator agreement assessment. Two ORegAnno mirrors were established with identical sets of publications to be annotated in their queue. In total, 33 redundant annotations from 18 publications were collected. (79 annotations for 31 papers and 60 annotations for 21 papers were collected on servers 1 and 2, respectively.) This effort was used as a baseline from which to establish annotator efficiency. Hands-on annotation activities occurred during the first 2 days of the 3-day workshop. In total, 39 researchers contributed 184 TFBS and 317 regulatory region annotations from 96 papers. 
Many of these researchers were also trained on the ORegAnno system, significantly increasing its experienced-user community. The contribution of these annotations to individual species was 339 annotations in Homo sapiens, 42 annotations in Mus musculus, 72 annotations in Drosophila melanogaster, 24 annotations in Ciona intestinalis, 14 annotations in Rattus norvegicus, 6 annotations in Halocynthia roretzi, 2 annotations in Ciona savignyi and 2 annotations in HIV. Within these annotations, one new dataset was added to ORegAnno; 274 human enhancers were programmatically annotated by Maximillian Haessler, Institute Alfred Fessard, from Visel et al., Nucleic Acids Research, 2006. In total, 130 scientific studies were examined in depth. The annotated papers were pre-selected from expert-curated publications in the ORegAnno queue that had full text available through HighWire Press. There exists an immediate need for improved data standardization and development of associated ontologies. Specifically, this should include the open-access development and integration of transcription factor naming conventions and sequence, cell type, cell line, tissue, and evidence ontologies. The groundwork for addressing and prioritizing these needs was accomplished in several ways during the meeting: Transcription factor naming issues were addressed through discussion of the integration of transcription factor prediction pipelines, such as DBD or flyTF, which have been supplemented with manual curation, versus solely manually curated implementations like TFcat. Marc Halfon, University at Buffalo, led a breakout session to improve the Sequence Ontology from existing ORegAnno and REDfly database conventions, within the framework being developed as part of the Open Biomedical Ontologies. A preliminary version of these improvements can be found on the ORegAnno wiki. Learning-based ontology development was widely regarded as an essential feature of the annotation process, such that annotators are not restricted from annotating by the limitations of the controlled vocabulary, and these exceptions can be used to further develop the backbone ontologies. Ontology development should be decentralized from the ORegAnno annotation framework; specifically, it is planned that the ORegAnno evidence ontology will be removed and made available for broader community development. Renewed focus was placed on integrating species-specific resources with the annotation framework. A specific focus of the workshop was addressing the role of text mining in facilitating regulatory annotation. Sessions were led by Dr. Lynette Hirschman, MITRE, and Dr. Martin Krallinger, CNIO, to formulate where text mining can help. A short-term objective of text-mining-based analyses was formulated around both populating the ORegAnno queue and using the expert-curated portion of the queue to validate text-mining-based publication acquisition. The latter objectives are being led by Dr. Stein Aerts, University of Leuven. References External links ORegAnno RegCreative Jamboree 2006 Biological databases Gene expression
Open Regulatory Annotation Database
[ "Chemistry", "Biology" ]
1,341
[ "Gene expression", "Bioinformatics", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Biological databases" ]
22,560,835
https://en.wikipedia.org/wiki/Downhole%20heat%20exchanger
A downhole heat exchanger (DHE), also called a borehole heat exchanger (BHE), is a heat exchanger installed inside a vertical or inclined borehole. It is used to capture or dissipate heat to or from the ground. DHEs are used for geothermal heating, sometimes with the help of a geothermal heat pump. Downhole heat exchangers, like other uses of geothermal energy, have the potential to significantly contribute to the reduction of emissions. In northern Europe, DHEs are already widely deployed. Types U-tube The heat exchanger usually consists of one or two U-tubes through which the carrier fluid, usually water, circulates. The space around the U-tubes is filled with groundwater or backfilled with thermally conductive grout. Open pipe Another design uses a single open pipe to flow water downward. The water then returns through the annular gap between the pipe and the casing. This design provides better thermal contact than U-tubes, but risks contamination by groundwater. Since this involves practically no downhole equipment, these systems usually go only by the name of borehole heat exchangers (BHE). Standing column well If no casing is installed and groundwater is permitted to charge the system, the arrangement is no longer a BHE, but rather a standing column well. External links Video documentation: Installation of a downhole heat exchanger John W. Lund: The use of downhole heat exchangers (2003) References Heat exchangers Hole making Geothermal energy Energy conversion Building engineering Sustainable technologies
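A first-order sizing sketch in Python (assumed, illustrative values; real designs rely on thermal response tests and standards-based software): the steady heat extraction rate per metre of borehole can be approximated as q = (T_ground − T_fluid) / R, where R lumps the borehole and ground thermal resistances:

```python
def extraction_per_metre(t_ground_c, t_fluid_c, resistance_km_per_w):
    """Approximate steady heat extraction rate per metre of borehole (W/m)."""
    return (t_ground_c - t_fluid_c) / resistance_km_per_w

# Illustrative (assumed) values: 10 C undisturbed ground temperature, 0 C mean
# carrier-fluid temperature, 0.2 K*m/W combined borehole + ground resistance.
q = extraction_per_metre(10.0, 0.0, 0.2)
print(f"~{q:.0f} W per metre of borehole")          # ~50 W/m
print(f"A 6 kW heat demand needs ~{6000/q:.0f} m")  # ~120 m of borehole
```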
Downhole heat exchanger
[ "Chemistry", "Engineering" ]
322
[ "Building engineering", "Chemical equipment", "Civil engineering", "Heat exchangers", "Architecture" ]